Photodissociated iodine laser and population inversion
Question: Iodine molecules ($\ce{I2}$) can absorb in the visible region and dissociate into $\ce{2 I^.}$ radicals. One of the I atoms is in the ground electronic state $\mathrm{^2P_{3/2}}$ and the other I atom is in the excited state $\mathrm{^2P_{1/2}}$. So, photodissociation of $\ce{I2}$ produces 50% of the species in the excited state and 50% in the ground state. The $\mathrm{^2P_{1/2}\rightarrow{}^2P_{3/2}}$ transition of iodine has been extensively used in laser production, in the chemical iodine laser or the methyl iodide photodissociation laser for example. But in those cases, most of the $\ce{I}$ atoms are produced in the excited state. The criterion for laser production is population inversion, where >50% of the species are in the excited state. Here I am not sure if the photodissociation of the iodine molecule satisfies the criterion of population inversion, as exactly 50% of the species is in the excited state. So my question is: can we consider 50% excited species as population inversion, and if so, can photodissociated $\ce{I_2}$ generate a laser? Answer: You are making a chemical laser. A laser depends on keeping a positive difference in population between the two levels involved (upper > lower), and of course feedback for a laser oscillator. It does not matter how many I* you make as long as the lasing transition does not go to the lowest ground-state level. You must make the transition end at a higher level in the ground-state I atoms that has no (i.e. insignificant) thermal population, thus making a 3-level laser, meaning that the laser radiation cannot pump this lower level to the higher level from which lasing occurs. You must of course keep producing I* atoms.
{ "domain": "chemistry.stackexchange", "id": 14963, "tags": "physical-chemistry, spectroscopy, radicals, light, emission" }
Guitar String Replucks
Question: I'm analyzing guitar string plucks and sustains. I'm having good success with autocorrelation using FFTs. Now I'd like to detect plucks while the string is still vibrating. Since I am already periodically taking FFTs to find the pitch, I thought maybe I can take advantage of the FFT results and look for changes that might indicate a repluck. Do I simply add up the bins for some kind of power measurement? Any ideas? Answer: A pluck might produce significantly more broadband noise than a free string. An FFT of such noise would show more relative energy outside of all the FFT result bins that are related to (F0 or overtones of) a single pitch. Also, a free string has a more predictable decay rate in any FFT magnitudes related to the pitch across successive offset FFT frames windowing the sound data. A sudden change from this decay rate (a stop or a big increase) might indicate a disturbance, such as a pluck.
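To make the answer's suggestion concrete, here is a minimal sketch (plain NumPy; the function name, guard width and bin layout are my own, not from the original post) that measures, per FFT frame, the fraction of spectral energy falling outside the bins of F0 and its overtones. A sudden jump in this fraction between successive frames is a candidate repluck indicator:

```python
import numpy as np

def repluck_score(frames, harmonic_bins, guard=1):
    """For each FFT frame, return the fraction of spectral energy lying
    outside the bins belonging to the pitch (F0 and its overtones).
    `frames` is an iterable of complex or real FFT frames, `harmonic_bins`
    the bin indices of F0 and overtones, `guard` the half-width (in bins)
    counted as belonging to each harmonic."""
    scores = []
    for frame in frames:
        mag2 = np.abs(np.asarray(frame)) ** 2
        total = mag2.sum()
        harmonic = 0.0
        for b in harmonic_bins:
            lo, hi = max(0, b - guard), min(len(mag2), b + guard + 1)
            harmonic += mag2[lo:hi].sum()
        scores.append(1.0 - harmonic / total if total > 0 else 0.0)
    return np.array(scores)
```

In practice you would track this score (and the per-harmonic decay rates) across successive frames and flag sudden increases as possible replucks.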
{ "domain": "dsp.stackexchange", "id": 758, "tags": "fft, music, power-spectral-density" }
problem in displaying an image
Question: I want to publish an image. Previously I posted the same question and learned that I had forgotten to add ros::spin(). But even then there is a problem: it compiles successfully, but while running it does not display an image. pub.cpp: #include <ros/ros.h> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/highgui/highgui.hpp> #include <cv_bridge/cv_bridge.h> #include <image_transport/image_transport.h> #include <sensor_msgs/image_encodings.h> int main(int argc, char** argv) { ros::init(argc, argv, "image_publisher"); ros::NodeHandle nh; cv_bridge::CvImage cv_image; cv_image.image = cv::imread("/home/abi/Pictures/attachments/",CV_LOAD_IMAGE_COLOR); cv_image.encoding = "bgr8"; sensor_msgs::Image ros_image; cv_image.toImageMsg(ros_image); image_transport::ImageTransport it(nh); image_transport::Publisher pub = it.advertise("/static_image/compressed", 3); ros::Rate loop_rate(5); while (nh.ok()) { pub.publish(ros_image); loop_rate.sleep(); } } sub.cpp: #include <ros/ros.h> #include <image_transport/image_transport.h> #include <opencv/cv.h> #include <opencv/highgui.h> #include <cv_bridge/cv_bridge.h> void imageCallback(const sensor_msgs::ImageConstPtr& msg) { cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(msg, "bgr8"); //ROS_INFO("Hi"); cv::imshow("view", cv_ptr->image); cvNamedWindow("view",CV_WINDOW_AUTOSIZE); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } } int main(int argc, char **argv) { ros::init(argc, argv, "image_listener"); ros::NodeHandle nh; cvNamedWindow("view",CV_WINDOW_AUTOSIZE); image_transport::ImageTransport it(nh); image_transport::Subscriber sub = it.subscribe("/static_image/", 1,imageCallback); ros::spin(); cvDestroyWindow("view"); } Originally posted by Abinaya on ROS Answers with karma: 1 on 2014-03-26 Post score: 0 Answer: It should work like this: cv::imshow("view", cv_ptr->image); cv::waitKey(10); Waiting a couple of milliseconds with waitKey() when trying to display an
image is important in OpenCV, because it allows some time for GUI processing. You don't really need cvNamedWindow() if you use the default parameter CV_WINDOW_AUTOSIZE, because a window is already created by imshow(). If you want to use a named window anyway, you should stick to the OpenCV C++ API: cv::namedWindow() Originally posted by Malefitz with karma: 136 on 2014-07-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 17435, "tags": "opencv" }
Attractive Feynman diagrams and virtual photons
Question: In the electromagnetic interaction a photon is exchanged, which can cause a repulsive force between two charged particles, as in the electron/electron or up/up quark interactions. But when I look at the Feynman diagram for two opposite-sign charges, the attractive force is also mediated by the photon. This is strange, as a transfer of momentum by an intermediate particle would not cause two objects of unlike charge to come together but to repel. I read in another post on this forum that it should be interpreted like a boomerang effect. But the photon does not do this. It goes from one particle directly to the other one. Can someone explain what is really happening? That is, quantum mechanically? Answer: This is a good question, and if you look at the virtual particles classically, then such a thing is indeed impossible. But you should not look at Feynman diagrams as "real processes" but as mathematical tools in particle interaction calculations. When you do such calculations, the charge sign appears at the vertices, and the propagator (photon) does not carry information on the specific values of the charge directly. For a quantum mechanical explanation, we can do this (roughly) by considering Heisenberg's uncertainty principle, i.e., $$\Delta x \Delta p \ge \frac{\hbar}{2}$$ Recall that if we are calculating momentum transfer between two particles which are (almost) localised, then they have a well defined position, meaning that the momenta will be extremely uncertain. A virtual particle with a specific momentum will therefore correspond to a wave with a wavefront spread all over space due to uncertainty in position. Because this wavefront is spread everywhere, either particle can create the photon anywhere and the other can absorb it anywhere, meaning either can get a "kick pushing it" in the direction of the other particle, which would correspond to an "attractive force".
Once again it should be stressed that you should never view virtual particles in Feynman diagrams as having the trajectories of particles in classical physics.
{ "domain": "physics.stackexchange", "id": 73252, "tags": "quantum-electrodynamics, feynman-diagrams, coulombs-law, virtual-particles, carrier-particles" }
Converting "a(b(cd)e(fg))" into a tree
Question: This was actually asked in an interview. I have made it in the best possible way I could and would like to optimize it if there is any chance of doing it. The input is: a(b(cd)e(fg)) The output should be: a / \ b e / \ / \ c d f g My current code is: #include<stdio.h> #include<malloc.h> #include<string.h> struct node { char x; struct node *left; struct node *right; }*root; char a[30]; int i=0,n; void tre(struct node * p) //function that I am using to convert the string to a tree { if(a[i]=='(') { i++; struct node *temp=malloc(sizeof(struct node)); temp->x=a[i]; temp->left=temp->right=NULL; i++; if(p->left==NULL) { p->left=temp; tre(p->left); } if(a[i]!='('&&a[i]!=')') { struct node *tempp=malloc(sizeof(struct node)); tempp->x=a[i]; i++; p->right=tempp; tre(p->right); } } else if(a[i]==')') { i++; } } void inorder(struct node *p)//inorder traversal of the tree made { if(p!=NULL) { inorder(p->left); printf("%c ",p->x); inorder(p->right); } } main() { printf("Enter the string : "); scanf("%s",a); struct node *temp=malloc(sizeof(struct node)); temp->x=a[i]; temp->left=temp->right=NULL; i++; root=temp; tre(root); inorder(root); } Answer: If this was asked at an interview then, as the person asking the question, I would have expected you to ask a couple more questions about the input format. What we have here is relatively clear for the simple example, but there are some subtleties that we need to tease out of the input format. How are NULL branches represented? How are nodes with the value ')' or '(' represented? If we open '(' will there always be two nodes before the ')'? My first problem would be these global variables: char a[30]; int i=0,n; Global variables make the code harder to modify and maintain in the long run and should be avoided in most situations. Pass them as parameters and things become much more flexible. Second point is to declare each variable on its own line (it is much more readable).
And try to make the names more meaningful; a, i, n hold no meaning so I have no idea what you are going to use them for. In C I find it useful to typedef structures to make sure I can use the short version of the name (I have not used C in anger recently so I am not sure if this is best practice anymore, but I think it makes the code more readable). typedef struct node { char x; struct node *left; struct node *right; } Node; void tre(struct node * p) //function that I am using to convert the string to a tree If we are building a tree I would normally expect the return value to be the tree we are building, not a way of filling out the tree. So I would expect the function signature to look like this: Node* tre(char const* input) // potentially more parameters to deal with position. I find the logic in your main function very hard to follow. You only seem to make nodes if there is an open brace '('; that does not seem correct. You do nothing when there is a close brace? You test if the left on the current node is null but not the right on the current node. I would expect the code to be inherently more symmetrical. The fact that it is not makes it harder to follow. As an interviewer the things I would have looked for: Error checking: Binary tree code is inherently symmetrical. Notice that below, doing the left and right branches looks the same. The function that parses the input should work on one node: you seem to work on one node and part of the next. I wrote the code and mine looks like this (now that I have written it, it looks identical to Winston's): Node* parseTree(char const* data, size_t* pos) { if (data[(*pos)] == '\0') { // Appropriate error msg exit(1); } // Allocate the tree here. (initialize all members) Node* result = malloc(sizeof(Node)); result->x = data[(*pos)++]; result->left = NULL; result->right = NULL; // Now check for sub branches.
if (data[(*pos)] == '(') { // Move past the '(' (*pos)++; result->left = parseTree(data, pos); result->right = parseTree(data, pos); if (data[(*pos)] != ')') { // Appropriate error msg exit(1); } // Move past the ')' (*pos)++; } // Done. return result; } Node* buildTree(char const* data) { if (data == NULL) { // Appropriate error msg exit(1); } size_t pos = 0; return parseTree(data, &pos); } Perfect in-order traversal of the tree. But it does not print out what the output should be: a / \ b e / \ / \ c d f g To print this you need a couple of things: The depth of the tree. The max width (related to max depth). A breadth-first way of printing the tree. Note: If I were doing the interview I would ask this to make sure you could do a breadth-first traversal of the tree, which involves using a loop rather than recursion.
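For comparison, the same one-node-per-call recursive structure can be sketched in a higher-level language. This is a hypothetical Python translation (nested tuples instead of malloc'd nodes; not part of the original review):

```python
def parse_tree(s, pos=0):
    """Parse strings like 'a(b(cd)e(fg))' into nested tuples
    (value, left, right); leaves have left = right = None.
    Returns (tree, next_position)."""
    value = s[pos]
    pos += 1
    left = right = None
    if pos < len(s) and s[pos] == '(':
        pos += 1                         # consume '('
        left, pos = parse_tree(s, pos)   # exactly two children per '(...)'
        right, pos = parse_tree(s, pos)
        assert pos < len(s) and s[pos] == ')', "expected ')'"
        pos += 1                         # consume ')'
    return (value, left, right), pos

def inorder(node):
    """Collect node values in in-order, mirroring the C inorder()."""
    if node is None:
        return []
    value, left, right = node
    return inorder(left) + [value] + inorder(right)
```

Note how each call handles exactly one node and, if a '(' follows, exactly two children and the closing ')', which keeps the left and right branches symmetrical.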
{ "domain": "codereview.stackexchange", "id": 23251, "tags": "c" }
Relation between Electric and magnetic fields
Question: I've read that both the electric and magnetic field vectors are perpendicular to each other in an electromagnetic wave. Passing steady current through a straight conductor shows some magnetic flux (because most of the energy is wasted into outer space as magnetic lines). But, when it is passed through a helical coil such as the solenoid or even an inductor, a steady magnetic field is produced along the axis of the coil. Hence, both magnetic and electric fields are related to each other. How is this actually happening? Can one field always be produced by the other? An explanation focusing on electric and magnetic fields would be appreciated. Answer: While I think your question may be problematic to some because they are very wary of the "why" question, because physics can only go so deep, I recognize that it is hard to just accept a causal relationship between things that seem arbitrarily related, so let's try to look deeper. A magnetic field is caused by a moving electric charge, correct? The moving electric charge causes an increase in the electric field in front of it and a decrease in the electric field in back of it, and these changes create a magnetic field, but let's go back to the charge. Let's imagine that this charge is moving extremely fast, at relativistic speeds even. Next to it and parallel to its motion is an infinitely long wire with current flowing through it, a lot of current too. Let's say that the electrons in this wire are moving just as fast as the electron, and in the same direction. We could even imagine them with race helmets on, racing each other off to infinity. Now this wire is electrically neutral, for every electron in the wire there is a proton, so the electron traveling alongside should feel no pull towards the wire or a push away. However, this is all from our perspective. To us the electrons are moving fast, but what about to them? According to relativity, they have every right to say that they are not moving.
What looked like racing hats to us were actually top hats, and they were sitting down having some tea while we zoomed by at nearly the speed of light. Now we would look sort of funny, because the effects of relativity cause us to look squashed. This is important, because we would not be the only things zooming past the electrons. The protons in the wire would be zooming past them as well. The same relativistic squashing happens with them, but this time it's more important. The relativistic length contraction not only squishes the protons, but because it is a whole column of protons moving past them, the column squishes as well, increasing the positive charge density of the wire. The electron feels the effect of this increased charge density as a pull inwards and so it drifts closer to the wire. We see this in our frame as well and are perplexed: why would the electron feel a pull from the wire? We see no excess charge in the wire, so we ascribe this effect to a different force, the magnetic force. However, from the electron's reference frame, this behavior is perfectly normal: the protons moving past him are closer together than the electrons standing still, and the electric force from the wire pulls him closer. This is kind of what magnetism is, electricity's compensation for relativity. For if magnetism didn't exist, we would see the electron attracted to the wire for no explainable reason. Magnetism is sort of the relativistic form of electricity. As for them interacting and causing each other, this must happen or else other laws of physics could possibly be broken (or you would have a meaningless thing like a force from nothing). This is a comforting example of how a part of physics holds itself up by itself.
{ "domain": "physics.stackexchange", "id": 53358, "tags": "electromagnetism" }
Failed to process package 'rviz':
Question: -- Boost version: 1.54.0 -- Found the following Boost libraries: -- filesystem -- program_options -- signals -- system -- thread -- Eigen found (include: /opt/local/include/eigen3) -- checking for module 'OGRE' -- package 'OGRE' not found Package OGRE was not found in the pkg-config search path. Perhaps you should add the directory containing `OGRE.pc' to the PKG_CONFIG_PATH environment variable No package 'OGRE' found -- OGRE_PLUGIN_PATH= -- Using CATKIN_DEVEL_PREFIX: /Users/Trung/ros_catkinws/devel_isolated/rviz -- Using CMAKE_PREFIX_PATH: /Users/Trung/ros_catkinws/install_isolated -- This workspace overlays: /Users/Trung/ros_catkinws/install_isolated -- Using default Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /Users/Trung/ros_catkinws/build_isolated/rviz/test_results -- Found gtest: gtests will be built -- catkin 0.5.73 CMake Error at /Users/Trung/ros_catkinws/install_isolated/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a package configuration file provided by "map_msgs" with any of the following names: map_msgsConfig.cmake map_msgs-config.cmake Add the installation prefix of "map_msgs" to CMAKE_PREFIX_PATH or set "map_msgs_DIR" to a directory containing one of the above files. If "map_msgs" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): CMakeLists.txt:39 (find_package) -- Configuring incomplete, errors occurred! make: *** [cmake_check_build_system] Error 1 <== Failed to process package 'rviz': Command '/Users/Trung/ros_catkinws/install_isolated/env.sh make cmake_check_build_system' returned non-zero exit status 2 Originally posted by fti on ROS Answers with karma: 1 on 2013-09-07 Post score: 0 Answer: Looks like you are trying to compile rviz from source. Why? You appear to be missing the map_msgs package, upon which rviz depends. 
There are 29 rviz dependencies, and you probably need them all. Originally posted by joq with karma: 25443 on 2013-09-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Ashkr on 2019-10-30: how can i do this? please help!
{ "domain": "robotics.stackexchange", "id": 15452, "tags": "ros, rviz, catkin" }
Can generic data sets be suitable for specific sentiment analysis
Question: I have used the Stanford movie review dataset to experiment with sentiment analysis. I managed to create a basic application on top of Spark using the Naive Bayes classification algorithm. Steps that I did for pre-processing with the Spark ML pipeline: tokenization, bigrams. The dataset above also comes with a testing set which is separate from the training set. After training I got around 97% accuracy, which I believe is pretty good for Naive Bayes. Now, can I use this ML model to predict sentiment for other texts such as email/chat etc.? My guess is that this dataset has a large enough collection of words to make good predictions, and certain English phrases that hold regardless of the business context, like "I don't like this" or "This does not look good", mean the same across different domains such as movies/emails/chats. I have not done the experiment since the data I need to get hold of belongs to the customer and due to privacy restrictions we cannot access the data. Any help/guidance would be much appreciated. Answer: It depends. You're basically asking if your sample (training data) is representative of the population (all written words). Are you doing sentiment analysis on movie reviews? It'll work great. Are you doing sentiment analysis on TV reviews? It'll probably work great. Are you doing sentiment analysis on book reviews? I would give better than 50-50 odds it'll work. Are you doing sentiment analysis on Twitter posts? Now we're getting shaky. People tend to write much less, use less formal language, and use more emojis, which your movie review model wouldn't have seen. That being said, there are definitely "generic" sentiment analysis services like here. Try out your model against Algorithmia on what you would consider a generic set of data (e.g. a bunch of tweets) and see how it does.
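As a toy illustration of why shared phrases can transfer across domains, here is a from-scratch multinomial Naive Bayes with unigram + bigram features (this is not the Spark ML pipeline; the data, class names and helper names are invented for the sketch). A model trained only on movie-style reviews can still label an email-style sentence because it shares n-grams like "don't like this":

```python
import math
from collections import Counter

def featurize(text):
    """Lowercased unigrams plus adjacent-word bigrams."""
    toks = text.lower().split()
    return toks + [" ".join(b) for b in zip(toks, toks[1:])]

class NaiveBayes:
    def __init__(self, alpha=1.0):
        self.alpha = alpha            # Laplace smoothing

    def fit(self, texts, labels):
        self.labels = sorted(set(labels))
        self.counts = {c: Counter() for c in self.labels}
        self.priors = Counter(labels)
        self.vocab = set()
        for text, y in zip(texts, labels):
            feats = featurize(text)
            self.counts[y].update(feats)
            self.vocab.update(feats)
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        v = len(self.vocab)
        for c in self.labels:
            total = sum(self.counts[c].values())
            lp = math.log(self.priors[c])
            for f in featurize(text):
                lp += math.log((self.counts[c][f] + self.alpha)
                               / (total + self.alpha * v))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Whether this transfer actually holds for your customer's emails is exactly the question the answer raises; the only reliable check is evaluation on in-domain data.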
{ "domain": "datascience.stackexchange", "id": 1576, "tags": "machine-learning, apache-spark, sentiment-analysis" }
Why does the Eulerian approach become invalid for intersecting trajectories?
Question: When studying cold dust (pressureless non-relativistic matter) in cosmology with the Eulerian approach, one says that the equations are valid until the intersection of particles. Why? Is this because I can't distinguish between two particles at the same point? Answer: The Eulerian approach is to describe the matter by a (time-dependent) vector field. At each point the field tells you the direction of motion of the particle there. If two particle trajectories intersect, which means the particles come from different directions and most likely go in different directions, you would have to assign two vectors to the field at that point. This is of course not possible, since a vector field is single-valued.
{ "domain": "physics.stackexchange", "id": 38295, "tags": "fluid-dynamics, cosmology" }
Can constant acceleration/velocity be written as $a_c$/$v_c$?
Question: I was wondering if constant acceleration, for example, could just be written as $a_c$. I have seen it written that way, but I was not sure if it was legitimate. I did a quick search on Google but to no avail, so I turned to the best website out there. I am new to physics, so having the knowledge of properly writing functions is a big deal. And if that is how you would denote constant acceleration, then would just $a$ for acceleration mean that the acceleration is changing, as in a slope on a graph? Sorry if this is confusing! Answer: As dmckee says in a comment there is no hard and fast rule for writing a constant acceleration or velocity, so it's up to you to make clear what notation you are using. Having said that, if I saw $a_c$ I would not interpret this as meaning the acceleration is constant. When I see the notation $a_i$ I normally expect this to mean the components of the acceleration vector ($a_x$, $a_y$, $a_z$) or with multiple bodies the acceleration of the $i$th body. I've never seen a $c$ subscript used to mean that quantity is constant. To the extent that there is an accepted convention constants are generally written using uppercase letters, so if I wanted to say the acceleration was constant I would use: $$ a = A $$ and explain in the text the meaning of the constant $A$. Likewise for velocity, though since $V$ is sometimes used as the symbol for potential energy you'd need to make it clear in the text what you meant.
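For example, adopting that uppercase-constant convention with $a = A$, the familiar constant-acceleration kinematic relations read $$v(t) = v_0 + A t, \qquad x(t) = x_0 + v_0 t + \tfrac{1}{2} A t^2,$$ and any text using them would state once that $A$ is a constant.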
{ "domain": "physics.stackexchange", "id": 38348, "tags": "acceleration, velocity" }
What is the difference between a perturbative and a non-perturbative vacuum?
Question: What is the difference between a perturbative and a non-perturbative vacuum in quantum field theory? Is there an analog of these ideas in non-relativistic quantum mechanics? Answer: Consider an interacting (not necessarily relativistic) quantum field theory, whose Hamiltonian can be written as $H = H_{\rm free} + \lambda H_{\rm int}$, where the second term is a small interaction Hamiltonian with strength $\lambda$ and the first term is the free Hamiltonian. Then, generically we have two vacuum states $|\Omega\rangle \neq |0\rangle$, where $H|\Omega\rangle=E_{\Omega}|\Omega\rangle$ and $H_{\rm free} |0\rangle=0$. In other words, $|0\rangle$ is no longer the "lowest energy state" since the introduction of an interaction shifts the energy spectrum. The non-perturbative vacuum (or the actual lowest energy state of the full theory) is $|\Omega\rangle$, while the perturbative vacuum $|0\rangle$ is the ground state of the free Hamiltonian.
{ "domain": "physics.stackexchange", "id": 85940, "tags": "quantum-mechanics, quantum-field-theory, vacuum, perturbation-theory, non-perturbative" }
Propositional truncation of excluded middle
Question: It is clear to me that it should be impossible to prove: exclMidl = isProp A → ((A) ⊎ (¬ A)) Because it would give a deciding oracle for every proposition. My question is about the following types: exclMidl' = isProp A → ∥ ((A) ⊎ (¬ A)) ∥ or exclMidl'' = isProp A → ∥ ((A ≡ Unit) ⊎ (A ≡ Empty)) ∥ Are they provable in cubical Agda? What can HoTT say about them? Those types would only state that propositions are either true or false but would hide information about the actual truth of A, preventing us from constructing a universal oracle, but allowing us, for example, to do a proof for both cases even for undecidable propositions. Is my intuition about such types correct? Answer: The only way to prove ∥ X ∥ is to prove X (unless you admit some other axiom). So, assuming A is a proposition, there is no way to prove ∥ ((A) ⊎ (¬ A)) ∥ if you cannot prove ((A) ⊎ (¬ A)). It is not a correct intuition that undecidable propositions are either true or false and we just do not know it. A formal system S with an undecidable proposition P can always be extended either to S'=(S,p:P), or S''=(S,p:¬ P), where p is taken as a new axiom. Both S' and S'' are then consistent formal systems, provided S is. If P were already true or false in S, then either S' or S'' would be inconsistent. A formal system is meant to be interpreted, and can usually be interpreted in different ways. Undecidable propositions in a formal system are propositions that can be true in some interpretations, and false in other interpretations. A proposition is said to be true (resp. false) within a formal system when it is true (resp. false) in all possible interpretations. So there are lots of propositions that are neither true nor false within the formal system. The confusion comes from the different meanings of 'true': the truth within the formal system usually does not match exactly the truth in the interpretation (or intuition) that you have in mind when you are working with the formal system...
{ "domain": "cs.stackexchange", "id": 14272, "tags": "dependent-types, homotopy-type-theory, agda, cubical-type-theory" }
How to calculate the Jaccard index
Question: I want to calculate the Jaccard index between two compounds. What is the algorithm? I have searched for it, it just gives the formula but how to apply it on compounds is not known to me. Can you help? Answer: The Jaccard index is a measure of similarity between two sets. Take a look at the Wikipedia article here. It is very easy to compute: The Jaccard similarity coefficient for sets X and Y is defined as: J(X,Y) = |intersection(X,Y)| / |union(X,Y)| Where | | indicates the size (number of elements) of the set. Imagine you have two sets X and Y defined as follows: X = {A, B, C, D} Y = {C, D, E, F, G} Then: intersection(X,Y) = {C, D} => |intersection(X,Y)| = 2 union(X,Y) = {A, B, C, D, E, F, G} => |union(X,Y)| = 7 Therefore: J(X,Y) = 2/7 Alternatively, the Jaccard distance would be D(X,Y) = 1 - J(X,Y) = 1 - 2/7 = 5/7 In Biology the Jaccard index has been used to compute the similarity between networks, by comparing the number of edges in common (e.g. Bass, Nature methods 2013) Regarding applying it to compounds, if you have two sets with different compounds, you can find how similar the two sets are using this index. The elements on the sets, in this case the compounds, correspond to A, B, C, etc. in my example.
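In code, the computation is a one-liner over plain Python sets (a sketch, not tied to any cheminformatics toolkit; the function name is my own). Note that the union of {A, B, C, D} and {C, D, E, F, G} has 7 elements, so the similarity is 2/7:

```python
def jaccard_index(x, y):
    """Jaccard similarity J(X, Y) = |X ∩ Y| / |X ∪ Y| for two sets."""
    x, y = set(x), set(y)
    if not x and not y:
        return 1.0   # convention: two empty sets are considered identical
    return len(x & y) / len(x | y)

# The worked example: intersection {C, D} (size 2), union of size 7
X = {"A", "B", "C", "D"}
Y = {"C", "D", "E", "F", "G"}
similarity = jaccard_index(X, Y)   # 2/7
distance = 1 - similarity          # Jaccard distance, 5/7
```

For compounds, each set would hold the features (or set members) you are comparing, exactly as in the A, B, C example.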
{ "domain": "biology.stackexchange", "id": 4017, "tags": "statistics, organic-chemistry, biochemistry" }
Deleting pointers in Qt5
Question: I am relatively new to the Qt framework and I was wondering if I should delete pointers in my program. I know that, in C++, if memory is not returned it could lead to a memory leak, but I am not sure if the same applies to Qt. #include "mainwindow.h" #include <QApplication> #include <QTextEdit> #include <QPushButton> #include <QVBoxLayout> #include <QIcon> int main(int argc, char *argv[]) { QApplication a(argc, argv); QWidget *w = new QWidget(); w->setWindowTitle("Music Player"); QIcon * mainwind_icon = new QIcon("MusicPlayer.png"); w->setWindowIcon(*mainwind_icon); QPushButton * enter_button = new QPushButton(); QTextEdit * textbox = new QTextEdit(); QHBoxLayout * vlayout = new QHBoxLayout; vlayout->addWidget(textbox); vlayout->addWidget(enter_button); w->setLayout(vlayout); w -> show(); return a.exec(); } Answer: In Qt, there is a concept of parents and a hierarchy. In essence, it means that when a QObject owns another QObject (the second QObject's parent is the first), the parent will take care of cleaning up its children. There are 2 ways an object becomes a child of another: when it is set in the constructor or when reparented.
The QWidget family automatically reparents widgets which you add to it. A common pattern for the main function is to allocate the root on the stack and all the rest dynamically as children of the root: int main(int argc, char *argv[]) { QApplication a(argc, argv); QWidget w; w.setWindowTitle("Music Player"); QIcon mainwind_icon("MusicPlayer.png"); // QIcon is a value type, not a QObject, so keep it on the stack w.setWindowIcon(mainwind_icon); QPushButton * enter_button = new QPushButton(); QTextEdit * textbox = new QTextEdit(); QHBoxLayout * vlayout = new QHBoxLayout; vlayout->addWidget(textbox); vlayout->addWidget(enter_button); w.setLayout(vlayout); w.show(); return a.exec(); } (Note: QWidget w (); would declare a function, not a widget, so the parentheses must be dropped.) setLayout() takes ownership of vlayout and its children, so the destructor of w will delete those as well. When you are inside an event loop you can delete a QObject by calling deleteLater() on it, which will schedule the object (and its children) for deletion. You can also connect a signal to it if needed.
{ "domain": "codereview.stackexchange", "id": 6369, "tags": "c++, memory-management, qt" }
How does adaptive Huffman coding work?
Question: Huffman coding is a widely used method of entropy coding used for data compression. It assumes that we have complete knowledge of a signal's statistics. However, there are versions of Huffman coding that are used with streaming media and cannot possibly know everything about the signal's statistics. How do these adaptive Huffman encoders work? Answer: The Wikipedia article has a pretty good description of the adaptive Huffman coding process using one of the notable implementations, the Vitter algorithm. As you noted, a standard Huffman coder has access to the probability mass function of its input sequence, which it uses to construct efficient encodings for the most probable symbol values. In the prototypical example of file-based data compression, for example, this probability distribution can be calculated by histogramming the input sequence, counting the number of occurrences of each symbol value (symbols could be 1-byte sequences, for example). This histogram is used to generate a Huffman tree (an example tree is shown in the Wikipedia article). The tree is arranged by decreasing weight, or probability of occurrence in the input sequence; leaf nodes at the top represent the most probable symbols, which therefore receive the shortest representations in the compressed data stream. The tree is then saved along with the compressed data and is subsequently used by the decompressor later to regenerate the (uncompressed) input sequence again. As one of the early entropy code implementations, standard Huffman coding is fairly straightforward. The adaptive Huffman coder's structure is quite similar; it uses a similar tree-based representation of the input sequence's statistics to select efficient encodings for each input symbol value. The main difference is that, as a streaming implementation of the algorithm, no a priori knowledge of the input's probability mass function is available; the sequence's statistics must be estimated on the fly.
If one is to use the same Huffman encoding scheme, this means that the tree used to generate each symbol's encoding in the compressed stream must be built and maintained dynamically as the input stream is processed. The Vitter algorithm is one way of accomplishing this; as each input symbol is processed, the tree is updated, maintaining its characteristic of decreasing probability of symbol occurrence as you move down the tree. The algorithm defines a set of rules for how the tree is updated over time, and how the resulting compressed data is encoded in the output stream. As the input sequence is consumed, the tree's structure should represent a more and more accurate description of the input's probability distribution. In contrast to the standard Huffman coding approach, the decompressor does not have a static tree to use for decoding; it must perform the same tree-maintenance functions continuously during the decompression process. In summary: The adaptive Huffman coder operates very similarly to the standard algorithm; however, instead of a static measurement of the entire input sequence's statistics (the Huffman tree), a dynamic, cumulative (i.e. from the first symbol to the current symbol) estimate of the sequence's probability distribution is used to encode (and decode) each symbol. In contrast to the standard Huffman coding approach, the adaptive Huffman algorithm requires this statistical analysis at both the encoder and decoder.
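For reference, the static (non-adaptive) construction that the adaptive scheme generalizes can be sketched in a few lines of Python (a sketch; symbol and variable names are my own): merge the two lightest subtrees repeatedly, then read codes off the root-to-leaf paths.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a static Huffman code from the symbol frequencies of `data`.
    Returns a dict mapping symbol -> bitstring."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, tiebreak, tree); a tree is a symbol or (left, right)
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)      # two lightest subtrees...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))   # ...merge them
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node: recurse
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                # leaf: record its code
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

An adaptive coder such as Vitter's instead updates an equivalent tree incrementally after every symbol, so the encoder and decoder rebuild identical state on the fly and no tree needs to be transmitted.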
{ "domain": "dsp.stackexchange", "id": 6, "tags": "compression" }
Model doesn't stop light
Question: Hello community! I'm building a gazebo track where I need the light from a light bulb (simulated as a white sphere with a pointlight) to be stopped by the model's wall so that way the light does not reach other parts of the track. But, the light seems to go through the model's walls and reaches more zones than desired. In the image, we can see that light reaches the other end of the track. This happens because the light goes through the red wall, illuminates one side of the yellow wall and then goes through the yellow wall, reaching the other end. Also, notice that one side of the green wall -not visible in the image- is illuminated but the other side isn't, and the light still reaches outside the track. Note: The pointlight is very close to the ground, which rules out the possibility that the problem is happening because the light illuminates the whole track from above. Note 2: Simulating with a spotlight seems to make no difference. So the question is: how can we stop the light from going through the wall so that way it doesn't reach the other part of the track? How can we make the lights stay trapped in the track's walls? As always, thanks for your awesome help. Originally posted by imstevenpm on Gazebo Answers with karma: 3 on 2017-12-11 Post score: 1 Original comments Comment by imstevenpm on 2017-12-11: Any help is appreciated :) Answer: Unfortunately, it is a known issue that spot and point lights don't cast shadows, only directional lights do: https://bitbucket.org/osrf/gazebo/issues/2083/point-and-spot-lights-dont-cast-shadows Originally posted by chapulina with karma: 7504 on 2017-12-11 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4220, "tags": "gazebo" }
Can a sentence have different parse trees?
Question: I just read about the concept of a parse tree. In my understanding, a valid parse tree of a sentence needs to be validated by a linguistic expert. So, I concluded, a sentence only has one parse tree. But, is that correct? Is it possible a sentence has more than one valid parse tree (e.g. constituency-based)? Answer: But, is that correct? Is it possible a sentence has more than one valid parse tree (e.g. constituency-based)? The fact that a single sequence of words can be parsed in different ways depending on context (or "grounding") is a common basis of miscommunication, misunderstanding, innuendo and jokes. One classic NLP-related "joke" (around longer than modern AI and NLP) is: Time flies like an arrow. Fruit flies like a banana. There are actually several valid parse trees for even these simple sentences. Which ones come "naturally" will depend on context - anecdotally I only half got the joke when I was younger, because I did not know there were such things as fruit flies, so I was partly confused by literal (but still validly parsed, and somewhat funny) meaning that all fruit can fly about as well as a banana does. Analysing these kinds of ambiguous sentences leads to the grounding problem - the fact that without some referent for symbols, a grammar is devoid of meaning, even if you know the rules and can construct valid sequences. For instance, the above joke works partly because the nature of time, when referred in a particular way (singular noun, not as a possession or property of another object), leads to a well-known metaphorical reading of the first sentence. A statistical ML parser could get both sentences correct through training on many relevant examples (or trivially by including the examples themselves with correct parse trees). This has not solved the grounding problem, but may be of practical use for any machine required to handle natural language input and map it to some task. 
I did check a while ago though, and most part-of-speech taggers in Python's NLTK get both sentences wrong - I suspect because resolving sentences like those above and AI "getting language jokes" is not a high priority compared to more practical uses for chatbots/summarisers, etc.
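To make the ambiguity concrete without relying on NLTK, here is a small CYK-style chart that counts parse trees over a toy CNF grammar. The grammar and lexicon are my own illustrative assumptions (nouns may compound, and "flies" and "like" are each two-ways ambiguous); both joke sentences then get exactly two parses.

```python
from collections import defaultdict

# Toy CNF grammar (illustrative only, not a real English grammar).
LEXICON = {
    "NP": {"time", "flies", "fruit", "arrow", "banana"},
    "Det": {"an", "a"},
    "V": {"flies", "like"},
    "P": {"like"},
}
BINARY = [("S", "NP", "VP"), ("NP", "NP", "NP"), ("NP", "Det", "NP"),
          ("VP", "V", "NP"), ("VP", "V", "PP"), ("PP", "P", "NP")]

def count_parses(words, goal="S"):
    """CYK-style chart that counts distinct parse trees per span."""
    n = len(words)
    # chart[i][j] maps nonterminal -> number of trees spanning words[i:j]
    chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for nt, vocab in LEXICON.items():
            if w in vocab:
                chart[i][i + 1][nt] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for head, left, right in BINARY:
                    chart[i][j][head] += chart[i][k][left] * chart[k][j][right]
    return chart[0][n][goal]
```

For each sentence `count_parses` finds two trees: the verb reading ("time | flies like an arrow") and the compound-noun reading ("fruit flies | like a banana").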
{ "domain": "ai.stackexchange", "id": 992, "tags": "natural-language-processing" }
Longitudinal wave in a falling elastic body
Question: Consider an elastic rod hung from a high point with density $\rho$ and Young's modulus $Y$, subject to gravitational acceleration $g$. The coordinate from the hanging point is $x$, while the displacement from the original position is $\xi$. The equation of a longitudinal wave travelling through the rod is (I will omit detailed derivations for brevity, but am happy to add them as an addendum if that's helpful): $$ \boxed{ \frac{\partial^2 \xi}{\partial t^2} = c^2 \frac{\partial^2\xi}{\partial x^2} + g, \quad c=\sqrt{\frac{Y}{\rho}}.\; } $$ We solve this partial differential equation using Fourier series. The initial conditions at $t=0$ for $\xi\left(x, 0\right)$ and its time derivative $\frac{\partial \xi}{\partial t}\left(x,0\right)$ are $$ \xi\left(x, 0\right) = \frac{g}{c^2}\left(Lx-\frac{1}{2}x^2\right) , \qquad \frac{\partial \xi}{\partial t}\left(x,0\right) = 0, $$ where the former condition is derived by considering the stationary case without free fall, so $\frac{\partial^2 \xi}{\partial t^2} = 0$. The boundary conditions are $$ \frac{\partial\xi}{\partial x}\left(0,t\right)=0, \qquad \frac{\partial \xi}{\partial x}\left(L,t\right) = 0, $$ since the endpoints are free during the fall. Fit the time-independent part of the solution to $\xi\left(x,0\right)$ using the usual Fourier series method with $t=0$, we have: $$ \boxed{ \xi\left(x,t\right) = \frac{1}{2}gt^2 + \frac{g L^2}{c^2}\left[ \frac{1}{3} - \frac{2}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\cos\left(k_n x\right) \cos\left(\omega_n t\right) \right], \quad k_n = \frac{n\pi}{L}, \quad \omega_n = ck_n. \;} $$ This is a reasonable solution, where the time averaged motion is that of a free fall, with a longitudinal wave travelling back and forth with period $\Delta T = \frac{2L}{c}$. 
But we can find the solution to the wave equation above with another method, using the generalised d'Alembert formula: For a solution to $$\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2u}{\partial x^2} = h\left(x, t\right), $$ subject to the initial conditions $$ u\left(x,0\right) = \phi(x), \quad \frac{\partial u}{\partial t}\left(x, 0\right)=\psi(x), $$ the solution is given by $$ u\left(x,t\right) = \frac{1}{2}\left[ \phi\left(x+ct\right) + \phi\left(x-ct\right) \right] + \frac{1}{2c}\int_{x-ct}^{x+ct}\psi\left(\zeta\right) \mathrm{d}\zeta + \frac{1}{2c}\int_0^t \int_{x-c\left(t-\tau\right)}^{x+c\left(t-\tau\right)} h\left(\zeta, \tau\right) \mathrm{d}\zeta\mathrm{d}\tau. $$ However, plugging in $$h\left(x,t\right) = g,$$ $$\xi\left(x,0\right) = \phi\left(x\right) = \frac{g}{c^2}\left(Lx-\frac{1}{2}x^2\right),$$ and $$\frac{\partial \xi}{\partial t}\left(x,0\right) = \psi\left(x\right) = 0, $$ I am unable to recover the falling solution from the Fourier method, instead only recovering the static solution $$\xi\left(x,t\right) = \xi\left(x,0\right) = \frac{g}{c^2}\left(Lx-\frac{1}{2}x^2\right).$$ Where have I gone wrong in the second solution? Edit: As @hyportnex pointed out, the initial condition $$\psi(x) = \frac{\partial\xi}{\partial t}\left(x,0\right) $$ is ambiguous. I will assume that $g$ is not turned on at $t=0$, such that $\psi\left(x\right)=0$. In hindsight, by differentiating the solution found using Fourier's method and plugging in $t=0$, we also get $\psi\left(x\right)=0$. Answer: The generalized solution does not have any boundary conditions imposed. In particular, note that we have in general $$ \frac{\partial u}{\partial x}\left(0,t\right) = \frac{1}{2}\left[ \phi'\left(ct\right) + \phi'\left(-ct\right) \right] + \frac{1}{2c} \left[ \psi(ct) - \psi(-ct)\right] + \frac{1}{2c}\int_0^t \left[ h(c(t-\tau),\tau) - h(-c(t-\tau),\tau) \right] \mathrm{d}\tau.
$$ In your case you have $$ \phi'(x) = \frac{g}{c^2}\left(L-x\right) \qquad \psi(x) = 0 \qquad h(x,t) = g $$ and so the general solution for the given initial conditions will have $$ \frac{\partial u}{\partial x}\left(0,t\right) = \frac12 \left[\phi'(ct) + \phi'(-ct)\right] = \frac{gL}{c^2} \neq 0. $$ One can also find a similar expression for $\frac{\partial u}{\partial x}$ at $x=L$, and it will not generally vanish either. To use the generalized solution while maintaining boundary conditions, I think (I haven't worked through all the details explicitly) that one can use a method-of-images technique. For your initial conditions, use a period-$L$ version of your original initial displacement function, defined for all $-\infty < x < \infty$: $$ \phi(x) = \begin{cases} \frac{g}{c^2}\left(Lx-\frac12x^2\right) & 0 \leq x \leq L \\ \phi(2L - x) & n L < x \leq (n+1) L \quad \text{($n$ odd)} \\ \phi(x + 2L) & n L < x \leq (n+1) L \quad \text{($n$ even & $n \neq 0$)} \end{cases} $$ Effectively, this "periodic version" is the function on your original domain $0 \leq x \leq L$ repeated over the real numbers, with the function "flipped" left-to-right in the "odd" domains. (This doesn't matter in your case, since this "flipping" doesn't change the parabola, but it would matter in a more general case.) The periodicity & symmetry of this function means that we will have $\phi'(x) = -\phi'(-x)$ for all $x$, and so the boundary condition $$ \frac{\partial u}{\partial x}\left(0,t\right) = 0 $$ will always be maintained. Finally, the above periodic version of $\phi(x)$ can be easily written as a Fourier series—in fact, I believe that it will be the Fourier series you found in your original solution. Plug this function into the general d'Alembert solution, simplify it using standard trigonometric identities, and I expect that you'll end up with your original solution.
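As a quick numerical sanity check of the Fourier result (with illustrative values g = 9.8 and c = L = 1 in arbitrary units, chosen here only for the demonstration), the series evaluated at t = 0 should reproduce the static profile xi(x, 0) = (g/c^2)(Lx - x^2/2):

```python
import math

# Parameters in arbitrary units (illustrative assumptions, not from the post).
g, c, L = 9.8, 1.0, 1.0
N = 5000  # number of Fourier modes kept in the partial sum

def xi_series(x, t=0.0):
    """Partial sum of the Fourier-series solution from the question."""
    s = sum(math.cos(n * math.pi * x / L) * math.cos(n * math.pi * c * t / L) / n**2
            for n in range(1, N + 1))
    return 0.5 * g * t**2 + (g * L**2 / c**2) * (1.0 / 3.0 - 2.0 / math.pi**2 * s)

def xi_static(x):
    """Static stretch profile used as the initial condition."""
    return (g / c**2) * (L * x - 0.5 * x**2)

err = max(abs(xi_series(x) - xi_static(x)) for x in (0.1, 0.3, 0.5, 0.7, 0.9))
```

With 5000 modes the truncation error is well below 1e-3, consistent with the 1/n^2 decay of the coefficients, confirming that the cosine series is the correct expansion of the parabolic initial displacement.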
{ "domain": "physics.stackexchange", "id": 97168, "tags": "waves, elasticity, differential-equations" }
Function to create URL relative to a configured base URL
Question: I have written this function to create a URL: function createUrl(urlPath) { const endpoint = url.parse(env.config.get('myEndpoint')); endpoint.pathname = path.posix.join(endpoint.pathname, urlPath); return endpoint; } const basically creates a mutable object whose properties can still be modified. Here, I need to declare some Object which will only be modified once. Am I correct in declaring endpoint as const? Answer: Yes, it's correct to use const here. That would be considered idiomatic Javascript. If you're using const when you shouldn't, you'll notice as it will produce a TypeError. People coming from other languages might disagree as they think that const means that the variable is constant/immutable (which is not the case, it's just not re-assignable). But that is really a critique of the naming of the keyword, not your usage of the keyword (which is correct).
{ "domain": "codereview.stackexchange", "id": 35489, "tags": "javascript, url, constants" }
Pattern that adds a different value to the same attribute for many classes
Question: The purpose of this question is to get some advice for improving the following code in order to make it less repetitive. .parent-div a { border-bottom: 10px solid; .blue & { border-color: $blue; } .red & { border-color: $red; } .green & { border-color: $green; } .pink & { border-color: $pink; } .yellow & { border-color: $yellow; } .purple & { border-color: $purple; } // here comes again all the styles but for each hovered anchor .blue &:hover { border-color: darken($blue, 10%); } .red &:hover { border-color: darken($red, 10%); } } Answer: You can minimize your SASS code using @each and nesting for :hover .parent-div a { border-bottom: 10px solid; @each $color in blue, red, green, yellow, pink, purple { // Iterate through colors .#{$color} & { border-color: $color; &:hover { // Nested inside border-color: darken($color, 10%); } } } } which compiles to the following output Codepen Demo .parent-div a { border-bottom: 10px solid; } .blue .parent-div a { border-color: blue; } .blue .parent-div a:hover { border-color: #0000cc; } .red .parent-div a { border-color: red; } .red .parent-div a:hover { border-color: #cc0000; } .green .parent-div a { border-color: green; } .green .parent-div a:hover { border-color: #004d00; } .yellow .parent-div a { border-color: yellow; } .yellow .parent-div a:hover { border-color: #cccc00; } .pink .parent-div a { border-color: pink; } .pink .parent-div a:hover { border-color: #ff8da1; } .purple .parent-div a { border-color: purple; } .purple .parent-div a:hover { border-color: #4d004d; } <div class="parent-div"> <a href="#">Test</a> </div> <div class="blue"> <div class="parent-div"> <a href="#">Test</a> </div> </div> <div class="red"> <div class="parent-div"> <a href="#">Test</a> </div> </div> <div class="green"> <div class="parent-div"> <a href="#">Test</a> </div> </div> <div class="yellow"> <div class="parent-div"> <a href="#">Test</a> </div> </div> <div class="pink"> <div class="parent-div"> <a href="#">Test</a> </div> </div> 
<div class="purple"> <div class="parent-div"> <a href="#">Test</a> </div> </div> But the above code seems to carry little logic. I think you were looking to nest the color classes after parent element like this: .parent-div a { border-bottom: 10px solid; @each $color in blue, red, green, yellow, pink, purple { // Iterate through colors &.#{$color} { border-color: $color; &:hover { // Nested inside border-color: darken($color, 10%); } } } } which would be logically correct and with better HTML and CSS syntax: Codepen Demo .parent-div a { border-bottom: 10px solid; } .parent-div a.blue { border-color: blue; } .parent-div a.blue:hover { border-color: #0000cc; } .parent-div a.red { border-color: red; } .parent-div a.red:hover { border-color: #cc0000; } .parent-div a.green { border-color: green; } .parent-div a.green:hover { border-color: #004d00; } .parent-div a.yellow { border-color: yellow; } .parent-div a.yellow:hover { border-color: #cccc00; } .parent-div a.pink { border-color: pink; } .parent-div a.pink:hover { border-color: #ff8da1; } .parent-div a.purple { border-color: purple; } .parent-div a.purple:hover { border-color: #4d004d; } <div class="parent-div"> <a href="#">Test</a> </div> <div class="parent-div"> <a class="blue" href="#">Test</a> </div> <div class="parent-div"> <a class="red" href="#">Test</a> </div> <div class="parent-div"> <a class="yellow" href="#">Test</a> </div> <div class="parent-div"> <a class="green" href="#">Test</a> </div> <div class="parent-div"> <a class="yellow" href="#">Test</a> </div> <div class="parent-div"> <a class="pink" href="#">Test</a> </div> <div class="parent-div"> <a class="purple" href="#">Test</a> </div>
{ "domain": "codereview.stackexchange", "id": 16650, "tags": "sass" }
Why is this not considered pressure?
Question: When you throw a ball in the air it is being pulled down by gravitational force, so we can take gravity as the force and the ball as the surface. However, it seems that this isn't pressure, but why not? Answer: The answer is: "pressure" is force per area. In other words: Pressure is the word used for "force per square meter" for example. When sitting on a chair, a normal force $F$ holds you up. It could be 500 newtons. Divide this by the area $A$ of the contacting surfaces, and you have something we could call the normal pressure $p=F/A$. This simply says something about how "spread out" the force is over the chair. If the chair was half a square meter large, $A=0.5 \;\mathrm{m^2}$, the pressure would be $p=500\;\mathrm{N}/0.5\;\mathrm{m^2}=1000\;\mathrm{N/m^2}$. A 50 kg woman in high heels might break through the grass lawn and sink in with the heels, because the heel area is tiny compared to a flat shoe, even though the lawn carries the same weight. The point of pressure is knowing how "spread out" the force is.
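The heel-versus-flat-shoe comparison in the answer is just p = F/A evaluated twice; here is a tiny sketch with illustrative, assumed contact areas:

```python
# Pressure is force spread over area: p = F / A.
# The contact areas below are rough illustrative assumptions.
def pressure(force_n, area_m2):
    return force_n / area_m2

weight = 50.0 * 9.8                    # 50 kg person -> about 490 N of weight
flat_shoes = pressure(weight, 0.03)    # ~0.03 m^2 of flat sole in contact
heels = pressure(weight, 0.0002)       # ~2 cm^2 of stiletto heel in contact
# Same force, 150x smaller area -> 150x larger pressure: the heel sinks in.
```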
{ "domain": "physics.stackexchange", "id": 37236, "tags": "newtonian-mechanics, forces, newtonian-gravity, pressure, projectile" }
Work done on a container, resulting in pressure change
Question: Why does pumping gas into a rigid container (causing the pressure and temperature to increase) not mean that work has been done on the container? Work is given by the formula: W = -P ΔV If I go off the formula alone, ΔV=0 (as the container is rigid) and hence W = 0. But if I ignore the formula and take the definition of work as "the energy it takes to move an object against a force", wouldn't pumping gas into the container require work in itself, due to the effort required to get the gas molecules into the container? E.g., as with pumping a bike tyre, the gas is not going to go into the container unless I pump it in. For reference: Question from textbook: A rigid container of constant volume is used to store compressed gas. When gas is pumped into the container, the pressure of the gas inside the container is increased and the temperature of the container also increases. Which statement is true of the work done on the container? (a) The work is equal to the increase in the pressure inside the container. (b) The work is equal to the increase in the temperature inside the container. (c) The work is equal to the sum of the pressure and temperature increases. (d) There is no work done on the container. Answer from textbook: (d) the container does not expand, so as there is no change in volume of the container, no work is done on the container. Answer: Yes, the explanation the container does not expand, so as there is no change in volume of the container, no work is done on the container. is right as it takes into account that the container is rigid, so its volume cannot be changed. Thus, there is no work done by the gas on the container. Also, if you say " pumping gas into the container require work in itself, due to the effort required to get the gas molecules into the container " Yes, there is work done on the gas, but it is by the piston (or the pump).
You should look at how the laws of thermodynamics define the boundary between a system and its surroundings: work is always done by "something" on "something". Also, the question clearly asks "Which statement is true of the work done on the container?" So, as the container's volume is unaltered, no work is done on it.
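A minimal sketch of the sign convention at play (the pressure and volume values are made up): W = -P_ext ΔV vanishes for the rigid container because ΔV = 0, while the pump's piston, which does sweep out volume, does nonzero work on the gas.

```python
# Pressure-volume work done ON a gas: W = -P_ext * dV.
# A rigid container (dV = 0) receives no P-V work; the pump's piston,
# which does sweep out volume, does real work on the gas.
def pv_work_on_gas(p_ext_pa, dv_m3):
    return -p_ext_pa * dv_m3

rigid_container = pv_work_on_gas(2.0e5, 0.0)     # volume fixed -> 0 J
piston_stroke = pv_work_on_gas(2.0e5, -1.0e-4)   # piston compresses 0.1 L -> +20 J
```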
{ "domain": "chemistry.stackexchange", "id": 5959, "tags": "thermodynamics, pressure" }
Which of the following compounds does not undergo electrophilic aromatic substitution?
Question: Which of the following does not undergo electrophilic aromatic substitution? Answer is option (D). But I am not able to understand why. All of them have lone pairs. I can notice only two differences. The functional group in option (d) has the largest size and lowest electronegativity. I think that may be because of the large difference in size between selenium and carbon, pi bond formation between them will be a bit difficult, but that bond is not playing any role here except stabilizing the compound. Low electronegativity seems like a good thing to me as it does not deactivate the compound. So what am I missing here? Answer: Wikipedia disagrees. Selenophene does undergo EAS, and apparently faster than thiophene. None of the proposed answers to the original question is correct.
{ "domain": "chemistry.stackexchange", "id": 9736, "tags": "organic-chemistry, aromatic-compounds" }
Invert bits of binary representation of number
Question: This is the code I came up with. I added comments to make the solution more verbose. int findComplement(int num) { // b is the answer which will be returned int b = 0; // One bit will be taken at a time from num, will be inverted and stored in n for adding to result int n = 0; // k will be used to shift bit to be inserted in correct position int k = 0; while(num){ // Invert bit of current number n = !(num & 1); // Shift the given number one bit right to access next bit in next iteration num = num >>1 ; // Add the inverted bit after shifting b = b + (n<<k); // Increment the number by which to shift next bit k++; } return b; } Is there any redundant statement in my code which can be removed? Or any other better logic to invert bits of a given integer? Answer: int n = 0; This initialization is not used. It could simply be int n;, or could be int n = !(num & 1); inside the loop, to restrict the scope of n. This loop: int k = 0; while (num) { ... k++; } could be written as: for(int k = 0; num; k++) { ... } Since you are doing bit manipulation, instead of using addition, you should probably use a “binary or” operation to merge the bit into your accumulator: b = b | (n << k); or simply: b |= n << k; Bug You are not inverting the most significant zero bits. Assuming an 8-bit word size, the binary complement of 9 (0b00001001) should be 0b11110110, not 0b00000110. And the complement of that should return to the original number (0b00001001), but instead yields 0b00000001. And, as mentioned by @Martin R, you could simply return ~num;
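A hedged Python sketch (function names are mine) separating the two readings of "complement" in this review: flipping only the significant bits, which is what the posted loop actually computes, versus flipping every bit of a fixed word, which is what the 8-bit example in the bug note expects.

```python
def find_complement(num):
    """Flip only the significant bits of num; the posted loop terminates
    at the highest set bit, so this matches what it computes."""
    mask = (1 << num.bit_length()) - 1
    return num ^ mask

def word_complement(num, bits=8):
    """Flip all bits within a fixed word size, as the review's 8-bit
    example (0b00001001 -> 0b11110110) expects."""
    return num ^ ((1 << bits) - 1)
```

Note that `~num` on a C int corresponds to the fixed-word reading (at the full int width), not the significant-bits one.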
{ "domain": "codereview.stackexchange", "id": 35591, "tags": "c++, bitwise" }
Why can't one get extremely high accuracy with a machine learning algorithm?
Question: Suppose we have a classical machine learning problem. Say $m$ training examples and $n$ features with $m >> n$. Suppose I find a great algorithm using automl or otherwise that gives 95% accuracy on the training set. What prevents us from learning from the 5% mistakes (if $0.05 m >>1$) and training a new algorithm just on these 5% mistakes using another algorithm and so on till we get a small sample when this can't be done anymore? Answer: Getting a high training accuracy on data without contradictions is nearly always possible, given a parametric function that can fit it in theory (enough flexibility and free parameters). It often does not require any special handling, such as the divide and conquer approach suggested in the question, although that could possibly work. However, copying the training data as accurately as possible is not the ultimate goal for most ML. Instead, the more normal goal is to use the trained model to predict and make use of output values when they are not otherwise known - e.g. they predict future events, or things that take more effort to measure/calculate than the input data. For these kinds of use, the performance of ML on the training data is secondary to its performance on data outside of the training set. Forcing a perfect match to training data often leads to a phenomenon called overfitting, where performance on new unseen data is far worse than performance on the training data. So usually ML researchers focus their effort on improving results against validation and ultimately test data, which is data in the same format as training data including known target values. As the point is to not train against this data, it is not possible to force accuracy in the same way, and doing so would be counterproductive anyway: you want to know how good the ML is when used in reality, not just get a high score in a virtual space.
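A minimal illustration of this point (toy data with an assumed rule y = x mod 2): a lookup table reaches 100% training accuracy by construction, which is the limiting case of the question's "keep fixing the mistakes" loop, yet it tells you nothing about inputs it has not seen.

```python
# Toy target rule: y = x mod 2. The "model" memorizes every training pair.
train = [(x, x % 2) for x in range(100)]
test = [(x, x % 2) for x in range(100, 200)]  # unseen inputs, same rule

memo = dict(train)                   # "training" = memorization
predict = lambda x: memo.get(x, 0)   # unseen inputs: fall back to a constant

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
```

Training accuracy is perfect; test accuracy is no better than guessing, which is why the effort goes into validation and test performance rather than squeezing out the last training-set errors.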
{ "domain": "ai.stackexchange", "id": 4023, "tags": "machine-learning" }
What keeps Chthonian planets so dense?
Question: The cores of gas giants are kept under incredible pressure by the weight of the gases of the giant. In a chthonian planet, said gases are stripped away, revealing the core of the gas giant, which should alleviate the pressure compressing the core of the gas giant. In a chthonian planet candidate, Kepler 52-c, its density is over 600 times that of Jupiter. What is keeping the planet at such a high density? Answer: 10 Jupiter masses at around 2 Earth radii? That for sure doesn't exist / would be quite the sensation to discover. When looking at data of any kind, one should pay attention to the measurement errors at least as much as to the actual value. A regular physics result (for example a measurement of the gravitational acceleration $g$ where you stand) looks like $$g=(9.81 \pm 0.02) \frac{m}{s^2} $$ or, if for any reason you have asymmetric errors, $$g=(9.81^{+0.02}_{-0.01}) \frac{m}{s^2} $$ and errors always give an idea of how uncertain the method is with which the value was derived. Now if you take a look at the errors reported for the mass quoted on the website, you see that they are $$M_{planet} = (10.41^{+0.0}_{-10.41}) $$ or, so to say, highly asymmetric, which should make one suspicious. A look into the original publication then makes it clear that this cited mass is in fact only an absolute upper limit. The authors of the paper were using two methods to estimate the masses of planets, one being to look for transit timing variations of known, observed transiting systems. That means they had the system Kepler 52, with transiting planets K52b and c. K52b, because it transits far more often than c, has a well-determined period (a period with small errors!), and because of that any deviation in expected future transit times could be attributed to the maximum mass of K52c. The more massive and the more compact a system is, the quicker it will destabilize.
This fact is often used in reverse: take the system age and, at given distances, derive maximum masses below which the system must lie, or else it would have already flown apart. Both methods can only give maximum masses, and I'll just leave here fig. 5 from the original paper with the planet you're interested in: Now remembering that $1 M_J \approx 320 M_{\oplus}$, you see where your 10 Jupiter masses for K52c come from: that's the planet's possible maximum mass for system stability. The TTV method already gives a constraint that is roughly 100 times lower ($37.4 M_{\oplus} \approx 0.11 M_J$). Thus $37.4 M_{\oplus}$ is the planet's true maximum mass. This is clearly an error on the side of exoplanet.eu, but then probably there are too many planets and papers to read for whoever puts those data in there. Summarizing: what we have here is only a maximum mass, and also the wrong one. As for which is now more probable, $M_{K52c} = 37.4 M_{\oplus}$ or $M_{K52c} = 3.74 M_{\oplus}$, I'm not certain I understand their anticorrelation method for the TTV signals well enough to say.
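The unit conversion behind these numbers is quick to check. The script below is purely illustrative; the masses come from the answer, and it uses the answer's rounding of 1 M_J ≈ 320 M_⊕:

```python
# Values quoted in the answer; the conversion factor is its rough rounding.
M_J_IN_M_EARTH = 320.0
stability_limit_mearth = 10.41 * M_J_IN_M_EARTH  # stability bound, in Earth masses
ttv_limit_mj = 37.4 / M_J_IN_M_EARTH             # TTV bound, in Jupiter masses
ratio = stability_limit_mearth / 37.4            # how much tighter the TTV bound is
```

The TTV limit comes out near 0.12 M_J and the ratio near 90, in line with the "roughly 100 times lower" statement.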
{ "domain": "astronomy.stackexchange", "id": 2534, "tags": "exoplanet, chthonian" }
Nature of spacetime in the holographic boundary
Question: My understanding from popular science articles is that the boundary is a field theory that has no gravity and has one less spatial dimension than the bulk. However, I am not sure I understood this picture correctly. I just read a recent article at quanta magazine that states: "A solar system in the central anti-de Sitter region, for instance, can be described as a collection of particles scattered around the boundary that obey only quantum theory and have no sense of gravity or space-time at all". So my question is, did the quote mean no curved spacetime, or there is not even a minkowski spacetime associated with the physics in the boundary? I dont even understand how it is possible to have a quantum theory of particles without either space or time. Answer: The AdS/CFT correspondence is a conjecture where every (quantum)-gravitational theory in $D$-dimensional space-time with AdS asymptotics is proposed to be a dual of some (quantized) conformal field theory (CFT) in $D-1$ dimensional space-time conformally equivalent to Minkowski space-time. In other words, the CFT lives in a metric of the form $$ds^2 = \Omega^2 (t,x^i)\left[-d t^2 + \sum_{i =1}^{D-2} (d x^i)^2 \right]\,,$$ where $\Omega$ is any smooth nonvanishing function on space-time. The reason for this freedom is that the CFT is not sensitive towards this rescaling, it only cares about the lightcone (causal) structure of the space-time, not about the absolute magnitudes of time or space intervals. Funnily enough, Minkowski space-time can conformally map to huge patches of other space-times with different curvature, such as dS and AdS space-times, so the CFT is really quite insensitive to the local space-time geometry. This is also the reason why people are not particularly eager to specify which sort of geometry the CFT lives in, since this is not particularly meaningful. I believe this is what the author of the Quanta magazine article was trying to hint at. 
P.S.: CFTs are funny beasts in various ways. The Quanta article talks about particles in the CFT at the boundary, but what exactly one should call a particle in a CFT is up for debate.
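To make the claim about Minkowski space mapping conformally onto large patches of AdS concrete, the Poincaré-patch form of the AdS metric (a standard textbook expression, not taken from the answer) is manifestly a conformal rescaling of the half-space $z>0$ of Minkowski space:

```latex
% Poincare patch of AdS_D: a conformal factor Omega = L_AdS/z times flat space.
ds^2 \;=\; \frac{L_{\mathrm{AdS}}^2}{z^2}
\left[\, -dt^2 \,+\, \sum_{i=1}^{D-2} (dx^i)^2 \,+\, dz^2 \,\right],
\qquad \Omega(z) \;=\; \frac{L_{\mathrm{AdS}}}{z}, \quad z>0 .
```

Since the CFT only sees its metric up to such a factor $\Omega$, it is equally at home on flat space or on any conformally related geometry, which is the sense in which it is insensitive to the local space-time geometry.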
{ "domain": "physics.stackexchange", "id": 96993, "tags": "ads-cft, holographic-principle" }
If a carbonic acid to bicarbonate reaction involves the release of H+, why does pH increase?
Question: OK, I'm an engineer that took Aqueous Geochem 6 years ago, so please forgive this basic (get it?) question. But why do the two reactions which start from dissolution of CO2 in water increase pH if they both release H+ ions? I.e., the conversion of carbonic acid to bicarbonate releases a hydrogen proton, and the conversion from bicarbonate to carbonate releases a hydrogen proton. Yet each reaction results in a higher pH. Specifically, why don't the additions of H+ decrease pH? $\ce{H2CO3 + 2 H2O ⇌ HCO3- + H3O+ + H2O ⇌ CO3^{2-} + 2 H3O+}$ Thanks, and apologies for the (probably simplistic) question. Mike Thank you all for your help with this. I should clarify that what is really troubling me is the interpretation of the Bjerrum Plot for the carbonate system (aqueous geochem). Looking, for example, at that plot here: https://en.wikipedia.org/wiki/Bjerrum_plot#Bjerrum_plot_equations_for_carbonate_system For HCO3- to transition to CO32-, an H+ ion is released (going rightward in the plot), but that is associated with higher pH (less acidity). But if increasing H+ ions are the very definition of increasing acidity, why in the plot is that reaction associated with increasing pH? Thanks again! Mike Answer: There is sure to be a more technical (and more pleasing to the physical chemist) answer to this, but as a heuristic, don't consider what the carbonic acid is doing to the solution, consider how the solution is acting on the bicarbonate. The plot you're referencing describes mole fractions of diprotic acids in solutions relative to pH. This isn't the same as the pH of the carbonic acid by itself. Of course, carbonic acid in a cup of water decreases the pH of the solution, but instead consider an equilibrium condition in which a happy amount of bicarbonate and hydronium exist. We're not concerned with what happens if we add more carbonic acid, we only care about what happens to the bicarbonate if we vary the concentration of our hydronium.
In a solution rich with hydronium such that there are many free protons to act on bicarbonate, equilibrium will shift towards the production of carbonic acid. Thus, at a lower pH, carbonic acid is the dominant species, as we can clearly see by our mole fraction of the acid hitting a maximum at lower pH's. In a solution quite poor in hydronium, such that there are very few protons to act on bicarbonate, equilibrium will shift to produce carbonate. This is because we perhaps have free hydroxide ions in solution, which are perfect homes for bicarbonate protons. Therefore, we see a higher mole fraction of our carbonate ion at higher pH's. Again, the dissociation of carbonic acid, which is really just dissolved carbon dioxide, is in fact a proton-producing process. But the plot you're describing is a relationship that describes which species are dominant at different concentrations of hydronium, not the concentration of hydronium (pH) as carbonic acid dissociates. To directly address the equilibrium you've shown us: the real bump in the road is the dissociation of the bicarbonate. Remember that in this situation, we must have a very hydronium-poor solution, such that it's likely that we actually have free hydroxide ions. So, what happens to the 2 protons on the far right side of your formula? Well, they pair up with the hydroxide! The issue is that you can't see that, because again the equilibrium formula you have doesn't intuitively show that there is an entire solution that impacts the equilibrium. So where do we get these hydroxide ions from? Take a look at one of the answers below! The Le Chatelier way of saying this is if we decrease the concentration of hydronium relative to the bicarbonate, equilibrium will shift to try to increase it again. But clearly the best way to do that isn't by producing more bicarbonate, because the stoichiometry shows the fastest way to get more hydronium is by producing more carbonate.
For every one mole of hydronium relative to bicarbonate, we have two moles of hydronium relative to carbonate. So in a weird way, by increasing our pH to become hydronium deficient, equilibrium shifts towards the intuitively lower pH side of the reaction to try and counteract this and we get carbonate as a byproduct. This may be the source of your confusion. "Ask not what your acid can do for your solution, but what your solution can do for your acid" - JFpKa (buh-dum-tss)
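The mole-fraction relationship the answer describes can be sketched directly from the Bjerrum equations on the linked Wikipedia page. Below is a minimal Python illustration; the pKa values of roughly 6.35 and 10.33 for carbonic acid are textbook approximations that I am assuming here, not numbers taken from the question:

```python
# Sketch of the Bjerrum mole-fraction equations for the carbonate system.
# pKa values are textbook approximations (an assumption, not from the post).
PKA1, PKA2 = 6.35, 10.33

def carbonate_fractions(ph):
    """Return mole fractions (alpha_H2CO3, alpha_HCO3, alpha_CO3) at a given pH."""
    h = 10.0 ** (-ph)                        # hydronium concentration
    ka1, ka2 = 10.0 ** -PKA1, 10.0 ** -PKA2
    denom = h * h + ka1 * h + ka1 * ka2
    return (h * h / denom, ka1 * h / denom, ka1 * ka2 / denom)

if __name__ == "__main__":
    for ph in (4, 8, 12):
        a = carbonate_fractions(ph)
        print(f"pH {ph}: H2CO3={a[0]:.3f}  HCO3-={a[1]:.3f}  CO3^2-={a[2]:.3f}")
```

Running it shows exactly the picture in the answer: carbonic acid dominates at low pH, bicarbonate near neutral, and carbonate only when the solution is very hydronium-poor.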
{ "domain": "chemistry.stackexchange", "id": 16144, "tags": "ph" }
Cannot find duration__rosidl_typesupport_connext_c
Question: Hello! I am trying to compile ROS2 builtin_interfaces__rosidl_typesupport_c.dir/rosidl_typesupport_c for an unsupported DDS and I am hitting this error: [ 46%] Building CXX object CMakeFiles/builtin_interfaces__rosidl_typesupport_c.dir/rosidl_typesupport_c/builtin_interfaces/msg/duration__type_support.cpp.o [ 50%] Building CXX object CMakeFiles/builtin_interfaces__rosidl_typesupport_c.dir/rosidl_typesupport_c/builtin_interfaces/msg/time__type_support.cpp.o /usr/bin/c++ -Dbuiltin_interfaces__rosidl_typesupport_c_EXPORTS -I/ros2_src/build/builtin_interfaces/rosidl_typesupport_c -I/ros2_src/build/builtin_interfaces/rosidl_typesupport_connext_c -I/ros2_src/install/include -I/ros2_src/build/builtin_interfaces/rosidl_generator_c -Wall -Wextra -Wpedantic -g -fPIC -Wall -std=gnu++14 -o CMakeFiles/builtin_interfaces__rosidl_typesupport_c.dir/rosidl_typesupport_c/builtin_interfaces/msg/duration__type_support.cpp.o -c /ros2_src/build/builtin_interfaces/rosidl_typesupport_c/builtin_interfaces/msg/duration__type_support.cpp /usr/bin/c++ -Dbuiltin_interfaces__rosidl_typesupport_c_EXPORTS -I/ros2_src/build/builtin_interfaces/rosidl_typesupport_c -I/ros2_src/build/builtin_interfaces/rosidl_typesupport_connext_c -I/ros2_src/install/include -I/ros2_src/build/builtin_interfaces/rosidl_generator_c -Wall -Wextra -Wpedantic -g -fPIC -Wall -std=gnu++14 -o CMakeFiles/builtin_interfaces__rosidl_typesupport_c.dir/rosidl_typesupport_c/builtin_interfaces/msg/time__type_support.cpp.o -c /ros2_src/build/builtin_interfaces/rosidl_typesupport_c/builtin_interfaces/msg/time__type_support.cpp /ros2_src/build/builtin_interfaces/rosidl_typesupport_c/builtin_interfaces/msg/duration__type_support.cpp:16:77: fatal error: builtin_interfaces/msg/duration__rosidl_typesupport_connext_c.h: No such file or directory compilation terminated. I cannot find where the duration__rosidl_typesupport_connext_c.h is generated. Could you tell me where it is? 
Thanks :) Originally posted by pokitoz on ROS Answers with karma: 527 on 2017-11-23 Post score: 1 Answer: Found it. It was under rosidl_default_generators/rosidl_default_generators-extras.cmake.in set(_exported_dependencies HERE ) And I forgot to add my dependency. Originally posted by pokitoz with karma: 527 on 2017-11-28 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 29436, "tags": "ros, makefile" }
Threshold to consider new feature as a new finding to a model?
Question: I am working on a binary classification problem with 5K records and 60 features. Through feature selection, I narrowed it down to 14 features. In the existing literature, I see that there are 5 well-known features. I started my project with an aim to find a new feature that can help improve the predictive power of the model. However, I see that with the well-known features (reported in the literature), it produces an AUC of 84-85, and having all my 14 features decreases it to 82-83. So I tried manually adding and dropping features and found out that if I add only one feature (let's say a magic feature), it increases the AUC to 85-86. I see that there is a difference of 1 point in AUC. 1) Is it even useful to be happy that this adds some info to the model? 2) Or is looking at AUC not the right way to measure model performance? 3) Does it mean the other new features (9 out of 14) that I selected based on different feature selection / genetic algorithm methods aren't that useful? Because my genetic algorithm returned 14 features, I was assuming that was the best subset, but still, through my previous experiments I know that the model had better performance when it had 5 features. Any suggestions here? What can I do? 4) I am currently using a train and test split as my training and testing data. I applied 10-fold CV to my data. Should I be doing anything different here? 5) If I add around 16-17 features, I see the AUC is increased to 87, but this can't be overfitting, right? Because if it's overfitting, shouldn't I be seeing the AUC as 97-100 or just 100? I know we have Occam's razor principle to keep the model parsimonious, but in this case, just having 16-17 features in the model is not too complex or heavy. Am I right? Because it's increasing the AUC. Any suggestions on this? Answer: A lot of questions here. Here are some thoughts. Should you be happy about a 1 point increase in AUC? Yes. An effect can be genuine, but small. A 1 point advantage is still an improvement. But do I trust that outcome? Not sure. 
You need some more data. Your sample size is not large. Furthermore, cross-validation is a wonderful thing, but you have been running a lot of tests on the same small data set, so cross-validation notwithstanding, we really don't know how your classifier will perform on brand new, unseen data. Point 3: This sounds like an issue with over-fitting. Point 4: I'm not sure what you are doing here. Cross-validation is an alternative to "train" and "test" sets, since each fold acts as the "test" set to the model fit on the non-fold part of the data. Did you mean to say that you were doing CV on the train part, and then used the "test" part for a final check on performance? That would be a good thing to do, but if you have been using the same test set to check every model you have tried, then that test set is beginning to look like a training set. You will need another "test" set. Point 5: You don't need a ridiculously high AUC to be guilty of over-fitting. Over-fitting happens when you fit features to the noise in your data set. An over-fit model will have a higher mean squared error on a fresh test set than the optimal model, although it will do better on the training set. Having 16 to 18 features is too many if, in fact, the outcome is well explained with only 5. Given that you have reduced your candidate features to 14, you now have a much smaller problem, and it should be possible to examine the features from a subject matter perspective. Which features would an SME recommend you retain? You can also examine correlations between the 14 features and check for redundancy that way. With a small data problem (which is what this is), you can work to understand your data directly. That approach might yield some interesting insights. You might want to use a different method of model selection than maximal AUC. This paper discusses the merits of AUC and offers an alternative. Could be interesting.
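For intuition about what a 1-point change means: AUC equals the probability that a randomly chosen positive example is ranked above a randomly chosen negative one (the Mann-Whitney statistic), so 0.85 versus 0.86 is roughly one extra correctly ranked positive/negative pair per hundred. A minimal Python sketch of that equivalence (my own illustration, not the poster's pipeline):

```python
# AUC as the fraction of (positive, negative) pairs the model ranks correctly.
def auc(scores, labels):
    """labels: 1 for positive, 0 for negative; tied scores count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.4, 0.8, 0.3], [1, 1, 0, 0]))  # 0.75: one of four pairs mis-ranked
```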
{ "domain": "datascience.stackexchange", "id": 6705, "tags": "machine-learning, deep-learning, predictive-modeling, statistics, feature-selection" }
Multi machine communication through Internet
Question: Hi All, I have been using ROS for a while now and most of my activities involve multi-machine communication. However, all this happens in a network formed with a single router (LAN) in the same subnet. My question is, is it possible to achieve multi-machine communication using ROS on a global scale? I believe it is definitely possible with complex TCP/IP programming or using the concept of server/client. However, I would like to know whether there are any simpler techniques to achieve this, the way I have been able to do simple multi-master communication using ROS in a LAN. It would be great if someone would be able to walk me through the beginner's steps for this. Thanks. Originally posted by SivamPillai on ROS Answers with karma: 612 on 2012-08-28 Post score: 1 Answer: I would use a VPN. Have a look at this answer. Edit: Maybe I should also provide some explanation. As long as you set up your routing tables correctly, you can have nodes communicating across subnet boundaries. But it is important that nodes can directly reach each other, i.e. every computer needs to be pingable from every other computer. Theoretically, it would also be possible to create relay computers that relay nodes from one sub-net into another one but that's sort of hard to get right. The easiest solution for having multiple nodes in different subnets is definitely to set up a VPN and put all nodes into a virtual subnet. Originally posted by Lorenz with karma: 22731 on 2012-08-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by SivamPillai on 2012-08-29: I did go through your link but it did not seem so simple for me the way it's stated there because I am not that good with networking concepts, but it acts as a starter. Maybe I'll read more before I post any sensible query on that if I come across one. Thanks for your link and your answer there. :) Comment by SivamPillai on 2012-09-03: I'll implement this answer at quite a later time... 
but after reading through more information on the internet your answer makes perfect sense now... a VPN should definitely work. Thanks.
{ "domain": "robotics.stackexchange", "id": 10802, "tags": "ros, c++, ros-fuerte, multi-machine" }
Mnemonic for vector cross versus dot?
Question: I always struggle to remember the difference between vector multiplication with the dot product versus the cross product. Is there any mnemonic so that I can easily remember which is which? Answer: Couldn't be easier! The cross is of course a cross i.e., as in say the Christian symbol, the ordinary English word cross. It's called a "cross" because it's two lines crossing. Since it is a cross of two lines, it's hard to see how it could be easier to remember!! "Dot" is equally simple: the result is just a simple value ... kind of like a number, "speed", or size. (Note that of course in ordinary kindergarten arithmetic, the dot just means simple multiplication. For example 3·4 = 12. Or "6·a" is "6a". The result of a dot is nothing more than a number, like 13.3 or 28. It's exactly, totally, the same when dealing with vectors - of course the result of a dot, is, simply a number!) Regarding making a cross, here's an excellent and entertaining article by some drunk about the two ways you can go when you make that cross (either "up" or "down," so to speak), which is just decided by convention depending on what chipset you're using. http://answers.unity3d.com/answers/267076/view.html Of course, the "cross" of two vectors is just the thing that sticks up when the vectors "cross" -- what else could it be called? You'll never forget it again!
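If code sticks better than words, the mnemonic is visible in the return types themselves: dot gives a single number, cross gives a new vector. A tiny Python sketch (my own illustration):

```python
# dot -> one number (a scalar); cross -> a whole new vector (3-D only).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

print(dot((1, 0, 0), (0, 1, 0)))    # 0 -- perpendicular vectors, scalar result
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1) -- the vector that "sticks up"
```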
{ "domain": "physics.stackexchange", "id": 27011, "tags": "vectors, mnemonic" }
Can a single-qubit state be nontrivially extended to a non-pure state?
Question: Consider a generic single-qubit state $$\rho=\lambda_1\lvert \lambda_1\rangle\!\langle \lambda_1\rvert+\lambda_2\lvert \lambda_2\rangle\!\langle \lambda_2\rvert\in\mathcal H_S.$$ I am interested in understanding what the possible extensions of $\rho$ are, that is, the states $\tilde\rho\in\mathcal H_{SE}$ such that $\operatorname{tr}_E(\tilde\rho)=\rho.$ It is relatively easy to find the general structure of extensions that are pure, but less so in the more general case of non-pure extensions. In particular, is it possible to have a non-trivial extension of $\rho$ which is not a purification? By non-trivial here I mean that it must also decrease the amount of uncertainty associated with $\rho$. This means no trivial extensions of the form $\tilde\rho=\rho\otimes\sigma$, and no extensions built by simply attaching a set of orthonormal states to the eigenvectors of $\rho$, that is, no extensions of the form $\tilde\rho=\sum_k \lambda_k \lvert\lambda_k\rangle\!\langle\lambda_k\rvert\otimes\sigma_k$ with $\lambda_k$ the eigenvalues of $\rho$. Answer: Sure. Just take any random purification with a large purifying space $\mathbb C^2\otimes \mathbb C^d$, and trace out the $\mathbb C^d$ component. To give a randomly made up example, $$ \rho = \left(\begin{matrix} .25 & .20 & .10 & .05 \\ .20 & .25 & .00 & .05 \\ .10 & .00 & .25 & -.15 \\ .05 & .05 & -.15 & .25 \end{matrix}\right) $$ is a (non-pure) extension of the state $$ \rho_A = \left(\begin{matrix} .50 & .05 \\ .05 & .50 \end{matrix}\right)\ . $$ That the example is not compatible with the special forms $\tilde\rho$ you give above can be straightforwardly checked from the eigenvalues of $\rho$ -- for both those $\tilde\rho$, it holds that the eigenvalues of $\rho_A$ can each be written as a sum of two eigenvalues of the extension, which can be easily tested not to be the case here. 
To explain the last argument in more detail: Let $\tilde\rho=\sum \lambda_k |\lambda_k\rangle\langle\lambda_k|\otimes\sigma_k$ (which includes the first, trivial form if all $\sigma_k$ are equal). Denote by $\mu_i(\sigma_k)$ the eigenvalues of $\sigma_k$. Then, the eigenvalues of $\tilde\rho$ are $$ \tau_{i,k} = \lambda_k\,\mu_i(\sigma_k)\ . $$ Thus, we have that $$ \sum_i\tau_{i,k} = \lambda_k $$ (as $\mathrm{tr}\,\sigma_k=1$), i.e., each two (where "two" is the dimension of the extending system) eigenvalues of $\tilde\rho$ add up to an eigenvalue $\lambda_k$ of $\rho$. It can be easily checked that this property does not hold for the example. Note that this richness of extensions is exactly a problem in computing the squashed entanglement, where one optimizes over (non-pure) extensions of arbitrary dimensions.
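The example can be checked numerically. The sketch below (my addition) assumes the standard tensor ordering in which the traced-out system is the first qubit; it recovers the claimed reduced state and confirms the 4x4 extension is not pure:

```python
# Verify the 4x4 example: tracing out the first qubit gives rho_A, and
# tr(rho^2) < 1 shows the extension is mixed, i.e. not a purification.
rho = [[0.25, 0.20, 0.10, 0.05],
       [0.20, 0.25, 0.00, 0.05],
       [0.10, 0.00, 0.25, -0.15],
       [0.05, 0.05, -0.15, 0.25]]

def trace_out_first(m):
    """Partial trace over the first qubit of a two-qubit density matrix."""
    return [[m[i][j] + m[i + 2][j + 2] for j in range(2)] for i in range(2)]

rho_a = trace_out_first(rho)             # ~[[0.50, 0.05], [0.05, 0.50]]
purity = sum(rho[i][j] * rho[j][i] for i in range(4) for j in range(4))
print(rho_a, purity)                     # purity well below 1
```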
{ "domain": "physics.stackexchange", "id": 53253, "tags": "hilbert-space, quantum-information, quantum-entanglement, density-operator, quantum-states" }
Where can find default setup.zsh file?
Question: I have unexpectedly changed the /opt/ros/indigo/setup.zsh file and am now unable to source it. I want to restore the contents of /opt/ros/indigo/setup.zsh to the default values. Where can I find the default file? Originally posted by anonymous25787 on ROS Answers with karma: 31 on 2016-10-23 Post score: 0 Answer: You could run sudo apt-get install --reinstall ros-indigo-catkin to get the file back. Here is the source of the file if you'd rather manually replace it. Originally posted by jarvisschultz with karma: 9031 on 2016-10-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26027, "tags": "ros" }
Which way Le Chatelier's principle
Question: I have been pondering the effect of pressure change, in regard to Le Chatelier's principle, on reactions. For this, I considered the following reaction: $$\ce{CO(g) + 3H2(g) <=> CH4(g) + H2O(g)}$$ Le Chatelier's principle states that when the pressure on the whole reaction mixture increases, the equilibrium will shift in the direction where the number of moles of reactant(s)/product(s) is smaller, since the partial pressure of a gaseous substance is directly proportional to its mole fraction. This happens to oppose the effect of the applied pressure that tends to disturb the established equilibrium. But here is my argument: if the externally applied pressure has to be opposed, why can't the equilibrium just shift backward instead? If it shifted in the backward direction, where there is a greater number of moles, would that not be more effective at opposing the external pressure? Answer: This can get confusing. What you seem to have expressed is that you want to reduce the effect of an increased external force by increasing the internal pressure. This can be done in a situation such as a cartridge primer, where the firing pin initiates a chemical reaction to give an increase in pressure; but that is not a system at equilibrium. A discussion of Le Chatelier's principle is about the effect on the equilibrium under the changing conditions. Try to simplify the question in your own mind. At equilibrium the rates of the forward and reverse reactions are equal; the equilibrium constant is the ratio of the rate constants, which reduces to the reaction quotient. A simple increase in pressure will increase each factor in the reaction quotient by the same ratio. There are four pressure factors for the forward rate (one for CO and three for H2) and two for the reverse. Increasing the pressure therefore accelerates the forward reaction more. In a quick pressure increase it could be possible to follow the reaction by monitoring the pressure.
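The rate argument can also be phrased with the reaction quotient $Q = P_{\mathrm{CH_4}}P_{\mathrm{H_2O}}/(P_{\mathrm{CO}}P_{\mathrm{H_2}}^3)$: compressing the mixture scales every partial pressure by the same factor $f$, so $Q$ changes by $f^{2-4}=f^{-2}$ and drops below $K$, driving the net reaction toward the side with fewer gas moles. A quick numerical sketch (the pressure values are made up purely for illustration):

```python
# Why compression pushes CO + 3 H2 <=> CH4 + H2O toward fewer gas moles:
# start at a state we treat as equilibrium (Q == K), double every partial
# pressure, and see which way Q moves.
def q(p_co, p_h2, p_ch4, p_h2o):
    return (p_ch4 * p_h2o) / (p_co * p_h2 ** 3)

p = dict(p_co=1.0, p_h2=1.0, p_ch4=2.0, p_h2o=2.0)
k = q(**p)                                                 # treat as equilibrium
compressed = {name: 2.0 * val for name, val in p.items()}  # double all pressures
print(q(**compressed) / k)  # 0.25: Q falls below K, so the net reaction runs forward
```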
{ "domain": "chemistry.stackexchange", "id": 17431, "tags": "physical-chemistry, equilibrium, pressure" }
Will Emf induce in current carrying conductor in solenoid?
Question: If a current carrying conductor is placed inside a solenoid, where current passes through both the solenoid and the conductor, will an extra emf (potential difference)/current be induced in the conductor? [The current in the conductor is less than that of the solenoid.] Let the emf already existing in the conductor be E1. If an emf is induced, assume the emf induced by the solenoid is E2. Will the total emf be E1+E2 or E2 or some other value? Please answer soon... Answer: Yes. If you place a current carrying conductor in the vicinity of a loop (here a solenoid), the magnetic flux passing through the loop will change (because earlier the current carrying conductor was not around), inducing an emf in the loop. The emf induced $E_1$ can be calculated using Faraday's law and its value, in general, may be either positive or negative depending on the direction of the magnetic field that the current carrying conductor brings with it. If the previous value of emf in the loop (solenoid) was $E_2$, and the induced emf is $E_1$, the new emf would be $E_2 + E_1$ where $E_1$ may be either positive or negative as mentioned.
{ "domain": "physics.stackexchange", "id": 41771, "tags": "electromagnetic-induction" }
Does the existence of PH-complete problems relativize?
Question: The Baker-Gill-Solovay result showed that the P = NP question does not relativize, in the sense that no relativizing proof (insensitive to the presence of an oracle) can possibly settle the P = NP question. My question is: Is there a similar result for the question, "Does there exist a PH-complete problem?" An answer in the negative to this question would imply P != NP; an answer in the affirmative would be unlikely but interesting because it would mean that PH collapses to some level. I'm not sure, but I suspect that a TQBF oracle would lead PH to be equal to PSPACE, and thus to have a complete problem. In addition to being uncertain regarding this, I am curious as to whether or not there is an oracle relative to which PH provably does not have a complete problem. -Philip Answer: Yao showed, in 1985, that there exist oracles relative to which the Polynomial Hierarchy is infinite. Relative to such an oracle, there don't exist PH-complete problems. Also, you are right that with a TQBF oracle, PH equals PSPACE. In fact, even P = PSPACE in the presence of a TQBF oracle.
{ "domain": "cstheory.stackexchange", "id": 390, "tags": "cc.complexity-theory, complexity-classes" }
spawn_model fails in ROS groovy (Gazebo 1.3.0)
Question: Hello, A few days ago I switched from fuerte to groovy. The shipped gazebo version at this time was 1.2.5. Since everything was working fine (simulating robots, spawning models, etc.) I decided to install groovy on 2 other machines. On those machines the gazebo simulator has version 1.3.0 and it is not possible to spawn any models via "rosrun gazebo spawn_model ...". When using files with SDF version 1.3 I always get the following error: [ERROR] [1359494067.230314246]: GazeboRosApiPlugin SpawnModel Failure: input xml format not recognized I've made some experiments with really simple models and even older sdf versions. Nothing seems to work here. Any suggestions why this is not working anymore? Best regards, Tobi Originally posted by Tobi_von_T on Gazebo Answers with karma: 21 on 2013-01-29 Post score: 2 Answer: How to spawn a model: $ gzfactory spawn -f <sdf_file> There are a bunch of other options: $ gzfactory -h Originally posted by nkoenig with karma: 7676 on 2013-02-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 2984, "tags": "gazebo-1.3, groovy" }
Why is ephedrine optically active?
Question: An acyclic amine in which the three groups are different (plus a lone pair of electrons) is optically inactive, and its nitrogen is achiral, because of rapid pyramidal inversion. Does a similar inversion happen in ephedrine? If yes, doesn't the inversion cancel the optical activity of the chiral carbon? So why then is ephedrine optically active? Answer: Nitrogen will always perform nitrogen inversion if it is not configurationally fixed due to steric or electronic reasons. Fixing the configuration sterically could be achieved, for example, by a polycyclic compound as in 1 below. Fixing the configuration electronically is exemplified by the amide 2 (note that the amide is achiral; the nitrogen is forced into a planar configuration by amide resonance as shown). Figure 1: examples of nitrogen-containing compounds where nitrogen inversion is not possible. 1 is chiral, 2 is not. However, nitrogen inversion only concerns nitrogen atoms and prevents most nitrogens in organic compounds from being asymmetric even if they are attached to three different residues (including a lone pair). Ephedrine contains two additional asymmetric atoms, both of which are carbon atoms. These carbons cannot perform an inversion similar to nitrogen inversion as they are covalently bound to four different residues. As there is no plane or centre of symmetry transforming one of these asymmetric carbon atoms onto the other, this makes the molecule as a whole chiral. The effect of the asymmetric carbon atoms can be seen in the number of possible stereoisomers: both carbons can be either (R) or (S) configured, giving 4 possible isomers ((+)-ephedrine, (–)-ephedrine, (+)-pseudoephedrine and (–)-pseudoephedrine). If nitrogen inversion were not a thing, we would expect 8 stereoisomers, as then each of these 4 could occur with an (S) or an (R) configured nitrogen. However, only those four are known. 
For an abundance of clarity: nitrogen inversion only affects nitrogen atoms and only turns an (R)-nitrogen centre into an (S)-nitrogen centre; non-nitrogen centres are unaffected.
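The counting argument above can be sketched in a couple of lines (my own illustration): two configurationally stable stereocentres give 2^2 = 4 configurations, and a hypothetical configurationally stable nitrogen would double that to 2^3 = 8:

```python
# Counting sketch: each configurationally stable stereocentre doubles the count.
from itertools import product

carbons_only = list(product("RS", repeat=2))   # the 4 observed ephedrine isomers
with_fixed_n = list(product("RS", repeat=3))   # hypothetical: no N inversion
print(len(carbons_only), len(with_fixed_n))    # 4 8
```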
{ "domain": "chemistry.stackexchange", "id": 15035, "tags": "stereochemistry, chirality" }
Voting system for a website
Question: I am currently adding a voting system to my website. Below is my voting class. I feel that it could be simplified and that there is repeated code regarding the functions, although to the best of my knowledge this is the simplest I can make it. I would love to hear other implementation ideas and systems that have been used. <?php class Vote{ var $db; function __construct($db){ $this->db = $db; } function vote($user_id, $post_id, $type){ // check if vote already exists against user and post to prevent duplicates if($this->userVoteExists($user_id, $post_id)){ // retreive vote $vote = $this->getVote($user_id, $post_id); // check if the requested vote is the same to prevent needlessly deleting and inserting if($vote['type'] != $type){ if($vote){ // remove the vote if it exists $this->removeVote($user_id, $post_id); } // add the vote $this->addVote($user_id, $post_id, $type); } } } function userVoteExists($user_id, $post_id){ $query = $this->db->prepare("SELECT * FROM votes WHERE user_id = :user_id AND post_id = :post_id LIMIT 1"); $query->execute(array(':user_id' => $user_id, ':post_id' => $post_id)); return $query->rowCount(); } function getVote($user_id, $post_id){ $query = $this->db->prepare("SELECT * FROM votes WHERE user_id = :user_id AND post_id = :post_id LIMIT 1"); $query->execute(array(':user_id' => $user_id, ':post_id' => $post_id)); return $query->fetch(); } function removeVote($user_id, $post_id){ $query = $this->db->prepare("DELETE FROM votes WHERE user_id = :user_id AND post_id = :post_id"); $query->execute(array(':user_id' => $user_id, ':post_id' => $post_id)); } function addVote($user_id, $post_id, $type){ $timestamp = time(); $query = $this->db->prepare(" INSERT INTO votes ( post_id, user_id, type, timestamp ) VALUES ( :post_id, :user_id, :type, :timestamp )"); $query->execute(array( ':post_id' => $post_id, ':user_id' => $user_id, ':type' => $type, ':timestamp' => $timestamp )); } } ?> Answer: A Vote should not have access to the database. 
It should be a dumb 'entity' with setters, getters and a constructor to make the required object a valid one. You should take a look at the Repository and Data Mapper patterns. The point is, the object you want to save should not know how it is saved - meaning that you can change where you want to save its data in the future without touching the underlying entity. For an example implementation of this, check out Doctrine ORM. The Repository is the object that uses a data mapper to save data to the relevant data source. Change the data source, the repository object-api (how you save / retrieve data) stays the same, but how the data is saved changes. Pretty basic concept but really powerful, and not just limited to ORMs or PHP, either. Imagine entities in JavaScript with a data mapper for local/session storage, for example. The data mapper would have get, update, delete (etc) methods, and the repository would call the underlying data mapper (relying on an interface) to actually perform the saving / retrieving of data, which you can swap out at any time. Also, our vote needs to have its own id (unique, primary key) completely separate from every other attribute so it can be uniquely identified. Aside from that, all lowercase, underscores for spaces and prepared statements is a good thing. I'm not going to write your code for you, but effectively, you need to take a look at doing the following: Abstract out the thing that saves, updates and removes data from your persistence. This is your data mapper, it is stand-alone, and just takes data and maps it to your given persistence. This is the object that would have your instance of \PDO dependency injected (yes, typehint for it). Have a class that takes an instance of your data mapper that is responsible for saving your vote (a VoteRepository). You don't need an ORM, but you can simulate some of its useful abstraction to help with SoC. 
This class contains your object-specific stuff, which it forwards to its data mapper, with your methods like userHasVote(User $user, $voteId). You could probably get away with putting addVote(Vote $vote) and removeVote(Vote $vote) in your repository. This will require you to construct a new Vote object before you save it to the database (a Factory will help here), and your repository will give you a Vote object when you request one from the database. Remember, you're trying to use OOP here: everything is an object with a single responsibility. The logic to remove / add a vote depending on something I would personally put in a Voting component (name it something better), with this single responsibility within it. Also, read the difference between a component and a service. Responsibility-wise, you need to work upwards from the datasource, building a nice object API for each object without having it concern others (so they can work standalone). Learn how to document your code. Phpdoc is a standard everyone and their dog's framework are employing to help other developers (and automated documentation tools) understand what code does. You're not writing PHP 4 any more. var has no place in your code. Use the correct access modifier, as in protected $db. The same goes for your functions (methods): declare the ones you want others to consume (the public API) as public function name instead. So, all in all, a lot of refactoring to be done here. Well done for using \PDO and prepared statements though.
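As a language-agnostic sketch of the split being described (written in Python purely for brevity; every class and method name here is illustrative, not a prescription for the PHP code):

```python
# Dumb Vote entity, a data mapper that owns storage details, and a repository
# exposing the domain API. Swap the mapper and the repository never changes.
class Vote:
    """Dumb entity: state only, no knowledge of persistence."""
    def __init__(self, vote_id, user_id, post_id, vote_type):
        self.id = vote_id
        self.user_id = user_id
        self.post_id = post_id
        self.type = vote_type

class InMemoryVoteMapper:
    """Data mapper: replace with a PDO/SQL mapper without touching callers."""
    def __init__(self):
        self._rows = {}

    def insert(self, vote):
        self._rows[vote.id] = vote

    def delete(self, vote_id):
        self._rows.pop(vote_id, None)

    def find_by_user_and_post(self, user_id, post_id):
        return next((v for v in self._rows.values()
                     if v.user_id == user_id and v.post_id == post_id), None)

class VoteRepository:
    """Domain-facing API; forwards persistence work to its mapper."""
    def __init__(self, mapper):
        self._mapper = mapper

    def add_vote(self, vote):
        existing = self._mapper.find_by_user_and_post(vote.user_id, vote.post_id)
        if existing is not None and existing.type == vote.type:
            return                      # same vote already recorded, nothing to do
        if existing is not None:
            self._mapper.delete(existing.id)
        self._mapper.insert(vote)

    def user_has_vote(self, user_id, post_id):
        return self._mapper.find_by_user_and_post(user_id, post_id) is not None
```

The point to notice is that the repository's API never changes when the storage does; only the mapper is swapped.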
{ "domain": "codereview.stackexchange", "id": 12295, "tags": "php, mysql, pdo" }
Can we observe the same very old object more than once?
Question: Assuming the universe is curved and that it is possible to travel in a straight line and return to your starting point, it would make sense that, at a younger age when the universe was smaller, light from an object sent in opposite directions would eventually reach the opposite "pole". Do we see any possibilities of this? Seeing the same object from opposite sides, in opposite parts of the sky? Such an object would have to be very old, I know. Answer: We can't, because the universe isn't curved enough to allow light to travel far enough since the big bang. If the universe were more highly curved, this might be possible. And the objects that appeared "twice" might not be very distant - or only one of the images would be distant. It was even speculated that it might be possible to see the Milky Way and Andromeda as a distant pair of galaxies. There were also searches of the cosmic microwave background to investigate if parts of it could be different views of the same pattern. These turned up negative. So while "yes", if the universe had sufficient positive curvature it might be possible to see the same object twice, it turns out that the curvature of the universe is very close to zero, so we don't.
{ "domain": "astronomy.stackexchange", "id": 6588, "tags": "early-universe" }
Freefall into snow
Question: In the movie Frozen, the following dialogue takes place: Anna: "It's a hundred-foot drop." Kristoff: "It's two hundred." Anna: "Okay, what if we fall?" Kristoff: "There's 20 feet of fresh powder down there. It will be like landing on a pillow... Hopefully." Then they fall all the way to the bottom and survive. My question is this: would this actually be possible? My instinct tells me no, but I'm too awful at physics to back it up. Answer: As a very rude guess, fresh snow (see page vi) can have a density of $0.3 \ \mathrm{g/cm^3}$ and be compressed all the way to about the density of ice, $0.9\ \mathrm{ g/cm^3}$. Under perfect conditions you could see 13 feet of uniform deceleration when landing in 20 feet of snow, or about 4 meters. Going from $30\ \mathrm{m/s}$ to $0\ \mathrm{m/s}$ (as @Sean suggested in comments), the average speed during the stop is $15\ \mathrm{m/s}$, so you'd have $\frac{4\ \mathrm m}{15\ \mathrm{m/s}} \approx 0.27$ seconds to decelerate. The acceleration is $\frac{30\ \mathrm{m/s}}{0.27\ \mathrm{s}} \approx 112\ \mathrm{m/s^2}$. That's about: 11.5 g's of acceleration. Wikipedia lists 25g's as the point where serious injury/death can occur, and 215g's as the maximum a human has ever survived. So it seems plausible. But it should be noted that since the snow at the bottom is under a lot of pressure from the weight of the snow above, it's likely the density would not be $0.3\ \mathrm{g/cm^3}$ throughout. It would help that the force lasts only a fraction of a second. Edit: as pointed out in the comments, the force that the snow will exert could vary with its density. So initially, the force would be rather weak, and as you approach $0.9\ \mathrm{\frac{g}{cm^3}}$ that force would increase, probably exponentially. So the above answer is really a "best case scenario" when it comes to snow compressibility.
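Redoing the estimate in a few lines of Python (the densities and depths are the answer's rough figures, and the 30 m/s impact speed is the assumption attributed to the comments):

```python
# Uniform deceleration over the compressible part of the snow column.
G = 9.81                                   # m/s^2

depth_ft = 20.0
stop_ft = depth_ft * (1 - 0.3 / 0.9)       # snow compresses from 0.3 to 0.9 g/cm^3
stop_m = stop_ft * 0.3048                  # ~4 m of stopping distance

v = 30.0                                   # assumed impact speed, m/s
a = v ** 2 / (2 * stop_m)                  # kinematics: v^2 = 2*a*d
print(f"stopping distance ~{stop_m:.1f} m, deceleration ~{a / G:.1f} g")
```

The result lands around 11 g, comfortably below the ~25 g injury threshold quoted from Wikipedia.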
{ "domain": "physics.stackexchange", "id": 20096, "tags": "homework-and-exercises, newtonian-mechanics, free-fall, applied-physics, estimation" }
Conway life game implementation with scala
Question: I've tried to create an implementation that is optimal in terms of performance and memory consumption, but I've also tried to make it functional and written the Scala way. I'd like your comments on how to make it more idiomatic Scala: object LifeGame extends App { trait Matrix extends Iterable[(Int, Int)] { def apply(p: (Int, Int)): Boolean def update(p: (Int, Int), b: Boolean): Unit def newInstance: Matrix; } class SparseMapMatrix extends Matrix { type Point = (Int, Int) private var data = Set[(Int, Int)]() def apply(p: Point): Boolean = data.contains(p) def update(p: Point, b: Boolean) = if (b) data += p else data -= p def newInstance = new SparseMapMatrix() def iterator = data.iterator override def toString() = { val sb = new StringBuilder() def best(func: (Int, Int) => Int)(p1: Point, p2: Point) = (func(p1._1, p2._1), func(p1._2, p2._2)) val minBoundary = ((0, 0) /: iterator)(best(math.min)) val maxBoundary = ((0, 0) /: iterator)(best(math.max)) for (i <- minBoundary._1 to maxBoundary._1) { sb.append("\n") for (j <- minBoundary._2 to maxBoundary._2) { sb.append(if (this((i, j))) "x" else " ") } } sb.append("\nmin=%s, max=%s" format(minBoundary, maxBoundary)) sb.toString() } } object Engine { def apply(input: Matrix): Matrix = { val result = input.newInstance def block2D(pp: (Int, Int)): Seq[(Int, Int)] = for (ii <- block1D(pp._1); jj <- block1D(pp._2)) yield (ii, jj) val liveCells = for (p <- input.iterator.flatMap(block2D).toSet[(Int, Int)].par) yield { val offsets = block2D(p).filter(_ != p) val nn = offsets.map(p => input(p)).count(_ == true) case class State(l: Boolean, n: Int) val newValue = State(input(p), nn) match { case State(true, n) if n < 2 => false case State(true, 2) | State(true, 3) => true case State(true, n) if n > 3 => false case State(false, 3) => true case State(value, _) => value }; if(newValue) Some(p) else None } liveCells.seq.foreach { _ match { case Some(p:Point) => result(p) = true; case _ => ;}} result } } def block1D(i: Int) = i - 1 to i + 1 } Answer: The 
main-things I changed: Matrix is now completely immutable Code is more self-documenting (extract complex code to methods, etc.) I deleted the unnecessary inheritance I don't like apply-methods which are used as contains-methods -> name change I deleted parentheses in side-effect free methods Many syntax improvements The code: object LifeGame extends App { case class Point(x: Int = 0, y: Int = 0) object Matrix { def empty: Matrix = Matrix(Set.empty) } case class Matrix(private val data: Set[Point]) { def contains(p: Point): Boolean = data contains p def update(p: Point, b: Boolean) = if (b) copy(data+p) else copy(data-p) def iterator: Iterator[Point] = data.iterator override def toString = { def best(func: (Int, Int) => Int)(p1: Point, p2: Point) = Point(func(p1.x, p2.x), func(p1.y, p2.y)) val minBoundary = (Point() /: iterator) { best(math.min) } val maxBoundary = (Point() /: iterator) { best(math.max) } val sb = StringBuilder.newBuilder for (i <- minBoundary.x to maxBoundary.x) { sb.append("\n") for (j <- minBoundary.y to maxBoundary.y) { sb.append(if (this contains Point(i, j)) "x" else " ") } } sb.append("\nmin=%s, max=%s" format (minBoundary, maxBoundary)) sb.toString } } object Engine { def newCell: (Boolean, Int) => Boolean = { case (true, 0 | 1) => false case (true, 2 | 3) => true case (true, _) => false case (false, n) => n == 3 } def apply(input: Matrix): Matrix = { def block1D(i: Int) = Seq(i-1, i, i+1) def block2D(p: Point) = for (x <- block1D(p.x); y <- block1D(p.y)) yield Point(x, y) def points = (input.iterator flatMap block2D).toSet.par def calcCell(p: Point) = { val neighbours = block2D(p) filter { p != } val alive = neighbours map { input contains } count { true == } if (newCell(input contains p, alive)) Some(p) else None } val cells = (points map calcCell).seq collect { case Some(p) => p } (Matrix.empty /: cells) { (result, p) => result(p) = true } } } }
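Language aside, the core of both versions is the same sparse-set step: count the neighbours of every cell adjacent to a live cell, then apply the birth/survival rule. A minimal Python rendering of that step (illustrative only, not part of the review):

```python
from collections import Counter

def step(alive):
    """One Game of Life generation over a sparse set of live (x, y) cells."""
    # count live neighbours of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # born with exactly 3 neighbours, survive with 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(0, -1), (0, 0), (0, 1)}
assert step(blinker) == {(-1, 0), (0, 0), (1, 0)}   # rotates...
assert step(step(blinker)) == blinker               # ...and oscillates with period 2
```

The `Counter` plays the same role as the flatMap over neighbourhoods in both Scala versions.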
{ "domain": "codereview.stackexchange", "id": 722, "tags": "scala, game-of-life" }
reduction in image quality in ROS
Question: Hi, I am working on iball 12MP camera in ubuntu . The quality of image is good when i view the video using guvcview software. When i launch the camera.launch file and view raw image, the video quality is degrading badly in the raw image. Has anyone faced the same problem? I'm working on face detection and recognition program, the input video quality is very bad, since the video goes into ROS and OpenCV. The final output also degrades. So please post your views. my launch file <launch> <node name="uvc_cam_node" pkg="uvc_cam" type="uvc_cam_node" output="screen"> <param name="device" value="/dev/video0" /> <param name="width" value="640" /> <param name="height" value="480" /> <param name="frame_rate" value="60" /> </node> </launch> i have attatched 2 images from guvcview and uvc_cam driver below Originally posted by nandhini on ROS Answers with karma: 46 on 2012-03-22 Post score: 1 Original comments Comment by joq on 2012-03-22: What ROS camera driver are you using? What driver parameters? Comment by nandhini on 2012-03-22: i'm using the driver given in the link https://github.com/ericperko/uvc_cam Comment by Eric Perko on 2012-03-23: Please update your question to include the camera launch file you are using. When you view it in guvcview, what are the parameters set to (e.g. fps, width, height, auto_exposure) and are they the same in the camera launch file? Answer: Since you are not setting the camera parameters manually in the launch file, they are getting set to the default in the driver. Note that the driver's defaults are NOT initialized to the current settings on the camera (they are specified either in the driver code or in the UVCCam.cfg file), so whatever parameters you set in guvcview are not what is being set by uvc_cam. This could result in very dark images, as that is the desired behavior for the use case the driver forked for. 
See my answer to Logitech webcams very "dark" when using uvc_cam for details on how to "undarken" your image when using my uvc_cam driver. Your best bet is to find the set of parameters you like using the guvcview program, record them, set them with dynamic_reconfigure's reconfigure_gui while the uvc_cam driver is running (or in the launch file before starting up) and then any values you changed from default you should set in your launch file. Note that setting any unsupported modes in the reconfigure_gui may crash the driver or cause the camera to lock up and need power-cycled (Usually this is only if you try to change the resolution, as the reconfigure_gui doesn't wait to get both a width and height pair before reconfiguring the driver. I suggest setting the width and height in the launch file beforehand.) If your problem is not a very dark image, then I suggest you include both an image from while guvcview is running and an image of the same scene with the uvc_cam driver so that we can understand the type of degradation you are seeing. Originally posted by Eric Perko with karma: 8406 on 2012-03-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by nandhini on 2012-03-26: Eric, thank you. soon i will attatch those 2 images for u to compare... Comment by Eric Perko on 2012-03-27: From your launch file, you aren't setting things like saturation, gain or contrast. I believe the defaults for those result in crappy color images like the ones you see. Try using the reconfigure_gui to play with the values or just set them in the launch file to whatever it's set to in guvcview. Comment by nandhini on 2012-03-27: Thank you Eric! I will do Comment by nandhini on 2012-03-29: is it possible to launch stereo view in uvc_cam? Comment by Eric Perko on 2012-03-30: uvc_cam only supports streaming from one camera at a time, per instance of the driver. 
You could run two uvc_cam nodes, each connected to a different camera (with each pushed down into a namespace) to view two cameras at the same time. However, there wouldn't be any explicit synchronization.
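Following the accepted answer's advice, a launch file that pins the image parameters explicitly might look like this (a sketch: the names `contrast`, `saturation` and `gain` and the values shown are assumptions; confirm the exact parameter names your uvc_cam build exposes in reconfigure_gui before relying on them):

```xml
<launch>
  <node name="uvc_cam_node" pkg="uvc_cam" type="uvc_cam_node" output="screen">
    <param name="device" value="/dev/video0" />
    <param name="width" value="640" />
    <param name="height" value="480" />
    <param name="frame_rate" value="60" />
    <!-- hypothetical values: copy whatever guvcview / reconfigure_gui shows -->
    <param name="contrast" value="32" />
    <param name="saturation" value="28" />
    <param name="gain" value="64" />
  </node>
</launch>
```

Any parameter left out falls back to the driver's compiled-in default, not to whatever the camera was last set to, which is exactly the mismatch described above.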
{ "domain": "robotics.stackexchange", "id": 8681, "tags": "ros, uvc-cam, webcam" }
What are centromeres *really*?
Question: I've gathered that a centromere is a region* where the DNA is bundled up even tighter (around proteins different from histones) and chromatids are 'joined'. However, I'm still mostly in the dark regarding its physical structure and functioning. At what point during DNA replication is the centromere created, and how is it created? How does it hold the chromatids together (what are its components)? Specifically: When it is being created (presumably during cell division), how is the positioning of the centromere controlled? Which flags are used by the enzymes in the process of making the centromere to tell them that it is the right spot: a section of DNA that 'says' "Right part of centromere, to be attached to left part"? Secondly, after their condensation into chromosomes (e.g. during prophase), are the sister chromatids physically intertwined around each other** for the purpose of joining, or are they simply adjacent? Is there a 'loop' in the centromere slung over the adjacent chromatids to join them? If they are intertwined, how is this achieved during DNA replication, whilst the non-centromere parts of the sister chromatids are not intertwined? How does the centromere break down to allow the chromatids to separate (e.g. during meiosis II and anaphase)? On a somewhat unrelated note, what in the centromeres do the spindle fibres attach to, and how do the tips of the growing fibres notice it to head in its general direction? *(i.e. there is not a separate physical object dubbed 'the centromere', rather it is a collection of objects in a region) **(That is, considering no other molecules than the chromatids, if you were to pinch the top and bottom ends of the left and right chromatids and pull them apart, could you separate them without them locking together (basically, do they intersect knot-theoretically)?) Answer: There are many questions in your question. I'll try to answer each one pointwise.
Which flags are used by the enzymes in the process of making the centromere to tell them that it is the right spot There are some centromere associated repeats in the DNA which mark the site for centromere assembly. There is no particular consensus sequence of this repeat. However, this study says that in certain cases stable chromosomes are formed in the absence of centromeric repeats. are the sister chromatids physically intertwined around each other for the purpose of joining, or are they simply adjacent? They are joined by proteins called cohesins. Cohesins looks like rings which form around the sister chromatids. During anaphase, the anaphase promoting complex (APC) activates an enzyme called separase, which in turn degrades cohesin. what in the centromeres do the spindle fibres attach to, and how do the tips of the growing fibres notice it to head it its general direction? Centromeres serve as a site for the assembly of kinetochore. Kinetochore is a multi-protein complex which forms contact with the spindle fibres (specifically, K-fibres. Refer this previous post). An essential component of kinetochore is the motor protein dynein which makes the kitetochore to crawl along the spindle fibres, towards the pole. The wikipedia article on kinetochore is quite descriptive and you can refer that for details.
{ "domain": "biology.stackexchange", "id": 6397, "tags": "molecular-biology, chromosome, meiosis, mitosis" }
Continuity equation for compressible fluid
Question: A question is given as Consider a fluid of density $ \rho(x, y, z, t) $ which moves with velocity $v(x, y, z, t) $ without sources or sink. Show that $ \nabla \cdot \vec J + \frac{\partial \rho }{\partial t} = 0 ;$ where $ \vec J = \rho \vec v \hspace{0.5 cm}$ ( $\vec v$ being velocity of fluid and $ \rho $ density). In the solution it assumes, $ - \nabla \cdot \vec J $ is the change in $ \vec J$ within the volume element which should be equal to $ \frac{\partial \rho}{\partial t}$ (why equal?). I don't understand this part and I doubt if this question is correct. I think the question should be $$ \rho (\nabla \cdot \vec v) + \frac{\partial \rho}{\partial t} = 0$$ Is the question correct? If it's correct help me to understand the solution (if it's right) or please provide me correct answer. Thank you!! Answer: I) The continuity equation without sink and sources reads $$\frac{\partial \rho }{\partial t} + \nabla \cdot (\rho\vec v) ~=~ 0.$$ Hint on how to derive it: Establish first the integral form of the continuity equation for an arbitrary (sufficiently regular) 3D spatial integration region. Next use the definition of the 3D divergence to argue the differential form of the continuity equation. The continuity equation can be rewritten with the help of a material derivative $\frac{D \rho }{D t}$ as $$\frac{D \rho }{D t} + \rho \nabla \cdot \vec v ~=~ 0.$$ II) For an incompressible fluid, the density $\rho$ of a certain fluid parcel does not change as a function of time, $$\frac{D \rho }{D t}~=0.$$ If the density $\rho\neq 0$, an incompressible fluid then has a divergencefree flow $$\nabla \cdot \vec v~=~0.$$
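Following the answer's hint, the derivation can be written out; this is the standard argument, sketched here for completeness:

```latex
% Mass in a fixed region V changes only through flux across its boundary:
\frac{d}{dt}\int_V \rho \,\mathrm{d}V
  = -\oint_{\partial V} \rho\,\vec v\cdot \mathrm{d}\vec A
% Apply the divergence theorem to the right-hand side:
\int_V \left[\frac{\partial \rho}{\partial t}
  + \nabla\cdot(\rho\,\vec v)\right]\mathrm{d}V = 0
% Since V is arbitrary, the integrand must vanish pointwise:
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\vec v) = 0,
\qquad \vec J = \rho\,\vec v
```

The material-derivative form then follows from the product rule $\nabla\cdot(\rho\,\vec v) = \vec v\cdot\nabla\rho + \rho\,\nabla\cdot\vec v$.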
{ "domain": "physics.stackexchange", "id": 3895, "tags": "fluid-dynamics, conservation-laws, definition, flow, continuum-mechanics" }
Obtain Helmholtz energy from entropy and quasistatic work function
Question: I have a N particle system for which I know the entropy as a function of temperature T and the quasistatic work as a function of V. From this I should compute the a)Helmholtz free energy b)and then out of this the pressure c)And last the work done under any temperature The work done under quasistatic expansion from $V_{0}$ to $V$ ($V_{0}<V$) at fixed temperature $T_{0}$: $$ \Delta W = Nk_{b}T_{0}ln\left( \frac{V}{V_{0}} \right) $$ And the entropy is given by: $$ S=Nk_{b}\frac{V_{0}}{V}\left ( \frac{T}{T_{0}} \right )^{a} $$ with $a=const$,$V_{0}=const$ and $T_{0}=const$ for the entropy equation. To start with I would use $$ S=-\frac{\partial F}{\partial T} $$ by integration I obtain: $$ F(T,V,N)=-\frac{Nk_{b}V_{0}}{(a+1)VT_{0}^{a}}T^{a+1}+f(V) $$ Since I know the work done I can just insert the given work function for f(V) $$ F(T,V,N)=-\frac{Nk_{b}V_{0}}{(a+1)VT_{0}^{a}}T^{a+1}+Nk_{b}T_{0}ln\left( \frac{V}{V_{0}} \right) $$ The pressure is given by the derivative of F with respect to V $$ P(V,T,N)=-\frac{\partial F}{\partial V}=-\frac{Nk_{b}V_{0}}{(a+1)V^{2}T_{0}^{a}}-\frac{Nk_{b}T_{0}}{V} $$ This would result in a negative pressure what does not make any sense but i can't find my error Answer: The equation $$ F(T,V,N)=-\frac{Nk_{b}V_{0}}{(a+1)VT_{0}^{a}}T^{a+1}+f(V) $$is correct. 
The pressure is given by: $$P(T,V,N)=-\frac{\partial F}{\partial V}=-\frac{Nk_{b}V_{0}}{(a+1)V^2T_{0}^{a}}T^{a+1}-\frac{df}{dV}$$From the quasistatic work equation at constant temperature $T_0$, we know that: $$P(T_0,V,N)=\frac{Nk_bT_0}{V}$$Therefore,$$P(T_0,V,N)=\frac{Nk_bT_0}{V}=-\frac{Nk_{b}V_{0}T_0}{(a+1)V^2}-\frac{df}{dV}$$ Just integrate this ODE to get f(V) ADDENDUM From the previous equation, it follows that $$\frac{df}{dV}=-\frac{Nk_bT_0}{V}-\frac{Nk_{b}V_{0}T_0}{(a+1)V^2}$$ If I substitute this into the equation for the pressure, I obtain: $$P(T,V,N)=\frac{Nk_{b}V_{0}T_0}{(a+1)V^2}\left[1-\left(\frac{T}{T_0}\right)^{a+1}\right]+\frac{Nk_bT_0}{V}$$ If I integrate the differential equation for f, I obtain: $$f=-Nk_bT_0\ln{(V/V_0)}+\frac{Nk_bT_0}{(a+1)}\frac{V_0}{V}$$ So, $$ F(T,V,N)=-Nk_bT_0\ln{(V/V_0)}+\frac{Nk_bT_0}{(a+1)}\frac{V_0}{V}\left[1-\left(\frac{T}{T_0}\right)^{a+1}\right] $$
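As a numerical sanity check on the addendum (a sketch with arbitrary illustrative values $N = k_b = T_0 = V_0 = 1$ and $a = 2$, not part of the original answer), central differences of the final $F$ reproduce both the given entropy and the derived pressure:

```python
import math

N = KB = T0 = V0 = 1.0   # illustrative units
A = 2.0                  # illustrative exponent a

def F(T, V):
    # F(T, V, N) from the answer above
    return (-N * KB * T0 * math.log(V / V0)
            + N * KB * T0 / (A + 1) * (V0 / V) * (1 - (T / T0) ** (A + 1)))

def S(T, V):
    # the given entropy S = N k_b (V0/V) (T/T0)^a
    return N * KB * (V0 / V) * (T / T0) ** A

def P(T, V):
    # the pressure derived in the answer
    return (N * KB * V0 * T0 / ((A + 1) * V ** 2) * (1 - (T / T0) ** (A + 1))
            + N * KB * T0 / V)

h, T, V = 1e-6, 1.3, 0.7
assert abs(-(F(T + h, V) - F(T - h, V)) / (2 * h) - S(T, V)) < 1e-6  # S = -dF/dT
assert abs(-(F(T, V + h) - F(T, V - h)) / (2 * h) - P(T, V)) < 1e-6  # P = -dF/dV
```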
{ "domain": "physics.stackexchange", "id": 52380, "tags": "thermodynamics" }
Can physical absorption happen despite the formation of products?
Question: As far as I know, the absorption of $\text{SO}_2$ in water goes under physical absorption, meaning the sulfur dioxide bubbles are held in place by Van der Waals forces, in this case Keesom forces, since both water and sulfur dioxide molecules are dipoles. This table classifies the absorption of sulfur dioxide in water as physical absorption, taken from this site. However, sulfur dioxide does create products (within an equilibrium) with water. $$\ce{SO2 + H2O ⇌ HSO3^- + H3O+}$$ and $$\ce{SO2 + OH- ⇌ HSO3^-}$$ Now, with chemical absorption, the absorbate is held within the absorbent due to interfacial chemical bonding. Could it be that these products don't aid with the absorption at all? Or, could it be that they aid with the absorption in conjunction with, but perhaps to a lesser degree than, the Keesom forces, thus making the absorption more physical than chemical? If so, is this perhaps the norm? Or, is this absorption actually a chemical one? Answer: When you consider the dissolution of a gas like $\ce{SO3}$ , $\ce{CO2}$, $\ce{SO2}$ etc. there are three equilibria existing at the same time: $$\ce{SO2(g)\overset{H2O}{<=>}SO2(aq)\overset{H2O}{<=>}H2SO3(aq)<=>H+(aq) +HSO3-(aq)}$$ The first equilibrium is the dissolution equilibrium of the gas, the second is the hydration equilibrium of the dissolved gas, and the third is the dissociation equilibrium of the acid. The pKa associated with the first deprotonation of $\ce{H2SO3}$ is 1.81 from Wikipedia, so you can pretty much consider the last equilibrium as completely dissociated in dilute solutions. You can consider $\ce{SO2(aq)}$ as the state where the $\ce{SO2}$ molecules are physically absorbed in the solution, the rest exist as $\ce{H+(aq)}$ and $\ce{HSO3-(aq)}$. How much is physcially absorbed and how much is chemically dissociated would depend on the particular compound. Unfortunately, I cannot find any data on $\ce{SO2}$. 
However, in case of dissolving $\ce{CO2}$ in water, 99% exist as $\ce{CO2(aq)}$ i.e. the physically absorbed gas, and 1% as $\ce{H2CO3(aq)}$, $\ce{H+(aq)}$ and $\ce{CO3^2-(aq)}$. (reference) So, the absorption of $\ce{SO2}$ is partly chemical and partly physical, but note that all of these reactions are reversible. Also keep in mind that the distinction between physical and chemical change is not clear-cut, and gets less and less useful the more you delve into chemistry. The website you mention in the question gets a lot of other things wrong, for example, dissolution of $\ce{SO3}$ in water gets you sulfuric acid, which is quite stable and hygroscopic, so you would be hard pressed to reverse it, but the table lists it as physical absorption. Whereas chlorine does not dissolve that well in water, and you can drive it off by heating, but it's listed as a reversible chemical change. It doesn't make much sense to me, so I would advise you to get a more reliable source of information.
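A rough illustration of "pretty much completely dissociated in dilute solutions" (the concentration 0.01 mol/L below is an arbitrary assumption for the sake of the example):

```python
import math

Ka = 10 ** -1.81   # from pKa = 1.81 for the first deprotonation
c = 0.01           # assumed total dissolved SO2, mol/L

# Ka = x^2 / (c - x)  ->  x^2 + Ka*x - Ka*c = 0, take the positive root
x = (-Ka + math.sqrt(Ka ** 2 + 4 * Ka * c)) / 2
fraction_dissociated = x / c
print(f"{fraction_dissociated:.0%} dissociated at {c} mol/L")  # 69% dissociated at 0.01 mol/L
```

So even at this dilution, the majority of the dissolved gas exists as ions rather than as physically absorbed $\ce{SO2(aq)}$, in line with the point that how much is physical and how much is chemical depends on the compound (and on concentration).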
{ "domain": "chemistry.stackexchange", "id": 15622, "tags": "physical-chemistry, equilibrium, absorption" }
Resistance D-C circuit
Question: This question seems simple, but gives two plausible answers: On your first day at work as an electrical technician, you are asked to determine the resistance per meter of a long piece of wire. The company you work for is poorly equipped. You find a battery, a voltmeter, and an ammeter, but no meter for directly measuring resistance (an ohmmeter). You put the leads from the voltmeter across the terminals of the battery, and the meter reads 12.5 V. You cut off a 20.0-m length of wire and connect it to the battery, with an ammeter in series with it to measure the current in the wire. The ammeter reads 7.00 A. You then cut off a 40.0-m length of wire and connect it to the battery, again with the ammeter in series to measure the current. The ammeter reads 4.40 A. Even though the equipment you have available to you is limited, your boss assures you of its high quality: The ammeter has very small resistance, and the voltmeter has very large resistance. From this I took the voltage of the battery to be 12.5 V, and using the relationship V=IR, I found the resistance of the wire per meter to be 0.089 Ω/m using the first set of data, and 0.071 Ω/m using the second set. Which, if any, is right? Answer: This feels like a homework question, so let me give you a hint first: You actually need both pieces of data to determine the resistance of the ammeter and the internal resistance of the battery. This resistance is in series with the resistance of the wire. Assuming that it is constant, two data points are sufficient to solve for the resistance of just the wire (two equations with two unknowns). Putting the resistance of battery plus ammeter = $R_i$, and the resistance of the external wire $R_1$ and $R_2$ with measured currents $I_1$ and $I_2$, the equations you have to solve are $$V = I_1\left(R_i + R_1\right)\\ V = I_2\left(R_i+R_2\right)$$ With a little manipulation, you will find that $R_i$ drops out, and you get an expression for $R_2-R_1$.
Which gives you the resistance of 20 m of wire. Note - the ammeter is said to have low resistance, but no mention was made of the voltage across the terminals of the battery with the wire connected across... An ohmmeter does exactly that - it measures both the voltage across the load, and the current through it. If you set up your experiment that way (rather than measuring the unloaded voltage of the battery), then the result would follow more obviously. Apparently, although "the equipment is of high quality", the process used for measuring could be improved...
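Carrying the hint through numerically (a sketch; it uses the additional fact that 40.0 m of the same wire has twice the resistance of 20.0 m, so $R_2 = 2R_1$):

```python
V, I1, I2 = 12.5, 7.00, 4.40   # battery voltage and the two ammeter readings

# V = I1*(Ri + R1) and V = I2*(Ri + 2*R1); subtracting eliminates Ri:
# V/I2 - V/I1 = (Ri + 2*R1) - (Ri + R1) = R1
R1 = V / I2 - V / I1           # resistance of the 20.0 m length
Ri = V / I1 - R1               # battery + ammeter resistance, for reference

print(round(R1 / 20.0, 4))     # 0.0528  (ohm per metre)
print(round(Ri, 2))            # 0.73    (ohm)
```

Neither of the two candidate answers is right: both ignore the series resistance $R_i$, which here is comparable to the wire resistance itself.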
{ "domain": "physics.stackexchange", "id": 29010, "tags": "homework-and-exercises, electric-circuits, electrical-resistance" }
Grey "skin" on gallium
Question: I have a sample of gallium (a few grams). I've noticed every time I melt the gallium and refreeze it, it forms this grey skin on the outside. When I remelt the gallium, this portion doesn't melt and remains a dull grey color. I have skimmed it off every time and collected it. I looked it up and gallium doesn't seem to create an oxide or other compound unless under extreme conditions. I have noticed this when melting it in a plastic bag, a glass beaker, and a plastic container. I have been using hot water to melt it, but the water doesn't contact the gallium, only the container. It seems I am slowly losing gallium from my sample and have a small amount of this grey material saved in case it can be recovered. Does anyone know what this is and how it can be reverted to elemental gallium? Answer: I have no real knowledge of it, and I've found almost no literature on its handling and storage. From Scientific American 1878:"Unlike lead, however, it acquires only a very slight tarnish on exposure to moist air..." and "In aerated water it tarnishes slightly." My very very old copy of Cotton and Wilkinson (Adv. Inorg. Chem) states that it is a moderately reactive metal and will react with sulfur. And of course we know it readily alloys (I'd say "amalgamates" except that term is specific for mercury alloys) with a large variety of metals. The (III) oxide is white, so unless your scum is white, it ain't that. The (I, sub) oxide is brown-black so that's a possibility. As are mixed oxide /hydroxides. Since the original investigation says it does tarnish in air, I'd take that at face value. There is the obvious possibility that your sample is either contaminated or was sold adulterated. If it's been in contact with any other metal (especially Aluminum) then it is almost certainly contaminated. There's no practical way you'll be able to recover the metal - the expense will far exceed the price to simply replace it. 
The best way to check for any large contamination is to determine its melting point; it should be very close to the stated value, not off by even 1 °C.
{ "domain": "chemistry.stackexchange", "id": 8702, "tags": "home-experiment" }
How do variational autoencoders actually work in comparison to GANs?
Question: I want to know how variational autoencoders work. I am currently working at a company and we want to incorporate variational autoencoders for creating synthetic data. I have questions regarding this method, though: is this the only way to generate synthetic or artificial data? Is there a difference between VAEs and GANs, and is one preferred over the other? I am also not a person with a lot of mathematical background and am a bit wary of implementing it. Finally, I have gone through many links and videos on the implementation in PyTorch and TensorFlow. Are both similar in implementation? I went through this link: https://www.youtube.com/watch?v=9zKuYvjFFS8&ab_channel=ArxivInsights However, I still haven't fully grasped a simple way to implement this technique. Any help with understanding and its implementation would be greatly appreciated. Answer: VAEs were a hot topic some years ago. They were known to generate somewhat blurry images and sometimes suffered from posterior collapse (the decoder part ignores the bottleneck). These problems improved with refinements. Basically, they are normal autoencoders (minimize the difference between the input image and output image) with an extra loss term to force the bottleneck into a normal distribution. GANs also became popular a few years ago. They are known for being difficult to train due to their non-stationary training regime. Also, the quality of the output varies, including suffering the problem of mode collapse (always generating the same image). They consist of two networks: generator and discriminator, where the generator generates images and the discriminator tells if some image is fake (i.e. generated by the generator) or real. The generator learns to generate by training to deceive the discriminator. Nowadays the hot topic is diffusion models. They are the type of models behind the renowned image-generation products Midjourney and DALL-E.
They work by adding random noise to an image until it is nothing but noise, and then learning how to reverse that process and recover the image; you can then generate images directly from noise.
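The "extra loss term" mentioned above has a simple closed form when the encoder outputs a diagonal Gaussian. A framework-free sketch of just that term (it omits the reconstruction loss and the reparameterisation trick, which a real VAE also needs):

```python
import math

def kl_to_standard_normal(mu: float, log_var: float) -> float:
    """KL( N(mu, sigma^2) || N(0, 1) ) for one latent dimension,
    with log_var = log(sigma^2). This is the term a VAE adds to the
    reconstruction loss to force the bottleneck toward N(0, 1)."""
    return 0.5 * (mu ** 2 + math.exp(log_var) - 1.0 - log_var)

assert kl_to_standard_normal(0.0, 0.0) == 0.0   # already standard normal: no penalty
assert kl_to_standard_normal(1.0, 0.0) > 0.0    # shifted mean is penalised
assert kl_to_standard_normal(0.0, 1.0) > 0.0    # wrong variance is penalised
```

In practice this is summed over latent dimensions and averaged over the batch, whatever framework you use.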
{ "domain": "datascience.stackexchange", "id": 11727, "tags": "deep-learning, autoencoder, vae" }
Are all molecular structures symmetric?
Question: Are all molecular structures symmetric, either in relation to a plane within themselves or in relation to other molecules? Are there any completely asymmetrical structures when looking at molecular geometry? Are there any studies into the reason for symmetry? Answer: There are a couple of notable types of structurally asymmetric molecules. Firstly, those that are chiral due to different ligands attached to a central atom, such as the improbable $\ce{CClFBrI}$ or 'bromo-chloro-fluoro-iodo-methane'. More generally, many molecules have chiral centers; sometimes multiple ones, like inositol, which has 6 in a 6-carbon molecule. Secondly, there are rarer structures where there are no chiral centers, but the whole thing is chiral. Usually this is because of steric restrictions that force it into one shape or another. Helicene is a very nice example. Another might be a cage hydrocarbon in the shape of this graph, but I don't know if any exist.
{ "domain": "chemistry.stackexchange", "id": 3589, "tags": "molecules, symmetry" }
Why do harmonic components appear only after a certain level when a signal is clipped?
Question: I recently observed this phenomenon that when a signal is clipped the harmonics start to appear only after a certain level. The Python code to reproduce the effect is given below. The signal has 3 components at 30, 60 and 90 Hz. So a maximum possible peak to peak voltage will be 3. I clipped the signal at various levels and observed the magnitude spectrum but the harmonics start to visibly noticable only if the clipping goes below -2.5 and 2.5. Is this a well-known phenomenon? What is the possible explanation behind this?. import numpy as np from scipy import signal import matplotlib.pyplot as plt from scipy.io import wavfile labelsize = 12 width = 3.5 height = width / 1.618 lwidth = 0.9 plt.rc('font', family='serif') plt.rc('text', usetex=True) plt.rc('xtick', labelsize=labelsize) plt.rc('ytick', labelsize=labelsize) plt.rc('axes', labelsize=labelsize) def MagnitudeSpectrum(data, Fs, clip): P = 20*np.log10(np.abs(np.fft.rfft(data))) f = np.linspace(0, Fs/2, len(P)) fig, ax = plt.subplots() fig.subplots_adjust(left=.20, bottom=.25, right=.96, top=.90) plt.plot(f, P, color='k', ls='solid', linewidth=lwidth, label='') plt.xlim(0, 500) plt.title('Signal Clipped BW '+str(-1*clip)+' and'+str(clip)) plt.xlabel('Frequency [Hz]') plt.ylabel('Magnitude [dB]') # plt.show() fig.set_size_inches(width, height) fig.savefig('SignalMagSpectro_'+str(clip)+'.png', dpi = 600) cliplevels = np.arange(2.2, 3.2, 0.1) for clip in cliplevels: Fs = 44100.0 t = np.arange(0, 10, 1/Fs) data = np.sin(2*np.pi*30*t)+np.sin(2*np.pi*60*t)+ np.sin(2*np.pi*90*t) +0.1*np.random.randn(len(t)) data = np.clip(data, -1*clip, clip) MagnitudeSpectrum(data, Fs, clip) Answer: Is this a well-known phenomenon? Yes, of course. You will see harmonics as soon as your clip point is lower than the maximum amplitude in the time domain. The latter is a function of the relative phases between the harmonic components. In your case the max amplitude is indeed 2.5 (plus whatever the noise adds). 
If you change the phases you will get a different clip point. For example, if you use cosine instead of sine, you will get a clip point of 3.0.
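The phase claim is easy to verify with a stdlib-only sketch of the signal from the question (noise omitted):

```python
import math

def peak(trig, fs=44100):
    """Peak |trig(2*pi*30*t) + trig(2*pi*60*t) + trig(2*pi*90*t)|
    over one second sampled at fs."""
    return max(abs(trig(2 * math.pi * 30 * k / fs)
                   + trig(2 * math.pi * 60 * k / fs)
                   + trig(2 * math.pi * 90 * k / fs))
               for k in range(fs))

print(round(peak(math.sin), 2))   # 2.5
print(round(peak(math.cos), 2))   # 3.0
```

With the sine phases the composite signal never exceeds about 2.5, so clipping at ±2.5 or beyond barely touches the waveform and generates almost no harmonics; with cosine phases the same three components peak at exactly 3.0.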
{ "domain": "dsp.stackexchange", "id": 6177, "tags": "frequency-spectrum, quantization, magnitude, non-linear, nonharmonic" }
Complexity of finding approximate solutions for systems of polynomial equations
Question: Consider the following problem: Input: $(p_1,...,p_n, \epsilon)$ where each $p_i$ is a polynomial in $m$ variables with integer coefficients and $\epsilon>0$. Output: If there is $(r_1,...,r_m) \in \mathbb{Q}^m$ such that $|p_i(r_1,...,r_m)|<\epsilon$ for all $i$, then output such a tuple. Otherwise, output None. My question: What's known about the complexity of this problem? It's well known that the above problem is difficult when we ask for exact solutions. Answer: The problem is complete for the existential theory of the reals ($\exists\mathbb{R}$). This implies that the problem is NP-hard and can be decided in PSPACE, and there are consequences for the precision of a solution (for more info on $\exists\mathbb{R}$ see https://en.wikipedia.org/wiki/Existential_theory_of_the_reals). In comparison, the exact version is complete for the existential theory of the rationals ($\exists\mathbb{Q}$), more or less by definition. $\exists\mathbb{Q}$ is at least as hard as $\exists\mathbb{R}$, but not known to be decidable. Sketch of proof that the approximate version of the problem is $\exists\mathbb{R}$-hard for $\varepsilon = 1$: Since the solution set is open, we can change the quantifiers to range over $\mathbb{R}$ rather than $\mathbb{Q}$; this change leads to a logically equivalent formula. Testing whether a polynomial $p(x)$ has a real (exact) solution $x\in \mathbb{R}^n$ inside the unit ball is $\exists\mathbb{R}$-complete (this can be shown somewhat like Theorem 5.1 in https://link.springer.com/article/10.1007/s00224-015-9662-0). Testing $p(x) = 0$ for $\Vert x \Vert < 1$ is equivalent to $|p(x)| < 2^{-2^m}$ for some $\Vert x \Vert < 1$, where $m$ is (roughly) the description length of $p$ (the number of bits needed to write down $p$). E.g. see Corollary 3.4 in the paper referenced above in (2). This is then equivalent to $2^{2^m}|p(x)| < 1$ for some $\Vert x \Vert < 1$. 
Assuming we can compute $c > 2^{2^m}$, the remaining conditions can all be expressed using polynomial conditions of the form $<1$. We are left with showing that we can compute a number $c > 2^{2^{m}}$ using polynomial conditions of the form allowed. We do this using repeated squaring and adding new variables. E.g. $|y_1-3|<1$, $|y_2-(y_1^2+1)| < 1$, etc. Then $y_i > 2^{2^i}$, so $m$ such equations are sufficient to build the constant we need.
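The doubly-exponential growth of that gadget can be illustrated concretely (the indexing below is one possible convention; the point is only the growth rate per constraint):

```python
def guaranteed_lower_bound(m):
    """Lower bound forced on y_m by m strict-inequality constraints."""
    y = 2  # |y1 - 3| < 1 forces y1 > 2
    for _ in range(m - 1):
        y = y * y  # |y_{i+1} - (y_i^2 + 1)| < 1 forces y_{i+1} > y_i^2
    return y

# doubly exponential in the number of constraints:
for m in range(1, 7):
    assert guaranteed_lower_bound(m) == 2 ** (2 ** (m - 1))
```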
{ "domain": "cstheory.stackexchange", "id": 5464, "tags": "cc.complexity-theory, polynomials, algebra" }
Why are low spin tetrahedral complexes so rare?
Question: As I was going through Concise Inorganic Chemistry by J. D. Lee, I realised that there are simply no low spin tetrahedral complexes mentioned in the book. Is there any specific condition required for the formation of such a complex? Usually low-spin complexes are in $\mathrm{dsp^2}$ electronic configuration. But can this kind of orbital form a tetrahedral geometry? Because a tetrahedron is usually synonymous with $\mathrm{sp^3}$. Answer: Hybridisation won't explain anything in transition metal complexes, so please stop using it, at least to the extent where it is possible to avoid using it. Quite literally everything about transition metal complexes is better rationalised using MO theory, and I am not exaggerating. The reason why low-spin $T_\mathrm d$ complexes are rare is that the splitting parameter, $\Delta_t$, is significantly smaller than the corresponding octahedral parameter $\Delta_o$. In crystal field theory, there is a complicated derivation which leads to the conclusion that (all things being equal) $$\Delta_t = \frac{4}{9}\Delta_o$$ For more information, please see: Why do tetrahedral complexes have approximately 4/9 the field split of octahedral complexes? and Why do octahedral metal ligand complexes have greater splitting than tetrahedral complexes?. Of course, this relationship is not exact in the real world, because CFT is a very simplified model; ligands are not point charges. However, it is still true in a qualitative sense. Since the splitting $\Delta_t$ is smaller, it is usually easier to promote an electron to the higher-energy $\mathrm t_2$ orbitals, rather than to pair the electrons up in the lower-energy $\mathrm e$ orbitals. Consequently, most tetrahedral complexes, especially those of the first-row transition metals, are high-spin. Low-spin ones do exist (e.g. J. Chem. Soc., Chem. Commun. 1986, 1491), but aren't common.
{ "domain": "chemistry.stackexchange", "id": 11964, "tags": "inorganic-chemistry, coordination-compounds" }
Will the JWST be affected by dust at L2 (gegenschein?)
Question: Gegenschein is a "faint brightening of the night sky" at the anti-solar point. A naked eye limiting magnitude of about 7.6 might enable an observer to make out gegenschein. The Wikipedia article on gegenschein suggests that interplanetary dust at the Sun-Earth L2 point might be responsible for gegenschein. The James Webb Space Telescope will also be at the Sun-Earth L2 point. Does this imply that the same interplanetary dust that may be responsible for gegenschein will affect the JWST? Answer: The Wikipedia article cites "Zdeněk (1962)" for the statement that the dust responsible for the Gegenschein has a possible concentration at L2. I haven't been able to obtain that paper, but I can't really see why that would be the case, since L2 is not dynamically stable. However, the dust consists of millimeter-sized grains (see e.g. this APOD image), which is quite large for dust. Such large grains probably have a highly anisotropic "phase function", with a high preference for backscattering. Since when you look from Earth toward L2 you have the Sun at your back, you will thus see an increased brightness compared to other directions, even though the density of that dust is the same. If this is the case, then placing JWST at L2 is no problem (except when it looks exactly away from the Sun, but that would be a problem anywhere in the ecliptic).
{ "domain": "astronomy.stackexchange", "id": 2163, "tags": "space-telescope, lagrange-point" }
The Partition Function of $0$-Dimensional $\phi^{4}$ Theory
Question: My question is related to this question. Several years ago, I posted an answer to the question, and the author of the reference removed the link permanently, so now I have no clue what's going on. In the so-called zero-dimensional QFT, one can compute the path-integral $$\mathcal{Z}=\int_{-\infty}^{+\infty}dx e^{-x^{2}-\lambda x^{4}}$$ by Mathematica, and the result is $$\mathcal{Z}=\frac{e^{\frac{1}{8\lambda}}K_{\frac{1}{4}}(\frac{1}{8\lambda})}{2\sqrt{\lambda}}$$ for $\mathrm{Re}(\lambda)>0$, where $K_{n}(x)$ is the modified Bessel function of the second kind. The question is how to show it by hand. The answer appears in one of the lecture notes which I shared there, but it has been permanently removed by the author. The idea was as follows: The partition function is given by $$\mathcal{Z}(\hbar)=\int dx e^{-\frac{1}{\hbar}S(x)}\equiv\int_{C}\frac{dz}{2\pi i}e^{-z/\hbar}B(z).$$ where $B(z)$ is the modified Borel transform, given by $$B(z)=-\int\prod_{i=1}^{N}dx_{i}\frac{\Gamma(\frac{N}{2}+1)}{(S(x)-z)^{\frac{N}{2}+1}}. \tag{$\star$}$$ The contour $C$ encloses the range of $S(x)$. This is the only thing I could remember from the lecture notes, and I have no clue about what the modified Borel transform is, and have no idea how to compute that $N$-dimensional integral. Does anyone know what's going on with equation $(\star)$? Please help me figure out the rest of the steps to obtain the result. Answer: I'll write what I understand from Xi Yin's notes, of which there is a copy on the wayback machine. Probably I miss some details. We would like to evaluate $$Z = \int dx e^{- S[x]} = \int dx e^{-(\mu x^2 + g x^4)}, \quad S[x] := \mu x^2 + g x^4$$ In the regime $\mu > 0$, you have a single minimum of the potential at $x = 0$ that you can expand around, and for $\mu < 0$ you have a choice between two minima (as well as kink instanton solutions, that will be important for later). 
If you try to do perturbation theory around $g = 0$: $$\mu > 0 : \quad Z = \sqrt{\frac{\pi}{\mu}}\left( 1 - \frac{3g}{4 \mu^2} + \frac{105g^2}{32\mu^4} - \cdots (-1)^n\frac{g^n(4n-1)!!}{\mu^{2n}2^{2n}n!}\cdots\right)$$ You notice that the radius of convergence, $r \sim \left(\frac{(4n-1)!!}{\mu^{2n}2^{2n}n!}\right) \big/ \left(\frac{(4n+3)!!}{\mu^{2n+2}2^{2n+2}(n+1)!}\right) = \frac{4\mu^2(n+1)}{(4n+3)(4n+1)} \to 0$ is zero, so the series doesn't converge. This is generic behaviour, (along the lines of an argument Dyson gave), $g < 0$ behaviour is very different to $g > 0$, so you don't really expect it to converge. A general approach to resumming such divergent series is the Borel Summation. I'll give a rough overview of what I understand of it, (which I learned roughly from Iain Stewarts EFT notes). Take a not-convergent asymptotic series $f$, and do the Borel transform (be careful of the strange indexing here, I'm just following convention) $$f(\alpha) = \sum_{n=-1}^\infty f_n \alpha^{n+1}, \quad F(b) := f_{-1} \delta(b) + \sum_{n = 0}^\infty \frac{1}{n!} f_n b^n$$ Notice that, being a bit lax, we see that we can recover $f$ by doing inverse borel transform: $$\int_0^\infty db e^{-b/\alpha} F(b) = \int_0^\infty db e^{-b/\alpha} \left( f_{-1} \delta(b) + \sum_{n = 0}^\infty \frac{1}{n!} f_n b^n \right) = \sum_{n=-1}^\infty f_n \alpha^{n+1} = f(\alpha)$$ The idea is that we improve the convergence properties of our series in our borel space since we get this $n!$ suppression of the coefficients, sum it up in borel space, and then transform back to regular space. 
Applying this idea to our series, we get that: $$F(b) = \sqrt{\frac{\pi}{\mu}}\left( \delta(b) + \sum_{n=0}^\infty (-1)^{n+1}\frac{b^n(4n+3)!!}{\mu^{2n+2}2^{2n+2}(n+1)!n!}\right) = \sqrt{\frac{\pi}{\mu}} \left( \delta(b) -\frac{3}{4 \mu^2} \ {}_2 F_1\left(\frac{5}{4},\frac{7}{4},2,-\frac{4b}{\mu^2}\right)\right)$$ At this point, notice that the extra $n!$ in the denominator has given us a nonzero radius of convergence, so we have some hope; you can stare at the series definition of the hypergeometric function and notice how to write it as a hypergeometric function. The final step is to do the reverse Borel transform - I am not very good at integrals - but you should recover: $$f(g) = \sqrt{\frac{\mu}{4g}} e^{\frac{\mu^2}{8 g}} K_{\frac{1}{4}}\left(\frac{\mu^2}{8 g}\right)$$ which reduces to the Bessel-function result above for $\mu = 1$, $g = \lambda$. The business about using modified Borel summations comes from a paper by Crutchfield, which I have not yet read through and understood - the idea though is that for $\mu < 0$, when you have instanton solutions, you expect to see a pole on the positive real $b$ axis in your Borel transformed function, signifying nonperturbative effects that you cannot resum. Crutchfield's modified Borel transformation should be a way to deal with this. At the current point, I am confused because I don't see the Borel pole right now - as I wrote it above, $F(b)$ (aside from an overall normalization factor) is a function of $\mu^2$, so I don't know how the pole arises when $\mu < 0$, but if I figure it out I will edit this answer (also anyone please help me)
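As a sanity check on the closed form (my own addition, not from the notes), one can compare $\mathcal{Z}=e^{1/(8\lambda)}K_{1/4}(1/(8\lambda))/(2\sqrt{\lambda})$ against direct quadrature of $\int e^{-x^2-\lambda x^4}\,dx$, using only the Python standard library. Here $K_\nu$ is evaluated from its integral representation $K_\nu(z)=\int_0^\infty e^{-z\cosh t}\cosh(\nu t)\,dt$.

```python
import math

def trapezoid(f, a, b, n):
    # Simple composite trapezoid rule; ample for these smooth, fast-decaying integrands.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def bessel_k(nu, z):
    # K_nu(z) = ∫_0^∞ exp(-z cosh t) cosh(nu t) dt; the tail underflows to 0 long before t = 20.
    return trapezoid(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(nu * t), 0.0, 20.0, 20000)

def z_closed(lam):
    # The Mathematica closed form quoted in the question.
    return math.exp(1 / (8 * lam)) * bessel_k(0.25, 1 / (8 * lam)) / (2 * math.sqrt(lam))

def z_direct(lam):
    # Direct quadrature of ∫ exp(-x^2 - lam x^4) dx; the integrand is negligible beyond |x| = 6.
    return trapezoid(lambda x: math.exp(-x * x - lam * x**4), -6.0, 6.0, 20000)

lam = 0.1
print(z_closed(lam), z_direct(lam))  # the two values agree to high accuracy
```

The agreement at generic $\lambda>0$ is a cheap check that no factor of $2$ or $\sqrt{\lambda}$ has been dropped along the way.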
{ "domain": "physics.stackexchange", "id": 88781, "tags": "quantum-field-theory, complex-numbers, calculus, partition-function" }
What's the key point to argue that pure gravity can't be renormalizable from two-loop?
Question: Gravity is not power-counting renormalizable in dimensions greater than two. It is known, from the work of Gerard 't Hooft and M.J.G. Veltman, that pure gravity in four dimensions is finite to first loop order, and that one-loop finiteness is spoiled by the coupling to matter. Moreover, four-dimensional gravity is not finite to second loop order, even in the absence of matter. Setting aside the tedious calculation, what is the key point or physical intuition behind the following: Pure gravity is finite to one loop. Gravity coupled to matter is not renormalizable even at one loop. Pure gravity is not renormalizable from two loops onward. And what are the important corollaries of these results? It's easy to prove GR is not power-counting renormalizable, but I'm trying to understand the above results. Answer: There is no key point, only tedious calculation. It is merely a coincidence. A theory can either be renormalisable, or non-renormalisable; and both scenarios are in principle conceivable. Power-counting renormalisability usually offers a strong hint towards deciding whether the theory is renormalisable or not, but this is not an infallible test1. Quantum gravity is not power-counting renormalisable, so in principle you should suspect that the theory is not renormalisable. But it may very well be the case that some miraculous cancellation of divergences comes to the rescue and makes the theory renormalisable after all. Indeed, the theory might contain some hidden symmetry that controls the possible divergences. The fact that the first few loop orders turn out to be finite is no proof that they are finite to any order. You either prove that they are (which is a very non-trivial task), or prove that they aren't (by finding an explicit counter-example). In the case of QG, the first order happens to be finite. There are other examples of theories that are one-loop finite but not to higher orders (e.g., naïve massive Yang-Mills, cf. this PSE post). 
One does not really need to explain the one-loop finiteness: it just sometimes happens, with no deep reason behind it. It turns out that in QG one may partially explain this phenomenon on grounds of the metric-independence of the Euler-Poincaré characteristic. Quoting DeWitt, Because of the metric independence of the Euler-Poincaré characteristic[2], terms quadratic in the full Riemann tensor, in the counter-term needed to cancel the pole term [in the one loop effective action], can be replaced by terms quadratic in the Ricci tensor and in the curvature scalar. The counter-term, thus modified, has the form \begin{equation} \begin{aligned} \Delta S&=\frac{1}{16\pi^2}\frac{1}{d-4}\int g^{1/2}\left(-\frac{429}{36}R^2+\frac{187}{90}R_{\mu\nu}R^{\mu\nu}\right)\mathrm d^4x\\ &=\int\frac{\delta S}{\delta g_{\mu\nu}}A_{\mu\nu}\ \mathrm d^4x \end{aligned}\tag{35.170} \end{equation} where \begin{equation} A_{\mu\nu}=-\frac{1}{16\pi^2\mu^2}\frac{1}{d-4}\left(\frac{187}{180}R_{\mu\nu}+\frac{979}{180}g_{\mu\nu}R\right) \end{equation} Equation $(35.170)$ has exactly the form $(25.90)$. As explained in chapter 25 the presence or absence of the counter-term is therefore irrelevant in the computation of the $S$-matrix, and pure quantum gravity is one-loop finite. This is an accident arising from the existence of the Euler-Poincaré characteristic and does not occur in higher orders. (Emphasis mine) For completeness, we sketch the proof of one-loop finiteness of vacuum quantum gravity. We mainly follow 0550-3213(86)90193-8 (§3.1). A simple power-counting analysis reveals that, to one loop, the most general counter-term reads $$ \Delta S^{(1)}=\int g^{1/2}(c_1R^2+c_2R^{ab}R_{ab}+c_3R^{abcd}R_{abcd})\ \mathrm d^4x $$ for some (formally divergent) constants $c_{1,2,3}$. The first two terms vanish on-shell (in vacuum), while the third in principle does not. 
But, using the fact that the Euler-Poincaré characteristic is topological (i.e., its integrand is a total derivative), we may write the $c_3$ term as a function of $R^2$ and $R^{ab}R_{ab}$. This in turn means that $$ \Delta S^{(1)}\overset{\mathrm{O.S.}}=\int g^{1/2}(\text{total derivative})\ \mathrm d^4x $$ which proves the one-loop finiteness of quantum gravity (recall that topological terms are invisible to perturbation theory). It is clear that this argument fails in the presence of matter, because the on-shell fields do not satisfy $R^{ab}=0$ anymore, and therefore $\Delta S^{(1)}$ is no longer a total derivative. In the case of two or more loops, the number of available invariants that can be constructed from the metric, and that may appear as counter-terms, is higher than in the one-loop case. Most of these invariants depend on $R^{abcd}$ rather than $R^{ab}$, and therefore they do not vanish on-shell. In fact, we have $$ \Delta S^{(2)}\overset{\mathrm{O.S.}}=c_4\int g^{1/2}R^{ab}{}_{cd}R^{cd}{}_{ef}R^{ef}{}_{ab}\ \mathrm d^4x $$ for some constant $c_4$. Here, there is no identity that relates this combination to a topological term and therefore, unless there is some fortuitous cancellation of divergences that leads to $c_4=0$, the two-loop counter-term Lagrangian is not expected to vanish on-shell. The explicit calculation proves that there is no such cancellation, and therefore quantum gravity is not two-loop finite. To reiterate, it could have been the case that quantum gravity is renormalisable after all. The one-loop finiteness can be established by simple power-counting arguments, but no such conclusion can be reached for higher loops. Thus, the only thing we can do is to go through the tedious calculation. Once we do this, we find that quantum gravity is not finite. Oh well. 
1: Take your favourite renormalisable theory and perform a non-linear field redefinition; the resulting theory has new terms that are not power-counting renormalisable, but the $S$ matrix remains the same (it is finite). 2: The Euler-Poincaré characteristic in four dimensions reads $$ \chi_4=\frac{1}{32\pi^2}\int g^{1/2}(R_{abcd}R^{abcd}-4R_{ab}R^{ab}+R^2)\ \mathrm d^4x $$
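A rough way to see why new invariants keep appearing (my own illustration, not from the answer): by naive power counting, an $L$-loop diagram in $d$-dimensional pure gravity has superficial degree of divergence $D=(d-2)L+2$, since every graviton vertex carries two derivatives. In $d=4$ the counter-term required at $L$ loops therefore has mass dimension $2L+2$, i.e. it is schematically of order $R^{L+1}$.

```python
# Naive power counting for pure gravity: D = (d - 2) * L + 2.
# At L loops in d = 4 the counterterm has mass dimension 2L + 2,
# matching the R^2-type terms at one loop and the R^3 term at two loops.
def divergence_degree(loops, d=4):
    return (d - 2) * loops + 2

for L in (1, 2, 3):
    D = divergence_degree(L)
    print(f"L={L}: D={D}, counterterm ~ R^{L + 1} (dimension {2 * (L + 1)})")
```

Note that $D$ is loop-independent only in $d=2$, which is the power-counting sense in which gravity is renormalizable in two dimensions, as stated at the top of the question.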
{ "domain": "physics.stackexchange", "id": 45644, "tags": "quantum-field-theory, general-relativity, renormalization, quantum-gravity, qft-in-curved-spacetime" }
"Wind turbine" Motor balance equations
Question: I have this model of a wind turbine with written balance equations. I don't really understand the equation in the second line, I would really appreciate it if you could tell me what it means exactly. Answer: The second equation $$(J_R+J_M)\dot{\omega}=M_R-k_Mi_G=f_{Rotor}(v_{wind},\omega)-k_Mi_G$$ is the mechanical equation of motion in terms of torque. The left-hand side $(J_R+J_M)\dot{\omega}$ is essentially the definition of torque ($\tau=J\dot{\omega}$) where $J$ is the moment of inertia (of rotor and generator) and $\dot{\omega}$ is the angular acceleration. The right-hand side ($M_R-k_Mi_G$) is the torque, having two components: $M_R$ is the torque which the wind applies to the rotor. $-k_Mi_G$ is the torque from the electromagnetic effect by the electric current $i_G$. The minus sign is there because of Lenz's law. (The current induced [...] is directed [...] to exert a mechanical force which opposes the motion.)
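A minimal forward-Euler sketch of this torque balance (my own illustration; the linear rotor-torque model and every parameter value are hypothetical stand-ins, not taken from the question's model):

```python
# Integrate (J_R + J_M) * domega/dt = M_R(omega) - k_M * i_G with forward Euler.
# M_R is a made-up linear stand-in for f_rotor(v_wind, omega).
def simulate(J=100.0, k_M=2.0, i_G=5.0, dt=0.05, steps=40000):
    omega = 0.0
    for _ in range(steps):
        M_R = 50.0 - 0.5 * omega          # hypothetical rotor torque
        omega += dt * (M_R - k_M * i_G) / J
    return omega

# Steady state is where the torques balance: M_R = k_M * i_G,
# i.e. 50 - 0.5 * omega = 10, so omega -> 80 rad/s.
print(simulate())  # settles near 80
```

The point of the sketch is the structure of the equation: the wind torque accelerates the shaft, the generator current $i_G$ brakes it (Lenz's law, hence the minus sign), and the rotation settles where the two cancel.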
{ "domain": "physics.stackexchange", "id": 92828, "tags": "electric-circuits, angular-momentum, electrical-resistance, torque, inductance" }
Using encryption/hashing to create a secure login
Question: I am creating a login for an encrypted chat application which retrieves login information from a MySQL database. I have got to the point where I feel pretty confident that (to the best of my knowledge) it is relatively secure. I am trying to learn so feel free to criticize! import hashlib import mysql.connector from tkinter import * from tkinter import messagebox from cryptography.fernet import Fernet chat = Tk() #Api I am using to create the GUI for the application #Connect to MySQL database try: loginFRetrieve = open("LK.bin", "rb") #Retrieving Encryption key from file retrivedKey = loginFRetrieve.read() loginFRetrieve.close() loginFRetrieve = open("LC.bin", "rb") #Retrieving MySQL server login credentials retrivedLC = loginFRetrieve.read() loginFRetrieve.close() cipher = Fernet(retrivedKey) retrivedLC = cipher.decrypt(retrivedLC) #Decrypting server login data from file retrivedLC = retrivedLC.decode('utf-8') lC = retrivedLC.split() mydb = mysql.connector.connect(host=lC[0],user=lC[1],passwd=lC[2],database=lC[3]) del(lC) except mysql.connector.Error as err: chat.withdraw() messagebox.showerror("Database Error", "Failed to connect to database") exit() mycursor = mydb.cursor() #hashPass hashes and returns a string of characters using SHA-256 algorithm def hashPass(hP): shaSignature = \ hashlib.sha256(hP.encode()).hexdigest() return shaSignature #userExists checks a database too see if username exists in the database def userExists(userName): mycursor.execute("SELECT username FROM logins WHERE username = '%s'" % userName) userResult = mycursor.fetchall() if userResult: return True return False #Creates a new user in the connected SQL database. 
def newUser(nU, nP): if userExists(nU) == False: mycursor.execute("SELECT username FROM logins WHERE username = '%s'" % nU) mycursor.fetchall() r = hashPass(nP) sql = "INSERT INTO logins(username, passwordhash) VALUES(%s,%s)" val = (nU, r) mycursor.execute(sql, val) mydb.commit() chat.title(string="User created") else: messagebox.showwarning("User Creation Error", "User already exists") #Checks the connected SQL database for an existing user. def existingUser(uN, pW): if userN.get() != "": if userExists(uN) == True: encryptedPass = hashPass(pW) mycursor.execute("SELECT * FROM logins") passResult = mycursor.fetchall() for row in passResult: if row[1] == uN and row[2] == encryptedPass: chat.title(string="Login Successful!") elif row[1] == uN and row[2] != encryptedPass: messagebox.showerror("Login Error", "Password does not match our records") else: messagebox.showerror("Login Error", "User does not exist") else: messagebox.showwarning("Login Error", "Please enter a username") Answer: Encryption isn't Hashing encryptedPass = hashPass(pW) You're not encrypting the password, you're hashing it. For passwords you should not be hashing them with the SHA2 family. Instead, use bcrypt. Sanitize input From my limited knowledge of Python, it doesn't appear you're sanitizing your input on some functions, for example userExists() and the first query in newUser(). Instead, you're using simple string formatting to substitute values directly. You should be passing the variables as arguments to execute() every time.
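To illustrate the review's two recommendations concretely (a sketch of mine, not part of the review): parameterised queries, and a salted, deliberately slow password hash. bcrypt is a third-party package, so this uses hashlib.pbkdf2_hmac from the standard library as a stand-in, with an in-memory SQLite table and hypothetical column names.

```python
import hashlib
import hmac
import os
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logins (username TEXT PRIMARY KEY, salt BLOB, pwhash BLOB)")

def hash_pass(password, salt):
    # Slow, salted key derivation: far better for passwords than bare SHA-256.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def new_user(username, password):
    salt = os.urandom(16)
    # '?' placeholders let the driver escape values -- no SQL injection.
    con.execute("INSERT INTO logins VALUES (?, ?, ?)",
                (username, salt, hash_pass(password, salt)))

def check_login(username, password):
    row = con.execute("SELECT salt, pwhash FROM logins WHERE username = ?",
                      (username,)).fetchone()
    # Constant-time comparison avoids leaking how many bytes matched.
    return row is not None and hmac.compare_digest(hash_pass(password, row[0]), row[1])

new_user("alice", "s3cret")
print(check_login("alice", "s3cret"))  # True
print(check_login("alice", "wrong"))   # False
```

Note also that storing a per-user random salt means two users with the same password get different hashes, defeating precomputed lookup tables.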
{ "domain": "codereview.stackexchange", "id": 33615, "tags": "python, python-3.x, mysql, cryptography, authentication" }
Difference between free and global variables
Question: Is there any difference between free variable and global variable? Or they are just synonyms? In which situations I should use one or another? Answer: They don't play in the same category. The notion of free variable is relative to a scope. If a variable is present in a term (i.e. a subprogram) and its scope is larger than this term, then the variable is said to be free in that term. A global variable is one whose scope is the whole program, or the whole file, or the whole module, or whatever scope is called “global” in the programming language you're considering.
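A small Python illustration of the distinction (my own example, not from the answer): a free variable is relative to the scope you are looking at, while "global" names a particular scope.

```python
x = 10  # global: its scope is the whole module

def outer():
    y = 5  # local to outer
    def inner():
        # Relative to inner's scope, both x and y are *free*: they occur
        # here but are bound in a larger scope. x also happens to be
        # global; y is merely enclosing-local, not global.
        return x + y
    return inner

f = outer()
print(f())                     # 15
print(f.__code__.co_freevars)  # ('y',): names captured from enclosing scopes
print(f.__code__.co_names)     # includes 'x': resolved as a global
```

So the same variable can be free in one subprogram and bound in another, whereas being global is an absolute property of where the variable is declared.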
{ "domain": "cs.stackexchange", "id": 1715, "tags": "terminology, programming-languages" }
How to project a composite system down into a smaller subspace in Python?
Question: If we have a composite system over five qubits ($|\psi\rangle = |a\rangle|b\rangle|c\rangle|d\rangle|e\rangle$), and I want to project into a specific subspace of the first three systems, I can build a projector of the form $|011\rangle\langle011| \otimes I_{de}$ (for example). Before projecting, state $|\psi\rangle$ can be thought of as an array with length $2^5 = 32$. My goal is to do the projection and reduce the size of my vector appropriately (so now I only have an array over the final two qubits). I'm doing this in Qiskit (after I get the statevector and am done evolving). My projectors will always have the form above, just perhaps with a different bitstring (in my example, I had "011"). This is what I've done so far: Since the projectors are diagonal, I convert the string "011" into an integer. In this case, it's 3. The corresponding matrix will look like: $$ \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} .$$ Because the subspace is like this, the identity matrix $I_{de}$ will just be a matrix of size $2^2\times2^2$ and when we take the tensor product, we will get a matrix similar to the one above, but now the size of the matrix will be bigger, and the $1$ that's above will be the only place where the identity shows up (since everywhere else will be zero). I won't write down the matrix because it has size $32\times32$. If I have my state $|\psi\rangle$ and I want to project down, I figured I just had to find the components of my 32-element array which correspond to this subspace. 
If the position of the 1 in my matrix above is given by $p$ and my state is an array called psi, then I want to say that the projection is given by simply slicing my array as such: projected = psi[(2**2)*p:(2**2)*(p+1)] My question is: Am I doing the right slicing in step 4? I think I am, but it's easy to get tripped up with these subspaces. I know that this won't work in general since the projection operator could be more involved, but in the case where it's diagonal like the above matrix and is only one element, do I have the steps involved correct? Answer: Your slicing is correct, and gives the right answer in your example. Here is a generalization of your slicing, for the case where you may have a different string of bits. import numpy as np def return_indices(subspace): n_qubits = len(subspace) indices = np.array(range(32)).reshape((2,)*n_qubits) output_indices = indices[subspace].reshape(-1) return output_indices # building a test psi to see if the code works well psi = np.zeros(32, dtype=complex) psi[12:16] = list(range(1,5)) # representing the subspace that we would like to project on subspace = (0,1,1,slice(0,2),slice(0,2)) output_indices = return_indices(subspace) #returns array([12, 13, 14, 15]) psi[output_indices] #returns array([1.+0.j, 2.+0.j, 3.+0.j, 4.+0.j]) I also ran another test, by taking subspace2 = (1,0,slice(0,2),1,slice(0,2)) output_indices2 = return_indices(subspace2) Then output_indices2 is array([18, 19, 22, 23]), as it should be. Edit: In case you are interested in projecting on the subspace where the first qubit is $0$, the second qubit is $1$ and the third qubit is $+$, then you can simply use linear superposition. Indeed, this is $1/\sqrt{2}$ times the projection of the state on the $|010::\rangle$ subspace plus $1/\sqrt{2}$ times the projection of the state on the $|011::\rangle$ subspace. I am using a colon, just as in Python, to indicate that the corresponding index is free. 
So you can adapt the code to handle a case where you have a $+$ state. However, the code is written assuming you are mostly interested in $0$ and $1$ states.
{ "domain": "quantumcomputing.stackexchange", "id": 2214, "tags": "programming, projection-operator" }
Why does CaCO3 react with HCl, but not with H2SO4?
Question: I have a wonderful reaction of marble chips, $\ce{CaCO3}$, with hydrochloric acid, $\ce{HCl}$: carbon dioxide is released beautifully (fast, large volume, easy to measure, and it makes a good visual effect too). But there is no reaction between $\ce{CaCO3}$ and $\ce{H2SO4}$. Why not? Answer: Your marble chips react on the surface. In the case of hydrochloric acid, the resulting salt, calcium chloride, is highly soluble in the acid; it dissolves and exposes a fresh surface to further attack. With sulfuric acid, the highly insoluble calcium sulfate is formed on the surface of the marble chip. In other words: calcium sulfate acts as a protective layer.
{ "domain": "chemistry.stackexchange", "id": 7156, "tags": "acid-base, reaction-mechanism" }
Integer sum program
Question: This is a little assignment I am working on. I am a beginner Java programmer and would love some advice on how to improve this code. For more context, here are the assignment details: Create a complete program that has the ability to store and display integer values in an array. The maximum number of values that your program should be able to handle is 10. Add additional elements to the array using screen input (textbox and button). Remove array elements based on screen input (textbox and button). List all of the elements in the array and compute the sum of all the elements. List the even elements in the array and compute the sum of the even elements. List the odd elements in the array and compute the sum of the odd elements. import javax.swing.*; import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.Arrays; public class sumElement implements ActionListener { public static JButton add, remove, sumAll, sumEven, sumOdd; public static JTextField inputField; public static JTextArea textArea; public static int counter = 0; public static int[] numbers = new int[10]; public static JLabel titleText; public static void main(String[] args) { // Frame JFrame frame = new JFrame("Integer Sums"); frame.setSize(178, 240); frame.setResizable(false); frame.setLocationRelativeTo(null); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // Container panel JPanel container = new JPanel(); container.setLayout(new BoxLayout(container, BoxLayout.PAGE_AXIS)); frame.setContentPane(container); // Title panel JPanel titlePane = new JPanel(); titleText = new JLabel("Integer Sums"); // Content panel JPanel content = new JPanel(); content.setPreferredSize(new Dimension(300, 180)); content.setLayout(null); // Buttons add = new JButton("Add"); add.setBounds(4, 50, 80, 20); add.addActionListener(new sumElement()); remove = new JButton("Remove"); remove.setBounds(88, 50, 80, 20); remove.addActionListener(new sumElement()); sumAll = 
new JButton("Sum All"); sumAll.setBounds(4, 74, 164, 20); sumAll.addActionListener(new sumElement()); sumEven = new JButton("Sum Even"); sumEven.setBounds(4, 98, 164, 20); sumEven.addActionListener(new sumElement()); sumOdd = new JButton("Sum Odd"); sumOdd.setBounds(4, 122, 164, 20); sumOdd.addActionListener(new sumElement()); // Text area, and text field inputField = new JTextField(); inputField.setBounds(4, 21, 164, 25); JLabel inputLabel = new JLabel("Input integer value below:"); inputLabel.setBounds(12, 6, 164, 13); textArea = new JTextArea(); JScrollPane scrollPane = new JScrollPane(textArea); scrollPane.setBounds(4, 144, 165, 36); // Adding everything container.add(titlePane); titlePane.add(titleText); container.add(content); content.add(inputLabel); content.add(inputField); content.add(scrollPane); content.add(add); content.add(remove); content.add(sumAll); content.add(sumEven); content.add(sumOdd); // Extras frame.toFront(); frame.setVisible(true); } public void actionPerformed(ActionEvent event) { if (event.getActionCommand().equals("Add")) { if (counter == 10) { titleText.setText("Error: too many values"); } else { numbers[counter] = Integer.parseInt(inputField.getText()); textArea.setText(Arrays.toString(numbers)); counter++; } } else if (event.getActionCommand().equals("Remove")) { for (int i = 0; i <= counter; i++) { if (Integer.parseInt(inputField.getText()) == numbers[i]) { for (int x = i; x <= counter; x++) { numbers[x] = numbers[x + 1]; } } } counter--; textArea.setText(Arrays.toString(numbers)); } else if (event.getActionCommand().equals("Sum All")) { int sum = 0; for (int i = 0; i <= counter; i++) { sum += numbers[i]; } titleText.setText("Sum is " + sum); } else if (event.getActionCommand().equals("Sum Even")) { int sum = 0; for (int i = 0; i <= counter; i++) { if (numbers[i] % 2 == 0) { sum += numbers[i]; } } titleText.setText("Sum of even values is: " + sum); } else if (event.getActionCommand().equals("Sum Odd")) { int sum = 0; for (int i = 0; 
i <= counter; i++) { if (numbers[i] % 2 != 0) { sum += numbers[i]; } } titleText.setText("Sum of odd values is: " + sum); } } } Answer: Avoid relying on display for business logic if (event.getActionCommand().equals("Add")) { Be careful with this kind of thing. If you write a real application, it's not uncommon to have to support multiple languages. In some places, it may be required. E.g. French in Quebec. If you are displaying the French word for Add, then it probably won't match the English word that you are using here. Even without multiple languages, consider what happens if you change a label. E.g. in this case, you might switch to the more precise Append. You'd have to make that change in multiple places. Note: by business logic I mean the require logic for the application. I.e. what is the "business" of the application. In this case, determining what to do in response to the input. By display, I mean that "Add" is text that you are displaying. There are often times when you want to change the display without changing the logic of the application. Or vice versa. I am more familiar with the pattern where each event gets a different ActionListener rather than passing the same one and switching inside. That would eliminate the if/else structure, but it might make the variable management more difficult (e.g. counter and numbers). Java 7 added the ability to use a switch on a String. This is often more readable than an if/else ladder. Ranges are more robust than endpoints if (counter == 10) { Consider if (counter >= 10) { The same effect if counter is 10, but this also handles values greater than 10 as well. Consider the possibility of adding multiple values at once. If counter goes to 11, then the original code would keep failing. This version would stop. There are some circumstances when ranges won't work, but this doesn't seem like one of them. Bugs? 
for (int i = 0; i <= counter; i++) { if (Integer.parseInt(inputField.getText()) == numbers[i]) { for (int x = i; x <= counter; x++) { numbers[x] = numbers[x + 1]; } } } counter--; First, there's no need to parse the value multiple times. We can do that before the loop. But that's not a bug, just a point of style or efficiency. This will go past the end of the array. It should be x < counter, not x <= counter. As the add method is written, counter is the used size of the array, not the last element. And the inner loop should be x < counter - 1 or similar. Or update counter before the inner for loop, but that would cause other changes at the same time. Otherwise, x + 1 will equal counter, which as we just saw, is outside the used portion of the array. What if the same value appears multiple times in the array? This would remove it from the array multiple times but only update counter once. A couple possible solutions: int needle = Integer.parseInt(inputField.getText()); for (int i = 0; i < counter; i++) { if (needle == numbers[i]) { counter--; for (int x = i; x < counter; x++) { numbers[x] = numbers[x + 1]; } } } This updates counter each time it removes an element. It can remove multiple elements in one call. This way we can move the update to counter before the loop where we move the elements of the array. Or int needle = Integer.parseInt(inputField.getText()); int i = 0; for (; i < counter && needle != numbers[i]; i++) ; if (i < counter) { counter--; for (; i < counter; i++) { numbers[i] = numbers[i + 1]; } } This only removes the first match. Consider using a List instead of an array Or if you change numbers to a List rather than an array, you could do away with counter altogether. numbers.remove(Integer.parseInt(inputField.getText())); This would replace the last option above, removing just the first match. Or Integer needle = Integer.parseInt(inputField.getText()); while (numbers.remove(needle)) ; To remove all occurrences. 
Or more fancily but briefly numbers.removeAll(Arrays.asList(Integer.parseInt(inputField.getText()))); A List would also allow us to change things like for (int i = 0; i <= counter; i++) { sum += numbers[i]; } to something like for (int number : numbers) { sum += number; } Which would eliminate the bug related to counter as well (should be less than counter not less than or equal here too).
{ "domain": "codereview.stackexchange", "id": 22582, "tags": "java, beginner, array, homework, swing" }
Confusion between eigenvalues and vectors of an hermitian operator and the Hilbert space
Question: I am having trouble differentiating between the two. If we have a Hilbert space $H$, which has an ortho-normal basis ${|n\rangle}$, then is it right to say that ${|n\rangle}$ are the eigenvectors of the Hilbert space, belonging to some eigenvalues? Do we find the eigenvalues by having an hermitian operator A act on the eigenvectors: $A|n\rangle = b |n\rangle$. But if we have $|\Psi\rangle =\Sigma_n c_n|n\rangle$ as a linear superposition of the eigenvectors of the Hilbert space, then we can do the following: $A|\Psi\rangle=k |\Psi\rangle $ where: $k$ is the eigenvalue of the hermitian operator, represented by a matrix, and $|\Psi\rangle$ is the eigenvector of the eigenvalue $k$. Do the hermitian operator $A$ and the Hilbert space $H$ each have their own eigenvalues and vectors? Answer: Eigenvectors and eigenvalues are properties of operators, not of Hilbert spaces. Hence, answering the first part of your question, if $| n \rangle$ is an orthonormal basis of $\mathcal{H}$ (the Hilbert space), it is not correct to say they are the eigenvectors of the Hilbert space. We simply say they are an orthonormal basis. Suppose now you are given an operator $A \colon \mathcal{H} \to \mathcal{H}$ on the Hilbert space. Loosely speaking, a matrix. A vector $| \Psi \rangle \in \mathcal{H}$ is said to be an eigenvector of $A$ associated to the eigenvalue $a$ when $A | \Psi \rangle = a | \Psi \rangle$. Notice that I need the operator to define what I mean by eigenvector: eigenvectors are the vectors on which $A$ acts as if it was simply a number, this number being called an eigenvalue. Now if $A$ is hermitian, there are two cool results: the eigenvalues of $A$ are all real; the eigenvectors of $A$ provide an orthonormal basis of the Hilbert space. Hence, if you are given a hermitian operator on the Hilbert space, you can use it to obtain a basis. 
We usually pick the Hamiltonian, for example, because then each state in the basis has a simple time-evolution in terms of its energy (more specifically, in terms of the eigenvalue of the Hamiltonian to which it is associated). By the very definition of what a basis is, we can then write any vector in the Hilbert space in terms of eigenvectors of the Hamiltonian, i.e., in terms of states of definite energy. This provides a nice way to write down the time evolution of the state, since the Hamiltonian eigenstates have simple time-evolution rules. We could pick other operators, if we so desired. Instead of the Hamiltonian, you could pick some other hermitian operator to obtain a basis, but it would probably be less convenient to work with. Finally, it is worth mentioning that sometimes a few linearly independent states might be associated to the same eigenvalue. We call this degeneracy. In this case, we often need a few more operators (such as angular momentum squared and angular momentum in the $\vec{z}$ direction) to properly label all the states in the basis in a unique way. This is what happens when we solve for the hydrogen atom and need three labels to define uniquely which state in the basis is which, because a few of them have the same energy (as a consequence, we distinguish them by using their angular momentum properties).
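The two "cool results" above are easy to see numerically. Here is a small NumPy sketch (my own illustration, not part of the original answer): build a random hermitian matrix, check that its eigenvalues are real and its eigenvectors orthonormal, and expand an arbitrary state $|\Psi\rangle$ in that basis.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2            # (M + M†)/2 is hermitian by construction

vals, vecs = np.linalg.eigh(A)      # eigendecomposition for hermitian matrices

# eigh returns real eigenvalues; the columns of `vecs` are orthonormal,
# i.e. V† V = I, so they form a basis |n> of this finite-dimensional space
identity_check = vecs.conj().T @ vecs

# expand an arbitrary state |Psi> in this basis: c_n = <n|Psi>
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
c = vecs.conj().T @ psi
reconstructed = vecs @ c            # sum_n c_n |n> recovers |Psi>
```

The same check works for a hermitian matrix of any size; only the dimension changes.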
{ "domain": "physics.stackexchange", "id": 81687, "tags": "quantum-mechanics, hilbert-space, operators" }
Complexity of ANOTHER HAMILTONIAN CIRCUIT problem
Question: All references I find about the ANOTHER HAMILTONIAN CIRCUIT problem state it as: Given a graph and a hamiltonian circuit on it, is there another hamiltonian circuit on it? I was trying to reduce it to the hamiltonian circuit problem but I always need to add too many or too few circuits to the original one. Am I going in the right direction? It seems to be very simple... Edit Papadimitriou's complexity book refers to its theorem 17.5 for the inspiration for this problem. I think that a good way to the solution is to consider the graph proposed in Papadimitriou's book. For each node of the original graph, one can substitute the graph shown in the left image. Then, one can connect all these new graphs using the poles N and S and thus obtain a hamiltonian circuit. Then, one uses W and E to connect vertices that were connected in the original graph. In principle, one could connect W with W and E with E. I only have one more doubt: what happens if the original graph is just a segment, that is, two vertices with an edge between them? My construction would give an extra hamiltonian circuit going from W1->E1->E2->W2->W1 even if the original one didn't have one! Can you find any other counterexample? This was the solution given in my course and it seems to be partly wrong... Answer: This result is proved in the senior thesis The complexities of puzzles, cross sum and their another solution problems (ASP) by Takahiro Seta. The systematic study of ASPs was initiated by Ueda and Nagao in their paper NP-completeness Results for NONOGRAM via Parsimonious Reductions. See also Takayuki Yato's master thesis, Complexity and completeness of finding another solution and its application to puzzles.
{ "domain": "cs.stackexchange", "id": 9687, "tags": "complexity-theory, graphs, np-complete, hamiltonian-circuit" }
Why is $C=q/V$ constant for a capacitor?
Question: I get that if you have a parallel-plate/spherical capacitor you can do the math and find it to be constant, but I don't see why this should be so for a general capacitor of any random shape. In the title $q$ is charge on the capacitor, and $V$ is voltage across the capacitor. Answer: The completely general proof is a little subtle, and involves the properties of solutions to Laplace's equation. Here's a sketch of it. Imagine that we have two conductors of arbitrary shape. We place a charge $+Q$ on conductor #1, and $-Q$ on conductor #2. These charges will distribute themselves in a particular way, giving rise to surface charges $\sigma_1(\vec{r})$ and $\sigma_2(\vec{r})$, respectively. We take the reference point for our potential ($V = 0$) to be at infinity. When we do this, it will give rise to a potential everywhere in space; we can call this our "reference solution" $V(\vec{r})$. This function will satisfy Laplace's equation ($\nabla^2 V = 0$) everywhere in space; it will also satisfy $\hat{n} \cdot \nabla V = -\sigma_1/\epsilon_0$ on the surface of conductor #1 and $\hat{n} \cdot \nabla V = -\sigma_2/\epsilon_0$ on the surface of conductor #2. Now, suppose that we look at a new function $V'(\vec{r}) = \alpha V(\vec{r})$, where $\alpha$ is any real number. This means that we have multiplied the potential difference between the conductors by $\alpha$. What charge distribution on the conductors will give rise to this? Well, on the surface of conductor #1, we have $$ \sigma'_1 = -\epsilon_0\hat{n} \cdot \nabla V' = -\alpha \epsilon_0 \hat{n} \cdot \nabla V =\alpha \sigma_1, $$ and similarly $\sigma'_2 = \alpha \sigma_2$. In other words, the charge distributions are multiplied by $\alpha$ everywhere on the surface of both conductors. In particular, this implies that the total charges on the conductors are $\pm Q' = \pm \alpha Q$; and so the potential difference is always proportional to the amount of charge on the conductors. 
You might be concerned that this is just one possible way for the charge to distribute itself on the conductors; maybe when we double the total charge on a conductor, it gets extra-concentrated on some parts of the conductor and stays small elsewhere, instead of evenly doubling in density everywhere on the surface. But there's a uniqueness theorem we can appeal to: In a volume surrounded by conductors, the electric field is uniquely determined if the total charge on each conductor is given. (The region as a whole can be bounded by another conductor, or else unbounded.) (From Griffiths's Introduction to Electrodynamics, §3.1.6) Since I have found a solution where net charge is multiplied by $\alpha$ on the conductors, I can then say that by the above uniqueness theorem, this is the only possible way for the charge to distribute itself on the conductors, and thus the potential difference between the conductors must also multiply by $\alpha$.
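The heart of the argument is that the field equations are linear in the sources. As a toy numerical check (my own sketch; the 1D grid, grounded boundaries and charge values are arbitrary choices), solve a discrete Poisson problem for a charge distribution $\rho$ and for $2\rho$, and observe that the potential, and hence any potential difference, simply doubles:

```python
import numpy as np

# 1D discrete Poisson equation d^2V/dx^2 = -rho/eps0 with grounded endpoints,
# written as a linear system A V = -rho/eps0
n = 50
eps0 = 8.854e-12
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))        # second-difference operator

rho = np.zeros(n)
rho[10], rho[40] = 1e-9, -1e-9             # a +q / -q pair of "conductors"

V1 = np.linalg.solve(A, -rho / eps0)
V2 = np.linalg.solve(A, -2 * rho / eps0)   # double the charge everywhere

# linearity: V2 = 2*V1 pointwise, so q/(V1[10]-V1[40]) is charge-independent
ratio = (V2[10] - V2[40]) / (V1[10] - V1[40])   # = 2
```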
{ "domain": "physics.stackexchange", "id": 42878, "tags": "electrostatics, electric-fields, charge, capacitance, voltage" }
For an infinitesimal transformation in phase space, what functions are allowed for this to be a canonical transformation?
Question: Consider an infinitesimal transformation: $$(q_{i},p_{j}) \quad\longrightarrow \quad(Q_{i},P_{j}) ~=~ \left(q_{i} + \alpha F_{i}(q,p),~p_{j} + \alpha E_{j}(q,p)\right) $$ where $α$ is considered to be infinitesimally small. Now, if we construct the Jacobian matrix, we will have: $$ \jmath =\begin{pmatrix} \delta_{ij}+ \alpha{\frac{\partial F_{i} }{\partial q_{j}}} & \alpha{\frac{\partial F_{i} }{\partial p_{j}}} \\ \alpha{\frac{\partial E_{i} }{\partial q_{j}}} & \delta_{ij}+ \alpha{\frac{\partial E_{i} }{\partial p_{j}}} \end{pmatrix}.$$ What functions $F_{i} (q, p)$ and $E_{i} (q, p)$ are allowed for this to be a canonical transformation? For this to be a canonical transformation, it is required that $$\jmath j \jmath^{T} = j$$ in which $ j = \begin{pmatrix} 0 & 1\\ -1&0 \end{pmatrix}$. For the transformation to be canonical, we need $$\frac {\partial F_{i}}{\partial q_{j}} = - \frac {\partial E_{i}}{\partial p_{j}} $$ which is true if $$F_{i} = \frac {\partial G}{\partial p_{i}} \; \; , \; \; E_{i} = - \frac {\partial G}{\partial q_{i}} $$ for some function $G(q, p)$. Now my problem is that, despite calculating everything, I can't figure out how to reach the last two formulas, i.e. the formulas which show the allowed forms of $F_{i}$ and $E_{i}$. Answer: First of all, be aware that there exist various different definitions of canonical transformations (CT) in the literature, cf. e.g. this Phys.SE post. What OP (v3) above refers to as a CT, we will in this answer call a symplectomorphism for clarity. What we in this answer will refer to as a CT, will just be a CT of type 2. It is possible to show (see e.g. Ref. 
1) that an arbitrary time-dependent infinitesimal canonical transformation (ICT) of type 2 with generator $G=G(z,t)$ can be identified with a Hamiltonian vector field (HVF) $$ \delta z^I~=~\varepsilon\{ z^I,G\}_{PB}~\equiv~ \varepsilon\sum_{K=1}^{2n} J^{IK} \frac{\partial G}{\partial z^K} , $$ $$ X_{-G}~\equiv~-\{G,\cdot\}_{PB}~\equiv~\{\cdot,G\}_{PB},\tag{1} $$ with (minus) the same generator $G$. Here $z^1,\ldots, z^{2n}$, are phase space variables, $t$ is time, $\varepsilon$ is an infinitesimal parameter, and $J$ is the symplectic unit matrix, $$\tag{2} J^2~=~-{\bf 1}_{2n\times 2n}.$$ A general time-dependent infinitesimal transformation (IT) of phase space can without loss of generality be assumed to be of the form $$ \tag{3} \delta z^I~=~\varepsilon \sum_{K=1}^{2n} J^{IK} G_K(z,t) ,\qquad I~\in~\{1,\ldots, 2n\}, $$ because the matrix $J$ is invertible. Next consider a time-dependent infinitesimal symplectomorphism (IS), which can be identified with a symplectic vector field (SVF). It is possible to show that a SVF [written in the form (3)] satisfies the Maxwell relations$^1$ $$\tag{4} \frac{\partial G_I(z,t)}{\partial z^J}~=~(I \leftrightarrow J),\qquad I,J~\in~\{1,\ldots, 2n\}. $$ Eq. (4) states that the one-form $$\tag{5} \mathbb{G}~:=~ \sum_{I=1}^{2n}G_I(z,t) \mathrm{d}z^I$$ is closed $$\tag{6} \mathrm{d}\mathbb{G}~=~0. $$ It follows from the Poincare Lemma that locally there exists a function $G$ such that $ \mathbb{G}$ is locally exact $$\tag{7} \mathbb{G}~=~\mathrm{d}G. $$ Or in components, $$\tag{8} G_I(z,t)~=~\frac{\partial G(z,t)}{\partial z^I},\qquad I~\in~\{1,\ldots, 2n\} .$$ In summary we have the following very useful theorem for a general time-dependent infinitesimal transformation (IT). Theorem. An infinitesimal canonical transformation (ICT) of type 2 is an infinitesimal symplectomorphism (IS). Conversely, an IS is locally an ICT of type 2. 
2D counterexample: Consider the phase space $M=\mathbb{R}^2\backslash\{(0,0)\}$ with the symplectic 2-form $\omega =\mathrm{d}p\wedge \mathrm{d}q$. One may check that the vector field $$X=\frac{q}{q^2+p^2}\frac{\partial}{\partial q} +\frac{p}{q^2+p^2}\frac{\partial}{\partial p} $$ is SVF/IS but it is not a HVF/ICT of type 2. The problem is that the candidate ${\rm arg}(q+ip)$ for the Hamiltonian generator is multi-valued, and hence not globally well-defined. References: H. Goldstein, Classical Mechanics; 2nd eds., 1980, Section 9.3; or 3rd eds., 2001, Section 9.4. -- $^1$ OP already listed some (but not all) of the Maxwell relations (4) in his second-last equation. All of the Maxwell relations (4) are necessary in order to deduce the local existence of the generating function $G$.
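To see the theorem in action, here is a small numerical check (a sketch with an arbitrarily chosen generator, not taken from the references): for one degree of freedom take $G(q,p) = q^2 p$, so $F = \partial G/\partial p = q^2$ and $E = -\partial G/\partial q = -2qp$, and verify that the Jacobian $M$ of the infinitesimal map satisfies the symplectic condition $M j M^T = j$ up to terms of order $\alpha^2$:

```python
import numpy as np

alpha = 1e-6                 # infinitesimal parameter
q, p = 0.7, -1.3             # an arbitrary phase-space point

# generator G(q,p) = q^2 * p  =>  F = dG/dp = q^2,  E = -dG/dq = -2*q*p
# Jacobian of the map (q, p) -> (q + alpha*F, p + alpha*E):
M = np.array([[1 + 2 * alpha * q, 0.0],
              [-2 * alpha * p,    1 - 2 * alpha * q]])

j = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # symplectic unit in 2D

residual = M @ j @ M.T - j    # vanishes to first order in alpha
```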
{ "domain": "physics.stackexchange", "id": 23865, "tags": "classical-mechanics, hamiltonian-formalism, coordinate-systems, phase-space, poisson-brackets" }
Looking for bolt with press nut
Question: I'm looking for a bolt without a thread that I can attach to an object using a kind of press nut. What's the name of this kind of bolt and nut? I can't find any information about it. Answer: The washer is often called a lock washer, star lock washer or clamping washer or ring. The threadless bolt is often called a pin.
{ "domain": "engineering.stackexchange", "id": 5371, "tags": "mechanical-engineering" }
Scrabble algorithm review and performance suggestions
Question: After finishing Project Euler 24, I decided to make a Scrabble-type application using /usr/lib/dict as my dictionary, which I cat into a word file. It does take a few seconds, and the dictionary selection isn't that great. Is there any way I could make it faster, more effective, and with a better dictionary source? public class Scrabble { public static ArrayList<String> numbers = new ArrayList<String>(); public static ArrayList<String> numbers2 = new ArrayList<String>() { }; public static void main(String[] args) { Scanner dict = null; try { dict = new Scanner(new File("words.txt")); } catch (FileNotFoundException ex) { } while (dict.hasNextLine()) { numbers.add(dict.nextLine()); } String n = "gojy";//random text here rearrange("", n); LinkedHashSet<String> listToSet = new LinkedHashSet<String>(numbers2); ArrayList<String> listWithoutDuplicates = new ArrayList<String>(listToSet); for (int i = 0; i < listWithoutDuplicates.size(); i++) { if (numbers.contains(listWithoutDuplicates.get(i))) { System.out.println(listWithoutDuplicates.get(i)); } } } public static void rearrange( String q, String w) { if (w.length() <= 1) { String k = q + w; numbers2.add(k);//full word numbers2.add(q);//smaller combination to get words with less letters. doesn't work too well } else { for (int i = 0; i < w.length(); i++) { String c = w.substring(0, i) + w.substring(i + 1); rearrange(q + w.charAt(i), c); } } } } Answer: If what you are trying to do is store a huge list of valid words in such a way that you can test whether a given string is a valid word, a Trie works well. See this Stack Overflow question. Load the wordlist into the Trie when your server starts, and use the same Trie for all games (assuming this is a client server game). It's trickier than the linked thread if you need to include more than the standard 26 letters of the alphabet, but I've done a Unicode Trie for a chat filter before and I could help if you need that. 
I don't understand what you are trying to do with the rearrange method. Can you explain?
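For reference, the suggested Trie can be sketched in a few lines (shown in Python for brevity; a Java version has the same shape, with a Map<Character, Node> per node):

```python
class Node:
    def __init__(self):
        self.children = {}      # letter -> Node
        self.is_word = False    # True if a dictionary word ends here

class Trie:
    def __init__(self, words=()):
        self.root = Node()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, Node())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

# load the word list once; each lookup is then O(len(word)),
# independent of dictionary size
words = Trie(["go", "joy", "gym", "goo"])
```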
{ "domain": "codereview.stackexchange", "id": 5801, "tags": "java, algorithm, performance, search, hash-map" }
"Troll physics": What is wrong with this perpetual machine?
Question: I think this will not work because the water can't flow out of the tube. (But what if we make the other end a little wider, will the water stop right before that part? What confuses me is that even a wet sponge will start dripping eventually.) Answer: I suggest you build one in your kitchen. Cut a sponge into a J shape and hook it over a pencil, so that the straight edge dangles into a bowl of water. Put a piece of tissue paper under the hook to catch drips and wrinkle, in case they come while you're asleep. Wait. I suspect you'll find that the top of the sponge never actually gets wet enough to drip. Capillary action has enough energy to fill the capillary, but once the capillary is full the force is gone.
{ "domain": "physics.stackexchange", "id": 15589, "tags": "perpetual-motion, capillary-action" }
Most amenable coastal geographic conditions for harbors?
Question: I've been trying to find sources detailing the natural geographic features most suitable for shipping harbors along a coast—primarily for international trade. So far, I've read vague references to bathymetry and natural harbors. What other geographic conditions should affect the viability of shipping (historically) along a country's coastline? Answer: Here are some considerations: It needs to be deep enough that you can get close to shore with a ship that extends several meters below sea level. It needs to be protected from large waves, so a spit of land that goes around the harbor is useful. It shouldn't be in a place where tides are large and ships will be moving up and down so much over the course of a day that loading and unloading become difficult. It shouldn't be in a place where winds are frequently too strong to make operations difficult.
{ "domain": "earthscience.stackexchange", "id": 2614, "tags": "oceanography, geography, bathymetry" }
Custom function to scroll spy on the jScrollPane plugin
Question: I have an element where the jScrollPane is being used to provide custom scrollbars for an extensive text. The idea behind the code is to change the image beside the text each time the user reaches a certain amount of scroll while reading the text. To keep the code flexible, I've prepared a method to detect the number of images available and from there establish the trigger points for the image change. This is all based on the text wrapper height. Can't this be simplified, reducing the amount of code, and perhaps improved in the process? HTML structure <div id="imageWrap"> <img data-index="1" src="..." class="active" /> <img data-index="2" src="..." /> <img data-index="3" src="..." /> </div> jQuery/JavaScript code $imagesWrap = $('#imageWrap'); if ($imagesWrap.size()==1) { //target all images and count them var $targets = $imagesWrap.find('> img'), total = $targets.size(); //for each found, set the CSS z-index $targets.each(function(){ var $this = $(this), index = $this.data('index'); //revert order $this.css({ 'z-index': total-index }); }); var $scrollSpy = $('.jspPane'), //the element to spy height = $scrollSpy.height(), //its height stepsTrigger = parseInt(height/total), //trigger at "every" this distance currentPos = 0; //current position (active image) setInterval(function(){ //ascertain current position (insure positive number) currentPos = Math.abs(parseInt($scrollSpy.css('top'), 10)); //ascertain the target image var img = Math.round(currentPos/stepsTrigger), id = img==0 ? 1 : img, $tgt = $('img[data-index="'+id+'"]'); //if different from the already active if (!$tgt.hasClass('active')) { $imagesWrap.find('.active').removeClass('active').hide(); $tgt.fadeIn().addClass('active'); } }, 100); } Answer: I'd like to address a couple of style concerns over simplification. Purely for debugging: if you have function expressions, then name them, e.g. 
setInterval(function(){ could become setInterval(function _scrollDetectionInterval(){ When debugging, the call stack will now show you at a glance what this function is (especially helpful if you have multiple). You have defined variables outside of the scope in which you are using them: var $scrollSpy = $('.jspPane'), //the element to spy height = $scrollSpy.height(), //its height stepsTrigger = parseInt(height/total), //trigger at "every" this distance currentPos = 0; //current position (active image) This doesn't make sense unless you aren't going to change them again or you will be using them elsewhere. Assuming this is all the code, I would at least put currentPos inside the interval function scope. You have a data-index="1" attribute on your elements. Is this actually needed? jQuery has an index you can use in the .each() method. Just change your function to $targets.each(function(index){ and unless I am missing something it should work. You can also then change the line $tgt = $('img[data-index="'+id+'"]'); to $tgt = $targets.eq(Math.max(0, img - 1)); There is a lot going on in this line. $targets.eq( uses the 0-based index of the element in the array. Math.max( ensures it will never be less than 0. The code will only work if there is one image wrapper on the page. 
If you are looking to extend this to multiple wrappers, here is how I would change it: $('.someImageWrapClass') .each(function _eachWrapper() { var $parentWrapper = $(this); //target all images and count them var $targets = $parentWrapper.find('> img'), total = $targets.size(); //for each found, set the CSS z-index $targets.each(function _eachImage(index){ //revert order $(this).css({ 'z-index': total - index }); }); var $scrollSpy = $('.jspPane'), //the element to spy height = $scrollSpy.height(), //its height stepsTrigger = parseInt(height/total); //trigger at "every" this distance setInterval(function _scrollDetection() { //current position (active image) //ascertain current position (ensure positive number) var currentPos = Math.abs(parseInt($scrollSpy.css('top'), 10)); //ascertain the target image var img = Math.round(currentPos/stepsTrigger), $tgt = $targets.eq(Math.max(0, img - 1)); //if different from the already active if (!$tgt.hasClass('active')) { $targets.removeClass('active').hide(); $tgt.fadeIn().addClass('active'); } }, 100); });
{ "domain": "codereview.stackexchange", "id": 4596, "tags": "javascript, jquery" }
RSpec: Compound Expectations
Question: It's an exercise from an RSpec course, which I'm currently doing. Task: Create a compound expectation, which asserts the string sportscar starts with the substring "sports" and ends with the substring "car". Create a compound expectation, which asserts that the number 30 is even and that it responds to the times method. Create a compound expectation, which asserts that the array [2, 4, 8, 16, 32, 64] includes the element 32 and starts with the elements 2, 4, and 8. My solution: RSpec.describe "sportscar" do it "should start with 'sports' and end with 'car'" do expect(subject).to start_with("sports").and end_with("car") end it "30 is even and responds to the times-method" do numb = 30 expect(numb.even?).to be true and numb.respond_to? :times end it "shall include 32 and start with the values 2, 4, 8" do arr = [2, 4, 8, 16, 32, 64] expect(arr).to include(32) and start_with(2, 4, 8) end end Could it be improved? Perhaps written in a more Ruby-idiomatic way? Are my message-strings (it "should start with ..." etc.) done in a good way? Or should it be written in a different way and if so: How? Answer: I don't know what the exercise says exactly but your describe does not match the last two expectations. You should have 3 describes because you describe three different things (a sportscar, number 30 and an array). RSpec.describe "sportscar" do it "should start with 'sports' and end with 'car'" do expect(subject).to start_with("sports").and end_with("car") end end RSpec.describe 30 do it "is even and responds to the times-method" do numb = 30 expect(numb.even?).to be true and numb.respond_to? 
:times end end RSpec.describe Array do it "includes 32 and starts with the values 2, 4, 8" do arr = [2, 4, 8, 16, 32, 64] expect(arr).to include(32) and start_with(2, 4, 8) end end This sentence also does not make sense: it "30 is even and responds to the times-method" So it should be something like it "is even and responds to the times-method" I would also not test the implementation but the functionality. Instead of respond_to?, execute the function: count = 0 30.times { count += 1 } expect(count).to eq(30) Generally I would advise splitting distinct functionality into its own it block. and, or etc. are usually a sign that there is more than one functionality. Not sure if this is part of the exercise though. it "should start with 'sports'" do expect(subject).to start_with("sports") end it "should end with 'car'" do expect(subject).to end_with("car") end To be fair, all these examples don't feel very natural.
{ "domain": "codereview.stackexchange", "id": 42363, "tags": "ruby, unit-testing, rspec" }
Trouble to predict the effect of pressure and temperature for a reaction in equilibrium
Question: I have trouble predicting the effect of decreasing pressure and increasing temperature on the reaction $\ce{N2 + 2O2 <=> 2NO2}$. Increasing temperature caused this reaction to shift in the forward direction and decreasing pressure brought no change. I know increasing temperature favours endothermic reactions and decreasing temperature favours exothermic reactions. I also know that on increasing pressure the system will shift in the direction where fewer moles are present. Please tell me how to use this information to predict the effect of increasing temperature and decreasing pressure. Answer: When I posted this question I had no idea how changing each factor brings about a change in equilibrium. Then I bombarded my brain with everything I know about equilibrium and I found a fairly easy way to understand this. Change in concentration: Imagine two vessels A and B connected to each other through a thin hollow pipe through the middle of the vessels. Let vessel A be the reactant and vessel B be the product. The vessels are filled with water a little above the pipe. If I add water to vessel A, water flows from A to B. Thus increasing the concentration of A caused the water in vessel A to flow to B, which in other words is the forward reaction. Change in temperature: Think of a reaction which is in equilibrium. 1) Decreasing the temperature of a reaction in equilibrium. We know that decreasing the temperature favours the exothermic reaction, i.e. exothermic reactions are favoured by low temperature. Exothermic means giving out heat, which removes the deficiency of temperature and brings the system back to normal in equilibrium. 2) Increasing the temperature of a reaction in equilibrium. We also know that increasing the temperature favours the endothermic reaction, i.e. endothermic reactions are favoured by high temperature. 
Endothermic means absorbing heat energy, which consumes the excess heat energy (from the high temperature) and brings the system back to normal in equilibrium. Effect of pressure: Changing pressure plays a significant role in gaseous reactions which are accompanied by a change in moles. On increasing the pressure of the system, equilibrium will shift in the direction where the number of moles decreases, i.e. where the pressure decreases (because pressure is directly proportional to the number of moles). This is the second time I have answered on Stack Exchange, so please let me know my mistakes, if any. If I had known how to create the diagrams I usually find in other answers, I would have added them.
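The temperature effect described qualitatively above can also be quantified with the van 't Hoff equation, $\ln(K_2/K_1) = -\frac{\Delta H}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)$. A small sketch with hypothetical numbers (my own illustration, not part of the original answer) shows that for an endothermic reaction ($\Delta H > 0$) the equilibrium constant grows with temperature, i.e. the equilibrium shifts forward:

```python
import math

# van 't Hoff equation: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
R = 8.314            # gas constant, J/(mol*K)
dH = 60_000.0        # hypothetical reaction enthalpy, J/mol (endothermic)
K1, T1 = 1.0e-3, 300.0
T2 = 350.0           # raise the temperature by 50 K

K2 = K1 * math.exp(-(dH / R) * (1 / T2 - 1 / T1))
# K2 > K1: heating an endothermic reaction raises K, shifting it forward
```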
{ "domain": "chemistry.stackexchange", "id": 2263, "tags": "equilibrium" }
Question on Partial Differentiation in Thermodynamics
Question: For energetic fundamental relation $U=U(S,X_1,\ldots)$ where $X_k$ represent extensive parameters $V$ or $N_j$, let \begin{equation}P_k=\frac{\partial U}{\partial X_k}.\end{equation} For entropic relation $S=S(U,X_1,\ldots)$, let \begin{equation}F_k=\frac{\partial S}{\partial X_k}.\end{equation} By solving $dU=TdS\,+\,\Sigma\, P_k \,dX_k$ for $dS$ and comparing it with $dS=\frac{1}{T}dU\,+\,\Sigma\, F_k \,dX_k$, I can demonstrate that $F_k=-P_k\,/\,T$. Alternatively, I want to get the same relation with following differentiation method: Set $\bar{S}=S(U(S,X_1,\ldots),\, X_1,\, \ldots)$. Then by chain rule, we have \begin{equation} \left(\frac{\partial \bar{S}}{\partial X_k}\right)_{S,X_j,\,\ldots}=\left(\frac{\partial S}{\partial U}\right)_{X_j,\,\ldots}\left(\frac{\partial U}{\partial X_k}\right)_{S,X_k,\,\ldots} +\left(\frac{\partial S}{\partial X_k}\right)_{U, X_j,\,\ldots}.\end{equation} I can get the desired equation if the LHS vanishes. Then is it permissible to insist that the LHS equals zero since essentially $\bar{S}=S$? Is there any logical fallacy? I am sorry for my clumsy English. I appreciate your help. Answer: When you wrote the first equation $$U=U(S,X_1,\ldots) \tag 1$$ you were stating that $U$ is the dependent variable and the other variables on the right hand side are the independent variables. So lose the bar on $\bar{S}$ and write $$S=S(U(S,X_1,\ldots),\, X_1,\, \ldots) \tag 2$$ Then write down the total differentials for equations $(1)$ and $(2)$ and see if you can get to wherever it is you're going.
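Carrying the hint through (my own working of the final step, not part of the original answer): since equation $(2)$ holds identically in $S$, its derivative at fixed $S$ and fixed $X_{j\neq k}$ has a vanishing left-hand side, which is exactly the step the asker wanted permission for, and the chain rule then yields

```latex
0 \;=\; \left(\frac{\partial S}{\partial U}\right)_{X_j,\ldots}
        \left(\frac{\partial U}{\partial X_k}\right)_{S,X_j,\ldots}
      + \left(\frac{\partial S}{\partial X_k}\right)_{U,X_j,\ldots}
  \;=\; \frac{1}{T}\,P_k + F_k
  \qquad\Longrightarrow\qquad
  F_k \;=\; -\frac{P_k}{T}.
```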
{ "domain": "physics.stackexchange", "id": 56098, "tags": "homework-and-exercises, thermodynamics, entropy, differentiation" }
Number of non-zero elements in intersection of two bloom filters
Question: Let us assume I use bloom filters of size $m$ bits with $k$ hash functions. Now I have two sets $X$ and $Y$. Let $B(X)$ be the bloom filter of the set $X$. In general I know that $B(X\cup Y)= B(X) \lor B(Y)$. Let us assume that $|X|=n_X, |Y|=n_Y, |X\cap Y|=l$. So I can use the formula from the wiki for the expected number of non-zero elements in $B(X\cup Y)$. However it is not true that $B(X\cap Y)= B(X) \land B(Y)$. Can I theoretically compute how many non-zero elements $B(X) \land B(Y)$ should have? Answer: Focus on a particular bit position, say the $i$th bit position. Let $E$ be the event that this bit is set in $B(X) \land B(Y)$. Now all you need to do is to estimate $\Pr[E]$. Then the expected number of non-zero elements in $B(X) \land B(Y)$ will be $m\Pr[E]$. To help us estimate $\Pr[E]$, let's break this down into cases. Define $S=X \setminus Y$, $T=Y \setminus X$, $U=X \cap Y$. Also define the event $E_S$ to represent that this bit is set in $B(S)$, $E_T$ that it is set in $B(T)$, and $E_U$ that it is set in $B(U)$. It will be easier to estimate $\Pr[\neg E]$. Note that $$\Pr[\neg E] = \Pr[\neg E_U \land \neg (E_S \land E_T)] = \Pr[\neg E_U] \cdot \Pr[\neg (E_S \land E_T)].$$ Now $$\Pr[\neg E_U] = (1 - \frac{1}{m})^{kn_U} \approx \exp \{-kn_U/m\},$$ where $n_U=|U|=|X \cap Y|$. Also $$\begin{align*} \Pr[\neg E_S] &= (1 - \frac{1}{m})^{k n_S} \approx \exp\{-k n_S/m\}\\ \Pr[\neg E_T] &\approx \exp\{-kn_T/m\} \end{align*}$$ so we have $$\Pr[E_S \land E_T] = \Pr[E_S] \cdot \Pr[E_T] \approx (1 - \exp\{-k n_S/m\}) (1 - \exp\{-kn_T/m\}).$$ It follows that $$\Pr[\neg E] \approx \exp\{-kn_U/m\} \cdot [1 - (1 - \exp\{-k n_S/m\}) (1 - \exp\{-kn_T/m\})],$$ and $$\Pr[E] \approx 1 - \exp\{-kn_U/m\} \cdot [1 - (1 - \exp\{-k n_S/m\}) (1 - \exp\{-kn_T/m\})].$$ Multiply by $m$, and that is your desired estimate for the number of non-zero elements in $B(X) \land B(Y)$. 
If you want to express this estimate in terms of the sizes $|X|$, $|Y|$, and $|X \cap Y|$, note that $n_U=|X\cap Y|$, $n_S=|X|-|X\cap Y|$, and $n_T=|Y| - |X\cap Y|$, and plug into the formula above. If you want to estimate the size of $|X \cap Y|$ from the number of non-zero elements in $B(X) \land B(Y)$, you just need to invert the above equation. You could use binary search on $n_U$ to find the value of $n_U$ that gives the closest match between expected number of bits set and actual bits set.
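The estimate is easy to sanity-check by simulation. The sketch below (my own illustration; the salted-SHA-256 hashing scheme and the set sizes are arbitrary choices) builds two Bloom filters, ANDs them bitwise, and compares the observed number of set bits with $m\Pr[E]$:

```python
import hashlib
import math

def positions(item, k, m):
    # k hash positions per item, derived from salted SHA-256 digests
    return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % m
            for i in range(k)]

def bloom(items, k, m):
    bits = [0] * m
    for it in items:
        for h in positions(it, k, m):
            bits[h] = 1
    return bits

m, k = 4096, 3
X = set(range(0, 300))                 # |X| = 300
Y = set(range(200, 500))               # |Y| = 300, |X n Y| = 100
nU, nS, nT = len(X & Y), len(X - Y), len(Y - X)

bx, by = bloom(X, k, m), bloom(Y, k, m)
observed = sum(a & b for a, b in zip(bx, by))

# m * Pr[E] from the formula derived above
p = 1 - math.exp(-k * nU / m) * (
    1 - (1 - math.exp(-k * nS / m)) * (1 - math.exp(-k * nT / m)))
expected = m * p
```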
{ "domain": "cs.stackexchange", "id": 21450, "tags": "probability-theory, probabilistic-algorithms, bloom-filters" }
What is the correct block diagram of CDMA communication system?
Question: I was reading someone's thesis and now I am more confused about CDMA systems. From what I understood, CDMA is a multiplexing scheme which allows multiple signals to be transmitted over a single channel, so the general structure of a CDMA communication system should be as shown below: Binary Data --> Multiply PN Code --> XOR(spreading) -->BPSK/QPSK Modulation --> RF End | Channel | Restore Data <-- Despreading <-- Acquisition and lock <-- RF Demodulation <-- RF Reception Now I have come across a block diagram in the thesis, Pg. 27, which basically modulates first and then does the spreading. From my understanding it should be spread before doing the modulation. This is my understanding, please correct me if I am wrong. But can someone please confirm which one is right? It would be great if you could explain a little bit. Answer: Since spreading and symbol modulation are memoryless linear operations, the order makes absolutely no difference. Both methods of writing the same thing down are equally valid, and the one you choose mainly depends on what is convenient for the rest of your text.
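The answer's point, that the two block orderings describe the same memoryless linear operation, can be checked directly. In the sketch below (my own illustration), the BPSK mapping $b \mapsto 1-2b$ turns XOR into multiplication, so XOR-spreading then modulating gives exactly the same chip sequence as modulating first and then multiplying by the modulated PN code:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=8)         # data bits
pn = rng.integers(0, 2, size=(8, 16))     # PN chips per bit (spreading factor 16)

def bpsk(b):
    return 1 - 2 * b                      # 0 -> +1, 1 -> -1

# order A: XOR-spread in the binary domain, then BPSK-modulate
chips_a = bpsk(bits[:, None] ^ pn)

# order B: BPSK-modulate first, then spread by multiplying modulated chips
chips_b = bpsk(bits)[:, None] * bpsk(pn)
# identical, because bpsk(x ^ y) == bpsk(x) * bpsk(y)
```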
{ "domain": "dsp.stackexchange", "id": 9779, "tags": "digital-communications, modulation" }
What makes the internal resistance of a battery at microscopic level?
Question: What makes the internal resistance of a battery at the microscopic level? And why does the voltage drop when in a circuit compared to the open circuit voltage? I think that the field between the plates should change; a lower voltage means a weaker electric field. And the electric field is generated by excess charges on the two plates. But I don't think that there are fewer excess charges, because even if one more charge leaves the electrode it gets replaced by another; otherwise it would, after a long time, make the electrodes neutral. My suggestion is that there is some sort of other field inside the cell that counteracts the electrode plates; when there is a current flowing this field must get stronger. Answer: What makes the internal resistance of a battery at the microscopic level? There is no simple answer due to the many different types of batteries and technologies. So the following is only an overview. There are generally two categories of internal resistance of a battery. One has to do with the resistivity of the materials of the internal components (e.g. electrodes) that conduct charge from and to the terminals. The other has to do with the mobility of the ions moving internally between the electrodes through the electrolyte and the medium separating the electrodes. Other factors include, but are not limited to, battery size, the particular battery chemistry, temperature and age. And why does the voltage drop when in a circuit compared to the open circuit voltage? My suggestion is that there is some sort of field inside the cell that counteracts the electrode plates; when there is a current flowing this field must get stronger. I think you are getting close, but I think that what you are referring to as "some sort of field inside the cell" is in reality what we call the emf (electromotive force) or voltage that the battery generates inside the battery. The internal battery voltage is in series with its internal resistance. 
When current is delivered to the circuit there is a voltage drop across the internal resistance. The terminal voltage is then the emf minus the internal voltage drop. When no current is delivered (open circuit), the terminal voltage is the same as the emf. The figure below illustrates the point. If $R_L$ is infinite (meaning we have an open circuit) the voltage across the battery terminals equals its internal voltage, or emf. For any finite value of $R_L$ a current will flow. That means a voltage drop across its internal resistance $R_b$. The voltage available at the terminals will equal the emf minus the voltage drop across $R_b$. Hope this helps
{ "domain": "physics.stackexchange", "id": 59694, "tags": "electromagnetism, electricity, electric-circuits, electrical-resistance" }
Algorithm to determine two binary expression trees will give the same result based on associative and commutative properties of some operators
Question: Given n different numbers, I would like to find out whether there exists an algebraic expression using all the n numbers, with n−1 binary operators and an unlimited number of parentheses, that evaluates to a certain number T. My idea is to make use of binary expression trees, construct all possible trees and then find the result by brute force. As I have learned, the number of possible combinations will be: $$\frac{(2(n-1))!}{(n-1)!\,n!}\, n!\, s^{n-1}$$ where s is the number of different binary operators that are allowed to be used (e.g. if we allow +, −, × and ÷, then s=4), and $$\frac{(2(n-1))!}{(n-1)!\,n!}$$ is the number of different shapes of binary expression trees that can be constructed without considering the content of the nodes. To speed up the search, I have an idea to make use of the associative and commutative properties of the operators + and ×. An intuitive case: if all of the n−1 operators chosen are +, then all the trees will evaluate to the same result, regardless of how the n numbers are allocated to the leaf nodes. However, this obviously does not apply to the operators − and ÷. And, in the general case, the operators chosen will consist of a mixture of +, −, × and ÷, so is there an algorithm to check whether the trees will evaluate to the same result without actually doing the calculation? e.g. A binary expression tree that represents (8−5)×(6+3)×(7−4) will give the same result as a binary expression tree that represents (8−5)×((6+3)×(7−4)). Going even further, they will also give the same result as a binary expression tree that represents (3+6)×(7−4)×(8−5). Is there such an algorithm to detect that these trees will evaluate to the same result? Answer: Note that if you just want to know whether two expressions without variables have the same result, then just evaluate them already. 
But it's more tricky with variables: I assume that you are basically just functionally rearranging lists (i.e., like Python/JavaScript/Lisp code). For associative binary operators, it's convenient to make them have n operands to avoid a re-association pattern matching mess. A sum or product of one operand is a redundant parenthesis. Because they happen to be commutative, have the sum and product objects consistently sort their operands if you want them to be efficiently comparable (i.e., define a canonical form that everything should reduce to). It is harder if variables are involved though. The equals sign can be treated as just another binary operator to create new expressions, where you do things like: a, (a=(b*2)), ((a=(b*2))+6), ((a+6)=(b*2)+6), ((a+6)=(2*b)+6). At that point, you construct new expressions like: equate(expr,expr), add(expr,expr), mul(expr, expr), ldistribute(expr,expr), rdistribute(expr,expr), commute(expr), etc. Some of the rules are obvious, like distributing multiplication over addition, or additive or multiplicative cancellation; and usually go in one direction (like automatic differentiation). But there will be other times where you will need to do a more clever search (like a chess engine), doing things like multiplying an expression by 1, or equating it with a new variable divided by itself, etc. One of the trickier things in an algorithm for solving is that some operations like division add new logical constraints. If you construct div(add(a,b),x), you implicitly construct a logical constraint x!=0; and you need to work these constraints in so that the search for the solution doesn't generate nonsense. These kinds of programs are usually written as pattern matching and rewriting, with provisions made to not go into infinite loops.
{ "domain": "cs.stackexchange", "id": 4248, "tags": "algorithms, search-algorithms, binary-trees" }
K-clustering algorithm using Kruskal MST with Disjoint Set in place to check for cycles
Question: here below a working implementation that finds the minimal distance between k(set =4 below) clusters in a graph. I have doubts mainly on the implementation of the Disjoint Set structure: instead of using STL containers, I went along with an array of pointers as data structure hosting the leader nodes, in order to have some practice with C-style pointers. Here is the Graph structure: Graph.h // creates and manages a graph data structure #ifndef GRAPH_H_INCLUDED #define GRAPH_H_INCLUDED #include <array> #include <vector> #include <list> #include <fstream> #include <string> #include <iostream> class Edge { public: // constructors Edge(); Edge(int, int, int); std::array<int, 2> getNodes() const; void setNodes(const int, const int); int getCost() const; void setCost(const int); void swapNodes(); // a get() that allows writing: int operator[](const int); bool operator==(const Edge&) const; bool operator!=(const Edge&) const; private: int node1, node2; // nodes are just indices of the nodes in the graph int cost; }; class Node { friend class Graph; // friendship needed by printing routine in Graph public: // constructors Node(); Node(const Edge&); int getLabel() const {return label;} Edge getEdge(const int) const; void setLabel(const int in) {label = in;} void addEdge(const Edge in) {edges.push_back(in);} void addManyEdges(const std::vector<Edge>); int getScore() const {return score;} void setScore(const int in) {score = in;} std::list<Edge>::size_type size() const {return edges.size();} // size of list 'edges' // iterators typedef std::list<Edge>::iterator iterator; typedef std::list<Edge>::const_iterator const_iterator; std::list<Edge>::iterator begin() {return edges.begin();} std::list<Edge>::iterator end() {return edges.end();} std::list<Edge>::const_iterator cbegin() const {return edges.begin();} std::list<Edge>::const_iterator begin() const {return edges.begin();} std::list<Edge>::const_iterator cend() const {return edges.end();} 
std::list<Edge>::const_iterator end() const {return edges.end();} // inserts group of edges to the end of a edges vector std::list<Edge>::iterator insertEdges(const std::list<Edge>::iterator beg_in, const std::list<Edge>::iterator end_in) { return edges.insert(end(), beg_in, end_in); } // erase node std::list<Edge>::iterator erase(int); bool operator==(const Node&) const; bool operator!=(const Node&) const; private: int label; std::list<Edge> edges; int score; // new, starts at 10000000, is equal to lowest cost Edge in 'edges' }; class Graph { public: Graph(); // constructor from txt file Graph(const std::string&); Node getNode(const int index) const {return nodes[index];} void addNode(const Node in) {nodes.push_back(in);} std::vector<Node>::size_type size() {return nodes.size();} // size of vector 'nodes' std::vector<Node>::size_type size() const {return nodes.size();} // size of vector 'nodes' void output() const; // prints graph void output(const int) const; // iterators typedef std::vector<Node>::iterator iterator; typedef std::vector<Node>::const_iterator const_iterator; std::vector<Node>::iterator begin() {return nodes.begin();} std::vector<Node>::iterator end() {return nodes.end();} std::vector<Node>::const_iterator begin() const {return nodes.begin();} std::vector<Node>::const_iterator end() const {return nodes.end();} Node& operator[](const int index) { return nodes[index]; } std::vector<Node>::iterator erase(const int index) { return nodes.erase(nodes.begin() + index); } private: std::vector<Node> nodes; }; bool compareCosts(const Edge&, const Edge&); bool compareScores(const Node&, const Node&); bool compareLabels(const Node&, const Node&); #endif // GRAPH_H_INCLUDED Graph.cpp: #include <iostream> #include <fstream> #include <array> #include <list> #include <string> #include <algorithm> #include "Graph.h" using std::array; using std::ifstream; using std::string; using std::endl; using std::cout; using std::list; using std::equal; // Edge // constructors 
Edge::Edge():node1(0), node2(0), cost(0) {} Edge::Edge(int node_1, int node_2, int len): node1(node_1), node2(node_2), cost(len) {} array<int, 2> Edge::getNodes() const { array<int, 2> ret = {node1, node2}; return ret; } void Edge::setNodes(const int in1, const int in2) { node1 = in1; node2 = in2; } int Edge::getCost() const { return cost; } void Edge::setCost(const int len) { cost = len; } void Edge::swapNodes() { node1 = node1 - node2; node2 += node1; node1 = node2 - node1; } // same as getNodes() above int Edge::operator[](const int index) { if (index == 0) return node1; else if (index == 1) return node2; else { try {throw;} catch(...) {cout << "edge index must be either 0 or 1" << endl;} return 1; } } bool Edge::operator==(const Edge& rhs) const { if ( (node1 == rhs.getNodes()[0]) && (node2 == rhs.getNodes()[1]) && cost == rhs.getCost() ) return true; else return false; } bool Edge::operator!=(const Edge& rhs) const { if ( !(*this == rhs) ) return true; else return false; } // Node //constructors Node::Node(): label(0), edges(0), score(10000000) {} Node::Node(const Edge& edg): label(edg.getNodes()[0]), edges(0), score(10000000) { edges.push_back(edg); } Edge Node::getEdge(const int index) const { Edge ret; list<Edge>::const_iterator iter = edges.begin(); advance(iter, index); return *iter; } void Node::addManyEdges(const vector<Edge> input) { for (size_t i = 0; i != input.size(); i++) { edges.push_back(input[i]); } } list<Edge>::iterator Node::erase(int index) { list<Edge>::iterator iter = edges.begin(); advance(iter, index); return edges.erase(iter); } bool Node::operator==(const Node& rhs) const { return label == rhs.getLabel() && equal( edges.begin(), edges.end(), rhs.begin() ); // no need to equate scores } bool Node::operator!=(const Node& rhs) const { return !(*this == rhs); } // Graph // constructors Graph::Graph(): nodes(0) {} Graph::Graph(const string& file_input): nodes(0) // constructor from file { string filename(file_input + ".txt"); string line; 
ifstream is; is.open(filename); int number_nodes; //, number_edges; is >> number_nodes; // >> number_edges; nodes.resize(number_nodes); // reserve the Node vector of size 'number_nodes' int node1, node2, cost; while (is >> node1 >> node2 >> cost) { int nodes_array[2] = {node1, node2}; for (int& node_i : nodes_array) { if (nodes[node_i - 1].size() == 1) nodes[node_i - 1].setLabel(node_i); } Edge current_edge(node1, node2, cost); nodes[node1 - 1].addEdge(current_edge); if (node1 != node2) { current_edge.swapNodes(); nodes[node2 - 1].addEdge(current_edge); } } is.close(); } // prints all input nodes void Graph::output() const { for (size_t i = 0; i != nodes.size(); ++i) { cout << "Node " << nodes[i].getLabel() << ", size = " << nodes[i].edges.size() << " with edges: "; for (size_t j = 0; j != nodes[i].edges.size(); ++j) { int node_left = nodes[i].getEdge(j).getNodes()[0]; int node_right = nodes[i].getEdge(j).getNodes()[1]; int cost = nodes[i].getEdge(j).getCost(); cout << "[" << node_left << "-" << node_right << ", " << cost << "] "; } cout << endl; } } // prints 10 neighbours around picked node void Graph::output(const int picked_node) const { for (int i = picked_node - 5; i != picked_node + 5; ++i) { cout << "Node " << nodes[i].getLabel() << ", with edges: "; for (size_t j = 0; j != nodes[i].edges.size(); ++j) { int node_left = nodes[i].getEdge(j).getNodes()[0]; int node_right = nodes[i].getEdge(j).getNodes()[1]; int cost = nodes[i].getEdge(j).getCost(); int score = nodes[node_right - 1].getScore(); cout << "[" << node_left << "-" << node_right << ", " << cost << ", " << score << "] "; } cout << endl; } } bool compareCosts(const Edge& a, const Edge& b) { return a.getCost() < b.getCost(); } bool compareScores(const Node& a, const Node& b) { return a.getScore() > b.getScore(); } bool compareLabels(const Node& a, const Node& b) { return a.getLabel() > b.getLabel(); } BFS implementation for Kruskal (alternative to Disjoint Set Kruskal implementation) 
BreadthFirstSearch.h #ifndef BREADTHFIRSTSEARCH_H_INCLUDED #define BREADTHFIRSTSEARCH_H_INCLUDED #include <limits> #include "Graph.h" int const infinity = std::numeric_limits<int>::infinity(); bool breadthFirstSearch(const Graph&, const int, const int); #endif // BREADTHFIRSTSEARCH_H_INCLUDED BreadthFirstSearch.cpp #include <iostream> #include <queue> #include <vector> #include <map> #include <algorithm> #include "BreadthFirstSearch.h" using std::cout; using std::endl; using std::cin; using std::find_if; using std::vector; using std::queue; using std::map; bool breadthFirstSearch(const Graph& G, const int start_node_label, const int target_node_label) { // define type for explored/unexplored Nodes enum is_visited {not_visited, visited}; map<int, is_visited> node_is_visited; for (Graph::const_iterator iter = G.begin(); iter != G.end(); ++iter) node_is_visited[iter->getLabel()] = not_visited; Graph::const_iterator start_node_iter = find_if(G.begin(), G.end(), [=](const Node& i){return i.getLabel() == start_node_label;}); if ( start_node_iter == G.end() ) return false; node_is_visited[start_node_label] = visited; Node next_node; next_node = *start_node_iter; // breadth-first algorithm runs based on queue structure queue<Node> Q; Q.push(next_node); // BFS algorithm while (Q.size() != 0) { // out of main loop if all nodes searched->means no path is present Node current_node = Q.front(); Node linked_node; // variable hosting node on other end for (size_t i = 0; i != current_node.size(); ++i) { // explore nodes linked to current_node by an edge int linked_node_label = current_node.getEdge(i).getNodes()[1]; Graph::const_iterator is_linked_node_in_G = find_if(G.begin(), G.end(), [=](const Node& a){return a.getLabel() == linked_node_label;}); if ( is_linked_node_in_G != G.end() ) { // check linked_node is in G linked_node = *is_linked_node_in_G; //G_tot.getNode(linked_node_label - 1); if (node_is_visited[linked_node_label] == not_visited) { // check if linked_node is already 
in the queue node_is_visited[linked_node_label] = visited; Q.push(linked_node); // if not, add it to the queue // cout << "current " << current_node.getLabel() // for debug // << " | linked = " << linked_node_label + 1 // << " | path length = " << dist[linked_node_label] << endl; if (linked_node_label == target_node_label) // end search once target node is explored return true; } } else { if (linked_node_label == target_node_label) // end search once target node is explored return false; } } Q.pop(); } return false; } DisjointSet.h #ifndef DISJOINTSET_H_INCLUDED #define DISJOINTSET_H_INCLUDED #include "Graph.h" class DisjointSet { public: DisjointSet(const size_t); ~DisjointSet(); DisjointSet& operator= (const DisjointSet&); int find(const Node&); void unionNodes(const Node&, const Node&); int get(int index) {return *leaders[index];} private: size_t size; // graph size needed for allocation of pointer data members below int* base; // array of int each Node of the graph has its leader int** leaders; // array of pointers to int, allows to reassign leaders to Nodes after unions int find_int(int); // auxiliary to 'find' above DisjointSet(const DisjointSet&); // copy constructor forbidden }; #endif // DISJOINTSET_H_INCLUDED DisjointSet.cpp (here is where advice would be most appreciated) // Union-find structure (lazy unions) #include "DisjointSet.h" DisjointSet::DisjointSet(size_t in): size(in), base(new int[in]), leaders(new int*[in]) { for (size_t i = 1; i != in + 1; ++i) { base[i - 1] = i; leaders[i - 1] = &base[i - 1]; } } DisjointSet::~DisjointSet() { delete[] base; delete[] leaders; } DisjointSet& DisjointSet::operator= (const DisjointSet& rhs) { if (this == &rhs) return *this; // make sure you aren't self-assigning if (base != NULL) { delete[] leaders; // get rid of the old data delete[] base; } // "copy constructor" from here size = rhs.size; base = new int[size]; leaders = new int*[size]; base = rhs.base; for (size_t i = 0; i != size; ++i) leaders[i] = 
&base[i]; return *this; } // auxiliary to find: implements the recursion int DisjointSet::find_int(int leader_pos) { int parent(leader_pos); if (leader_pos != *leaders[leader_pos - 1]) parent = find_int(*leaders[leader_pos - 1]); return parent; } // returns leader to input Node int DisjointSet::find(const Node& input) { int parent( input.getLabel() ); if (input.getLabel() != *leaders[input.getLabel() - 1]) parent = find_int(*leaders[input.getLabel() - 1]); return parent; } // merges sets by assigning same leader (the lesser of the two Nodes) void DisjointSet::unionNodes(const Node& a, const Node& b) { if (find(a) != find(b)) { if (find(a) < find(b)) leaders[find(b) - 1] = &base[find(a) - 1]; else leaders[find(a) - 1] = &base[find(b) - 1]; } } KruskalClustering.h #ifndef KRUSKALCLUSTERING_H_INCLUDED #define KRUSKALCLUSTERING_H_INCLUDED #include <vector> #include "Graph.h" #include "DisjointSet.h" int clusteringKruskalNaive(const Graph&, const std::vector<Edge>&, int); int clusteringKruskalDisjointSet(const Graph& graph0, const std::vector<Edge>& edges, int); #endif // KRUSKALCLUSTERING_H_INCLUDED KruskalClustering.cpp // Kruskal MST algorithm. Implementation specific to (k=4)-clustering // -naive (with breadth-first-search to check whether new edge creates a cycle); cost: O(#_edges * #_nodes) // -and union-find implementations. 
Cost: O(#_edges*log2(#_nodes)) #include <iostream> #include <string> #include <vector> #include <algorithm> //std::find_if #include "BreadthFirstSearch.h" #include "KruskalClustering.h" using std::cout; using std::endl; using std::string; using std::vector; using std::find_if; int clusteringKruskalNaive(const Graph& graph0, const vector<Edge>& edges, int k) { Graph T; // Minimum Spanning Tree vector<Edge>::const_iterator edges_iter = edges.begin(); int sum_costs(0), number_of_clusters( graph0.size() ); // keep track of overall cost of edges in T, and of clusters while (number_of_clusters >= k) { // find out if first node of edge is already in T Graph::iterator is1_in_T = find_if(T.begin(), T.end(), [=] (Node& a) {return a.getLabel() == graph0.getNode(edges_iter->getNodes()[0] - 1).getLabel();}); bool is1_in_T_flag; // needed because T gets increased and thus invalidates iterator is1_in_T Node* node_1 = new Node(*edges_iter); // no use of pointer here, it creates a new Node anyway, can't move Nodes to T if ( is1_in_T == T.end() ) { // node_1 not in T so we add it T.addNode(*node_1); number_of_clusters--; // node_1 is not its own cluster any more delete node_1; // node_1 copied to T so ... node_1 = &(T[T.size() - 1]); // ... 
it now points there sum_costs += (*edges_iter).getCost(); is1_in_T_flag = false; } else { // node_1 is in T already delete node_1; // if so, just update the pointer node_1 = &(*is1_in_T); is1_in_T_flag = true; } // perform BFS to check for cycles bool check_cycles = breadthFirstSearch(T, edges_iter->getNodes()[0], edges_iter->getNodes()[1]); // create an identical edge, but with nodes positions swapped Edge swapped_edge = *edges_iter; swapped_edge.swapNodes(); // find out if second node of edge is already in T Graph::iterator is2_in_T = find_if( T.begin(), T.end(), [=] (Node& a) {return a.getLabel() == graph0.getNode(edges_iter->getNodes()[1] - 1).getLabel();}); // (either node1 or 2 not in T, or both, or both present but in different clusters of T) if (!check_cycles) { // if edges_iter creates no cycle when added to T if (is1_in_T_flag){ // if node_1 was already present in T (*node_1).addEdge(*edges_iter); // just add new edge to node_1 list of edges sum_costs += (*edges_iter).getCost(); } else number_of_clusters++; // if node_1 not present, it means number_of_cl was decreased above ... number_of_clusters--; // ... 
and number_of_cl can decrease just by one, if adding an edge to T if ( is2_in_T != T.end() ) // node_2 already in T: just update its list of edges (*is2_in_T).addEdge(swapped_edge); else { // node_2 not in T, so add it Node node_2(swapped_edge); T.addNode(node_2); } } else { // cycle created by *edges_iter if (!is1_in_T_flag) // in case the cycle happened just after adding node_1: (*is2_in_T).addEdge(swapped_edge); // add edge to node_2, num_clusters already updated by node_1 } if (number_of_clusters >= k) // advance to next Edge if num_clusters > k edges_iter++; // debug // T.output(); // cout << "next edge: (" << (*edges_iter).getNodes()[0] << "-" // << (*edges_iter).getNodes()[1] << ") " << endl; // cout << "clustering: " << number_of_clusters << endl; } cout << "Sum of MST lengths is: " << sum_costs << endl; return (*edges_iter).getCost(); } // same algorithm, implemented with Union-find structure int clusteringKruskalDisjointSet(const Graph& graph0, const vector<Edge>& edges, int k) { DisjointSet disjoint_set_graph0( graph0.size() ); // create Union-find structure Graph T; vector<Edge>::const_iterator edges_iter = edges.begin(); int sum_costs(0), number_of_clusters( graph0.size() ); while ( number_of_clusters >= k ) { // if nodes in Edge have not the same leader in the disjoint set, then no loop is created, and T can add the edge if ( disjoint_set_graph0.find(graph0.getNode(edges_iter->getNodes()[0] - 1)) != disjoint_set_graph0.find(graph0.getNode(edges_iter->getNodes()[1] - 1)) ) { sum_costs += (*edges_iter).getCost(); number_of_clusters--; // no cycle created so the edge will be added to T // look for node_1 in T Graph::iterator is1_in_T = find_if(T.begin(), T.end(), [=] (Node& a) {return a.getLabel() == graph0.getNode(edges_iter->getNodes()[0] - 1).getLabel();}); if ( is1_in_T == T.end() ) { // if node_1 not in T add it Node node1(*edges_iter); T.addNode(node1); } else // if node_1 already in T only add to it this edge (*is1_in_T).addEdge(*edges_iter); Edge 
swapped_edge = *edges_iter; swapped_edge.swapNodes(); // look for node_2 in T Graph::iterator is2_in_T = find_if(T.begin(), T.end(), [=] (Node& a) {return a.getLabel() == graph0.getNode(edges_iter->getNodes()[1] - 1).getLabel();}); if ( is2_in_T == T.end() ) { // same as for node_1 Node node2(swapped_edge); T.addNode(node2); } else (*is2_in_T).addEdge(swapped_edge); // merge the 2 nodes' sets: update their disjointed set leaders disjoint_set_graph0.unionNodes( graph0.getNode(edges_iter->getNodes()[0] - 1), graph0.getNode(edges_iter->getNodes()[1] - 1) ); } if (number_of_clusters >= 4) edges_iter++; //debug // T.output(); // cout << "next edge: (" << (*edges_iter).getNodes()[0] << "-" // << (*edges_iter).getNodes()[1] << ") " << endl; // cout << "clustering: " << number_of_clusters << endl; // for (size_t i = 0; i != graph0.size(); ++i) // cout << disjoint_set_graph0.get(i) << " "; // cout << endl; } cout << "Sum of MST lengths is: " << sum_costs << endl; return (*edges_iter).getCost(); } Lastly, main.cpp : /* A max-spacing k-clustering program based on Kruskal MST algorithm. The input file lists a complete graph with edge costs. 
clusters k = 4 assumed.*/ #include <iostream> #include <fstream> #include <string> #include <vector> #include <algorithm> // std::sort; #include "Graph.h" #include "KruskalClustering.h" using std::cout; using std::endl; using std::string; using std::ifstream; using std::vector; using std::sort; int main(int argc, char** argv) { cout << "Reading list of edges from input file ...\n" << endl; // read graph0 from input file string filename(argv[1]); Graph graph0(filename); // graph0.output(); // debug cout << endl; // re-read input file and create a list of all edges in graph0 ifstream is(filename + ".txt"); int nodes_size; is >> nodes_size; vector<Edge> edges; int node1, node2, cost; while (is >> node1 >> node2 >> cost) { Edge current_edge(node1, node2, cost); edges.push_back(current_edge); } is.close(); // sort the edge list by increasing cost cout << "Sorting edges ...\n" << endl; sort(edges.begin(), edges.end(), compareCosts); // for (vector<Edge>::iterator iter = edges.begin(); iter != edges.end(); ++iter) // debug // cout << (*iter).getNodes()[0] << " " << (*iter).getNodes()[1] << " " << (*iter).getCost() << endl; cout << "Kruskal algorithm: Computing minimal distance between clusters when they are reduced to 4 ...\n" << endl; int k = 4; // number of clusters desired // pick implementation, comment the other //int clustering_min_dist = clusteringKruskalNaive(graph0, edges, k); int clustering_min_dist = clusteringKruskalDisjointSet(graph0, edges, k); cout << "k = " << k << " clusters minimal distance is: " << clustering_min_dist << endl; return 0; } Edit: added input .txt file For completeness, here below an input file in the format accepted by the program. The file contains an undirected Graph, first line is the number of Nodes, the others are Edges(node1, node2, distance). The file should be a .txt . 
8 1 2 50 1 3 5 1 4 8 1 5 47 1 6 3 1 7 42 1 8 36 2 3 60 2 4 34 2 5 6 2 6 27 2 7 62 2 8 61 3 4 58 3 5 53 3 6 37 3 7 54 3 8 12 4 5 63 4 6 29 4 7 52 4 8 44 5 6 1 5 7 16 5 8 6 6 7 45 6 8 52 7 8 60 Answer: Use your own typedefs I see this code: typedef std::list<Edge>::iterator iterator; ... std::list<Edge>::iterator begin() {return edges.begin();} You should use your own typedefs, so the second line becomes: iterator begin() {return edges.begin();} Apart from less typing for yourself, it also avoids possible mistakes where you return a different type than the one you typedef'ed, and if you ever want to change the type you only have to do it in one place. Simplify your classes Looking at class Edge, I see that most member functions allow trivial getting and setting of the private member variables. Why not make those member variables public, and allow them to be accessed directly? Also, if you have a getNodes() that returns an array and an operator[] to access the nodes as an array, maybe you should store node1 and node2 as a std::array<int, 2> nodes to begin with. Although if you want edges to be directed, I would keep them separate and name them from and to. Furthermore, use default member initializers to avoid having to write any constructor: class Edge { public: bool operator==(const Edge &) const; bool operator!=(const Edge &) const; std::array<int, 2> nodes{}; int cost{}; }; Now you can do: Edge e{1, 2, 3}; e.nodes = {4, 5}; e.cost = 6; e.nodes[0] = 7; You can still add some convenience functions as needed, such as swapNodes(). Also, note that in C++20, with the above class, you can let the compiler generate default comparison operators for you in one line, like so: class Edge { public: ... auto operator<=>(const Edge&) const = default; }; The same goes for class Node and class Graph: avoid making member variables private to then just write lots of wrapper functions to allow them to be accessed anyway. 
Avoid unnecessary temporary variables In Edge::getNodes(), there is no need to use a temporary variable, just write: array<int, 2> Edge::getNodes() const { return {node1, node2}; } Of course this function is not necessary at all if you just store them as a public array to begin with. Use std::swap() instead of writing your own Instead of using the hand-written trick to swap two variables (did you check that you don't run into undefined behaviour due to signed integer overflow?), there's a function in the standard library to do that for you: void Edge::swapNodes() { std::swap(node1, node2); } Don't try {throw;} If you want to signal an error using an exception, just throw an exception of the proper type that contains the error message, like so: int Edge::operator[](const int index) { if (index == 0) return node1; else if (index == 1) return node2; else throw std::runtime_error("edge index must be either 0 or 1"); } What you are doing is too complicated (5 statements on 3 lines of code instead of a single statement), doesn't allow the caller to handle the error, and is illegal: you are only allowed to use throw without an argument if you are already in exception handling code. Don't if (x) return true else return false; If you already have a boolean expression, you don't have to use an if-then-else statement to return exactly the same boolean value, just write: bool Edge::operator==(const Edge& rhs) const { return node1 == rhs.node1 && node2 == rhs.node2 && cost == rhs.cost; } You can use initializer lists in member value initializers Ok that sounds like nonsense, but what I mean is that you can construct a std::list with an initializer list to set the initial list members, and do that when doing member variable initialization in a constructor. 
For example: Node::Node(const Edge& edg): label(edg.getNodes()[0]), score(10000000), // up to now, same as before edges{edg} // initializer list passed to constructor of edges { } Note that you should combine this with default member initializers, so it becomes: class Node { ... int label{}; std::list<Edge> edges; int score{10000000}; }; ... Node::Node() = default; Node::Node(const Edge &edg): label(edg.getNodes()[0]), edges{edg} {} Iterating over the list of edges A std::list does not have a random access iterator. You have noticed that already, and that is why in Node::getEdge(), you are using std::advance() to iterate to the element at the given index. You should know that this is quite slow, since it has to follow pointers. If you really need random access, it would be much faster to store the list of edges in a std::vector instead. But, in most cases you are iterating over the list of edges, and do some operation on each of them. In that case, don't use getEdge(), as it will get slower and slower the bigger the index is, but use list iterators directly, or even better, use range-for if possible. For example: void Graph::output() const { for (auto node: nodes) { std::cout << "Node " << node.getLabel() << ", size = " << node.edges.size() << " with edges:"; for (auto edge: node.edges) { std::cout << " [" << edge[0] << "-" << edge[1] << ", " << edge.getCost() << "]"; } std::cout << "\n"; } } Use "\n" instead of std::endl Prefer using "\n" instead of std::endl; the latter is equivalent to the former, except that it also forces a flush of the output buffer, which is often unnecessary and can slow down your program. Writing generic output functions Don't hard-code the use of std::cout in your output functions, but instead take a parameter that tells it what output stream to use, like so: void Graph::output(std::ostream &out) const { for (auto node: nodes) { out << "Node " << node.getLabel() << ", size = " << node.edges.size() << " with edges:"; ... 
Also consider creating an operator<<(), like so: class Graph { ... void output(std::ostream &out) const; friend std::ostream &operator<<(std::ostream &out, const Graph &graph) { graph.output(out); return out; } }; Now that you have overloaded operator<<() for your class, you can just write: Graph graph; ... std::cout << graph; Writing generic input functions The same goes for functions reading input as well. You have a constructor that takes a filename as an argument, but what if I have my graph stored in memory instead of in a file? Use a std::istream parameter, so that anything that can act as an input stream can be used to read the graph from: class Graph { ... Graph(std::istream &is); }; ... Graph::Graph(std::istream &is) { int number_nodes; is >> number_nodes; ... } And then you just use it like so: std::ifstream is(argv[1]); Graph graph0(is); Do proper error checking when reading and writing Always be prepared that something can happen while reading and writing files. Maybe the disk is corrupt, you have run out of disk space, the permissions are incorrect, the file is on a network drive and the network is down, and so on. Also be prepared that the file might contain unexpected data. Your while-loop in the constructor of Graph that reads from a stream will exit either if it reached the end of the stream, or if there was an error. To ensure that it read everything correctly, check if is.eof() returns true. If not, I suggest you throw a std::runtime_error. When writing to a stream, whether it is a file or std::cout, check at the end of writing everything if out.good() returns true. If not, throw an exception again. About using C-style pointer arrays Yes, you can use those, but now you have to manually allocate and free the arrays. 
It's fine as an exercise, but an even better exercise is to convert those into std::vectors :)

Avoid new and delete in general

Manually calling new and delete is hardly ever necessary anymore, and is often a sign something should be managed by an appropriate container, or should not be a pointer in the first place. As an example of the latter, in clusteringKruskalNaive(), there is no need to use new to create the temporary node_1. It is deleted immediately afterwards, possibly without even having been used. Instead write:

bool is1_in_T_flag = is1_in_T != T.end();
Node *node_1;
if (is1_in_T_flag) {
    T.addNode(Node(*node_1));
    number_of_clusters--;
    node_1 = ...
} else {
    node_1 = &*is1_in_T;
}

Try to make the code more readable

Especially in KruskalClustering.cpp, the code is very hard to read. Try to shorten the lines, by using auto where possible, simplifying expressions, giving clear and concise names to things, splitting off complex code into their own functions, and so on. Be consistent in how you initialize variables; I see both int x = 1 and int x(1). Also remove commented-out code; you should use a revision control system like Git to store previous versions of your code in.
Here is how it might look:

int clusteringKruskalDisjointSet(const Graph& graph, const vector<Edge>& edges, int k) {
    // Start with every node being in its own cluster
    auto number_of_clusters = graph.size();
    DisjointSet disjoint_set(number_of_clusters);
    int min_cost = INT_MAX;
    int sum_costs = 0;

    // Iterate over the given edges until only k distinct clusters are left
    for (auto edge_iter = edges.begin(); number_of_clusters >= k; ++edge_iter) {
        auto &edge = *edge_iter;

        // Skip the edge if it is within a single cluster
        if (disjoint_set.find(graph.nodes[edge.nodes[0] - 1]) == disjoint_set.find(graph.nodes[edge.nodes[1] - 1])) {
            continue;
        }

        // This edge joins two clusters
        min_cost = std::min(min_cost, edge.cost);
        sum_costs += edge.cost;
        number_of_clusters--;
        disjoint_set.unionNodes(graph.nodes[edge.nodes[0] - 1], graph.nodes[edge.nodes[1] - 1]);
    }

    std::cout << "Sum of MST lengths is: " << sum_costs << "\n";
    return min_cost;
}

I am left wondering: why keep track of Graph T in this function? It didn't do anything, so I could remove all code related to it. And why subtract 1 from the node indices here? Why loop while number_of_clusters >= k? That will result in k - 1 clusters at the end of the algorithm.
{ "domain": "codereview.stackexchange", "id": 39398, "tags": "c++, breadth-first-search, clustering, union-find" }
How much magnification would I get by this Dall-Kirkham telescope?
Question: I'd like to know how much magnification I would get by this telescope. The main specs are as follows:

Focal length: 2563 mm
Focal ratio: F/7.2
Number of lenses: 2
Optical diameter of Primary mirror: 355.6 mm
Diameter of Secondary mirror: 165 mm

Based on this simple class material by NASA (for kids?), the magnification may be computed as the ratio of the focal length of the objective and that of the eyepiece. However, the datasheet of this telescope does not provide the focal length of an eyepiece. In the first place, I'm doubtful that this simple computation scheme applies to this telescope with mirrors and lenses... Does anyone know more about how to compute the magnification of this kind of telescope? Answer: Telescopes such as this are often described as "astrographs" since they are designed specifically for astrophotography, and specifically astrophotography at the prime focus rather than visual use. The advanced optical engineering in such a telescope is so that a relatively large CCD sensor can be used at the prime focus and have a "flat field" (uniformly in focus) and be uniformly illuminated across the sensor. Prime focus astrophotography doesn't use an eyepiece like visual observing; what determines the magnification is the image (plate) scale at the focus, expressed in something like degrees per millimetre. That scale is determined entirely by the focal length, since one millimetre at the focal plane subtends an angle of arctan(1 mm / focal length).
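To put a number on the "degrees per millimetre" idea, here is a quick Python sketch (my own illustration, not from the answer; the function name is made up) of the plate-scale arithmetic for this telescope's 2563 mm focal length:

```python
import math

def plate_scale_arcsec_per_mm(focal_length_mm):
    # One millimetre at the prime focus subtends arctan(1 mm / f) radians,
    # converted here to arcseconds.
    return math.degrees(math.atan(1.0 / focal_length_mm)) * 3600.0

print(round(plate_scale_arcsec_per_mm(2563.0), 1))  # ~80.5 arcsec per mm
```

So a sensor 10 mm across would span roughly 13 arcminutes of sky, regardless of which camera is attached.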
{ "domain": "astronomy.stackexchange", "id": 6142, "tags": "telescope" }
Dynamically-resizable array implementation, for use in a game-dev-related library
Question: I've been working on a library that includes a list-like structure, which I use internally multiple times to store different types of structs. However, I'm not entirely confident with my ability to safely manage memory in C yet, so I'd appreciate if someone could let me know if there's anything wrong with my code (it seems to work just fine.) This is the code for the list structure (with some bits from the associated header put at the top for simplicity.)

// Put this at the top of structs to make it possible to use them in a cg_mem_list
#define CG_ITEMIZABLE int cg_index;

// Free extra spaces after this many exist
#define CG_MEM_EXTRAS 10

typedef struct cg_mem_item {
    CG_ITEMIZABLE
} cg_mem_item;

typedef struct cg_mem_list {
    cg_mem_item** items;
    int num, capacity, itemsize;
} cg_mem_list;

void cg_new_mem_list (cg_mem_list* list, int itemsize) {
    list->items = 0;
    list->num = 0;
    list->capacity = 0;
    list->itemsize = itemsize;
}

cg_mem_item* cg_mem_list_add (cg_mem_list* list) {
    cg_mem_item* mitem = malloc (list->itemsize);
    if (list->capacity - list->num < 1) {
        list->capacity++;
        list->items = realloc (list->items, list->capacity * sizeof (cg_mem_item*));
    }
    mitem->cg_index = list->num;
    list->items[list->num++] = mitem;
    return mitem;
}

void cg_mem_list_remove (cg_mem_list *list, cg_mem_item *item) {
    list->num--;
    int index = item->cg_index;
    if (index < list->num) {
        list->items[index] = list->items[list->num];
        list->items[index]->cg_index = index;
    }
    if (list->capacity - list->num > CG_MEM_EXTRAS) {
        list->capacity = list->num;
        list->items = realloc (list->items, list->capacity * sizeof (cg_mem_item*));
    }
    free (item);
}

void cg_destroy_mem_list (cg_mem_list* list) {
    free (list->items);
}

This is how I use it:

// The struct being stored in the list
typedef struct some_data_type {
    CG_ITEMIZABLE
    int random_data;
} some_data_type;

int main (void) {
    // Declare and define the list
    cg_mem_list data_list;
    cg_new_mem_list (&data_list, sizeof(some_data_type));

    // Allocates a new item and sets it up
    some_data_type* first_item = (some_data_type*)cg_mem_list_add(&data_list);
    first_item->random_data = 12;

    // <More additions and operations>

    // This frees the item
    cg_mem_list_remove(&data_list, (cg_mem_item*)&first_item);
}

I'd also like to note that I didn't implement any checks on malloc or realloc because this is in a game-dev-related library, and I figure if it can't allocate any new memory it's probably gonna crash and burn either way. If you have any suggestions for how to gracefully handle this (other than just straight-up throwing an error and aborting, which is pretty much the same as crashing anyways for this sort of application honestly) I'd appreciate it.

Answer: Use a memory checker

There's a clear bug, identified by running the test program under Valgrind:

==21190== Invalid free()
==21190==    at 0x48369AB: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==21190==    by 0x109320: cg_mem_list_remove (215759.c:64)
==21190==    by 0x109388: main (215759.c:94)
==21190==  Address 0x1fff0006a8 is on thread 1's stack
==21190==  in frame #2, created by main (215759.c:82)

This is where a local variable, not allocated with malloc(), has been passed to free(). There's also a leak:

==21190== 16 (8 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 2 of 2
==21190==    at 0x48356AF: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==21190==    by 0x4837DE7: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==21190==    by 0x1091F8: cg_mem_list_add (215759.c:36)
==21190==    by 0x109366: main (215759.c:88)

This is only partially mitigated if we add a call to cg_destroy_mem_list, because the individual items still don't get reclaimed (that's what the "8 indirect" refers to).

Other issues

We need to include <stdlib.h>, to use malloc() and family.
We're completely missing the (necessary) error checking when we call malloc() and realloc(), both of which can return null pointers. Remember to check the result of realloc() before overwriting the old pointer (which is still valid if realloc() failed).
{ "domain": "codereview.stackexchange", "id": 33928, "tags": "c, array, memory-management" }
Is there any electronic component to water conductivity?
Question: Answers to Decrease in temperature of a aqueous salt solution decreases conductivity indicate that the electrical conductivity of salt solutions arises from the mobility of ionic species and therefore the temperature dependence of conductivity is related to viscosity. Question: Is there any measured or predicted electronic component to water conductivity as well, where the charge carriers are electrons rather than ions? This could take place via charge exchange (migration of bound electrons), or perhaps even via solvated electrons, or some other mechanism. In this case I'm interested in both pure water and salt solutions. Answer: I agree with the commenters that electronic conduction is very unlikely, but it's worth going through some possible mechanisms:

1. Actual solvated electrons: As others have noted, free electrons would be expected to react rapidly with protons, even in a basic solution, so this changes quickly to a scenario of sequential electron transfer between protons; let's do that next.

2. Sequential electron transfer between $\ce{H.}$ and $\ce{H+}$: Let's assume the solution is strongly acidic, so protons are abundant, and a proton gets reduced at the cathode to a hydrogen atom radical. Based on the bond dissociation energies, abstraction of $\ce{H.}$ from water to form $\ce{H2}$ and $\ce{HO.}$ is slightly unfavorable, so the hydrogen radical would be preferred over the hydroxyl radical. (The H-H bond formed has a BDE of ~$\pu{105 kcal/mol}$, while the $\ce{H-OH}$ bond broken has a BDE of ~$\pu{120 kcal/mol}$.) The problem is the rate of quenching by reaction of two hydrogen atom radicals to form $\ce{H2}$ (which is how water electrolysis produces hydrogen gas). I couldn't find a rate constant for that, but there is a published rate constant for recombination of hydroxyl radicals in water that is around $\pu{10^{10} M-1 s-1}$.
As you might expect, that's essentially diffusion limited, so the rate constant of hydrogen atom recombination is going to be at least as high. If we optimistically assume that transfer of the electron from $\ce{H.}$ to $\ce{H+}$ has a comparable rate constant, you would still have to have a very low concentration of radical and very short path to travel in order for an electron to make it from a cathode to an anode, but it doesn't seem theoretically impossible. The third possibility would be sequential transfer of electrons from $\ce{HO-}$ to $\ce{HO.}$. In a strongly basic solution, this also seems like a theoretical possibility given a very short path and a very low concentration of radical, assuming there are no other molecules in solution that can quench the radical. I'm not suggesting that either of these theoretical possibilities actually ever occurs, just that these are the mechanisms that seem most likely to me.
{ "domain": "chemistry.stackexchange", "id": 17696, "tags": "electrochemistry, water, aqueous-solution, electrons, conductivity" }
Physics interpretation of Sobolev space
Question: What is the physics interpretation of the Sobolev space $H^{s,p}:=\left\{u\in L^p(\mathbb{R}^n):\mathcal{F}^{-1}((1+|\cdot|^2)^{s/2}\mathcal{F}(u))\in L^p(\mathbb{R}^n)\right\}$, $s\geq 0,\, 1<p<\infty$? For example, if $n=3,\,s=2,\,p=2$, is there an interpretation? I ask this to find out whether the equation $\mathcal{F}^{-1}((1+|\cdot|^2)^{s/2}\mathcal{F}(u))=g$ with $g\in L^2(\mathbb{R}^n)$ has any physical interpretation or application. Answer: The (non-homogeneous) Sobolev spaces $H^{s,p}$ are the subspaces of $L^p$ of functions that admit $s$ weak derivatives belonging to the same Lebesgue space, where a weak derivative is a derivative in the sense of distributions. The foremost ones of physical significance are the spaces $H^s=H^{s,2}$ whose base space is $L^2$. This is due to the fact that $L^2$ is the prototypical Hilbert space of nonrelativistic quantum mechanics. As a concrete example, $H^2$ is the natural domain of definition, and of self-adjointness, of the Laplace operator $-\Delta$, that is, the kinetic energy of a quantum particle. Another example may be given by nonlinear Schrödinger equations of the type $$i\partial_t \psi = -\Delta \psi + \lvert \psi\rvert^{p-1} \psi \; .$$ Such equations are crucial for describing the effective behavior of condensed-matter systems, such as the motion of atoms in a BEC. The existence and uniqueness of solutions to such equations is typically investigated and proved in Sobolev spaces $H^s$.
{ "domain": "physics.stackexchange", "id": 68689, "tags": "mathematical-physics, mathematics" }
Can nfa consume more than one letter at a time
Question: If I have an NFA/DFA and I expect inputs of 00, 01, 10, 11, can I read the input in groups of 2 binary digits at a time, like the example shown? Answer: Yes, as long as your alphabet is defined as a set of distinct pairs of binary digits.
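To sketch what that looks like in practice, here is a toy Python DFA (entirely my own example; the states and the accepted language are made up) whose alphabet consists of the four two-digit pairs, so each transition consumes two bits at once:

```python
# A toy DFA over the alphabet {"00", "01", "10", "11"}: each transition
# consumes one two-bit pair as a single symbol.  This particular machine
# accepts exactly the inputs containing the symbol "11" somewhere.
ALPHABET = {"00", "01", "10", "11"}
TRANSITIONS = {
    ("q0", "00"): "q0", ("q0", "01"): "q0", ("q0", "10"): "q0", ("q0", "11"): "q1",
    ("q1", "00"): "q1", ("q1", "01"): "q1", ("q1", "10"): "q1", ("q1", "11"): "q1",
}

def accepts(symbols):
    state = "q0"
    for symbol in symbols:
        assert symbol in ALPHABET, "not a two-bit symbol"
        state = TRANSITIONS[(state, symbol)]
    return state == "q1"   # q1 is the only accepting state

print(accepts(["00", "11", "10"]), accepts(["00", "01"]))  # True False
```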
{ "domain": "cs.stackexchange", "id": 15116, "tags": "automata, finite-automata" }
Question on Peskin & Schroeder's QFT: Noether's Theorem; finding conserved currents
Question: So, I am trying to learn how to find conserved currents using what I see in P&S's "Intro to QFT"; sec.2.2 pg.17-18. Particularly, I am working through the example in the text for the complex Klein-Gordon field subject to the transformation:$$\phi \rightarrow e^{i\alpha }\phi. \tag{p.18}$$ It seems that the Noether currents can be found utilizing equation 2.12 for $j^\mu (x)$:$$j^\mu (x)={\partial \mathcal {L} \over \partial (\partial_\mu \phi )}\Delta \phi - \mathcal{J}^\mu.\tag{2.12}$$ So far as I can tell $\partial_\mu\mathcal {J}^\mu$ can be found from calculating:$\Delta \mathcal{L}$, since these two are set equal on page 17. Thus, for the given Lagrangian density we have:$$\partial_\mu\mathcal{J}^\mu=\Delta \mathcal {L}=\partial_\mu ({\partial \mathcal {L} \over \partial (\partial_\mu \phi )} \Delta \phi )+\partial_\mu({\partial \mathcal {L} \over \partial (\partial_\mu \phi^* )}\Delta\phi^*)$$ $$=i\alpha\partial_\mu[(\partial^\mu\phi^*)\phi-(\partial^\mu\phi)\phi^*].$$ However, if I insert this into the above expression for $j^\mu(x)$ I will get zero. What's going on? Answer: In the notation of P&S ${\cal J}^{\mu}$ denotes the improvement term in the Noether current $j^{\mu}$. The improvement term ${\cal J}^{\mu}$ is used when a transformation is only a quasi-symmetry (and not a strict symmetry) of the Lagrangian density. The Lagrangian density for the complex Klein-Gordon field has a strict $U(1)$ symmetry, so ${\cal J}^{\mu}\!=\!0$ is zero in this case.
{ "domain": "physics.stackexchange", "id": 99041, "tags": "lagrangian-formalism, symmetry, field-theory, noethers-theorem, klein-gordon-equation" }
How to find the angular frequency of a simple pendulum using this method?
Question: I am trying to derive an equation for the angular frequency of a simple pendulum. Since the torque on the bob is only due to the horizontal component of mg, I can say that $$-mg(\sin\theta)\,l = \tau = I\frac{d^2\theta}{dt^2},$$ and since $I = ml^2$, $$\frac{-g\sin\theta}{l} = \frac{d^2\theta}{dt^2}.$$ I am stuck here. How do I reach the angular frequency from here? (Note: I am new to this. I only have a basic idea regarding this (the maths, I mean). Kindly go easy on me.) Answer: Since you say that you're new to this, my answer will be quite basic. The general formula for the angular velocity of a simple pendulum isn't simple to derive at all. However, it is possible to derive its angular frequency using the small angle approximation. In this approximation, the angle $\theta$ (in radians) is very small: $$\theta \ll 1 \implies \sin \theta \approx \theta.$$ In this case, the differential equation becomes: $$\frac{\text{d}^2\theta}{\text{d} t^2} = - \frac{g}{l}\, \sin(\theta) \approx - \frac{g}{l}\, \theta.\label{1} \tag{1}$$ This is just a rewriting of a very well known equation (perhaps the most well known in Physics?), that of Simple Harmonic Motion: $$\frac{\text{d}^2 x}{\text{d} t^2} = - \omega^2 x.$$ There are many methods to show that the general solution to this equation can be written in terms of sines and cosines as $$x(t) = A \sin(\omega t) + B \cos(\omega t).$$ (You can plug this solution into the equation above and see that it does indeed satisfy the equation.) You can now see that the quantity I defined as $\omega$ above represents the angular frequency. I now leave it to you to look at Equation (\ref{1}) and figure out what the angular frequency is.
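As a numerical sanity check of the small-angle result (my own addition, not part of the answer; g, l, the initial angle and the step size are arbitrary choices), one can integrate $\ddot\theta = -(g/l)\sin\theta$ directly and compare the measured quarter period with $\tfrac{1}{4}\cdot 2\pi\sqrt{l/g}$:

```python
import math

# Euler-Cromer integration of theta'' = -(g/l) sin(theta), starting from
# rest at a small angle; the first zero crossing of theta marks a quarter
# period.
g, l = 9.81, 1.0
theta, omega = 0.05, 0.0   # 0.05 rad counts as "small"
dt, t = 1e-5, 0.0

while theta > 0.0:
    omega -= (g / l) * math.sin(theta) * dt
    theta += omega * dt
    t += dt

predicted = 0.25 * 2.0 * math.pi * math.sqrt(l / g)
print(t, predicted)  # the two agree to well under 1%
```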
{ "domain": "physics.stackexchange", "id": 80620, "tags": "newtonian-mechanics, harmonic-oscillator, oscillators" }
Why do functions containing `await` need to be `async`?
Question: From what I understand, async/await enables you to avoid the "callback pyramid of doom". Let's say that I have funcA(), funcB() and funcC() that are async functions; now if I do something like that:

func someFunction() -> T {
    let myValueA = await funcA()
    let myValueB = await funcB()
    let myValueC = await funcC()
    return doSomething(myValueA, myValueB, myValueC)
}

Why do some languages (for instance JavaScript) require you to annotate your function with async? What would be the issue if the function wasn't asynchronous? I mean, since the function already contains awaits, can't it already be considered a non-blocking function? Thank you

Answer: The async annotation changes the way the body of the function is interpreted. Now you can clearly infer that if the body contains await, it must be async, but there are a couple of reasons why it makes sense to require an annotation. The primary reason is, frankly, likely clarity. C# popularized this syntax and is a language that emphasizes declaring intent. My understanding (from vague recollections of blog posts) is that the C# designers view the fact that they didn't do a similar thing for generator syntax (i.e. yield) as a mistake or at least an unpleasant compromise. Note that in JavaScript you do need to indicate a function is a "generator function" in much the same way as async. At any rate, both async/await and yield lead to dramatically different semantics, and you probably want to be clear that that is intended and understood. (A minor factor for C# was to make the await keyword contextual to avoid backwards compatibility issues.) As a more technical reason, async and JavaScript's function* change the behavior of a function even if you don't use await/yield. In C#, an async method with no await will run synchronously, but it will still package up the result in a Task object. Similarly, in JavaScript, a generator function with no yield still returns an empty Iterable rather than undefined.
If the async/function* aspect was inferred from usage of await/yield, then in these cases you would need to explicitly create a Task or Iterable. This isn't so bad in C# where the type system will require this, but in JavaScript there's a much greater chance for uncaught errors. Regardless, it's also bad for exploration since commenting out a line of code might dramatically change the behavior and type of the function.
{ "domain": "cs.stackexchange", "id": 9602, "tags": "programming-languages, concurrency" }
Why can two different signals have the same periodogram?
Question: I have just started trying to self-study an intro course on signal processing. I have just been introduced to the periodogram and have a hard time understanding why completely different signals can sometimes have the same periodogram. Take for example the two signals in the below image: It is stated that these two signals have the same periodogram because they have the same magnitude function. I'm having trouble understanding what a magnitude function is and why these signals have the same periodogram. Any help would be greatly appreciated as I have no idea how to move forward! Answer: Good question, which I'll try and answer without complete mathematical rigorousness, but focus more on the intuition behind. First, let's define the periodogram of a real discrete signal $x[n]$ as the Discrete Fourier Transform of its auto-correlation function: $$\mathcal{F}\{x[n]*x[-n]\} = \big|X[k]\big|^2$$ Here, $n$ is the sample number, $*$ the convolution operator, $\mathcal{F}$ the Discrete Fourier Transform, $X$ the Discrete Fourier Transform (I know, both the transform and the result share the same name), $k$ the frequency bin number and $\big|\cdot\big|$ the absolute value operator. $\big|X[k]\big|$ is called the magnitude spectrum. In plain english, the periodogram of a signal $x[n]$ is an estimate of the spectral power content of $x[n]$, in other words, an estimate of the power at each frequency that make up $x[n]$. The question is, for two signals $x_1[n]\neq x_2[n]$, how can $\big|X_1[k]\big|^2 = \big|X_2[k]\big|^2$ This can be answered using a combination of intuition and mathematics using the definition for the Discrete Fourier Transform. Let's call $f_s$ the sampling frequency with which $x_1[n]$ and $x_2[n]$ were sampled. As I'm sure you're aware, the Discrete Fourier Transform can decompose a time-domain signal into its individual frequency components. Ask yourself, what frequencies make up a chirp signal $x_1[n]$? 
That one is easy to answer intuitively, since a chirp signal is a signal in which the frequency increases (or decreases) with time, starting at frequency $f_0$ and ending at $f_1$. Since our signal is sampled at $f_s$, let's conveniently set $f_0 = 0\,\text{Hz}$ and $f_1 = f_s/2\,\text{Hz}$: the chirp is made up of all frequencies between $0$ and $f_s/2\,\text{Hz}$ (discarding frequency resolution considerations here). Assuming the amplitude remains constant and arbitrarily setting it to $1$, that would give you, without the need for complicated mathematics: $$\big|X_1[k]\big|^2 = 1$$ Less intuitive question: what frequencies make up an impulse signal? Well, the answer is all of them. To show this, let's use the Discrete Fourier transform: $$X[k] = \sum_{n=0}^{N-1}x[n]e^{-j2\pi kn/N}$$ and compute it for $x_2[n] = \delta[n-m] = \begin{cases}1 &n=m\\ 0 &n\neq m\end{cases}$ $$ X_2[k] = \sum_{n=0}^{N-1}\delta[n-m]\,e^{-j2\pi kn/N} = e^{-j2\pi k m/N} $$ The periodogram does not care about phase; it is only interested in the (squared) magnitude of $X_2[k]$: $$\big|X_2[k]\big|^2 = \big|e^{-j2\pi k m/N}\big|^2 = 1$$ So here you have it: $$\big|X_1[k]\big|^2 = \big|X_2[k]\big|^2$$
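Here is a small NumPy illustration of the same phenomenon (my own example, not from the answer): a circular time shift gives a visibly different signal whose DFT differs only by the phase factor $e^{-j2\pi km/N}$, so the periodogram is unchanged:

```python
import numpy as np

# Two different signals with identical periodograms: a circular shift
# multiplies each DFT bin by a pure phase factor, leaving |X[k]|^2 intact.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(256)
x2 = np.roll(x1, 37)                  # clearly a different signal

P1 = np.abs(np.fft.fft(x1)) ** 2
P2 = np.abs(np.fft.fft(x2)) ** 2

print(np.allclose(P1, P2), np.allclose(x1, x2))  # True False
```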
{ "domain": "dsp.stackexchange", "id": 11650, "tags": "discrete-signals, power-spectral-density, periodogram" }
When will Qiskit Runtime be available to all users?
Question: IBM's Qiskit Runtime was recently made available to a select group of users. Have they announced when it will be available to all users or when they might open it back up to a second round of new users? Answer: Qiskit Runtime is now available on ibmq_qasm_simulator for the public using the default open provider. Qiskit Runtime for real quantum systems was also available to participants during the IBM Quantum Challenge Fall 2021 for a limited time, from Oct 27 to Nov 6, 2021.
{ "domain": "quantumcomputing.stackexchange", "id": 4000, "tags": "qiskit, ibm-q-experience, ibm-quantum-devices" }
Conformal primaries in momentum space
Question: Consider the Fourier transform of a conformal primary $O$ $$\tilde{O}(k) = \int d^dx e^{ik\cdot x} O(x)$$ Now consider the transformation of the momenta $k \to \lambda k$, so that the above reads $$\tilde{O}(\lambda k) = \int d^dx e^{i\lambda k\cdot x} O(x) = \lambda^{-d}\int d^dx'e^{ik\cdot x'} O(x'/\lambda)$$ where $x' = \lambda x$. Now by using the properties of the conformal primary $O(x/\lambda) = \lambda^{\Delta} O(x)$ we obtain, $$\tilde{O}(\lambda k) = \int d^dx e^{i\lambda k\cdot x} O(x) = \lambda^{\Delta-d}\int d^dx'e^{ik\cdot x'} O(x') = \lambda^{\Delta-d}\tilde{O}(k)$$ Hence, in the momentum space the conformal primary behaves as $$ \tilde{O}(\lambda k) = \lambda^{\Delta-d}\tilde{O}(k)$$, however, the two-point function in the momentum space goes like $$\langle\tilde{O}\tilde{O}\rangle \sim k^{2\Delta-d}$$ which is inconsistent. How does one resolve this enigma ? Answer: Consider the two point function $$\langle \tilde{\mathcal{O}}(-k) \tilde{\mathcal{O}}(k) \rangle = \int d^d x e^{i k \cdot (x_1-x_2) }\left\langle\mathcal{O}(x_1) \mathcal{O}(x_2)\right\rangle,$$ where the integral is over the differences $x=x_1-x_2$. Performing the rescaling $k\rightarrow \lambda k$ we have $$\left\langle \tilde{\mathcal{O}}\left(-k \lambda \right) \tilde{\mathcal{O}}\left(\lambda k\right) \right\rangle = \int d^d x e^{i \lambda k \cdot (x_1-x_2) }\left\langle\mathcal{O}(x_1) \mathcal{O}(x_2)\right\rangle=\lambda^{-d} \int d^d x e^{i k \cdot (x_1-x_2) }\left\langle\mathcal{O}\left( \frac{ x_1}{\lambda}\right) \mathcal{O}\left(\frac{x_2}{\lambda}\right)\right\rangle\\ =\lambda^{2\Delta-d} \int d^d x e^{i k \cdot (x_1-x_2) }\left\langle\mathcal{O}(x_1) \mathcal{O}(x_2)\right\rangle=\lambda^{2\Delta-d} \left\langle \tilde{\mathcal{O}}\left(-k \right) \tilde{\mathcal{O}}\left(k\right) \right\rangle.$$ From here we can deduce that $$\left\langle \tilde{\mathcal{O}} \tilde{\mathcal{O}} \right\rangle \sim k^{2\Delta -d}.$$
{ "domain": "physics.stackexchange", "id": 68802, "tags": "fourier-transform, conformal-field-theory, dimensional-analysis, scaling" }
Why does snow seem to reflect red light best?
Question: I've noticed that snow seems to reflect red light more than any other wavelength at night. It's seen in the sky, as well as the snow itself, both of which take on a reddish tint. Why exactly does it do this? Additionally, why is this only noticeable under moonlight? If it's the moonlight that's tinted red, then alternatively, why is that? My apologies for the rather basic question, but I can't seem to find information on the matter elsewhere. Answer: According to most sources such as this university page on albedo and this modeling paper on albedo versus wavelength, the typical albedo of snow is virtually 1 throughout the visible region, so it seems unlikely that it is the snow itself which is causing this effect. Also, I've never personally observed this as John Rennie stated, so perhaps it's caused by something else in your case, such as an atypical source of illumination.
{ "domain": "physics.stackexchange", "id": 42532, "tags": "visible-light" }
Normal equation for linear regression is illogical
Question: Currently I'm taking Andrew Ng's course. He gives the following formula to find the solution for linear regression analytically: $θ = (X^T * X)^{-1} * X^T * y$ He doesn't explain it, so I searched for it and found that $(X^T * X)^{-1} * X^T$ is actually a formula for the pseudoinverse in the case where our columns are linearly independent. And this actually makes a lot of sense. Basically, we want to find such $θ$ that $X * θ = y$, thus $θ = X^{-1} * y$, so if we replace $X^{-1}$ with our pseudoinverse formula we get exactly $θ = (X^T * X)^{-1} * X^T * y$. What I don't understand is why nobody mentions that this verbose formula is just $θ = X^{-1} * y$ with the inverse replaced by a pseudoinverse. Okay, Andrew Ng's course is for beginners and he didn't want to throw a bunch of math at students. But Octave, where the assignments are done, has the function pinv() to find a pseudoinverse. Even more, Andrew Ng actually mentions the pseudoinverse in his videos on the normal equation, in the context of $(X^T * X)$ being singular so that we can't find its inverse. As I mentioned above, $(X^T * X)^{-1} * X^T$ is a formula for the pseudoinverse only in the case of the columns being linearly independent. If they are dependent (e.g. some features are redundant), there are other formulae to consider, but anyway Octave handles all these cases under the hood of the pinv() function, which is more than just a macro for $(X^T * X)^{-1} * X^T$. And Andrew Ng, instead of saying to use pinv(X) * y, gives this: pinv(X' * X) * X' * y. Basically we use a pseudoinverse to find a pseudoinverse. Why? Answer: Hello Oleksii and welcome to DSSE. The formula you are asking about is not for a pseudoinverse.
$\theta = (X^TX)^{-1}X^Ty$

where:

$\theta$ is your regressor,
$X$ is a matrix containing stacked vectors (as rows) of your features/independent variables,
$y$ is a matrix containing stacked vectors (or scalars) of your predictions/dependent variables.

This equation is the solution to a linear set of equations $Ax = B$ that occurs when trying to minimize the least-squares loss. The reason why you see pinv() in the code is that if X does not have enough linearly independent rows, $X^TX$ (also known as $R$, the autocorrelation matrix of the data; its inverse is called the precision matrix) will be a singular (or near-singular) matrix whose inversion might not be possible, even if the singularity only arises from the working precision of your computer/programming language. Using pinv() is usually not recommended, because even though it allows you to compute a regressor, that regressor will overfit the training data. An alternative for working with a singular matrix is adding $\delta I$ to $R$, where $\delta$ is a small constant (usually 1 to 10 eps) and $I$ is the identity.
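As a NumPy sketch of the relationship discussed above (my own example, with made-up random data): when the columns of X are linearly independent, the normal-equation formula and the pseudoinverse route give the same regressor:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))   # tall matrix: columns independent (a.s.)
y = rng.standard_normal(50)

theta_normal = np.linalg.inv(X.T @ X) @ X.T @ y   # (X'X)^{-1} X' y
theta_pinv = np.linalg.pinv(X) @ y                # pinv(X) y

print(np.allclose(theta_normal, theta_pinv))      # True
```

When the columns are dependent, inv(X.T @ X) fails or becomes numerically meaningless, while pinv still returns the minimum-norm least-squares solution; that is the situation the $R + \delta I$ regularization above also targets.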
{ "domain": "datascience.stackexchange", "id": 8038, "tags": "linear-regression, linear-algebra, normal-equation" }
Can Curry-Howard prove a theorem from the types in your program, that has nothing to do with your program?
Question: The following link states: Curry-Howard means that any type can be interpreted as a theorem in some logical system, and any term can be interpreted as a proof of its type. This does not mean that those theorems have anything to do with your program. Take the following function:

swap : forall a,b. (a,b) -> (b,a)
swap pair = (snd pair, fst pair)

The type here is forall a,b. (a,b) -> (b,a). The logical meaning of this type is (a and b) => (b and a). Note that this is a theorem in logic, not a theorem about your program. My question is: Can Curry-Howard prove a theorem from the types in your program, that has nothing to do with your program? Answer: The type $\def\Nat{\mathrm{Nat}}\forall a b. (a,b) \rightarrow (b,a)$ corresponds to the logical statement $\forall a b : \mathrm{Prop}. a \land b \rightarrow b \land a$. $(a,b)$ corresponds to $a \land b$ and quantification over types corresponds to quantification over propositions (and of course implies corresponds to function types). This basically just expresses the fact that logical and is commutative. If you ask me this has nothing to do with a swap function even though the swap function has this type, so I think your example satisfies your question. If you think about functions of types like $\Nat \rightarrow \Nat$, they are logically equivalent to $\top \rightarrow \top$ (which is of course equivalent to $\top$). Why is $\Nat$ logically equivalent to $\top$? Well, inhabitants of type $\Nat$ can be constructed no matter the case, which is a property only true of tautologies, so $\Nat$ is a tautology. One could argue that $\Nat$ corresponds to something else that is a tautology (as mentioned $\Nat \to \Nat$ is also a tautology) but it is still equivalent to $\top$. As Andrej noted, logically equivalent does not mean corresponds, and I was wrong in saying this. Thinking about it now, that would be like saying $\forall a b. (a, b) \to (b, a)$ corresponded to $\top$, which is clearly wrong.
Most programs have extremely boring types, quite frankly. In Haskell the only real types of interest are types involving forall, because the rest usually have some type like $\;\cdots\, \to A$, where $A$ is a trivial tautology (like a list type or a number). Languages like Agda and Coq can have far more interesting types if you are interested in this sort of thing.
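For what it's worth, the swap type can also be transliterated into Python's type-hint syntax (my own sketch; here a static type checker, not the runtime, plays the role of the proof checker):

```python
from typing import TypeVar

A = TypeVar("A")
B = TypeVar("B")

def swap(pair: tuple[A, B]) -> tuple[B, A]:
    # The annotation reads as the theorem (a and b) -> (b and a);
    # the body is its proof term.
    return (pair[1], pair[0])

print(swap((1, "x")))  # ('x', 1)
```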
{ "domain": "cstheory.stackexchange", "id": 2932, "tags": "type-theory, curry-howard" }
Is friction a conservative force when a ball rolls down an inclined plane (pure rolling)?
Question: My book says friction is non-dissipative for a cylinder rolling down an inclined plane, so is it conservative? Answer: No, you cannot view friction as a conservative force in this situation. Here is why: We know that for a conservative force $F$ we can define a potential energy function $U$ such that $$F=-\frac{\text d U}{\text d x}$$ and a consequence of this is that the work done by this force is given by $$W=-\Delta U$$ However, static friction does no net work on the cylinder, because in pure rolling the contact point is instantaneously at rest. Therefore for all instances of rolling here $W=0$. This means we would need a constant potential energy, which means that $F=0$. This is a contradiction, since we assumed we had a nonzero static friction force. Therefore static friction cannot be conservative.
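A quick numerical illustration of the key fact used above (a sketch with made-up numbers, not part of the original answer): under the rolling constraint $v_{cm} = \omega R$, the contact point's velocity is identically zero, so the power delivered by static friction is zero no matter how large the friction force is.

```python
R = 0.3            # cylinder radius in metres (arbitrary)
F_friction = 40.0  # some static friction force in newtons (arbitrary)

powers = []
for omega in [1.0, 5.0, 12.0]:           # angular speeds, rad/s
    v_cm = omega * R                      # rolling without slipping
    v_contact = v_cm - omega * R          # contact-point speed: exactly zero
    powers.append(F_friction * v_contact) # P = F * v at the point of application
```

Every entry of `powers` is zero, which is exactly the $W = 0$ statement in the answer.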
{ "domain": "physics.stackexchange", "id": 59423, "tags": "rotational-dynamics" }
Conversion between k-SAT and XOR-SAT
Question: According to XOR Satisfiability Solver Module for DPLL Integration by Tero Laitinen, we need $2^{n-1}$ CNF clauses to convert an $n$-literal XOR-SAT clause if we do not want to increase the number of literals. So I understand that the computational cost of converting an XOR-SAT expression into a strictly CNF $k$-SAT expression is exponential. My question: What is the computational cost if I want to reverse the process, i.e. of converting a CNF $k$-SAT expression into an XOR-SAT one? I assume the promise that in this case only the $k$-SAT expressions with equivalent XOR-SAT expressions are considered. Answer: If all XOR relationships between variables in CNF formulas could be detected in polynomial time, then this would allow the solution of UNAMBIGUOUS-SAT in polynomial time. By the Valiant–Vazirani theorem this result would imply that NP = RP. To solve UNAMBIGUOUS-SAT, recall that $a \oplus b$ implies $a \neq b$. Find the XOR relationship between each pair of variables and use the results to divide the variables into two groups of equivalent variables. Once this is done, only two test assignments are required to determine satisfiability. In the limited case of recovering XOR relationships encoded in the usual way, i.e. $a \oplus b \oplus c$ to $ \lnot a \lor b \lor c \\ a \lor \lnot b \lor c \\ a \lor b \lor \lnot c \\ \lnot a \lor \lnot b \lor \lnot c $ this can be done in polynomial time by sorting the clauses followed by a linear-time scan.
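The easy direction (XOR clause to CNF) can be sketched directly: enumerate all assignments and emit one clause per assignment that violates the XOR constraint, which yields exactly the $2^{n-1}$ clauses mentioned in the question. (An illustrative sketch; `xor_to_cnf`, its `rhs` parity convention, and the `~`-prefix literal encoding are my own choices, not from the paper.)

```python
from itertools import product

def xor_to_cnf(variables, rhs=True):
    """Encode XOR(variables) == rhs as CNF.

    One clause excludes each violating assignment, so an n-variable
    XOR constraint produces 2^(n-1) clauses."""
    clauses = []
    for assignment in product([False, True], repeat=len(variables)):
        parity = (sum(assignment) % 2 == 1)
        if parity != rhs:
            # Build the clause that is false exactly on this assignment:
            # negate a variable iff the assignment sets it to True.
            clause = [("~" + v) if val else v
                      for v, val in zip(variables, assignment)]
            clauses.append(clause)
    return clauses

cnf = xor_to_cnf(["a", "b", "c"])  # 2^(3-1) = 4 clauses
```

The reverse direction is what the answer argues is hard in general, outside the limited "usual encoding" case.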
{ "domain": "cstheory.stackexchange", "id": 2960, "tags": "sat, boolean-functions, boolean-formulas" }
Intel RealSense depth camera on Ubuntu Arm
Question: Hi all, I'm going to develop some packages with the Intel RealSense depth camera, but before I get the hardware I want to know if it's possible to use it on the Ubuntu ARM version of ROS. Thank you so much! Originally posted by dottant on ROS Answers with karma: 185 on 2016-03-03 Post score: 1 Answer: I am working on an ODROID XU4 with the SR300 Intel camera! I could use the F200 before, but there is no more F200; only the SR300 is available now. I am using the SR300 on the ODROID XU4 with a modified version of librealsense for ARM processors: librealsense, and a modified ROS package for the SR300: realsense. I also patched the Linux kernel on 14.04 to use the V4L2 backend. The only problem I have now is that I have to run the node twice for it to work correctly. Originally posted by Zargol with karma: 206 on 2016-06-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Vijayenthiran on 2016-06-22: I am planning to buy an SR300 for the ODROID XU4. Is it advisable, or should I buy the F200? Are you using librealsense?
{ "domain": "robotics.stackexchange", "id": 23981, "tags": "ros, realsense, camera, 3dcamera, intel" }
Why can't a photon produce an electron and a positron in free space or vacuum?
Question: $$\frac{hc}{\lambda} = K_e + K_p + 2m_e c^2$$ could be the energy conservation equation for a photon of wavelength $\lambda$ decaying into an electron and a positron with kinetic energies $K_e$ and $K_p$ and rest mass energy $m_e c^2$. Why does this decay not occur in free space or vacuum? Answer: You can't simultaneously conserve energy and linear momentum. Let the photon have energy $E_{\gamma} = p_{\gamma} c$ and the electron have energy $E_{-}^{2} = p_{e}^{2}c^2 + m_{e}^{2}c^4$, with an analogous expression for the positron. Suppose the electron and positron depart from the interaction site with an angle $2\theta$ between them. Conservation of energy: $$ p_{\gamma} c = \sqrt{p_{e}^{2}c^2 + m_e^{2}c^4} + \sqrt{p_{p}^{2}c^2 + m_e^{2}c^4},$$ but we know that $p_{p} = p_{e}$ from conservation of momentum perpendicular to the original photon direction. So $$ p_{\gamma} = 2\sqrt{p_{e}^2 + m_e^{2}c^2}.$$ Now conserving linear momentum in the original direction of the photon: $$p_{\gamma} = p_e \cos{\theta} + p_p \cos\theta = 2p_e \cos\theta.$$ Equating these two expressions for the photon momentum we have $$p_e \cos{\theta} = \sqrt{p_{e}^2 + m_e^{2}c^2}$$ $$\cos \theta = \sqrt{1 + m_e^{2}c^2/p_e^{2}}.$$ As $\cos \theta$ cannot exceed 1, we see that this is impossible.
{ "domain": "physics.stackexchange", "id": 44302, "tags": "kinematics, photons, momentum, conservation-laws, pair-production" }
How are Lagrangian multipliers zero except for support vectors in dual representation of SVM?
Question: How can we conclude that the Lagrange multipliers are zero, except for the support vectors, in the dual problem? I cannot seem to see it. $$L(\alpha)=-\frac{1}{2}\sum_i \sum_j \alpha_i \alpha_j y_i y_j x_i' x_j + \sum_i{\alpha_i} $$ Answer: In optimization we have something called the complementary slackness condition; it is part of the KKT conditions. Every constraint $g_i(x^*)\le 0$ in the primal corresponds to a dual variable $\mu_i$ (a Lagrange multiplier). The condition states that $$g_i(x^*)\mu_i=0.$$ For points that are not support vectors, the constraint is strictly satisfied, $g_i(x^*)<0$, hence we must have $\mu_i=0$.
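To see complementary slackness in action, here is a tiny hand-rolled sketch (my own toy example, not from the original answer): three 1-D points, one of which lies well inside its class's side of the margin. Projected gradient ascent on the dual drives that point's multiplier to zero while the two margin points keep nonzero multipliers.

```python
# Points: x1 = -1 (y1 = -1), x2 = +1 (y2 = +1), x3 = +5 (y3 = +1).
# The equality constraint sum(alpha_i * y_i) = 0 is enforced by
# substituting a1 = a2 + a3.  The dual objective then becomes
#   L = 2*a2 + 2*a3 - 0.5 * w**2,  with  w = 2*a2 + 6*a3,
# maximized over a2, a3 >= 0 by projected gradient ascent.
a2, a3 = 0.0, 0.0
lr = 0.01
for _ in range(20000):
    w = 2 * a2 + 6 * a3
    a2 = max(a2 + lr * (2 - 2 * w), 0.0)  # dL/da2 = 2 - 2w
    a3 = max(a3 + lr * (2 - 6 * w), 0.0)  # dL/da3 = 2 - 6w
a1 = a2 + a3
# a3 -> 0: the point x3 = 5 satisfies its margin constraint strictly
# (g_3 < 0), so complementary slackness forces its multiplier to zero.
# a1, a2 -> 0.5: x1 and x2 sit exactly on the margin and are the
# support vectors.
```

The optimum lands at $\alpha_1 = \alpha_2 = 1/2$, $\alpha_3 = 0$, matching the KKT argument in the answer.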
{ "domain": "datascience.stackexchange", "id": 5456, "tags": "machine-learning, svm" }