The principle behind Inertia and its connection to Equilibrium
Question: Inertia is the tendency of a force-free body to remain in its state, or something that opposes any act of changing its equilibrium state. Mass is a measure of inertia. I have some questions regarding inertia and equilibrium: 1) Is there a scientific explanation of why inertia happens? 2) Is inertia a necessary criterion for equilibrium? If so, would it be applicable to less massive particles (I mean the fundamental particles that make up matter) or massless particles also? For example, the speed of light through vacuum is constant, which means the photons are in equilibrium. But photons are massless entities. Do they too have inertia? If so, is it necessary for equilibrium to be stated in terms of inertia? Thank you in advance. Answer: Science never answers "why" questions, so in a strict sense there is no such explanation, but one can try to triangulate where we stand at the moment. In classical physics, space, time and the existence of massive bodies are inexplicable pre-physical facts. Inertia then becomes an observed property of massive bodies that allows us to differentiate them by their mass. Together with the observed homogeneity of space and time and the isotropy of space, much of classical physics follows from there. Relativity unifies space/time and energy/momentum (including rest mass), and inertia becomes a necessary property of systems with localized binding energy, but relativity alone can't describe the internal details of such systems. For that purpose we need quantum mechanics, more precisely quantum field theory. At this level we get to learn that the symmetry properties of spacetime have very profound consequences for matter and its interactions, but we are still short of having found a self-consistent explanation for spacetime itself.
So one could say that among the classical mechanics triad space/time/matter, which is needed to "have a home for inertia", we have succeeded in understanding matter to some extent and we have (somewhat) unified space and time, but the fundamental object of "spacetime", which gives rise to all of this, is still not understood. For me, and this is an opinion, of course, the fundamental question of "How does inertia really work?" (which goes beyond "What does inertia do?") cannot be answered until we have made successful inroads into the "How does spacetime really work?" question. I do not expect rapid experimental and observational progress on that front because spacetime has turned out to be an extremely smooth object, so far... and we just don't have the right microscopes, yet, to see the structures in it that will get us towards a microscopic understanding. Does one need inertia for equilibrium? No, but without it there can't be anything else. In classical physics, inertia sets the velocity scale, and without it dynamic equations are meaningless. In relativity, all systems without rest mass move at the same speed in all observer frames, which is not very interesting. What makes the world so rich is the interplay between massive and massless fields. I believe the theoreticians can explain why one can't exist without the other, but I don't think that answers your question, either.
{ "domain": "physics.stackexchange", "id": 30006, "tags": "forces, photons, equilibrium, inertia" }
Automaton for substring matching
Question: Given $s$ as a string over some alphabet, what is the best known algorithm to compute a corresponding deterministic finite-state automaton (DFA) that accepts any string that contains $s$? I am mostly interested in the lowest time complexity, so telling me the best known complexity in O notation to build such an automaton would be just as good. Answer: Hendrik Jan is correct about the Knuth-Morris-Pratt algorithm (warning: Wikipedia doesn't explain it particularly well; a text on algorithms is probably a better bet). The failure function can be used to extract a DFA that can perform the string matching. It's not immediately obvious that the failure table is a DFA, but with very little work the transition table for the DFA can be constructed. If $W$ is the pattern string you want to match, the time to build the table (and hence construct the DFA) is $O(|W|)$. The central idea is that the failure table $T$ (I'll try to use the same names as Wikipedia, just to get a ready-made example) tells you where you could be up to in the string if the current match doesn't work out, i.e. if you don't see the next character you're expecting, perhaps the beginning of the match you want has already been seen, you're just mistakenly further ahead, so you only want to "back-track" a little (note, there's no real backtracking in the usual algorithmic sense). So, shamelessly stealing the example from Wikipedia, say we have the failure table $T$: $$\begin{array}{c|ccccccc} i & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline W[i] & A & B & C & D & A & B & D \\ T[i] & -1 & 0 & 0 & 0 & 0 & 1 & 2 \end{array} $$ We can construct the DFA as follows: we have $|W|+1$ states, with a backbone of transitions that match the entire pattern. To match the table we'll call the start state $q_{-1}$ and the final state $q_{|W|}$ (so in the example $q_{6}$).
The backbone is then the transitions $\delta(q_{i-1},W[i]) = q_{i}$, so in the example, we go from the start state $q_{-1}$ to state $q_{0}$ on an $A$ and from $q_{2}$ to $q_{3}$ on a $D$. Now the trick is to add the correct "backwards" transitions so that we don't go too far back along the backbone when we get something wrong. This is where we use the failure function. If we're in state $q_{i-1}$ and we don't see $W[i]$ as the next symbol, then we can alternatively transition to $q_{T[i]}$ if we see $W[T[i]]$. From the example, if we're in state $q_{5}$, and we don't see a $D$, but we do see a $C$, we can go to $q_{2}$. Every other symbol returns us to the start state. Reiterating: there are three types of transitions: the backbone transition if everything's going fine, the transition given by the failure table so we might be able to recover a bit, and $|\Sigma| - 2$ other transitions that take us back to the start state. The start and final states are a little special of course: the start state can't go back further, so the failure recovery transition appears with the other transitions, and once we reach the final state, it doesn't matter what else we see, so it's a sink state too. There's one final wrinkle: the case where the failure symbol is the same as the expected one (e.g. $W[1]$ and $W[5]$); in this case we can just defer the decision until they're different, or we can create an NFA. The example then results in a DFA that looks (barring mistakes) like: The dashed transitions represent the "everything left over" transitions. It should be easy enough to see that given the table, we can construct the DFA in $O(|W|)$ time (in one pass really). The table can also be constructed in $O(|W|)$ time; the source code on Wikipedia is sufficient. If we get clever, of course we can skip the intermediate table step and use the table-building algorithm to create the DFA immediately.
This running time is also the best we can hope for, as a DFA that matches only this string has to have at least $|W|$ transitions (and $|W|+1$ states), so we need to read the entire string $W$, which takes $\Omega(|W|)$ steps.
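The construction just described can be sketched in code. The following is a minimal Python illustration (my own, not code from the original answer): it computes the KMP failure table in $O(|W|)$, then fills in a full transition table over a given alphabet. Note the table-filling loop is $O(|W| \cdot |\Sigma|)$; treating the alphabet size as a constant recovers the $O(|W|)$ bound discussed above. States are numbered 0 to $|W|$ here rather than starting at $q_{-1}$.

```python
def build_dfa(pattern, alphabet):
    """Build the transition table of a DFA accepting every string that
    contains `pattern`. States 0..len(pattern); the last is an accepting sink."""
    m = len(pattern)
    # KMP failure (prefix) table, computed in O(m)
    fail = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # delta[state][symbol] -> next state
    delta = [dict() for _ in range(m + 1)]
    for state in range(m + 1):
        for c in alphabet:
            if state == m:
                delta[state][c] = m            # sink: pattern already seen
            elif c == pattern[state]:
                delta[state][c] = state + 1    # backbone transition
            elif state == 0:
                delta[state][c] = 0            # nothing matched yet
            else:
                # defer to the state the failure table falls back to;
                # fail[state - 1] < state, so that row is already filled
                delta[state][c] = delta[fail[state - 1]][c]
    return delta

def contains(delta, m, text):
    """Run the DFA; accept as soon as the sink state m is reached."""
    state = 0
    for c in text:
        state = delta[state][c]
        if state == m:
            return True
    return state == m
```

Running the DFA on a text is then a single left-to-right pass with one table lookup per symbol, matching the overall linear-time behaviour of KMP.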
{ "domain": "cs.stackexchange", "id": 1561, "tags": "automata, finite-automata, strings, substrings" }
roscore not working, ImportError: No module named RosStreamHandler
Question: I installed ROS Fuerte on Ubuntu 12.04 and I am getting an error when I try to run roscore.

j@instadmin-HP-Z210-Workstation:/opt/ros/fuerte$ roscore
WARNING: unable to configure logging. No log files will be generated
Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://instadmin-HP-Z210-Workstation:56432/
ros_comm version 1.8.11

SUMMARY
========

PARAMETERS
 * /rosdistro
 * /rosversion

NODES

auto-starting new master
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[master]: started with pid [24098]
Traceback (most recent call last):
  File "/opt/ros/fuerte/bin/rosmaster", line 35, in <module>
    rosmaster.rosmaster_main()
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/rosmaster/main.py", line 73, in rosmaster_main
    configure_logging()
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/rosmaster/main.py", line 57, in configure_logging
    _log_filename = rosgraph.roslogging.configure_logging('rosmaster', logging.DEBUG, filename=filename)
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/rosgraph/roslogging.py", line 105, in configure_logging
    logging.config.fileConfig(config_file, disable_existing_loggers=False)
  File "/usr/lib/python2.7/logging/config.py", line 78, in fileConfig
    handlers = _install_handlers(cp, formatters)
  File "/usr/lib/python2.7/logging/config.py", line 153, in _install_handlers
    klass = _resolve(klass)
  File "/usr/lib/python2.7/logging/config.py", line 94, in _resolve
    __import__(used)
ImportError: No module named RosStreamHandler
[master] process has died [pid 24098, exit code 1, cmd rosmaster --core -p 11311 __log:=/home/jordan/.ros/log/eb156b0e-d7a6-11e2-b471-082e5f0d17cd/master.log].
log file: /home/jordan/.ros/log/eb156b0e-d7a6-11e2-b471-082e5f0d17cd/master*.log
ERROR: could not contact master [http://instadmin-HP-Z210-Workstation:11311/]
[master] killing on exit

When I run roswtf I get:

No package or stack in context
================================================================================
Static checks summary:

No errors or warnings
================================================================================
ROS Master does not appear to be running. Online graph checks will not be run.
ROS_MASTER_URI is [http://localhost:11311]

Any suggestions are much appreciated. Thanks! Originally posted by jjameson18 on ROS Answers with karma: 15 on 2013-06-17 Post score: 1 Answer: I encountered a similar problem once. Check the path via echo $ROS_PACKAGE_PATH. You have to change the path via export ROS_PACKAGE_PATH= .... . Your path might have wrong directories, or the path might contain a symbolic link which is too complex for it to handle. Originally posted by Asfandyar Ashraf Malik with karma: 729 on 2013-06-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14592, "tags": "ros-fuerte, roscore, ubuntu-precise, ubuntu" }
How to get length of array of poses from nav_msgs/Path.msg
Question: Does anyone know how to use the message type nav_msgs/Path.msg? According to the wiki it is an array of poses used to plan a path, but I'm unable to get the msg in array form. I need to get the length of the array for my purposes, which is why I'm asking. How do I go about that? Right now I just have this:

#!/usr/bin/python3
import rospy
from nav_msgs.msg import Path
from rospy.numpy_msg import numpy_msg
import numpy as np

def callback(msg):
    print("list: ", msg)

rospy.init_node("tester_3")
rospy.Subscriber("/move_base/NavfnROS/plan", Path, callback)
rospy.spin()

Originally posted by distro on ROS Answers with karma: 167 on 2022-01-23 Post score: 0 Answer: Thanks guys! I typed pose instead of poses, and that messed me up. It's fixed now: it's len(msg.poses). Originally posted by distro with karma: 167 on 2022-01-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37384, "tags": "ros, ros-melodic, nav-msgs" }
Setting header color on scroll for each post
Question: On this site, I have some code, which you can see below, which figures out the offset for each post from the top of the document and assigns a colour from each post to the header when it scrolls into view. I am having to wait for the window load event until the code works, which really is not very elegant. Can anyone see how I can improve this code?

$(window).load(function() {
    var $header = $("header");
    var numberOfSections = $("section").length;
    var sectionOffsets = [];
    for (var i = 0; i < numberOfSections; i++) {
        sectionOffsets.push($("section").eq(i).offset().top);
    }
    $(window).scroll(function () {
        $("section").each(function () {
            if ($(window).scrollTop() > $(this).offset().top - 180) {
                $("header").css('color', $(this).data("colour"));
            }
        });
    }).scroll();
});

Answer: jsBin demo

Instead of storing section positions at window onload time, which can always be somewhat inaccurate since your images might still be loading, get the current position / size data of an element dynamically using element.getBoundingClientRect(). Using that JS method you don't even need to calculate $(window).scrollTop(), because the returned value is the element's position relative to the client top (viewport top edge). To retrieve only the one element that matches a top criterion you can use jQuery's .filter() method and return the element which...
$section.filter(function(){
    var r = this.getBoundingClientRect();
    return r.top + r.height - 100 > 0; // ...matches this
})

(100 is the header height in this demo; set as desired or calculate dynamically)

Even if jQuery filtered more than one element matching the criterion (gbcr.top + gbcr.height - headerOffset) > 0, by directly chaining another method such as .data(), the value will be that of the first of the elements returned in the .filter() collection:

$header.css({
    color : $section.filter(function(){
        var r = this.getBoundingClientRect();
        return r.top + r.height - 100 > 0;
    }).data().colour // the colour data of the first of the filtered elements
});

Store that colouring stuff inside a setScrollColors function and use it like:

$(function() { // DOM ready shorthand

    var $header = $("header"); // Cache selectors
    var $section = $("section");

    function setScrollColors() {
        $header.css({
            color : $section.filter(function(){
                var r = this.getBoundingClientRect();
                return r.top + r.height - 100 > 0;
            }).data().colour
        });
    }

    setScrollColors(); // Call inside DOM ready
    $(window).on("load scroll", setScrollColors); // call also on load and scroll

});

*{margin:0;}
header{
    position:fixed;
    z-index:1;
    top:0;
    width:100%;
    background:#f9f9f9;
    height:100px;
    border-bottom:1px solid currentColor;
}
#content{
    margin-top:100px;
}
section{
    min-height: 1000px;
    min-height: calc(100vh - 100px);
    border-top:1px solid #000;
}

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<header><h1>HEADER</h1></header>
<div id="content">
  <section data-colour="black">black</section>
  <section data-colour="red">red</section>
  <section data-colour="blue">blue</section>
  <section data-colour="green">green</section>
  <section data-colour="fuchsia">fuchsia</section>
  <section data-colour="orange">orange</section>
</div>
{ "domain": "codereview.stackexchange", "id": 13762, "tags": "javascript, jquery, html, css, event-handling" }
Derivation of transformation rule for covectors
Question: I am currently teaching myself GR and am stuck with an exercise where we have to show that covectors transform under this rule, given two overlapping charts $\phi=(x^1, ... , x^n)$ and $\phi'=(x'^1, ... , x'^n)$ $$\left(dx^\mu\right)_p = \left(\frac{\partial x^\mu}{\partial x'^\nu}\right)_{\phi'(p)}\left(dx'^\nu\right)$$ This whole notation with the p is a bit confusing. But I think the best place to start is from the definition of a covector, namely covectors are linear maps from Vectors to the reals. So: $$\left(dx^\mu\right)_p(X)=(X(x^\mu))_p$$ Can one then not combine the maps? So one then has: $$\left(dx^\mu\right)_p(X)=(X(x^\mu\circ \phi^{-1}))_{\phi(p)}$$ but aren't the $x^\mu$'s already a chart? Answer: I will show coordinate transformations for the tangent space and the dual tangent space. To be clear, given $f\in C^{\infty}(M)$ for the manifold $M$, I am defining \begin{align} \left(\frac{\partial f}{\partial x^{\mu}}\right)_p := &(\partial_{\mu}(f\circ x^{-1}))(x(p));\\ (df)_p:T_pM &\longrightarrow \mathbb{R}\\ X &\longmapsto (df)_p(X) := X(f). \end{align} Then we have \begin{align} (dx^{\mu})_p\left(\left(\frac{\partial}{\partial x^{\nu}}\right)_p\right) =\left(\frac{\partial x^{\mu}}{\partial x^{\nu}}\right)_p = (\partial_{\nu}(x^{\mu}\circ x^{-1}))(x(p)) = \delta_{\nu}^{\mu}(x(p)) = \delta_{\nu}^{\mu}. \end{align} Now let's consider a change of coordinate chart (for an overlapping region of the manifold) in the tangent space. To do this, I will insert an identity operator. I will call my charts $x$ and $y$ for clarity. 
\begin{align} \left(\frac{\partial f}{\partial x^{\mu}}\right)_p &= (\partial_{\mu}(f\circ x^{-1}))(x(p)) \\&= (\partial_{\mu}(f\circ y^{-1} \circ y \circ x^{-1}))(x(p)) \\&= (\partial_{\mu}((f\circ y^{-1} )\circ (y \circ x^{-1})))(x(p)) \\&= (\partial_{\mu}(y\circ x^{-1}))^{\nu}(x(p))\cdot(\partial_{\nu}(f\circ y^{-1}))(\underbrace{(y\circ x^{-1})(x(p))}_{y(p)}) \\&=(\partial_{\mu}(y^{\nu}\circ x^{-1}))(x(p))\cdot(\partial_{\nu}(f\circ y^{-1}))(y(p)) \\ &= \left(\frac{\partial y^{\nu}}{\partial x^{\mu}}\right)_p\left(\frac{\partial f}{\partial y^{\nu}}\right)_p. \end{align} So we conclude that \begin{align} \left(\frac{\partial}{\partial x^{\mu}}\right)_p = \left(\frac{\partial y^{\nu}}{\partial x^{\mu}}\right)_p\left(\frac{\partial}{\partial y^{\nu}}\right)_p. \end{align} Now let's look at the following: \begin{align} (dx^{\mu})_p\left(\left(\frac{\partial}{\partial x^{\nu}}\right)_p\right) &=\left(\frac{\partial x^{\mu}}{\partial x^{\nu}}\right)_p \\&= (\partial_{\nu}(x^{\mu}\circ x^{-1}))(x(p)) \\&=(\partial_{\nu}(x^{\mu}\circ y^{-1} \circ y \circ x^{-1}))(x(p)) \\& = (\partial_{\nu}((x^{\mu}\circ y^{-1}) \circ (y \circ x^{-1})))(x(p)) \\& = (\partial_{\nu}(y\circ x^{-1}))^{\sigma}(x(p))\cdot(\partial_{\sigma}(x^{\mu}\circ y^{-1}))(\underbrace{(y\circ x^{-1})(x(p))}_{y(p)}) \\&= (\partial_{\nu}(y^{\sigma}\circ x^{-1}))(x(p))\cdot(\partial_{\sigma}(x^{\mu}\circ y^{-1}))(y(p)) \\&= \left(\frac{\partial y^{\sigma}}{\partial x^{\nu}}\right)_p\left(\frac{\partial x^{\mu}}{\partial y^{\sigma}}\right)_p \\&= \left(\frac{\partial x^{\mu}}{\partial y^{\sigma}}\right)_p(dy^{\sigma})_p\left(\left(\frac{\partial}{\partial x^{\nu}}\right)_p\right). \end{align} Thus \begin{align} (dx^{\mu})_p = \left(\frac{\partial x^{\mu}}{\partial y^{\sigma}}\right)_p(dy^{\sigma})_p, \end{align} as you wanted. Please ask if anything is unclear, or you think I have made a mistake.
{ "domain": "physics.stackexchange", "id": 82863, "tags": "general-relativity, differential-geometry, coordinate-systems, tensor-calculus" }
Nanopore data clarification regarding merging of samples
Question: We had the same set of samples run on the nanopore platform twice. The issue at hand: the first run was nearly done when the power went off. The second run completed with enough sequencing depth. I ran the analysis with the second, complete set and have those results, but so far I haven't run the first part. Question 1: if I run the first part, do I get major differences in the result? Question 2: would it be advisable to merge both runs, since they can now be called two different batches, and would it bring drastic changes in the output? In terms of a tool for merging, I googled and found poretools. Answer: It is entirely unclear which type of analysis you aim to do. You probably should be more specific when asking questions. That said, given that the run was going perfectly fine until a power issue stopped it, all data produced should be fine and you can indeed combine those runs. You probably shouldn't bother with poretools. If you are using fastq files to start your analysis you can just cat the basecalled files together. Also, after alignment, you can merge the bam files. If your analysis requires the fast5 files then you can probably just put these in the same folder, but you should tell us which analysis you are doing.
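The "just cat the basecalled files together" step is plain concatenation, since FASTQ records are self-delimiting four-line blocks. A small self-contained Python sketch (file names and record contents are stand-ins, not from the original post):

```python
import shutil

def merge_fastq(parts, merged):
    """Concatenate FASTQ files byte-for-byte -- the equivalent of `cat`."""
    with open(merged, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# Demo: two tiny FASTQ records standing in for the two runs' output.
with open("run1.fastq", "w") as f:
    f.write("@read1\nACGT\n+\nIIII\n")
with open("run2.fastq", "w") as f:
    f.write("@read2\nTTGC\n+\nIIII\n")

merge_fastq(["run1.fastq", "run2.fastq"], "merged.fastq")
```

For BAM files the same idea needs an alignment-aware tool instead (e.g. merging with samtools), since BAM has a header and an index.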
{ "domain": "bioinformatics.stackexchange", "id": 1724, "tags": "nanopore" }
Would introducing water into a piston increase or decrease pressure?
Question: Suppose you were to introduce a small amount of liquid water into the cylinder of an IC engine just after combustion. The timing and amount of water would be such that the water was entirely vaporized before the piston had moved very far. The vaporization of the water would lower the temperature of the gases in the piston and that would decrease the pressure of the hot gases but at the same time it adds more gas in the form of water vapor which would increase the pressure. Is there some equation or rule of thumb to tell which effect is greater and how much? I'm wondering how much this could decrease operating temperature with minimal loss of power. Answer: The addition of water into the combustion chamber of an internal combustion engine is called water injection and was done to reduce temperatures inside the engine to the point where the pistons, etc. would not begin to soften from the high heat and blow up at high power settings. The water also helped suppress detonation of the fuel-air mixture and encourage deflagration instead, to reduce the tendency of the engine to explode or beat itself to pieces while running at high power settings. Much was written about the thermodynamics of water injection in piston engines of the WWII era. A search on water injection should reveal what you desire. Note that in piston engines, water injection was typically used only on takeoff, where the engine must operate at its full maximum power rating- which produces the highest temperatures inside the engine.
{ "domain": "physics.stackexchange", "id": 96036, "tags": "heat-engine" }
Question about flow temperature and storage temperature in a cogeneration plant
Question: Let's assume I have a cogeneration plant that produces heat with a nominal output. The heat is fed into a heat storage tank, which is filled from above. The maximum flow temperature of the cogeneration plant is 85 °C; if this occurs, an emergency shutdown takes place. The maximum water temperature in the heat storage tank is 95 °C. Is it theoretically possible in this case to reach a water temperature of over 85 °C with the cogeneration plant alone? Answer: When the cogeneration plant switches off upon reaching 85 °C flow and there is no other heat source, there's no way to reach 95 °C. Are you sure the switch-off happens at 85 °C flow, and not at 75 °C return to the cogeneration plant (this is a typical value for Otto-cycle cogeneration plants for the emergency cooler to kick in)? If you have other heat sources and want to achieve a higher flow temperature, you can use a setup with flow like this: cold side of storage -> cogeneration plant -> boiler -> hot side of storage.
{ "domain": "engineering.stackexchange", "id": 3113, "tags": "electrical-engineering, energy, heating-systems, energy-efficiency, power-engineering" }
rosbag: error: Cannot find rosbag/play executable
Question: I am running ROS Noetic on Ubuntu 20.04.3 LTS. I cannot run rosbag play or rosbag record. It was working perfectly last week, but it ceased to work this week. I have tried reinstalling using apt install ros-noetic-rosbag, but that didn't help. The weird thing is rosbag check and rosbag info work pretty well:

rosbag check flight_2022_02_21_23_02_03.bag
Bag file does not need any migrations.

I have suppressed the topic names from rosbag info so this thread isn't too long, but this is the header in it:

rosbag info flight_2022_02_21_23_02_03.bag
path:         flight_2022_02_21_23_02_03.bag
version:      2.0
duration:     1:15s (75s)
start:        Feb 21 2022 17:02:40.34 (1645484560.34)
end:          Feb 21 2022 17:03:56.02 (1645484636.02)
size:         9.9 MB
messages:     90332
compression:  bz2 [101/101 chunks; 10.04%]
uncompressed: 82.3 MB @ 1.1 MB/s
compressed:   8.3 MB @ 111.8 KB/s (10.04%)

The following happens when I try rosbag record or rosbag play:

Record:

rosbag record -a -O example.bag
Usage: rosbag record TOPIC1 [TOPIC2 TOPIC3 ...]
rosbag: error: Cannot find rosbag/record executable

Play:

rosbag play flight_2022_02_21_23_02_03.bag
Usage: rosbag play BAGFILE1 [BAGFILE2 BAGFILE3 ...]
rosbag: error: Cannot find rosbag/play executable

I know that the problem isn't with the bag, as a coworker was able to play that same bag. I also tried other bags that I could play before, but I can't play any bags whatsoever now. I have noticed that the error was once observed by one user, but there was no solution anywhere that I could find (that user was supposed to open a question in this forum, but I wasn't able to find one).
Here is what I get from the following commands:

Find rosbag:

find /opt/ros/noetic/lib/rosbag
/opt/ros/noetic/lib/rosbag
/opt/ros/noetic/lib/rosbag/record
/opt/ros/noetic/lib/rosbag/fixbag_batch.py
/opt/ros/noetic/lib/rosbag/bagsort.py
/opt/ros/noetic/lib/rosbag/encrypt
/opt/ros/noetic/lib/rosbag/fastrebag.py
/opt/ros/noetic/lib/rosbag/play
/opt/ros/noetic/lib/rosbag/fix_moved_messages.py
/opt/ros/noetic/lib/rosbag/topic_renamer.py
/opt/ros/noetic/lib/rosbag/savemsg.py
/opt/ros/noetic/lib/rosbag/makerule.py
/opt/ros/noetic/lib/rosbag/fix_msg_defs.py
/opt/ros/noetic/lib/rosbag/fix_md5sums.py
/opt/ros/noetic/lib/rosbag/fixbag.py
/opt/ros/noetic/lib/rosbag/bag2png.py

Rospack find:

rospack find rosbag
/home/marcelino/catkin_ws/src/ros_comm/tools/rosbag

Any ideas on how I can fix this without having to format my laptop? =) Originally posted by Marcelino Almeida on ROS Answers with karma: 16 on 2022-02-21 Post score: 0 Answer: I was just able to find the solution, and I hope this will help others in the future. The issue is that I had ros_comm in my catkin workspace, but rosbag wasn't built. I just went to my catkin workspace, ran catkin build rosbag, and then I was back in business. Originally posted by Marcelino Almeida with karma: 16 on 2022-02-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37464, "tags": "rosbag" }
How to draw a ray diagram from focal length, object and image heights?
Question: Q. An object of height 8 cm is placed in front of a lens. Its inverted image of height 4.8 cm is formed on the screen. If the focal length of the lens is 12 cm, then by drawing to scale calculate the object distance, image distance and magnification. Taking the scale as 1 cm = 4 cm, I've drawn the principal axis and the lens and marked the focus (F) at 3 cm from the lens. But how do I proceed from here with only the heights of the object and image?! Answer: Do you know how to draw the image of an object set at a certain distance from the lens? This is sort of the inverse. Start by drawing the ray parallel to the optical axis from the object, incident onto the lens and refracted through the focus $F$. Extend both rays as much as you can (line in blue). Then, from the optical axis on the image side, find the perpendicular length that corresponds to the image height (the base is on the focal axis and the tip of the image is where the image meets the refracted ray initially drawn). Once you have it, join this point to the optical centre (the centre of the lens) and extend this ray (in purple) to meet the original incident ray. Where they meet is where the object is. You then have to measure the distances of the object and of the image.
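The scale drawing can be cross-checked numerically; this check is my own addition, not part of the original answer. Using magnitudes for a real, inverted image, the thin-lens equation reads 1/v + 1/u = 1/f with magnification m = v/u, so substituting v = m·u gives u = f(1 + 1/m):

```python
# Numeric cross-check of the scale drawing (magnitude convention, real image):
#   1/v + 1/u = 1/f,  m = v/u = h_image / h_object
f = 12.0        # focal length in cm
m = 4.8 / 8.0   # magnification = 0.6
u = f * (1 + 1 / m)  # object distance, from substituting v = m*u
v = m * u            # image distance
# u comes out to 32 cm and v to 19.2 cm, which the drawing should reproduce
# at the 1 cm = 4 cm scale (8 cm and 4.8 cm on paper).
```

So the construction described above should place the object 32 cm from the lens and the image 19.2 cm on the other side, with magnification 0.6.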
{ "domain": "physics.stackexchange", "id": 40273, "tags": "homework-and-exercises, optics, lenses" }
splitting srdf, urdf files
Question: Is it possible to have an srdf file (and, as a matter of fact, a urdf file as well) that is split into multiple files? I was reading the docs and such an option doesn't seem to exist. Why do I want this? I have a robot with a modular design. Or will I have to write a script to generate these files? Originally posted by fbelmonteklein on ROS Answers with karma: 140 on 2017-11-16 Post score: 0 Answer: For URDF: see wiki/xacro. For SRDF: this is not natively supported, but there are people that use xacro there as well. See personalrobotics/herb_description for a(n) (random) example. Originally posted by gvdhoorn with karma: 86574 on 2017-11-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 29384, "tags": "ros, urdf, srdf" }
Is dH(S,P) = dH(T,P) true (always)?
Question: I came across an interesting question with some physical chemistry students today. Based on the following steps, we're uncertain whether the statement in the title is/could be true. Assuming $dN = 0$, Enthalpy's natural variables are Entropy and Pressure: $$dH(S,p) = TdS + Vdp \space (1)$$ Enthalpy can be expressed as a total derivative of Temperature and Pressure: $$ dH(T,p) = \frac{\partial H}{ \partial T}\vert_p \space dT + \frac{\partial H}{\partial p}\vert_T \space dp \space (2)$$ Total Enthalpy is $$H = U + pV = TS - pV + \sum_i \mu_i N_i + pV = TS + \sum_i \mu_i N_i \space (3)$$ Thus, taking partial derivatives in (2), $$ dH(T,p) = SdT + 0 \space dp \space (4)$$ This means that, if $ dH(S,p) = dH(T,p)$, $$ SdT = TdS + Vdp \space (5)$$ must be true. Can anyone see something wrong? This seems to boil down to a math question: are two differentials of the same function (always) equal if expressed by different variables? Answer: The reasoning stated is partially correct, but the final relation you arrived at is incorrect. I will try to explain why, and write the N dependence explicitly for completeness. The crucial thing is that when one writes an expression such as $$\left(\frac{\partial H}{\partial T} \right)_{P,N}$$ what one really means is "take the partial derivative of $H$ written as a function of $T$, $P$ and $N$ with respect to $T$". When you take the partial derivatives in equation (2), you should therefore take them considering $H$ as a function of $T$, $P$ and $N$. You then have to consider the expression $H = H(T,P,N) = S(T,P,N)T + \mu(T,P) N$.
If you differentiate that equation with respect to T and P the result is: $$\left(\frac{\partial H}{\partial P} \right)_{T,N} = \left(\frac{\partial S}{\partial P} \right)_{T,N} T + \left(\frac{\partial \mu}{\partial P} \right)_{T} N~~~;~~~ \left(\frac{\partial H}{\partial T} \right)_{P,N} = \left(\frac{\partial S}{\partial T} \right)_{P,N} T + S + \left(\frac{\partial \mu}{\partial T} \right)_{P} N$$ If you replace those two relations in your equation (2) the result is: $$ \mathrm{d}H = \left(\left(\frac{\partial S}{\partial P} \right)_{T,N} T+ \left(\frac{\partial \mu}{\partial P} \right)_{T} N \right) \mathrm{d}P + \;\left(\left(\frac{\partial S}{\partial T} \right)_{P,N} T + S+ \left(\frac{\partial \mu}{\partial T} \right)_{P} N\right)~\mathrm{d}T.$$ You can indeed equate this with your equation (1), which is what you ask in your main question. This is the same thing one ordinarily does when expressing a scalar as a function of different sets of coordinates, for instance $f = f(x,y) = x^2 + y^2$ and $f = f(r,\theta) = r^2$ this means, equating both, that $x^2 + y^2 = r^2$, which is a relationship that must hold if you want both $(x,y)$ and $(r,\theta)$ to refer to $f$ (this last sentence may be a bit tautological, I hope what it means is clear, note that it is certainly not "always" true that $f(x,y)$ and $f(r,\theta)$ are equal, since they are two different functions, albeit expressed with the same letter, for instance, $f(x=1,y=0) = 1$ but $f(r=2,\theta= \pi) = 4$, there must be a specific relation between the coordinates for these to be equal). 
If you do this you obtain: $$\left(\left(\frac{\partial S}{\partial P} \right)_{T,N} T+ \left(\frac{\partial \mu}{\partial P} \right)_{T} N \right) \mathrm{d}P + \;\left(\left(\frac{\partial S}{\partial T} \right)_{P,N} T + S+ \left(\frac{\partial \mu}{\partial T} \right)_{P} N\right)~\mathrm{d}T = T~\mathrm{d}S + V~\mathrm{d}P.$$ Note that you would get the same expression even if you considered processes in which $\mu \mathrm{d}N$ wasn't zero, because both terms would cancel out. If one remembers that $T\mathrm{d}S = T\left(\frac{\partial S}{\partial P} \right)_{T,N}\mathrm{d}P + T\left(\frac{\partial S}{\partial T} \right)_{P,N}\mathrm{d}T$ then this simplifies to: $$\left(\left(\frac{\partial \mu}{\partial P} \right)_{T} N \right) \mathrm{d}P + \;\left(S+ \left(\frac{\partial \mu}{\partial T} \right)_{P} N\right)~\mathrm{d}T = V~\mathrm{d}P.$$ Dividing through by N: $$\left(\frac{\partial \mu}{\partial P} \right)_{T} \mathrm{d}P + \;\left(\bar{S}+ \left(\frac{\partial \mu}{\partial T} \right)_{P} \right)~\mathrm{d}T = \bar{V}~\mathrm{d}P.$$ This is true if and only if: $$\left(\frac{\partial \mu}{\partial P} \right)_{T} = \bar{V}~~;~~\left(\frac{\partial \mu}{\partial T} \right)_{P} = -\bar{S}.$$ These are correct relations and can also be deduced from the Gibbs-Duhem equation: $N\mathrm{d}\mu -V\mathrm{d}P + S\mathrm{d}T = 0$.
{ "domain": "chemistry.stackexchange", "id": 9009, "tags": "physical-chemistry, thermodynamics, enthalpy" }
Noise when dividing or multiplying signal in C
Question: I'm trying to create an echo effect and have managed to add multiple delays, but when I try to divide or multiply these numbers the signal is completely distorted and a lot of noise is added to the system. This is some of my program written in C. unsigned char txBuf0[AUDIO_BUF_SIZE]; unsigned char rxBuf0[AUDIO_BUF_SIZE]; int i, n, x; unsigned short Stor0[2000], Stor1[2000],Stor2[2000], Stor3[2000], z, ze; for(i=4; i < AUDIO_BUF_SIZE; i+= 4) { x= i/4; if(lastFullRxBuf == 0){ Stor3[x] = Stor2[x]/2; Stor2[x] = Stor1[x]/2; Stor1[x]= Stor0[x]/2; Stor0[x] = 0; Stor0[x] =((rxBuf0[i-4]) | (rxBuf0[i-3])<<8); z = (Stor0[x]) + (Stor1[x])+ (Stor2[x]) +(Stor3[x]); rxBuf0[i-4]= z; rxBuf0[i-3]= z>>8; txBuf0[i-4]=rxBuf0[i-4]; txBuf0[i-3]=rxBuf0[i-3]; } Answer: I don't think this code is doing what you intend it to do. However, you can do it more efficiently with the following code. unsigned char txBuf0[AUDIO_BUF_SIZE]; unsigned char rxBuf0[AUDIO_BUF_SIZE]; int i, n, x; unsigned short Stor0[2000], Stor1[2000],Stor2[2000], Stor3[2000], z, ze; unsigned short* txIntBuf0 = (unsigned short*) txBuf0; unsigned short* rxIntBuf0 = (unsigned short*) rxBuf0; int limit = (AUDIO_BUF_SIZE >> 1) - 2; x = 1; for( i = 0; i < limit; i+= 2 ) { if( lastFullRxBuf == 0 ) { Stor3[x] = Stor2[x] >> 1; Stor2[x] = Stor1[x] >> 1; Stor1[x] = Stor0[x] >> 1; Stor0[x] = rxIntBuf0[i]; z = Stor0[x] + Stor1[x] + Stor2[x] + Stor3[x]; rxIntBuf0[i] = z; txIntBuf0[i] = rxIntBuf0[i]; } x++; } Here are just some of the issues I see: 16-bit audio data is usually signed. Setting your rx buffer doesn't make sense. Isn't this where you got the data from? The if statement inside the loop can be put outside the loop (unless code is missing). It is quite possible for z to overflow (most likely your problem). Anyway, I hope this helps a bit. Ced
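On the overflow point: summing the current sample and three half-scale taps can exceed 16 bits, and wrapping around is heard as harsh broadband noise. A minimal sketch of the usual fix, doing the sum at full precision and saturating back into the signed 16-bit range (shown in Python for brevity; the tap gains are illustrative):

```python
def clamp16(v):
    """Saturate an intermediate sum into the signed 16-bit sample range."""
    return max(-32768, min(32767, v))

def mix_echo(cur, d1, d2, d3):
    """Mix the current sample with three decaying echo taps, doing the
    arithmetic at full precision and clamping instead of wrapping."""
    return clamp16(cur + d1 // 2 + d2 // 4 + d3 // 8)

assert mix_echo(100, 200, 400, 800) == 400      # 100 + 100 + 100 + 100
assert mix_echo(30000, 30000, 0, 0) == 32767    # would wrap without clamping
```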
{ "domain": "dsp.stackexchange", "id": 6257, "tags": "audio, time-domain" }
Many body physics - changing to $k$ space
Question: I have an example in my notes starting with Linear Chain: $$H=-t\sum_{\langle i i' \rangle} c_i^\dagger c_{i'} = -2t \sum_k c_k^\dagger c_k \cos{k}$$ I don't know where the $2 \cos{k}$ comes from. Answer: Up to a sign on the Fourier transform, the definition of $c_k$ is $$ c_k =\frac{1}{\sqrt N}\sum_j e^{ikj}c_j $$ You can get the formula for $c_k^\dagger$ by taking the conjugate of both sides. You can also invert the Fourier transform to get $$ c_j =\frac{1}{\sqrt N}\sum_k e^{-ikj}c_k $$ Again, you can take the conjugate of both sides to get the equation for $c_j^\dagger$. Proving your relation then just comes from plugging in the expressions for $c_j$ and $c_j^\dagger$ in terms of the $c_k$ and simplifying.
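One can also verify the diagonalized form numerically: the hopping matrix of an $N$-site ring has eigenvalues $-2t\cos k$ at the allowed momenta $k = 2\pi m/N$. A small sketch (numpy; the parameters are illustrative):

```python
import numpy as np

# Build the real-space hopping matrix of an N-site periodic chain,
# H[j, j+1] = H[j+1, j] = -t, and compare its spectrum to -2t*cos(k).
N, t = 8, 1.0
H = np.zeros((N, N))
for j in range(N):
    H[j, (j + 1) % N] = H[(j + 1) % N, j] = -t   # periodic boundary

k = 2 * np.pi * np.arange(N) / N        # allowed momenta
band = -2 * t * np.cos(k)               # dispersion from the text
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(band))
```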
{ "domain": "physics.stackexchange", "id": 37722, "tags": "quantum-mechanics, homework-and-exercises, fourier-transform, many-body, second-quantization" }
'Mini Twitter' in Python 3
Question: I've written a 'Mini Twitter' (that's what I've called it) in Python. It includes a text-based login screen and user panel where you can create Tweets, and change your username and password. Users' names, passwords and whether or not it is an administrator is logged in a text file. It also logs Tweets Users have made in a separate .txt file. There are 4 different .py files and 2 .txt files named: 'Administrator.py', 'main.py', 'Twitter_functions.py' and 'User.py' Please assess it and let me know how I can improve it. User.py: #User.py """ Zak July-August 2015 User.py In here is a class named User that has methods such as makeTweet and setPassword. There are also other functions that the class User uses. """ #strftime makes us able to make a note of the current date. from time import strftime #Create the class named User class User: ''' Class that acts as an online user and has methods such as 'make_tweet'. It takes 2 arguments, username and password. ''' #Init constructor. This is called as soon as an instance is created. def __init__(self, username, password): self.__password = password self.__username = username self.__creationDate = strftime("%d/%m/%Y") #This is the current date. #Boolean to show if this class is an Administrator. self.__isAdmin = False def __repr__(self): return "User({}, {})".format(self.__username,self.__creationDate) #Getters def get_password(self): return self.__password def get_username(self): return self.__username def get_creation_date(self): return self.__creationDate #Setters def set_password(self, password): self.__password = password def set_username(self, username): self.__username = username def make_tweet(self, text): '''Method To make a tweet. It writes text to a file in a specific format.''' textLen = len(text) if textLen > 140: #140 is the character limit in Twitter. raise Exception("Too many characters. 
Expected 140, got {}.".format(textLen)) with open("Tweets.txt","a+") as f: f.write("{} {}\n\t{}\n".format(self.__username,strftime("%d/%m/%Y"),text)) Administrator.py: #Administrator.py """ Zak July-August 2015 Administrator.py In here is a class named Admin that inherits from the User class. """ #Import the User class import User #Create a class named Admin that inherits from the User class class Admin(User.User): ''' Class that acts as an online Admin. It inherits from a class named User. It takes 2 arguments, username and password. ''' #Init constructor def __init__(self, username, password): #Call the parent's (User) init method. This saves us from rewriting code. super().__init__(username, password) #Boolean to show if this class is an Administrator. self.__isAdmin = True def __repr__(self): return "Administrator({}, {})".format(self.__username,self.__creationDate) Twitter_functions.py: #Twitter_functions.py """ Zak July-August 2015 Twitter_functions.py In here are functions that are used throughout my mini Twitter. Functions such as log_user() and find_users() are here. """ import User import Administrator import re #Function to log the user's name, creation date and password. def log_user(username,password,isAdmin=False): '''Writes three variables to a file.''' with open("Users.txt","a+") as f: f.write("{} {} {}\n".format(username,password,isAdmin)) def find_users(): '''Return a tuple that contains username,password,isAdmin.''' with open("Users.txt","r+") as f: lines = f.readlines() for line in lines: username, password,isAdmin = line.strip().split() yield username,password,isAdmin #This function acts as a log-in screen. It takes 2 arguments that must both be strings. 
def login(accUsername,accPassword): '''Returns an instance of a class if the 'login' was successful, otherwise False.''' for user in find_users(): #user[0] = a stored username, [1] = a stored password if accUsername == user[0] and accPassword == user[1]: #The account's username and password is valid: if user[2] == 'True': #If the account is an admin: #cAdmin = currentAdmin cAdmin = Administrator.Admin(accUsername,accPassword) return cAdmin else: #If the account is not an admin: cUser = User.User(accUsername,accPassword) return cUser #If no matchings are found: return False #This function acts as a sign-up. It takes 2 arguments that must both be strings. def signup(accUsername,accPassword): '''Returns a string saying whether the signup was successful or not.''' for user in find_users(): if user[0] == accUsername: return "Username is already in use." cUser = User.User(accUsername,accPassword) log_user(accUsername,accPassword) return "Signup Successful." #This function changes a username to specified text in the Users.txt def change_username(oldUsername, newUsername): '''Replaces text (usernames specifically) in a .txt file to something specified.''' #Open the file, and search all usernames, If it matches, replace it. with open("Users.txt","r") as f: if newUsername in f.read(): raise Exception("Username already exists.") f.seek(0) filedata = f.readlines() newdata = [x.strip().split() for x in filedata] #Example newdata: [[username, password, isAdmin]] for account in newdata: if account[0] == oldUsername: account[0] = newUsername #Now write the newdata to the file with open("Users.txt","w") as f: for account in newdata: #account[0] = username, [1] = password and [2] is isAdmin f.write("{} {} {}\n".format(account[0],account[1],account[2])) #This function is similar to change_username, and changes a password instead. 
def change_password(oldPassword,newPassword, username): '''Replaces text (passwords specifically) in a .txt file to something specified.''' #Open the file, and search all passwords, If it matches, replace it. with open("Users.txt","r") as f: #All file contents in a list filedata = f.readlines() #Get rid of spaces and newlines (\n) in the filedata variable. newdata = [x.strip().split() for x in filedata] for account in newdata: if account[0] == username and account[1] == oldPassword: account[1] = newPassword #Now write the newdata to the file with open("Users.txt","w") as f: for account in newdata: #account[0] = username, [1] = password and [2] is isAdmin f.write("{} {} {}\n".format(account[0],account[1],account[2])) main.py: #main.py """ Zak July-August 2015 main.py Here is the main part of my mini Twitter. It actually prints text, unlike the other half of my code. """ import Twitter_functions as tf def loggedin(account): print("""What do you want to do? 1) Make A Tweet 2) Change Username 3) Change Password """) selection = int(input("Please Enter Your Choice: ") ) #Make a Tweet if selection == 1: try: tweet = input("Enter Your Tweet Text: ") except KeyboardInterrupt: exit() except: loggedin(account) try: account.make_tweet(tweet) except Exception as e: print("Error! {}".format(str(e))) except: print("An error occurred. Restarting.") main() loggedin(account) #Change Username elif selection == 2: try: username = input("Please Enter Your New Username: ") except KeyboardInterrupt: exit() except: loggedin(account) #Check if the username is valid if not username.isalpha(): print("Username cannot contain symbols.") #This function edits Users.txt and replaces the correct usernames. It raises an #exception if the username already exists. 
try: tf.change_username(account.get_username(),username) except Exception as e: print("{}".format(str(e))) loggedin(account) account.set_username(username) print("Username set to: {}".format(account.get_username())) loggedin(account) #Change Password elif selection == 3: #Get the user to input his/her's current password. currentPassword = account.get_password() try: password = input("Please enter your current Password: ") except KeyboardInterrupt: exit() except: main() if password != currentPassword: print("Incorrect Password.") loggedin(account) elif password == currentPassword: password = input("Enter Your New Password: ") if " " in password or len(password) == 0: print("Password cannot contain ' ' (space) or nothing.") tf.change_password(account.get_password(),password,account.get_username()) account.set_password(password) print("Password set to: {}".format(account.get_password())) loggedin(account) #In Java, this would be a public static void main(){} def main(): print("\nWelcome To Mini Twitter!\n") #Ask the user if he or she wishes to signup. Then ask for a username and password try: shouldSignup = input("Do you want to signup? Y/N: ") except KeyboardInterrupt: #ctrl+c: exit() except: main() if shouldSignup.lower().startswith("y"): signingUp = True else: signingUp = False if signingUp: print("\n\nSign Up Menu:") else: print("\n\nLogin Menu:") try: username = input("Username: ") except KeyboardInterrupt: exit() except: main() if not username.isalpha(): print("Username cannot contain symbols or nothing.") main() try: password = input("Password: ") except KeyboardInterrupt: exit() except: main() if " " in password: print("Password cannot contain ' ' (space).") main() if signingUp: try: #Signup signedup = tf.signup(username,password) except: print("An error occurred while attempting to signup.. 
Restarting now.") main() print(signedup) main() else: #Simple Login account = tf.login(username,password) if not account: print("Incorrect Login Details.") main() print("Login Successful. Welcome, {}".format(account.get_username())) loggedin(account) #Call main. If it is imported, don't. if __name__ == "__main__": main() Examples: Users.txt: Bobby hello123 False blaze bobby False zxyo zxyo1111 False BlaZe Zak False boblington tommy False hihio hi False Tweets.txt: tommy 19/07/2015 Hello world. Just made this account! zxyo 21/07/2015 this is cool. Bob 24/07/2015 Hi, Im Bobby BlaZe 24/07/2015 hi there One thing I should note is that I don't really have any admin functionality as of now. I don't intend to add this, I just wanted to make it so I could add it in the future. Answer: The first thing I would say is that you don't need the User and Administrator classes in separate files. You can simply put them together in one module instead, especially since Administrator actually imports User anyway. You can import them so that you only need the bare class name too, and it would require only minor changes to your code. from userclasses import User, Administrator I also think that your make_tweet function doesn't need to separate these lines: textLen = len(text) if textLen > 140: #140 is the character limit in Twitter. You can just call len(text) in the if statement and save yourself a line of code and a variable declaration. It's also clearer because it's more obvious what the if statement means in that context. Also, you use open(file, 'a+') multiple times without needing to. a+ allows you to read and append to the file, while a will allow you to just append. In a few places you could omit the + as unnecessary. Speaking of unnecessary, you have a lot of comments about pretty self-explanatory things. #Create the class named User class User: Python is explicitly designed to be clear enough that something like this shouldn't be necessary.
Most people reading Python code will know this or what an init function is, unless your intention is for this to be teaching about how Python works. On the flipside, this docstring is not explanatory enough: def log_user(username,password,isAdmin=False): '''Writes three variables to a file.''' It's more useful to explain what the variables are and why they're useful to write to a file. Better would be def log_user(username,password,isAdmin=False): '''Writes a user's username, password and admin status to a file.''' Even if it's quite clear based on the parameter names, people can sometimes see a docstring without the parameter list and there's no reason to leave out this information in that case. Also, you don't need to try making private variables. There's no actual way to prevent code from accessing the attributes of a class. At any point code could just directly set someUser._User__password = "banana" (a double-underscore name is only mangled to _ClassName__name, which obscures the attribute but doesn't protect it). In Python the convention is to start a variable name with a single underscore to indicate that other people using your code should treat the variable as private, and if they don't then it's their responsibility, not yours. So, you don't need any of the setters or getters; you can just get and set the password directly with the dot syntax whenever you need to. I agree with shuttle87 about data structures, but for the current structure I like your yield setup for the user list. However, I think you should be prepared to handle getting data in the wrong format. If someone edits the file or a write is done incorrectly you would get an error from having too few arguments for the three variables you're trying to set.
But that error would be vague and unclear to a user, so you should raise a clearer error: for line in lines: try: username, password,isAdmin = line.strip().split() except ValueError: raise ValueError("User data file is in invalid format.") yield username,password,isAdmin You can still catch this error with a try except in cases where you want to continue on regardless. Also login can just not return anything if you don't get a User class. In the absence of a return statement Python functions will return None, and None evaluates as False in an if statement. I'd rewrite the change_password and change_username functions to just be one function, with a parameter that specifies what to change. That way it could also be expanded to give admin privileges if you need that later. But even at the moment, you're largely duplicating unneeded code. Also you probably shouldn't let people change other users' usernames. def change_userdata(username, password, newdata, type): '''Replaces text in a .txt file with a new value, returns True if successful.''' with open("Users.txt","r") as f: filedata = [x.strip().split() for x in f.readlines()] for account in filedata: if account[0] == username and account[1] == password: if type == "username": account[0] = newdata elif type == "password": account[1] = newdata elif type == "admin": account[2] = newdata else: raise ValueError("Invalid data type {}".format(type)) break else: return with open("Users.txt","w") as f: for account in filedata: #account [username, password, isAdmin] f.write("{} {} {}\n".format(*account)) return True I made some other changes there, but in particular I should explain the for else loop and *account. An else at the end of a Python for loop will run if no iteration of the for loop called break. You can see that I break from the loop when the relevant account has been found, as there's no need to keep going once the correct user has been found.
So if the user is never found, break is never run and instead the function will return early, signifying False if you check the result the function returns. As for *account, it's basically a Python operator to 'unpack' the list, returning all the values as a tuple. That way you don't need to write out each index since you can simultaneously pass all three of them to format. In main, I think you should actually have a function for each of the user's options. You can actually store a function in a list/dictionary and call it from there so it would be much more organised and easier to expand, like this: functions = [tweet, change_username, change_password] selection = int(input("Please Enter Your Choice: ")) try: functions[selection - 1]() except IndexError: print("Invalid input, there's no function for choice {}".format(selection)) I also think you're labouring the input validation, and bare excepts are always a bad idea. I'm not even sure what error you're trying to guard with them to be honest, so I'd entirely strip them out. Instead, when an error is occurring, you should do your best to either fix it or handle it specifically, like I've done above with the IndexError and ValueError. The KeyboardInterrupt calling exit is a good example too. Though to be cleaner, it'd be better to define your own input function and keep using it throughout, since every input location has essentially the same setup: def my_input(s): try: return input(s) except KeyboardInterrupt: exit() You can then just add more if there are other exceptions to be handled. You can assign expressions to variables as booleans in Python, so you can shorten this: if shouldSignup.lower().startswith("y"): signingUp = True else: signingUp = False to this: signingUp = shouldSignup.lower().startswith("y") Also I find that validating input works best when using a loop rather than recursion: while True: username = my_input("Username: ") if username.isalpha(): break print("Username cannot contain symbols or nothing.")
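The for/else and *-unpacking idioms used in change_userdata above can be seen in isolation in a small runnable sketch (the names and data here are made up for illustration):

```python
# Toy demonstration of for/else and *-unpacking on a user-record list.
accounts = [["alice", "pw1", "False"], ["bob", "pw2", "True"]]

def render(name):
    """Return the stored file line for a user, or None when absent."""
    for account in accounts:
        if account[0] == name:
            break               # found it; `account` stays bound
    else:                       # runs only when the loop never hit `break`
        return None
    return "{} {} {}".format(*account)   # *-unpacking spreads the fields

assert render("bob") == "bob pw2 True"
assert render("carol") is None
```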
{ "domain": "codereview.stackexchange", "id": 15387, "tags": "python, python-3.x" }
How did this rock dome (pictured) form?
Question: I saw this rock formation near Hveravellir, Iceland. It is probably of volcanic origin and looks like a dome. It is nearly symmetric and appears to consist of hardened lava maybe, with several very big cracks that divide it into sections. It is quite far away from the lava flows of the old, very flat Strytur volcano that is about 2 km away. I have several hypotheses of my own, but I am not a geologist. Could it be a small "failed volcano"? Answer: The vast majority of magma never even makes it out to the surface - most is simply crystallised at depth in magma chambers which dead-end several kilometres below the surface, or are injected as dykes or sills within the host strata. By OrbitalPete, posted on reddit.com If magma rose up below that mound it probably got closer than several kilometres to the surface; and the theory could be tested with one drill hole. I'm assuming the mound itself is not lava as such but bedrock that's been forced up.
{ "domain": "earthscience.stackexchange", "id": 1958, "tags": "volcanology, geomorphology" }
Are photons electromagnetic waves, quantum waves, or both?
Question: Are photons electromagnetic waves, quantum waves, or both? If I subdivide an electromagnetic field into smaller electromagnetic fields, should I eventually find an electromagnetic wave of a photon? How can individual quantum waves combine to form the macroscopic observable of an electromagnetic field? Answer: Are photons electromagnetic waves, quantum waves, or both? A great ensemble of photons builds up the electromagnetic wave. If I subdivide an electromagnetic field into smaller electromagnetic fields, should I eventually find an electromagnetic wave of a photon? This experiment has been done with lasers brought down to individual-photon strength in this double-slit experiment: The movie shows the diffraction of individual photons from a double slit recorded by a single-photon imaging camera (image intensifier + CCD camera). The single-particle events pile up to yield the familiar smooth diffraction pattern of light waves as more and more frames are superposed (Recording by A. Weis, University of Fribourg). You ask: How can individual quantum waves combine to form the macroscopic observable of an electromagnetic field? It needs some strong math background, but handwaving: Both the classical electromagnetic wave and the quantum photon rely on solutions of Maxwell's equations. The individual photons carry information about the frequency ($E = h\nu$), the spin and the electromagnetic potential in the equation, since the quantum mechanical wavefunction of the photon (which gives the probability distribution of the photon) and the classical wave depend on the same equations. There is a coherent synergy, and the zillions of photons add up to give the classical wave.
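The frame-by-frame build-up described in the movie can be mimicked with a toy simulation: draw individual "photon" positions from an interference-like probability density $\propto \cos^2 x$ and histogram them; the smooth fringe pattern emerges only in the accumulation (a sketch with arbitrary parameters, not a physical model):

```python
import math
import random

# Toy build-up of a fringe pattern from single events: sample hit
# positions on [-pi, pi] from a density proportional to cos^2(x)
# by rejection sampling, then accumulate them into a histogram.
random.seed(0)

def photon_hit():
    while True:
        x = random.uniform(-math.pi, math.pi)
        if random.random() < math.cos(x) ** 2:
            return x

bins = [0] * 16
for _ in range(20000):
    x = photon_hit()
    bins[min(15, int((x + math.pi) / (2 * math.pi) * 16))] += 1

# The central bright fringe collects far more hits than the dark
# fringes near x = -pi/2 and x = +pi/2.
assert bins[8] > 4 * bins[4] and bins[8] > 4 * bins[12]
```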
{ "domain": "physics.stackexchange", "id": 13908, "tags": "electromagnetic-radiation, photons" }
Is there a supermassive black hole in the center of every single galaxy?
Question: I've read that there's a supermassive black hole in the center of every single galaxy; we have confirmed, for example, that there is one in the center of ours and one in the Andromeda galaxy, but does that confirm there's one in every single galaxy? And if so, why? Was it born at the beginning of the galaxy's birth? Answer: The fact that we have a supermassive black hole at the center of our galaxy of course is not sufficient in order to state that there is one black hole in every single galaxy out there. You could never have a conclusive and certain confirmation unless you're able to identify all black holes and all galaxies present in the universe, which is not so easy to do. Nevertheless, astronomers do believe that supermassive black holes lie at the center of all large galaxies, as in our own Milky Way. If we focus for example on stellar black holes, they are very difficult to detect, and judging from the number of stars large enough to produce them, scientists have estimated that there are as many as ten million to a billion in the Milky Way alone. This could be interpreted as a strong suggestion that every galaxy has a black hole at the center, but it's not a conclusive proof. Another possible explanation of this hypothesis relies on the fact that one possible mechanism (not the only one, of course) for the formation of supermassive black holes involves a chain reaction of collisions of stars in compact star clusters that results in extremely massive stars, which then collapse to form intermediate-mass black holes. The star clusters then sink to the center of the galaxy, where the intermediate-mass black holes merge to form a supermassive black hole.
{ "domain": "physics.stackexchange", "id": 91523, "tags": "black-holes, astrophysics, galaxies" }
How to represent the Xf axis of the final frame (end-effector) in function of euler angles?
Question: I am working on a robot arm with 7 rotational joints. I can generate the T of any position; what I want to do is to represent the vector Xf of the end-effector frame in the base frame. How can I do it? Answer: Assuming: * the T you mention is a 4x4 transformation matrix * by "any point you want" you meant any given pose (a pose is position and orientation) So the T transformation matrix links the end-effector frame to the base frame. Let's assume that you have a given position (X = 10, Y = 20, Z = 30) and orientation (A = 0, B = 0, C = 0). For this pose, the T matrix is trivial to make: $$T=\begin{bmatrix} 1 & 0& 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 30 \\ 0 & 0 & 0 & 1\\ \end{bmatrix}$$ This matrix can be used to calculate the coordinates of a point given in the end-effector frame in the base frame. E.g. the point (X = 1, Y = 2, Z = 3) defined in the end-effector frame will have the following coordinates in the base frame: $$ \begin{bmatrix} 1 & 0& 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 30 \\ 0 & 0 & 0 & 1\\ \end{bmatrix} \times \begin{bmatrix} 1 \\ 2 \\ 3 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 11 \\ 22 \\ 33 \\ 1 \\ \end{bmatrix} $$ If you want to represent the coordinate frame visually, you will need to take one point on each axis and the origin and see what coordinates correspond to these points in the base frame, so: $$P_X= \begin{bmatrix} 1 & 0& 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 30 \\ 0 & 0 & 0 & 1\\ \end{bmatrix} \times \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 11 \\ 20 \\ 30 \\ 1 \\ \end{bmatrix} $$ $$P_Y= \begin{bmatrix} 1 & 0& 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 30 \\ 0 & 0 & 0 & 1\\ \end{bmatrix} \times \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 10 \\ 21 \\ 30 \\ 1 \\ \end{bmatrix} $$ $$P_Z= \begin{bmatrix} 1 & 0& 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 30 \\ 0 & 0 & 0 & 1\\ \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 10 \\ 20 \\ 31 \\ 1
\\ \end{bmatrix} $$ $$P_O= \begin{bmatrix} 1 & 0& 0 & 10 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & 30 \\ 0 & 0 & 0 & 1\\ \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 10 \\ 20 \\ 30 \\ 1 \\ \end{bmatrix} $$ In order to visualize the coordinate system, you need to draw lines between $P_O P_X$, $P_O P_Y$ and $P_O P_Z$. If the coordinate system drawn is too small, you can use a point on each axis which is further away from the origin (instead of always using 1 you can use 10, 100 or even larger numbers). The above method also works if the rotation part of the transformation matrix is different from the identity matrix; this example was chosen for ease of calculation.
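The four matrix products above are straightforward to reproduce programmatically; here is a sketch in Python/numpy using the example T from the text:

```python
import numpy as np

# Transform the origin and unit axis points of the end-effector frame
# into the base frame, using the example T from the answer above.
T = np.array([[1, 0, 0, 10],
              [0, 1, 0, 20],
              [0, 0, 1, 30],
              [0, 0, 0,  1]], dtype=float)

def to_base(p):
    """Map a point given in the end-effector frame into the base frame."""
    return (T @ np.append(p, 1.0))[:3]   # homogeneous coordinates

assert np.allclose(to_base([1, 0, 0]), [11, 20, 30])  # P_X
assert np.allclose(to_base([0, 1, 0]), [10, 21, 30])  # P_Y
assert np.allclose(to_base([0, 0, 1]), [10, 20, 31])  # P_Z
assert np.allclose(to_base([0, 0, 0]), [10, 20, 30])  # P_O (the origin)
```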
{ "domain": "robotics.stackexchange", "id": 2174, "tags": "kinematics, inverse-kinematics, forward-kinematics, dh-parameters" }
Filtering a list based on conditions
Question: I'm currently filtering a list based on several conditions, which may or may not be present: def requests = Request .list() .findAll { if (exportFrom) { it.tsCreated.millis >= exportFrom.millis } else { it } } .findAll { if (exportTo) { it.tsCreated.millis <= exportTo.millis } else { it } } .findAll { if (clientId) { it.resolvedClient?.clientId == clientId } else { it } } .findAll { if (params.exportType) { it.typeName == params.exportType } else { it } } .findAll { if (params.exportStatus) { it.status.toString() == params.exportStatus } else { it } } For instance, exportFrom is a date coming from the user interface. If it's present, we only want to export requests that occurred on that date or later. If not, we can disregard that test. it will never be 0 or "", as it from a list of objects that have been loaded from the database and so have already been validated. While this is readable and works, it seems like a poor solution and probably won't be very performant on a large dataset. Is there a Groovy idiom for this kind of series of selective filters? Answer: Few problems here. Fragility and possible inefficiency by returning {it} rather than true Iterating over the collection more often than is necessary (even though the later iterations may be over smaller sets). Duplication of the same pattern Point 1: Fragility Just return true (oh, and not { true }). That's what you want. By returning the object you risk some odd circumstance where the object evaluates to false. Why expose yourself to that risk? And returning true is just cheaper. Point 2: Repeated iteration Each call to findAll is returning a new collection, which you iterate over with the next call. That's a waste. You can do it with one findAll and one closure. The simplest (and most naive) way to do that would be this: def requests = Request .list() .findAll { (exportFrom ? it.tsCreated.millis >= exportFrom.millis : true) && (exportTo ? 
it.tsCreated.millis <= exportTo.millis : true) && (clientId ? it.resolvedClient?.clientId == clientId : true) && (params.exportType ? it.typeName == params.exportType : true) && (params.exportStatus ? it.status.toString() == params.exportStatus : true) && true } That will do just what your code does, only without creating four intermediate copies of the list. It is, however, ugly. So one approach would be to turn each of those criteria into a function which takes a parameter and returns true or false. So you would have def requests = Request .list() .findAll { cond1(it) && cond2(it) && cond3(it) && cond4(it) && cond5(it) } Which is at least cleaner. However, let's consider Point 3: Duplication of a pattern You have a set of basic conditions. Each, if true, has a matching predicate with which you test the list item for validity. If the condition is not valid/present, you accept the list item. So... Imagine you create a Criteria class. It should have two properties criterion: the condition which, if true, requires a list element to be tested. Should probably be a closure. predicate: the test (another closure) to apply to the list element The Criteria class should also have a reject method which might look like this boolean reject (Object item) { if (this.criterion(item)) ! this.predicate(item) else false } (I used Object because I'm not sure what type is actually in your list) Now you can create a list of Criteria objects, each containing a criterion and a matching test. Let's say you call it criteriaList. Now the main bit of code can look like this: def requests = Request .list() .findAll { x -> ! criteriaList.find { c -> c.reject(x) } } If none of the criteria objects reject the item, then the find call returns null, and ! null evaluates to true. No rejections, so we like this item.
On the other hand, if a list item is going to fail one of the tests, then find will return the first Criteria object in the list which rejected the item (and not try any others after that). So in that case ! criteriaList.find will evaluate to false and the item will be discarded. Can you see why I used a reject method (which returns true if the test fails) rather than an accept method (which would return true if the test succeeds)? I did that because find returns as soon as it finds a match, so we want to return as soon as one Criteria object rejects the list item. There are other ways to do this (e.g. closure composition, although that's a more functional approach which you said isn't your style) but I hope this is a good example which will help you think of your own solutions.
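The Criteria idea translates almost directly to other languages; here is a compact sketch in Python rather than Groovy (all names and data are illustrative), using any(), which short-circuits like Groovy's find:

```python
class Criteria:
    """One optional filter: when `criterion` holds, `predicate` must too."""
    def __init__(self, criterion, predicate):
        self.criterion = criterion
        self.predicate = predicate

    def reject(self, item):
        # Reject only when the filter applies AND the item fails its test.
        return self.criterion(item) and not self.predicate(item)

export_from = 100   # pretend request parameter; None would disable the test
criteria_list = [
    Criteria(lambda r: export_from is not None,
             lambda r: r["created"] >= export_from),
]

requests = [{"created": 50}, {"created": 150}]
kept = [r for r in requests
        if not any(c.reject(r) for c in criteria_list)]
assert kept == [{"created": 150}]
```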
{ "domain": "codereview.stackexchange", "id": 12469, "tags": "performance, groovy" }
Velocity-Dependent Potential and Helmholtz Identities
Question: I'm currently working through the book Heisenberg's Quantum Mechanics (Razavy, 2010), and am reading the chapter on classical mechanics. I'm interested in part of their derivation of a generalized Lorentz force via a velocity-dependent potential. I understand the generalized force $$F_i = -\frac{\partial V}{\partial x_i} + \frac{d}{dt}\left(\frac{\partial V}{\partial v_i}\right)$$ that they derive from a Lagrangian of the form $L = \frac{1}{2}m|\vec v|^2 - V(\vec r,\vec v,t)$. However, in the next (critical) step of the derivation, the author cites a theorem from Helmholtz saying ...according to Helmholtz, for the existence of the Lagrangian, such a generalized force can be at most a linear function of acceleration, and it must satisfy the Helmholtz identities. The three Helmholtz identities are then listed as: $$\frac{\partial F_i}{\partial \dot{v_j}} = \frac{\partial F_j}{\partial \dot{v_i}},$$ $$\frac{\partial F_i}{\partial v_j} + \frac{\partial F_j}{\partial v_i} = \frac{d}{dt}\left(\frac{\partial F_i}{\partial \dot{v_j}} + \frac{\partial F_j}{\partial \dot{v_i}}\right),$$ $$\frac{\partial F_i}{\partial x_j} - \frac{\partial F_j}{\partial x_i} = \frac{1}{2}\frac{d}{dt}\left(\frac{\partial F_i}{\partial v_j} - \frac{\partial F_j}{\partial v_i}\right).$$ I'm trying to understand where this theorem comes from. Razavy cited an 1887 paper by Helmholtz. I was able to find a PDF online, but it is in German, so I could not verify whether or not it proved the theorem. Additionally, I could not find it in any recent literature. I searched online and in Goldstein's Classical Mechanics. The only similar concept that I can find is in the Inverse problem for Lagrangian mechanics, where we have three equations known as Helmholtz conditions. Are these two concepts one and the same? If so, how should I interpret the function $\Phi$ and the matrix $g_{ij}$ that appear in the Helmholtz conditions I found online?
If the cited theorem from Razavy does not relate to the inverse Lagrangian problem, could I have some help finding the right direction? Answer: We are interested in whether a given force $$ {\bf F}~=~{\bf F}({\bf r},{\bf v},{\bf a},t) \tag{1}$$ has a velocity-dependent potential $$U~=~U({\bf r},{\bf v},t),\tag{2}$$ which by definition means that $$ {\bf F}~\stackrel{?}{=}~\frac{d}{dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}. \tag{3} $$ If we define the potential part of the action as $$ S_p~:=~\int \!dt~U,\tag{4}$$ then the condition (3) can be rewritten with the help of a functional derivative as $$ F_i(t)~\stackrel{(2)+(3)+(4)}{=}~ -\frac{\delta S_p}{\delta x^i(t)}, \qquad i~\in~\{1,\ldots,n\}, \tag{5} $$ where $n$ is the number of spatial dimensions. It follows from eqs. (2) & (3) that in the affirmative case the force ${\bf F}$ must be an affine function in acceleration ${\bf a}$. Since functional derivatives commute $$ \frac{\delta}{\delta x^i(t)} \frac{\delta S_p}{\delta x^j(t^{\prime})} ~=~\frac{\delta}{\delta x^j(t^{\prime})} \frac{\delta S_p}{\delta x^i(t)},\tag{6}$$ we derive the following consistency condition (7) for a force with a velocity-dependent potential $$ \frac{\delta F_i(t)}{\delta x^j(t^{\prime})} ~\stackrel{(5)+(6)}{=}~[(i,t) \longleftrightarrow (j,t^{\prime})].\tag{7} $$ Eq. (7) is a functional analog of a Maxwell relation, and equivalent to the Helmholtz conditions$^1$ $$ \begin{align} \frac{\partial F_i(t)}{\partial x^j(t)} ~-~\frac{1}{2}\frac{d}{dt}\frac{\partial F_i(t)}{\partial v^j(t)} ~+~\frac{1}{4}\frac{d^2}{dt^2}\frac{\partial F_i(t)}{\partial a^j(t)}~&=~+[i \longleftrightarrow j], \cr \frac{\partial F_i(t)}{\partial v^j(t)} ~-~\frac{d}{dt}\frac{\partial F_i(t)}{\partial a^j(t)} ~&=~-[i \longleftrightarrow j], \cr \frac{\partial F_i(t)}{\partial a^j(t)}~&=~+[i \longleftrightarrow j] .\end{align}\tag{8} $$ [The above form (8) of the Helmholtz conditions can be simplified a bit.]
Sketched systematic proof of the Helmholtz conditions (8). The distribution on the LHS of eq. (7) reads $$ \begin{align} \frac{\delta F_i(t)}{\delta x^j(t^{\prime})} &~\stackrel{(1)}{=}~\left[\frac{\partial F_i(t)}{\partial x^k(t)} ~+~ \frac{\partial F_i(t)}{\partial v^k(t)}\frac{d}{dt} ~+~ \frac{\partial F_i(t)}{\partial a^k(t)}\frac{d^2}{dt^2}\right] \frac{\delta x^k(t)}{\delta x^j(t^{\prime})}\cr &~=~\left[\frac{\partial F_i(t)}{\partial x^j(t)} ~+~ \frac{\partial F_i(t)}{\partial v^j(t)}\frac{d}{dt} ~+~ \frac{\partial F_i(t)}{\partial a^j(t)}\frac{d^2}{dt^2}\right]\delta(t\!-\!t^{\prime})\cr &~=~\left[\frac{\partial F_i(t)}{\partial x^j(t)} ~-~ \frac{\partial F_i(t)}{\partial v^j(t)}\frac{d}{dt^{\prime}} ~+~ \frac{\partial F_i(t)}{\partial a^j(t)}\frac{d^2}{dt^{\prime 2}}\right]\delta(t\!-\!t^{\prime}) .\end{align}\tag{9} $$ Let us introduce for later convenience new coordinates $$ t^{\pm}~:=~\frac{t \pm t^{\prime}}{2} \quad\Leftrightarrow\quad \left\{\begin{array}{c} t~=~ t^++t^- \cr t^{\prime}~=~ t^+-t^-\end{array} \right\} \quad\Rightarrow\quad \frac{d}{dt^{\pm}}~=~ \frac{d}{dt} \pm \frac{d}{dt^{\prime}}.\tag{10} $$ If we introduce a testfunction $f\in C^{\infty}_c(\mathbb{R}^2)$ with compact support, there are no boundary terms when we integrate by parts: $$ \begin{align} \iint_{\mathbb{R}^2} \! dt~dt^{\prime}&~f(t^+,t^-)~\frac{\delta F_i(t)}{\delta x^j(t^{\prime})} \cr \stackrel{(9)}{=}~~~~&2\iint_{\mathbb{R^2}} \! dt^+~ dt^-~ f(t^+,t^{-})\left[\frac{\partial F_i(t)}{\partial x^j(t)} - \frac{\partial F_i(t)}{\partial v^j(t)}\frac{d}{dt^{\prime}} + \frac{\partial F_i(t)}{\partial a^j(t)}\frac{d^2}{dt^{\prime 2}} \right] \delta(2t^-) \cr \stackrel{\text{int. by parts}}{=}&2\iint_{\mathbb{R^2}} \! dt^+~ dt^-~ \delta(2t^-)\left[\frac{\partial F_i(t)}{\partial x^j(t)} + \frac{\partial F_i(t)}{\partial v^j(t)}\frac{d}{dt^{\prime}} + \frac{\partial F_i(t)}{\partial a^j(t)}\frac{d^2}{dt^{\prime 2}} \right] f(t^+,t^{-})\cr =~~~~&\int_{\mathbb{R}} \! 
dt^+~\left[\frac{\partial F_i(t^+)}{\partial x^j(t^+)} + \frac{\partial F_i(t^+)}{\partial v^j(t^+)}\frac{d}{dt^{\prime}} + \frac{\partial F_i(t^+)}{\partial a^j(t^+)}\frac{d^2}{dt^{\prime 2}} \right] f(t^+,0) \cr \stackrel{(10)}{=}~~~&\int_{\mathbb{R}} \! dt^+~\left[\frac{\partial F_i(t^+)}{\partial x^j(t^+)} + \frac{\partial F_i(t^+)}{\partial v^j(t^+)}\frac{1}{2}\left(\frac{d}{dt^+}-\frac{d}{dt^-}\right)\right. \cr &+\left. \frac{\partial F_i(t^+)}{\partial a^j(t^+)}\frac{1}{4}\left(\frac{d}{dt^+}-\frac{d}{dt^-}\right)^2 \right] f(t^+,0)\cr \stackrel{\text{int. by parts}}{=}&\int_{\mathbb{R}} \! dt^+~\left[\left(\frac{\partial F_i(t^+)}{\partial x^j(t^+)}-\frac{1}{2}\frac{d}{dt^+}\frac{\partial F_i(t^+)}{\partial v^j(t^+)}+\frac{1}{4}\frac{d^2}{dt^{+ 2}}\frac{\partial F_i(t^+)}{\partial a^j(t^+)} \right)\right. \cr &+\left.\frac{1}{2}\left(\frac{d}{dt^+}\frac{\partial F_i(t^+)}{\partial a^j(t^+)}- \frac{\partial F_i(t^+)}{\partial v^j(t^+)}\right)\frac{d}{dt^-} + \frac{1}{4}\frac{\partial F_i(t^+)}{\partial a^j(t^+)}\frac{d^2}{dt^{- 2}} \right] f(t^+,0) .\end{align}\tag{11} $$ Now compare eqs. (7) & (11) to derive the Helmholtz conditions (8). We get 3 conditions because each order of $t^-$-derivatives of the testfunction $f$ along the diagonal $t^-=0$ are independent. There is an additional minus sign in the middle condition (8) because $t^-$ is odd under $t\leftrightarrow t^{\prime}$ exchange. $\Box$ It is in principle straightforward to use the same proof technique to generalize the Helmholtz conditions (8) to the case where the force (1) and potential (2) depend on higher time-derivatives. -- $^1$ The other Helmholtz conditions mentioned on the Wikipedia page of the inverse problem for Lagrangian mechanics address a much more difficult problem: Given a set of EOMs, we possibly have to rewrite them before they might have a chance of becoming on the form: functional derivative $\approx 0$. See also this related Phys.SE post.
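A quick sanity check of the identities quoted in the question, for the simplifying special case of the Lorentz force with a static, curl-free electric field $\vec E = -\vec\nabla\phi$ and a constant magnetic field $\vec B$: take $$F_i~=~q\left(E_i(\vec r) + \epsilon_{ijk}\,v_j B_k\right).$$ Then $$\frac{\partial F_i}{\partial \dot v_j}~=~0~=~\frac{\partial F_j}{\partial \dot v_i},$$ since $\vec F$ does not depend on the acceleration; $$\frac{\partial F_i}{\partial v_j} + \frac{\partial F_j}{\partial v_i}~=~q\left(\epsilon_{ijk}+\epsilon_{jik}\right)B_k~=~0~=~\frac{d}{dt}\left(\frac{\partial F_i}{\partial \dot v_j} + \frac{\partial F_j}{\partial \dot v_i}\right),$$ by antisymmetry of $\epsilon_{ijk}$; and $$\frac{\partial F_i}{\partial x_j} - \frac{\partial F_j}{\partial x_i}~=~q\left(\partial_j E_i - \partial_i E_j\right)~=~0~=~\frac{1}{2}\frac{d}{dt}\left(\frac{\partial F_i}{\partial v_j} - \frac{\partial F_j}{\partial v_i}\right),$$ because $\vec\nabla\times\vec E = 0$ while $\partial F_i/\partial v_j = q\,\epsilon_{ijk}B_k$ is constant in time. So all three identities hold, consistent with the familiar velocity-dependent potential $U = q\phi - q\,\vec v\cdot\vec A$ with $\vec B = \vec\nabla\times\vec A$.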
{ "domain": "physics.stackexchange", "id": 59761, "tags": "forces, lagrangian-formalism, mathematical-physics, potential-energy, velocity" }
Can the electron configuration of Te be written that way?
Question: Normally, the electron configuration of Te is known as: $$\begin{aligned} {[Kr]} 5s^2 \ce{4d^10} 5p^4 \end{aligned}$$ Then, one day I was asked in an exam if this can also be written as: $$\begin{aligned} {[Kr]} 5s^2 \ce{4d^10} 5p^3 6s^1\end{aligned}$$ I answered that it couldn't. But my answer turned out to be wrong. If this is correct, why is it allowed to be written like this? And what general rule can I learn for such tricky questions? Answer: The second configuration allows Te to take advantage of the relative stability of half-full shells. Having the 5p and 6s orbitals both half-populated is more stable than having 4 electrons in the 5p orbital. Similar behavior is what allows carbon to form 4 bonds. The $\ce{s^1p^3}$ configuration is more stable than the $\ce{s^2p^2}$. Tellurium shows the behavior while oxygen does not as a result of the overlap between the 5p and 6s orbitals in the larger atoms. The same degree of overlap does not occur with the 2p and 3s orbitals.
{ "domain": "chemistry.stackexchange", "id": 1995, "tags": "physical-chemistry, atoms, electrons, electronic-configuration" }
recommender systems : how to deal with items that change over time?
Question: Let's say I am building a recommender system where items change through time. We suppose that each transaction is composed of: an item $i$ in a list of items $(i_1, i_2, i_3, .., i_m)$. a user $u$ in a list of users $(u_1, u_2, u_3, ..., u_n)$. a date $t$ in a list of dates $(t_1, t_2, ... t_k)$. We suppose that items have underlying features that change over time. For example, if we consider retail products, underlying features could be: The discount level applied to the item at the time of the transaction (5%, 10%, 20%, 30%, ...). Another example: if we consider financial stocks, underlying features that change over time could be: The stock's situation at the time of the transaction (underpriced or overpriced). The central bank's policy at the time of the transaction (low interest rates, medium interest rates, high interest rates). We suppose that these underlying features have a strong impact on users; they completely drive the decision to buy an object or not. If we consider two items $i_1$ and $i_2$, at time $t_1$, a given user $u_1$ could prefer $i_1$ over $i_2$ because $i_1$'s underlying features are more interesting than $i_2$'s. At a different time, maybe $u_1$ could be more interested in $i_2$ than $i_1$. My question is: how does one take into account underlying features that change over time in recommender systems such as user-user collaborative filtering, SVD, ALS...? Answer: You will have to look for incremental, online, or dynamic versions of classic recommender system algorithms. Those are the terms associated with changes over time. Another option is reinforcement learning. The reinforcement learning framework can also model changes over time. Most of the work there has been done on multi-armed bandits for recommendations.
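To make the "online" idea concrete, here is a minimal sketch of streaming matrix factorization where each transaction carries one time-varying item feature (e.g. the discount at purchase time) and every event triggers one SGD step. The model, names, and numbers are all illustrative, not a reference implementation:

```python
import random

random.seed(0)
K, lr, reg = 4, 0.05, 0.02          # latent dimension, learning rate, L2 weight
P, Q, W = {}, {}, [0.0]             # user factors, item factors, feature weight

def vec():
    return [random.uniform(-0.1, 0.1) for _ in range(K)]

def update(user, item, rating, feat):
    """One online SGD step on a single (user, item, rating, feature) event."""
    p = P.setdefault(user, vec())
    q = Q.setdefault(item, vec())
    pred = sum(pi * qi for pi, qi in zip(p, q)) + W[0] * feat
    err = rating - pred
    for k in range(K):              # simultaneous update using the old values
        p[k], q[k] = (p[k] + lr * (err * q[k] - reg * p[k]),
                      q[k] + lr * (err * p[k] - reg * q[k]))
    W[0] += lr * (err * feat - reg * W[0])
    return abs(err)

# A repeated stream of (user, item, rating, discount-at-purchase) events:
events = [("u1", "i1", 5.0, 0.3), ("u1", "i2", 1.0, 0.0),
          ("u2", "i1", 4.0, 0.2)] * 50
errs = [update(*e) for e in events]
print(f"mean abs error, first 10 vs last 10 events: "
      f"{sum(errs[:10]) / 10:.2f} vs {sum(errs[-10:]) / 10:.2f}")
```

Because the factors are nudged on every incoming event, an item whose feature drifts (say, a changing discount level) keeps pulling its representation toward the current behavior, which is the basic mechanism the "incremental/online" literature builds on.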
{ "domain": "datascience.stackexchange", "id": 9239, "tags": "recommender-system" }
Relation between logspace-uniform circuits and P-uniform circuits
Question: In the book "Computational Complexity" by Arora and Barak, on page 112, they state that: Theorem 6.15: A language has logspace-uniform circuits of polynomial size iff it is in P. The proof of this one is left as an exercise to the reader. I think both directions are trivial: => seems trivial, as a logspace TM that generates a circuit also runs in polynomial time, and hence is a P-uniform circuit, which is part of P. <= seems trivial, as a language that has a polynomial-time TM can be transformed into a circuit with Cook-Levin's theorem in logspace. However, what I don't get is why Theorem 6.15 explicitly states that the circuits must be of "polynomial size". How can there exist a logspace-uniform circuit that isn't polynomial in size? The logspace computable function itself cannot exceed a polynomial bound, so how can it produce a circuit of superpolynomial size? Also, this theorem would imply that logspace-uniform circuits comprise the same languages as P-uniform circuits, which seems very unintuitive to me. I can't find any information on the relation between logspace-uniform and P-uniform circuits on the web, so my assumption that they are equal is probably false, but I fail to see why.
Specifically, an AC$^0$-reduction to CVP can be converted to a uniform circuit (the notion of uniformity depends on the exact format of circuits, but it can probably be taken all the way down to AC$^0$). The same phenomenon occurs whenever we have a universal circuit (in the case of P, this is the circuit solving CVP). For a recent example, see this paper on comparator circuits.
{ "domain": "cs.stackexchange", "id": 2573, "tags": "polynomial-time, circuits" }
Acceleration of a steady line vortex
Question: In a question, I have to find the acceleration of a fluid parcel in a steady line vortex. I am given that $u_\theta=\frac{A_0}{r}$. So for a steady line vortex, the parcels are following circular paths; therefore in cylindrical coordinates, $u_r=u_z=0$ and the acceleration comes from the Lagrangian derivative $\frac{D\vec{u}}{Dt}=\frac{\partial\vec{u}}{\partial{t}}+(\vec{u}\cdot\vec{\nabla})\vec{u}$ Evaluating this for the velocity given above gives a zero acceleration, as the only term from $\vec{u}\cdot\vec{\nabla}$ that gives a non-zero derivative is $\partial/\partial{r}$, but then $u_r=0$ which kills it. However this doesn't ring true, as from just considering circular motion, $\vec{a}=-\frac{u_\theta^2}{r}\hat{r}=-\frac{A_0^2}{r^3}\hat{r}$. If I use the identity $(\vec{u}\cdot\vec{\nabla})\vec{u}=\vec{\omega}\times\vec{u}+\vec{\nabla}(\frac{\vec{u}\cdot\vec{u}}{2})$ then it can be shown that the vorticity is zero and the expected acceleration is obtained. How can this not work when evaluating the acceleration before the identity is used!? Three of us have puzzled over this and got nowhere! Answer: You need to remember to differentiate your unit vectors. In cylindrical coordinates the derivatives of the unit vectors are not simply zero as they are in Cartesian coordinates. They're fairly simple to work out though.
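To spell out the unit-vector bookkeeping the answer refers to (a sketch using the standard cylindrical-coordinate results): along a circular parcel path, $$\frac{d\hat r}{dt} = \dot\theta\,\hat\theta, \qquad \frac{d\hat\theta}{dt} = -\dot\theta\,\hat r,$$ so with $\vec u = u_\theta\,\hat\theta$, $\dot\theta = u_\theta/r$, and $u_\theta = A_0/r$ constant on a streamline, $$\frac{D\vec u}{Dt} = u_\theta\,\frac{d\hat\theta}{dt} = -u_\theta\dot\theta\,\hat r = -\frac{u_\theta^2}{r}\,\hat r = -\frac{A_0^2}{r^3}\,\hat r,$$ which is exactly the expected centripetal acceleration. Equivalently, when $(\vec u\cdot\vec{\nabla})\vec u$ is written out in cylindrical components, its $\hat r$ component picks up the extra term $-u_\theta^2/r$ precisely because $\partial\hat\theta/\partial\theta = -\hat r$.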
{ "domain": "physics.stackexchange", "id": 8061, "tags": "fluid-dynamics, navier-stokes" }
Noticing that Newtonian gravity and electrostatics are equivalent, is there also a relationship between the general relativity and electrodynamics?
Question: In classical mechanics, we had Newton's law of gravity $F \propto \frac{Mm}{r^2}$. Because of this, all the laws of classical electrostatics applied to classical gravity, if we assumed that all charges attracted each other, because Coulomb's law is analogous. We can "tweak" classical electrostatics to fit gravity. In modern physics, does the reverse work? Can we "tweak" General Relativity to accurately describe electrostatics or even electromagnetism? Answer: The parallel between Gravity and E&M is that both forces are mediated by massless particles, the graviton and the photon, respectively. This, at the end of the day, is the reason why both classical theories look similar. But, when you really study what's going on behind the scenes, you learn that Gravity is more appropriately described by General Relativity (GR) and ElectroMagnetism is better described by Quantum ElectroDynamics (QED). The resemblance of these two theories, one may say, rests on the fact that both are described by the same mathematical framework: a principal bundle. In GR's (ie, gravity) case this bundle is a Tangent Bundle (or an $SO(3,1)$-bundle) and in QED's (ie, E&M) case it's a $U(1)$-bundle. The geometric structure is the same; what changes is the "gauge group", the object that describes the symmetries of each theory. Under this new sense, then, your question could be posed this way: "Is there a way to modify geometry in order to incorporate both of the symmetries of these two theories?" Now, this question was attacked by Hermann Weyl in his book Space, Time and Matter, giving birth to what we now call Gauge Theory. As it turns out, Weyl's observations amount to a slight change in what symmetries we use to describe Gravity: rather than only using $SO(3,1)$, Weyl used a different group of symmetries, called Conformal.
As Einstein later showed, it turns out that if you try and describe Gravity and E&M using this generalized group of symmetries (under this new geometrical framework of principal bundles) you do not get the appropriate radiation rates for atoms, ie, atoms which we know to be stable (they don't spontaneously decay radioactively) would not be so under Weyl's proposal. After this blow, this notion of unifying Gravity and E&M via a generalization of the geometry (principal bundles) that describes both of them was put aside: it's virtually impossible to get stable atoms (stability of matter) this way. But people tried a slightly different construction: they posited that spacetime was 5-dimensional (rather than 4-dimensional, as we see every day) and constructed something called a Kaluza-Klein theory. So, rather than encode the E&M symmetries by changing the geometry via the use of the Conformal Group, they changed it by increasing its dimension. Now, this proposal has its own drawbacks, for instance, the sore thumb that is supradimensionality, ie, the fact that spacetime is assumed to be 5-dimensional (rather than 4-dim) — there are other technicalities, but let's leave those for later. The bottom line is that it has proven very hard to describe gravity together with the other forces of Nature. In fact, we can describe the Strong Force, the Weak Force and ElectroMagnetism all together: this is called the "Standard Model of Particle Physics". But we cannot incorporate gravity into this description, despite decades of trying.
{ "domain": "physics.stackexchange", "id": 55, "tags": "general-relativity, gravity, electromagnetism" }
Did we discover 10 or 12 new moons of Jupiter?
Question: I saw multiple news sites reporting that a team discovered 12 new moons of Jupiter: c|net - Twelve new Jupiter moons found, including one reckless one Discover - Jupiter's Got Twelve New Moons — One is a Bit of a Problem Child While some other news sites claim that they found only 10 new moons: nature - Jupiter has 10 more moons we didn't know about — and they're weird EarthSky - Astronomers discover 10 new moons for Jupiter Why does the number of moons differ in the different articles? Did the team find 10 or 12 new moons? Answer: Per the Carnegie Science article that Magic Octopus Urn linked from NASA in the comments, a Carnegie Science team led by Scott S. Sheppard noticed something new in spring of 2017 (though some observations occurred as early as 2016). It took a year to confirm the discovery of the new moons. Ten of the moons orbit in the outer swarm of moons, which is one of the ways to divide the groups of moons. Nine of these follow the pattern of the other moons which orbit Jupiter in retrograde (the opposite direction of the planet's rotation). One they call "oddball" because it orbits prograde (the same direction as the planet's rotation) and has a more inclined orbit than the inner prograde moons. Two of the moons orbit in the inner group, which is what brings the total number of discoveries up to twelve, as referenced in the first set of articles in your post. Like the other inner moons, these orbit prograde. One of the retrograde moons and the "oddball" were first noticed in 2016. One of the inner group and the rest of the retrograde moons were discovered in 2017. The second inner group moon was discovered in 2018, then most were announced together on July 17, 2018. (Dates collected from Wikipedia.) The retrograde moon discovered in 2016 and one of the retrograde moons discovered in 2017 were announced in 2017; this means only ten moons were announced on July 17, 2018, which is why the Nature article refers to ten new moons.
The EarthSky article talks some about the ten and the twelve, and it does clarify both groups (divided by inner/outer and year of announcement).
{ "domain": "astronomy.stackexchange", "id": 3050, "tags": "jupiter, natural-satellites" }
Is there a way to estimate fan capacity at altitude?
Question: Provided you know the capacity of a fan (flow rate) at constant speed and at sea level, is there an analytical way to predict what the flow rate would be at altitude? Or is this specific to the fan's design? Answer: For a fixed speed, a fan, blower or any turbo-machine in general will deliver the same volumetric flow regardless of the ambient pressure, since the machine essentially scoops out a volume of air as each blade of the machine passes the machine's inlet. $$Q_{SL}=Q_{alt}$$ where $SL$ designates 'Sea Level' as reference and $alt$ some higher altitude. But at higher altitudes there are fewer molecules per unit volume (lower gas density) and so the mass flow rate is lower with increasing altitude and decreasing barometric pressure. $$\dot{m}_{alt} < \dot{m}_{SL}$$ So since the volumetric flow rates are the same then $${\dot{m}_{alt}\over{\rho}_{alt}} = {\dot{m}_{SL}\over{\rho}_{SL}}$$ and $$\dot{m}_{alt}={{\rho}_{alt}\over {\rho}_{SL}}\dot{m}_{SL}$$ But if we were to measure these mass flows as volumetric flows relative to sea level then $${{\dot{m}_{alt}}\over {{\rho}_{SL}}}={{{\rho}_{alt}}\over {{\rho}_{SL}}}{{\dot{m}_{SL}}\over {{\rho}_{SL}}}$$ which becomes $$Q_{Malt}={{{\rho}_{alt}}\over {{\rho}_{SL}}}Q_{MSL}$$ And the $Q_M$'s are the measured volumetric flows at altitude and sea level respectively. This result shows that the measured volumetric flow is reduced as altitude increases, in proportion to the decreasing air density. And this is consistent with zero measured flow as one moves out of the atmosphere and no more scooping is possible.
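A numeric illustration of this scaling, using the International Standard Atmosphere troposphere formula as an assumed density model (the fan flow figure and the 2000 m altitude are made up):

```python
import math

# ISA troposphere constants: gravity, molar mass of dry air, gas constant,
# lapse rate, sea-level temperature and pressure.
G, M, R, L, T0, P0 = 9.80665, 0.0289644, 8.31446, 0.0065, 288.15, 101325.0

def air_density(h_m: float) -> float:
    """Dry-air density [kg/m^3] at geopotential altitude h_m (valid 0-11 km)."""
    T = T0 - L * h_m
    p = P0 * (1 - L * h_m / T0) ** (G * M / (R * L))
    return p * M / (R * T)

rho_sl, rho_alt = air_density(0.0), air_density(2000.0)
Q = 0.5                               # fan volumetric flow [m^3/s], same at any altitude
mdot_sl = rho_sl * Q                  # mass flow delivered at sea level
mdot_alt = rho_alt * Q                # mass flow delivered at 2000 m
print(f"density ratio rho_alt/rho_SL: {rho_alt / rho_sl:.3f}")   # ~0.82
print(f"mass flow: {mdot_sl:.3f} kg/s at sea level -> {mdot_alt:.3f} kg/s at 2000 m")
```

So at 2000 m the fan still moves the same volume per second, but only about 82% of the sea-level mass of air, exactly the $\rho_{alt}/\rho_{SL}$ factor derived above.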
{ "domain": "physics.stackexchange", "id": 33858, "tags": "fluid-dynamics, pressure, flow, applied-physics" }
Weird rotational movement of a cylinder
Question: I have had this question for a while. Let's suppose we have a disk/cylinder with a radius $R$ and mass $M$ placed along its length on a flat surface. We press one of its sides with enough force, and the cylinder will move forward (in $x>0$) but rotate backward. Under certain conditions, the object will change its translational direction after covering a short distance and return while rotating. What I believe happens is as follows: The force applied to the cylinder has two components: one pointing forward and another one downward, with the latter exerting torque and causing the cylinder to rotate. The force pushing the center of mass displaces the cylinder from equilibrium, making it move forward while simultaneously rotating and sliding. As the cylinder rotates, it experiences kinetic friction with the surface. After traveling a certain distance, the kinetic friction decelerates the center of mass and eventually stops it. However, since it still possesses rotational kinetic energy, the disk returns by rolling. I am having some problems writing the equations involved. I know we must set, for example, the external force $F$ (finger) and where it is applied. Suppose it forms some angle $\theta$ with, say, $x>0$. Also, we need some friction coefficients, say $\mu_s$ and $\mu_k$. What should be the system I need to solve in order to obtain, for example, the position of the CM? Answer: Forget about the force exerted by the finger and concentrate on the condition of the cylinder the instant the finger is removed at time $t=0$, with the translational velocity to the left being $v_0$ and the rotational velocity counterclockwise being $\omega_0$. Because there are no external torques acting on the cylinder about the contact point $P$, angular momentum about $P$ is conserved. The initial angular momentum is $I_{\rm cm} \omega_0\hat z -mv_0r \hat z = \frac 12mr^2\omega_0\hat z -mv_0r \hat z$.
To find the final state of the system one only needs to compare the two terms. If $\frac 12mr^2\omega_0>mv_0r$ then the final angular momentum will be in the $+\hat z$ direction and finally the cylinder will be moving to the left, the $-\hat x$ direction, whereas if $\frac 12mr^2\omega_0<mv_0r$ then the final angular momentum will be in the $-\hat z$ direction and finally the cylinder will be moving to the right, the $+\hat x$ direction. So it is a matter of whether or not the initial rotational motion "defeats" the initial translational motion. For the final state the angular momentum is $\frac 12mr^2\omega_{\rm f}\hat z +mv_{\rm f}r \hat z$ and $v_{\rm f} = r\omega _{\rm f}$, the no-slipping condition. Equating the initial and final angular momenta will enable you to find $v_{\rm f}$ and $\omega _{\rm f}$. If you want to investigate the motion between the initial and final states then you need to set up two equations. The "aim" of the frictional force between cylinder and surface, $f_{\rm k}$ in your diagram, is to reduce to zero the relative motion between the cylinder and surface so that the no-slip, $v=r\omega$, condition is reached. As shown in your second diagram the frictional force $f_{\rm k}$ will act towards the left. That frictional force will try and change the translational velocity whilst at the same time also changing the rotational velocity. The two equations of motion are $m\dot v = -f_{\rm k} = -\mu m g\Rightarrow \dot v = -\mu g$ for translation and $I\dot \omega = -f_{\rm k}\,r=- \mu mg\,r \Rightarrow r \dot \omega = -2\mu g$ for rotational motion, where $I=mr^2/2$ and $r$ is the radius of the cylinder. Using the initial conditions you can now solve for $v(t)$ and $r\omega(t)$; when these are equal you have reached the final state of rolling without slipping.
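The bookkeeping above can be sketched numerically. This uses my own scalar sign convention (x positive to the right, $\omega > 0$ for topspin, so pure rolling means $v = r\omega$ and the contact-point slip is $s = v - r\omega$) and made-up numbers, not the answer's vector notation:

```python
mu, g, r = 0.3, 9.81, 0.05        # illustrative friction coefficient, gravity, radius [m]
v0, w0 = 1.0, -100.0              # launched right at 1 m/s with backspin (r*w0 = -5 m/s)

s0 = v0 - r * w0                  # initial contact-point slip (> 0, so friction acts in -x)
assert s0 > 0
# While slipping: dv/dt = -mu*g and r*domega/dt = +2*mu*g (using I = m r^2 / 2),
# so the slip decays as s(t) = s0 - 3*mu*g*t and hits zero at t_roll.
t_roll = s0 / (3 * mu * g)
v_f = v0 - mu * g * t_roll        # equals (2*v0 + r*w0) / 3

# Cross-check against conservation of angular momentum about the contact line:
# (1/2) r*omega + v (per unit mass) is conserved, and equals (3/2) v_f at the end.
assert abs((0.5 * r * w0 + v0) - 1.5 * v_f) < 1e-9
print(f"rolls without slipping after {t_roll:.3f} s at v_f = {v_f:.3f} m/s")
# v_f < 0 here: with enough backspin the cylinder comes back, as in the question.
```

With these numbers the cylinder ends up rolling leftward at 1 m/s, the "returns while rotating" behavior the question describes; reduce the backspin below $r|\omega_0| = 2v_0$ and $v_f$ turns positive instead.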
{ "domain": "physics.stackexchange", "id": 97831, "tags": "forces, rotational-dynamics, rotational-kinematics, rotation, moment-of-inertia" }
Can Pluto be seen with the naked eye from Neptune when Pluto and Neptune are closest?
Question: When Neptune and Pluto are closest, about 100 million mi (160 million km) from each other, would an observer on Neptune (or rather on one of its moons, since Neptune is gaseous) be able to see Pluto, and maybe even Charon, with the naked eye? If not, could Pluto be seen in average binoculars? I think Pluto would appear a bit smaller than Mercury from Earth (but at much lower apparent brightness because of the distance from the Sun). Answer: No, it cannot. Far from it. The closest approach between both planets is roughly 16 AU due to the 3:2 orbit resonance. Pluto will even then be a tiny dot among many with a brightness around 14 mag. You can try that with Stellarium yourself, placing the observer on Neptune and looking for Pluto. You just have to find the right time. One such time is approx. in the year 2877.
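That ~14 mag figure can be roughly reproduced with the standard small-body relation $m = H + 5\log_{10}(d_{Sun}\,d_{obs})$ (distances in AU, phase angle ignored). The absolute magnitude $H \approx -0.8$ and the distances are assumed round values, so treat this as an order-of-magnitude check only:

```python
import math

H = -0.8        # Pluto's absolute magnitude (assumed round value)
r_sun = 49.0    # Pluto-Sun distance [AU], assuming the close approach happens
                # with Pluto near aphelion
d_obs = 16.0    # Pluto-Neptune distance [AU] at closest approach, per the answer

m = H + 5 * math.log10(r_sun * d_obs)   # phase-angle correction ignored
print(f"apparent magnitude of Pluto from Neptune: ~{m:.1f}")
```

This lands near magnitude 13.7, consistent with the answer and roughly eight magnitudes (a factor of ~1500 in brightness) fainter than the naked-eye limit of about 6.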
{ "domain": "astronomy.stackexchange", "id": 5218, "tags": "observational-astronomy, pluto, naked-eye, neptune" }
How to use C++ htslib to read VCF contig name and size?
Question: A typical VCF file has: ##contig=<ID=chr1,length=248956422> ##contig=<ID=chr2,length=242193529> I would like to use htslib in C++ to read it. My attempt: htsFile *fp = bcf_open("my.vcf", "r"); bcf_hdr_t *hdr = bcf_hdr_read(fp); In https://github.com/samtools/htslib/blob/develop/htslib/vcf.h, I'm not able to find a function that can do that for me. How to read chr1 and 248956422 in C++? Answer: #include "htslib/vcf.h" int main(int argc, char *argv[]) { htsFile *fp; bcf_hdr_t *hdr; bcf_idpair_t *ctg; int i; if (argc == 1) { fprintf(stderr, "Usage: print-ctg <in.vcf>\n"); return 1; } fp = vcf_open(argv[1], "r"); hdr = vcf_hdr_read(fp); ctg = hdr->id[BCF_DT_CTG]; for (i = 0; i < hdr->n[BCF_DT_CTG]; ++i) printf("%s\t%d\n", ctg[i].key, ctg[i].val->info[0]); bcf_hdr_destroy(hdr); vcf_close(fp); return 0; } On stability: this use has been in htslib forever. In general, functions/structs/variables in the public headers are meant to be stable. However, there is no guarantee that future versions will always keep the same APIs.
{ "domain": "bioinformatics.stackexchange", "id": 453, "tags": "c++, htslib" }
How does one add matter coupling terms to the linearized Lagrangian for General Relativity?
Question: In Spacetime and Geometry, Dr. Carroll provides a Lagrangian for Einstein's equations in vacuum assuming that the metric can be written in the form $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$. The Lagrangian is, for reference, $$\mathcal{L}=\frac{1}{2}\left[\left(\partial_\alpha h^{\alpha\beta}\right)\left(\partial_{\beta}h\right)-\left(\partial_\alpha h^{\rho\sigma}\right)\left(\partial_{\rho}h^{\alpha}_{\;\,\sigma}\right)+\frac{1}{2}\eta^{\alpha\beta}\left(\partial_\alpha h^{\rho\sigma}\right)\left(\partial_\beta h_{\rho\sigma}\right)-\frac{1}{2}\eta^{\alpha\beta}\left(\partial_\alpha h\right)\left(\partial_\beta h\right)\right]$$ This, as can be verified, produces the Einstein tensor when varied. Now, later on, Dr. Carroll notes that by treating $h_{\mu\nu}$ as a field propagating over Minkowski spacetime, adding coupling to matter in the Lagrangian, and by requiring it to couple to its own energy-momentum tensor/matter energy-momentum tensor, General Relativity is restored. The part I am confused about is as follows: how is one to add coupling to matter in the Lagrangian? I assume it's constructed from factors of $h_{\mu\nu}$ and not its derivative, but I'm not sure how to do this. Any assistance would be much appreciated. Answer: The standard Pauli-Fierz Lagrangian density of the spin 2 field $h_{\mu\nu}$ is only the $[...]$ term, without the $1/2$ in front. The expected coupling to matter $\mathcal{L}_{\text{int}} \sim h_{\mu\nu}T^{\mu\nu}$ is "guessed" by Feynman in his lectures notes on gravitation (Lecture 3, page 42, Ed. of 1995). Kraichnan (Special-Relativistic Derivation of Generally Covariant Gravitation Theory (Physical Review, Volume 98, Issue 4, 1955)), and Gupta (Gravitation and Electromagnetism (Physical Review, Volume 96, Issue 6, 1954)) also put this "by hand" in their articles. I also did not find a direct proof in any of the Deser and Wald articles on gravity. 
The only solid proof of this linear coupling that I know of is given by Boulanger et al. in a perturbative cohomological set-up of Lagrangian BRST in Nucl. Phys. B597 (2001) 127-171 for a scalar field (section 9 of the arXiv draft). Of course, full generality of matter coupling in the absence of its own gauge invariance is inferred there, but it is exhibited, for example, after 20 pages of tedious calculation at the end of section 4 in JHEP 0502:016, 2005. Quoting, in reference to formula (104): <<Thus, the coupling between a Dirac field and one graviton at the first order in the deformation parameter takes the form $\Theta ^{\mu\nu}h_{\mu\nu}$. We cannot stress enough that this is not an assumption, but follows entirely from the deformation approach developed here>>.
{ "domain": "physics.stackexchange", "id": 71682, "tags": "general-relativity, perturbation-theory, stress-energy-momentum-tensor, linearized-theory" }
Social football site with authentication and user statistics
Question: It works and does what is supposed what to do btw. <?php ob_start(); session_start(); include("php/connect.php"); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <title>Welcome to albsocial</title> <link rel="stylesheet" href="css/main.css"/> <link rel="stylesheet" href="css/menubar.css"/> <script type="text/javascript" src="../js/analytic.js"></script> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min.js"></script> </head> <script type="text/javascript"> setTimeout(function() { $('#error_check').fadeOut('slow'); }, 5000); </script> <body> <div id="header"> <div id="user_logged"> </div> <?php if(isset($_SESSION['username']) && $_SESSION['username'] != ''){ if(!isset($_GET['user'])){ echo " <div id='ligat'> <ul> <li class='first'><a href='seria.php'>Seria A</a></li> <li><a href='laliga.php'>La liga</a></li> <li><a href='#'>Premier Liga</a></li> <li><a href='#'>Bundesliga</a></li> <li><a href='#'>Ligue 1</a></li> </ul> </div> "; } }else{ echo ""; } ?> </div> </div> <div id="wrapper"> <div id="logo"> <a href="/">Albsocial</a> </div> <div id="login"> <?php if (isset($_SESSION['username']) && $_SESSION['username'] != ''){ $username = $_SESSION['username']; echo "<h4><a href='member.php?user=".$username."'>".$username."</a></h4><a href='logout.php'>LogOut</a>"; }else{ echo "<a href='login.php'>Login</a> <a href='#'>Register</a>"; } ?> </div> <div id="menubar"> <?php include("php/bar.php");?> </div> <div id="content_wrap"> <div id="content_member"> <?php //MARRIM USERNAME QE E VEJM NE ADRESS BARS if (isset($_GET['user'])){ $username = mysql_real_escape_string($_GET['user']); if(ctype_alnum($username)){ $check = "SELECT `username` FROM user WHERE username='$username'"; $get = mysql_query($check)or die(mysql_error()); if(mysql_num_rows($get)===1){ $row = mysql_fetch_assoc($get); $username = 
$row['username']; }else{ echo "Ky profil nuk ekziston."; } } } ?> <?php if (isset($_SESSION['username']) && $_SESSION['username'] != ''){ if (!isset($_GET['user'])){ //Ndeshjet e fituara ose jo echo "<h3>Ndeshjet e vendosura nga <b>$username</b> dhe Rezultatet:</h3><br/>"; $matches = "SELECT * FROM match_select WHERE user_id='$username'"; $query_match = mysql_query($matches)or die(mysql_error()); while ($row = mysql_fetch_assoc($query_match)){ $id = $row['match_id']; $liga = $row['liga']; if ($row['result'] == $row['final']){ $hey = "style='color: green;' "; $match = "SELECT * FROM `winner` WHERE `user_id` = '$username' AND `match_id` = '$id' AND `liga`='$liga'"; $matchResult = mysql_query($match)or die(mysql_error()); if($_POST['submit']){ if(mysql_num_rows($matchResult)) { $error1 = "<div id='error_check'>I keni marre piket ose Nuk jeni fitues.</div>"; }else{ mysql_query("INSERT INTO winner (user_id, match_id, final, liga) VALUE ('$username','$id', '1', '$liga')"); $error1 = "<div id='error_check'>Piket u shtuan ne database</div>"; } } }else if($row['final'] == ""){ $hey = " style='color: #333;'"; $n = "?"; }else{ $hey = " style='color: red;'"; } echo " <div id='my_selection'><h4> ".$home = $row['home']." - ".$away = $row['away']." - ".$input = $row['result'] ." </h4> </div> <div id='results'> <h4 $hey>".$home = $row['home']." - ".$away = $row['away']." - ".$result = $row['final']." 
$n </h4> </div> "; } echo $error1; echo " <form action='member.php' method='post'> <input type='submit' name='submit[$id]' id='match_check' value='Terhiq Piket'> </form> "; } }else{ header("Location: index.php"); } ?> </div> <?php if(!isset($_GET['user'])){ $result = mysql_query("SELECT SUM(final) AS value_sum FROM winner WHERE user_id='$username'"); $row = mysql_fetch_assoc($result); $sum = $row['value_sum']; $resul1 = mysql_query("SELECT SUM(dummy) AS value FROM match_select WHERE user_id='$username'"); $row1 = mysql_fetch_assoc($resul1); $dummy = $row1['value']; if($dummy['value'] == ""){ echo ""; }else{ echo " <div id='adds'> <h3>Statistikat e $username</h3> <br/> <h4 style='margin-left: 10px;'>Gjithsej keni vene: ".$dummy." ndeshje. <br/> <br/> Te sakta jane: ".$sum1 = $sum1 + $sum['test_value']." ndeshje. <br/> <br/> Te pa sakta ose akoma skane mbaruar jane: ".$no = $dummy - $sum['test_value']." ndeshje. </h4> </div> "; } } ?> </div> </div> <div id="footer"> <div id="footerWrapp"> <div id="copyrights"><center>©Te gjitha te drejtat jane te rezervuara nga <a href="#">ALALA</a> , 2013</center></div> </div> </div> </body> </html> Answer: It was not easy, but the first step for improving your code is to split the layout from the logic: layout.php <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <title>Welcome to albsocial</title> <link rel="stylesheet" href="css/main.css"/> <link rel="stylesheet" href="css/menubar.css"/> <script type="text/javascript" src="../js/analytic.js"></script> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min.js"></script> </head> <script type="text/javascript"> setTimeout(function() { $('#error_check').fadeOut('slow'); }, 5000); </script> <body> <div id="header"> <div id="user_logged"> <!--removed div//--> <?php if($noUserAndLoggedIn) :?> <div id='ligat'> <ul> <li class='first'><a 
href='seria.php'>Seria A</a></li> <li><a href='laliga.php'>La liga</a></li> <li><a href='#'>Premier Liga</a></li> <li><a href='#'>Bundesliga</a></li> <li><a href='#'>Ligue 1</a></li> </ul> </div> <? endif ?> </div> </div> <div id="wrapper"> <div id="logo"><a href="/">Albsocial</a></div> <div id="login"> <?php if ($isLoggedin) :?> <h4><a href='member.php?user=".$username."'><?=$username?></a></h4><a href='logout.php'>LogOut</a>"; <?php else: ?> <a href='login.php'>Login</a> <a href='#'>Register</a> <?php endif: ?> </div> <div id="menubar"> <?php include("php/bar.php");?> </div> <div id="content_wrap"> <div id="content_member"> <?php if ( $unknownUser ) :?> Ky profil nuk ekziston. <?php endif ?> <?php if ($noUserAndLoggedIn) :?> <h3>Ndeshjet e vendosura nga <b>$username</b> dhe Rezultatet:</h3><br/> <?php foreach ($lines as $line): ?> <div id='my_selection'> <h4><?=$line['selection']['home']?> - <?=$line['selection']['away']?> - <?=$line['selection']['result']?></h4> </div> <div id='results'> <h4 style='color: <?=$line['result']==1?'green':(($line['result']==-1)?'red':'#333')?> ;'> <?=$line['result']['home']?> - <?=$line['result']['away']?> - <?=$line['result']['final']?> <?php if ($line['result']['uncertain']): ?>?<?php endif;?> </h4> </div> <?php if ($line['error']!=null): ?>?<?=$line['error'];?> <form action='member.php' method='post'> <input type='submit' name='submit[<?=$line['id'];?>' id='match_check' value='Terhiq Piket'> </form> <?php endforeach; ?> <?php endif; ?> </div> <?php if($noUser && $stats!=null):?> <div id='adds'> <h3>Statistikat e $username</h3> <br/> <h4 style='margin-left: 10px;'>Gjithsej keni vene: <?=$stats['s1']?> ndeshje. <br/> <br/> Te sakta jane: <?=$stats['s2']?> ndeshje. <br/> <br/> Te pa sakta ose akoma skane mbaruar jane: <?=$stats['s3']?> ndeshje. 
</h4> </div> <?php endif; ?> </div> </div> <div id="footer"> <div id="footerWrapp"> <div id="copyrights"><center>©Te gjitha te drejtat jane te rezervuara nga <a href="#">ALALA</a> , 2013</center></div> </div> </div> </body> </html> content.php <? $isLoggedin=isset($_SESSION['username']) && $_SESSION['username'] != ''; if (!$isLoggedin) { header("Location: index.php"); die(); } $noUser=!isset($_GET['user']) $noUserAndLoggedIn= $isLoggedin && $noUser; $username = $isLoggedin?$_SESSION['username']:""; $unknownUser=false; if (isset($_GET['user'])) { $username = mysql_real_escape_string($_GET['user']); if(ctype_alnum($username)){ $check = "SELECT `username` FROM user WHERE username='$username'"; $get = mysql_query($check)or die(mysql_error()); if(mysql_num_rows($get)===1){ $row = mysql_fetch_assoc($get); $username = $row['username']; }else{ $unknownUser=true; } } } $lines=array(); if ($noUserAndLoggedIn) { $matches = "SELECT * FROM match_select WHERE user_id='$username'"; $query_match = mysql_query($matches)or die(mysql_error()); while ($row = mysql_fetch_assoc($query_match)){ $id = $row['match_id']; $liga = $row['liga']; $uncertain = false; $error1=null; if ($row['result'] == $row['final']){ $hey = 1; $match = "SELECT * FROM `winner` WHERE `user_id` = '$username' AND `match_id` = '$id' AND `liga`='$liga'"; $matchResult = mysql_query($match)or die(mysql_error()); if($_POST['submit']){ if(mysql_num_rows($matchResult)) { $error1 = "I keni marre piket ose Nuk jeni fitues."; }else{ mysql_query("INSERT INTO winner (user_id, match_id, final, liga) VALUE ('$username','$id', '1', '$liga')"); $error1 = "Piket u shtuan ne database"; } } }else if($row['final'] == ""){ $hey = 0; $uncertain = true; }else{ $hey = -1; } $lines[]=array('selection'=>array ('home'=>$row['home'],'away'=>$row['away'],'result'=>$row['result']), 'result'=>array ('home'=>$row['home'],'away'=>$row['away'],'result'=>$row['final'],'uncertain'=>$uncertain), 'error'=>$error1, 'status'=>$hey, 'id'=>$id); } } 
$stats=null; if ($noUser) { $result = mysql_query("SELECT SUM(final) AS value_sum FROM winner WHERE user_id='$username'"); $row = mysql_fetch_assoc($result); $sum = $row['value_sum']; $resul1 = mysql_query("SELECT SUM(dummy) AS value FROM match_select WHERE user_id='$username'"); $row1 = mysql_fetch_assoc($resul1); $dummy = $row1['value']; if ($dummy != ""){ $stats=array( 's1'=>$dummy, 's2'=>$sum1 + $sum['test_value'], 's3'=>$dummy - $sum['test_value'] ); //better names!! but I don't understand your language } } include "layout.php"; Now you have a separate file for the representation of your data and one for the calculation. The first file is now only a simple template without any business logic. In the words of the Model-View-Controller paradigm, this is the view. The model and the controller are now combined in your content.php. In a second step you would split content.php into two files: one gets the data from your database (the model) and the other connects that data to the view (the controller). Unfortunately I have no time to continue this at this point. But after you have split this file, we should concentrate on some conceptual details. That will be far easier once the responsibilities are divided among these three files.
{ "domain": "codereview.stackexchange", "id": 4011, "tags": "php, sql, authentication, session" }
Text Classification Taking too long
Question: I have a sample of 135k documents that are preprocessed, and for which I calculated TF-IDF. I tried clustering with KMeans, which gave me a memory problem (20 GB). Then I tried MiniBatch K-Means with just 2 clusters (I'm trying to check how many clusters give the best results) and 10k and 5k batch sizes. It didn't complete even after almost 3 hours. Now I'm trying DBSCAN, which is supposed to be less time-expensive, but it has been running for 2 hours as well (default params). Is it normal for this number of samples to take this long, or am I doing something wrong? Are there other clustering / unsupervised ML algorithms I could use? Answer: KMeans and MiniBatch K-Means are generally faster than DBSCAN for large datasets. The fact that MiniBatch K-Means didn't complete even after 3 hours is unusual, unless your documents are extremely long or your machine has limited resources. DBSCAN's time complexity is O(n log n) in the best case (when using a spatial index), but it can be as bad as O(n^2) in the worst case, so it's not surprising that it takes a long time on such a large dataset. There are a few things you could try: BIRCH algorithm: Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) is an unsupervised data mining algorithm used to perform hierarchical clustering over particularly large datasets. You can find this at sklearn.cluster.Birch. Dimensionality reduction: techniques like PCA or t-SNE could help reduce the dimensionality of your TF-IDF vectors before applying any clustering method, which could make these methods run faster. Online/incremental clustering algorithms: these types of algorithms process one instance at a time and thus require less memory than batch-based algorithms like KMeans or DBSCAN. Remember that regardless of which method you choose, preprocessing steps such as normalization or removing stop words from your text data can have a great impact on both running time and result quality.
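As a rough sketch of the pipeline the answer suggests (the function name, the 50k-feature cap, and all parameter values are illustrative defaults, not a tuned recipe): reduce the TF-IDF matrix with truncated SVD first, since clustering tens of thousands of sparse dimensions is usually what makes MiniBatch K-Means crawl, then cluster the dense low-dimensional vectors.

```python
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer


def cluster_documents(docs, n_clusters=2, n_components=100, batch_size=10_000, seed=0):
    """Cluster raw text documents: TF-IDF -> truncated SVD (LSA) -> MiniBatchKMeans."""
    # Cap the vocabulary; an uncapped vocabulary over ~135k documents easily
    # reaches hundreds of thousands of dimensions.
    tfidf = TfidfVectorizer(max_features=50_000, sublinear_tf=True)
    X = tfidf.fit_transform(docs)

    # TruncatedSVD works directly on the sparse matrix (unlike PCA), and
    # requires n_components < n_features.
    k = min(n_components, X.shape[1] - 1)
    X_reduced = TruncatedSVD(n_components=k, random_state=seed).fit_transform(X)

    # sklearn.cluster.Birch(n_clusters=n_clusters) would be a drop-in
    # alternative at this step, as suggested above.
    km = MiniBatchKMeans(n_clusters=n_clusters, batch_size=batch_size,
                         n_init=3, random_state=seed)
    return km.fit_predict(X_reduced)
```

With 100–200 SVD components, both MiniBatch K-Means and Birch on 135k documents should finish in minutes rather than hours on ordinary hardware.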
{ "domain": "datascience.stackexchange", "id": 11926, "tags": "machine-learning, classification, clustering, text-classification" }
Real-time map generate on web like a Rviz
Question: Hi there. By the time I came to write this question, I had found a lot of great information on the internet, but the combination of 'rosbridge' and 'robot web tools' or 'ros2djs' only displays the already generated map. I want to see the process of gmapping generating a map in real time on the web, like rviz shows it. Please let me know the solution for that. Please excuse my poor English. Originally posted by justinROScolleage on ROS Answers with karma: 3 on 2019-03-26 Post score: 0 Original comments Comment by ahendrix on 2019-03-26: rviz is just subscribing to the /map topic and displaying it as gmapping generates updates. I haven't used the various ROS web tools that you mentioned, but you should be able to have them subscribe to the /map topic and display updates in real time too. Comment by justinROScolleage on 2019-03-26: Thank you for your comment! Yes, I just want to subscribe to /map and display that on the web! But I cannot find a way to do this. Comment by ahendrix on 2019-03-26: I don't know much about JavaScript or the ROS web tools, but some quick browsing of the documentation brought me to http://robotwebtools.org/jsdoc/ros2djs/current/maps_OccupancyGridClient.js.html , which seems to be able to subscribe to an occupancy grid and display it. I wasn't able to find a working example of that in a quick search, but there have been a lot of talks and documentation about the robot web tools packages, so I suspect that if you look hard enough, you'll be able to find something. Comment by ahendrix on 2019-03-26: Looks like I was a bit slow; @billy's comment includes a working example of the class that I linked to! Answer: See the answer to this question: https://answers.ros.org/question/315015/what-is-the-best-way-to-monitor-and-remote-control-the-robot-from-tablet/#315022 I put working code up that displays the map in a web browser, locally or remotely. 
Of course all that code started as samples from robotwebtools.org and ROS tutorials - but I admit I have not tried it while gmapping. Originally posted by billy with karma: 1850 on 2019-03-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by justinROScolleage on 2019-03-27: Thank you @ahendrix @billy! I examined how to combine ROS and JavaScript to make a system, but I could not find a detailed tutorial. Please tell me what information you referenced when creating your own program. Comment by billy on 2019-03-27: So you had me curious. Now I have tried it with gmapping and it works. But you need to tweak the code I linked to in an earlier comment to continuously update the map. Add ", continuous : true" to the gridClient: var gridClient = new ROS2D.OccupancyGridClient({ ros : ros, rootObject : viewer.scene, continuous : true }); The map updates every 15 - 30 seconds on my old laptop running everything. Comment by justinROScolleage on 2019-03-27: This is all I was looking for! I can’t thank you enough :) You and @ahendrix made me notice OccupancyGrid. Thank you so, so much. Comment by Addy_thegreat on 2020-05-20: Thanks @billy for your work! Comment by Addy_thegreat on 2020-05-20: I am using your HTML code given in the link https://answers.ros.org/question/315015/what-is-the-best-way-to-monitor-and-remote-control-the-robot-from-tablet/#315022 for gmapping of turtlebot3_world. The page is displaying the generation process of the map, but it is not showing the robot's position on that map. It looks like roslibjs is not able to subscribe to /robot_pose. On the console I see 'Before callback' and 'creating robotMarkr: ', but I didn't see 'In pose subscribe callback', 'Got Pose data:' and 'Pose updated: '. Please resolve my problem @billy @ahendrix @justinROScolleage Comment by saintdere on 2020-06-12: What commands are needed to display the robot's position on the map? 
When I run the page, the square where the map should be is empty @Addy_thegreat Comment by Addy_thegreat on 2020-06-12: Clone rosbridge_suite and mjpeg_server from GitHub first @saintdere Comment by saintdere on 2020-06-16: I have cloned those, but what are the commands you run in the terminal? Comment by Addy_thegreat on 2020-06-16: If you have installed it right, enter this command in a terminal: roslaunch rosbridge_server rosbridge_websocket.launch and in a new terminal run rosrun mjpeg_server mjpeg_server. Visit http://wiki.ros.org/rosbridge_suite/Tutorials/RunningRosbridge to learn more about it @saintdere Comment by saintdere on 2020-06-18: The mjpeg_server is from this, but isn't it deprecated? And you use this for turtlebot3 in Gazebo, right? So you run those commands along with the turtlebot3_world command? Comment by Arun_kumar on 2021-04-02: @billy I have tried the approach described in this thread, but on the webpage the map is offset to the left-bottom corner inside the Scene Viewer. How should I view the map in full size? Comment by billy on 2021-04-03: @Arun_kumar... I have a confession. I'm not a web programmer. It is set to the left side of my browser as well and I never saw a need to change it. I suggest you research HTML or CSS page formatting and put the grid client in a box in the middle of the page.
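Putting the pieces from the answer and comments together, a minimal page could look like the sketch below. This is untested here: the script URLs, websocket address, div id, and viewer size are all assumptions for illustration, and it requires rosbridge_websocket running on the robot.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- robotwebtools libraries; adjust paths to wherever you host/serve them -->
  <script src="easeljs.min.js"></script>
  <script src="roslib.min.js"></script>
  <script src="ros2d.min.js"></script>
</head>
<body>
  <div id="map"></div>
  <script>
    // Connect to rosbridge (assumed to listen on port 9090 on the robot).
    var ros = new ROSLIB.Ros({ url: 'ws://localhost:9090' });

    // A 2D canvas viewer to draw the occupancy grid into.
    var viewer = new ROS2D.Viewer({ divID: 'map', width: 600, height: 600 });

    // Subscribe to /map; `continuous: true` keeps redrawing as gmapping
    // publishes updates, instead of rendering the grid only once.
    var gridClient = new ROS2D.OccupancyGridClient({
      ros: ros,
      rootObject: viewer.scene,
      continuous: true
    });

    // Re-scale and re-center the viewer whenever a new map arrives.
    gridClient.on('change', function() {
      viewer.scaleToDimensions(gridClient.currentGrid.width,
                               gridClient.currentGrid.height);
      viewer.shift(gridClient.currentGrid.pose.position.x,
                   gridClient.currentGrid.pose.position.y);
    });
  </script>
</body>
</html>
```

The 'change' handler is what addresses the "map offset to the left-bottom corner" issue raised in the comments: without it, the viewer keeps its default scale and origin.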
{ "domain": "robotics.stackexchange", "id": 32761, "tags": "rviz, ros-kinetic, realtime, rosbridge, 2d-mapping" }
Help creating a workable design?
Question: I received the following feedback for my code: "I still do not see a design here that will work. It is a very bad idea to have the big switch and loop at this point. All you need in your main() method is to create one object of each class and call the methods on them. In class Contact: Every class needs a comment. The first thing you need to do is to write a comment for this class that starts with the words "one object of this class stores..." Once you are very clear on what one object of this class represents, the rest of the program (and my comments) will make sense. Until you can write a comment like this, nothing will make sense. any method in this class has access to only one single contact. Therefore, your method printContacts() does not belong here. every method should do just one thing. Therefore, the method printContacts() — whatever class it is in — will not both read and print. I really think you need to throw away that method printContacts(). I don't believe that you will ever use that code. That is exactly why I asked you to leave all the method definitions blank for this deliverable. the welcome and the menu do not belong inside this class either. Methods in this class all must pertain only to exactly one contact In class ContactArray: I have no idea what this class is supposed to do, since you have no comment. I still do not see a design here that will work for this project. What you need is three classes: one class where one object of the class stores and manipulates the contact info for one person. one class where one object of the class stores and manipulates the info for all of the people in the whole list. I do not see a class like this in your code, and that is why your design will not work. one class with just a simple main() that just creates one object of each of the other two classes and calls all the methods on them. This is just for testing, to see that you know what each object represents and what you can do with each object. 
The lesson here is to never try to code a class before you can write a comment that tells what one object of the class represents. Writing in Java is so much harder than writing in English, if you can't write it in English, you will never be able to write it in code. The lack of comments for each class tells me that it was not reviewed carefully, quietly and independently by each team member. All of you have been working with the program guidelines all quarter, and so all of you should have caught this omission. Please take your time and come up with a design that contains exactly three classes, as I described above. No menu, no user input, no code inside the { } of the method definitions (except main()). No file I/O! We are just trying to get a workable design here, struggling with the compiler is a time drain." This is my code. CONTACTLIST.JAVA import java.util.Scanner; public class ContactList { public static void main(String args[]) { Scanner reader = new Scanner(System.in); reader.useDelimiter("\n"); ContactRunner runner; runner = new ContactRunner(reader); runner.run(); } } CONTACT.JAVA import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; import java.util.Set; import java.util.TreeSet; public class Contact { private String lastname, firstname, address, city, zip, email, phone, notes; public Contact(String lastnamename, String firstname, String address, String city, String zip, String email, String phone, String notes, String lastname) { this.lastname = lastname; this.firstname = firstname; this.address = address; this.city = city; this.zip = zip; this.email = email; this.phone = phone; this.notes = notes; } public Contact() { } // overrides the default Object method public String toString() { return lastname + ", " + firstname + ", " + address + ", " + city + ", " + zip + ", " + email + ", " + phone + ", " + notes; } /* * Sets the value for lastname to "s". 
*/ void setLastName(String s) { lastname = s; } /* * Returns the value of lastname. */ String getLastName() { return lastname; } /* * Sets the value for firstname to "a". */ void setFirstName(String a) { firstname = a; } /* * Returns the value of firstname. */ String getFirstName() { return firstname; } /* * Sets the value for address to "b". */ void setHouseAddress(String b) { address = b; } /* * Returns the value of address. */ String getHouseAdress() { return address; } /* * Sets the value for city to "c". */ void setCity(String c) { city = c; } /* * Returns the value of city. */ String getCity() { return city; } /* * Sets the value for zip to "d". */ void setZip(String d) { zip = d; } /* * Returns the value of zip. */ String getZip() { return zip; } /* * Sets the value for phone to "e". */ void setPhone(String e) { phone = e; } /* * Returns the value of phone. */ String getPhone() { return phone; } /* * Sets the value for email to "f". */ void setEmail(String f) { email = f; } /* * Returns the value of email. */ String getEmail() { return email; } /* * Sets the value for notes to "g". */ void setNotes(String g) { notes = g; } /* * Returns the value of notes. */ String getNotes() { return notes; } void welcome() { // Welcomes the user to the program for the first time. System.out.println("\nYou are in the Contact List DB. " + "What would you like to do? \n"); } void menu() { // Prints out user menu written by Daniela. System.out.println("1. Enter a new person" + "\n" + "2. Print the contact list" + "\n" + "3. Retrieve a person's information by last name" + "\n" + "4. Retrieve a person's information by email address" + "\n" + "5. Retrieve all people who live in a given zip code" + "\n" + "6. Exit"); } void printContacts() { // Read from file, print to console. 
by Damani Brown & Seth // ---------------------------------------------------------- int counter = 0; String line = null; // Location of file to read File file = new File("contactlist.csv"); // Sort contacts and print to console try { Scanner scanner = new Scanner(file); // Before printing, add each line to a sorted set. by Seth // Copeland Set<String> lines = new TreeSet<>(); while (scanner.hasNextLine()) { line = scanner.nextLine(); lines.add(line); counter++; } // Print sorted contacts to console. for (String fileLine : lines) { String outlook = fileLine.substring(0, 1).toUpperCase() + fileLine.substring(1); System.out.println(outlook); } // --------------------------------------------------------------------- // Sort contacts code. by Seth Copeland scanner.close(); } catch (FileNotFoundException e) { } System.out.println("\n" + counter + " contacts in records.\n"); } } CONTACTARRAY.JAVA import java.io.FileWriter; import java.io.PrintWriter; import java.util.ArrayList; import java.util.Scanner; public class ContactArray { private static Scanner reader; public static void getContact() { reader = new Scanner(System.in); reader.useDelimiter("\n"); /** * Array list created by Daniela Villalobos, used to create running list * of object contacts with lastname, firstname, address, city, zip, * phone, email, and notes */ ArrayList<Contact> contacts = new ArrayList<Contact>(); Contact contact; contact = new Contact(); /** * Gets users contact information and adds the contact as a string in * our contact arraylist. - written by Daniela Vallalobos. */ /** * Gets users contact information and adds the contact as a string in * our contact arraylist. - written by Daniela Vallalobos. 
*/ System.out.println("\nEnter Contact Last Name:"); String lastname = reader.next(); if (lastname == null) { System.out.println("Invalid entry.\n"); } else { contact.setLastName(lastname); } System.out.println("Enter Contact First Name: "); contact.setFirstName(reader.next().toLowerCase()); System.out.println("Enter Contact Street Address: "); contact.setHouseAddress(reader.next().toLowerCase()); System.out.println("Enter Contact City: "); contact.setCity(reader.next().toLowerCase()); System.out.println("Enter Contact Zip Code: "); contact.setZip(reader.next().toLowerCase()); System.out.println("Enter Contact Email: "); contact.setEmail(reader.next().toLowerCase()); System.out.println("Enter Contact Phone Number: "); contact.setPhone(reader.next().toLowerCase()); System.out.println("Enter Contact Notes: "); contact.setNotes(reader.next().toLowerCase()); contacts.add(contact); /** * Writes contact information from user to file written by Damani Brown */ FileOperations.write(); Contact c = contact; try (PrintWriter output = new PrintWriter(new FileWriter( "contactlist.csv", true))) { output.printf("%s\r\n", c); } catch (Exception e) { } System.out.println("Your contact has been saved.\n"); } } CONTACTRUNNER.JAVA import java.util.Scanner; public class ContactRunner { private Scanner reader; public ContactRunner(Scanner reader) { this.run(); } public void run() { Contact contact; contact = new Contact(); int action = 0; contact.welcome(); /** * While loop created to bring up user's choices - loop written by * Daniela Villalobos */ while (action != 6) { contact.menu(); reader = new Scanner(System.in); reader.useDelimiter("\n"); action = reader.nextInt(); /* * DV - if statement permits only actions 1-6 to execute a case */ if (action <= 0 || action > 6) { System.out.println("Invalid selection. 
"); } /** * Switch statement written by Daniela Villalobos */ switch (action) { case 1: { /** * Gets users contact information and adds the contact as a * string in our contact arraylist. - written by Daniela * Vallalobos. */ ContactArray.getContact(); break; } /** * Prints out all records from file in alphabetical order */ case 2: { contact.printContacts(); /** * Reads contacts from file, sorts and prints them to console. */ break; } /** * Ask's user to search for a lastname. Matches user input to record * of contacts, and prints out matching contact. - Coded by Seth & * Damani */ case 3: { System.out.println("\nEnter the last" + "name to search for: "); /** * Gets the searchterm from the user and Matches user input to * existing contact records. */ FileOperations.match(); break; } /** * Ask's user to search for a email. Matches user input to record of * contacts, and prints out matching contact. - Coded by Seth & * Damani */ case 4: { System.out.println("\nEnter the email " + "address to search for: "); /** * Gets the searchterm from the user and Matches user input to * existing contact records. */ FileOperations.match(); break; } /** * Ask's user to search for a zipcode. Matches user input to record * of contacts, and prints out matching contact. - Coded by Seth & * Damani */ case 5: { System.out.println("\nEnter the Zipcode " + "to search for: "); /** * Gets the searchterm from the user and Matches user input to * existing contact records. */ FileOperations.match(); break; } case 6: { System.out.println("\nNow quitting application..."); System.exit(action); } } } } } FILEOPERATIONS.JAVA import java.io.BufferedReader; import java.io.File; import java.io.FileReader; import java.io.IOException; import java.util.Scanner; public class FileOperations { private static Scanner reader; public static void write() { // Creates and writes to file try { File file = new File("contactlist.csv"); // If file doesn't exists, then create it. 
if (!file.exists()) { file.createNewFile(); } } catch (IOException e) { e.printStackTrace(); } } static void match() { // Matches user input to existing contact records. try { // Open the file as a buffered reader BufferedReader bf = new BufferedReader(new FileReader( "contactlist.csv")); // Start a line count and declare a string to hold our // current line. int linecount = 0; String line; reader = new Scanner(System.in); String searchterm = reader.next(); // Let the user know what we are searching for System.out.println("\nSearching for " + searchterm + " in file..."); // Loop through each line, putting the line into our line // variable. boolean noMatches = true; while ((line = bf.readLine()) != null) { // Increment the count and find the index of the word. linecount++; int indexfound = line.indexOf(searchterm); // If greater than -1, means we found a match. if (indexfound > -1) { System.out.println("\nContact was FOUND\n" + "\nContact " + linecount + ": " + line); noMatches = false; } } // Close the file after done searching bf.close(); if (noMatches) { System.out.println("\nNO MATCH FOUND.\n"); } } // Catches any exception errors catch (IOException e) { System.out.println("IO Error Occurred: " + e.toString()); } } } First she said we don't have enough classes and our main was too long, and suggested moving our main and our array list into different classes. We did that, and she still says it's not a workable design. The program seems to run fine to me, so how is it not workable? There is just no pleasing her. Can anyone translate her feedback into English so a beginner coder can understand it? This is my 3rd time resubmitting this design document. We are already supposed to be starting the next step, but she won't let us continue until we have a "workable" design (yet the program runs fine). So now I have to pretty much erase my whole code (which I worked countless days and nights on) and make a "workable" design/skeleton with pretty much no code that pleases her. 
Any help so that I can move forward would simply be AMAZING. Regards, Answer: If I understand the requirements correctly, then you were tasked to design, not to implement, such a system. So you were asked to specify the interface, or API. Example in English: Class Contact: represents a contact. Has getters/setters for name, email, etc. Class AddressBook: a collection where contacts can be added and removed. Provides searchFoo methods that take a string and return a List of contacts whose foo field contains that string as a substring (where foo is name, email, etc). (If you know SQL, the interface of AddressBook would mirror INSERT foo, DELETE foo and SELECT foo WHERE bar like actions) The AB also has a method to write a representation of itself to a Writer, or to load contacts from a Reader. A constructor that would initialize the AB from a file would be pretty nifty. Class Shell: provides a text interface to AddressBook. Has methods to display forms like Contact createContactDialog(); // guide the user through creating a new contact void changeContactDialog(Contact c); // change fields of existing contact void displayContact(Contact c); // just display the contact void searchDialog(); // start dialog to search the AB Each Shell has fields in and out which are a Reader and Writer, and an AddressBook ab. The Shell has a run method which will start the shell, and basically handles the rest of the control flow. (Internally, this would be a while (true) loop that prompts the user for some action, and then dispatches appropriate dialogs. The loop (and the run method) is left by a command like exit) No other class will have user interface code! (Especially your FileOperations class is bad: This class is just a collection of helper functions, and does not represent a useful entity in itself. Classes containing only static stuff are not object oriented. 
Compare OOP to procedural programming, like in C or Pascal or whatnot) The main(): Here you create some Contacts, put them into an AddressBook, create a Shell with that, and let it run. Now, you translate all that to Java (without specifying the implementation except for main), put pretty comments everywhere and are done. Some hints for the actual implementation: Don't require the user to type numbers. Use meaningful full-text commands (like new, update, search), or at least mnemonic letters (c for create, u for update, / for search, like in Vim). You can easily compare given input via regular expressions. Java is quite good with “Regexes”. Don't lowercase the data in your contacts. If you want to search for contacts case-insensitively, then you can lowercase the data for comparison. However, storing the changed data is a bug, as lowercasing changes the meaning of text.
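Translated to Java, the three-class design described above could be skeletoned roughly as follows. This is only a sketch: it uses two fields per contact for brevity (the real assignment has address, city, zip, etc.), and names like `AddressBook` and `searchName` are illustrative, not the assignment's required API.

```java
import java.util.ArrayList;
import java.util.List;

/** One object of this class stores the contact info for one person. */
class Contact {
    private final String name;
    private final String email;

    Contact(String name, String email) {
        this.name = name;
        this.email = email;
    }

    String getName()  { return name; }
    String getEmail() { return email; }

    @Override
    public String toString() { return name + " <" + email + ">"; }
}

/** One object of this class stores and manipulates the whole contact list. */
class AddressBook {
    private final List<Contact> contacts = new ArrayList<>();

    void add(Contact c) { contacts.add(c); }

    /** All contacts whose name contains the term (case-insensitive). */
    List<Contact> searchName(String term) {
        List<Contact> hits = new ArrayList<>();
        for (Contact c : contacts)
            if (c.getName().toLowerCase().contains(term.toLowerCase()))
                hits.add(c);
        return hits;
    }

    /** All contacts whose email contains the term (case-insensitive). */
    List<Contact> searchEmail(String term) {
        List<Contact> hits = new ArrayList<>();
        for (Contact c : contacts)
            if (c.getEmail().toLowerCase().contains(term.toLowerCase()))
                hits.add(c);
        return hits;
    }
}

/** A simple main() that creates one object of each class and calls methods on them. */
public class ContactListDemo {
    public static void main(String[] args) {
        AddressBook book = new AddressBook();
        book.add(new Contact("Ada Lovelace", "ada@example.com"));
        book.add(new Contact("Alan Turing", "alan@example.com"));
        System.out.println(book.searchName("ada"));   // [Ada Lovelace <ada@example.com>]
    }
}
```

Note how the search lowercases only for the comparison, never the stored data, and how no class here reads from System.in or touches files: menus, prompts, and persistence would all live behind the Shell and AddressBook interfaces once the design is approved.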
{ "domain": "codereview.stackexchange", "id": 3424, "tags": "java, array" }
Instantly filtering a list on user input and navigate with arrow keys/tab
Question: For my fellow Computer Science and Media students at my university, I built a link collection which a lot of people soon used. It basically served as a central place for all the links we needed to get access to lecture and exercise scripts, etc. To step it up a bit, I decided to throw some JavaScript at it so folks could filter the list. The result can be seen here: vlau.me (Content is German, but the functionality is easy to figure out). Now this is the first time I have used JavaScript, so bear with the code. Usage I include the JS file and call recordFilter() with the first argument being the location of a JSON file which is used 1) to build the list in the first place and 2) to rebuild it when the user enters something into the input. The second argument is the ID of the container holding the components like the input element. The input element’s ID is the third argument. <script src="js/vlaume.min.js"></script> <script>recordFilter('_data/records.json', 'record-filter', 'record-filter__input');</script> A note on how the list is populated in the first place: I use a static site generator (Jekyll) where I use the JSON file to build the list when generating the site. This is done to ensure users with disabled JavaScript get the list, although they cannot filter it, since filtering requires JavaScript. The JavaScript This is the complete code. I use an external Fuzzy Search implementation which is put into the compressed file but excluded from the code below.
function recordFilter(jsonFile, containerName, inputID) { // Some names that are repeatedly used as HTML class or ID names var listName = 'record-list'; var itemName = 'record'; var linkName = itemName + '__link'; var activeLinkName = linkName + '--active'; // Get the JSON data by using a XML http request var listData; var xhr = new XMLHttpRequest(); // The request needs to be synchronous for now because on slow connections the DOM is ready // before it fetches everything from the json file xhr.open('GET', jsonFile, true); xhr.onload = function(e) { if (xhr.readyState === 4) { if (xhr.status === 200) { listData = JSON.parse(xhr.responseText); } else { console.error(xhr.statusText); } } }; xhr.onerror = function(e) { console.error(xhr.statusText); }; xhr.send(null); /** * Before the record list can be build, the DOM has to be loaded so we can hook into the input. */ window.onload = function(e) { // Some things that are only usable when JavaScript is enabled are hidden by default. // Removing the `js-disabled` class makes them visible again. 
if (document.body.classList.contains('js-disabled')) { document.body.classList.remove('js-disabled'); } var placeholderKeys = []; for (var key in listData) { var value = listData[key]; placeholderKeys = placeholderKeys.concat(value.title, value.abbr, value.keywords); } var filterInput = document.getElementById(inputID); filterInput.placeholder = placeholderKeys[Math.floor(Math.random() * placeholderKeys.length)]; if (filterInput.value.length > 0) { buildRecordList(filterKeys(filterInput.value)); } var recordList = document.getElementById(listName); setActiveClass(recordList.firstElementChild.getElementsByClassName(linkName)[0]); // Watch the search field for input changes … filterInput.addEventListener('input', function(e) { // … and build a new record list according to the filter value buildRecordList(filterKeys(filterInput.value)); }, false); document.addEventListener('focus', function(e) { if (document.activeElement) { setActiveClass(document.activeElement); } }, true); }; window.onkeydown = function(e) { // Put it into a separate code block because it was a bit lengthy }; /** * Takes a string to search for in `listData` to create an array of related keys. * @return An array consisting of key strings which are related to `str`. */ function filterKeys(str) { if (str.length === 0) { var allKeys = []; for (var key in listData) { allKeys.push(key); } return allKeys; } var recordObjects = []; for (var objectKey in listData) { recordObjects.push(listData[objectKey]); } var options = { keys: ['abbr', 'title', 'keywords', 'persons', 'links.title'], id: 'key' }; var fuse = new Fuse(recordObjects, options); return fuse.search(str); } /** * Build the record list containing elements belonging to keys in `relatedKeys`. 
*/ function buildRecordList(relatedKeys) { // Check if a list was build previously … var recordList = document.getElementById(listName); if (recordList) { // … and remove its content recordList.innerHTML = ''; } else { // … otherwise, create it recordList = document.createElement('ul'); recordList.id = recordList.className = listName; document.getElementById(containerName).insertBefore(recordList, null); } for (var i = 0; i < relatedKeys.length; i++) { recordList.innerHTML += recordStr(relatedKeys[i], listData[relatedKeys[i]]); } // If no list items were inserted, we need to stop here if (!recordList.hasChildNodes()) { return; } // Set the first child element in the list to active state setActiveClass(recordList.firstElementChild.getElementsByClassName(linkName)[0]); } /** * @return a string that contains the HTML markup for a record */ function recordStr(key, value) { var str = '<li class="' + itemName + '" data-key="' + value.key + '">' + '<div class="' + itemName + '__title">' + value.title + '</div>'; if (value.links.length > 0) { str += '<nav class="nav record-nav">'; for (var i = 0; i < value.links.length; i++) { var link = value.links[i]; str += '<a class="' + itemName + '__link" href="' + link.url + '">' + link.title + '</a>'; } } return str; } /** * @brief Moves the active class to the given element */ function setActiveClass(element) { if (element) { if (element.className.indexOf(linkName) > -1) { var recordList = document.getElementById(listName); var activeItem = recordList.getElementsByClassName(activeLinkName)[0]; if (activeItem) { activeItem.classList.remove(activeLinkName); } element.className += ' ' + activeLinkName; } } } /** * @return the closest ancestor of `element` that has a class `className` */ function findAncestor(element, className) { while ((element = element.parentElement) && !element.classList.contains(className)); return element; } /** * @brief Iterates over all current DOM elements to create an array of elements that are * focusable 
(i.e. they’re visible and have a tabIndex greater than -1) * @return an array containing all currently focusable elements in the DOM */ function focusableElements() { var elements = document.getElementsByTagName('*'); var focusable = []; for (var i = 0; i < elements.length; i++) { if (elements[i].tabIndex > -1 && elements[i].offsetParent !== null) { focusable.push(elements[i]); } } return focusable; } } Handling key presses Arrow keys are used to navigate the links in the list: up/down keys move vertically from item to item (an item is a block containing links), while left/right keys move from link to link. When the user navigates to a link, I set focus() which 1) scrolls the element into view by default and 2) allows the user to open the link with enter or Ctrl+enter. Tabbing through links on the page also moves the active class (used for styling with CSS) if the element about to receive focus is a link in the list. It does not interfere with the default behavior of tabbing. /** * Listen to various key presses to enable arrow key navigation over the record links. * Opening links is done by giving links focus which has the desired interaction by default * * Some keys and which keycodes they’re mapped to: * `tab` – 9; `enter` – 13; `←` – 37; `↑` – 38; `→` – 39; `↓` – 40; */ window.onkeydown = function(e) { e = e || window.event; var recordList = document.getElementById(listName); // If `e.keyCode` is not in the array, abort mission right away if ([9, 13, 37, 38, 39, 40].indexOf(e.keyCode) === -1 || !recordList.hasChildNodes()) { return; } var activeLink = recordList.getElementsByClassName(activeLinkName)[0]; if (e.keyCode === 13) { if (document.activeElement === document.getElementById(inputID)) { document.activeElement.blur(); activeLink.focus(); } else { return; } } var targetElement; if (e.keyCode === 9) { // If there is only one item, the default is fine.
if (recordList.length === 1) { return; } var elements = focusableElements(); var activeElement = document.activeElement; // Determine which element is the one that will receive focus for (var el = 0; el < elements.length; el++) { if (elements[el] === activeElement) { if (e.shiftKey && elements[el-1]) { targetElement = elements[el-1]; } else if (elements[el+1]) { targetElement = elements[el+1]; } break; } } } if ([37, 39].indexOf(e.keyCode) > -1) { var previousLink; var nextLink; var linkElements = recordList.getElementsByClassName(linkName); for (var i = 0; i < linkElements.length; i++) { if (activeLink === linkElements[i]) { previousLink = linkElements[i-1]; nextLink = linkElements[i+1]; break; } } if (!previousLink && !nextLink) { return; } if (e.keyCode === 37 && previousLink) { targetElement = previousLink; } else if (e.keyCode === 39 && nextLink) { targetElement = nextLink; } } else if ([38, 40].indexOf(e.keyCode) > -1) { var activeItem = findAncestor(activeLink, itemName); var previousItem = activeItem.previousElementSibling; var nextItem = activeItem.nextElementSibling; if (!previousItem && !nextItem) { return; } if (e.keyCode === 38 && previousItem) { targetElement = previousItem.getElementsByClassName(linkName)[0]; } else if (e.keyCode === 40 && nextItem) { targetElement = nextItem.getElementsByClassName(linkName)[0]; } } if (targetElement && targetElement.classList.contains(linkName)) { if ([37, 38, 39, 40].indexOf(e.keyCode) > -1) { e.preventDefault(); targetElement.focus(); } activeLink.classList.remove(activeLinkName); targetElement.className += ' ' + activeLinkName; } }; Related Links: Website (serves as a full demo) Repository on GitHub JSON file Answer: One thing that falls into my eye, is that you have a possible race condition: You use the data you load via AJAX in the window.onload event handler. 
However window.onload doesn't wait for AJAX calls, so it could theoretically happen that the window.onload event handler runs before the data is loaded. Instead you should use the data directly in the AJAX onload event. You shouldn't be assigning event handlers directly to the element properties. window.onload = ... overwrites any other handler assigned to it (e.g. by another script), and accordingly can be overwritten itself. Instead use addEventListener just like you do for the other events. Also you should consider using the event DOMContentLoaded instead of load. You use several newer DOM features such as classList and getElementsByClassName. It may be a good idea to check for support before using them, so that older browsers don't trip over them. As for the actual functionality, I'm not quite sure I understand yet what is happening. In any case it could do with some more comments, and possibly a better separation of logic and output. Maybe I'll have more time later to have another look.
{ "domain": "codereview.stackexchange", "id": 17013, "tags": "javascript, beginner" }
Why does a signal with constant frequency have spots that change color at a specific value of scale (and so frequency) in the scalogram?
Question: I am studying the wavelet transform and I am considering this example that I took from the PyWavelets documentation. The signal in the time domain has the following shape: Up to the value of zero on the horizontal axis we have a signal with a constant frequency. So I would expect the scalogram to have something like a constant (both in color and in dimension) horizontal stripe at a specific value of the scale (or period or frequency or whatever you want to put on the y axis of the scalogram) up to the zero value, while instead it has vertical stripes that alternate their colors from the extreme violet color to the extreme green. Why?! This is the image: From a scalogram like this, I would expect a signal that changes frequency, because the changing colors mean changing values of the wavelet coefficients, and so a changing similarity between the wavelet and the input signal (since the operation that is done is the convolution between the wavelet and the data). High similarity should mean high coefficients (so green color) while low similarity means low values of coefficients (so violet colors); if the color changes, the similarity changes, so the shape of the signal changes and thus also the frequency. Is this right? What am I missing? Any suggestion would be really appreciated. Thanks in advance. EDIT I: I appreciate the suggestions in the comments below my post, but since there has not been an answer to my post and my question has not been closed, I want to share with you that I found a clear explanation in this nice video Wavelets: a mathematical microscope. As Ash pointed out, one should plot the magnitude of a complex wavelet and so one has to consider the convolution with both the real and imaginary part of the wavelet.
Hence by following this procedure, I obtain the plot as I expected it to be but with still one problem: the red bar that should be in correspondence of scale value equal to 30 and the distortions (that correspond to signal changing in frequency) should go from 30 to lower scales, in my case is inverted. Why ? Here is the Python code that I used: time = np.linspace(-1, 1, 200, endpoint=False) signal = np.cos(2 * np.pi * 7 * time) + np.real(np.exp(-7*(time-0.4)**2)*np.exp(1j*2*np.pi*2*(time-0.4))) fig, ax = plt.subplots(figsize=(9, 5)) ax.plot(time,signal) sns.despine(fig, bottom=False, left=False) plt.show() scales = np.arange(1,31) ylabel = 'Period' xlabel = 'Time' waveletname='cgau2' coef, freqs=pywt.cwt(signal, scales, waveletname) fig, ax = plt.subplots(figsize=(12, 2)) contourf_ = ax.contourf(time, scales, np.abs(coef), cmap=plt.cm.Reds)#extend='both', ax.set_title('Wavelet Transform of Signal (${}$)'.format(waveletname), fontsize=20) ax.set_ylabel('Scales', fontsize=14) ax.set_xlabel('Time (s)', fontsize=14) fig.colorbar(contourf_) plt.show() EDITII: #Time domain signal time = np.linspace(-1, 1, 200, endpoint=False) signal = np.cos(2 * np.pi * 7 * time) + np.real(np.exp(-7*(time-0.4)**2)*np.exp(1j*2*np.pi*2*(time-0.4))) fig, ax = plt.subplots(figsize=(9, 5)) ax.plot(time,signal) sns.despine(fig, bottom=False, left=False) plt.show() #Setting parameters for Continous Wavelet Transform scales = np.arange(1,31) waveletname='cgau2' coef, freqs=pywt.cwt(signal, scales, waveletname) #contourf fig, ax = plt.subplots(figsize=(12, 2)) contourf_ = ax.contourf(time, scales, np.abs(coef), cmap=plt.cm.Reds)#extend='both', ax.set_title('Wavelet Transform of Signal (${}$)'.format(waveletname), fontsize=20) ax.set_ylabel('Scales', fontsize=14) ax.set_xlabel('Time (s)', fontsize=14) fig.colorbar(contourf_) plt.show() #matshow fig, ax = plt.subplots(figsize=(12, 5)) matshow_ = ax.matshow(np.abs(coef), extent=[-1, 1, 1, 31], aspect = 'auto', cmap='Reds', vmax=abs(coef).max(), 
vmin=0) fig.colorbar(matshow_) plt.gca().xaxis.tick_bottom() # it puts x axis from top to bottom of figure. loc = plticker.MultipleLocator(base=0.25) # this locator puts ticks at regular intervals ax.xaxis.set_major_locator(loc) ax.set_title('Wavelet Transform of Signal (${}$)'.format(waveletname), fontsize=20) ax.set_ylabel('Scales', fontsize=15) ax.set_xlabel('Time', fontsize=15) plt.show() #imshow fig, ax = plt.subplots(figsize=(12, 5)) imshow_ = plt.imshow(np.abs(coef), extent=[-1, 1, 1, 31], cmap='Reds', aspect='auto', vmax=abs(coef).max(), vmin=0) fig.colorbar(imshow_) ax.set_title('Wavelet Transform of Signal (${}$)'.format(waveletname), fontsize=20) ax.set_ylabel('Scales', fontsize=15) ax.set_xlabel('Time', fontsize=15) plt.show() This is the plot using matshow or imshow. Answer: Re: real part There are oscillations because that's what the wavelet transform is - a decomposition into zero-mean, localized oscillations. CWT is convolution (rather, cross-correlation) of signal with wavelets, and you're taking the real part of this result: top animation here may help. With analytic or real wavelets, the CWT also nicely interprets as "redistributing" the signal over a 2D plane, with each oscillatory component in its proper slot, per the one-integral inverse. abs gives intensity, which interprets as amplitude but only with analytic wavelets, which cgau2 isn't, and is why b is over-represented. Re: wrong scales The red plot is correct, the green-purple one isn't. Lower scale $\Leftrightarrow$ higher frequency. Let's plot sig: Blue is a pure sine: constant frequency over time, so a horizontal bar in time-freq Orange is a Gaussian-windowed sine: constant frequency but localized in time, so a "lump" in time-freq Clearly, orange is lower in frequency, hence "resonates" with greater scale. pywt's example is fixed via extent=[-1, 1, 31, 1] (I originally ignored this). 
For best results, apply a wavelet with high time resolution: Note the sine is around frequency=7, matching np.cos(2 * np.pi * 7 * t). import numpy as np from ssqueezepy import cwt, Wavelet from ssqueezepy.experimental import scale_to_freq from ssqueezepy.visuals import imshow t = np.linspace(-1, 1, 200, endpoint=False) sig = (np.cos(2 * np.pi * 7 * t) + np.real(np.exp(-7*(t-0.4)**2)*np.exp(1j*2*np.pi*2*(t-0.4)))) wavelet = Wavelet(('gmw', {'beta': 4})) Wx, scales = cwt(sig, wavelet, padtype='zero') freqs = scale_to_freq(scales, wavelet, N=len(sig), fs=1/(t[1] - t[0])) imshow(Wx, abs=1, yticks=freqs, xticks=t, xlabel="time [sec]", ylabel="frequency [Hz]")
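The "take the magnitude of an analytic wavelet" point can be demonstrated without any wavelet library. Below is a minimal NumPy sketch (all parameters are illustrative, not taken from the answer) that cross-correlates a pure 7 Hz cosine with a Morlet-like analytic wavelet tuned to the same frequency: the real part of the response oscillates (the green–purple stripes), while its magnitude is nearly flat, as a constant-frequency signal should appear.

```python
import numpy as np

# Pure cosine at 7 Hz, sampled like the question's signal
fs = 100.0
t = np.arange(-1, 1, 1 / fs)
sig = np.cos(2 * np.pi * 7 * t)

# A complex Morlet-like analytic wavelet tuned to 7 Hz (hypothetical parameters)
f0, sigma = 7.0, 0.15
tau = np.arange(-0.5, 0.5, 1 / fs)
wavelet = np.exp(1j * 2 * np.pi * f0 * tau) * np.exp(-tau**2 / (2 * sigma**2))

# Cross-correlate signal with wavelet: one row of a CWT at a fixed scale
resp = np.convolve(sig, np.conj(wavelet)[::-1], mode='same')

# Compare peak-to-peak ripple in the interior (away from boundary effects)
inner = slice(60, 140)
real_ripple = np.ptp(resp.real[inner])   # oscillates strongly at ~7 Hz
mag_ripple = np.ptp(np.abs(resp)[inner]) # nearly constant
print(mag_ripple / real_ripple)          # small: the magnitude is almost flat
```

Only the real (or imaginary) part alone shows the oscillatory stripes; the modulus recovers the flat band.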
{ "domain": "dsp.stackexchange", "id": 11594, "tags": "python, wavelet, time-frequency, cwt, pywavelets" }
How to prove the wave nature of a large object?
Question: In the double-slit experiment only small particles were used. However, if we use large objects such as a tennis ball, the expected pattern is not observed. Why is the wave nature not visible for large objects in the double-slit experiment? How do I then prove that large objects also have wave nature? Answer: The difficult bit is creating a coherent beam of large objects since the rate of decoherence increases rapidly with the size of the object. If you could create a coherent beam of tennis balls you could diffract it, but in the real world you wouldn't be able to maintain the coherence for longer than the tiniest fraction of a second. As far as I know the largest object that has been diffracted is a buckyball. It's unlikely you'll ever be able to prove that objects the size of a tennis ball have wave-like properties. However, since QM correctly predicts diffraction for sizes ranging from electrons to buckyballs, there's no obvious reason why this should break down for larger objects. For more on this subject see the questions: Will a football (soccer) diffract? Validity of naively computing the de Broglie wavelength of a macroscopic object Why doesn't a marble rolling on a table ever reflect back at the edge?
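The size argument can be made quantitative with the de Broglie relation $\lambda = h/(mv)$. The numbers below (a ~57 g ball at ~30 m/s, an electron at $10^6$ m/s) are illustrative choices, not from the answer:

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34                   # Planck constant, J*s
m_ball, v_ball = 0.057, 30.0    # a tennis ball: ~57 g at ~30 m/s (illustrative)
m_e, v_e = 9.109e-31, 1.0e6     # an electron at 10^6 m/s, for comparison

lam_ball = h / (m_ball * v_ball)
lam_e = h / (m_e * v_e)
print(f"tennis ball: {lam_ball:.2e} m")  # ~4e-34 m: far smaller than any slit
print(f"electron:    {lam_e:.2e} m")     # ~7e-10 m: comparable to atomic spacing
```

The ball's wavelength is some 24 orders of magnitude below the size of an atomic nucleus, which is why no realizable slit could reveal it even before decoherence is considered.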
{ "domain": "physics.stackexchange", "id": 90650, "tags": "double-slit-experiment" }
Navigation using a real-time generated map without a known map
Question: I know this should be possible, but at the point of using AMCL I don't know what to do, as AMCL does not have a tutorial for use without a known map. Right now, in the robot configuration.launch, a map is generated and the current laser scan data can be seen in rviz. Then I followed the navigation tutorial to write the configuration for the navigation stack. The other yaml files are all the same, except that in global_costmap_params.yaml static_map is set to false. amcl_diff.launch is similar to the one found in drh-robotics-ros - Revision 108: /trunk/ros/ardros/launch. But I have a problem with the last part, move_base.launch: a map server is needed to run the navigation stack, which operates on a known map. This part specifically asks for a known map for processing, whereas I need it to use the /map generated by slam_gmapping. And I'll need it to be real time. I know I have to change to a real-time map generated by slam_gmapping, but I don't know how to do it, as there wasn't much info in amcl/Tutorials. Does anyone know how to do it? How do I modify the last move_base.launch to read real-time map data? Originally posted by snakehaihai on ROS Answers with karma: 28 on 2012-12-06 Post score: 2 Answer: AMCL is meant for localization only (based on a known map) and will not work with a non-static map. SLAM (Simultaneous Localization and Mapping) methods are meant for the task you're looking at. Options for that are gmapping and (shameless plug :) ) hector_mapping. I suppose you could look into hacking AMCL to work with a map generated online by some other approach, but for that other approach to be able to generate a consistent map you need localization. And that's the chicken-and-egg problem of SLAM ;) Originally posted by Stefan Kohlbrecher with karma: 24361 on 2012-12-06 This answer was ACCEPTED on the original site Post score: 6
{ "domain": "robotics.stackexchange", "id": 11998, "tags": "ros, navigation, mapping, time" }
Work required to change an object's speed in different reference frames
Question: Let $\vec s$, $\vec v$, $\vec a$ and $\vec F$ be the displacement, velocity, acceleration, and force as functions of time $t$. With respect to an absolute reference frame, such that $\vec v = \vec 0$ at $t = 0$, we can derive the kinetic energy of a particle of mass $m$ as below: $$ \begin{align} E_k & = \int \vec F \cdot \vec {ds} \\ & = \int \vec F \cdot \vec v\; dt \\ & = \int m \vec a \cdot \vec v\; dt \\ & = m\int \vec a \cdot \vec v\; dt \\ \end{align} $$ solving the integral: $$ \begin{align} \int \vec a \cdot \vec v\; dt\; & = \vec v\;\cdot\int \vec a\;dt\; - \int\left(\vec v\,'\;\cdot \int \vec a\;dt\right)\; dt \\ & = \vec v\; \cdot \vec v\;- \int \vec a \cdot \vec v\; dt \\ \implies \int \vec a \cdot \vec v\; dt\;& = \frac {\vec v \cdot \vec v}{2} \\ \end{align} $$ Thus: $$ E_k = \frac {m\;(\vec v \cdot \vec v)}{2} $$ Now let $\Delta E_k$ be the change in kinetic energy as the velocity changes from $\vec v_1$ at $t_1$ to $\vec v_2$ at $t_2$ (that is, $\vec v(t_2) = \vec v_2$). Integrating from $t_1$ to $t_2$ we get what we expect: $$ \begin{align} \Delta E_k & = \int_{t_1}^{t_2} \vec F \cdot \vec {ds} \\ & = m \int_{t_1}^{t_2} \vec a \cdot \vec v\; dt \\ & = m \left[\frac {(\vec v_2 \cdot \vec v_2)}{2} - \frac {(\vec v_1 \cdot \vec v_1)}{2}\right] \\ \\ \Delta E_k & = \frac {m\; \left(\vec v_2 \cdot \vec v_2 - \vec v_1 \cdot \vec v_1\right)}{2} \\ \end{align} $$ So far so good; now I try to get the same answer using a different line of reasoning. Let us shift to an inertial frame moving with velocity $\vec v_1$ with respect to the absolute reference frame above. In this new frame the velocity at $t_2$ is $\vec v_2 - \vec v_1$, and at $t = t_1$ the velocity is $\vec 0$.
So $\Delta E_k$ is: $$ \begin{align} \Delta E_k & = \frac {m\;(\vec v_2 - \vec v_1) \cdot (\vec v_2 - \vec v_1)}{2} \\ & = \frac {m\;(\vec v_2 \cdot \vec v_2 + \vec v_1 \cdot \vec v_1 - 2\,\vec v_1 \cdot \vec v_2)}{2} \\ \end{align} $$ this is obviously not equal to: $$ \frac {m\; \left(\vec v_2 \cdot \vec v_2 - \vec v_1 \cdot \vec v_1\right)}{2} $$ Where did I go wrong here? What am I missing? Here is a similar question, but there we have a plane of some mass $M$ to account for the energy difference. Answer: When you change between two reference frames that are in relative uniform motion, the absolute value of the kinetic energy and the value of the work of a force may change (and thus the difference of the kinetic energy between two states of the system), but the work–energy theorem holds in both reference frames, as you may expect, since classical physics is invariant under Galilean transformations. Computations in your example: maybe you forgot to change the coordinates in the work integral.
$\displaystyle W = \int_{1}^{2} \mathbf{F} \cdot \mathbf{v} \, dt = \dfrac{1}{2} m |\mathbf{v}_2|^2 - \dfrac{1}{2} m |\mathbf{v}_1|^2 = \Delta K$ The second observer measures velocities $\tilde{\mathbf{v}} = \mathbf{v} - \mathbf{v}_0$, and accelerations $\tilde{\mathbf{a}} = \mathbf{a}$ $\displaystyle \tilde{W} = \int_{1}^{2} \mathbf{F} \cdot \tilde{\mathbf{v}} \, dt = \int_{1}^{2} m \tilde{\mathbf{a}} \cdot \tilde{\mathbf{v}} \, dt = \dfrac{1}{2} m |\tilde{\mathbf{v}}_2|^2 - \dfrac{1}{2} m |\tilde{\mathbf{v}}_1|^2 = \Delta \tilde{K}$ The difference between the work evaluated by the two observers is $\displaystyle W - \tilde{W} = \int_{1}^{2} \mathbf{F} \cdot \mathbf{v}_0 \, dt = \int_{1}^{2} m \mathbf{a} \cdot \mathbf{v}_0 \, dt = m ( \mathbf{v}_2 - \mathbf{v}_1) \cdot \mathbf{v}_0 = m ( \tilde{\mathbf{v}}_2 - \tilde{\mathbf{v}}_1) \cdot \mathbf{v}_0 = m \Delta \mathbf{v} \cdot \mathbf{v}_0$, and the same holds for the difference of the kinetic energy difference, that reads $\Delta K - \Delta \tilde{K} = \dfrac{1}{2} m |\mathbf{v}_2|^2 - \dfrac{1}{2} m |\mathbf{v}_1|^2 - \dfrac{1}{2} m |\tilde{\mathbf{v}}_2|^2 + \dfrac{1}{2} m |\tilde{\mathbf{v}}_1|^2 =$ $\qquad \qquad \quad = \dfrac{1}{2}m(|\tilde{\mathbf{v}}_2|^2 + |{\mathbf{v}}_0|^2 + 2\tilde{\mathbf{v}_2} \cdot \mathbf{v}_0 - |\tilde{\mathbf{v}}_1|^2 - |{\mathbf{v}}_0|^2 - 2\tilde{\mathbf{v}_1} \cdot \mathbf{v}_0 - |\tilde{\mathbf{v}}_2|^2 + |\tilde{\mathbf{v}}_1|^2) = $ $\qquad \qquad \quad = m (\tilde{\mathbf{v}}_2 - \tilde{\mathbf{v}}_1) \cdot \mathbf{v}_0 = m \Delta \mathbf{v} \cdot \mathbf{v}_0$.
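The frame dependence of both the work and $\Delta K$, and their exact mismatch $m\,\Delta\mathbf{v}\cdot\mathbf{v}_0$, can be checked numerically. A small sketch with a constant force (all numbers arbitrary):

```python
import numpy as np

# Constant force on a mass, watched from two inertial frames
m = 2.0
F = np.array([3.0, -1.0])       # constant force
v1 = np.array([1.0, 2.0])       # velocity at t1 in the original frame
v0 = np.array([0.5, -1.5])      # velocity of the second (moving) frame
t = np.linspace(0.0, 4.0, 100001)
dt = t[1] - t[0]

v = v1 + np.outer(t, F / m)     # v(t) in the original frame
vtil = v - v0                   # v(t) in the moving frame

def work(vel):
    """Trapezoidal integral of F . v dt."""
    p = vel @ F
    return float(np.sum((p[1:] + p[:-1]) * dt / 2))

W, Wtil = work(v), work(vtil)
dK    = 0.5 * m * (v[-1] @ v[-1] - v[0] @ v[0])
dKtil = 0.5 * m * (vtil[-1] @ vtil[-1] - vtil[0] @ vtil[0])
dv = v[-1] - v[0]

print(np.isclose(W, dK))                         # work-energy theorem, frame 1
print(np.isclose(Wtil, dKtil))                   # work-energy theorem, frame 2
print(np.isclose(W - Wtil, m * np.dot(dv, v0)))  # mismatch = m (Delta v) . v0
```

All three checks come out true: each observer sees a consistent work–energy balance, and the two balances differ by exactly $m\,\Delta\mathbf{v}\cdot\mathbf{v}_0$, which is what the question's comparison overlooked.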
{ "domain": "physics.stackexchange", "id": 91752, "tags": "classical-mechanics, energy, velocity" }
how to install cmvision package for ros indigo
Question: I am new to ROS and maybe this is a silly question, but please help. I am not able to install the cmvision package for ROS Indigo using the following command: sudo apt-get ros-indigo-cmvision The error I get is: 'E: Invalid operation ros-indigo-cmvision' Please help. Thanks in advance! Originally posted by vacky11 on ROS Answers with karma: 272 on 2017-02-08 Post score: 0 Answer: The error I get is: 'E: Invalid operation ros-indigo-cmvision' apt-get is telling you that it doesn't understand what you want, as ros-indigo-cmvision is not an apt-get command. The correct command would be: sudo apt-get install ros-indigo-cmvision note the install there. But in this case that is not going to work either, as cmvision has not been released for anything but ROS Hydro (note the lack of any mention of any other ROS release on the wiki page). If you still want to use the package, you'll have to build it from sources. See #q252478 for some info on how to do that. kbogert/cmvision/indigo-devel appears to be a fork of the original cmvision package that has been updated to work on ROS Indigo. Originally posted by gvdhoorn with karma: 86574 on 2017-02-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26961, "tags": "ros, cmvision, ros-indigo" }
Rotational mechanics
Question: Is it possible that a disc is rolling up a rough inclined plane if only gravitational and frictional forces are acting on it? What I am confused about is which force is moving the disc up the plane, as the gravitational force acts in the downward direction, the frictional force can't be more than the gravitational force here, and no other force is acting. Answer: Yes, it is possible. It will slow down its rolling, but nevertheless roll uphill for a while. The answer to your comment on the other answer is that the friction direction doesn't depend on the rolling direction! It only depends on the other forces present. This is the case for static friction, which you have in pure rolling (the contact point doesn't slide but is static during the short touching duration). When gravity pulls downwards, then friction must hold back in the contact point the opposite way to avoid sliding. Therefore, friction pulls upwards along the incline - both when the object rolls uphill and when it rolls downhill.
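The direction claim can be read off Newton's equations for pure rolling. Taking down-slope as positive, $ma = mg\sin\theta - f$, the torque balance $fR = I\alpha$, and the rolling constraint $a = R\alpha$ determine both $a$ and $f$, and the velocity never enters; a sketch with illustrative numbers:

```python
import math

# Pure rolling of a uniform disc on an incline (numbers illustrative)
m, R, theta, g = 1.0, 0.1, math.radians(20), 9.81
I = 0.5 * m * R**2   # moment of inertia of a uniform disc about its axis

# Down-slope positive: m*a = m*g*sin(theta) - f,  f*R = I*alpha,  a = R*alpha
a = g * math.sin(theta) / (1 + I / (m * R**2))  # = (2/3) g sin(theta) for a disc
f = I * a / R**2                                # friction force, down-slope positive

print(a)       # the disc's (de)acceleration along the slope
print(f > 0)   # True: friction acts up the slope -- and nothing here
               # depends on the sign of the velocity, so the same f holds
               # whether the disc rolls uphill or downhill
```

Rolling uphill, this $a$ decelerates the disc; rolling downhill, the same $a$ speeds it up, with friction pointing up the incline in both cases, exactly as the answer states.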
{ "domain": "physics.stackexchange", "id": 40337, "tags": "homework-and-exercises, rotational-dynamics" }
Random 2D noise generated in the spectrum domain always has its maximum at (1,1)
Question: I want to add a noise made of harmonic functions to my 2D matrix. I thought it could be made by adding random amplitudes to modes in the Fourier domain. I keep the Hermitian symmetry of the matrix so that the IFFT gives me a real matrix (I iterate N/2 times over each positive frequency and its conjugate negative frequency). Why is the first element of the final matrix always the maximum? If I, however, not only add but add or subtract a random number, then the maximum is placed in a random place. I've written an example in MATLAB: N = 16; %dimensions x2d = ones(N); %basic 2D N*N matrix y2d = fft2(x2d); %its FFT %adding a random number to the spectrum paying attention to its conjugate %part so the basic matrix remains real upfreq = N/2; for chn1=2:upfreq for chn2 = 2:upfreq % N/2+1 is skipped rnd = rand; %if changed to rand-0.5 it gives random maximum point y2d(chn1,chn2)=y2d(chn1,chn2) +rnd ; y2d(N-chn1+2,N-chn2+2)=y2d(N-chn1+1+1,N-chn2+1+1)+rnd; end end imagesc(ifft2(y2d)) colorbar Could you explain to me why this is so? Answer: If you use random real-valued coefficients (e.g. just magnitudes), they all correspond to cosine functions, which all peak at zero (cos(f * 0) == 1, the maximum of the cosine). To get more random peak locations, use random complex coefficients as the input to your ifft2d, which randomizes the phase so all the peaks don't line up at zero.
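A NumPy sketch of the two cases (variable names made up): positive real amplitudes are a sum of cosines that all reach their maximum at the origin, so the peak is pinned to the first element; giving each mode a random phase, while mirroring its conjugate to keep the IFFT real, moves the peaks around.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
spec_real = np.zeros((N, N), dtype=complex)  # positive real amplitudes
spec_cplx = np.zeros((N, N), dtype=complex)  # same amplitudes, random phases

for k1 in range(1, N // 2):
    for k2 in range(1, N // 2):
        r = rng.random()                          # amplitude only
        z = r * np.exp(2j * np.pi * rng.random()) # amplitude + random phase
        spec_real[k1, k2] = r;  spec_real[-k1, -k2] = r          # conj(r) == r
        spec_cplx[k1, k2] = z;  spec_cplx[-k1, -k2] = np.conj(z) # Hermitian mirror

a = np.fft.ifft2(spec_real).real  # every mode is a cosine peaking at (0,0)
b = np.fft.ifft2(spec_cplx).real  # random phases shift the peaks around

print(np.unravel_index(np.argmax(a), a.shape))  # always (0, 0)
print(np.unravel_index(np.argmax(b), b.shape))  # generally somewhere else
```

In the real-amplitude field every term $2r_k\cos(2\pi \mathbf{k}\cdot\mathbf{x}/N)$ is bounded by its value at $\mathbf{x}=0$, so the sum is provably maximal there (MATLAB's 1-based indexing makes this element (1,1)); the conjugate mirroring keeps the imaginary part of both fields at machine zero.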
{ "domain": "dsp.stackexchange", "id": 8305, "tags": "fft, noise" }
Is the number 1 a unit?
Question: In dimensionless analysis, coefficients of quantities which have the same unit for numerator and denominator are said to be dimensionless. I feel the word dimensionless is actually wrong and should be replaced by "of dimension number". For example, the Mach number is of dimension one. Many people write, for this case: Mach-Number | Dimension: "-" | Unit: "1" As mentioned before, I would say 'Dimension: "1"' in this place. But what about the unit? $\text m/\text s$ divided by $\text m/\text s$ is equal to one. But is the number one a unit by definition? Or should one say that the Mach number has no unit and therefore 'Unit: "-"'? Answer: This is analogous to the definition of an empty product in mathematics. For a finite non-empty set $S=\{s_1,\ldots,s_n\}$, the product over $S$ can be defined as $$\prod_{s\in S}s=s_1\times \cdots\times s_n.$$ For such a product you'd want disjoint unions to map into products: if $R\cap S=\emptyset$, then you want $\prod_{x\in R\cup S}x=\left(\prod_{s\in S}s\right) \times \left(\prod_{r\in R}r\right)$, but for this to make sense you want to be able to handle the empty set, and the only way to make the rules consistent is to set $$\prod_{s\in\emptyset}s=1.$$ This essentially says: if there's nothing to multiply, the result is one. (Similarly, empty sums are defined to be zero, for the same reason.) In the case in hand, you could simply say if there are no units to multiply, then you get one. As Luboš points out, this is the harmless only consistent choice, as multiplying by one does not change the quantity. Moreover, this empty-product intuition can be carried out to a full formalization of physical dimensions and units as a vector space. The whole works is in this answer of mine, but the essential idea is that positive physical quantities form a vector space over the rationals, where "addition" is multiplication of two quantities and "scalar multiplication" is raising the quantity to a rational power. 
This vector-space formalism is precisely the reason why dimensional analysis often boils down to a set of linear equations. Moreover, in this vector space the 'zero' is the physical quantity and unit $1$ - neither vector space makes sense unless $1$ is both a quantity and a unit. Ultimately, of course, it boils down to convention, so people can just say "I'm going to do this in this other way" and they won't be "wrong" as such. However, in general, the consistent way to assign things is to say that dimensionless quantities have dimension $1$ (modulo whatever square bracket convention you're using) and unit $1$. To back this up a bit, for those that care about organizational guidance, the BIPM publishes the International Vocabulary of Metrology, which states (§1.8, note 1) that The term "dimensionless quantity" is commonly used and is kept here for historical reasons. It stems from the fact that all exponents are zero in the symbolic representation of the dimension for such quantities. The term "quantity of dimension one" reflects the convention in which the symbolic representation of the dimension for such quantities is the symbol 1 (see ISO 31-0:1992, 2.2.6). This is essentially the same in the ISO document, which has been superseded by ISO 80000-3:2009 (paywalled, but free preview available), which has an essentially identical entry in §3.8. Finally, and as a response to some of the comments by Luboš Motl, this applies to the term "physical dimension" as understood by the majority of physical scientists. There is also an alternative convention, used in high-energy contexts where you work in natural units with $\hbar=c=1$, in which you're left with a single nontrivial dimension, usually taken to be mass (=energy). In that context, it is usual to say a quantity or operator has "dimension $N$" to mean that it has mass dimension $N$ i.e. it has physical dimension $m^N$, but since there's only ever mass as the base quantity it often gets dropped. 
However, this is very much a corner case with respect to the rest of physical science, and high-energy theorists are remiss if they forget that their "dimension $N$" only works in natural units, which are useless outside of their small domain.
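The vector-space formalization described in this answer can be sketched concretely. The helper names below are made up for illustration, and only three base quantities (L, M, T) are modeled: a physical dimension becomes a tuple of rational exponents, multiplying quantities adds vectors, and the dimensionless "1" is exactly the zero vector.

```python
from fractions import Fraction

# A dimension is an exponent vector over the base quantities (L, M, T).
# "Multiplying quantities" adds vectors; "dividing" subtracts them.
def dim(L=0, M=0, T=0):
    return (Fraction(L), Fraction(M), Fraction(T))

def mul(a, b):  # dimension of a product of two quantities
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):  # dimension of a quotient of two quantities
    return tuple(x - y for x, y in zip(a, b))

speed = dim(L=1, T=-1)    # m/s
mach = div(speed, speed)  # (m/s) / (m/s)
print(mach == dim())      # True: the zero vector, i.e. "dimension one"
```

The zero vector plays the role of the empty product here: a quantity whose every exponent is zero has dimension 1, not "no dimension".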
{ "domain": "physics.stackexchange", "id": 9490, "tags": "units, dimensional-analysis, conventions" }
Replace part column value with value from another column of same dataframe
Question: I have a dataframe with two columns: Name DATE Name1 20200126 Name2 20200127 Name#DATE# 20200210 I need to replace all the #DATE# with the data from the DATE column, and get something like this: Name Name1 Name2 Name20200210 How can I achieve this? I've tried things like this, without any good result: df_merged_tables["Name"].str.replace("#DATE#",merged_tables["DATE"]) Thanks! Answer: Your solution is close. Maybe it just needed an apply. Try: df = pd.DataFrame({"Name":["Name1", "Name2", "Name#DATE#"], "Date":[20200126, 20200127, 20200210]}) df["NewColumn"] = df.apply(lambda row: row["Name"].replace("#DATE#", str(row["Date"])), axis = 1) Outputs:
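A runnable version of the accepted approach (the column is named Date here, matching the answer's snippet rather than the question's DATE, and the result is written back into Name):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Name1", "Name2", "Name#DATE#"],
                   "Date": [20200126, 20200127, 20200210]})

# Row-wise replace: substitute the placeholder with that row's Date value.
# Rows without the placeholder pass through unchanged.
df["Name"] = df.apply(lambda row: row["Name"].replace("#DATE#", str(row["Date"])),
                      axis=1)

print(df["Name"].tolist())  # ['Name1', 'Name2', 'Name20200210']
```

The row-wise apply is the straightforward route here because `Series.str.replace` takes a single replacement string (or a callable on the regex match), not a per-row value from another column.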
{ "domain": "datascience.stackexchange", "id": 9202, "tags": "python, pandas" }
What is the purpose of atomisation in atomic absorption spectroscopy?
Question: What is the purpose of the atomisation process in atomic absorption spectroscopy? I understand that solution is evaporated by the flame and then the molecules decompose into atoms. But why must the molecules become individual atoms before the light is passed through them? Answer: To complement the previous answer: Atomic absorption lines are very sharp, with high extinction coefficient, low detection limit and are specific for the given element. Molecular absorption is band-wise, diffuse, with much lower ext. coefficient, higher detection limit and last but not least, the selectivity is lost.
{ "domain": "chemistry.stackexchange", "id": 12894, "tags": "inorganic-chemistry, analytical-chemistry" }
Running time with while loop
Question: What's the running time of: foo(n) if(n==1) return; int i=1; while(i<n) { i=i+2 } foo(n-2) There are $n/2$ recursive calls to foo, but how do I add the while loop to the calculation? Answer: You can write the recurrence like the following: $$T(n) = T(n-2) + \frac{n}{2}$$ $$T(n) = \frac{n}{2} + \frac{n-2}{2} + \frac{n-4}{2}+\cdots +\frac{n-(n-2)}{2} = $$ $$\frac{\frac{n^2}{2}-2(1+2+3+\cdots+(\frac{n}{2}-1))}{2}=$$ $$\frac{\frac{n^2}{2}-(\frac{n}{2}-1)\times \frac{n}{2}}{2}= \frac{n^2}{8}+\frac{n}{4} = \Theta(n^2)$$
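The quadratic growth can be checked empirically with a direct simulation that counts while-loop iterations across all recursive calls (shown for odd n, so the recursion bottoms out at n = 1):

```python
def foo_ops(n):
    """Count iterations of the while loop summed over all recursive calls."""
    if n == 1:
        return 0
    ops = 0
    i = 1
    while i < n:   # roughly n/2 iterations per call
        i += 2
        ops += 1
    return ops + foo_ops(n - 2)

# Quadratic growth: for n = 2k+1 the count is 1 + 2 + ... + k = k(k+1)/2.
print(foo_ops(9))    # 10
print(foo_ops(101))  # 1275
```

This matches the Θ(n²) bound from the recurrence: the exact count differs from the closed form above only by rounding, since each call performs ⌈(n−1)/2⌉ rather than exactly n/2 iterations.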
{ "domain": "cs.stackexchange", "id": 10628, "tags": "runtime-analysis" }
Simple yo-yo work problem
Question: I'm taking an introduction to modern physics course and would appreciate some help on this problem: A yo-yo is attached to a string, whose end is held tightly. It is released from rest. After the yo-yo has fallen a distance h, find the total kinetic energy of the yo-yo. Find the speed of the yo-yo after it has fallen a distance h. Find the magnitude of the tension in the string while the yo-yo is falling. Attempt at a solution: $$\Delta KE = KE_f-KE_i=KE_f$$ Going from a point particle system, $$\Delta KE_{trans} = W_{gravity}=mgh$$ But I am not sure this is true because the string also does work on the yo-yo, but I don't know that force. Answer: Since energy is conserved, the total kinetic energy after the yo-yo has fallen a distance $h$ is simply the initial energy, $E_1=mgh$. However, gravitational potential energy is converted into two forms of kinetic energy: translational kinetic energy ($\frac{1}{2}mv^2$) and rotational kinetic energy ($\frac{1}{2}I\omega^2$). Here, $I$ is the moment of inertia of the yo-yo, and $\omega$ is the angular velocity. Now, since the rope is always tangent to the yo-yo, the speed of the rope is simply the tangential velocity of the yo-yo, $v=\omega r$. Thus you can use conservation of energy $E_1=E_2$, where $E_2=\frac{1}{2}mv^2+\frac{1}{2}I\omega^2$ to determine $v$. Finally, to find the tension, apply Newton's laws $F=ma$, and $\tau=I\alpha,$ with $\alpha$ the angular acceleration. Both the linear force equation and the torque equation apply! This will allow you to determine the tension in the rope in terms of the mass, radius of the yo-yo, etc.
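As a worked check of the recipe above, here is the standard closed form under one extra assumption beyond the problem statement: the yo-yo is modeled as a uniform disk of radius $r$, so $I=\frac12 mr^2$ (the answer deliberately leaves $I$ general).

```latex
% assuming a uniform disk, I = \tfrac12 m r^2, with rolling constraint v = \omega r:
mgh = \tfrac{1}{2}mv^2 + \tfrac{1}{2}I\omega^2
    = \tfrac{1}{2}mv^2 + \tfrac{1}{4}mv^2
    = \tfrac{3}{4}mv^2
\quad\Longrightarrow\quad
v = \sqrt{\tfrac{4gh}{3}}

% Newton's laws for translation and rotation:
mg - T = ma, \qquad Tr = I\alpha = \tfrac{1}{2}mr^2\cdot\tfrac{a}{r}
\;\Longrightarrow\; T = \tfrac{1}{2}ma
\;\Longrightarrow\; a = \tfrac{2g}{3}, \quad T = \tfrac{mg}{3}
```

For a non-uniform yo-yo the same steps go through with $I$ left symbolic; the uniform disk is just the simplest case to check against.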
{ "domain": "physics.stackexchange", "id": 63280, "tags": "homework-and-exercises, newtonian-mechanics, energy, rotational-dynamics, work" }
Dirac, Weyl and Majorana Spinors
Question: To get to the point - what's the defining differences between them? Alas, my current understanding of a spinor is limited. All I know is that they are used to describe fermions (?), but I'm not sure why? Although I should probably grasp the above first, what is the difference between Dirac, Weyl and Majorana spinors? I know that there are similarities (as in overlaps) and that the Dirac spinor is a solution to the Dirac equation etc. But what's their mathematical differences, their purpose and their importance? (It might be good to note that I'm coming from a string theory perspective. Plus I've exhausted Wikipedia here.) Answer: Recall a Dirac spinor which obeys the Dirac Lagrangian $$\mathcal{L} = \bar{\psi}(i\gamma^{\mu}\partial_\mu -m)\psi.$$ The Dirac spinor is a four-component spinor, but may be decomposed into a pair of two-component spinors, i.e. we propose $$\psi = \left( \begin{array}{c} u_+\\ u_-\end{array}\right),$$ and the Dirac Lagrangian becomes, $$\mathcal{L} = iu_{-}^{\dagger}\sigma^{\mu}\partial_{\mu}u_{-} + iu_{+}^{\dagger}\bar{\sigma}^{\mu}\partial_{\mu}u_{+} -m(u^{\dagger}_{+}u_{-} + u_{-}^{\dagger}u_{+})$$ where $\sigma^{\mu} = (\mathbb{1},\sigma^{i})$ and $\bar{\sigma}^{\mu} = (\mathbb{1},-\sigma^{i})$ where $\sigma^{i}$ are the Pauli matrices and $i=1,..,3.$ The two-component spinors $u_{+}$ and $u_{-}$ are called Weyl or chiral spinors. In the limit $m\to 0$, a fermion can be described by a single Weyl spinor, satisfying e.g. $$i\bar{\sigma}^{\mu}\partial_{\mu}u_{+}=0.$$ Majorana fermions are similar to Weyl fermions; they also have two-components. But they must satisfy a reality condition and they must be invariant under charge conjugation. When you expand a Majorana fermion, the Fourier coefficients (or operators upon canonical quantization) are real. 
In other words, a Majorana fermion $\psi_{M}$ may be written in terms of Weyl spinors as, $$\psi_M = \left( \begin{array}{c} u_+\\ -i \sigma^2u^\ast_+\end{array}\right).$$ Majorana spinors are used frequently in supersymmetric theories. In the Wess-Zumino model - the simplest SUSY model - a supermultiplet is constructed from a complex scalar, auxiliary pseudo-scalar field, and Majorana spinor precisely because it has two degrees of freedom unlike a Dirac spinor. The action of the theory is simply, $$S \sim - \int d^4x \left( \frac{1}{2}\partial^\mu \phi^{\ast}\partial_\mu \phi + i \psi^{\dagger}\bar{\sigma}^\mu \partial_\mu \psi + |F|^2 \right)$$ where $F$ is the auxiliary field, whose equations of motion set $F=0$ but is necessary on grounds of consistency due to the degrees of freedom off-shell and on-shell.
{ "domain": "physics.stackexchange", "id": 12414, "tags": "special-relativity, representation-theory, spinors, majorana-fermions, clifford-algebra" }
Binary matrix column subset selection complexity
Question: Given an $m \times n$ matrix ($m$ rows) containing only $0$'s and $1$'s, what is the complexity of finding an $m \times k$ submatrix (of $k$ columns) such that within the chosen submatrix there is no row containing only zeroes, in other words, every row contains at least one $1$? For example, given the $4 \times 4$ matrix $\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 \end{bmatrix}$ a subset of $k=3$ columns that fails to fulfill this condition is that of the first three columns, as the third row has all zeros within this submatrix, but the column set $\{1, 3, 4\}$ would be a solution. I suspect this is a hard problem, but I haven't been able to find a direct reference. I'm interested in this problem because of its applications in cryptography. Answer: Rephrasing as a set system, each row represents a subset $E_i$ of some set $X$, for $i=1,2,\dots,m$. You want a set $Y \subseteq X$ with at most $k$ elements, such that $E_i \cap Y \ne \emptyset$ for each $i$. In other words, you want a hitting set of size at most $k$; this problem is NP-complete.
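For small instances the property is easy to check by brute force. A sketch (function names are illustrative; the search is exponential in the number of columns, consistent with the NP-completeness of hitting set), using 0-indexed columns so the question's set $\{1,3,4\}$ becomes $(0,2,3)$:

```python
from itertools import combinations

def covers(matrix, cols):
    """True if every row has at least one 1 within the chosen columns."""
    return all(any(row[c] for c in cols) for row in matrix)

def find_cover(matrix, k):
    """Brute-force search for a k-column subset covering every row."""
    n = len(matrix[0])
    for cols in combinations(range(n), k):
        if covers(matrix, cols):
            return cols
    return None

M = [[1, 0, 0, 1],
     [0, 1, 1, 0],
     [0, 0, 0, 1],
     [1, 1, 0, 1]]

print(covers(M, (0, 1, 2)))  # False: the third row is all zeros there
print(covers(M, (0, 2, 3)))  # True: the question's solution {1, 3, 4}
print(find_cover(M, 3))      # (0, 1, 3), the first solution in lexicographic order
```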
{ "domain": "cstheory.stackexchange", "id": 2153, "tags": "np-hardness, combinatorics, boolean-matrix" }
Welch's Power Spectral Density - Time-Averaging Explanation
Question: I am just branching out to taking the power spectrum of short-term audio frames ($20$ ms) in order to extract useful audio features. I have been reading about Welch's method which states that after computing the squared magnitude of the result, the individual periodograms are time-averaged which reduces the variance of the individual power measurements. I was hoping somebody could explain what this time-averaging is and how it is achieved. Is it done by looking at the individual squared bins within a given frame, or does it compare a previous frame to a present frame like a Spectral Flux? Also, I take it that a periodogram refers to the periodicity of a given audio frame? Thanks. Answer: Periodogram averaging means that the squared DFTs of consecutive frames are averaged (per frequency bin). If $k$ denotes the frequency index and $l$ denotes the frame index, the averaged periodogram is computed as $$\overline{P}[k]=\frac{1}{L}\sum_{l=0}^{L-1}P[k,l]\tag{1}$$ where $P[k,l]$ is the $k^{th}$ element of the squared magnitude of the DFT of the frame with index $l$. In (1) the averaging is done over $L$ frames. This is only useful if the signal can be assumed to be stationary during these $L$ time frames.
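Equation (1) in code, with made-up frame length and count, and using non-overlapping rectangular frames for simplicity (Welch's method proper also overlaps and windows the frames before transforming):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)          # stationary white-noise test signal

frame_len, L = 128, 8                  # L frames of 128 samples each
frames = x.reshape(L, frame_len)       # frame index l along axis 0

# P[l, k]: squared-magnitude DFT of frame l, per frequency bin k
P = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / frame_len
P_avg = P.mean(axis=0)                 # eq. (1): average over the L frames

print(P_avg.shape)                     # (65,): one estimate per frequency bin
# Averaging across frames reduces the spread of the per-bin estimates
# compared with any single frame's periodogram.
assert np.std(P_avg) < np.std(P[0])
```

Note that the averaging happens per frequency bin across frames, not across bins within one frame — which is exactly the distinction the question asks about.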
{ "domain": "dsp.stackexchange", "id": 2707, "tags": "power-spectral-density" }
Best colour to detect
Question: I am using emgucv with opencv. I have an object I wish to detect based on colour. I convert the image to Hsl colour space and perform a filter on it. The object I am detecting is perceived as yellow so I cover that range and make sure I am using/filtering good saturation. This has been working well. But I have now upgraded my usb camera and I have changed the lighting in the room. Now running the same code I can see it is ‘missing’ a lot of the object. So, I inspect the hue of the pixels that have been missed out. I can now see that the hues of those pixels have ‘shifted’ to the red range. This incidentally is more noticeable when the object is closer to the camera. So my question is this: Is yellow an unreliable colour to detect compared to say red, green, blue etc? Is the object not truly yellow? Though when the object is further away it does not appear to be a problem, Is colour segmentation unreliable? Should I consider using a different colour space? I will post my existing code if people require but I do not think it is required? Thanks Answer: A standard color camera, along with its software driver, is supposed to reflect the color that the eye would see. If it isn't doing that, then you have a badly designed camera or driver. In that case, either get a good camera+driver combo, or if the damage is linear you might be able to repair it by first multiplying the RGB with a 3x3 matrix that you have designed. Start with an identity matrix and adjust the elements until the output looks most like it does when you are looking right at the object (and assuming there is negligible color damage caused by the monitor you use to do this). Or you could even programmatically create the matrix by comparing the camera output with an ideal camera output. Once that is taken care of, you can pass the output to the algorithm that recognizes the object.
As far as the color space used for recognizing the object, a simple algorithm might do best keying off of the hue and saturation, so HSL might be best for that. But your real goal is to key off the intensities of all three colors (RGB) but still ignore luminance. So I suggest you use normalized RGB. That is, first scale the RGB so the total energy is some constant (say, unity). Specifically: ScaledRGB = RGB / VectorMagnitude(RGB). Once it's been normalized, you simply measure how far that normalized RGB "vector" is from the target color. Other than the above points, there shouldn't be an issue with the particular color you are using, except that it should be most different from the particular background environment. EDIT: answering subsequent questions... If the scene lighting changes, you must first measure the change by noting the coloration of parts of the scene or the whole scene that always remain in view. For example, if you know of a mostly white object that remains in view and remains stationary, you can use that color vector as the normalization divisor rather than the magnitude of the pixel as described earlier (that is, divide a pixel's red by the white object's red, etc.). If you don't know of any white objects that remain in view and remain stationary, then take the average RGB of the entire scene (including the target object, if present, since you don't know where it is, yet. And also ideally compensate for known coloration of these background objects, if possible) and use that RGB as the divisor in your normalization (pixel's red divided by average red, etc.). This normalizes the scene for lighting and camera oddities. Then you can search for the object without regard to changes in lighting, etc.
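The normalized-RGB matching described above can be sketched in a few lines. The target yellow and the thresholds are made-up values for illustration, not calibrated numbers:

```python
import numpy as np

def normalize_rgb(px):
    """Scale an RGB triple to unit length, discarding luminance."""
    px = np.asarray(px, dtype=float)
    n = np.linalg.norm(px)
    return px if n == 0 else px / n

target = normalize_rgb([255, 220, 40])   # hypothetical "object yellow"

def color_distance(px):
    """Distance between a pixel's normalized RGB and the target direction."""
    return np.linalg.norm(normalize_rgb(px) - target)

# A dimmer pixel of the same color stays close; a blue pixel is far away.
print(color_distance([128, 110, 20]) < 0.05)   # True
print(color_distance([40, 40, 255]) > 0.5)     # True
```

This is the brightness-invariance the answer is after: halving all three channels barely moves the normalized vector, so the match survives lighting changes that scale intensity uniformly.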
{ "domain": "dsp.stackexchange", "id": 7453, "tags": "image-segmentation" }
extrinsic calibration of non-overlapping cameras
Question: Hello everyone, does anybody have any idea how to calculate the relative transformation between the cameras of a multi camera rig with no overlap? Thanks in advance. Originally posted by Ifx13 on ROS Answers with karma: 54 on 2021-07-04 Post score: 0 Original comments Comment by midjji on 2022-03-03: It is trivial to do so using any standard toolbox for the calibration of overlapping cameras. All you need is one more camera not part of the rig, which is placed in such a way as to create overlap with both. Then simply chain the transform. The method, including what to do if one camera isn't enough, is described in: http://www.diva-portal.org/smash/get/diva2:1185614/FULLTEXT02 This approach is simpler than using a mirror, and more accurate. Further, because the extra camera is only needed during the calibration, you can use a high end one, and achieve very high accuracy. The paper also includes a simulator to show expected accuracy. As with all camera calibration, to ensure a good result: light the scene well, preferably with sunlight; use tripods for the pattern and the camera to ensure they are stationary; cover the entire sensor with roughly evenly sampled observations; and use a minimum number of camera parameters (non-monotonic lens distortion is failed estimation). Answer: I would suggest taking a look at ethz-asl/kalibr. From the readme: Kalibr is a toolbox that solves the following calibration problems: Multiple camera calibration: intrinsic and extrinsic calibration of a camera-systems with non-globally shared overlapping fields of view Originally posted by gvdhoorn with karma: 86574 on 2021-07-05 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Ifx13 on 2021-07-06: I've tried this.
The thing is that the overlap is practically zero, and when they say that the input is "non-globally shared overlapping fields of view" they mean that there is no need for every camera in the rig to have overlapping FoV, but there is still the need to have overlap between the pairs of cameras. Also, this package can work with different targets; one of the supported targets is the aprilgrid. This target does not need to be "seen" completely by both cameras; it says that it can be partially viewed, but at least one marker of the grid must be seen by both cameras. With my current camera configuration, this is not possible. Not a single marker is shared between images; they're cut in half. Please correct me if I misunderstood something.
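The transform-chaining idea from the comment — observe both rig cameras from one extra camera X, then compose — is just multiplication of homogeneous transforms. A sketch with made-up poses (rotations about z only, for brevity):

```python
import numpy as np

def make_T(theta_z, t):
    """4x4 homogeneous transform: rotation about z by theta_z, then translation t."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0],
                 [s,  c, 0],
                 [0,  0, 1]]
    T[:3, 3] = t
    return T

# Poses of the two rig cameras as seen from the extra camera X (illustrative values,
# e.g. obtained by calibrating X against each rig camera separately).
T_x_c1 = make_T(0.3, [0.10, 0.00, 0.00])
T_x_c2 = make_T(-0.2, [0.00, 0.50, 0.00])

# Relative transform between the rig cameras — no overlap between them needed:
T_c1_c2 = np.linalg.inv(T_x_c1) @ T_x_c2
print(np.allclose(T_x_c1 @ T_c1_c2, T_x_c2))  # True: the chain is consistent
```

Since the extra camera drops out after the composition, it only needs to exist during calibration, which is why the comment suggests using a high-end camera for that role.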
{ "domain": "robotics.stackexchange", "id": 36642, "tags": "slam, navigation, calibration, ros-melodic, camera-calibration" }
is there a way to normalize [-3,1] to ${\begin{bmatrix} \dfrac{-3}{\sqrt{10}}\\ \dfrac{1}{\sqrt{10}}\\ \end{bmatrix}}$ with python?
Question: I am learning SVD by following this MIT course. The lecturer is trying to normalize a vector $${\begin{bmatrix} -3\\ 1\\ \end{bmatrix}}$$ to $${\begin{bmatrix} \dfrac{-3}{\sqrt{10}}\\ \dfrac{1}{\sqrt{10}}\\ \end{bmatrix}}$$ I tried this with Python NumPy np.linalg.norm(v1,ord=2,axis=1,keepdims=True) and got array([[3.], [1.]]) I would like to get something like this [[-0.9486833 ], [ 0.31622777]] is there a way with Python (for instance, NumPy) to do the job? any other 3rd party library is also appreciated. Answer: You have already computed that, but you've not bound the output to a variable (also called a name in Python). Try the following snippet: result = np.linalg.norm(v1,ord=2,axis=1,keepdims=True) print(result) Based on the edit, I update the answer. As you may find in answers to similar questions, a typical way to get what you need is something like the following function: def normalize(v): norm = np.linalg.norm(v) if norm == 0: return v return v / norm Equivalently, there is a function called normalize in sklearn.preprocessing which can be employed for your task.
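Putting the answer's function together with the vector from the lecture — the expected values come from $-3/\sqrt{10}\approx-0.9487$ and $1/\sqrt{10}\approx0.3162$:

```python
import numpy as np

def normalize(v):
    """Divide a vector by its Euclidean norm; leave the zero vector unchanged."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return v / norm

v1 = np.array([[-3.0], [1.0]])   # column vector from the lecture
u = normalize(v1)
print(u)  # approximately [[-0.94868330], [0.31622777]]
```

Note why the original attempt printed `[[3.], [1.]]`: with `axis=1` on a 2x1 array, `np.linalg.norm` computes one norm per row (the absolute value of each entry) instead of a single norm of the whole vector.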
{ "domain": "datascience.stackexchange", "id": 5460, "tags": "machine-learning, python, data-mining, numpy, normalization" }
"Correctness" of type theory
Question: How can one "prove" that type theory is correct? Or at least explain that it's meaningful in some sense. To what extent is this a mathematical question, and to what extent a philosophical one? When type theories are introduced, for example in Luo's book, the following properties are proved: Subject reduction Reduction termination Church-Rosser property As far as I understand, what these properties describe is that we have decidable type checking, and that the language of type theory is meaningful in some sense, but nothing more than this. I asked a question on violation of strict positivity for inductive types (Example of where violation of strict positivity condition in inductive types leads to inconsistency) and in the answer I was told that non-strictly-positive types might lead to the non-existence of a set-theoretic model. Why do we need a set-theoretic model here? And what does it give us? Answer: Type theories have multiple uses, and with each kind of usage comes a different notion of correctness. The two key uses are: As a foundation of mathematics. In this context correctness means primarily that we can't deduce falsity. As a tool in programming. Here correctness means primarily that well-typed programs "don't go wrong" in Milner's famous words, i.e. that they do not exhibit certain forms of error, such as trying to execute 3 + "hello". Both notions of correctness are related, but not the same. Gödel's second incompleteness theorem puts serious restrictions in the way of showing the 'correctness' of type theories as a foundation of mathematics. The best you can hope for is to show 'correctness' (understood as consistency) relative to some other foundation, e.g. by giving a set-theoretic model. The key thing we worry about with type theories is that a given type-theory is inconsistent because we can define a non-terminating term.
Modern type-theories are fairly complicated beasts, and it's quite easy to make them too powerful so that a fix-point combinator becomes definable. Such fixpoint combinators inhabit every type, making the type-theory inconsistent as a foundation of mathematics. Since fixpoint combinators can be subtle, it is quite easy to overlook how one's typing system allows them to be defined. Indeed, famous foundationalists have a habit of proposing foundations of mathematics that are unsound for this reason. For example, Martin-Löf's original proposal for type-theory made the kind of types itself a type (that approach to typing is now referred to as Type:Type). This is extremely convenient, and gives you the most general form of impredicative polymorphism. Alas, Girard discovered that this was unsound, when he managed to express the Burali-Forti paradox in Type:Type, see this discussion for more details. Ever since, Martin-Löf's type theories have been predicative, but with added inductive definitions to recover (much of) the expressive power of impredicativity. Encoding a type theory in another foundation such as set theory gives us a fuzzy feeling of security: it says there are unlikely to be obvious unsoundness bugs in our type-theory. The justification behind this intuition is strictly empirical (!): in the century since Cantor invented it and Zermelo formalised it, nobody has found a soundness bug in conventional set theory. (As an aside, this situation is similar to cryptography where we trust RSA, Diffie-Hellman, AES or 3DES for little other reason than that they have withstood sustained attempts at breaking.) Note that while a system with Type:Type is wrong as a foundation of mathematics, it is perfectly acceptable as a typing system for programming languages, because we can still show that no typeable program goes "wrong". Regarding proof techniques for correctness, the different properties you mention relate to different notions of correctness.
Termination is the key step to showing correctness of a type-theory as a foundation of mathematics, while subject reduction is usually the key step to showing that a type-theory is usable as a tool in programming. Note that since a termination proof is the high-road to consistency as a foundation, by Gödel's second incompleteness theorem, we can prove termination of a type-theory only by using a stronger system.
{ "domain": "cstheory.stackexchange", "id": 2639, "tags": "lo.logic, type-theory, dependent-type" }
MVC Async Action Invoking Workflow
Question: I've just started working with Workflow (WF4) and have been playing with an idea of using it in MVC3.0 controller actions to see if it improves the maintainability of complex actions; also potentially giving the ability to make multiple DB calls in parallel to populate the output model. Along with this I've been looking at Async controller actions but I'm not sure I've really got my head around those. I know you can load workflow into a WorkflowApplication and call BeginRun(...) to get that to run asynchronously but I'm thinking that's the wrong approach; from looking online I've come up with the following implementation which does run as expected but I'm wondering if there's anything wrong with the code below or it will cause other issues that I've not thought about. [HttpPost] public void ConfigureAsync(ConfigureExportInputModel inputModel) { if (inputModel == null) { throw new ArgumentNullException("inputModel"); } var arguments = new Dictionary<string, object> { { "ExportType", inputModel.ExportType } }; var wf = new ErrorHandledExportWorkflow(); AsyncManager.OutstandingOperations.Increment(); ThreadPool.QueueUserWorkItem(o => { AsyncManager.Parameters["result"] = WorkflowInvoker.Invoke(wf, arguments)["MvcOutput"] as IDefineMvcOutput; AsyncManager.OutstandingOperations.Decrement(); }); } public ActionResult ConfigureCompleted(IDefineMvcOutput result) { return this.ActionResultFactory.Create(this, result); } Answer: I cannot believe this went unanswered for 5 and a half years (I guess I can, it is a difficult question to answer) - I'm going to try to answer it from the respect of early 2012, and the respect of today (mid 2017). 
2012 The biggest concern I see comes from this line: AsyncManager.Parameters["result"] = WorkflowInvoker.Invoke(wf, arguments)["MvcOutput"] as IDefineMvcOutput; I don't know how WorkflowInvoker works, but I assume that it has the capability of throwing an exception, which means that your AsyncManager.OutstandingOperations.Decrement(); line would not be reached, and I wonder what other issues that might cause for your application. (Looking for operations that aren't there, for example.) I would consider a try/finally block, or decrement before your operation gets invoked. (You can probably consider "In Progress" as a non-outstanding operation.) The other issue I see with your Lambda method is regarding the idea of "closures", and captured variables. In .NET the wf and arguments variables are still part of the local method and are simply referring to the local copy, which means if you modify either variable after queuing up the worker, you could end up with a different result. This may not be an issue with your infrastructure, but I feel it's worth pointing out none-the-less. 2017 Let's fast-forward five years and six months (which is an agonizingly long time) and examine what language features might make this a different process. I'm just going to post the form that takes advantage of the new language features, then we'll discuss where async/await might help you. 
[HttpPost] public void ConfigureAsync(ConfigureExportInputModel inputModel) { if (inputModel == null) { throw new ArgumentNullException("inputModel"); } var arguments = new Dictionary<string, object> { [nameof(inputModel.ExportType)] = inputModel.ExportType }; var wf = new ErrorHandledExportWorkflow(); AsyncManager.OutstandingOperations.Increment(); ThreadPool.QueueUserWorkItem(o => { AsyncManager.Parameters["result"] = WorkflowInvoker.Invoke(wf, arguments)["MvcOutput"] as IDefineMvcOutput; AsyncManager.OutstandingOperations.Decrement(); }); } public ActionResult ConfigureCompleted(IDefineMvcOutput result) => this.ActionResultFactory.Create(this, result); So it doesn't look much different, but you can see that it's just a bit shorter. The expression-bodied method syntax, the dictionary initializer, and the nameof operator. But what really makes a difference is the async/await of the TPL. This starts making our code come alive. With the proper implementation, you can leverage async/await to really bring a good flow to the asynchronous nature of your code. If properly implemented on the WorkflowInvoker.Invoke method, you might get away with something like: [HttpPost] public async Task<ActionResult> RunAsync(ConfigureExportInputModel inputModel) { if (inputModel == null) { throw new ArgumentNullException("inputModel"); } var arguments = new Dictionary<string, object> { [nameof(inputModel.ExportType)] = inputModel.ExportType }; var wf = new ErrorHandledExportWorkflow(); var result = await WorkflowInvoker.InvokeAsync(wf, arguments); return this.ActionResultFactory.Create(this, result["MvcOutput"] as IDefineMvcOutput); } This would allow you to await the whole method, and return the thread to the pool until done. By leveraging what .NET built-in with C#5.0, you can write succinct asynchronous code that doesn't have to worry about the overhead of performing the async operations. The framework and language really take that burden away from you.
Of course, this expects an InvokeAsync method on the WorkflowInvoker, which may or may not exist, and as such this is pseudo-/hypothetical code, but the idea should remain the same. You can read the MSDN article for more explanation of what is possible with this design.
{ "domain": "codereview.stackexchange", "id": 27050, "tags": "c#, asp.net-mvc-3, asynchronous" }
Is this correct factory method pattern?
Question: I know that there are many similar questions, but I don't understand most of those questions because I'm not sure if I know what a factory method pattern is. So, after reading many examples over the web, I came up with the following simple classes. Am I doing it correctly? If so...any improvements I can add? abstract class Driveable { abstract public function start(); abstract public function stop(); } class CoupeDriveable extends Driveable { public function start() { } public function stop() { } } class MotorcycleDriveable extends Driveable { public function start() { } public function stop() { } } class SedanDriveable extends Driveable { public function start() { } public function stop() { } } class DriveableFactory { static public function create($numberOfPeople){ if( $numberOfPeople == 1 ) { return new MotorcycleDriveable; } elseif( $numberOfPeople == 2 ) { return new CoupleDriveable; } elseif( $numberOfPeople >= 3 && $numberOfPeople < 4) { return SedanDriveable; } } } class App { static public function getDriveableMachine($numberOfPeople) { return DriveableFactory::create($numberOfPeople); } } $DriveableMachine = App::getDriveableMachine(2); $DriveableMachine->start(); Update: according to palacsint's and serghei's valuable advice, I've updated my code. abstract class DriveableFactory { abstract static public function create($numberOfPeople); } class CarDriveableFactory extends DriveableFactory { static public function create($numberOfPeople){ $products = array ( 1=>"MotorcycleDriveable", 2=>"CoupeDriveable", 3=>"SedanDriveable", 4=>"SedanDriveable" ); if( isset( $products[$numberOfPeople] ) ) { return new $products[$numberOfPeople]; } else { throw new Exception("unable to find a suitable drivable car"); } } }
If you insist on the above GoF definition there are two issues. First, you should create a DriveableFactory interface and rename your DriveableFactory to (for example) CarDriveableFactory. abstract class DriveableFactory { abstract static public function create($numberOfPeople); } class CarDriveableFactory extends DriveableFactory { static public function create($numberOfPeople) { ... } } But your code is fine, if you don't need (don't have a reason) the abstract DriveableFactory interface do NOT add it to the code. The second issue is that the create method should not be static. If it's static, subclasses cannot override the create method. Finally, the App class looks unnecessary. So, I'd write something like this: $factory = new CarDriveableFactory(); $DriveableMachine = $factory->create(2); $DriveableMachine->start(); Some small improvements: 3.5 is an allowed value? And 3.1415? If not, consider changing else if( $numberOfPeople >= 3 && $numberOfPeople < 4) to else if($numberOfPeople == 3 || $numberOfPeople == 4) In the last line of the create() method I would throw an IllegalArgumentException (or a similar one in PHP) with the message "invalid value: " . $numberOfPeople.
{ "domain": "codereview.stackexchange", "id": 3405, "tags": "php, design-patterns, factory-method" }
Does water have surface tension in a vacuum?
Question: I could be totally wrong here but I was thinking about water surface and what creates that. My thought is it is the thin mixture of water and air separating the two. This mixture creates the boundary between water and air that has the property of surface tension. So does the surface of water in a vacuum act in the same way? Is there a surface with surface tension when there is no air to make the mixture? Answer: Yes, water still has surface tension in a vacuum. Water/vacuum surface tension is 72.8 dyn/cm experimentally according to Zhang et al. J. Chem. Phys. 103, 10252 (1995). Surface tension is caused by the fact that water molecules in the bulk (not at the surface) are surrounded by other water molecules with which they interact through intermolecular forces. At the surface, the molecules cannot be completely surrounded by other water molecules. The surface molecules are in a higher energy state because they are not stabilized by intermolecular interactions. This is why liquids tend to minimize surface area and become spherical droplets absent any other forces. Also, the attractive force from other water molecules on the surface molecules has a net force in the direction toward the interior.
{ "domain": "physics.stackexchange", "id": 13521, "tags": "vacuum, surface-tension" }
Is this proof regarding Bohr's second postulate true or false?
Question: Let us assume a particle oscillating with displacement $x$. Now $x = A\sin(2πft)$ $$\frac{\mathrm d x}{\mathrm dt} = v = 2πfA\cos(2πft)$$ Now $KE_\text{max}$ can be given when cosine value is 1; thus, $$KE_\text{max} = 2(π^2)(f^2)m(A^2) \tag{i}$$ Now let linear momentum be given as $p = 2πmfA\cos(2πft)$ and let $2πmfA = B$. Now $p/B = \cos(2πft)$ and $x/A = \sin(2πft)$ Now $\sin^2(2πft) + \cos^2(2πft) = 1$, thus $$\frac{x^2}{A^2} + \frac{p^2}{B^2} = 1$$ Now the area of this ellipse is given as $πAB$ so, $$∫p\mathop\!\mathrm dx = πAB$$ Now $E = nhf$ (Planck's Postulate), thus from (i) $$2(π^2)(f^2)m(A^2) = nhf \tag{ii}$$ Using (ii), we have $$∫p\mathop\!\mathrm dx = nh$$ But we want to find angular momentum not linear so we put limits of integral from 0 to 2π and assign a new variable for angular momentum as $L$ and $\mathrm dx = \mathrm dθ$ thus we have $L = nh/2π$ (Bohr's Second Postulate) Hence proved. Answer: Your "proof" is not correct because it is not a proof. All you've done is provide a layer of algebraic manipulations that is thick enough for the reader (meaning you yourself) to miss where the postulate was asserted. In your case, you've introduced "Planck's postulate" that $E=nhf$ without any argumentation for its validity in the situation where you're using it. This leaves a gaping logical hole in your proof that renders it useless. (Similarly, your substitutions $p\to L$, $\mathrm dx\to \mathrm d\theta$ are completely unjustified and ultimately incorrect, so they would also be enough to sink your proof. But your proof is already sunk by that point.) Bohr's second postulate is called a postulate for a reason. There are no known ways to derive it from postulates which are more fundamental and where the derivation does not irrevocably change some aspect of the hypotheses in ways that are too transformative for the manipulation to be termed an implication.
{ "domain": "physics.stackexchange", "id": 54710, "tags": "quantum-mechanics, energy, atomic-physics, frequency, atoms" }
Do electric field lines refract just like light?
Question: Do electric field lines approaching a boundary at an angle get refracted and change direction just like light rays do? Because while discussing electric field lines and the flux associated with them we do not consider a change in direction. Answer: When light (or an EM wave) reaches an interface, the direction of propagation changes. This is not the same as the direction of the electric field: the change in propagation direction is not related to, and does not imply, a change in the direction of the electric field. So the logic of the question is flawed (a non sequitur). However, electric field lines can change direction at a dielectric interface, and they are shown to do so in the diagrams treating the electric field in and around dielectrics. So this part of the OP question, "So why don't we consider for their direction change?", is based on a false assumption. We do. (figure: field lines around a dielectric cylinder)
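As background for the bending at the interface: continuity of tangential E and normal D at a charge-free dielectric boundary gives the textbook "refraction law" for field lines, tan(theta1)/tan(theta2) = eps1/eps2 (angles from the surface normal). A quick sketch with illustrative numbers:

```python
import math

# Refraction law for electrostatic field lines at a charge-free
# dielectric interface: tan(theta1)/tan(theta2) = eps1/eps2.
def refracted_angle(theta1_deg, eps1, eps2):
    theta1 = math.radians(theta1_deg)
    return math.degrees(math.atan(math.tan(theta1) * eps2 / eps1))

# A field line entering water (relative permittivity ~80) from vacuum
# at 30 degrees bends almost parallel to the interface:
print(refracted_angle(30.0, 1.0, 80.0))  # close to 89 degrees
```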
{ "domain": "physics.stackexchange", "id": 95059, "tags": "electrostatics, electromagnetic-radiation, electric-fields, refraction, boundary-conditions" }
Finding Eigen Values from Quantum Phase Estimation - Using qiskit
Question: I am trying to use the quantum phase estimation (EigsQPE) of qiskit to find the eigenvalues of a matrix. As I am new to quantum computing, I am confused about what to measure in the circuit to derive the eigenvalues of the input matrix. I know how to identify the phase and derive a single eigenvalue from the phase of the most probable bit string, but deriving multiple eigenvalues from the QPE circuit is confusing. Any help will be much appreciated. Code : https://github.com/Anand-GitH/HLL-QauntumComputing/blob/main/Qiskit-QPEStandalone.ipynb Answer: Two things to note: EigsQPE needs the eigenvalues to be scaled onto the range (0,1]. You can use evo_time to set the scaling. If you don't pass this value, a scaling value will be set automatically; you can get this value using eigs.get_scaling(). If the eigenvalue is $e^{2\pi i\theta}$, then the register contents will be $2^n\theta$. That means that if the register contains the value $x$, and evo_time equals $s$, then your eigenvalue will be $2\pi x/(2^ns)$
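Putting those two facts together, converting a measured register value into an eigenvalue estimate is a one-liner. The numbers below are made up for illustration, not taken from the linked notebook.

```python
import math

# QPE readout -> eigenvalue, per the answer: the register value x encodes
# 2^n * theta, and with evo_time = s the eigenvalue is 2*pi*x / (2^n * s).
def qpe_eigenvalue(x, n_qubits, evo_time):
    return 2 * math.pi * x / (2 ** n_qubits * evo_time)

# A 4-qubit register that read out x = 8, with evo_time = 2*pi:
print(qpe_eigenvalue(8, 4, 2 * math.pi))  # -> 0.5 (the 2*pi factors cancel)
```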
{ "domain": "quantumcomputing.stackexchange", "id": 2573, "tags": "programming, qiskit, quantum-phase-estimation, linear-algebra" }
Why can't a piece of paper (of non-zero thickness) be folded more than $N$ times?
Question: Updated: In order to fold anything in half, it must be $\pi$ times longer than its thickness, and that depending on how something is folded, the amount its length decreases with each fold differs. – Britney Gallivan, the person who determined that the maximum number of times a paper or other finite thickness materials can be folded = 12. Mathematics of paper folding explains the mathematical aspect of this. I would like to know the physical explanation of this. Why is it not possible to fold a paper more than $N$ (=12) times? Answer: I remember that the question in your title was busted in Mythbusters episode 72. A simple google search also gives many other examples. As for single- vs alternate-direction folding, I'm guessing that the latter would allow for more folds. It is the thickness vs length along a fold that basically tells you if a fold is possible, since there is always going to be a curvature to the fold. Alternate-direction folding uses both flat directions of the paper, so you run out of length slightly slower. This would be a small effect since you have the linear decrease in length vs the exponential increase in thickness. Thanks to gerry for the key word (given in a comment above). I can now make my above guess more concrete. The limit on the number of folds (for a given length) does follow from the necessary curvature on the fold. The type of image you see for this makes it clear what's going on For a piece of paper with thickness $t$, the length $L$ needed to make $n$ folds is (OEIS) $$ L/t = \frac{\pi}6 (2^n+4)(2^n-1) \,.$$ This formula was originally derived by (the then Junior high school student) Britney Gallivan in 2001. I find it amazing that it was not known before that time... (and full credit to Britney). For alternate folding of a square piece of paper, the corresponding formula is $$ L/t = \pi 2^{3(n-1)/2} \,.$$ Both formulae give $L=t\,\pi$ as the minimum length required for a single fold. 
This is because, assuming the paper does not stretch and the inside of the fold is perfectly flat, a single fold uses up the length of a semicircle with outside diameter equal to the thickness of the paper. So if $L < t\,\pi$ then you don't have enough paper to go around the fold. Let's ignore a lot of the subtleties of the linear folding problem and say that each time you fold the paper you halve its length and double its thickness: $ L_i = \tfrac12 L_{i-1} = 2^{-i}L_0 $ and $ t_i = 2 t_{i-1} = 2^{i} t_0 $, where $L=L_0$ and $t=t_0$ are the original length and thickness respectively. On the final fold (to make it $n$ folds) you need $L_{n-1} \geq \pi t_{n-1}$, which implies $L \geq \frac14\pi\,2^{2n} t$. Qualitatively this reproduces the linear folding result given above; the difference comes from the fact that you lose slightly over half of the length on each fold. These formulae can be inverted and plotted to give the logarithmic graphs where $L$ is measured in units of $t$. The linear folding is shown in red and the alternate-direction folding is given in blue. The boxed area is shown in the inset graphic and details the point where alternate folding permanently gains an extra fold over linear folding. You can see that there exist certain length ranges where you get more folds with alternate than linear folding. After $L/t = 64\pi \approx 201$ you always get one or more extra folds with alternate compared to linear. You can find similar numbers for two or more extra folds, etc... Looking back on this answer, I really think that I should ditch my theorist tendencies and put some approximate numbers in here. Let's assume that the 8 alternating fold limit for a "normal" piece of paper is correct. Normal office paper is approximately 0.1mm thick. This means that a normal piece of paper must be $$ L \approx \pi\,(0.1\text{mm})\, 2^{3\times 7/2} \approx 0.3 \times 2^{10.5}\,\text{mm} \approx 0.3 \times 1450 \, \text{mm} \approx 430 \text{mm} \,. $$ This is the same order as the normal size of office paper, e.g. A4 is 210mm × 297mm. The last range where you get the same number of folds for linear and alternate folding is $L/t \in (50\pi,64\pi) \approx (157,201)$, where both methods yield 4 folds. For a square piece of paper 0.1mm thick, this corresponds to 15cm and 20cm squares respectively, with less giving only three folds for linear and more giving five folds for alternating. Some simple experiments show that this is approximately correct.
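Both of Gallivan's loss functions quoted above are easy to evaluate numerically; a quick sketch, assuming ordinary 0.1 mm office paper:

```python
import math

# Gallivan's loss functions: min_length_* return the minimum paper length
# (same units as thickness t) needed to make n folds.
def min_length_linear(n, t):
    return (math.pi / 6) * (2**n + 4) * (2**n - 1) * t

def min_length_alternate(n, t):
    return math.pi * 2 ** (3 * (n - 1) / 2) * t

t = 1e-4  # 0.1 mm office paper, in metres
print(f"12 linear folds need about {min_length_linear(12, t):.0f} m of paper")
# Sanity check: both formulae reduce to L = pi * t for a single fold.
print(min_length_linear(1, t), min_length_alternate(1, t))
```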
{ "domain": "physics.stackexchange", "id": 100577, "tags": "material-science, geometry, continuum-mechanics" }
Where does the RT term come from in the derivation for the activation enthalpy from the Eyring equation?
Question: It is easy to show an Arrhenius-like equation from the Eyring equation, but if you do this, you get that the activation energy is about equal to the activation enthalpy. However, the real approximation is that $E_\mathrm{a} = \Delta H^\ddagger + RT$. Why is this? Answer: Using vibrational partition functions to define the reaction rate constant produces an equation of the form $\displaystyle k=aT^be^{-\Delta U_0^\mathrm{O}/(RT)}$ where $a,b$ are constants independent of $T$ and $\Delta U_0^\mathrm{O}$ is the difference in zero-point energies at the transition state compared to the reactant; this is related to what is commonly called the activation energy. This approach allows molecules to have discrete vibrational levels, and so a thermal population of these. The constants $a,b$ are found from the partition functions. The (classical) Arrhenius equation is often written as $k_A=Ae^{-E_A/(RT)}$. Taking the log of each equation and differentiating with respect to $T$ gives $$\frac{d\ln(k)}{dT} =\frac{bRT+\Delta U_0^\mathrm{O}}{RT^2}\equiv \frac{E_A}{RT^2}$$ thus the Arrhenius activation energy is $E_A\equiv bRT+\Delta U_0^\mathrm{O}$. (Texts on statistical thermodynamics give details of partition functions; some books on kinetics also give this detail.)
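The relation $E_A = bRT + \Delta U_0^\mathrm{O}$ can be verified numerically by differentiating $\ln k$ for a rate constant of the stated form. All constants below are arbitrary illustrative values.

```python
import math

# Check E_A = b*R*T + dU0 for k = a * T^b * exp(-dU0/(R*T)): compare
# R*T^2 * d(ln k)/dT (central difference) with the analytic result.
R = 8.314            # J/(mol K)
a, b = 1.0e10, 1.0   # prefactor and temperature exponent (illustrative)
dU0 = 50_000.0       # J/mol, zero-point energy difference (illustrative)

def ln_k(T):
    return math.log(a) + b * math.log(T) - dU0 / (R * T)

T, h = 300.0, 1e-3
E_A_numeric = R * T**2 * (ln_k(T + h) - ln_k(T - h)) / (2 * h)
E_A_analytic = b * R * T + dU0
print(E_A_numeric, E_A_analytic)  # both about 52494 J/mol
```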
{ "domain": "chemistry.stackexchange", "id": 13857, "tags": "kinetics, enthalpy, transition-state-theory" }
Reasoning behind taking the Fourier transform of the fermionic operators for a circular $1$D spin chain
Question: In section 4.1 of Quantum Computation by Adiabatic Evolution, Farhi et al. propose a quantum adiabatic algorithm to solve the $2$-SAT problem on a ring. To compute the complexity of the algorithm the authors computed the energy gap between the ground and first excited states of the adiabatic Hamiltonian. The adiabatic Hamiltonian is defined as $$ \tilde{H} (s) = (1-s) \sum^n_{j=1}(1-\sigma^{(j)}_x) + s \sum^n_{j=1}\frac{1}{2} (1-\sigma^{(j)}_z \sigma^{(j+1)}_z ) $$ To prove the correctness of the algorithm, the authors consider an operator which negates the value of the bits along the $z$ axis. $$ G = \prod^n_{j=1}\sigma^{(j)}_x $$ Then the authors start the steps of the Jordan-Wigner transformation. The fermionic operators are defined as follows. $$ b_j = \sigma_x^1 \sigma_x^2 \ldots \sigma_x^{j-1} \sigma_-^{j} \mathbf{ 1}^{j+1} \ldots \mathbf{ 1}^n \\ b^\dagger_j = \sigma_x^1 \sigma_x^2 \ldots \sigma_x^{j-1} \sigma_+^{j} \mathbf{ 1}^{j+1} \ldots \mathbf{ 1}^n $$ where $$ \sigma_{\pm} = \sigma_x \pm i \sigma_y $$. After re-expressing the adiabatic Hamiltonian using the fermionic operators, the authors mention the following fact before taking the Fourier transform of the fermionic operators. Because this is invariant under the translation, $b_j \to b_{j+1}$, and is quadratic in the $b_j$ and $b^\dagger_j$, a transformation to fermion operators associated with waves running round the ring will achieve the desired reduction of $H(s)$. My questions: What reduction are the authors talking about? Why do we need that reduction? In the Fourier transform, $\beta_p = \frac{1}{\sqrt{n}} \sum^n_{j=1} e^{i\pi p j/n} b_j$, why $p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)$? My attempt: Answer: 1) The reduction they are referring to is explained in the middle of page 13: "We now write (4.5) in the invariant sector as a sum of $n/2$ commuting $2×2$ Hamiltonians that we can diagonalize."
2) They are "reducing" the difficult problem of diagonalizing a very large matrix to the much easier problem of separately diagonalizing $n/2$ different $2 \times 2$ matrices. 3) At the very top of page 14, it says "Since we will restrict ourselves to the $G = 1$ sector, (4.10) and (4.11) are only consistent if $b_{n+1} = −b_1$, so we take this as the definition of $b_{n+1}$." These are called antiperiodic boundary conditions and often occur when fermions are put on a periodic ring (either in real space, as in this case, or in imaginary time, as in the Matsubara formalism in statistical quantum field theory). Only the odd values of $p$ are compatible with this antiperiodic boundary condition - even values of $p$ would result in a function with periodic boundary conditions.
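The antiperiodic condition in point 3 can be checked directly: the plane-wave factor $e^{i\pi p j/n}$ must change sign under $j \to j + n$, i.e. $e^{i\pi p} = -1$, which holds only for odd $p$. A minimal numerical sketch:

```python
import cmath
import math

# Which momenta p are compatible with b_{n+1} = -b_1? The plane wave
# e^{i*pi*p*j/n} picks up a factor e^{i*pi*p} under j -> j + n; the
# boundary condition requires that factor to be -1, i.e. odd p only.
n = 8
for p in range(-n + 1, n):  # p = -(n-1), ..., n-1
    phase = cmath.exp(1j * math.pi * p)
    antiperiodic = abs(phase + 1) < 1e-9
    assert antiperiodic == (p % 2 == 1), p
print("only odd p satisfy the antiperiodic boundary condition")
```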
{ "domain": "physics.stackexchange", "id": 31489, "tags": "quantum-mechanics, condensed-matter, quantum-information, fermions, eigenvalue" }
What dictates the lifetime of a solvated electron in a given solvent?
Question: Solvated electrons have a long lifetime in ammonia solutions, but their counterparts in water (called hydrated electrons) have a much shorter lifetime, of the order of microseconds in very pure water. (image from: Boero et al., Phys. Rev. Lett. 2003, 90, 226403) What properties of the solvent account for these very different lifetimes? Both are polar protic solvents, so what other factor(s) could be involved here? Answer: For those who have never had the pleasure of personally doing this, see this video. It has been known since 1807 that dissolving sodium in liquid ammonia results in a beautiful color. It was originally thought the color was due to some familiar complex rather than a solvated electron. A similar phenomenon happens with other alkali metals in ammonia. Research in the field says that it takes at least forty-some ammonia molecules to solvate a given electron. Metastable cavities form, and their stability could depend highly on the electrostatic interactions that solvate our electron. Ammonia will slowly react by evolving hydrogen gas, $$\ce{2 NH3 + 2 e- -> H2 + 2 NH2-}$$ Differences in solvent can play a huge role in the stability of our solvated electron. An analogous decomposition occurs in water, $$\ce{2 H2O + 2 e- -> H2 + 2 HO-}$$ Famously, the latter occurs faster than the former; see here in comparison to my earlier link. We could say that these differences in rate reflect different stabilities of our solvated electron. Although, the addition of an appropriate catalyst to our ammonia will result in rapid evolution of hydrogen. A more quantitative description of the difference in energies is obtained by measuring the UV-vis spectrum of a solvated electron in both water and ammonia. One will quickly notice that the band appears at higher energies in water than it does in ammonia, and there are considerable differences in the band shapes (the band is wider in ammonia).
So Why Do They Differ? Here I have to make the disclaimer that no current theory quantitatively reproduces the observed phenomena, such as the absorption spectrum of a solvated electron. This question has no established/accepted answer, as it is an ongoing area of research. The self-ionization of water has an equilibrium constant on the order of $K = 10^{-14}$ and that of ammonia is on the order of $10^{-30}$. Hydronium formation occurs in the case of water and ammonium in the case of ammonia. Hydronium has a considerably lower $\mathrm{p}K_\text{a}$ than ammonium, so it is reasonable to see why a reaction with an electron would be more likely in water. (Ammonia rarely produces a weakly acidic species, but water often produces a strongly acidic complex.) A slightly deeper reason may be that water forms more ordered local domains/structures in solution than ammonia, and this influences the rate by resulting in a larger entropy of activation when the hydrated electron breaks up these larger structures; recall that $k \propto \exp (\Delta S ^\ddagger /R) $. Just speculation though.
{ "domain": "chemistry.stackexchange", "id": 29, "tags": "solvents, stability" }
What's the right way to setup an image classifier by multiple params?
Question: I'm very new to data science and machine learning, so apologies for my ignorance. What I'm trying to understand is how to set up an image classifier system (maybe based on a CNN) which will classify my image by multiple params. Most of the examples I found are about classifying by a single class, i.e. "cat", "dog", "horse", etc, but what I'd like to have is, for example, {"red", "dog", "tongue"}. Is there a simple way to do it? The best option would be to have a ready setup, so I can just replace their test dataset with mine and see the right formatting. Thanks! UPDATE: Also please help me understand whether it's a complicated task for an experienced machine learning engineer. What would be the timing and cost, given a dataset? Answer: You normally one-hot encode your labels so that every possible attribute gets its own binary representation. So if you have 10 attributes they would be represented as [attr1, attr2, attr3, ..., attr10], with values either 1 if the attribute is present, or 0 if it is not. Then you train a network with the same number of output neurons as possible attributes and use a sigmoid activation function.
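A minimal sketch of that target encoding (often called "multi-hot"). The vocabulary and helper names below are illustrative, not from any particular framework.

```python
import numpy as np

# Multi-hot target encoding for labels like {"red", "dog", "tongue"}.
VOCAB = ["red", "green", "cat", "dog", "horse", "tongue"]

def multi_hot(labels, vocab=VOCAB):
    vec = np.zeros(len(vocab), dtype=np.float32)
    for label in labels:
        vec[vocab.index(label)] = 1.0
    return vec

def sigmoid(z):
    # One sigmoid per attribute (instead of one softmax over all classes)
    # lets several attributes be "on" at the same time.
    return 1.0 / (1.0 + np.exp(-z))

y = multi_hot({"red", "dog", "tongue"})
print(y)  # 1s at the "red", "dog" and "tongue" positions, 0s elsewhere
```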
{ "domain": "datascience.stackexchange", "id": 4768, "tags": "classification, cnn, image-classification" }
Layman's explanation of the mysterious occurrence of quantum tunneling?
Question: I see much talk about probability and functions, quantum states and ball-in-a-pit analogies. Bah. I would like an easy summary about the principles behind, demonstrations of, and application from the effect of quantum tunneling suitable for a high-school freshkid. Being a curious person, this seems very interesting. I appear to be more used to the classical approach, where a ball either goes in or goes out, and doesn't shove a tiny shaving of itself out of the pit like in quantum tunneling. That is why I need an explanation. According to this interesting Wikipedia article, 1 attoKelvin is the temperature required for 'macroscopic teleportation of objects.' Intuition in my mind tells me it is because the wavelength of an object increases as it gets colder, but I can't connect the dots smoothly. Help please? Answer: One way to look at quantum mechanics is through the dynamics. One way to look at the dynamics is through the Schrödinger equation and there is nothing wrong with that. Since it is a linear equation it does encourage one to look at particular solutions and then make a general solution as a simple sum of those solutions, which can sometimes delay understanding how a particular system works. Another approach is to follow the probability current and then to see what makes that change. When you look at the probability current, it is not linear (for experts it is more bilinear) so is more directly posed in terms of the actual wave function of the actual system. So let's look at tunneling from the point of view of a younger high schooler. The current is proportional to the wave and how much it is changing (its gradient). The wave is like how much is there (the magnitude) and if it was real the gradient would measure how much stronger it is in one place than the other. However, it is complex, so the gradient can also measure how the phase changes. 
The change in phase from here to there is actually a stand-in for how the density of the wave flows from one place to another. For experts, when $\Psi(x,y,z,t)$=$R(x,y,z,t)e^{iS(x,y,z,t)/\hbar}$ then the probability current is $\vec J$=$\frac{R^2}{m}\vec\nabla S.$ So the change in phase from here to there is a direct analog of how the probability current is flowing. You can imagine that the wavefunction tells you all these different possibilities. And the fact that it is a complex field means it tells you both where these possibilities are and how they flow (it is common in physics to use a complex field to represent both what something is right now and how it is changing). Now how does that current change? We know how it flows from here to there, like knowing the flow of water everywhere in a river. Just like with water we need to know how the phase (the velocity) changes in time. Based on where you are relative to everything else there is a classical force that naturally makes velocities change in exactly the way you expect, but in quantum mechanics there is a completely new force, one based on whether there is a bigger density of possibilities on one side than the other. It is like a particle has to go up a hill where the hill has a height of $-\frac{\hbar^2}{2m}\frac{\vec\nabla^2R}{R}$ i.e. the term $\vec\nabla^2R$ measures how the density of possibilities $R$ deviates from the average values around it. Thus where there are fewer possibilities than around it, there is an extra push into that region. This is similar to water rushing into a region of lower density, except there is a very strange property: since you divide by $R$ as well, you can get large pushes even when there is little density in the whole region. And this acts just like a potential, so it is like storing energy in the shape of the possibilities.
For instance if all the possibilities have the right density profile you can get a force that cancels the classical forces everywhere, no velocities change, and it is even possible to have zero velocity, so the analog of energy is stored entirely in the shape profile of the wave of possibilities and in the classical potential. So now let's get to tunneling properly. Tunneling happens when the classical forces by themselves are not strong enough to push you through a barrier, but if there is a higher density of possibilities on one side of the barrier this can provide an entirely new and quantum force that can push the wave into the classically forbidden region. You can see this in statics if you look at one of those special profiles where nothing is moving: the shape everywhere is exactly enough to produce a force that counters the classical forces. Or you can see it with dynamics. In dynamics you can watch the leading edge of a packet slow down as it approaches a classical hill; it slows down relative to the wave behind it, but that leads to a higher density behind that leading edge, which means the quantum force pushes it up the hill harder than it otherwise would go. If the hill is larger this repeats over and over again until there is such a strong build-up behind that leading edge that the leading edge is finally pushed through the barrier. That's tunneling. And since a lot of possibilities have to pile up behind that leading edge for a high and wide barrier, only the tip of the leading edge makes it through. This is why tunneling is rare. Now, to be fair, because of that $1/R$ term it is a relative lot that piles up behind it, not an absolute lot: you want a mismatch from the average that is proportionate to how much is there. So the effect is still that only the tip of the leading edge makes it through, but not much makes it through. That build-up pushed both ways, so as the leading edge got pushed through, the whole rest of the wave got pushed back.
And normally a potential pushes back; they just get pushed back stronger and earlier than they do classically. And the whole process depends on the original shape profile of the wave. Even with no classical potential the wave can push itself apart, because any region of higher than (local) average density (relative to how much is there) exerts a force away from itself. This is entirely quantum. When I say that, it is because classically, if there was a possibility for a particle to be here or there, then each possibility would dynamically evolve as if it were the only thing. In quantum mechanics these possibilities are real in the sense that they actually interfere with each other and force a flow of possibilities from the current, and the current (velocity) changes not just from the classical potential but also from the quantum potential of $-\frac{\hbar^2}{2m}\frac{\vec\nabla^2R}{R}$, so that regions where the possibilities are denser than the regions around them exert forces to even them out, and regions where the possibilities are less dense than the regions around them also exert forces to even them out. These forces can be overcome by the classical forces; thus regions can get more dense with possibilities until the quantum force is strong enough to balance the classical force pushing them there. So there is an expectation for possibilities to roll down a classical hill. It's just that they can also go up a classical hill if they were flowing towards the hill and have a bigger density of possibilities driving them through it. This whole story can be easily misunderstood. In particular you don't need to track how each bit of probability flows from here to there to get your final answer. So all of this intuition is not required to get the correct answer. And so (well) over 99% of physicists do not learn this approach. And I don't want you to think it is required.
The alternative is to just not pay attention to where the probability is when you aren't looking at it, and to not bother even asking how it got from one place to another; just do some math that correctly tells you how many possibilities (the probability) are on the other side of the barrier. There is nothing wrong with that. Two ways of getting the same answers are equally valid. But if you don't ask what goes on when you don't look then you can't answer it. But it is OK to not answer it, because stories about what happens when you don't look (in between looking) are just that, stories. Not science. And tracking the flow of probability and tracking what makes it change can be overkill if you are also tracking the whole wave over time. But if they produce the same answers and you can learn how to use one to gain a better understanding, that can be useful. If one of them would be a bunch of symbols that don't mean anything to you then it might be less useful even if you get all the right answers. That said, neither is simple.
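For a rough quantitative feel for the "only the tip makes it through" point, the textbook estimate for tunneling through a rectangular barrier of height V0 and width L (for E < V0) is T ~ exp(-2*kappa*L). This is the standard WKB-style result, not the probability-current picture described above, and the numbers are illustrative (an electron and a 2 eV barrier).

```python
import math

# Rectangular-barrier tunneling estimate: T ~ exp(-2*kappa*L), with
# kappa = sqrt(2*m*(V0 - E)) / hbar for E < V0.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg, electron mass
eV = 1.602176634e-19     # J

def transmission(E_eV, V0_eV, L_m):
    kappa = math.sqrt(2 * m_e * (V0_eV - E_eV) * eV) / hbar
    return math.exp(-2 * kappa * L_m)

# Doubling the barrier width from 0.5 nm to 1 nm suppresses tunneling
# by orders of magnitude:
print(transmission(1.0, 2.0, 0.5e-9))
print(transmission(1.0, 2.0, 1.0e-9))
```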
{ "domain": "physics.stackexchange", "id": 23015, "tags": "quantum-mechanics, quantum-tunneling" }
Multiple subscribers to service
Question: Good evening, I may have misunderstood the concept of ROS services, but is it possible to have multiple client calling for the same service? For instance, I have a service running on a server that processes raw images and returns informations about them. Can I have a service that use those informations and another that register the output of server? If yes, should I create a new node that publish the data, or is it possible to make the service directly speak with the database? Thank you in advance, Julien Girard Originally posted by Rhapsodos on ROS Answers with karma: 3 on 2016-05-31 Post score: 0 Answer: I may have misunderstood the concept of ROS services, but is it possible to have multiple client calling for the same service? Yes, this is possible. A service-server offers a service, that any node can use. Each client sends some information to the service and gets a corresponding response. See also here for a simple example. For instance, I have a service running on a server that processes raw images and returns informations about them. Can I have a service that use those informations and another that register the output of server? So, if I understand correctly, you want to send some raw images to the service server and get information about them as a response? Yes, this is possible with a service server. If your server just processes the images it gets from an external non-ROS source and you want to use them in another ROS node, then it would be better to publish them as a topic. Originally posted by JohnDoe2991 with karma: 305 on 2016-06-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Rhapsodos on 2016-06-02: Thank you very much for your explanation. Considering that I have nodes that uses differently inputs and outputs of this server, I will publish them on a topic.
{ "domain": "robotics.stackexchange", "id": 24775, "tags": "ros, service" }
Equipartition of energy and degrees of freedom in Diatomic gas
Question: Suppose we have a gas of $N$ diatomic molecules (ex $O_2$) with one-molecule hamiltonian being: $$\mathcal{H} = \frac{\vec{p}_1^2}{2m}+\frac{\vec{p}_2^2}{2m} + V(r_{rel}) $$ Where $r_{rel}$ is the relative distance between the two atoms. Say the potential is something like $V= \frac12 m ω^2 r_{rel}^2 = \frac12 k r_{rel}^2$ Thus the one body partition function is going to be: $$ Z_1 = \frac1{h^6} \int d^3r_1 \int d^3r_2 \int d^3p_1 \int d^3p_2 e^{-β\mathcal{H}}$$ or more conveniently-using center of mass coordinates : $$ Z_1 = \frac1{h^6} \int d^3r_{cm} \int d^3r_{rel} \int d^3p_1 \int d^3p_2 e^{-β\mathcal{H}}$$ (something tells me I got something wrong here?) It's easy to see the first integral gives us the volume $V$, while the last two are the same so we'll get a term $ (\frac{2mπ}{β})^{3}$. Now the point were I'm getting confused is the integral of the relative distance: I'm reading in notes/books that the relative distance is one extra quadratic term in the hamiltonian and will thus give us an extra term $\sim (\frac{1}{β})^{1/2}$. However I'm thinking of it as three extra terms, since $\vec{r}_{rel}= \vec{r}_1 - \vec{r}_2$ is a vector, same as $\vec{p}_{1,2}$, right? Indeed doing the integral I get : $$ \int d^3r_{rel} exp(-\frac{βk}{2} r_{rel}^2) =4π \int dr r^2 exp(-\frac{βk}{2} r^2) \sim (\frac{1}{β})^{3/2}$$ So I'd get a heat capacity $9/2 k N$ instead of $7/2 k N$. What am I getting wrong? Why are the momenta considered as 3 quadratic terms each, while the relative distance in the potential as 1 and how does this become apparent when integrating with respect to center of mass coordinates? Answer: I think the problem is that your spring potential has its minimum at the origin, whereas it should be at the equilibrium bond length $r_0$. Apart from this, I think that your derivation is OK. With $V=\frac{1}{2}k(r_\text{rel}-r_0)^2$, the integral becomes more complicated. One can shift the origin of coordinates out to $r_0$: set $x=r_\text{rel}-r_0$. 
If we make the approximation that the RMS amplitude is much less than $r_0$, the integration limits may be extended to $\pm\infty$ without changing the result significantly. Also, the $r^2$ prefactor becomes, to a good approximation, $r_0^2$, and your integral becomes proportional to $\beta^{-1/2}$ as expected.
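The two scalings discussed above can be checked numerically: with the potential minimum at the origin, $\int r^2 e^{-\beta k r^2/2}\,dr$ scales like $\beta^{-3/2}$, while with the minimum shifted to $r_0 \gg$ the RMS amplitude it scales like $\beta^{-1/2}$. A quick sketch with arbitrary constants:

```python
import numpy as np

# Riemann-sum estimate of int r^2 exp(-b*k*(r-shift)^2/2) dr on [0, 200],
# then the effective exponent alpha in I ~ b^alpha from two values of b.
k, r0 = 1.0, 50.0
r = np.linspace(0.0, 200.0, 400_001)
dr = r[1] - r[0]

def integral(b, shift):
    return float(np.sum(r**2 * np.exp(-b * k * (r - shift) ** 2 / 2)) * dr)

for shift in (0.0, r0):
    alpha = np.log(integral(2.0, shift) / integral(1.0, shift)) / np.log(2.0)
    print(round(float(alpha), 2))  # -> -1.5 for shift=0, -0.5 for shift=r0
```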
{ "domain": "physics.stackexchange", "id": 50672, "tags": "statistical-mechanics, molecules, degrees-of-freedom, gas" }
"frame_id" assignment
Question: I would like to know the semantic behind the following frame_id assignment: cloud.header.frame_id = "/map"; Which of the following (or both are) is correct: We want to express our cloud values in the world(/map in this case) frame. We want to express our cloud values with respect to the origin of the world(/map in this case) frame. I just couldn't see the difference between those two, and at the moment, I presumed that they both carry the same semantic and effect. Thanks in advance. Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2012-02-21 Post score: 0 Answer: I think 1. and 2. are exactly the same. Originally posted by dornhege with karma: 31395 on 2012-02-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by alfa_80 on 2012-02-22: Thanks a lot for the confirmation.
{ "domain": "robotics.stackexchange", "id": 8334, "tags": "ros, beginner" }
Are gauge theories always renormalizable?
Question: Speaking of quantum field theories. Is one of the following implications correct? gauge theory (gauge invariant) => renormalizable renormalizable => gauge theory (gauge invariant) If yes, do you have any reference? If not, does "gauge invariance / gauge theories" have something to do with renormalizability? Answer: The first statement is correct to some extent, the second isn't. Take the case of vector gauge theories, like the ones in the Standard Model. These theories have a massless vector field, which can be described by two degrees of freedom (2 polarisations), while the classical field itself, $A_\mu$, is described by 4 components. Gauge invariance is related to this mismatch. At the quantum level, the interacting theory has processes which, diagrammatically, involve the so-called loops. These loops are often divergent, and the theory only makes sense if there is a systematic way of subtracting these infinities at each order in perturbation theory; this is what we mean by a renormalisable theory. Now, this in turn means that some different diagrams will need to be related in order for these infinities to cancel at each order in perturbation theory (or, more precisely, for the Lagrangian bare quantities to be related at each order in perturbation theory through the coefficients $Z$). It so happens that if one demands gauge invariance to be a symmetry of the quantum theory (and not only of the classical one), one finds that the relevant relations are the so-called Ward-Takahashi identities (and not Noether's theorem). They state that a gauge-invariant theory must have gauge-invariant observables, so under a gauge transformation we should have $$\delta_\epsilon \langle O(x) \rangle = \delta_\epsilon \int d\phi O(x) e^{i S(\phi,\partial \phi)} = 0 \Rightarrow \langle \delta O(x) \rangle - i \int d^4x \epsilon(x) \langle O \partial_\mu J^\mu \rangle = 0$$ where $O(x)$ is some local operator, say for example an n-point function.
What this means is that for some n-point functions you will be able to relate some diagrams, and ultimately you will find equivalences between the renormalisation of different parameters of the theory, which are exactly the correspondences you would need in order to guarantee renormalisability. As the theory is renormalisable, one can show that in the end we need only one counter-term for each bare quantity, such that we can just write down a counter-term Lagrangian which has the same form as the bare one. When does this fail? Gravity. Gravity is a gauge theory, but the gauge group is not a (semi)simple Lie group as in the Standard Model. Gravity's gauge group is the group of diffeomorphisms of space-time, and this is not a Lie group (in the sense that it does not have a finite-dimensional Lie algebra). Furthermore, gravity's Lagrangian has negative-mass-dimension couplings $(G_N \propto 1/M^2_{Pl})$, and by inspecting the superficial degree of divergence (or dimensional analysis, following some texts) one can show that the number of required counter-terms increases with the order of perturbation theory. This last point is a general one: negative-mass-dimension couplings always lead to non-renormalisable theories in 4 dimensions! A theory can be renormalisable without being a gauge theory. The usual $\lambda \phi^4$ scalar theory in 4 dimensions is renormalisable. This means that the number of required counter-terms does not increase with the order of perturbation theory, and so the divergences are kept under control at each order of perturbation theory, in the sense that they can be systematically subtracted from the theory. References: http://en.wikipedia.org/wiki/Renormalization http://en.wikipedia.org/wiki/Ward%E2%80%93Takahashi_identity Literally any QFT book with a renormalisation chapter.
Namely, to understand the counter-term and bare quantities discussion follow a book with diagrammatic description of QFT (Peskin, etc), while for just a functional methods discussion follow something like Zee (conceptual, but good introduction).
{ "domain": "physics.stackexchange", "id": 22371, "tags": "quantum-field-theory, standard-model, gauge-theory, renormalization, gauge-symmetry" }
difference between Time.now() and /clock
Question: Hello! I'm new to ROS and I have checked that to print the time on screen (for example) I can use ros::Time::now() or subscribe to /clock and get the time from the message. Can anybody tell me if there are differences between these two ways? thanks, max Originally posted by max on ROS Answers with karma: 1 on 2012-05-15 Post score: 0 Answer: These two options represent the same time. In code, you should definitely use ros::Time::now(). It will get you the current ROS time that is being published to /clock without most of the overhead involved with subscribing to the topic yourself. Plus, it looks a lot better in your code. Originally posted by DimitriProsser with karma: 11163 on 2012-05-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Lorenz on 2012-05-15: Isn't /clock only published by simulation and only used by ros::Time when use_sim_time is true? I think on a real system, subscribing to it would just not work. Comment by DimitriProsser on 2012-05-15: I believe you might be correct
{ "domain": "robotics.stackexchange", "id": 9406, "tags": "ros, time" }
Monitoring progress in Parallel.ForEach every minute
Question: I'm using Parallel.ForEach to download 500K URLs and I want to monitor the number of URLs that have been successfully downloaded each minute. int elapsedMinutes = 0, cnt = 0; Parallel.ForEach(list, tuple => { var currMinutes = (int) ((DateTime.Now - startTime).TotalMinutes); lock (Object) { if (currMinutes > elapsedMinutes) { elapsedMinutes = currMinutes; Console.WriteLine("{0}: Extracted {1} urls", DateTime.Now, cnt); } } //download urls ... Interlocked.Increment(ref cnt); }); It looks fine now but the lock statement part seems a bit ugly to me. Is there a better way to achieve this? Answer: Instead of checking the time from each thread on each iteration, I would use a Timer: int count = 0; TimeSpan reportPeriod = TimeSpan.FromMinutes(0.1); using (new Timer( _ => Console.WriteLine("{0}: Extracted {1} urls", DateTime.Now, count), null, reportPeriod, reportPeriod)) { Parallel.ForEach( list, tuple => { //download urls ... Interlocked.Increment(ref count); }); } Console.WriteLine("{0}: Done", DateTime.Now); I have also renamed cnt to count, those two characters are not worth it to make your code less readable.
{ "domain": "codereview.stackexchange", "id": 10002, "tags": "c#, multithreading, locking, task-parallel-library" }
LCCDE in simple words?
Question: What is an LCCDE? I only know its abbreviation/full form: linear constant-coefficient difference equation. I know that in the s domain we have differential equations and in the z domain we have difference equations, but what is an LCCDE? What is its need? What is the difference between a normal difference equation and an LCCDE? Answer: An $N$th-order linear constant-coefficient difference equation (LCCDE) is of the form $$y[n]=\sum_{k=0}^{M}b_kx[n-k]-\sum_{k=1}^{N}a_ky[n-k]\tag{1}$$ It is linear because the sequences $x[n]$ and $y[n]$ appear linearly in $(1)$, and it has constant coefficients because the coefficients $a_k$ and $b_k$ do not depend on the index $n$. LCCDEs are important because they can be used to describe many practically useful discrete-time (or discrete-space) systems, such as linear time-invariant (LTI) filters (which are also called linear shift-invariant (LSI) filters, if $n$ does not represent a time index). Given appropriate initial conditions, Eq. $(1)$ can be used to recursively compute the sequence $y[n]$ given its past values and the current and past values of the sequence $x[n]$. For causal LTI filters, $y[n]$ is interpreted as the filtered output, and $x[n]$ is the input. In that case, Eq. $(1)$ describes an infinite impulse response (IIR) filter. If $a_k=0$, $k=1,\ldots,N$, Eq. $(1)$ reduces to $$y[n]=\sum_{k=0}^{M}b_kx[n-k]\tag{2}$$ which is a convolution sum, and the coefficients $b_k$ represent the system's impulse response. Note that the impulse response has finite length $M+1$, i.e., the system described by $(2)$ is a finite impulse response (FIR) filter.
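As a concrete illustration, Eq. $(1)$ can be evaluated recursively, and setting all the $a_k$ to zero recovers the FIR convolution of Eq. $(2)$. This is a minimal sketch of my own (not part of the original answer), following the common convention of a leading output coefficient $a_0=1$:

```python
import numpy as np

def lccde(b, a, x):
    """Evaluate y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]  (Eq. (1)),
    with zero initial conditions (x[n] = y[n] = 0 for n < 0) and a[0] = 1."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(b[k] * x[n - k] for k in range(len(b)) if n >= k)
        y[n] -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n >= k)
    return y

impulse = np.zeros(8)
impulse[0] = 1.0
# First-order IIR filter y[n] = x[n] + 0.5 y[n-1]: impulse response 0.5**n, never exactly zero
h_iir = lccde(b=[1.0], a=[1.0, -0.5], x=impulse)
# With all a[k] = 0 for k >= 1, Eq. (1) reduces to the FIR convolution of Eq. (2)
h_fir = lccde(b=[1.0, 2.0, 3.0], a=[1.0], x=impulse)
```

Feeding an impulse through the recursion directly exhibits the IIR/FIR distinction the answer describes: the recursive term keeps the response alive indefinitely, while the purely feed-forward case dies out after $M+1$ samples.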
{ "domain": "dsp.stackexchange", "id": 8196, "tags": "z-transform, differential-equation" }
What is a "batch" in batch normalization?
Question: I'm working on an example of CNN with the MNIST hand-written numbers dataset. Currently I've got convolution -> pool -> dense -> dense, and for the optimiser I'm using Mini-Batch Gradient Descent with a batch size of 32. Now this concept of batch normalization is being introduced. We are supposed to take a "batch" after or before a layer, and normalize it by subtracting its mean and dividing by its standard deviation. So what is a "batch"? If I feed a sample into a 32 kernel conv layer, I get 32 feature maps. Is each feature map a "batch"? Are the 32 feature maps the "batch"? Or, if I'm doing Mini-Batch Gradient Descent with a batch size of 64, are 64 sets of 32 feature maps the "batch"? So, in other words, is the batch from Mini-Batch Gradient Descent the same as the "batch" from batch normalization? Or is a "batch" something else that I've missed? Answer: The "batch" is the same as in mini-batch gradient descent. The mean in batch norm here would be the average of each feature map over your batch (in your case either 32 or 64, depending on which you use). Generally, "batch" is used quite consistently in ML right now, referring to the inputs you send in together for a forward/backward pass.
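To make the bookkeeping concrete, here is a minimal NumPy sketch (my own, not the answerer's code; it assumes NCHW-shaped arrays): for conv feature maps, batch norm computes one mean and variance per channel, averaging over the mini-batch samples and the spatial positions together.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # x has shape (batch, channels, height, width); one statistic per channel,
    # averaged over the mini-batch AND the spatial dimensions
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # shape (1, C, 1, 1)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=(64, 32, 28, 28))  # mini-batch of 64, 32 feature maps each
y = batch_norm(x)
# each channel of y now has (approximately) zero mean and unit variance
```

So with a batch size of 64 and 32 kernels, each of the 32 channels gets its own mean/variance computed across all 64 samples (and, for conv layers, across the spatial map as well). Trainable scale and shift parameters, which real batch-norm layers add afterwards, are omitted here.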
{ "domain": "ai.stackexchange", "id": 1614, "tags": "neural-networks, batch-normalization" }
Conditional entropy calculation in python, H(Y|X)
Question: Input X: A numpy array whose size gives the number of instances. X contains each instance's attribute value. Y: A numpy array which contains each instance's corresponding target label. Output: Conditional Entropy Can you please help me code the conditional entropy calculation dynamically, which will further be subtracted from the total entropy of the given population to find the information gain? I tried something like the code example below, but the only input data I have are the two numpy arrays. Can you please help me correct this? def gain(data, attr, target_attr): val_freq = {} subset_entropy = 0.0 for record in data: if (val_freq.has_key(record[attr])): val_freq[record[attr]] += 1.0 else: val_freq[record[attr]] = 1.0 for val in val_freq.keys(): val_prob = val_freq[val] / sum(val_freq.values()) data_subset = [record for record in data if record[attr] == val] conditional_entropy += val_prob * entropy(data_subset, target_attr) Answer: The formula for conditional entropy is: $H(X|Y)=\sum_{v\epsilon values(Y)}P(Y=v)H(X|Y=v)$ for X given Y. Mutual information of X and Y: $I(X,Y)=H(X)-H(X|Y)=H(Y)-H(Y|X)$ I assume you already know the formula for H(X), the entropy. For more information I would suggest: http://www.cs.cmu.edu/~venkatg/teaching/ITCS-spr2013/notes/lect-jan17.pdf After knowing these formulas, the coding part shouldn't be that hard. Python takes care of most of the things for you, such as log(X): when X is a matrix, Python just takes the log of every element. For the sum you can use an iterative approach or np.sum(). If you have code, consider posting it so we can review it and tell you what is wrong, what is right, and how to improve.
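Following the answer's pointers, one possible working implementation from just the two arrays might look like this (a sketch of my own, not the original poster's code; it uses log base 2, so values are in bits, and computes $H(Y|X)$ as asked in the question):

```python
import numpy as np

def entropy(y):
    # H(Y) = -sum_v P(Y=v) log2 P(Y=v)
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(x, y):
    # H(Y|X) = sum_v P(X=v) H(Y | X=v), with x and y aligned element-wise
    values, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return sum(pv * entropy(y[x == v]) for v, pv in zip(values, p))

def information_gain(x, y):
    return entropy(y) - conditional_entropy(x, y)

X = np.array([0, 0, 1, 1])
Y = np.array([0, 0, 1, 0])
print(conditional_entropy(X, Y))  # 0.5: X=0 gives pure labels, X=1 gives a 50/50 split
```

`np.unique(..., return_counts=True)` does the frequency counting that the dictionary loop in the question attempts, and the boolean mask `y[x == v]` replaces the list-comprehension subset.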
{ "domain": "datascience.stackexchange", "id": 6982, "tags": "machine-learning, decision-trees" }
Four vectors from spinors
Question: In Exercise 2.3 of A modern introduction to Quantum Field Theory by Michele Maggiore I am asked to show that, if $\xi_R$ and $\psi_R$ are right-handed spinors, then $$ V^\mu = \xi_R^\dagger \sigma^\mu \psi_R$$ transforms as a four vector. Here, $\sigma^\mu = (1,\sigma^i)$. I have shown that it does so for boosts along the x-axis by explicitly transforming the two spinors and showing that the components of $V^\mu$ transform correctly. It seems easy to do the same thing for boosts along other directions and rotations around the separate axes. However, I would like to show this for a general Lorentz-transformation, i.e. I would like to show $$ (\Lambda_R \xi_R)^\dagger \sigma^\mu (\Lambda_R \psi_R) = \Lambda_\nu^\mu \xi_R^\dagger \sigma^\nu \psi_R \text{ }\text{ }\text{ }\text{ }\text{ }\text{ (2)} $$ Trying to do this explicitly seems even less elegant than what I have done so far. I have tried commuting $\sigma^\mu$ to the right of $\Lambda_R$ in (2), but that gives a rather complicated expression. Is there a slightly more elegant way to see why (2) is true without computing all the components explicitly? (I am not looking for a solution, but would rather have a hint!) Answer: You're essentially done, because any proper, orthochronous Lorentz transformation is the product of a rotation and a boost. If you are not familiar with this fact, try looking up the proof of polar decomposition of matrices and you will see that it generalizes easily. If you get stuck, add a comment below and I'll update this answer with more details.
{ "domain": "physics.stackexchange", "id": 1459, "tags": "homework-and-exercises, group-theory, representation-theory, lorentz-symmetry, spinors" }
Progress 4GL - Luhn algorithm
Question: My implementation in Progress 4GL of the Luhn Algorithm. Any suggestions on improving it? FUNCTION fnLuhnAlgorithm RETURNS LOGICAL (INPUT pcNumber AS CHARACTER): /*------------------------------------------------------------------------------ Purpose: Applies Luhn Algorithm to check a Number Notes: Returns True/False Validation based on check digit From the rightmost digit, which is the check digit, moving left, double the value of every second digit; if product of this doubling operation is greater than 9 (e.g., 7 * 2 = 14). Sum the digits of the products (e.g., 10: 1 + 0 = 1, 14: 1 + 4 = 5) together Compute the sum of the digits. Multiply by 9. The last digit, is the check digit. ------------------------------------------------------------------------------*/ DEFINE VARIABLE cNum AS CHARACTER NO-UNDO. DEFINE VARIABLE iCheck AS INTEGER NO-UNDO. DEFINE VARIABLE iLength AS INTEGER NO-UNDO. DEFINE VARIABLE iLoopCnt AS INTEGER NO-UNDO. DEFINE VARIABLE iNum AS INTEGER NO-UNDO. DEFINE VARIABLE iNum1 AS INTEGER NO-UNDO. DEFINE VARIABLE iNum2 AS INTEGER NO-UNDO. DEFINE VARIABLE iTestLength AS INTEGER NO-UNDO. ASSIGN iLength = LENGTH(pcNumber) iTestLength = iLength - 1 iCheck = 1. /* 1 for the check digit we skip */ DO iLoopCnt = iTestLength TO 1 BY -1: ASSIGN iNum = INTEGER(SUBSTR(pcNumber,iLoopCnt,1)) iCheck = iCheck + 1. IF iCheck MODULO 2 = 1 THEN ASSIGN iNum1 = iNum1 + iNum. ELSE DO: ASSIGN iNum2 = iNum * 2. IF iNum2 < 10 THEN ASSIGN iNum1 = iNum1 + iNum2. ELSE ASSIGN cNum = STRING(iNum2) iNum1 = iNum1 + INTEGER(SUBSTR(cNum,1,1)) + INTEGER(SUBSTR(cNum,2,1)). END. END. ASSIGN iNum2 = iNum1 * 9 iNum = iNum2 MODULO 10. IF iNum = INTEGER(SUBSTR(pcNumber,iLength,1)) THEN RETURN TRUE. ELSE RETURN FALSE. END FUNCTION. 
/* fnLuhnAlgorithm */ Answer: You are kind of overdoing it a little in my opinion, there are too many variables that don't really add much value, also I would change the parameter to be DECIMAL, to avoid calls using not numbers that would cause run time errors. My implementation would look like this: FUNCTION fnLuhnAlgorithm RETURNS LOGICAL (INPUT pcNumber AS DECIMAL): /*------------------------------------------------------------------------------ Purpose: Applies Luhn Algorithm to check a Number Notes: Returns True/False Validation based on check digit From the rightmost digit, which is the check digit, moving left, double the value of every second digit; if product of this doubling operation is greater than 9 (e.g., 7 * 2 = 14). Sum the digits of the products (e.g., 10: 1 + 0 = 1, 14: 1 + 4 = 5) together Compute the sum of the digits. Multiply by 9. The last digit, is the check digit. ------------------------------------------------------------------------------*/ DEFINE VARIABLE cNumber AS CHARACTER NO-UNDO INITIAL "". DEFINE VARIABLE iDigit AS INTEGER NO-UNDO INITIAL 0. DEFINE VARIABLE iSum AS INTEGER NO-UNDO INITIAL 0. DEFINE VARIABLE iLoopCnt AS INTEGER NO-UNDO INITIAL 0. cNumber = STRING(pcNumber). DO iLoopCnt = LENGTH(cNumber) - 1 TO 1 BY -1: iDigit = INTEGER(SUBSTR(cNumber,iLoopCnt,1)). IF iLoopCnt MODULO 2 = LENGTH(cNumber) MODULO 2 THEN iSum = iSum + iDigit. ELSE iSum = iSum + INTEGER(SUBSTR(STRING(iDigit * 2,"99"),1,1)) + INTEGER(SUBSTR(STRING(iDigit * 2,"99"),2,1)). END. IF ((iSum * 9) MODULO 10) = INTEGER(SUBSTR(cNumber,LENGTH(cNumber),1)) THEN RETURN TRUE. ELSE RETURN FALSE. END FUNCTION. /* fnLuhnAlgorithm */
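If it is useful to cross-check either Progress version against known test data, here is one possible reference implementation in Python (my own sketch, not part of the review). It follows the same multiply-by-9 check described in the comment block of the functions above, and uses the standard shortcut that the digit sum of a doubled digit greater than 9 equals the doubled value minus 9:

```python
def luhn_valid(number: str) -> bool:
    """Validate a number against its rightmost (check) digit: double every
    second digit from the right, sum the digit sums, multiply the total by 9,
    and compare the last digit of the result with the check digit."""
    digits = [int(c) for c in number]
    check = digits[-1]
    total = 0
    for i, d in enumerate(reversed(digits[:-1])):
        if i % 2 == 0:      # every second digit, starting just left of the check digit
            d *= 2
            if d > 9:       # digit sum of a two-digit product: 14 -> 1 + 4 = 5 = 14 - 9
                d -= 9
        total += d
    return (total * 9) % 10 == check

print(luhn_valid("79927398713"))  # True  (a commonly used Luhn test number)
print(luhn_valid("79927398710"))  # False
```

Any number the two implementations disagree on is then a convenient test case for the Progress code.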
{ "domain": "codereview.stackexchange", "id": 4005, "tags": "algorithm, performance, progress-4gl" }
Computation of Density for an Ideal Gas
Question: I'd like to get some help deriving the following result: $$ \langle \rho(\mathbf{q}) \rangle = \frac{N}{V}$$ where $$\rho(\mathbf{q}) = \sum_{i}^{N}\delta(\mathbf{q}_i-\mathbf{q}) $$ and $\mathbf{v} = v_x,v_y,v_z$, since we are working in a $6N$-dimensional phase space. I tried doing this: $$ \langle \rho(\mathbf{q}) \rangle = \frac{\int dq^{3N} dp^{3N} e^{-\beta H}\rho(\mathbf{q}) }{Q} $$ Now, the partition function $Q_N$ can be factorized as $$Q = Q_1 \cdot Q_2 \cdot...\cdot Q_N = \prod_{i}^{N} Q_i \tag{1}$$ and the Hamiltonian of the system can be rewritten as the sum of the Hamiltonians corresponding to each particle: $$ \mathcal{H} = \sum_{i}^{N} H_i \tag{2}$$ Then I tried rewriting the sum as one over the indices $i$ such that $i = j$ and over $i \ne j$: $$ \langle \rho(\mathbf{q}) \rangle = \frac{\int d\mathbf{q}_i d\mathbf{p}_i e^{-\beta \sum_{i = j}H_i }\delta(\mathbf{q-q}_i) \cdot \int d\mathbf{q}_{i\ne j} d\mathbf{p}_{i \ne j} e^{-\beta \sum_{i \ne j}H_i }\delta(\mathbf{q-q}_i) }{\prod_i Q_i}$$ But couldn't get anything good out of this, unfortunately. I honestly don't see how to get an $N$ in the numerator. Also, the denominator is equal to $V$, therefore the integration along the momenta must not matter in the overall picture, but, again, I can't see how we should get rid of the integral in $d\mathbf{p}_i$ when the Hamiltonians depend directly upon them. I'd be much more than glad if somebody could either help me see if I made any crucial mistakes along the way, or tell me how to go on performing these calculations. Answer: First let's calculate the contribution of one of the terms in the sum: $$\langle \rho \rangle_1=\frac{\int \Pi_idp_i dq_i \delta(q_1-q) e^{-\beta H_1 -\beta H_2 -...}}{Q_1^N}$$ where $H_j\equiv H(p_j,q_j)$ and $Q_1=\int dp_1 dq_1 e^{-\beta H_1}$.
We find $$\langle \rho \rangle_1=\frac{\int dp_1 e^{-\beta H_1(p_1,q)}\int \Pi_{i=2}dp_i dq_i e^{-\beta H_2 -\beta H_3 -...}}{Q_1^{N}}$$ On the other hand in an ideal gas $Q_1= V \int dp_1 e^{-\beta H_1}$, because $H$ doesn't depend on $q$. Thus we have $$\langle \rho \rangle_1=\frac{\frac{Q_1}{V}\times Q_1^{N-1}}{Q_1^{N}}=\frac{1}{V}$$ This is the contribution of $q_1$ in the sum. Considering all terms in the sum, we find $$\langle \rho \rangle = N\times \frac{1}{V}= \frac{N}{V}$$
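Since the Boltzmann weight has no $q$-dependence for the ideal gas, the result is easy to sanity-check numerically: positions are uniformly distributed in the box, so the average particle count in any sub-volume, divided by that sub-volume, reproduces $N/V$. A rough Monte Carlo sketch of my own (arbitrary units, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, n_config = 1000, 2.0, 2000
V = L**3                                # so N/V = 125 in these units

# e^{-beta H} carries no q-dependence for the ideal gas, hence uniform positions
q = rng.uniform(0.0, L, size=(n_config, N, 3))

# Smeared version of <rho(q)>: mean particle count in the octant [0, L/2)^3
in_octant = (q < L / 2).all(axis=2)
density = in_octant.sum(axis=1).mean() / (L / 2) ** 2 / (L / 2)
print(density)  # about 125 = N/V, up to Monte Carlo noise
```

The momentum integrals never appear, mirroring the cancellation of the $\int dp_1 e^{-\beta H_1}$ factors in the derivation above.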
{ "domain": "physics.stackexchange", "id": 98135, "tags": "homework-and-exercises, statistical-mechanics, density, partition-function" }
What protein or other process does the peptide BPC-157 come from?
Question: BPC-157 is a peptide for which there were some studies that suggest that it can help with wound healing and gastrointestinal problems in some animal models. In 2022 the World Anti-Doping Agency added it as a forbidden substance. A patent suggests that it can be extracted from human and animal gastric juice, but it was from the time before the human genome project, so they don't mention the actual protein it comes from. My attempts at running BLAST only bring me to the synthetic Usp-BPC-157 and not to any protein that could plausibly be the source in humans or other animals. Can someone tell me how the body creates BPC-157 naturally, either from what protein it's a breakdown product or otherwise how it gets into human gastric juice? Answer: As far as I can tell, it's a made-up scam. Almost all of the research on this peptide is done by one person, Predrag Sikirić, who also discovered and patented it. The claim is that this is the "important part" of some other, larger protein that he claims to have isolated from the gut. E.g. from Jelovac, N., Sikiric, P., Rucman, R., Petek, M., Marovic, A., Perovic, D., ... & Prkacin, I. (1999). Pentadecapeptide BPC 157 attenuates disturbances induced by neuroleptics: the effect on catalepsy and gastric ulcers in mice and rats. European journal of pharmacology, 379(1), 19-31.: We have identified a new human gastric juice protein with mucosal protective properties and a huge range of organoprotective effects, and with MW 40,000 (determined by gel chromatography), code-named BPC. ... In line with this, a 15-amino acid fragment (BPC 157), with amino acid sequence, Gly–Glu–Pro–Pro–Pro–Gly–Lys–Pro–Ala–Asp–Asp–Ala–Gly–Leu–Val, MW 1419, and apparently no sequence homology to known gut peptides, thought to be essential for activity of an entire peptide, was characterized and synthesized Okay, so apparently this MW 40,000 protein has a "huge range of organoprotective effects", except... 
it doesn't seem it's actually been studied, ever. Let us follow some citations... The earliest paper those passages refer to is this one: Sikirić, P., Petek, M., Ručman, R., Seiwerth, S., Grabarević, Z., Rotkvić, I., ... & Karakas, I. (1993). A new gastric juice peptide, BPC. An overview of the stomach-stress-organoprotection hypothesis and beneficial effects of BPC. Journal of Physiology-Paris, 87(5), 313-327. From the paper: A new gastric juice peptide, Mr 40000, named BPC has recently been isolated (in preparation) and its huge range of organoprotective effects has been described. In this, a 15 amino acid fragment (BPC 157) thought to be essential for its activity has been characterized (Sikirić et al, 1991a,b,c,d,e,f,g, 1992a,b). Okay, so they've already got this peptide from gastric juice, already found and "fully characterized" (quote from the abstract) it, all in just 2 years. I can't think of anything in biology that is truly fully characterized; it takes decades to do this kind of work even with modern methods, yet this was supposedly done by one group in a couple of years! Within just those a,b,c,d,e,f,g references in 1991 they've already shown that this peptide cures kidney lesions, pancreatitis, colon hypersensitivity, diabetes, liver lesions, and stomach ulcers. And all of this work on the peptide was completed and published before the discovery of the original MW 40,000 protein was ever published (that's the "in preparation" note from the quote; I can't find evidence that the "in preparation" manuscript was ever published). All of these papers have the same lead author. This is not credible, at all: no one can perform all this work. My best guess is that it is made up entirely. The "1991a" reference is this one: Sikiric, P., Petek, M., Rucman, R., Rotkvic, I., Seiwerth, S., Grabarevic, Z., ... & Udovicic, I. (1991). Hypothesis: Stomach stress response, diagnostic and therapeutical value: a new approach in organoprotection. Exp Clin Gastroenterol, 1, 15-16.
Along with several of these other papers, it makes up the first volume of the journal "Exp Clin Gastroenterol", which appears to have been created solely to host this new, influential work. I can't really find any trace of it. Google Scholar can't find any of these papers. There is an (also sketchy) journal with a similar name hosted at "gastrossr (dot) org" that only took that name in 2002 but does not seem to have any relation. So, let's review... A larger protein, never before described but supposedly isolated from the stomach, cures everything. Before this protein is described in the literature (nothing published on it at all), one person has already identified and synthesized the important 15 amino acids from it that are responsible for all of its activities across all body systems. All further research is done with this peptide. The peptide and precursor protein are unrelated to any other protein (e.g.: your BLAST attempt; also, there is no mention of any related protein in any of these papers, including recent ones). The peptide is patented by the person who publishes all the papers about its benefits. The original work describing the peptide was published in a brand-new journal; I haven't found any evidence that the journal actually exists. I suspect it's a creation of the person who "published" all of its content, including a dozen first-authored papers in the first year that this peptide was discovered. The name BPC stands for "Body Protective Compound". So, before doing this research categorizing all the ways that it protects the body, it was already pre-ordained with the name "body protective compound". I assume this is because the author already knew what his results were going to show, because he planned to make it all up.
I can't find any evidence that it does; in a 'research' context it's an entirely synthetic peptide. The papers supporting it seem to be entirely synthetic, as well. As of now there is only one result for a search of "BPC 157" as a treatment/intervention on clinicaltrials.gov; this was a Phase I study that was cancelled; it's not clear whether a single patient was ever enrolled. You can find sources that sell or claim to sell it, in some cases labeled as "for horses", in others as a dietary supplement for humans while repeating some of the dodgy claims in these papers, despite never being studied in people. I won't link to any of these sources for what I think are obvious reasons.
{ "domain": "biology.stackexchange", "id": 12419, "tags": "proteins, blast" }
Quantum tunneling and a football permeating a wall
Question: I was wondering if I can say to a layman that "upon throwing the ball on a wall an enormously large number of times, there is a small probability that the ball will go through the wall", while explaining quantum tunneling (the alpha decay example is abstract and artificial for a layman). My doubt is whether the wall region can be modeled as a finite potential barrier (an infinite potential barrier - which is not of Dirac delta form - will not allow tunneling). Also, the wall seems to have all the other characteristics of the artificial barrier potential we set up in quantum mechanics; am I missing anything? Answer: The claim "infinite potential barrier – which is not of Dirac delta form – will not allow tunneling" isn't quite right. A barrier where there's a finite region of infinite potential will not allow tunneling, nor will potentials with singularities going sufficiently fast $\to \infty$. But it's easy to construct a non-Dirac potential with a singularity that still permits tunneling; in particular the one-dimensional singularities of the $\tfrac{1}{|r|}$ peaks as which you might model the nuclei's Coulomb potential aren't much of a problem. You can basically model the ball's CM amplitude as a Bloch wave there. So, yes, a macroscopic ball can in fact tunnel through a brick wall. Of course, the probability is exponentially small in the thickness, so indeed an enormously large number of throws is required. More problematically, it is far more likely for the ball to, say, spontaneously disintegrate into two identically-shaped halves, to develop a tight chemical connection to the wall, or perhaps catch fire.
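To put a very rough number on "exponentially small", one can use the WKB estimate $T \approx e^{-2d\sqrt{2mV_0}/\hbar}$ for a square barrier with the particle's energy far below the barrier height. The numbers below are my own illustrative assumptions (a 0.4 kg ball, a 0.1 m wall modeled as a 1 J effective barrier), not anything stated in the answer:

```python
import math

hbar = 1.054571817e-34  # J*s
m = 0.4                 # kg  (assumed mass of the ball)
V0 = 1.0                # J   (assumed effective barrier height; illustrative only)
d = 0.1                 # m   (assumed wall thickness)

# WKB transmission through a square barrier (energy << V0): T ~ exp(-2 d sqrt(2 m V0) / hbar)
exponent = 2.0 * d * math.sqrt(2.0 * m * V0) / hbar
print(f"T ~ exp(-{exponent:.2e})")
```

The exponent comes out around $10^{33}$, so however the barrier is modeled in detail, "an enormously large number of throws" is an understatement by any human standard.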
{ "domain": "physics.stackexchange", "id": 7951, "tags": "quantum-mechanics, potential, quantum-tunneling" }
Isomer Identification Using Condensed Structural Formulae
Question: Which of the following pairs are isomers? a) $\ce{C5H10}$ and $\ce{C10H20}$ b) $\ce{CH3(CH2)4CH3}$ and $\ce{CH3(CH2)3CH3}$ c) $\ce{CH3CH(CH3)(CH2)2CH3}$ and $\ce{CH3(CH2)2CH(CH3)2}$ d) $\ce{(CH3)3CH}$ and $\ce{CH3CH2CH2CH3}$ My textbook says the answer is C). I beg to differ. I think it is D). C) is just a different way of writing 2-methylpentane. Answer: isomer One of several species (or molecular entities ) that have the same atomic composition (molecular formula) but different line formulae or different stereochemical formulae and hence different physical and/or chemical properties. Source: Pure and Applied Chemistry, 1994, 66, 1077 (Glossary of terms used in physical organic chemistry (IUPAC Recommendations 1994)) Pure and Applied Chemistry, 1996, 68, 2193 (Basic terminology of stereochemistry (IUPAC Recommendations 1996)) a) C5H10 and C10H20 different chemical formula → not isomeric b) CH3(CH2)4CH3 = hexane (C6H14) and CH3(CH2)3CH3 = pentane (C5H12) different chemical formula → not isomeric c) CH3CH(CH3)(CH2)2CH3 = 2-methylpentane (C6H14) and CH3(CH2)2CH(CH3)2 = 2-methylpentane (C6H14) same chemical formula and same structural formula → identical, not isomeric d) (CH3)3CH = isobutane (C4H10) and CH3CH2CH2CH3 = butane (C4H10) same chemical formula and different structural formula → isomeric
{ "domain": "chemistry.stackexchange", "id": 2157, "tags": "organic-chemistry, isomers" }
Shannon interpolation formula for downsampled data with an "almost ideal" low pass filter
Question: Let $x[n]$ be a discrete time signal with DFT given by $X(f)=\sum_n x[n]e^{-2\pi inf}$ supported on $[-1/2M,1/2M]$ with $f\in[-1/2,1/2]$. I can then down-sample to get $y[n]:=x[nM]$. Then, let $$\widetilde{x}[n]=\begin{cases}My[n/M],& M|n,\\0,&\text{otherwise}.\end{cases}$$ Then its DFT is given by $$ \begin{aligned}\widetilde{X}(f)&=\sum_{n\in\mathbb{Z}}\widetilde{x}[n]e^{-2\pi inf}\\&=\begin{cases}M\sum_{n\in\mathbb{Z}}y[n/M]e^{-2\pi inf},& M|n,\\0,&\text{otherwise}\end{cases}\\&=\begin{cases}M\sum_{n}x[n]e^{-2\pi inf},&M|n,\\0,&\text{otherwise}.\end{cases}\end{aligned} $$ Now, let $\hat{x}[n]=(\tilde{x}\ast h)[n]$ be the discrete Hilbert transform of $\tilde{x}$, with $h$ an "almost" ideal low-pass filter with cut-off frequency $f_c=1/M$. My question is, how do I then apply the Shannon interpolation formula to reconstruct $x(t)$? Intuitively, I would guess that it would be something along the lines of $$x(t)=\left(\sum_{n\in\mathbb{Z}}x[n]\cdot\delta(t-n\Delta t)\right)\ast H(f),$$ with $$H(f)=\begin{cases}\frac{DTFT\{\hat{x}[\cdot]\}(f)}{M\cdot X(f)},&\text{if }M|n, \\ 0,&\text{otherwise}. \end{cases}$$ Am I correct? Answer: I don't get your downsample step when you downsampled by factor $M$. Let me go from scratch with the spectrum visualization below, with time domain, continuous frequency domain and discrete frequency domain from left to right. When we reduce the sampling frequency by a factor $k$, the signal spectrum is copied to new replicas at $f_s/k$. The discrete spectrum is a snapshot of continuos spectrum at $[-f_s, f_s]$ and $[-f_{s,down} = f_s/k, f_{s,down} = f_s/k]$, so it is expanded by the factor $k$. The DFT works on the discrete frequency domain. 
The downsampling factor $k$ must be chosen so that the expanded spectrum (in discrete frequency, or the new replicas in the continuous version) does not overlap with its copies centered at $-2\pi$ and $2\pi$, or at $-1/2$ and $1/2$ if you take the discrete instantaneous frequency by dividing by $2\pi$; otherwise aliasing happens and the reconstruction of $x(t)$ is impossible. If there is no aliasing, a low-pass filter that takes the spectrum part centered at 0 suffices. This filter does not need to be "ideal"; it only has to be sure to take the center part and nothing else. In your calculation, if I understand correctly, you are talking about the discrete frequency domain and the original spectrum occupies $1/M$ of the normalized band (you said $f$ is in $[-1/2, 1/2]$ and your signal has support in $[-1/2M, 1/2M]$). In this case your downsampling factor $k$ must be less than $M/2$, and the cutoff frequency of your ideal filter must be at $1/4$ if $k=M/2$ (and yes, in the case $k=M/2$ we need the "ideal filter" assumption).
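A numerical illustration of the zero-stuff-and-filter reconstruction (my own sketch, not the answerer's code): a signal bandlimited to $[-1/2M, 1/2M]$ is downsampled by $M$ as in the question, then zero-stuffed, scaled by $M$, and passed through a long windowed-sinc low-pass filter with cutoff $1/2M$ — "almost ideal" in the sense that the finite filter length only gives approximate reconstruction.

```python
import numpy as np

M = 4
n = np.arange(4096)
# Bandlimited test signal: both frequencies lie below 1/(2M) = 0.125 cycles/sample
x = np.cos(2 * np.pi * 0.06 * n) + 0.5 * np.sin(2 * np.pi * 0.03 * n)

y = x[::M]                    # downsample by M
xt = np.zeros_like(x)
xt[::M] = M * y               # zero-stuff and scale by M (the x-tilde of the question)

# "Almost ideal" low pass: 513-tap windowed sinc with cutoff 1/(2M)
k = np.arange(-256, 257)
h = (np.sinc(k / M) / M) * np.hamming(k.size)
x_rec = np.convolve(xt, h, mode="same")

err = np.max(np.abs(x_rec[300:-300] - x[300:-300]))  # interior error, edges excluded
print(err)  # small, and it shrinks further as the filter length grows
```

Lengthening the filter (or picking a sharper window) pushes the reconstruction error down, which is the practical meaning of "almost ideal" here.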
{ "domain": "dsp.stackexchange", "id": 5004, "tags": "lowpass-filter, interpolation, downsampling, reconstruction" }
How to deal with explicit time dependence of the Lagrangian?
Question: Clearly, if the Lagrangian is explicitly time dependent, the Euler-Lagrange equations being satisfied does not extremise the action. I am unclear as to how to deal with systems with an explicitly time-dependent Lagrangian. We would need a condition of the sort $$\delta q_i (\frac{\partial L}{\partial q_i}-\frac{d^r}{dt^r}\frac{\partial L}{\partial q_i^{(r)}}) + \frac{\partial L}{\partial t}\delta t=0$$ to be satisfied, where here the $q_i$ are the generalised coordinates and as many time derivatives of them are taken (i.e. $r$ is a function of $i$) as needed. Summation convention is used. Is it possible to find the 'phase space' (with a less strict definition) trajectory like this? It seems that the changes in the generalised coordinates are now coupled... I considered maybe splitting the action integral up into time steps over which $\frac{\partial L}{\partial t}\delta t=0$, but then we cannot guarantee that the boundary conditions (which are given only for the end points of the whole time interval) make the additional terms from integrating by parts vanish, or indeed we cannot even solve for the evolution of the generalised coordinates over the small time interval without the boundary conditions... Answer: The Euler-Lagrange equations in the case of an explicitly time-dependent Lagrangian are the same as the ones without explicit time dependence. You are getting mixed up with what is getting varied and how the variation should be carried out. Approach 1. You do not vary time. This is because you want to find how the configuration $q \in \mathbb{R}^n$ of your system evolves in time $t \in \mathbb{R}$, so you are looking for a map $$ q : [t_1, t_2] \to \mathbb{R}^n$$ that describes this time evolution, starting at point $q_1$ and ending at point $q_2 \in \mathbb{R}^n$.
So Lagrangian mechanics tells you that you want to find a curve of the form $$\gamma = \Big\{ \, \big(\,t, \,q(t) \,\big) \, \in \, \mathbb{R} \times \mathbb{R}^n\,\, :\, \, t \in [t_1, t_2] \Big\}$$, such that $q(t_1) = q_1$ and $q(t_2) = q_2$, that optimizes the action functional $$S[\gamma] = \int_{t_1}^{t_2}\, L\Big( q(t), \, \frac{d q}{dt}(t), \, t\Big) \,dt$$ defined for any smooth curve $\gamma$ of the form written above. Then if you get any two such curves $\gamma = \Big\{ \, \big(\,t, \, q(t)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$ and $\tilde\gamma = \Big\{ \, \big(\,t, \, \tilde q(t)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$ between the points $q_1$ and $q_2$, then there exist one parameter family of such curves $$\gamma_{\epsilon} = \Big\{ \, \big(\,t, \, q(t, \epsilon)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$$ where $q(t, 0) = q(t)$ and $q(t,1) = \tilde q(t)$. So basically this tells you that any two curves of the form $\gamma_{\epsilon} = \Big\{ \, \big(\,t, \, q(t, \epsilon)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$, among which is the solution you are interested in, can be included into a one parameter family (a variation) of the type $$\gamma_{\epsilon} = \Big\{ \, \big(\,t, \, q(t, \epsilon)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$$ So the curves that optimize the action $S[\gamma]$ are among the critical points of $S[\gamma]$. The critical points of $S[\gamma]$ are the zeroes of its derivative. To find the derivative of $S[\gamma]$ you have to take an arbitrary family $\gamma_{\epsilon}= \Big\{ \, \big(\,t, \, q(t, \epsilon)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$ that connects $q_1$ to $q_2$ and take $$\frac{\partial}{\partial \epsilon}\, S[\gamma_{\epsilon}] \Big|_{\epsilon = 0}$$ i.e. first differentiate with respect to $\epsilon $ and then set $\epsilon = 0$. 
Thus, in order to find the critical curves $\gamma$ of the functional $s[\gamma]$, you want to find those $\gamma = \Big\{ \, \big(\,t, \, q(t)\, \big) \, : \, t \in [t_1, t_2] \, \Big\}$ between $q_1$ and $q_2$ for which \begin{align} 0 = \delta S[\gamma] =& \frac{\partial}{\partial \epsilon}\, S[\gamma_{\epsilon}] \Big|_{\epsilon = 0} = \frac{\partial}{\partial \epsilon}\, \int_{t_1}^{t_2}\, L\Big( q(t, \epsilon), \, \frac{d q}{dt}(t, \epsilon), \, t\Big) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \frac{\partial}{\partial \epsilon}\, L\Big( q(t, \epsilon), \, \frac{d q}{dt}(t, \epsilon), \, t\Big) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} + \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial}{\partial \epsilon} \frac{d q^k}{dt} \,\right) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} + \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{d}{dt} \frac{\partial q^k}{\partial \epsilon} \,\right) \,dt \, \Big|_{\epsilon = 0} = \\ \end{align} By the product rule of derivatives $$\frac{d}{dt}\left(\, \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} \,\right) = \left(\, \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \right)\, \frac{\partial q^k}{\partial \epsilon} + \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{d}{dt} \frac{\partial q^k}{\partial \epsilon}$$ and therefore $$ \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{d}{dt} \frac{\partial q^k}{\partial \epsilon} = - \left(\, \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \right)\, \frac{\partial q^k}{\partial \epsilon} + 
\frac{d}{dt}\left(\, \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} \,\right) $$ Also, when you set $\epsilon =0$ then $$\delta q^k(t) = \frac{\partial q^k}{\partial \epsilon}(t, \epsilon)\Big|_{\epsilon =0}$$ and you get the expression \begin{align} 0 = \delta S[\gamma] =& \frac{\partial}{\partial \epsilon}\, S[\gamma_{\epsilon}] \Big|_{\epsilon = 0} = \frac{\partial}{\partial \epsilon}\, \int_{t_1}^{t_2}\, L\Big( q(t, \epsilon), \, \frac{d q}{dt}(t, \epsilon), \, t\Big) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} + \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{d}{dt} \frac{\partial q^k}{\partial \epsilon} \,\right) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} - \left(\, \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \right)\, \frac{\partial q^k}{\partial \epsilon} + \frac{d}{dt}\left(\, \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} \,\right) \,\right) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q(t), \, \dot{q}(t), \, t\Big) \delta q^k(t) - \left(\, \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q(t), \, \dot{q}(t), \, t\Big) \right)\, \delta q^k(t) \right)\, dt +\\ &+ \int_{t_1}^{t_2}\, \frac{d}{dt}\left(\, \frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \dot{q}, \, t\Big) \frac{\partial q^k}{\partial \epsilon} \,\right) \,dt \, \Big|_{\epsilon = 0} = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q(t), \, \dot{q}(t), \, t\Big) \delta q^k(t) - \left(\, \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q(t), \, \dot{q}(t), \, t\Big) \right)\, 
\delta q^k(t) \right)\, dt + 0 = \\ =& \int_{t_1}^{t_2}\, \left( \,\frac{\partial L}{\partial q^k}\Big( q(t), \, \dot{q}(t), \, t\Big) - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q(t), \, \dot{q}(t), \, t\Big) \,\right)\,\delta q^k(t) dt \end{align} which has to hold for any arbitrary smooth curve $\delta q(t)$. This is possible if and only if $$\frac{\partial L}{\partial q^k}\Big( q(t), \, \dot{q}(t), \, t\Big) - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q(t), \, \dot{q}(t), \, t\Big) = 0$$ for all $k=1, ..., n$. In other words, a curve $\gamma = \Big\{\,(\, t, \, q(t)\,) \, : \, t \in [t_1,t_2]\,\Big\}$ connecting $q_1$ to $q_2$ is a critical curve for the action $S[\gamma]$, i.e. $$\delta S[\gamma] = 0$$ if and only if it satisfies the system of differential equations $$ \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}^k}\Big( q, \, \frac{dq}{dt}, \, t\Big) = \frac{\partial L}{\partial q^k}\Big( q, \, \frac{dq}{dt}, \, t\Big) $$ for $k=1, ..., n$. The latter system of differential equations is exactly the system of Euler-Lagrange equations. Approach 2. To find the evolution of your system, you may parametrize the curves $\gamma = \Big\{\,(\, t, \, q(t)\,) \, : \, t \in [t_1,t_2]\,\Big\}$ in terms of another parameter $s \in \mathbb{R}$ and so time becomes $t=t(s)$ your evolution becomes $q(s) = q(t(s))$. Consequently, the curves you are dealing with are of the more general form $\gamma = \Big\{\,(\, t(s), \, q(s)\,) \, : \, s \in [s_1,s_2]\,\Big\}$. 
But then $$\frac{dq^k}{dt}(t) = \frac{dq^k}{ds}(s)\frac{ds}{dt}(t) =\frac{\,\, \frac{dq^k}{ds}(s)\,\,}{\frac{dt}{ds}(s)} = \Big(\, \frac{dt}{ds}(s)\,\Big)^{-1} \frac{d q^k}{ds}(s)$$ and $$dt = \frac{dt}{ds}(s)\, ds$$ and therefore, the action becomes $$S[\gamma] = \int_{t_1}^{t_2}\, L\Big( q(t), \, \frac{d q}{dt}(t), \, t\Big) \,dt = \int_{s_1}^{s_2}\, L\left( q(s), \, \frac{d q}{ds}(s)\, \Big(\,\frac{dt}{ds}(s)\,\Big)^{-1}, \, t(s) \,\right) \,\frac{dt}{ds}(s)\, ds = $$ $$= \int_{s_1}^{s_2}\, \tilde{L}\left(\, q(s), \, t(s), \, \frac{d q}{ds}(s), \, \frac{d t}{ds}(s)\,\right) \,ds$$ where the new Lagrangian $\tilde L$ is $$\tilde{L}\left(\, q, \, t, \, \frac{d q}{ds}, \, \frac{d t}{ds}\,\right) = L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \,\frac{dt}{ds} $$ Observe that this last function $\tilde L$ is not explicitly dependent on the new variable $s$, so the critical curves should satisfy the usual Euler-Lagrange equations but this time with respect to the variable $s$ and with one extra equation for $t$. However, if you invert the function $t=t(s)$, turning it into $s=s(t)$, you will eliminate $s$ and you will end up with the same Euler-Lagrange equations which I derived in Approach 1 plus one extra equation for $t$, decoupled from them. So you drop it and you arrive at the same result as Approach 1. Indeed, the Euler Lagrange equations for $\tilde L$ are \begin{align} &\frac{d}{ds}\, \frac{\partial \tilde{L}}{\partial q^{'k}}\Big(\,q, t, \frac{d q}{ds}, \frac{dt}{ds} \,\Big) = \frac{\partial \tilde{L}}{\partial q^k}\Big(\,q, t, \frac{d q}{ds}, \frac{dt}{ds} \,\Big)\\ &\frac{d}{ds}\, \frac{\partial \tilde{L}}{\partial t'}\Big(\,q, t, \frac{d q}{ds}, \frac{dt}{ds} \,\Big) = \frac{\partial \tilde{L}}{\partial t}\Big(\,q, t, \frac{d q}{ds}, \frac{dt}{ds} \,\Big) \end{align} for $k = 1, ..., n$. 
Now plug in the expression for $\tilde L$ in terms of $L$ and carefully carry out all the chain rule differentiation and you get \begin{align} &\frac{d}{ds}\left( \frac{\partial }{\partial q^{'k}} \,\left[L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\frac{dt}{ds}\right] \right) = \frac{\partial }{\partial q^k}\left[ L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds}\right]\\ &\frac{d}{ds}\left( \frac{\partial }{\partial t'} \,\left[L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\frac{dt}{ds}\right] \right) = \frac{\partial }{\partial t}\left[ L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds}\right] \end{align} where $q^{'k}$ is a short notation for $\frac{dq^k}{ds}$. First, by applying chain rule and carrying out the differentiation, we get \begin{align} &\frac{d}{ds}\left( \frac{\partial L}{\partial \dot{q}^j}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\frac{dt}{ds} \, \frac{\partial }{\partial q^{'k}}\left[\Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q^j}{ds}\right]\,\right) = \frac{\partial L}{\partial q^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds}\\ &\frac{d}{ds}\left( \frac{\partial L}{\partial \dot{q}^j}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\frac{dt}{ds} \frac{\partial}{\partial t'}\left[\Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q^j}{ds}\right] + L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\,\right) = \\&= \frac{\partial L}{\partial t}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds} \end{align} and then the equations simplify to \begin{align} &\frac{d}{ds}\left( \frac{\partial L}{\partial \dot{q}^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\frac{dt}{ds} \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\right) = 
\frac{\partial L}{\partial q^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds}\\ &\frac{d}{ds}\left(-\, \frac{\partial L}{\partial \dot{q}^j}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\frac{dt}{ds} \Big(\,\frac{dt}{ds}\,\Big)^{-2}\frac{d q^j}{ds} + L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\,\right) = \\&= \frac{\partial L}{\partial t}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds} \end{align} and after that we get \begin{align} &\frac{d}{ds}\left( \frac{\partial L}{\partial \dot{q}^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\, \right) = \frac{\partial L}{\partial q^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds}\\ &\frac{d}{ds}\left(-\, \frac{\partial L}{\partial \dot{q}^j}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q^j}{ds}, \, t \,\right) \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q^j}{ds} + L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\,\right) = \\&= \frac{\partial L}{\partial t}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \frac{dt}{ds} \end{align} Since $\Big(\frac{dt}{ds}\Big)^{-1} = \frac{ds}{dt}$, we can multiply both side of the equations by $\Big(\frac{dt}{ds}\Big)^{-1}$ and obtain \begin{align} &\frac{ds}{dt}\frac{d}{ds}\left( \frac{\partial L}{\partial \dot{q}^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\, \right) = \frac{\partial L}{\partial q^k}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\\ &\frac{ds}{dt}\frac{d}{ds}\left(-\, \frac{\partial L}{\partial \dot{q}^j}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q^j}{ds} + L\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right)\,\right) = \\&= \frac{\partial 
L}{\partial t}\left( q, \, \Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q}{ds}, \, t \,\right) \end{align} Finally, invert the function $t=t(s)$ into $s=s(t)$ and recall that then $\frac{ds}{dt} \frac{d}{ds} = \frac{d}{dt}$ and $\Big(\,\frac{dt}{ds}\,\Big)^{-1}\frac{d q^k}{ds} = \frac{d q^k}{dt}$. Now, we arrive at the equations \begin{align} &\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}^k}\left( q, \, \frac{d q}{dt}, \, t \,\right)\, \right) = \frac{\partial L}{\partial q^k}\left( q, \, \frac{d q}{dt}, \, t \,\right)\\ &\frac{d}{dt}\left(-\, \frac{\partial L}{\partial \dot{q}^j}\left( q, \, \frac{d q}{dt}, \, t \,\right) \frac{d q^j}{dt} + L\left( q, \, \frac{d q}{dt}, \, t \,\right)\,\right) = \frac{\partial L}{\partial t}\left( q, \, \frac{d q}{dt}, \, t \,\right) \end{align} The first system of equations are the Euler-Lagrange equations for $k = 1, ..., n$. The last equation is in fact the evolution of the total energy of the system. When $L$ does not depend on $t$ explicitly, the right hand side of this last equation is zero and we end up with the conservation of energy of the system.
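The closing claim of Approach 2 — that the extra equation is the evolution of the total energy $E = \dot q^k \frac{\partial L}{\partial \dot q^k} - L$, with $\frac{dE}{dt} = -\frac{\partial L}{\partial t}$ on solutions of the Euler-Lagrange equations — can be checked symbolically. Below is a SymPy sketch on a concrete, explicitly time-dependent Lagrangian; the choice $L = \frac{m}{2}\dot q^2 - \frac{k}{2}(1+t)q^2$ is mine, purely for illustration.

```python
import sympy as sp

# Symbolic check (the specific Lagrangian is an illustrative choice):
# on-shell, the total energy E = q' * dL/dq' - L obeys
# dE/dt = -(partial L / partial t), i.e. the final equation above.
t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)

# explicitly time-dependent Lagrangian: time-varying spring constant
L = m * q.diff(t)**2 / 2 - k * (1 + t) * q**2 / 2

p = sp.diff(L, q.diff(t))               # generalized momentum
el = sp.diff(p, t) - sp.diff(L, q)      # Euler-Lagrange expression (= 0 on-shell)

E = q.diff(t) * p - L                   # total energy
dEdt = sp.diff(E, t)

# impose the equation of motion by solving for q'' and substituting
qdd = sp.solve(sp.Eq(el, 0), q.diff(t, 2))[0]
on_shell = sp.simplify(dEdt.subs(q.diff(t, 2), qdd))

# explicit partial derivative of L with respect to t (q, q' held fixed)
Q, Qd = sp.symbols('Q Qd')
dLdt_explicit = sp.diff(L.subs(q.diff(t), Qd).subs(q, Q), t).subs(Q, q)
```

When $L$ has no explicit $t$-dependence, `dLdt_explicit` vanishes identically and the same computation reproduces conservation of energy.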
{ "domain": "physics.stackexchange", "id": 53052, "tags": "lagrangian-formalism, time, variational-principle, action, variational-calculus" }
Why do house geckos take so long to react?
Question: This is the Common house gecko, found abundantly where I live. I've been observing this for quite long and I used to ignore it after thinking on it for a short while. Today I decided to write this here on Biology SE. So in my lawn there's this large wall on which there are usually around 3-4 geckos foraging for their favorite insects at night. Every time I walk towards them, some of them instantly go into their hiding spots (within 1-2 seconds of my arrival) with lightning speed. But most of them initially look calm and not actually bothered by my presence. Not only that - I can even wave at them, go closer towards them, and nothing happens. But suddenly, after all of my actions, they too run towards their hiding spots, as if they had been daydreaming (though not instantly). Just curious - is there any real lag between their signal reception and processing? Are there some other animals which show such behavioural responses? (Maybe they are just kidding with me :P) Answer: This is entirely behavioral and is quite widespread. If they ran away every time something large passed by, they would waste a lot of energy. Many animals wait first to see if you are just moving through, and thus can be ignored, OR if you are going to hang around and thus are a potential threat (and, in the case of camouflaged organisms, have a much higher chance of spotting them). It is seen in many animals; gazelles, for instance, are known for it. This paper on the behavior in other lizards should help with understanding the evolutionary pressures behind it.
{ "domain": "biology.stackexchange", "id": 7823, "tags": "zoology, electrophysiology, herpetology" }
Custom error-logging
Question: I'm writing a static class to log my application errors into a text file. I am planning on using this as a library on any application I intend to develop in the future. Please let me know how I can improve this more and what could be changed so it would perform better. public static class Logger { public static void Log(Exception ex) { StreamWriter sw = null; try { sw = new StreamWriter(AppDomain.CurrentDomain.BaseDirectory + "\\Log.txt", true); sw.WriteLine( DateTime.Now.ToString() + " : " + "\r\nSource " + ex.Source.ToString().Trim() + "\r\nMessage : " + ex.Message.ToString().Trim() + "\r\nInner Exceptions: " + Convert.ToString(ex.InnerException) + "\r\nException thrown from: " + getExceptionGeneratedMethod(ex) + "\r\nLine Number: " + getExceptionGeneratedLineNumber(ex) + "\r\nStack Trace: " + ex.StackTrace.ToString()); sw.Flush(); sw.Close(); } catch (Exception e) { sw = new StreamWriter(AppDomain.CurrentDomain.BaseDirectory + "\\Log.txt", true); sw.WriteLine("Error Could not be logged!!"); throw ex; } } /// <summary>Writes a log file in the root directory with a custom message. /// <param name="message">String that will be displayed as a message</param> /// </summary> public static void writeCustomErrorLog(string message) { StreamWriter sw = null; try { sw = new StreamWriter(AppDomain.CurrentDomain.BaseDirectory + "\\PassiveLog.txt", true); sw.WriteLine(DateTime.Now.ToString() + " : " + "\n Custom Message: " + message); sw.Flush(); sw.Close(); } catch (Exception ex) { throw ex; } } /// <summary>Writes a log file in a directory passed in as a location with a custom message and a custom file-name.
/// <param name="message">String that will be displayed as a message</param> /// <param name="location">String value that accepts a location for the log file to be generated</param> /// <param name="fileName">String value that accepts a file-name for the log file to be generated</param> /// </summary> public static void writeCustomErrorLog(string location, string message,string fileName) { StreamWriter sw = null; try { sw = new StreamWriter(location + "\\"+fileName+".txt", true); sw.WriteLine(DateTime.Now.ToString() + " : " + "\n" + message); sw.Flush(); sw.Close(); } catch (Exception ex) { throw ex; } } /// <summary>Returns the method name where the exception occurred /// <param name="ex">The <c>exception</c> must be passed as a parameter</param> /// <returns>returns the method name where the exception occurred as a <c>string</c></returns> /// </summary> public static string getExceptionGeneratedMethod(Exception ex) { var s = new StackTrace(ex); var thisasm = Assembly.GetExecutingAssembly(); string methodname = s.GetFrames().Select(f => f.GetMethod()).First(m => m.Module.Assembly == thisasm).Name.ToString(); return methodname; } /// <summary>Returns the line number on which the exception occurred /// <param name="ex">The <c>exception</c> must be passed as a parameter</param> /// <returns>returns the line number where the exception occurred as an <c>integer</c> value</returns> /// </summary> public static int getExceptionGeneratedLineNumber(Exception ex) { var st = new StackTrace(ex, true); var frame = st.GetFrame(0); int line = frame.GetFileLineNumber(); return line; } } Answer: Reusable methods Notice that you have several methods that essentially do the same thing.
Log and both writeCustomErrorLog methods follow the same approach: StreamWriter sw = null; try { sw = new StreamWriter("some file path", true); sw.WriteLine("some string"); sw.Flush(); sw.Close(); } catch (Exception ex) { throw ex; } This can easily be put into a reusable method: private void WriteMessageToFile(string filepath, string message) { StreamWriter sw = null; try { sw = new StreamWriter(filepath, true); sw.WriteLine(message); sw.Flush(); sw.Close(); } catch (Exception ex) { throw ex; } } And then your methods become much simpler and less repetitive: public void writeCustomErrorLog(string location, string message,string fileName) { string filePath = location + "\\" + fileName +".txt"; string logMessage = DateTime.Now.ToString() + " : " + "\n" + message; WriteMessageToFile(filePath, logMessage); } Catching and throwing try { // ... } catch (Exception ex) { throw ex; } Catching an exception should be done when you want to handle the exception. Throwing an exception should be done when you don't want to handle the exception (or raise a new exception). There is no point to catching an exception, only to then throw it. That is essentially like queueing at a supermarket checkout when you don't have any items you want to buy. There's no point to doing so. The try/catch can be removed altogether in these cases. Note that the try/catch could be useful if you actually did something with it: try { // ... } catch (Exception ex) { Trace.Write("Exception occurred here!"); throw; } Note the difference between throw ex; and throw;. throw ex; resets the stacktrace to where you call throw ex;. This effectively removes deeper stack trace information. throw; retains the stack trace of the exception as it was initially raised. This does not remove data. Throwing an existing exception is rarely a good idea. Hardcoding AppDomain.CurrentDomain.BaseDirectory + "\\Log.txt" AppDomain.CurrentDomain.BaseDirectory + "\\PassiveLog.txt" Don't hardcode your location, nor your filename.
You intend to use this in future projects. It's not unforeseeable that you're going to want to decide where to put the file, or what you should name it (especially if you want to use more than one log file in the same application). Also, I have no idea what a "passive log" is. The name isn't very descriptive. String concatenation DateTime.Now.ToString() + " : " + "\r\nSource " + ex.Source.ToString().Trim() + "\r\nMessage : " + ex.Message.ToString().Trim() + "\r\nInner Exceptions: " + Convert.ToString(ex.InnerException) + "\r\nException thrown from: " + getExceptionGeneratedMethod(ex) + "\r\nLine Number: " + getExceptionGeneratedLineNumber(ex) + "\r\nStack Trace: " + ex.StackTrace.ToString() Do not concatenate strings with +. Strings are immutable. That means that you can't change a string, you can only create a new string (and the old one will be discarded). Let's take a simple example: "a" + "b" + "c" + "d" These concatenations are done step by step. Look at the needed memory allocation: Create a 1 character string (a) Create a 1 character string (b) Create a 2 character string (a+b) Create a 1 character string (c) Create a 3 character string (ab+c) Create a 1 character string (d) Create a 4 character string (abc+d) To concatenate 4 strings, you've had to allocate 7 strings. For a 4 character result, you've effectively allocated 13 characters' worth of strings. This problem quickly grows out of proportion and leads to bad performance, massive memory usage, and eventual OutOfMemoryExceptions being thrown. Furthermore, avoiding this is very simple: Option 1 - String.Format This is more useful for cases where you are replacing a fixed set of placeholders.
string result = String.Format("{0} is the father of {1}", nameOfFather, nameOfChild); Note that interpolated strings are a fairly recent addition that simplify the syntax: string result = $"{nameOfFather} is the father of {nameOfChild}"; Option 2 - StringBuilder This is more useful for cases where you dynamically generate a string, instead of replacing a fixed set of placeholders: StringBuilder sb = new StringBuilder(); sb.Append(nameOfFather); sb.Append(" is the father of "); sb.Append(nameOfChild); string result = sb.ToString(); I repeat: Do not concatenate strings with +. Especially since interpolated strings have been added, there's no reasonable "simpler syntax" argument for using + concatenation. Addendum Piedar made a really interesting remark in the comments. Apparently, string + string concatenation is now converted to String.Concat by the compiler. This effectively negates the issue I'm pointing at. However, I still urge you to not do + concatenation on big strings anyway, from a readability perspective. value + " seconds" is readable enough, but you're pasting a lot of different things together and the code starts looking bulky and ugly. However, this is a style (and, in extreme cases, readability) argument, not a technical one. Statics Everything you've listed is static. Why? This issue is strongly related to why you hardcoded your values. You think that your single log is a catch-all for any future application's needs and you'll never need to tweak it or change it. You're effectively making it so that a future consumer of your library has no configuration options. This is inherently bad design. This is equivalent to Word being able to save files in a single (hardcoded) directory, or Firefox telling you what your homepage should be without allowing you to change it. Good applications (and libraries) give control to the user, they don't tell the user what to do.
Note: there is nothing wrong with providing a default behavior, but you need to give your user the option of changing the default behavior if they so choose. Separation of concerns Other than not allowing the user to change key settings, you're also handling too much here. Your one class is handling many responsibilities: Deciding where to put the file Accessing and writing to the file Deciding how to format the log messages These things should ideally be split into separate classes. I understand that the need for separating this into classes is not that great at the moment. However, once you implement the other suggested changes (configuration options), your code will quickly grow to a size where the need for separation becomes more apparent. Ideally, this is a better logging approach: LogFile allows you to create a log with a specific configuration (filename, location) ExceptionLogMessageFormatter takes an exception and turns it into a log message (string). Logger takes a pre-formatted string message and writes it to a pre-configured logfile. You're currently only logging exceptions, but the need for separation becomes more apparent when you want to also log other objects. If you want to e.g. log an object of type Person, you'll create a separate PersonLogMessageFormatter class, but you will be able to reuse the same LogFile class (a different file with different settings, but the same class in code!) and Logger class (which logs the string message to this different log). Don't reinvent the wheel! This is the most important lesson to draw here. However, it does come with a caveat, which I will mention at the end. What you've built here is a fairly simple tool, for a problem that is not new to the field of software development. (That's not an insult - in case you interpreted it as such). Odds are that someone is already going to have created a library to handle this exact job. 
And they have: NLog, Log4Net, Elmah. These tools are very well developed, offer an incredible amount of configurability, and are able to handle fringe cases that you probably haven't even thought of but will eventually have to implement a solution for. Personally, I'm a big fan of NLog. I've never come across a need for logging that could not be handled by NLog. I like how it works, and I find that it gives me a perfect balance of good default behavior and exceedingly configurable custom behavior if I need it. If you were going to write a story about a world based on medieval fantasy with magic and multiple humanoid races, I would suggest that you watch e.g. the Lord of the Rings movies (or read the books of course). Even if your story is going to be different, it gives you a nice overview of the ins and outs of your story's setting, pitfalls to avoid, how to create interesting and immersive plots, ... Similarly, if you want to write a logging framework, I suggest that you first try working with existing tools. Even if your tool is going to do something new, seeing the existing framework gives you a good overview of what features you're going to need to add to your tool to make it user friendly, and it helps you draw the line on how many configuration options you need to add to it to fulfill most needs. Caveat However, writing your own logging framework from scratch (with no knowledge of existing frameworks) can be rewarding in and of itself as a training exercise. Based on your code, I surmise that you are a beginner, at least to creating reusable libraries. So if writing this library benefits you for training purposes, don't let my review stop you. But if your only goal is to create a functionally usable tool, then I do suggest first looking for existing solutions, either to already solve your problem or even just to get some inspiration/ideas.
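The LogFile / formatter / Logger separation proposed earlier in the answer can be sketched very compactly. The class names below mirror the answer's proposal; the API details (constructor injection of a writable stream, a single format/append/log method each) are my own assumptions, and Python is used only for brevity — the shape translates directly to C#.

```python
import datetime
import io


class LogFile:
    """Knows where the log lives and how to append to it (injected, not hardcoded)."""
    def __init__(self, stream):
        self.stream = stream

    def append(self, line):
        self.stream.write(line + "\n")


class ExceptionLogMessageFormatter:
    """Turns an exception into a log message string."""
    def format(self, exc):
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        return f"{stamp} : {type(exc).__name__}: {exc}"


class Logger:
    """Writes pre-formatted messages to a pre-configured log file."""
    def __init__(self, log_file):
        self.log_file = log_file

    def log(self, message):
        self.log_file.append(message)


# usage: each concern can be swapped out independently
buf = io.StringIO()            # stands in for a real file on disk
logger = Logger(LogFile(buf))
try:
    1 / 0
except ZeroDivisionError as exc:
    logger.log(ExceptionLogMessageFormatter().format(exc))
```

Logging a different kind of object then only requires a new formatter class; the LogFile and Logger classes are reused unchanged.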
{ "domain": "codereview.stackexchange", "id": 30654, "tags": "c#, error-handling, logging" }
Related links for WordPress posts
Question: I have written the following class which is part of a pagination system in WordPress. The class works as expected and also works as expected when used in the system. Just as background, the class performs the following tasks: Takes an array of post IDs and creates links to the posts whose IDs are supplied in the array. Uses WordPress functions get_permalink(), add_query_arg() and get_post_field() to retrieve the appropriate information to build the links. I need a review to assess the correctness of my code, and also how correct my use of PHPDoc blocks and the information in these doc blocks is. Here is my class: (I have left out the interface) <?php namespace PG\Single\Post\Navigation; /** * PostLinks class * * Creates clickable post links for post ID's given * * @param (array) $postIDs Array of post IDs * @param (array) $extraQueryVars Array of query variables to add to the URL's * @param (array) $args Array of arguments. See below * - @param (bool) previous Whether or not to get adjacent post older or newer to current post Default true * - @param (bool) boundary Whether or not to get the boundary posts Default false * - @param (bool) first Whether or not to get the first or last post when the boundary parameter is set to true Default true * - @param (string) anchorText Text to be used as an anchor text Default %anchor uses post title * - @param (string) postLinkText Text to be used as link text Default %text uses post title * - @param (string) spanTextOldest Text to be used as oldest post link Default Oldest post: * - @param (string) spanTextNewest Text to be used as newest post link Default Newest post: * - @param (string) spanTextPrevious Text to be used as older post link Default Older post: * - @param (string) spanTextNext Text to be used as newer post link Default Newer post: * * @since 1.0.0 */ class PostLinks implements PostLinksInterface { /** * @since 1.0.0 * @access protected * @var (array) $postIDs */
protected $postIDs; /** * @since 1.0.0 * @access protected * @var (array) $extraQueryVars */ protected $extraQueryVars; /** * @since 1.0.0 * @access protected * @var (array) $args */ protected $args; /** * Sets the default arguments. * * @since 1.0.0 * @access protected * @var (array) $defaults */ protected $defaults = [ 'previous' => true, 'boundary' => false, 'first' => true, 'anchorText' => '%anchor', 'postLinkText' => '%text', 'spanTextOldest' => 'Oldest post: ', 'spanTextNewest' => 'Newest post: ', 'spanTextPrevious' => 'Older post: ', 'spanTextNext' => 'Newer post: ', ]; /** * Public constructor method * * @param (array) $postIDs Array of post IDs * @param (array) $extraQueryVars Array of query variables to add to the URL's * @param (array) $args Array of arguments. See below * - @param (bool) previous Whether or not to get adjacent post older or newer to current post Default true * - @param (bool) boundary Whether or not to get the boundary posts Default false * - @param (bool) first Whether or not to get the first or last post when the boundary parameter is set to true Default true * - @param (string) anchorText Text to be used as an anchor text Default %anchor * - @param (string) postLinkText Text to be used as link text Default %text * - @param (string) spanTextOldest Text to be used as oldest post link Default Oldest post: * - @param (string) spanTextNewest Text to be used as newest post link Default Newest post: * - @param (string) spanTextPrevious Text to be used as older post link Default Older post: * - @param (string) spanTextNext Text to be used as newer post link Default Newer post: * * @since 1.0.0 */ public function __construct($postIDs = null, $extraQueryVars = null, $args = []) { $this->setPostIDs($postIDs); $this->setExtraQueryVars($extraQueryVars); $this->setArgs($args); } /** * Setter setPostLinks() * * Sets an array of posts IDs * * @since 1.0.0 * @param $postIDs * @return $this */ public function setPostIDs($postIDs) { $this->postIDs = 
filter_var($postIDs, FILTER_VALIDATE_INT, ['flags' => FILTER_FORCE_ARRAY]); return $this; } /** * Returns the posts IDs. * * @since 1.0.0 * @return (array) $this->postIDs */ public function getPostIDs() { return $this->postIDs; } /** * Setter setExtraQueryVars() * * Sets an array of additional query variables to add to the URL's * * @since 1.0.0 * @param $extraQueryVars * @return $this */ public function setExtraQueryVars($extraQueryVars) { $this->extraQueryVars = $extraQueryVars; return $this; } /** * Returns the array of query variables. * * @since 1.0.0 * @return (array) $this->extraQueryVars */ public function getExtraQueryVars() { return $this->extraQueryVars; } /** * Setter setArgs * * Sets the arguments and merges them with the defaults and also cast array to an object. * * @since 1.0.0 * @param $args * @return $this */ public function setArgs($args) { $this->args = (object) array_merge($this->defaults, $args); return $this; } /** * Returns an object of arguments. * * @since 1.0.0 * @return (object) $this->args */ public function getArgs() { return $this->args; } /** * Conditional tag to check if the boundaryPosts parameter is set to true. Any other value returns false. * * @access private * @since 1.0.0 * @return (bool) true on success false on failure */ private function isBoundary() { return $this->args->boundary === true ? true : false; } /** * Conditional tag to check if the previous parameter is set to true. Any other value returns false. * * @access private * @since 1.0.0 * @return (bool) true on success false on failure */ private function isPrevious() { return $this->args->previous === true ? true : false; } /** * Conditional tag to check if the first parameter is set to true. Any other value returns false. * * @access private * @since 1.0.0 * @return (bool) true on success false on failure */ private function isFirst() { return $this->args->first === true ? 
true : false; } /** * Text to be used as post link pre-text according to parameter values set * by previous, boundary and first * * @access private * @since 1.0.0 * @return (string) $text */ private function spanTextText() { $text = null; if ($this->isBoundary() !== true) { if ($this->isPrevious()) { $text = filter_var($this->args->spanTextPrevious, FILTER_SANITIZE_STRING); } else { $text = filter_var($this->args->spanTextNext, FILTER_SANITIZE_STRING); } } else { if ($this->isFirst()) { $text = filter_var($this->args->spanTextOldest, FILTER_SANITIZE_STRING); } else { $text = filter_var($this->args->spanTextNewest, FILTER_SANITIZE_STRING); } } return $text; } /** * CSS classes to be used for post links according to parameter values set * by previous, boundary and first. * * @access private * @since 1.0.0 * @return (string) $classes */ private function linkclasses() { $classes = null; if ($this->isBoundary() !== true) { if ($this->isPrevious()) { $classes = 'previous'; } else { $classes = 'next'; } } else { if ($this->isFirst()) { $classes = 'oldest'; } else { $classes = 'newest'; } } return $classes; } /** * Create the post links according to input values of the class * * @since 1.0.0 * @return (string) $link */ public function links() { $link = ''; if ($this->postIDs !== null) { $link .= '<div class"paginate-nav-links ' . $this->linkclasses() . '">'; foreach ($this->postIDs as $key=>$postID) { /* * Get post post_title according to ID. * * @uses get_post_field() * @see http://codex.wordpress.org/Function_Reference/get_post_field */ $postTitle = get_post_field('post_title', $postID); /* * Test to see if WP_Error is not triggered. If so, continue */ if (is_wp_error($postTitle)) continue; /* * Made it to here, build the post links. */ if ($this->extraQueryVars === null) { /** * Get the post permalink. 
* * @uses get_permalink() * @see https://codex.wordpress.org/Function_Reference/get_permalink */ $url = get_permalink($postID); } else { /* * Test if $this->extraQueryVars is a valid array. Throw exception on error */ if (!is_array($this->extraQueryVars)) { throw new \InvalidArgumentException( sprintf( __('%s: The value of $extraQueryVars should be an array. Please recheck the the $extraQueryVars input'), __METHOD__ ) ); } /* * If an array of query vars is set. sanitize the array and add to the URL */ foreach ($this->extraQueryVars as $k=>$v) $vars[filter_var($k, FILTER_SANITIZE_STRING)] = filter_var($v, FILTER_SANITIZE_STRING); /** * Add the custom query variables to the post URL * * @uses add_query_arg() * @see https://codex.wordpress.org/Function_Reference/add_query_arg */ $url = add_query_arg($vars, get_permalink($postID)); } /* * If defaults are used, $anchor and $linkText will default to post titles */ $anchor = $this->args->anchorText == '%anchor' ? $postTitle : $this->args->anchorText; $linkText = $this->args->postLinkText == '%text' ? $postTitle : $this->args->postLinkText; if ($key === 0) { /* * Don't print any mark-up if $this->spanTextText() is empty or null */ if ($this->spanTextText()) { $link .= '<div class"paginate-nav-links ' . $this->linkclasses() . '-text">'; $link .= filter_var($this->spanTextText(), FILTER_SANITIZE_STRING); $link .= '</div>'; } $link .= '<div class"paginate-nav-links links">'; } $link .= '<div class"paginate-nav-links link-' . ($key + 1) . '">'; $link .= '<a href="' . filter_var($url, FILTER_SANITIZE_URL) . '" title="' . filter_var($anchor, FILTER_SANITIZE_STRING) . 
'">'; $link .= filter_var($linkText, FILTER_SANITIZE_STRING); $link .= '</a>'; $link .= '</div>'; if (!array_key_exists(($key + 1), $this->postIDs)) { $link .= '</div>'; } } $link .= '</div>'; } /* * return null if no post links exists */ if (!$link) $link = null; /* * return a string holding the post links */ return $link; } } Answer: I haven't really looked into your code that much, but the first thing I noticed is that you have a number of method arguments (in the constructor, for example) that you expect to be arrays. That's fine, sometimes you want an array to be passed. But if you really need an argument to be an array, enforce it by using a type hint: public function thisMethodRequiresAnArray(array $argument) { return is_array($argument);//will ALWAYS be true } Whenever this method is called and the provided argument is not an array, an error will be raised. However, in the case of your constructor, it's clear that the array you expect the user to pass needs to be in a specific format. You require certain keys to be present, and these keys will have an impact on the way your class behaves. I'd strongly suggest you write a PostLinksConfig class, which defines all of the properties your class requires, and initializes them to a default value (the default array you declare in your class can serve as a template here). That way, you can type-hint for that config class: public function __construct(array $postIds = null, array $extraQueryVars = null, PostLinksConfig $args = null) { } With that in place, you're certain that $args->getPrevious(); will always return a value (either the default, or a value the user set on the PostLinksConfig instance). What's more, if you write the config class with protected/private properties, and use setters, you can validate, format and normalize the parameters the user provides you with, and throw exceptions if a really outlandish value is being set.
For example: you expect previous to be a boolean, so in the config class, your setter could look like this: public function setPrevious($prev) { //either check type: if (!is_bool($prev)) { throw new InvalidArgumentException('Previous MUST be a bool'); } //or cast the value to a boolean $this->previous = (bool) $prev; return $this; } In addition to giving you more control over the format of the arguments the user passes to your methods, this will cut down on the rather verbose doc-blocks you have now. Your constructor can be documented like so: /** * Constructor - Some description here... * * @param array $postIds = null <-- add the default value * @param array $extraQueryVars = null * @param PostLinksConfig $args = null */ public function __construct() {} There's just no need to document what $args should/can contain, as this is already documented by the type-hint: the class dictates the format, and if its methods are documented, then the @param annotation should suffice. I've also noticed you have some @return annotations that look like this: /** * @return (array) $this->postIDs */ I'd simply write @return array. The @return annotation is mostly there for the benefit of the user. If I'm using any half-decent IDE, it'll use the doc-blocks for auto completion, and it'll let me know what the return value of any given method is. To be able to do so, the key parts are @return, to signal what information the annotation will provide, and array (or any other type), to let me know what is being returned. I couldn't give a damn if the return value is coming from property X or Y of that class, it could even be a hard-coded array for all I care. Classes are about abstracting the nitty-gritty away from the user. I don't need to know how a class does what it does, I just need to know how to use it, and what to expect from it.
If nothing else, by leaving the property names out of the annotations, you save yourself the bother of having to change your annotations whenever you decide to rename a property. I know, it's not a great argument, but it could be something you might want to consider. Not only in this case, but when writing code in general. Be cleverly lazy: always ensure you're not doing work that has been done before, and write code in such a way that maintaining, debugging and refactoring it will cost you as little effort as humanly possible. A couple of little nit-picks: I've also spotted code like this: return $this->args->first === true ? true : false; Which could be written a lot shorter: return $this->args->first; And assuming you're going to refactor your code to use a config object: return $this->args->isFirst();//or getFirst(), whichever works best for you There's a lot of validation (in the form of filter_var calls) going on, which is good, but again: using a config class, with validating setters would make most (if not all) of this validation redundant in the class you've posted here. Another reason why adding a secondary class would be well worth considering. Last but not least, I'll just leave you with the less-than-helpful remark that the links method should be reworked, and possibly be separated out into another object. Generating markup by stringing it together is a dangerous game to play. It's error prone, and a nightmare to maintain. It might be best to write a utility class that builds around one of PHP's own extensions that handle markup a lot better (DOMDocument or SimpleXMLElement to name a couple). These classes provide you with a (relatively) clean API, and a lot more capabilities than what you're doing now. Adding nodes to the DOM at any given point becomes quite easy if you take the time to get acquainted with their respective API's. 
As ever, the DOM APIs are a bit verbose, and sometimes cumbersome, but in the long run, I promise you it's well worth learning how to use them. I might come back to this answer and look at your code a bit more attentively, because right now, it really is time for me to go to bed. Have fun, hope this helps you along a bit. Update: First off, I'm sorry to have kept you waiting, but I'm rather busy lately, so I couldn't get back to you sooner. About that links method, though I've not been able to go through it all in great detail, I've noticed a couple of things: you're only creating div and a elements; all of the divs share the paginate-nav-links class (which might be redundant if you rewrite part of the CSS, but that's not up for review here); the a tags are only assigned an href and title attribute, so that logic could be separated out into a non-public method (protected or private, depending on your needs); and the entire markup string you're constructing is contained within a single div element. So, where to start: I'd suggest using the DOMDocument API, and in this case the DOMElement class in particular. I'd start by initializing $links to null, then simply start using the DOM API: if ($this->postIDs !== null) { $links = new DOMElement('div'); //either add the class attribute like so: $links->setAttribute('class', $classString); //or opt for a more "scholarly" approach: $classAttr = new DOMAttr('class', $classString); $links->setAttributeNode($classAttr); } The benefit of using the attribute node is that you can add classes later on, depending on what is required; you can also clone it at a certain point, and use that clone to build on for other DOM elements. The main reason for my bringing this up is that I've noticed a lot of class values being the same. At the very least, I'd consider using a sprintf here and there to avoid having to duplicate string constants all over the place, replacing: '<div class"paginate-nav-links ' .
$this->linkclasses() .'">'; With: //outside of any loop $classFormat = 'paginate-nav-links %s'; $classNames = sprintf( $classFormat, $this->linkclasses() ); At least, you'll save yourself the trouble of having to refactor the entire method whenever the CSS classes change. Next, building the rest of the DOM is quite easy, depending on where you want to add a given child to the DOM: the links, for example, are children of a div that is in turn the child of a div (created when $key === 0 and closed after processing the links), which is again a child of the outer div we've created in the snippet above. Instead of constructing the div containing the "spanTextText()" value, and all of the links, inside of a loop, I'd construct it outside of it, simply because the if ensures that there will be at least one link. Same goes for that spanTextText business, so: if ($this->spanTextText()) { $span = new DOMElement('div', $this->spanTextText());//set its inner value $span->setAttribute( 'class', sprintf( $classFormat, $this->linkclasses().'-text' )); $link->appendChild($span);//add to the main $link element } $container = new DOMElement('div'); $container->setAttribute('class', sprintf($classFormat, 'links')); $link->appendChild($container); Inside the loop, you'll simply be appending child elements to the $container object, and each of these elements can contain children of their own (which they will: the actual links).
Now say that, for some reason, the $container object was already added to the DOM, but that you'd actually have to run the loop to get to the spanTextText value. In that case, just keep the code above (creating and adding $container to $link), and simply add the new child where you want it (before $container in this case): foreach ($this->postIDs as $key=>$postID) { //do stuff if ($key === 0 && $this->spanTextText()) { //create $span like before, then: $link->insertBefore($span, $container);//inserts $span before $container } //adding a link to the container $inner = new DOMElement('div'); $inner->setAttribute('class', sprintf($classFormat, 'link-'.($key+1))); $a = new DOMElement('a', $linkText); $a->setAttribute('href', $url); //add link to inner div $inner->appendChild($a); //add inner div to container (which is already added to $link) $container->appendChild($inner); //you can still update the nodes you've added: $a->setAttribute('title', $anchor); } So as you can see, the DOM API is pretty flexible, and more than capable of constructing the markup you need. However, to get to the actual markup string, you'll need to use the DOMDocument::saveHTML method, which isn't that hard either.
However, at this point I'm not sure which approach will work best/as expected, so I'll list all of the possible approaches, based on the docs: //after loop: $dom = new DOMDocument(); //you may or may not need this bit: $dom->appendChild($link);//add to own DOM $htmlString = $dom->saveHTML($link);//should yield markup for $link //alternatively, one that I know for sure works $body = $dom->createElement('body'); $dom->appendChild($body); $body->appendChild($link); //or, one that I know for sure will work: $htmlString = substr( $dom->saveXML($body), 6, -7//6 and -7 => strlen('<body>'), -1*strlen('</body>') ); The saveXML usage is documented in the PHP manual. Anyway, that's how I'd generate markup: by using a DOM API instead of having to rely on error-prone, hard-to-maintain code that strings together markup.
{ "domain": "codereview.stackexchange", "id": 12731, "tags": "php, classes, wordpress, pagination" }
What is necessary for a causal set to be manifold-like?
Question: A causal set is a poset which is reflexive, antisymmetric, transitive and locally finite. As a motivation, there is a programme to model spacetime as fundamentally discrete, with causal sets providing the underlying structure. Typically this is constructed by sprinkling elements into an existing spacetime (a Poisson process), endowing these elements with an ordering given by the causal cones, and removing the spacetime. Volume is then given by a counting metric, which together with the causal structure is enough to build a geometry. By the Poisson nature of this process, the distribution is Lorentz-invariant. This only makes sense in nature if the causal set is manifold-like, by which we mean it can be faithfully embedded into a manifold, such that the element count gives volume. Precisely when is a causal set manifold-like: what are the necessary conditions for the existence of such an embedding? (Are there interesting sufficient conditions?) Do they have natural interpretations? [This should be tagged quantum-gravity I think.] Answer: In particular, the following reference attempts to construct a useful necessary condition: S. Major, D.P. Rideout, S. Surya, Stable Homology as an Indicator of Manifoldlikeness in Causal Set Theory, arXiv:0902.0434 (continuum topology and homology). I do not know of any sufficient conditions, beyond giving an explicit embedding. Such a thing would be great!
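As a concrete illustration of the sprinkling construction described above, here is a small sketch of my own (not from the question or answer): a Poisson process in a square of 1+1-dimensional Minkowski space, with the order given by the light cones. The axioms from the definition can then be checked directly on the sampled set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson sprinkling into the unit square of 1+1 Minkowski space (c = 1):
# the number of elements is Poisson-distributed, their positions uniform.
n = rng.poisson(lam=50)
t, x = rng.uniform(0.0, 1.0, size=(2, n))

# Causal relation R[i, j] = True iff element i causally precedes element j,
# i.e. j lies in the future light cone of i (dt > |dx|), plus reflexivity.
dt = t[None, :] - t[:, None]
dx = x[None, :] - x[:, None]
R = (dt > 0) & (np.abs(dx) < dt)
R |= np.eye(n, dtype=bool)

# The poset axioms hold by construction:
comp = (R.astype(int) @ R.astype(int)) > 0   # R composed with R
transitive = not (comp & ~R).any()           # i<j and j<k imply i<k
antisymmetric = ((R & R.T) == np.eye(n, dtype=bool)).all()

print("elements:", n, "relations:", int(R.sum()))
print("transitive:", transitive, "antisymmetric:", antisymmetric)
```

Local finiteness is automatic here because the sprinkled region is bounded; the subtle question in the text, which conditions make such a set embeddable back into a manifold, is of course not touched by this toy construction.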
{ "domain": "physics.stackexchange", "id": 865, "tags": "mathematical-physics, quantum-gravity" }
Can the Hermitian operator be related to state space to describe physical phenomena?
Question: Can space-time, in which phenomena occur, and the space of states, in which phenomena are described by means of Hermitian operators, be related? I suspect they can, because the Hermitian operator is built on linear spaces that associate real states with vectors, but I'm not sure. Answer: The standard Dirac bra-ket coordinate picture is $$ H=\iint\!\! dx\, dx'~|x\rangle \langle x|H|x'\rangle\langle x'| , $$ so that, considering $\langle x|\psi\rangle =\psi(x)$ and $h(x,x')= \langle x|H|x'\rangle$, you readily have $$ |\phi\rangle =H|\psi \rangle ~~~\leftrightarrow ~~~\phi(x)= \int \!\! dx' ~~h(x,x') \psi (x'). $$ That is, you represent Hilbert-space states and operators through coordinate-space functions and convolutions. Beyond this, there is a much subtler and quite different formulation which maps Hilbert-space operators to phase-space functions, through the Wigner map. This undergirds a qualitatively distinct formulation of QM, equivalent to the Hilbert-space one you are studying, but I suspect this is beyond your scope. In this formulation, the phase-space convolution law is very, very different, and is called the "star product".
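To make the kernel picture above concrete, here is a small discretized illustration of my own (not from the answer): on a grid, the kernel $h(x,x')$ becomes a matrix (with the integration measure absorbed into it) and $\phi(x)=\int dx'\,h(x,x')\psi(x')$ becomes a matrix-vector product. As an example operator I take a finite-difference harmonic-oscillator Hamiltonian $H=-d^2/dx^2+x^2$, whose ground state $e^{-x^2/2}$ has eigenvalue 1 in these units.

```python
import numpy as np

N, L = 400, 10.0
x, dx = np.linspace(-L / 2, L / 2, N, retstep=True)

# Matrix form of h(x, x') = <x|H|x'> for H = -d^2/dx^2 + x^2:
# tridiagonal finite-difference Laplacian plus a diagonal potential.
H = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2 + np.diag(x**2)
hermitian = np.allclose(H, H.T)     # Hermitian (real symmetric on this grid)

psi = np.exp(-x**2 / 2)             # ground state of this H, eigenvalue 1
phi = H @ psi                       # discretized  phi(x) = \int dx' h(x,x') psi(x')

# Away from the grid boundary, H psi = 1 * psi up to finite-difference error
interior = np.abs(x) < 3
err = np.max(np.abs(phi[interior] - psi[interior]))
print("Hermitian:", hermitian, " max |H psi - psi|:", err)
```

The same matrix-times-vector structure is what any numerical quantum code reduces to; the Wigner/phase-space formulation the answer mentions is a genuinely different representation and is not captured by this sketch.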
{ "domain": "physics.stackexchange", "id": 84174, "tags": "quantum-mechanics, hilbert-space, operators, quantum-information" }
How does an electron actually find the stationary states (eigenstates) through time evolving using Schrodinger wavefunction?
Question: I have a question regarding how the stationary states (eigenstates) are arrived at in Schrodinger's wavefunction, please. In the graph below, taken from https://www.youtube.com/watch?v=2V0Xmc0ow80&list=PLdCdV2GBGyXM0j66zrpDy2aMXr6cgrBJA&index=4: we see the resulting position wave shape distribution over a long time period using the potential constraint V(x) = x^2 and the initial wave shape W0(x) = [(2x^3-3x)*exp(-x^2/2)/sqrt(1.25323), 0]. The array indexing refers to [Re_part, Im_part]. The resulting probability function shape (in green), given this initial probability distribution, which we deliberately chose to be W0(x), remains the same over time regardless of how the first R part and the second I part are changing individually. This is referred to as the stationary (or eigen) state, if we only consider the "real" physical green function above (which is the shape of the probability distribution of finding the particle at location x), and ignore the red and blue internal components, which are not individually discernible by us. My question is this: the stable state is achieved by first inputting the initial probability shape of W0(x) = [(2x^3-3x)*Math.exp(-x^2/2)/Math.sqrt(1.25323), 0], and letting the shape evolve over time. If we start with a random "incorrect" initial shape for W0(x), the resulting evolution over time will probably not automatically settle into the steady/stationary state by itself. So, supposing we just throw an electron into this V(x)=x^2 constraint, in which the electron will probably not find itself in this magical Hermite state to begin with, by what process will the particle discover and settle over time into the stationary (eigen) state like the green wave distribution above?
If the Schrodinger equation claims a complete description of the time evolution of the probability wave, then shouldn't the starting condition make no difference at all, with the particle always ending up in a stationary ("stable") state no matter what the starting W0(x) looks like, just by following the rules of the Schrodinger wavefunction? Answer: How electrons come to be in the lowest energy level of a hydrogen atom, whether they start off in one definite higher energy level or in a superposition of two or more, is a very good question. The reason is that your standard $s, p, d, f$ orbitals are stationary states of the system including just the electron and proton, not of the system that includes the electromagnetic field as well. The Wikipedia article on the Jaynes-Cummings model gives a good description of what happens to the contributions from excited states: initially they decay, and then it talks about a "revival time". But this time is on the order of $\omega^{-1}$, where $\omega$ is the minimal frequency that the electromagnetic field can have because it is in a cavity. Atoms in the continuum have $\omega \to 0$, so that the process looks like pure decay.
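The key point can be checked numerically in a few lines. In this sketch of mine (not part of the answer), I use the first two harmonic-oscillator eigenfunctions with hbar = m = omega = 1: a pure eigenstate evolves only by a phase, so |psi|^2 (the green curve in the question) never moves, while any superposition makes |psi|^2 oscillate forever. The Schrodinger equation alone never relaxes a "wrong" initial shape into a stationary one; that is exactly why the coupling to the electromagnetic field is needed for real decay.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)

# First two harmonic-oscillator eigenfunctions (hbar = m = omega = 1), unnormalized:
phi0 = np.exp(-x**2 / 2)          # energy E0 = 0.5
phi1 = x * np.exp(-x**2 / 2)      # energy E1 = 1.5

def density(c0, c1, t):
    """|psi(x,t)|^2 for psi = c0 phi0 e^{-i E0 t} + c1 phi1 e^{-i E1 t}."""
    psi = c0 * phi0 * np.exp(-0.5j * t) + c1 * phi1 * np.exp(-1.5j * t)
    return np.abs(psi) ** 2

# A pure eigenstate: the probability density is frozen in time.
drift_eigen = np.max(np.abs(density(1, 0, 7.3) - density(1, 0, 0.0)))

# A superposition: the density sloshes with the beat frequency E1 - E0.
drift_super = np.max(np.abs(density(1, 1, np.pi) - density(1, 1, 0.0)))

print(drift_eigen, drift_super)
```

The first number is zero to machine precision; the second is of order one, and it stays that way for all time under pure Schrodinger evolution.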
{ "domain": "physics.stackexchange", "id": 83369, "tags": "quantum-mechanics, experimental-physics, electrons, schroedinger-equation, simulations" }
Problem publishing at cmd_vel
Question: Hello, I have written the following code so I can send commands to my robot's motors, which are listening to the /labrob/cmd_vel topic:

#include "ros/ros.h"
#include "geometry_msgs/Twist.h"

int main(int argc, char **argv)
{
    ros::init(argc, argv, "vel");
    ros::NodeHandle vel;
    ros::Publisher vel_pub = vel.advertise<geometry_msgs::Twist>("/labrob/cmd_vel", 100);
    ros::Rate loop_rate(10);
    geometry_msgs::Twist msg;
    msg.linear.x = 0.1;
    msg.linear.y = 0;
    msg.linear.z = 0;
    msg.angular.x = 0;
    msg.angular.y = 0;
    msg.angular.z = 0;
    vel_pub.publish(msg);
    ros::spin();
    return 0;
}

With the rostopic pub command via the terminal, I can move the robot fine, but with the code I posted, the robot does not respond. What am I doing wrong?

Originally posted by patrchri on ROS Answers with karma: 354 on 2016-08-01

Post score: 0

Answer: I don't know what went wrong, but I modified the code to this:

#include "ros/ros.h"
#include "geometry_msgs/Twist.h"

int main(int argc, char **argv)
{
    ros::init(argc, argv, "vel");
    ros::NodeHandle vel;
    ros::Publisher vel_pub = vel.advertise<geometry_msgs::Twist>("/labrob/cmd_vel", 100);
    ros::Rate loop_rate(10);  // throttle the loop to 10 Hz
    while (ros::ok())
    {
        geometry_msgs::Twist msg;
        msg.linear.x = 0.1;
        msg.linear.y = 0;
        msg.linear.z = 0;
        msg.angular.x = 0;
        msg.angular.y = 0;
        msg.angular.z = 0;
        vel_pub.publish(msg);
        ros::spinOnce();
        loop_rate.sleep();
    }
    return 0;
}

And it worked. (The one-shot version most likely fails because the single message is published immediately after advertise(), before the subscriber has finished connecting, so it is simply lost; publishing repeatedly in a loop avoids this.)

Originally posted by patrchri with karma: 354 on 2016-08-01

This answer was ACCEPTED on the original site

Post score: 0

Original comments

Comment by Humpelstilzchen on 2016-08-03: See http://answers.ros.org/question/11167/how-do-i-publish-exactly-one-message/
{ "domain": "robotics.stackexchange", "id": 25418, "tags": "gazebo, navigation, ros-kinetic, velocity, publisher" }
Why does a simple pendulum or a spring-mass system show simple harmonic motion only for small amplitudes?
Question: I've been taught that in a simple pendulum, for small $x$, $\sin x \approx x$. We then derive the formula for the time period of the pendulum. But I still don't understand the physics behind it. Also, there's no angle $x$ involved in a spring-mass system, so why do we consider it an SHM only for small amplitudes? Answer: A simple pendulum does not strictly show simple harmonic motion unless you allow some approximations and uncertainties. It approximately behaves as a harmonic oscillator for small amplitudes. An object is said to be executing simple harmonic motion (no damping; not a forced oscillation) if and only if it satisfies the following condition: $$\frac{d^2 \phi}{dt^2} = -\omega^2 \phi \tag{1}$$ where $\phi$ is a variable quantity such as displacement, angular displacement, etc. Does a pendulum execute simple harmonic motion? The equation of motion for the pendulum can be written as: $$\vec{F} = {m\vec{g}} + \vec{T}$$ We know that the pendulum bob will move in a circle (assume that the string does not stretch); therefore, there is no motion in the direction of the string. This means that the component of the net force along the string provides the centripetal force: $$F_{radial} = T - mg\cos \theta = \frac{mv^2}{L}$$ The acceleration along the circular arc can be written as: $$F_{tangential} = ma = mg \sin \theta$$ $$a_{tangential} = a = g \sin \theta \tag{2}$$ The tangential acceleration can be expressed in terms of the angle $\theta$ as follows: $$v = L \frac{d\theta}{dt}$$ $$\frac{dv}{dt} = a = -L\frac{d^2\theta}{dt^2} \tag{3}$$ We have a minus sign because the tangential component of gravity always acts to decrease the angle $\theta$; it is a restoring acceleration. Substituting $(3)$ into $(2)$, you get $$L\frac{d^2\theta}{dt^2} = -g \sin \theta \tag{4}$$ If you compare equation $(4)$ with equation $(1)$, you'll notice that it does not match. This means that the pendulum bob does not execute simple harmonic motion.
However, if the amplitude is small, then the maximum value of $\theta$ is small. The small angle approximation can be stated as follows: $$\sin \theta \approx \theta$$ [Figure omitted: a plot showing that $\sin\theta$ and $\theta$ nearly coincide for small angles. Image source: Wikipedia.] Using the approximation, you can rewrite equation $(4)$ as $$L\frac{d^2\theta}{dt^2} = -g\theta \tag{5}$$ The above equation now matches equation $(1)$ exactly, with $\omega^2 = g/L$. Therefore, for small amplitudes, the pendulum executes simple harmonic motion to a good approximation. Does a spring-mass system execute simple harmonic motion? If the spring obeys Hooke's law, then it always executes simple harmonic motion. Hooke's law states that: $$F_{restoring} = ma = - kx \tag{6}$$ It is clearly evident from the above equation that the acceleration is directly proportional to the displacement and acts in the direction opposite to the displacement. Why do we limit the amplitude of a spring-mass system? Under high strain, the spring does not obey Hooke's law. This is kinda obvious: if you stretch a spring too much, it deforms permanently. Therefore, equation $(6)$ no longer holds. If that equation does not hold, then the mass won't execute simple harmonic motion.
{ "domain": "physics.stackexchange", "id": 38113, "tags": "newtonian-mechanics, harmonic-oscillator, oscillators, approximations, anharmonic-oscillators" }
Linear SVM with slack variables: Will it find a perfect decision boundary if it exists?
Question: Suppose I use a linear Support Vector Machine with slack variables on a dataset that is linearly separable. Could it happen that the Support Vector Machine reports a solution that does not perfectly separate the classes? As an illustration: Is the situation in the picture possible for a Support Vector Machine with slack variables? Although there is a "better" boundary that allows perfect classification, the Support Vector Machine goes for a sub-optimal solution that misclassifies two samples. Answer: Having linearly separable data means that a perfectly separating hyperplane exists. The support vector machine identifies its solution by solving a convex optimization problem, typically through Lagrange multipliers and the dual formulation, and convexity guarantees that it finds the global optimum of its objective. Note that with slack variables this objective trades margin width against the slack penalty $C$, so the reported solution is optimal for that trade-off; with a hard margin, or a sufficiently large $C$, the optimum is guaranteed to separate the classes perfectly whenever that is possible.
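To see the optimization in action, here is a small sketch of my own (not from the answer): a soft-margin linear SVM trained by sub-gradient descent on the primal objective (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b)). On cleanly separable data with a reasonably large C, the solution it converges to separates the training set perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: class +1 near (2, 2), class -1 near (-2, -2).
X = np.vstack([rng.normal(2, 0.3, (20, 2)), rng.normal(-2, 0.3, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)

def train_svm(X, y, C=10.0, lr=0.01, epochs=2000):
    """Sub-gradient descent on (1/2)||w||^2 + C * sum of hinge losses."""
    w, b, n = np.zeros(X.shape[1]), 0.0, len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                      # inside the margin or misclassified
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

w, b = train_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
print("training accuracy:", accuracy)
```

Rerunning this with a very small C (say 0.001) is a good way to see the flip side: the same convex objective can then prefer a wider margin over zero training error, which is exactly the situation drawn in the question.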
{ "domain": "datascience.stackexchange", "id": 2564, "tags": "svm" }
Lasers for plasma
Question: Recently I had the same idea as given at A question about the properties of plasma and its potential use in recycling, or a similar one, rather. The basic idea is: could you turn stuff into plasma and sort the atoms by element? The answer appears to be roughly: yes, with difficulty. I have a few follow-up questions. Suppose you used pulsed lasers to focus on a point, to quickly turn that spot into a plasma. (That works, right?) You may suppose the presence of a vacuum, since I suspect that simplifies a number of things. What kind of laser would be sufficient for that? How much energy would be approximately optimal, in how much time and space? (You need a certain minimum energy per cm^2, I think, but my understanding is also that if you simply try to pour more and more energy into a spot at once, the plasma absorbs it and you don't do much more to the remaining material.) How much does the material in question affect the required temperatures? (I haven't been able to find many numbers, such as the minimum plasma temperature of iron.) Is a laser like this even approximately sufficient, or would it require a much higher-powered laser? Ballpark estimates are ok; preferable would be links to commercial lasers that would work. Answer: For ionization of your material, you need high field strength (high laser intensity). That is because in laser plasmas the dominant ionization mechanism is usually strong-field ionization, in which the strong external field facilitates electron quantum tunneling out of the potential well of the atom. So, as you say, you need to achieve a certain number of W/cm$^2$. Without plugging the numbers into the formula to check, I would suspect that you would want to reach at least about $10^{14}$W/cm$^2$; you can see this paper (not open access) or this page where they talk about the implementation of ionization into their code. So then, is the laser you suggested enough?
With 36 kW, you would need to focus the laser down to a spot size of about $0.2$ µm to reach an intensity of $10^{14}$ W/cm$^2$, which is smaller than the wavelength of the laser and hence not possible. It would be more feasible to focus a laser to ~10 µm in size, but you could perhaps get away with a somewhat lower laser intensity, which would then require around 100 kW of laser power to reach the desired ionizing field. You are also right in that you can't just pour more energy into the same spot if you want to ionize as much material as possible, since the ionization only happens where you point your laser. As for what temperature you need: most elements, especially metals, have an ionization energy of the order of 1$-$10 eV for the first electron; hydrogen famously has 13.6 eV (but that is rather high, since it only has one electron). Using Boltzmann's constant $k_B=8.6\times10^{-5}$ eV/K, these ionization energies translate to temperatures of roughly 100000 K for a fully ionized plasma (all atoms have lost one or more electrons, which would be required if you wanted to sort them). By now I also feel compelled to point out (like in the answer to the other question) that this method would not be economical (either monetarily or energy-wise) for the purposes of recycling. Since it is much easier to move electrons around between atoms (chemistry) than to completely strip them off, chemical recycling is still much more effective than plasma recycling.
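The intensity and temperature estimates above are easy to reproduce with a few lines of Python. This is only a back-of-envelope check: the square focal spot and the ~10 eV first ionization energy are the same rough assumptions made in the answer.

```python
# Rough check of the intensity and temperature estimates above,
# assuming a square focal spot for simplicity.
K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def intensity_w_per_cm2(power_w, spot_cm):
    """Intensity of a laser of given power focused to a square spot."""
    return power_w / spot_cm**2

def ionization_temperature_k(ionization_energy_ev):
    """Temperature at which k_B * T equals the ionization energy."""
    return ionization_energy_ev / K_B_EV

# 36 kW focused to a 0.2 um spot (0.2e-4 cm) gives ~1e14 W/cm^2:
print(f"{intensity_w_per_cm2(36e3, 0.2e-4):.1e} W/cm^2")

# A typical first ionization energy of ~10 eV corresponds to ~1e5 K:
print(f"{ionization_temperature_k(10.0):.0f} K")
```

The same two functions also reproduce the ~100 µm-scale tradeoff: a larger spot needs proportionally (quadratically) more power for the same intensity.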
{ "domain": "physics.stackexchange", "id": 59848, "tags": "laser, plasma-physics" }
Paper work before I make my own line follower robot
Question: I am a graduate student trying to make my own line follower robot for my minor assessment. I have all the hardware parts and all the data-sheets with me, I have attended robotics workshops, and I have studied line follower robots a lot. I have a good knowledge of C programming and embedded systems, but the problem is I have a very limited amount of time (2 days). Please help me plan the paper work for my project, a line follower robot: where should I start? I am getting confused about whether I should start with programming or first do circuit simulations, as I know it is not a good approach to go straight to hardware. Please suggest a good plan or some links/videos so that I can finish my robotics project fast. Any help would be really appreciated, thanks. Answer: Two days? Eeek! I wouldn't try to simulate the thing at a circuit level. If you're an EE and you're familiar with circuit simulation packages then using SPICE (or whatever) to simulate the system may let you slide by with an acceptable performance. Personally, if I were designing one of these for money, I'd probably simulate it in Scilab, possibly using Xcos -- but that's because I'm familiar with that tool. Given your limited time I wouldn't try to do more than make something that follows straight lines and gentle curves at some fixed speed.
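If a concrete starting point helps, the "fixed speed, gentle curves" approach can be sketched as a proportional controller in a few lines. The two-sensor layout, the sensor convention, and the gain below are illustrative assumptions, not something from the question; adapt them to whatever your hardware API actually provides.

```python
# Minimal proportional line follower: steer by the difference between two
# reflectance sensors. Assumed convention (may differ on your hardware):
# a higher reading means more of the line is under that sensor.
def line_follow_step(left_ir, right_ir, base_speed=0.5, kp=0.8):
    """Return (left_motor, right_motor) speeds from two sensor readings
    in [0, 1]. Centered readings give straight-line motion at base_speed."""
    error = left_ir - right_ir  # nonzero when the robot drifts off the line
    return base_speed - kp * error, base_speed + kp * error

# Centered on the line: both motors run at base speed.
print(line_follow_step(0.5, 0.5))  # (0.5, 0.5)
# Drifted to one side: the motor speeds differ, turning the robot back.
print(line_follow_step(0.7, 0.3))
```

Running this loop at a fixed rate against your real sensor reads and motor writes is usually enough for straight lines and gentle curves; tuning `kp` is the main experimental step.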
{ "domain": "robotics.stackexchange", "id": 207, "tags": "mobile-robot" }
How to understand force given a non-constant mass?
Question: I am wondering what force really is, after having looked it up on Wikipedia, where it is stated that $F=ma$ only holds for constant $m$. For a non-constant mass the force is: $$F=\frac{dp}{dt}= \frac{d(m(t)v(t))}{dt} = \frac{dm(t)}{dt}v(t)+a(t)m(t)$$ I cannot picture this; it breaks my understanding of force. As far as I know, in Newton's mechanics force describes how strongly a mass accelerates. It is a variable of state. But if it is a variable of state, then for any state $s_{T}$ at time $T$ the force should be defined as: $$F=m(T)\cdot \frac{dv}{dt}$$ I really do not get what the term $\frac{dm(t)}{dt}\cdot v(t)$ means. I do not understand why force should suddenly depend on the velocity. I hope someone can help me build a new understanding. Edit: Wikipedia also says that my formula is just wrong; I do not fully understand the explanation given in the "variable-mass system" paragraph. Could you explain it in more detail? $$\mathbf {F} +\mathbf {u} {\frac {\mathrm {d} m}{\mathrm {d} t}}=m{\mathrm {d} \mathbf {v} \over \mathrm {d} t}$$ I do not see why this holds, or where it comes from. It makes no sense to me at all. Basically, expelling mass should already be reflected in the force $F$, since Newton's third law holds. Thanks Answer: Written in this full form, the second law is inconsistent under Galilean transformations. Consider a frame where $$F=m\frac{dv}{dt} + v\frac{dm}{dt}$$ Now consider another frame moving with a constant velocity $v'$. In that frame one can write $$F=m\frac{d(v-v')}{dt}+(v-v')\frac{dm}{dt}$$ There is an additional term $-v'\frac{dm}{dt}$ in the expression. This means that in two different inertial frames there would be different physical laws, which is obviously wrong.
To keep the equation consistent under this transformation we write it as $$ F + u_{rel}\frac{dm}{dt} = m\frac{dv}{dt}$$ where $u_{rel}$ is the velocity of the mass being added to or removed from the system, relative to the system. If the net force on a variable-mass system is zero, the change in mass still accelerates the system. The classic example of a variable-mass system is a rocket: it constantly expels fuel, which reduces its mass. Another example is a machine gun firing bullets (though the bullets are generally much lighter than the gun). In one of your comments I read: "Yes, but this spewing should already be in $a$ or $F$. Since molecules are exerting pressure." That is right, the spewing force is included in $F$, but the variable mass is not. Since the spewing is constant, the force acts on different masses at different times: at time $t$ the constant force $F$ acts on a body of mass $M(t)$, and at time $t+\Delta t$ the same force acts on a body of mass $M(t+\Delta t)$. That is what is missing from the equation. Derivation of the variable-mass equation: Consider a mass $dm$ that is added to a body of mass $M$ moving with velocity $v$. Let the velocity of $dm$ be $v_m$. Since the mass is being added, the collision is completely inelastic, so both $dm$ and $M$ move with the same velocity afterwards, say $v+dv$. Since there is no net external force on the whole system we can conserve momentum: $$v_m\, dm +Mv = (dm +M)(v+dv)$$ or $$v_m\, dm + Mv =v\, dm + dm\, dv + Mv +M\, dv$$ The term $dm\, dv$ is second-order small and can be set to zero. The equation becomes $$(v_m-v)\, dm = M\, dv$$ Dividing by the small time interval in which the impulse occurs, $$u_{rel} \frac{dm}{dt} = Ma$$ with $u_{rel}=v_m-v$. If the added mass is slower than the body ($u_{rel}<0$), then $dv$ is negative and the body decelerates. If there is a net force $F$ then $$F = \frac{v\, dm + dm\, dv + Mv +M\, dv - v_m\, dm -Mv}{dt}$$ or $$F+u_{rel} \frac{dm}{dt} =Ma$$ which is the desired result. Hope this helps.
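As a numerical sanity check on the variable-mass equation, the zero-external-force rocket case can be integrated step by step and compared against the closed-form Tsiolkovsky result $\Delta v = u\ln(m_0/m_1)$. The masses and exhaust speed below are made-up illustrative numbers.

```python
import math

def rocket_delta_v(m0, m1, u_exhaust, steps=100_000):
    """Euler-integrate u_rel * dm = M * dv for a rocket in free space (F = 0).
    The exhaust leaves backwards relative to the rocket, so expelling mass
    speeds the rocket up."""
    dm = (m0 - m1) / steps  # mass expelled per step
    v, m = 0.0, m0
    for _ in range(steps):
        v += u_exhaust * dm / m  # dv contributed by one parcel of exhaust
        m -= dm
    return v

m0, m1, u = 1000.0, 300.0, 2500.0  # kg, kg, m/s (illustrative)
numeric = rocket_delta_v(m0, m1, u)
exact = u * math.log(m0 / m1)  # Tsiolkovsky rocket equation
print(numeric, exact)  # both close to 3010 m/s
```

The agreement between the step-by-step momentum bookkeeping and the logarithmic formula is exactly the content of the derivation above, just carried out numerically.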
{ "domain": "physics.stackexchange", "id": 57870, "tags": "newtonian-mechanics, mass" }
What is the reason behind the shape of the absorption curve of electron paramagnetic resonance
Question: In our EPR experiment, the signal looks like the "first derivative" part of the above picture. Why is this? What does the "first derivative" mean and why is it the quantity our instruments detect in such an experiment? Answer: See http://www.bruker-biospin.com/cwpractice.html (Bruker know a thing or two about making spectrometers :-). I quote: The magnetic field strength which the sample sees is modulated sinusoidally at the modulation frequency. If there is an EPR signal, the field modulation quickly sweeps through part of the signal and the microwaves reflected from the cavity are amplitude modulated at the same frequency. For an EPR signal which is approximately linear over an interval as wide as the modulation amplitude, the EPR signal is transformed into a sine wave with an amplitude proportional to the slope of the signal So the spectrometer measures the first derivative. To get the signal you'd need to integrate the output from the spectrometer, but normally you wouldn't do that as you can get the line position and width directly from the first derivative.
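The "amplitude proportional to the slope" statement can be illustrated numerically: sweep a small field modulation across an absorption line and the demodulated signal tracks the first derivative of the line shape. The Lorentzian line and all numbers here are my own illustration, not from the Bruker page.

```python
import numpy as np

def lorentzian(b, b0=0.0, width=1.0):
    """A Lorentzian absorption line centered at b0."""
    return width**2 / ((b - b0)**2 + width**2)

b = np.linspace(-5.0, 5.0, 1001)
mod_amp = 0.01  # modulation amplitude, small compared to the line width

# What the lock-in effectively measures: the change in signal across one
# modulation swing, scaled by the swing -- a finite-difference derivative.
detected = (lorentzian(b + mod_amp) - lorentzian(b - mod_amp)) / (2 * mod_amp)
true_derivative = np.gradient(lorentzian(b), b)

# For small modulation amplitude the detected signal matches the derivative.
print(np.max(np.abs(detected - true_derivative)))
```

This is also why over-modulating (making `mod_amp` comparable to the line width) distorts the recorded line shape: the linear-slope approximation quoted in the answer breaks down.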
{ "domain": "physics.stackexchange", "id": 3556, "tags": "electromagnetism, experimental-physics" }
At what density does helium burning start in a star?
Question: I have seen several references that say that helium burning begins in a star once the core temperature reaches $10^8$K (such as here) but nowhere that says what density that corresponds to. Does anybody know a reference that has this value? And if it makes a difference, I would like to know the density for quiescent helium burning, not helium-flash helium burning. Answer: The short answer: $\sim10^4$ grams per cubic centimeter. From this webpage, I have a few statistics on the required mean density during each phase of fusion (the fusion of lighter elements may happen in the "shells" outside the core). $$\begin{array}{|c|c|} \hline \text{Fusion phase}&\text{Mean density (g/cm}^3)\\ \hline \text{Hydrogen} & 5\\ \hline \text{Helium} & 700\\ \hline \text{Carbon} & 200,000\\ \hline \text{Neon} & 4,000,000\\ \hline \text{Oxygen} & 10,000,000\\ \hline \text{Silicon} & 30,000,000\\ \hline \end{array}$$ This is data for a star of $25M_{\odot}$. Note, however, that this is the mean density of the star, not the density in the core, where fusion is taking place. That is on the order of $\sim10^4$ grams per cubic centimeter (see e.g. here). There may be a sharp density dropoff starting from the center of the star, showing that only the hot, dense inner regions can fuse the main element in each stage.
{ "domain": "astronomy.stackexchange", "id": 1976, "tags": "star, helium" }