Verify collection as a method parameter
Question: How can I verify that ohterClassMock.MethodToTest() was called with contracts? That it was called with contract, which has Id = 1 and Param2 = 34, and also with contract2, which has Id = 2 and Param2 = 56. This is my code:

```csharp
GoodClass goodClass = new GoodClass(ohterClassMock);
...
var contracts = new List<Contract>();
var contract = new Contract { Id = 1, Param2 = 34 };
contracts.Add(contract);
var contract2 = new Contract { Id = 2, Param2 = 56 };
contracts.Add(contract2);
goodClass.DoSomething(contracts);
ohterClassMock.Verify(mock => mock.MethodToTest(It.IsAny<contracts>()));
```

I know that it can be tested for every item in the collection:

```csharp
ohterClassMock.Verify(mock => mock.MethodToTest(It.Is<Contract>(contract => contract.Id == 1)));
```

...but maybe there is some other syntax to call it in one line of code?

Updated: goodClass.DoSomething(contracts); calls

```csharp
foreach (Contract contract in contracts)
{
    ohterClassMock.MethodToTest(contract);
}
```

Answer: Strictly to answer the question at hand, you already have local variables for the individual contracts, so you can just re-use them:

```csharp
ohterClassMock.Verify(mock => mock.MethodToTest(It.Is<Contract>(c => contract.Equals(c))));
ohterClassMock.Verify(mock => mock.MethodToTest(It.Is<Contract>(c => contract2.Equals(c))));
```

This assumes you have implemented IEquatable<Contract> and/or overridden Object.Equals on your Contract object. Given the behavior of most test and mocking frameworks, it will probably save you a lot of grief to go ahead and override Object.ToString so that failed tests will print out nicer expected/actual values than the fully-qualified type names.

Also, as an aside, you can create your list with a collection initializer if you do so after building your individual contracts:

```csharp
var contracts = new List<Contract> { contract, contract2 };
```

Or, if your method takes in IEnumerable<Contract>, it may be even simpler to use:

```csharp
var contracts = new[] { contract, contract2 };
```
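For comparison only, the same verification pattern can be sketched with Python's unittest.mock (all names here are invented for illustration): value equality plus a readable repr play the same roles as IEquatable<Contract> and an overridden ToString.

```python
from unittest.mock import Mock, call

class Contract:
    """Hypothetical stand-in for the Contract type in the question."""
    def __init__(self, id, param2):
        self.id, self.param2 = id, param2
    def __eq__(self, other):   # value equality, like IEquatable<Contract>
        return (self.id, self.param2) == (other.id, other.param2)
    def __repr__(self):        # nicer failure output, like overriding ToString
        return f"Contract(id={self.id}, param2={self.param2})"

other_mock = Mock()
contracts = [Contract(1, 34), Contract(2, 56)]
for c in contracts:            # what DoSomething is described to do
    other_mock.method_to_test(c)

# One assertion per expected argument, relying on value equality:
other_mock.method_to_test.assert_any_call(Contract(1, 34))
other_mock.method_to_test.assert_any_call(Contract(2, 56))
# Or all calls at once, in order:
assert other_mock.method_to_test.call_args_list == [
    call(Contract(1, 34)), call(Contract(2, 56))]
```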
{ "domain": "codereview.stackexchange", "id": 3419, "tags": "c#, unit-testing, moq" }
Using delayed choice interference experiments as a computing device
Question: I had an idea how to design a "quantum computer": How about designing interference-experiments where the design of the experiments itself represents algorithmical or mathematical problems that are coded in a way that the solution to that problem is interference = “true” and no interference = “false” as a result? These results would be acquired instantaneously because the algorithm would not be processed step by step as with classical computers. If the design was changed on the run - thereby representing another problem - the solution would even be acquired faster than the quanta going through the apparatus as predicted by “delayed choice”-experiments. Are there already such ideas for “quantum computers” of this kind? They would not have the problem of decoherence but on the contrary decoherence vs. no-decoherence would be their solution set of the problems represented by the (flexible) experimental design. Does this make sense? EDIT: Perhaps my question is not formulated well - perhaps I shouldn't call it "quantum computer". My point is that any configuration with these double slit experiments - no matter how complicated - instantly shows an interference pattern when there is any chance of determining the way the photons took. My idea is to encode some kind of calculation or logical problem in this configuration and instantly find the answer by looking at the outcome: interference = “yes” and no interference = “no” – Answer: Here is my understanding of what you are asking (and I believe it is a little different to Lubos's interpretation, so our answers will differ): Can you build a computer that uses the interference effects to perform computation, where you use the presence of light in a particular place to represent 1 and no light to represent 0? The answer is yes, though with certain caveats. First let me note that there is something already called a quantum computer which exploits quantum effects to outperform normal (classical) computers. 
Classical computation is a special case of quantum computation, so a quantum computer can do everything a classical computer can do, but the converse is not true. If you have single photon detectors, you can use the interference effects of a network of beam splitters and phase plates together with the detectors to create a universal quantum computer. This is something called linear-optics quantum computing or LOQC. Perhaps the best known scheme is the KLM proposal, due to Knill, Laflamme and Milburn. Now the caveats: you need to have a fixed finite number of photons, and you need to adapt the network based on earlier measurement results. This adaptive feed-forward is quite difficult to achieve in practice, though not impossible, and computation with such a setup has been demonstrated. A further interpretation of the question is whether it is sufficient to use such a linear network, but only make measurements at the end. This is an open question, though there is strong evidence that such a device is not efficiently simulable by a classical computer (see this paper by Scott Aaronson). It is however not yet known whether you can implement universal classical or quantum computing on such a device.
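As a toy illustration only (this is ordinary single-photon interference, not the KLM scheme): the idea of reading a "bit" off an interference outcome can be simulated with 2x2 matrices for a Mach-Zehnder interferometer, one 50/50 beam splitter on each side of a phase plate.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter

def output_probs(phase):
    """Detector probabilities for a single photon, given a relative phase."""
    P = np.diag([np.exp(1j * phase), 1.0])       # phase plate in one arm
    psi = BS @ P @ BS @ np.array([1.0, 0.0])     # photon enters one input port
    return np.abs(psi) ** 2

# Zero relative phase: full interference, photon always exits the same port.
p0 = output_probs(0.0)
# A pi phase shift routes it deterministically to the other port.
p_pi = output_probs(np.pi)
```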
{ "domain": "physics.stackexchange", "id": 304, "tags": "quantum-information, quantum-computer" }
How to run java node via launch file?
Question: Hi all, how can we include a java node in a launch file? I know we normally include roscpp and rospy nodes in launch files, but how can I include the following command in a launch file, e.g.:

```
rosrun rosjava_bootstrap run.py java_package java_node __name:=j_node
```

Best regards

Originally posted by safzam on ROS Answers with karma: 111 on 2012-05-06

Post score: 0

Answer: The best option currently is to use the installApp task to build an executable jar and wrapper script (see the rosjava_tutorial_pubsub example and the rosjava_core documentation: http://docs.rosjava.googlecode.com/hg/rosjava_core/html/index.html). Then you can use roslaunch to execute it in the same way you would an arbitrary executable.

Originally posted by damonkohler with karma: 3838 on 2012-05-10

This answer was ACCEPTED on the original site

Post score: 0

Original comments

Comment by safzam on 2012-05-10: Thanks. I have a topic t1 which publishes strings "yes" and sometimes "no". Another topic t2 publishes the string "why" only when t1 publishes "no". Now I want to get a chart/graph of when and which topic triggered. I have seen rxplot (it only works for numeric values). rxconsole gives the output, but in a serial form. Is there any way you know of to get my desired plot, OR to get a text file from rxconsole so that I can play with it in Matlab to get the plot? I mean, I am only interested in the triggering of the topic, but on some specific values. Regards and thanks in advance

Comment by tfoote on 2012-05-20: @safzam This is a separate question. Please ask it separately.
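The answer boils down to treating the generated wrapper script as an ordinary executable. A hedged sketch of what the launch file could then look like (the package and script names here are placeholders, not taken from the question; installApp generates the actual wrapper script under the package's build directory):

```xml
<launch>
  <!-- "java_package" and "my_app_wrapper" are illustrative names only -->
  <node pkg="java_package" type="my_app_wrapper" name="j_node" output="screen" />
</launch>
```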
{ "domain": "robotics.stackexchange", "id": 9272, "tags": "rosjava" }
Can vorticity be destroyed?
Question: I have a professor that is fond of saying that vorticity cannot be destroyed. I see how this is true for inviscid flows, but is this also true for viscous flow? The vorticity equation is shown below for reference. From this equation, it looks as if vorticity only convects and diffuses. This would suggest that it can't be destroyed. $$\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega}\cdot\nabla)\boldsymbol{V} + \nu\nabla^2\boldsymbol{\omega}$$ However, consider this thought experiment: Suppose we have a closed container filled with water with initial vorticity field $\boldsymbol{\omega}_0$ at time $t_0$. If the container is allowed to sit undisturbed, as $t\to\infty$ the water will become stationary ($\boldsymbol{V}\to 0$) with zero vorticity ($\boldsymbol{\omega}\to 0$). This suggests vorticity can be destroyed. My professor claims the boundary-layer vorticity at the sides of the container is equal and opposite in sign to the bulk vorticity. If this is the case, the vorticity cancels out after a long time, resulting in the stationary fluid, and vorticity is not destroyed (just cancelled out).

EDIT: I'm looking for either a proof that the boundary-layer vorticity is equal and opposite to the bulk vorticity, or a counter-explanation or proof. (I'm using "proof" in a very loose, hand-wavy sense.)

Answer: Your professor is correct, but I agree with you that the statement “vorticity can’t be destroyed or created” seems jarring - I would prefer to think of this as “vorticity is conserved”, because the conservation of vorticity derives from the Navier-Stokes equations and the conservation of angular momentum. I confess this is splitting terminology hairs (don’t push it with your professor) but I think it helped me. So, I think, maybe I can understand this as an analogy with linear momentum, because linear momentum is conserved too.
I remember the problem of a car of mass m, traveling toward the right at velocity v, and on the same road an identical car traveling to the left at velocity –v. They collide head-on and smash and stick together. Velocities after the crash – zero. Momentum after the crash – zero, and of course momentum is conserved. The total momentum of the system was zero before and after.

Let's say your container filled with water is a long annulus with thick steel walls. The flow is initially a circular flow around the axis (i.e. 2D flow). What is the initial total angular momentum of the system? Eventually the fluid stops moving, so the final total angular momentum of the system must be zero. How do we show that the initial angular momentum is zero too? At this point you need to recognize that the vorticity vector in the moving fluid is everywhere parallel to the axis of the container. And you need to use Stokes’ theorem to write an integral equation with a line integral on the LHS (the circulation) and a surface integral on the RHS (vorticity integrated over the container cross section): \begin{align*} \oint_{C} \boldsymbol{v} \cdot d\boldsymbol{l} = \int_{S} \boldsymbol{\omega} \cdot d\boldsymbol{S} \end{align*} Take your integration path (the closed path C) entirely inside the steel wall of your container. The velocity inside the container wall is always zero, and so the circulation along the path is always zero, and so the total vorticity across the cross-sectional area (the area S) of the container and fluid is always zero too.

You can calculate for yourself that the boundary-layer vorticity is equal and opposite to the bulk vorticity using a similar approach. Imagine spinning the container about its axis at a constant angular velocity. Eventually the entire viscous-fluid and container system will be rotating like a rigid body around the axis. Every point has the same angular velocity, and there is now a vortex located at the center.
Compute the circulation for any closed path that includes the vortex inside it – this will be the strength of the vortex, and the magnitude and sign of the vorticity in the bulk fluid. You can show yourself that the strength of this vortex is the total vorticity. Compute the circulation around any path that does not include the vortex; this will always come out to be zero. Pick a path near the fluid-container boundary, s’, so half of it is in the fluid and half is inside the container wall; as long as the container and fluid are still rotating together, the circulation around this path will be zero too. Now stop the container’s rotation. The fluid continues to move. Compute the circulation around the path s’ again: it is no longer zero and is the vorticity at the boundary layer. Its sign is opposite that of the vortex at the center. Every point along the fluid boundary can be associated with a path like s’ and a small amount of boundary-layer vorticity. Integrate around the entire boundary and the sum will be equal in magnitude and opposite in sign to the strength of the vortex at the center. Eventually, the boundary-layer vorticity will diffuse towards the center and annihilate the center vortex.

@Isopycnal_Oscillation is correct to point out that in 3D, and particularly near turbulent conditions, vorticity is not conserved. The second term on the RHS of your ‘transport equation’ says that the stretching and tilting of vortex tubes can change vorticity too. However, I expect that in the classes where your professor is fond of saying that “vorticity cannot be destroyed”, turbulent flow is seldom if ever encountered.

Finally, assuming that the LHS of your ‘transport equation’ equals zero does not necessarily require that the fluid be inviscid or that the problem be 2D – you are assuming that the terms on the RHS happen to cancel exactly and the vorticity is fortuitously ‘steady-state.’ So yes, that is a very strong assumption to accept.
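The Stokes'-theorem bookkeeping used above can be spot-checked numerically. A minimal sketch for the rigid-body-rotation case, where $v_\theta = \Omega r$ and the vorticity is uniform, $\omega_z = 2\Omega$ (parameter values are arbitrary):

```python
import numpy as np

Omega, R = 0.7, 1.5            # angular velocity and circular-path radius
n = 100000                     # path discretization

# line integral of v . dl around a circle of radius R (v is purely tangential,
# with constant magnitude Omega * R along the path):
segment = 2.0 * np.pi * R / n
circulation = np.sum(np.full(n, Omega * R) * segment)

# surface integral of omega_z = 2 * Omega over the enclosed disc:
vorticity_flux = 2.0 * Omega * np.pi * R**2
# Stokes' theorem: the two agree
```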
{ "domain": "physics.stackexchange", "id": 44751, "tags": "fluid-dynamics, conservation-laws, viscosity, vortex" }
Selecting folders based on last modified date
Question: I am attempting to find folders based on their last modified dates.

```python
import os, shutil, stat, errno, time, datetime

srcFolders = "C:\Users\Test\Desktop\Test"
archiveDate = datetime.datetime.strptime("2016/11/20", '%Y/%m/%d')
os.chdir(srcFolders)
for name in os.listdir('.'):
    if os.path.isdir(name):
        modifiedDate = time.strftime('%Y/%m/%d', time.gmtime(os.path.getmtime(name)))
        strLastModified = datetime.datetime.strptime(modifiedDate, '%Y/%m/%d')
        if strLastModified > archiveDate:
            print name
```

This is what I've got and it seems to be right in that they're comparing the same attributes. If there is a better, more Pythonic way for this, please advise.

Answer:

- Do not put all imports on the same line, and use snake_case rather than camelCase for variable names. Read PEP 8, the official Python style guide, to make your code look like Python code.
- Use functions for better reusability. Instead of using hardcoded values like your folder path or archive date, use parameters. You will more easily be able to test various values in the interactive interpreter. This also means using the if __name__ == '__main__' construct.
- Return values instead of printing them in the function. Again, better reusability. For starters you can build a list and return it.
- When using datetime objects, you can compare them directly. It is cleaner and probably faster than comparing their string representations. Building dates is also as simple as using the right constructor.
- You don't need to move to the desired directory before listing its content. os.listdir can take the absolute path and work from that.
Revised code (note that with import datetime you need the qualified datetime.datetime.fromtimestamp, not datetime.fromtimestamp):

```python
import os
import datetime

def filter_by_date(src_folder, archive_date):
    relevant_folders = []
    for name in os.listdir(src_folder):
        full_name = os.path.join(src_folder, name)
        if os.path.isdir(full_name):
            if datetime.datetime.fromtimestamp(os.path.getmtime(full_name)) > archive_date:
                relevant_folders.append(name)
    return relevant_folders

if __name__ == '__main__':
    print filter_by_date("C:\Users\Test\Desktop\Folder", datetime.datetime(2016, 11, 10))
```

is a first approach. But using append in a for loop is deemed unpythonic; better to use a generator or a list comprehension here:

```python
import os
import datetime

def filter_by_date(src_folder, archive_date):
    os.chdir(src_folder)
    return [
        name for name in os.listdir('.')
        if os.path.isdir(name)
        and datetime.datetime.fromtimestamp(os.path.getmtime(name)) > archive_date
    ]

if __name__ == '__main__':
    print filter_by_date("C:\Users\Test\Desktop\Folder", datetime.datetime(2016, 11, 10))
```
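As a further optional variant not in the original answer, the same filter can be written for Python 3 with pathlib, which avoids both os.chdir and manual path joining:

```python
import datetime
from pathlib import Path

def filter_by_date(src_folder, archive_date):
    """Names of direct subfolders of src_folder modified after archive_date."""
    return [
        p.name
        for p in Path(src_folder).iterdir()
        if p.is_dir()
        and datetime.datetime.fromtimestamp(p.stat().st_mtime) > archive_date
    ]
```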
{ "domain": "codereview.stackexchange", "id": 23274, "tags": "python, windows" }
Plotting a density function
Question: I am trying to plot the following function in MATLAB $$f(x) = \lambda e^{-\lambda(x-\Delta)}w(x-\Delta) $$ where $$w(x) = \begin{cases} 1 & x \geq 0 \\ 0 & \text{otherwise} \end{cases} $$ I chose a value of $\lambda$ of 1 and $\Delta$ of 2. I am just not sure how to define $w(x)$ and shift the function by delta. Thank you.

Answer: You can plot samples of a continuous density function just as you would do with any continuous function. Assuming $w(x)$ represents the continuous-time unit step function, $$w(x-\Delta) = \begin{cases} 1 & x \geq \Delta \\ 0 & \text{otherwise} \end{cases}$$ then the shifted exponential pdf can be plotted with the following matlab/octave code:

```matlab
clc; clear all; close all

K1 = -1;    % evaluation interval begins
K2 = 9;     % evaluation interval ends
N  = 1000;  % number of points to display
x  = linspace(K1, K2, N);  % x = domain to use for displaying

lam   = 1;  % exponential PDF parameter
delta = 2;  % shift amount

f = lam*exp(-lam*(x-delta));
f = f.*(x>delta);  % to implement w(x-delta)

figure, plot(x, f);
title('shifted exponential pdf for \lambda = 1, \Delta = 2');
```

The output will be:
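For readers without MATLAB, a rough NumPy translation of the same idea; the boolean mask plays the role of the shifted unit step (plotting is left as a comment):

```python
import numpy as np

lam, delta = 1.0, 2.0
x = np.linspace(-1.0, 9.0, 1000)
f = lam * np.exp(-lam * (x - delta))
f = f * (x > delta)   # implements w(x - delta): zero for x <= delta

# to display:
# import matplotlib.pyplot as plt
# plt.plot(x, f); plt.show()
```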
{ "domain": "dsp.stackexchange", "id": 8084, "tags": "probability-distribution-function" }
Reaction between phenol and nitrous acid
Question: My question is about the products formed when phenol reacts with nitrous acid. I googled it and reached this paper via PubChem (DOI: 10.1016/S0045-6535(02)00857-3), which states: It was found that phenol reacts with nitrous acid to produce cyanide ions. Cyanide ion generation is attributed to the conversion of phenol to nitrosophenol through the well-known nitrosation reaction, and decomposition of benzoquinonoxim to form cyanide and aliphatic compound. Is this correct? If not, then what is the actual reaction? Answer: What the reference says: the reference gives only the following figure to support their theory. A weakness of this mechanism is that it requires a high-energy intermediate like an sp2 carbocation. An alternative proposal: this alternative avoids high-energy intermediates and thus appears more likely:
{ "domain": "chemistry.stackexchange", "id": 10155, "tags": "organic-chemistry, phenols, nitro-compounds" }
Which mushroom could be this?
Question: Collected in northern West Europe in autumn 2017 Answer: By a hint from a FB forum, it should be Meripilus giganteus, and I have to agree.
{ "domain": "biology.stackexchange", "id": 7618, "tags": "mushroom" }
Compare version numbers
Question: I am wondering if there is any way to meaningfully shorten the chain of conditionals used here to compare two versions. struct VERSIONCODE represents a version number of the form major.minor.revision and CompareVersions returns EQUAL if they're equal, LHS_NEWER if vLHS is newer, and RHS_NEWER if vRHS is newer.

```c
typedef enum
{
    EQUAL = 0,
    LHS_NEWER,
    RHS_NEWER
} ECOMPARISON;

// Specifically not named "VERSION" to avoid conflicting with common names from third-party libraries etc.
typedef struct
{
    int nMajor;
    int nMinor;
    int nRev;
} VERSIONCODE;

ECOMPARISON CompareVersions(VERSIONCODE vLHS, VERSIONCODE vRHS)
{
    if (vLHS.nMajor > vRHS.nMajor)
    {
        return LHS_NEWER;
    }
    else if (vLHS.nMajor < vRHS.nMajor)
    {
        return RHS_NEWER;
    }
    else // if (vLHS.nMajor == vRHS.nMajor)
    {
        if (vLHS.nMinor > vRHS.nMinor)
        {
            return LHS_NEWER;
        }
        else if (vLHS.nMinor < vRHS.nMinor)
        {
            return RHS_NEWER;
        }
        else // if (vLHS.nMinor == vRHS.nMinor)
        {
            if (vLHS.nRev > vRHS.nRev)
            {
                return LHS_NEWER;
            }
            else if (vLHS.nRev < vRHS.nRev)
            {
                return RHS_NEWER;
            }
            else // if (vLHS.nRev == vRHS.nRev)
            {
                return EQUAL;
            }
        }
    }
}
```

Answer: The CompareVersions() function in this answer uses subtraction for comparison. This is considered to be bad practice - it leads to bugs and potential security holes. (Yes, the post does say "if we can ensure that the version values are small enough to avoid integer overflow", but that pretty much requires the caller of this function to know the result ahead of time.)
To actually answer the question, I would remove the unnecessary elses:

```c
ECOMPARISON CompareVersions(VERSIONCODE vLHS, VERSIONCODE vRHS)
{
    if (vLHS.nMajor > vRHS.nMajor)
        return LHS_NEWER;
    if (vLHS.nMajor < vRHS.nMajor)
        return RHS_NEWER;
    // vLHS.nMajor == vRHS.nMajor

    if (vLHS.nMinor > vRHS.nMinor)
        return LHS_NEWER;
    if (vLHS.nMinor < vRHS.nMinor)
        return RHS_NEWER;
    // vLHS.nMinor == vRHS.nMinor

    if (vLHS.nRev > vRHS.nRev)
        return LHS_NEWER;
    if (vLHS.nRev < vRHS.nRev)
        return RHS_NEWER;

    return EQUAL;
}
```

This is much easier to read, and can be seen to be correct by inspection.
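A further compression, offered only as a sketch (not from the original answer): put the three fields in arrays and loop most-significant-first, which removes the repetition while keeping the comparison-not-subtraction property:

```c
typedef enum { EQUAL = 0, LHS_NEWER, RHS_NEWER } ECOMPARISON;

typedef struct {
    int nMajor;
    int nMinor;
    int nRev;
} VERSIONCODE;

/* Compare field by field, most significant first, without subtraction. */
ECOMPARISON CompareVersions(VERSIONCODE vLHS, VERSIONCODE vRHS)
{
    const int lhs[] = { vLHS.nMajor, vLHS.nMinor, vLHS.nRev };
    const int rhs[] = { vRHS.nMajor, vRHS.nMinor, vRHS.nRev };
    for (int i = 0; i < 3; ++i) {
        if (lhs[i] > rhs[i]) return LHS_NEWER;
        if (lhs[i] < rhs[i]) return RHS_NEWER;
    }
    return EQUAL;
}
```

Whether this is clearer than the three explicit pairs is a matter of taste; it does make adding a fourth component (e.g. a build number) a one-line change.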
{ "domain": "codereview.stackexchange", "id": 37512, "tags": "c" }
Will a small magnet affect an insulated computer wire?
Question: I'm designing something that involves a small magnet, probably about 0.5m in diameter, which will rest next to a computer wire carrying data, video, audio, etc. If the wire is insulated (think USB or VGA cable), will the magnet have any adverse effect on the current flowing through it? Thanks, ~Carpetfizz Answer: I don't think that the fact that the small object you're placing next to a computer cable has a DC magnetic field has any significant effect on the signal currents. However, the fact that the small magnet may have a very large magnetic permeability (as most permanent magnets do) could add significant inductance to the circuit and tend to cause it to act as a low-pass frequency filter in some situations and like a common-mode rejection filter in other situations (Ferrite Core terminators) depending on the exact placement of the small object with respect to the signal cable. Probably best to keep it and any other high magnetic permeability objects away from the signal cables.
{ "domain": "physics.stackexchange", "id": 25806, "tags": "electricity, magnetic-fields" }
C++ minimal threadsafe array based on std::deque
Question: Here is a minimal example of a threadsafe array I want to build on for a timeseries application, with the following characteristics:

- Ever-growing, and the already contained elements remain constant
- (Usually) a single writer calling push_back
- Multiple dependent readers

Here is the corresponding implementation, or rather an early attempt at it:

```cpp
template<typename T>
struct threadsafe_array
{
    auto operator[](int i) const
    {
        return deq[i];
    }

    auto size() const
    {
        return deq_size.load(std::memory_order_acquire);
    }

    void push_back(T const& t)
    {
        std::unique_lock<std::mutex> lock(mut);
        deq.push_back(t);
        lock.unlock();
        deq_size.fetch_add(1, std::memory_order_release);
    }

private:
    std::deque<T> deq;
    std::atomic<int> deq_size{0};
    std::mutex mut;
};
```

My underlying ideas:

- Reads of the available elements through operator[](int) are carried out lock-free.
- A std::deque is used as the underlying container because it does not invalidate concurrent reads when doing a push-back (in contrast, a std::vector could, as it potentially does a reallocation).
- The push_back is forwarded to the underlying deque, on which it is applied in an atomic way through locking the std::mutex. Thereafter, the variable deq_size of type std::atomic<int> is adjusted using release semantics (so that the previous push_back is not reordered after the fetch_add).
- If there are reads occurring in between adding an element to the deque and the adjustment of the size, they have to get along with a smaller size(), i.e. as if the array had not been updated. Calling operator[size()] therefore does not need to be undefined behaviour as it is for std::deque (but that's more an inconsistency than a feature).

Questions:

- Is this thing already threadsafe and doing what I wrote, or am I missing some points?
- Are the memory orders in the atomic operations ok, or are there better choices (e.g. memory_order_relaxed for the load in size())?
Is it preferable to do the update of the size in push_back() under the lock (and thus, if I see it right, limit the size difference between size() and the underlying std::deque::size() to only one)?

Answer: Your code isn't thread-safe, as a read with .operator[]() and a write with .push_back() aren't synchronized in any way. Even though no references to elements are invalidated by std::deque<T>::push_back(), it can change the data structure used to retrieve those references. What you want to look into is called a readers-writer lock, which is the primitive for allowing either multiple readers or only a single writer access at the same time. C++17 provides it as std::shared_mutex, so you might have to use boost::shared_mutex unless your library is already there.
{ "domain": "codereview.stackexchange", "id": 26769, "tags": "c++, multithreading, c++14, thread-safety, lock-free" }
Installing navigation stack on ROS Melodic
Question: Hi, I've been trying to use some features from the ros-planning/navigation stack, but it seems that it is not currently available for ROS Melodic. In my case, I need to use the map_server feature in one of my applications. So far I've read that (sic) "It has been released, but a sync has not yet happened.", but the whole package is available on their GitHub page. Is there a way to manually install the downloaded package from GitHub on the ROS Melodic distro? I've already tried sudo apt-get install ros-melodic-navigation and had no success. Thanks in advance.

Originally posted by woz on ROS Answers with karma: 13 on 2018-08-17

Post score: 0

Original comments

Comment by Choco93 on 2018-08-17: have you tried building the github package?

Comment by woz on 2018-08-17: Haven't found any instructions on how to do that. Would you mind explaining how to?

Comment by Choco93 on 2018-08-17: clone into your workspace and do catkin build

Comment by woz on 2018-08-17: Didn't work. Copied the whole content into my catkin_ws folder, sourced it, tried to run catkin build and got catkin: command not found. Tried catkin_make and nothing happened as well.

Comment by woz on 2018-08-17: Also, downloaded the catkin_tools in order to use catkin build, but when I tried to do it on my ws, I got: The build space at '/home/<user>/catkin_ws/build' was previously built by 'catkin_make'. Please remove the build space or pick a different build space.

Comment by Choco93 on 2018-08-17: catkin clean -y && catkin build

Comment by woz on 2018-08-17: Didn't work as well: [clean] Error: The current or desired workspace could not be determined. Please run catkin clean from within a catkin workspace or specify the workspace explicitly with the --workspace option.

Comment by gvdhoorn on 2018-08-17: That is not at all how you would build a pkg from sources. See #q252478 for the general procedure.
Answer: If the packages are really released, built and just waiting for a sync, you can install them from the staging repository. See wiki/ShadowRepository for information on how to do that. Don't build things from source if you don't have to. Originally posted by gvdhoorn with karma: 86574 on 2018-08-17 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by woz on 2018-08-17: Your last comment on my post, and this post did exactly what I wanted. Thanks a lot, @gvdhoorn! Worked like a charm :) Comment by gvdhoorn on 2018-08-17: If you're using the shadow-fixed repository then I'm not sure why you'd still need to build from source. Comment by woz on 2018-08-17: I wasn't using it. To be honest, I had no idea about its existence. Anyway, both your posts helped me a lot and for that I'm really grateful :)
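For reference, wiki/ShadowRepository describes pointing apt at the ros-shadow-fixed ("staging") repository. A sketch of what that looks like on Ubuntu Bionic; the suite name and paths should be checked against the wiki, as details change over time:

```shell
# point apt at the shadow-fixed repository (Bionic / Melodic era)
sudo sh -c 'echo "deb http://packages.ros.org/ros-shadow-fixed/ubuntu bionic main" \
    > /etc/apt/sources.list.d/ros-shadow-fixed.list'
sudo apt-get update
sudo apt-get install ros-melodic-navigation
```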
{ "domain": "robotics.stackexchange", "id": 31566, "tags": "navigation, ros-melodic" }
Ruby code refactoring
Question: I have n similar statements:

```ruby
if trigger_data.tt_closed
  unless trouble_ticket.changes.key?(:status)
    @run = 0
    break
  end
  unless trouble_ticket.changes[:status][1] == "Closed"
    @run = 0
    break
  end
end

if trigger_data.tt_assignee
  unless trouble_ticket.changes.key?(:assigned_to)
    @run = 0
    break
  end
  unless trouble_ticket.changes[:assigned_to][1] == trigger_data.tt_assignee
    @run
    break
  end
end
```

How can I refactor this code? Maybe a dynamically built statement with some hash passed as input. I'm a newbie in metaprogramming. Please advise.

Answer: Are you inside a loop? Here is a possible way of doing it:

```ruby
def check(td, tt, trigger, key, ticket_changes)
  if td.send(trigger)
    unless tt.changes.key?(key)
      @run = 0
      return true
    end
    unless tt.changes[key][1] == ticket_changes
      @run = 0
      return true
    end
  end
  return false
end

def metacheck(td, tt)
  [[:tt_closed, :status, "Closed"],
   [:tt_assignee, :assigned_to, td.tt_assignee]].each do |k|
    return if check(td, tt, k[0], k[1], k[2])
  end
end

metacheck(trigger_data, trouble_ticket)
```

I have used an array of triplets to check the conditions. The check can further be simplified as:

```ruby
def check(td, tt, trigger, key, ticket_changes)
  return false unless td.send(trigger)
  if !tt.changes.key?(key) or (tt.changes[key][1] != ticket_changes)
    @run = 0
    return true
  end
  return false
end
```

We can join all the conditions together, but I think this captures the original intention best.
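To see the triplet-table idea run end to end, here is a self-contained sketch; TriggerData and TroubleTicket are hypothetical stand-ins for the real objects, invented only to make the example executable:

```ruby
TriggerData = Struct.new(:tt_closed, :tt_assignee)
TroubleTicket = Struct.new(:changes)   # changes: { key => [old, new] }

def check(td, tt, trigger, key, expected)
  return false unless td.send(trigger)
  if !tt.changes.key?(key) || tt.changes[key][1] != expected
    @run = 0
    return true
  end
  false
end

def metacheck(td, tt)
  [[:tt_closed, :status, "Closed"],
   [:tt_assignee, :assigned_to, td.tt_assignee]].each do |trigger, key, expected|
    return if check(td, tt, trigger, key, expected)
  end
end

@run = 1
td = TriggerData.new(true, "alice")
metacheck(td, TroubleTicket.new({ status: ["Open", "Open"] }))
# @run is now 0: the status change does not end in "Closed"
```

Destructuring each triplet in the block (|trigger, key, expected|) avoids the k[0]/k[1]/k[2] indexing of the original answer.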
{ "domain": "codereview.stackexchange", "id": 1816, "tags": "ruby" }
From Bose-Hubbard model to quantum rotor model
Question: For the Bose-Hubbard model: $$H=-t\sum_{\langle i,j \rangle}(b_i^{\dagger}b_j+b_j^{\dagger}b_i)+K\sum_{i}(\hat{n}-n_0)^2$$ In the large-filling limit, we can replace $\hat{n}$ by $\hat{n}+n_0$, and also $\hat{n}_j$ by $-i\frac{\partial}{\partial \theta_j}$, after which the on-site repulsion term in the Hamiltonian becomes the kinetic term for the phase angle. However, for the hopping term, why can we simply replace $b_j^{\dagger}$ with $e^{i\theta_j}$? Basically, we are supposed to get the final phase-only quantum rotor model given as the following: $$H=-2t\sum_{\langle i,j \rangle}\cos(\theta_i-\theta_j)+K\sum_j\left(-i\frac{\partial}{\partial \theta_j}\right)^2$$ Source: Eq. (14.3) from "Advanced Solid State Physics" (2nd ed.) by P. Phillips. Answer: To be slightly more careful: you can always rewrite a bosonic operator $b$ in terms of number and phase variables: $$ b^{\dagger} = \sqrt{\hat{n}} e^{i \hat{\phi}} $$ Here the operator $e^{i \hat{\phi}}$ is defined by its action on every occupation number state: $$ e^{i \hat{\phi}} | n \rangle = | n+1 \rangle $$ Strictly speaking, $e^{i \hat{\phi}}$ is not unitary since $e^{-i \hat{\phi}} |0 \rangle = 0$. But we can formally extend our Hilbert space to include both positive and negative integers and define $e^{-i \hat{\phi}} |0 \rangle = |-1 \rangle$, at which point $e^{i \hat{\phi}}$ can be properly considered a unitary operator. $\hat{\phi}$ is technically not a well-defined Hermitian operator since it is a compact variable, but formally writing $[\hat{\phi}, \hat{n}] = i$ produces the correct commutation relations for properly defined periodic functions of $\hat{\phi}$, such as $e^{i \hat{\phi}}$. Now, to obtain the rotor model, what you should do is write $b^{\dagger}_i b_j + b^{\dagger}_j b_i \simeq 2\sqrt{\hat{n}_i} \cos(\hat{\phi}_i - \hat{\phi}_j) \sqrt{\hat{n}_j}$ (up to commutator corrections).
Then, if $\langle\hat{n}_i \rangle = \bar{n}$ is large, we can formally replace $\sqrt{\hat{n}_i} = \sqrt{\bar{n} + \delta \hat{n}_i} \simeq \sqrt{\bar{n}}$ to obtain a rotor model of the form $$ \hat{H} = -2t \bar{n} \sum_{\langle i j \rangle} \cos(\hat{\phi}_i - \hat{\phi}_j) + K \sum_i \delta \hat{n}_i^2 $$ Note the factor of $\bar{n}$ in front. If your goal is to describe the Bose-Hubbard model and not the rotor model, this factor is important: it tells you that the Mott transition is somewhere around $t\bar{n} / K \sim \mathcal{O}(1)$, which suggests that the transition happens at smaller and smaller $t$ as the filling is increased (which is the correct prediction).
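The definition $e^{i\hat{\phi}}|n\rangle = |n+1\rangle$ can be sanity-checked numerically on a truncated Fock space. This sketch only verifies the operator identity $b^{\dagger} = \sqrt{\hat{n}}\, e^{i\hat{\phi}}$, nothing about the rotor model itself:

```python
import numpy as np

d = 8                                          # keep states |0>, ..., |d-1>
n_op = np.diag(np.arange(d, dtype=float))      # number operator
shift = np.eye(d, k=-1)                        # shift[n+1, n] = 1: e^{i phi}
# standard matrix elements: <n+1| b^dagger |n> = sqrt(n+1)
b_dag = np.diag(np.sqrt(np.arange(1.0, d)), k=-1)

approx = np.sqrt(n_op) @ shift                 # sqrt(n) e^{i phi}
# approx coincides with b_dag on the truncated space
```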
{ "domain": "physics.stackexchange", "id": 95064, "tags": "condensed-matter, models" }
Is the Schmidt basis the one minimizing entanglement?
Question: I know that for a compound system $ |\psi \rangle_{AB} $ we can find the Schmidt basis, which is a unique one. Is it at the same time the basis in which the two subsystems are minimally entangled? If so, how can this be proved / disproved? I think it would make sense to say that when the Schmidt rank is equal to 1, the system is separable, because in the basis minimizing the entanglement we can represent the state $|\psi \rangle_{AB} $ as a product of two substates $ |\psi \rangle_A \otimes |\psi \rangle_B $. Answer: The Schmidt basis is just a special basis in which you can write a given bipartite state $|\psi\rangle_{AB}$. The state is always the same, regardless of which representation you choose (i.e., in which basis you express it). Thus, the entanglement - which is a property of the state, not of the chosen representation - does not change. On the other hand, the Schmidt decomposition allows you to read off the entanglement of the state easily.
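Concretely, the Schmidt coefficients can be read off from an SVD of the state's coefficient matrix; a small NumPy sketch (the reshape convention below assumes the state vector is ordered as $|i\rangle_A|j\rangle_B$, i.e. row-major):

```python
import numpy as np

# Writing |psi> = sum_{ij} C_ij |i>_A |j>_B, the SVD C = U diag(s) V^dagger
# gives the Schmidt decomposition |psi> = sum_k s_k |u_k>_A |v_k>_B.
def schmidt_coefficients(psi, dA, dB):
    return np.linalg.svd(np.reshape(psi, (dA, dB)), compute_uv=False)

# Bell state (|00> + |11>)/sqrt(2): two equal coefficients, Schmidt rank 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
# Product state |0>_A |+>_B: Schmidt rank 1, i.e. separable, as noted above.
prod = np.kron(np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2))
```

Any entanglement measure computed from the coefficients s (e.g. the entropy of s**2) is unchanged by local basis rotations, which is exactly the answer's point.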
{ "domain": "physics.stackexchange", "id": 58491, "tags": "quantum-information, quantum-entanglement, quantum-states" }
SQL Server stored procedure boilerplate
Question: What would you do to improve upon this boilerplate empty stored procedure, being mindful of the delicate balance between length, complexity, performance and clarity? -- ============================================= -- Author: The usual suspects -- Create date: 10/06/2011 -- Description: -- -- Nice long description about the procedure -- -- ============================================= CREATE PROCEDURE [dbo].[My_Stored_Proc] ( -- exampleParam is an example parameter. @exampleParam INT = 30 ) AS BEGIN -- main SET NOCOUNT ON BEGIN TRY DECLARE @crlf varchar(2) SET @crlf = CHAR(13) + CHAR(10) -- *** DO YOUR STUFF HERE *** END TRY BEGIN CATCH -- Error handler DECLARE @ErrorNumber INT DECLARE @ErrorSeverity INT DECLARE @ErrorState INT DECLARE @ErrorProcedure NVARCHAR(4000) DECLARE @ErrorLine INT DECLARE @ErrorMessage NVARCHAR(4000) DECLARE @ErrorDescription NVARCHAR(4000) -- retrieve error info SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE(), @ErrorProcedure = ERROR_PROCEDURE(), @ErrorLine = ERROR_LINE(), @ErrorMessage = ERROR_MESSAGE(); -- build custom error description SELECT @ErrorDescription = @crlf + @crlf + 'Base Error:\t[' + CAST(@ErrorNumber AS VARCHAR) + '] ' + @ErrorMessage + @crlf + @crlf + 'exampleParam:\t' + CAST(@exampleParam AS VARCHAR) + @crlf + 'Application:\t' + APP_NAME() + @crlf + 'User:\t' + SYSTEM_USER + @crlf + 'Database:\t' + DB_NAME() + @crlf + 'Procedure:\t' + @ErrorProcedure + @crlf + 'Line:\t' + CAST(@ErrorLine AS VARCHAR) + @crlf + 'Severity:\t' + CAST(@ErrorSeverity AS VARCHAR) + @crlf + 'State:\t' + CAST(@ErrorState AS VARCHAR); RAISERROR(@ErrorDescription, @ErrorSeverity, 1) RETURN @@ERROR END CATCH END -- main For instance, is there a nice way to move that error handler out of the stored proc so the logic can be shared and not duplicated inside each procedure? Notice it can be nice to include the values of the parameters in that error message (exampleParam above). 
Do you agree or disagree with my stance on handling transaction rollbacks in stored procs? (lack thereof) Do you have or can you write an example of better boilerplate along with a description of where my approach falls short and why your version might be a better starting point? What about SET XACT_ABORT {ON | OFF}? Which option for XACT_ABORT would be a best practice? Answer: All in all it’s a good idea. Have you considered moving your crlf and CATCH logic, or a portion of it, to a reusable function? This would help ensure, wherever it’s used, that it remains consistent and you don’t have the same code all over the place.
{ "domain": "codereview.stackexchange", "id": 1011, "tags": "sql, sql-server, error-handling" }
What does $w_{\tau}=-w_x$ mean?
Question: Problem and solution given at http://exir.ru/1/resh/1_106.htm A small washer is put on an inclined plane, which makes an angle α with the horizontal, and given an initial velocity $v_0$ (Fig. 1.27). Find how the speed of the washer depends on the angle $\phi$, if the coefficient of friction is $k = \tan \alpha$ and at the initial moment $\phi_0 = \pi / 2$. The solution is also given on the same site but is in Russian, and I couldn't figure it out myself. Basically there are three forces: friction, normal and gravity. If we consider only the plane of the incline, we have gravity + normal = $mg \sin \alpha$, and friction = $mg \sin \alpha$. Firstly, will the direction of the frictional force always be opposite to the direction of motion at each instant, irrespective of the fact that there is gravity? And if so, how do we solve it? Does $W$ mean work done? Work done by gravity (+normal) will be $mg \Delta x$, and work done by friction will be $mg \Delta l$, where $\Delta x$ is the downward distance fallen, and $\Delta l$ is the total path length. There seems to be no relation between the two. So how do we do it? Answer: A friend of mine sent the solution.....
{ "domain": "physics.stackexchange", "id": 35288, "tags": "homework-and-exercises, newtonian-mechanics, friction, terminology" }
Fragment-changing in Android
Question: My current code is kinda atrocious to switch fragments and everything, but I don't know a better way to check to see if the fragment I'm trying to switch to. private void FragmentChange(Bundle Data) { SetActionBarButtons(new String[] { "SearchButtonOff", "SearchExtraButtonOff" }); String[] data = Data.getStringArray(ACTIVITY_MAIN.DATA); String type = data[0]; String fragment = data[1]; if (fragment.equals(POST_MAIN)) { postmainFragment = new Fragment_PostMain(); mainContainer = postmainFragment; } else if (fragment.equals(POST_ACCOUNT)) { postaccountFragment = new Fragment_PostAccount(); mainContainer = postaccountFragment; } else if (fragment.equals(POST_ACCOUNTLOGIN)) { postaccountloginFragment = new Fragment_PostAccountLogin(); mainContainer = postaccountloginFragment; } else if (fragment.equals(POST_LOCATION_MAIN)) { postlocationmainFragment = new Fragment_PostLocationMain(); mainContainer = postlocationmainFragment; } else if (fragment.equals(POST_IMAGES)) { postimagesFragment = new Fragment_PostImages(); mainContainer = postimagesFragment; } else if (fragment.equals(POST_TEMPLATE)) { posttemplateFragment = new Fragment_PostTemplate(); mainContainer = posttemplateFragment; } else if (fragment.equals(POST_LOCATION)) { postlocationFragment = new Fragment_PostLocation(); mainContainer = postlocationFragment; } else if (fragment.equals(POST_CATEGORY)) { postcategoryFragment = new Fragment_PostCategory(); mainContainer = postcategoryFragment; } else if (fragment.equals(POST_CATEGORY_MAIN)) { postcategorymainFragment = new Fragment_PostCategoryMain(); mainContainer = postcategorymainFragment; } else if (fragment.equals(POST_LOCATION_MAIN)) { postlocationmainFragment = new Fragment_PostLocationMain(); mainContainer = postlocationmainFragment; } else if (fragment.equals(LOADING)) { loadingFragment = new Fragment_Loading(); mainContainer = loadingFragment; } else if (fragment.equals(POST_RESULTS_IMAGEVIEW)) { postresultsimageviewFragment = new 
FRAGMENT_POST_RESULTS_IMAGEVIEW(); mainContainer = postresultsimageviewFragment; } else if (fragment.equals(SEARCH_RESULTS_REPLY)) { postresultsreplyFragment = new FRAGMENT_SEARCH_RESULTS_REPLY(); mainContainer = postresultsreplyFragment; } FragmentTransaction FT = getFragmentManager().beginTransaction(); if (type.equals(FRAGMENTCHANGE_FADEIN)) { FT.setCustomAnimations(R.anim.fade_in, R.anim.fade_out); } else if (type.equals(FRAGMENTCHANGE_FLIP_FORWARD)) { FT.setCustomAnimations(R.animator.card_flip_right_in, R.animator.card_flip_right_out); } else if (type.equals(FRAGMENTCHANGE_FLIP_BACK)) { FT.setCustomAnimations(R.animator.card_flip_left_in, R.animator.card_flip_left_out); } if (mainContainer != null) { FT.replace(R.id.main_fragment, mainContainer).commit(); } } Basically, I want to pass a Bundle down and I set the first part of the bundle as the "type" for the animation, and the fragment as the actual fragment that I'm trying to switch it. Then of course since I have so many fragments it just gets really messy with the if statements. Is there a better way to write this so it's not as "ugly"? Answer: Here's few things to avoid "ugly" dups and long-long methods: Don't duplicate variables Like your mainContainer variable, because you used one variable for each fragment and pass it to another global variable: postmainFragment = new Fragment_PostMain(); // global variable outside fragmentChange() mainContainer = postmainFragment; // global variable outside fragmentChange() ... postaccountFragment = new Fragment_PostAccount(); mainContainer = postaccountFragment; ... It should be better to use just one variable, and you will avoid duplicate allocations: Fragment mainContainer = null; // inside fragmentChange() ... mainContainer = new Fragment_PostMain(); // directly set the fragment ... mainContainer = new Fragment_PostAccount(); Therefore, you will not create global fragments variables for each fragments in parent class (postmainFragment, postaccountFragment, etc). 
Use "switch" as @dacories said You can use integers instead of Strings (it might be faster), for example: private final static int FRAGMENTCHANGE_FADEIN = 0; private final static int FRAGMENTCHANGE_FLIP_FORWARD = 1; private final static int FRAGMENTCHANGE_FLIP_BACK = 2; ... private void fragmentChange(Bundle data) { int transition = data.getInt("transition"); switch(transition) { case FRAGMENTCHANGE_FADEIN: ... case FRAGMENTCHANGE_FLIP_FORWARD: ... } } (a Bundle is not an array, so the value is read back with getInt under whatever key it was stored.) You can do the same for fragments... Do a general method and avoid long-long switch Instead of passing an array and switching between all your fragments in the fragmentChange method, you could pass the fragment concerned directly: private void fragmentChange(Fragment frag, String tag, int transition) { if (frag != null) { ... FragmentTransaction ft = getFragmentManager().beginTransaction(); switch (transition) { default: case FRAGMENTCHANGE_FADEIN: ft.setCustomAnimations(R.anim.fade_in, R.anim.fade_out); break; case FRAGMENTCHANGE_FLIP_FORWARD: ft.setCustomAnimations(R.animator.card_flip_right_in, R.animator.card_flip_right_out); break; case FRAGMENTCHANGE_FLIP_BACK: ft.setCustomAnimations(R.animator.card_flip_left_in, R.animator.card_flip_left_out); break; } ft.replace(R.id.main_fragment, frag, tag).commit(); } } So, you could directly call the change method as follows: fragmentChange(new Fragment_PostMain(), "PostMain", 0); You will in this case avoid a global fragment variable mainContainer which is constantly allocated in the whole class and avoid a very long switch or if/else condition. But, if I want to pass data to the next fragment? One solution could be using static factory methods to pass data with setArguments. Each fragment class can expose a method that attaches the data and returns just the fragment, which you then hand to the fragmentChange method.
An example might be in each fragment: public static Fragment_PostMain newInstance(int i, String n, boolean b) { Fragment_PostMain frag = new Fragment_PostMain(); Bundle datas = new Bundle(); datas.putInt("index", i); datas.putString("name", n); datas.putBoolean("flag", b); frag.setArguments(datas); return frag; } (Bundle's put methods take a String key plus the value; the keys here are just illustrative.) You could retrieve this data with the getArguments() method in the fragment. And call the method by passing the fragment with its data: Fragment_PostMain frag = Fragment_PostMain.newInstance(12, "NameSection", true); fragmentChange(frag, "PostMain", 0); Like this, you won't have to create a Bundle array in the method. Finally Your solution with the long if/else can be made more readable with a switch statement, yes. My solutions above will be more readable with less code, yes again. However, you will have to think of what you really need: a long conditional (which can be easier to change in the future in one place for the whole class) or a short version (with class customization and direct access to the data). I hope this will help you. Just a piece of advice: in Java, the standard naming convention is that class names begin with an uppercase letter whereas method names begin with a lowercase one.
{ "domain": "codereview.stackexchange", "id": 16537, "tags": "java, android" }
Splitwise clone done right
Question: I have started this project basically for learning perspective and wanted to learn good object oriented design. What I am trying to do is making clone of something like this but this is a command line version. There are lots of bits and pieces of this application that I would like to be reviewed in detail. Separating I/O logic from the business logic. Deciding the roles and responsibilities of each object involved (Separation of Concern?). Loose coupling Open for maintainability and flexibility. And would love to hear any other reviews too. Disclaimer: The code is nearly 500 lines long but I needed to give enough examples to be understood by others. Code: 'use strict'; const GROUPS = new Set(); const ACCOUNTS = new Set(); class GroupRepository { add(group) { GROUPS.add(group); } remove(group) { return GROUPS.delete(group); } filter(id) { return [...GROUPS].filter(g => g.name === id); } all() { return [...GROUPS]; } } class AccountRepository { add(account) { ACCOUNTS.add(account); } filter(id) { return [...ACCOUNTS].filter(acc => acc.name === id); } remove(account) { return ACCOUNTS.delete(account); } all() { return [...ACCOUNTS]; } } class Group { constructor(name) { this.name = name; this.accounts = new Set(); } getName() { return this.name; } add(account) { this.accounts.add(account); } delete(account) { return this.accounts.delete(account); } getAccounts() { return [...this.accounts]; } toString() { return `Group: ${this.name}`; } } class Transaction { constructor(account, amount, linkId, description='') { this.timestamp = new Date(); this.amount = amount; this.account = account; this.linkId = linkId; this.description = description; } toString() { let dd = this.timestamp.getDate(); let mm = this.timestamp.getMonth(); let yy = this.timestamp.getFullYear(); if (this.amount < 0) { return `${dd}/${mm}/${yy} ${this.description}- You get back ${Math.abs(this.amount)}`; } else { return `${dd}/${mm}/${yy} ${this.description}- You pay back 
${Math.abs(this.amount)}`; } } } class Account { constructor(name, balance) { this.name = name; this.balance = balance; } debit(amount) { this.balance -= amount; } credit(amount) { this.balance += amount; } toString() { return `${this.name.charAt(0).toUpperCase() + this.name.slice(1)}`; } } class AccountManager { constructor(groupRepository, accountRepository) { this.groups = groupRepository; this.accounts = accountRepository; this.transactions = []; } register(account) { this.accounts.add(account); } registerAll(iterable) { for (let account of iterable) { this.register(account); } } get(name) { return this.accounts.filter(name); } addGroup(group) { this.groups.add(group); } getGroup(name) { return this.groups.filter(name); } transfer(amount, from, to) { const accounts = this.accounts.all(); if (accounts.indexOf(from) === -1 || accounts.indexOf(to) === -1) { throw new Error(`Invalid account entry ${from} ${to}`); } if (from === to) return; from.debit(amount); to.credit(amount); this.transactions .push(new Transaction(from, -amount, to)); this.transactions .push(new Transaction(to, +amount, from)); } history(user) { console.log('Transaction history for ' + user); return this.transactions.filter((t) => { return t.account === user; }).map(t => t.toString()); } balance() { const accounts = this.accounts.all(); function recur(accounts) { let maxCredit = findMax(accounts); let minCredit = findMin(accounts); if (maxCredit.balance === 0 && minCredit.balance === 0) { return; } let maxOfTwo = minCredit.balance > maxCredit.balance ? 
minCredit : maxCredit; console.log(`${maxCredit} owes ${minCredit} ${maxOfTwo.balance.toFixed(2)}`); minCredit.credit(maxOfTwo.balance); maxCredit.debit(maxOfTwo.balance); recur(accounts); } recur(accounts); } } function findMax(accounts) { let max = accounts[0]; for (let account of accounts) { if (account.balance >= max.balance) { max = account; } } return max; } function findMin(accounts) { let min = accounts[0]; for (let account of accounts) { if (account.balance <= min.balance) { min = account; } } return min; } const accountRepository = new AccountRepository(); const groupRepository = new GroupRepository(); function example1(input) { const sender = input.split(':')[0]; const message = input.split(':')[1]; const amount = message.split('|')[0]; const RE = /([A-Z]{2,13})(, \1)*( "(.*)")?/g; console.log(message.match(RE)[3]); console.log(` LQ: 40.00|LQ,FP,MD,GR "Dinner out" FP owes LQ 10.00 MD owes LQ 10.00 GR owes LQ 10.00 `); let AM = new AccountManager(groupRepository, accountRepository); let LQ = new Account('LQ', 0); let FP = new Account('FP', 0); let MD = new Account('MD', 0); let GR = new Account('GR', 0); AM.registerAll([LQ, FP, MD, GR]); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message.split('|')[1])); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); console.log(AM.history(LQ)); console.log(AM.history(FP)); } example1('LQ: 40.00|LQ,FP,MD,GR "Dinner out"'); function example2() { let sender = 'LQ'; let message = 'LF'; let amount = 10; console.log(` LQ: 10|LF LF: 10|GR GR owes LQ 10.00 `); const AM = new AccountManager(groupRepository, accountRepository); let LQ = new Account('LQ', 0); let LF = new Account('LF', 0); let GR = new Account('GR', 0); AM.registerAll([LQ, LF, GR]); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } sender = 'LF'; message = 'GR'; amount = 10; res = 
getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); } example2(); function example3() { let sender = 'SC'; let message = 'EM,SC,RC'; let amount = 16.5; console.log(` SC: 16.5|EM,SC,RC RC: 8.00|EM,GP O/P EM owes SC 9.50 GP owes SC 1.50 GP owes RC 2.50 `); const AM = new AccountManager(groupRepository, accountRepository); let SC = new Account('SC', 0); let RC = new Account('RC', 0); let EM = new Account('EM', 0); let GP = new Account('GP', 0); AM.registerAll([SC, RC, EM, GP]); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } sender = 'RC'; message = '8|EM,GP'; amount = 8; res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); } example3(); function example4() { let sender = 'LQ'; let message = 'LQ,FP,MD,GR'; let amount = 40.00; console.log(` LQ: 40.00|LQ,FP,MD,GR "Dinner out" GR: 15.00|GR,LQ,MD "Uber" MD owes LQ 15.00 FP owes LQ 10.00 `); const AM = new AccountManager(groupRepository, accountRepository); let LQ = new Account('LQ', 0); let FP = new Account('FP', 0); let MD = new Account('MD', 0); let GR = new Account('GR', 0); AM.registerAll([LQ, FP, MD, GR]); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } sender = 'GR'; message = 'GR,LQ,MD'; amount = 15.00; res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); } example4(); function example5() { let sender = 'LQ'; let message = 'MM+2*3,LQ*2,FP'; let amount = 62.00; console.log(` LQ: 62|MM+2*3,LQ*2,FP `); const AM = new AccountManager(groupRepository, accountRepository); 
let MM = new Account('MM', 0); let LG = new Account('LG', 0); let PB = new Account('PB', 0); AM.registerAll([MM, LG, PB]); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); } example5(); function groupExample() { let sender = 'LQ'; let message = 'SUSHILOVERS+6,MD*2'; let amount = 132.00; console.log(` LQ: 132|SUSHILOVERS+6,MD*2 FP owes LQ 28.80 MB owes LQ 28.80 MD owes LQ 45.60 `); let AM = new AccountManager(groupRepository, accountRepository); let LQ = new Account('LQ', 0); let FP = new Account('FP', 0); let MB = new Account('MB', 0); let MD = new Account('MD', 0); AM.registerAll([LQ, FP, MB, MD]); const G = new Group('SUSHILOVERS'); G.add(LQ); G.add(FP); G.add(MB); AM.addGroup(G); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); } groupExample(); function duplicateGroup() { let sender = 'LQ'; let message = 'SUSHILOVERS,MOVIEBUFFS'; let amount = 210.00; console.log(` LQ: 210|SUSHILOVERS,MOVIEBUFFS !!Should Fail!! 
`); let AM = new AccountManager(groupRepository, accountRepository); let AC = new Account('AC', 0); let LQ = new Account('LQ', 0); let FP = new Account('FP', 0); let MB = new Account('MB', 0); let MD = new Account('MD', 0); AM.registerAll([AC, LQ, FP, MB, MD]); const G = new Group('SUSHILOVERS'); G.add(LQ); G.add(FP); G.add(MB); AM.addGroup(G); const G2 = new Group('MOVIEBUFFS'); G2.add(AC); G2.add(MB); AM.addGroup(G2); let res = getCreditTransferInfo(amount, parseCreditInfo(AM, message)); for (let a of res) { AM.transfer(a.credit, AM.get(sender)[0], AM.get(a.account)[0]); } AM.balance(); } duplicateGroup(); function getCreditTransferInfo(amount, creditInfo) { const mul = []; const add = []; const res = []; const accounts = []; for (let c of creditInfo) { accounts.push(c.substring(0, 2)); if (c.indexOf('*') !== -1) { mul.push(parseFloat(c[c.indexOf('*')+1])); } else { mul.push(1); } if (c.indexOf('+') !== -1) { add.push(parseFloat(c[c.indexOf('+')+1])); } else { add.push(0); } } let each = (amount - add.reduce((x, sum) => x + sum, 0)) / mul.reduce((x, sum) => x + sum, 0); for (let i = 0; i < accounts.length; i++) { let amount = each * mul[i] + add[i]; res.push({ account: accounts[i], credit: each * mul[i] + add[i] }); } //console.log("Transfer Info"); //console.log(res); return res; } function getGroupAccounts(AM, name) { let group = AM.getGroup(name)[0]; return group.getAccounts().map(a => a.name); } function parseCreditInfo(AM, message) { const RE = /([A-Z]{2, 12})/g; const duplicates = new Set(); const accounts = []; let name = ''; let count = 0; let expr = ''; for (let i = 0; i < message.length; i++) { let char = message.charAt(i); if (char >= 'A' && char <= 'Z') { name += char; count = name.length; } else if (char === ',') { if (count === 2) { accounts.push(name + expr); duplicates.add(name); } else if (count > 2) { for (let account of getGroupAccounts(AM, name)) { duplicates.add(account); accounts.push(account+expr); } } name = ''; expr = ''; count = 0; } 
else if (char === '*' || char === '+' || (char >= '0' && char <= '9')) { expr += char; } } //console.log('Parse Result'); // if last name is group let group = AM.getGroup(name); if (group.length) { for (let account of getGroupAccounts(AM, name)) { duplicates.add(name); accounts.push(account+expr); } } else { duplicates.add(name); accounts.push(name+expr); } // Throw if duplicate entry if (duplicates.size < accounts.length) { throw new Error('Duplicate entry'); } //console.log(accounts); return accounts; } Answer: From staring at the code, this is my feedback:
- GroupRepository, AccountRepository are so similar, they should share code
- delete is a reserved syntax word, you should not use it as a function name
- I like class Group, very clean
- Transaction does not support multi-currency, something to consider
- I see trouble on the horizon if you allow zero amount transactions, I would kick those back
- GroupRepository.filter(id) should really be GroupRepository.filter(name) since the caller does provide a name
- Not sure a silent return without UI message is the way to go here: if (from === to) return;
- I would declare accountRepository and groupRepository all the way at the top
- Not sure what example11 does in production code, are you making sure we are paying attention ;)
- Please put your test(ing) code in a different file, this will bite you otherwise at some point
- Your code is very clean, I checked on jshint.com. You only have 2 unused variables (amount and RE)
- I am not sure what balance really does, needs more comments
- There are no obvious UI functions, so it is hard to look at separating I/O, except to say perhaps good job ;)
- Separation of Concern looks okay, I probably would have had a Transaction class
- This looks pretty good for maintainability/flexibility so far
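On the first point — the two repositories differing only in which Set they wrap — one possible consolidation is a single generic repository parameterised by a key function (a sketch; the class and its API shape are my own suggestion, not from the original code):

```javascript
'use strict';

// One repository class instead of the duplicated Group/Account versions.
// Each instance owns its backing Set, so the module-level GROUPS/ACCOUNTS
// globals also become unnecessary.
class Repository {
    constructor(keyOf = item => item.name) {
        this.items = new Set();
        this.keyOf = keyOf;
    }
    add(item) { this.items.add(item); }
    remove(item) { return this.items.delete(item); }
    filter(key) { return [...this.items].filter(i => this.keyOf(i) === key); }
    all() { return [...this.items]; }
}

const accounts = new Repository();
accounts.add({ name: 'LQ', balance: 0 });
console.log(accounts.filter('LQ').length); // 1
```

AccountManager would then receive two Repository instances, exactly as it already does, with no other call sites changing.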
{ "domain": "codereview.stackexchange", "id": 23366, "tags": "javascript, object-oriented, programming-challenge, design-patterns" }
Kappa near 60% in an unbalanced (1:10) data set
Question: As mentioned before, I have a classification problem and an unbalanced data set. The majority class contains 88% of all samples. I have trained a Generalized Boosted Regression model using gbm() from the gbm package in R and get the following output:

interaction.depth  n.trees  Accuracy  Kappa  Accuracy SD  Kappa SD
1                  50       0.906     0.523  0.00978      0.0512
1                  100      0.91      0.561  0.0108       0.0517
1                  150      0.91      0.572  0.0104       0.0492
2                  50       0.908     0.569  0.0106       0.0484
2                  100      0.91      0.582  0.00965      0.0443
2                  150      0.91      0.584  0.00976      0.0437
3                  50       0.909     0.578  0.00996      0.0469
3                  100      0.91      0.583  0.00975      0.0447
3                  150      0.911     0.586  0.00962      0.0443

Looking at the 90% accuracy I assume that the model has labeled all the samples as the majority class. That's clear. And what is not transparent: how Kappa is calculated. What do these Kappa values (near 60%) really mean? Is it enough to say that the model is not classifying them just by chance? What do Accuracy SD and Kappa SD mean? Answer: The Kappa is Cohen's Kappa score for inter-rater agreement. It's a commonly-used metric for evaluating the performance of machine learning algorithms and human annotators, particularly when dealing with text/linguistics. What it does is compare the level of agreement between the output of the (human or algorithmic) annotator and the ground truth labels, to the level of agreement that would occur through random chance. There's a very good overview of how to calculate Kappa and use it to evaluate a classifier in this stats.stackexchange.com answer here, and a more in-depth explanation of Kappa and how to interpret it in this paper, entitled "Understanding Interobserver Agreement: The Kappa Statistic" by Viera & Garrett (2005). The benefit of using Kappa, particularly in an unbalanced data set like yours, is that with a 90-10% imbalance between the classes, you can achieve 90% accuracy by simply labeling all of the data points with the label of the more commonly occurring class.
The Kappa statistic describes how well the classifier performs above that baseline level of performance. Kappa ranges from -1 to 1, with 0 indicating no agreement between the raters, 1 indicating a perfect agreement, and negative numbers indicating systematic disagreement. While interpretation is somewhat arbitrary (and very task-dependent), Landis & Koch (1977) defined the following interpretation system which can work as a general rule of thumb:

Kappa       Agreement
< 0         Less than chance agreement
0.01–0.20   Slight agreement
0.21–0.40   Fair agreement
0.41–0.60   Moderate agreement
0.61–0.80   Substantial agreement
0.81–0.99   Almost perfect agreement

This would indicate that your algorithm is performing moderately well. Accuracy SD and Kappa SD are the respective standard deviations of the Accuracy and Kappa scores. I hope this is helpful!
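To make the chance correction concrete, here is a small plain-Python sketch (the 90/10 split mirrors the imbalance discussed above; the labels are made up): the majority-class-everywhere strategy gets 90% accuracy but a Kappa of exactly 0.

```python
def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for two label sequences: (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    p_e = sum(  # agreement expected from the marginals alone
        (y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels
    )
    return (p_o - p_e) / (1 - p_e)

# 90/10 imbalance: predict the majority class everywhere
y_true = ["maj"] * 90 + ["min"] * 10
y_pred = ["maj"] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                     # 0.9, despite learning nothing
print(cohen_kappa(y_true, y_pred))  # 0.0: no agreement above chance
```

A Kappa around 0.58 on a data set like this therefore really does reflect signal beyond the class imbalance.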
{ "domain": "datascience.stackexchange", "id": 110, "tags": "r, class-imbalance, gbm" }
Why does O2NCl only have two resonance structures?
Question: Why is it that $\ce{O2NCl}$ has only two resonance structures? More specifically, why is the Lewis drawing where $\ce{N}$ is double bonded to $\ce{Cl}$ not considered a resonance structure along with the other two where $\ce{N}$ is double bonded to an oxygen? Answer: $\ce{O2NCl}$ does not have two or three resonance structures but rather almost infinitely many that have different probabilities. Resonance structures will never bring you close to the actual picture; for the true picture to evolve you will need to do molecular orbital calculations. That said, resonance structures are often a somewhat good approximation if weighted accordingly. This last expression in italics is important. Not all resonance structures are equal. People teaching resonance structures usually use expressions like most probable resonance structures. To determine whether a resonance structure is more probable than another one, check the following list: does it give all main group atoms a valence electron octet? are formal charges minimised? are formal charges distributed in accordance with electronegativity (i.e. do electronegative atoms have more formal negative charge than electropositive ones)? The list is ordered from most important to not so important although traditional teaching sometimes weighs maximised amount of bonds higher and thus arrives at hyperoctet structures. Now let’s check the structures of $\ce{O2NCl}$ that you are talking about: We see that all structures fulfill rule 1. However, rule 2 clearly shows a difference: The two structures on the left contain $+1$ and $-1$ formal charges each while the rightmost structure has $+2$ and $-2$. We don’t need to check rule three, we can see that the rightmost structure is unfavourable and thus less close to the truth. Finally, the formal charges are distributed correctly on all three structures, because oxygen is more electronegative. 
Therefore, the left two structures are equally probable while the rightmost one is not so; oftentimes in exams, only the left two will be considered ‘correct’. But as I said, the true picture (or something really close to the truth) can only be determined by quantum chemical calculations.
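If it helps to make rule 2 concrete, the bookkeeping uses the standard formal-charge formula $\mathrm{FC} = V - N_{\text{lone}} - B/2$; a quick sketch (the lone-pair and bonding-electron counts below are the usual Lewis assignments, not a quantum-chemical result):

```python
def formal_charge(valence, lone_electrons, bonding_electrons):
    """FC = valence electrons - nonbonding electrons - bonding electrons / 2."""
    return valence - lone_electrons - bonding_electrons // 2

# One of the two favourable structures: N=O, N-O(-), N-Cl
favourable = {
    "N":  formal_charge(5, 0, 8),  # four bonds, no lone pairs -> +1
    "O=": formal_charge(6, 4, 4),  # double-bonded O -> 0
    "O-": formal_charge(6, 6, 2),  # single-bonded O -> -1
    "Cl": formal_charge(7, 6, 2),  # single-bonded Cl -> 0
}

# The rightmost structure: N=Cl with two single-bonded oxygens
n_double_cl = {
    "N":   formal_charge(5, 0, 8),  # +1
    "Cl=": formal_charge(7, 4, 4),  # double-bonded Cl -> +1
    "O-a": formal_charge(6, 6, 2),  # -1
    "O-b": formal_charge(6, 6, 2),  # -1
}

print(favourable)   # {'N': 1, 'O=': 0, 'O-': -1, 'Cl': 0}
print(n_double_cl)  # {'N': 1, 'Cl=': 1, 'O-a': -1, 'O-b': -1}
```

The tallies reproduce the comparison above: $+1/-1$ for the favourable structures versus $+2/-2$ in total for the $\ce{N=Cl}$ one.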
{ "domain": "chemistry.stackexchange", "id": 4326, "tags": "bond, resonance, lewis-structure" }
Deconvolve a FIR-filter using limits? Or: Are limits distributive with the inverse Fourier transform?
Question: Given $y[n] = f[n] * h[n]$ where $y[n]$ and $f[n]$ are two one-dimensional discrete signals that are given, find out: (1) if $h[n]$ is a FIR-filter, and (2) if it is a FIR-filter, what its kernel is. Obviously, this can be done by doing deconvolution: $h[n] = y[n] *^{-1} f[n]$. Performing deconvolution (written here as $*^{-1}$) in the time domain is of course computationally expensive. However, it is possible to go to the frequency-domain instead: $$h[n] = y[n] / f[n] = \mathcal{F^{-1}}(Y ./ F) $$ (Where $ ./$ is pairwise division and $Y$ and $F$ are the (discrete) Fourier transformations of $y[n]$ and $f[n]$ respectively, that is $Y = \mathcal{F}(y[n])$, $F = \mathcal{F}(f[n])$). Many existing algorithms use this property, since pairwise division has a lower time-complexity than direct deconvolution. However, these algorithms claim to struggle with zero-valued frequencies: If $F$ contains a zero, then we will divide by zero during the pairwise division. My question is: Is it possible to calculate the outcomes using limits here? After all: \begin{align} \lim_{k\to 0^+} \frac{x}{k} &= \lim_{l\to \infty} l \tag{$0^+$}\\ \lim_{k\to 0^-} \frac{x}{k} &= \lim_{l\to -\infty} l \tag{$0^-$}\\ \mathcal{F}(x) &= \int_{-\infty}^{\infty} f(x)\ e^{-2\pi i x}\,dx\tag{FT}\\ \mathcal{F^{-1}}(x) &= \int_{-\infty}^{\infty} f(x)\ e^{2\pi i x}\,dx\tag{IFT}\\ \int_{-\infty}^{\infty} f(x)\,dx &= \lim_{h \to 0} \sum h\,\frac{f(x)+f(x-h)}{2} \tag{sum limit integral}\\ \end{align} So the Fourier transform (and the inverse Fourier transform) are defined in terms of integrals, which are themselves defined in terms of limits. Limits distribute over summation, so it seems to me that it would be possible to simply use IEEE 754 floating-point numbers (which follow the limit-rules for division by zero) to find a correct answer, which would allow for a very fast computer implementation of a deconvolution algorithm.
As an aside, I wonder if the fact that $h[n]$ is a FIR-filter (and linear, time-invariant and causal) is useful in some way. Is my reasoning correct, or is it flawed somewhere? Answer: Integral definition So the Fourier transform (and the inverse Fourier transform) are defined in terms of integrals, which are themselves defined in terms of limits. Nope, the integrals are not just the limits of sums here. That would be true for Riemann integrals over smooth functions (try doing that difference quotient at a place where $f$ isn't continuous). It's not how the Lebesgue integral that we need to use when defining the Fourier transformation is defined. And, especially, if you understand the DFT as a "special use case" of the continuous FT, then you have to assume periodic repetition of the signal vector you're observing. Periodicity leads to line spectra. These are, by definition, not continuous. So, no, your last equation isn't right. Fourier Transform definitions Your equations $(\text{FT})$ and $(\text{IFT})$ aren't how we write "The Fourier transformation of $f(x)$ is ...": Your left side of the equation doesn't say you're transforming $f(x)$; it doesn't mention $f$ at all! One common way I see the "Fourier transform of $f$" (which is a function of the frequency $\omega$) being written is \begin{align} \mathcal{F}\{f\}(\omega) \end{align} and while we're at it, the right side of these equations is wrong, too: you're missing the $\omega$ (or whatever target-domain free variable you want to use) in the exponent. Limit $l$ Now, the first two equations: \begin{align} \lim_{k\to 0^+} \frac{x}{k} &= \lim_{l\to \infty} l \tag{$0^+$}\\ \lim_{k\to 0^-} \frac{x}{k} &= \lim_{l\to -\infty} l \tag{$0^-$} \end{align} stand a bit on their own here, because you don't explain what $l$ is meant to be, but: The right-hand sides of $(\text{$0^+$})$ and $(\text{$0^-$})$ don't exist.
I mean, you literally write "when we don't fix $l$ to any actual value, but let it simply pine for infinity, what's the actual value?". So, that actual value then doesn't exist. Finding a substitute value of a function in a singularity Is my reasoning correct, or is it flawed somewhere? Yeah, at least the way it's written down, it's mathematically flawed. Of course, you're right from a functional analysis point of view: your quotient function simply has a couple of singularities. That's not a problem per se! Some of these singularities can actually be solved by finding a series on a neighborhood of that point that is equal to the quotient in all but that point, and converges, and then "fixing" the value at that point. However, that puts a huge restriction on your quotient function: It needs to be holomorphic in the neighborhood of that singularity for that series to exist. Sadly, quotients of holomorphic functions are only holomorphic where the denominator isn't zero. So, bad luck going the limit route! What you're looking for is the residue theorem! I'll really quickly outline the idea: your quotient function is generally holomorphic, aside from a discrete set of points (where your denominator is 0 or your numerator goes towards infinity). If a function is holomorphic, then it has the (funky) property that if you have a "disk" (a non-intertwining surface with a closed boundary) on which it is defined, then you can fully infer the function from the values on the boundary of that disk. You might have met that, if you're an EE, used in Stokes' theorem (in electrodynamics). So, you just define a small disk around your singularities with radius $\gamma$, and let that radius get smaller. That should define how your quotient should behave in the singularity. Problems The above thing is a very analytical way of describing a system.
Yes, if you have an analytic description of your quotient (hint: you do, see what sampling does and how we mathematically define the DFT as finite sum over holomorphic functions), you can apply that knowledge. The practical problem arises from the fact that your quotient is based on an observation. Observations are noisy. Where an observed quality is close to 0, the noise component becomes dominant. If you divide by that, you get a huge amplification of noise! So, whilst nice analytically, that's not the way you want to go in the real world. That's why all the deconvolution algorithms (there's actually quite a few) apply assumptions on the system being convolved with, and minimize some error term or maximize some likelihood term.
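The noise warning at the end can be made concrete with a few lines of code. Below is a minimal Python/NumPy sketch (the signals $f$ and $h$ and the regularization constant are made-up choices of mine, not from the question): plain pairwise division recovers the FIR kernel exactly when $F$ has no zeros, while a Wiener/Tikhonov-style regularized division $Y\bar{F}/(\lvert F\rvert^2 + \varepsilon)$ is the kind of assumption-laden workaround the last paragraph alludes to.

```python
import numpy as np

# Hypothetical example signals: a short FIR kernel h and an input f
# whose DFT has no zeros (since |0.5| < 1, 1 + 0.5 e^{-jw} never vanishes).
h = np.array([0.2, -0.1, 0.3])
f = np.array([1.0, 0.5])
y = np.convolve(f, h)              # y[n] = f[n] * h[n], length len(f)+len(h)-1

N = len(y)                         # FFT length covering the full convolution,
Y = np.fft.fft(y, N)               # so circular convolution equals linear here
F = np.fft.fft(f, N)

# Exact pairwise division: only safe because F has no (near-)zero bins here.
h_exact = np.fft.ifft(Y / F).real[:len(h)]

# Regularized division (Wiener/Tikhonov style): stays bounded even where
# F ~ 0, at the cost of a small bias controlled by eps.
eps = 1e-6
h_reg = np.fft.ifft(Y * np.conj(F) / (np.abs(F)**2 + eps)).real[:len(h)]

print(h_exact)  # ~ [0.2, -0.1, 0.3]
print(h_reg)
```

With a noisy $y$, the unregularized quotient explodes wherever $\lvert F\rvert$ is small, which is exactly the amplification described above; the regularization trades that blow-up for a controlled bias.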
{ "domain": "dsp.stackexchange", "id": 7031, "tags": "fourier-transform, finite-impulse-response" }
don't work RGB in openni_kinect
Question: Hi, I have Ubuntu 11.10 with ROS electric. Please help. I installed the openni package: sudo apt-get install ros-electric-openni-kinect cd /opt/ros/electric/stacks/openni_kinect rosmake I disconnected the RGB camera, hooked up the Xtion and launched: roslaunch openni_launch openni.launch After rosrun image_view disparity_view image:=/camera/depth/disparity I can see that it works. But when I started rosrun image_view image_view image:=/camera/rgb/image_color I see only a grey window! It doesn't work! What's the problem? Please help; I have already tried everything. Originally posted by niki on ROS Answers with karma: 1 on 2014-08-26 Post score: 0 Answer: You're using a deprecated ROS version (electric) on a version of Ubuntu that is no longer supported (11.10) with a camera package that isn't released anymore (openni_kinect) and a camera whose chipset maker got bought by Apple and so will likely stop getting sold at some point (Xtion). If you switch to ROS indigo on Ubuntu 14.04 and use openni2_camera to read from the camera, you'll have a much better chance of finding people who know how to answer your questions. Originally posted by jbinney with karma: 606 on 2014-08-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by niki on 2014-08-27: Thanks. I have a QBO robot and its OpenQbo distro is a Linux distro based only on Ubuntu 11.10 (Oneiric Ocelot) with ROS electric. What should I do in this situation?
{ "domain": "robotics.stackexchange", "id": 19200, "tags": "ros, openni-kinect, asus-xtion-pro" }
Mathematical explanation of recursion and lambda (referenced in The Little Schemer)
Question: In the preface of Friedman and Felleisen's book The Little Schemer it states: We could, for example, describe the entire technical content of this book in less than a page of mathematics, but a reader who understands that page has little need for that book. Has anyone got an online link to the equivalent of this mathematical summary of The Little Schemer in a single page? Presumably this would cover recursion, lambda and the y-combinator. Answer: What Sam said. Also, it's really well under a page. If you're familiar with evaluation contexts, you can specify the call-by-value lambda calculus like this: Terms $$M ::= x \mid (M \, M) \mid (\lambda x . M)$$ Values $$V = (\lambda x . M)$$ Evaluation contexts $$E ::= [\:] \mid ([\:] M) \mid (V [\:])$$ The (only) reduction rule: $$E[((\lambda x . M) V)] \to E[M.\mathrm{subst}(V,x)]$$ where $.\mathrm{subst}$ denotes capture-avoiding substitution. Again, though, the background knowledge involved in reading this definition is by no means self-evident. There are many free places on the web to read about it. For a tidy and well-typeset presentation, you might also be interested in Felleisen/Flatt/Findler's book, Semantics Engineering with PLT Redex. Holey Moley! I just googled for it, and got the full PDF online. Well, that won't last...
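To complement the page of math, here is a tiny illustrative sketch in Python (the tuple encoding and the names are my own, not from either book) of call-by-value evaluation. It uses closures and environments instead of literal capture-avoiding substitution, which gives the same results for closed terms while sidestepping the renaming bookkeeping that $.\mathrm{subst}$ hides.

```python
# Terms encoded as tuples (a hypothetical encoding for illustration):
#   ('var', name)          -- x
#   ('app', fun, arg)      -- (M M)
#   ('lam', param, body)   -- (lambda x . M)
def eval_cbv(term, env=None):
    """Call-by-value evaluation; closures stand in for substituted lambdas."""
    env = {} if env is None else env
    tag = term[0]
    if tag == 'var':
        return env[term[1]]                        # free variables are errors
    if tag == 'lam':
        return ('closure', term[1], term[2], env)  # lambdas are the values V
    if tag == 'app':                               # E[(\x.M) V] -> E[M[V/x]]
        fun = eval_cbv(term[1], env)               # evaluate operator first,
        arg = eval_cbv(term[2], env)               # then operand (CBV order)
        _, param, body, cenv = fun
        return eval_cbv(body, {**cenv, param: arg})

# ((lambda x . x) (lambda y . y)) reduces to (lambda y . y)
ident_y = ('lam', 'y', ('var', 'y'))
result = eval_cbv(('app', ('lam', 'x', ('var', 'x')), ident_y))
print(result[:3])  # ('closure', 'y', ('var', 'y'))
```

A substitution-based evaluator that follows the reduction rule literally would need fresh-variable renaming to stay capture-avoiding; the environment trick is the standard shortcut for closed programs.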
{ "domain": "cstheory.stackexchange", "id": 1562, "tags": "computability, recursion, lisp, scheme" }
Is the entire Universe the same age?
Question: Is all of the visible Universe exactly the same age everywhere? What about the Universe beyond what we can see? Answer: Ben Hocking is right that the Universe we observe is not the same age everywhere: the further away we look, the younger things are, simply because the light from faraway objects takes time to reach us. But you may be wondering about more than that: what if we could remove those light-travel-time delays and consider the Universe as it is now, as opposed to the way we see it (with greater delays for more distant objects)? Would it all be the same age in that case? That turns out to be a more subtle question than you might think, because of relativity. Relativity says, among other things, that different choices of reference frame lead to different notions of time -- things that are simultaneous in one reference frame are not simultaneous in another. When you consider what a distant object is like now, you have to be careful to specify what reference frame you're talking about, because different frames lead to different "nows." At each point in spacetime, there is a reference frame that seems like the "most natural" one to use, namely the one in which the expansion of the Universe looks roughly the same in all directions. Cosmologists tend to use that reference frame to define and synchronize their clocks. That is, the time coordinate at any point in spacetime is, by definition, the amount of time that would have elapsed according to an observer who'd been at rest in that reference frame since the Big Bang. That time coordinate is often called "cosmic time." If you use cosmic time as your time coordinate, then it is true that, at any fixed moment of time, all points in the Universe have the same age. But it's true pretty much because of the way we defined our time coordinate, so this is kind of a vacuous statement! Update: Based on the comments, I realize I should have explained some things explicitly. 
I'm considering here "cosmological" effects, meaning I'm ignoring things that are due to small-scale inhomogeneities and thinking about large scales, on which the Universe is approximately homogeneous. If, for instance, you hang around near the horizon of a black hole, you'll age at a different rate from someone else. Even if you avoid such extreme cases, small-scale variations in the gravitational potential will lead to small-amplitude age variations. In fact, the definition of cosmic time only really makes sense in the approximation where we're willing to average over small-scale inhomogeneities. If you're not willing to do that, then different prescriptions for synchronizing your clocks, no one of which is obviously more natural than another, will lead to small differences in ages of things.
{ "domain": "physics.stackexchange", "id": 1187, "tags": "cosmology" }
Collision of charged black holes
Question: Suppose there are two charged black holes which collide to form a bigger black hole. But when they combine, a lot of potential energy of the system is lost/gained depending on their charges (the opposite or the same). Will it manifest as an increase/decrease in mass of the big black hole? If the mass were to come down (+ve ,+ve collision) will the resultant black hole shrink in size? Answer: The issue of particle annihilation is immaterial to the final mass of the merged black hole. If the traditional, "no-hair" view of gravitational collapse holds, and the particles lose their identity when crushed into a singularity, there would be no particle annihilation at all. If some newer and more exotic physics holds, such as string theory or loop quantum gravity, that rescues gravitational collapse from creating singularities, then even if the basic particles retain some sense of identity and can annihilate, these dynamics will still occur inside the event horizon of the black hole, and the energy released from the annihilation event will still be trapped inside the event horizon and register as mass from outside. The only issue at stake, then, is the bulk electrostatic potential energy as the two black holes approach each other. If the holes are oppositely charged, then potential energy will be converted to kinetic energy, and presumably some of this will get radiated away during the collision, resulting in a slightly lower mass for the resulting black hole. If the black holes are of like charge, then it will require more work to bring them together, and this work will probably end up reflected as a slightly larger mass of the resulting black hole. As a practical matter, however, the fractional difference in mass will be minute. All objects of astrophysical-scale masses, including black holes, will be found to have negligible net charge, due to the abundant presence of free electrons and ions in interstellar space. 
Any object in space with a large net charge will rapidly accrete free charged particles, neutralizing itself. For tiny black holes on the primordial or quantum scale, Stephen Hawking calculated that such a black hole can only have a net charge of a few electrons (eight, perhaps?); any more and not enough bound electron states could exist for such a "black hole atom" to be stable against the black hole nucleus accreting charged particles and neutralizing itself. I read this paper early in grad school and remember it relatively clearly, but so far I haven't been able to find the reference. Will update if I do. However, I did find http://arxiv.org/PS_cache/gr-qc/pdf/0001/0001022v1.pdf In this paper, on similar stability and half-life arguments, they claim that a primordial black hole could not have a charge greater than 70.
{ "domain": "physics.stackexchange", "id": 2961, "tags": "black-holes, charge, gravitational-waves" }
Light reflection, absorption and transmission
Question: According to what I know, reflection, absorption, and transmission all involve the absorption of photon's energy. In the case of reflection and transmission, the absorbed energy is re-emitted in the form of electromagnetic wave (which is not in the case of light absorption). What makes the difference between them? Is it possible that three of the processes happen at the same time for a certain material? Answer: The other replies may be somewhat confusing. It is indeed reasonable to consider absorption, reflection, and transmission all as sequences beginning with absorption of a photon. They indeed all happen simultaneously. The difference is in what happens to the energy after it is absorbed, and this is in part determined by how the photon was absorbed. If the photon was absorbed in such a way that its energy may be quickly transferred to some other excitation (say, through electron-electron scattering to another electron state or through electron-phonon scattering to a lattice vibration), then the photon energy remains in the material, and we call it “absorbed.” Alternatively, the photon can be absorbed into a “virtual state.” This is a state which coherently re-emits the photon. This re-emission is slightly delayed, which delays propagation through the material and results in the material having refractive index $>1$. The photon can be re-emitted in any direction allowed by the exciting polarization; however, due to a coherent interference of waves, you’ll only have a significant probability of finding the photon in one of the classical directions of the beam (i.e. forward into the material or in the reflection direction at an interface). 
The photon which was absorbed/re-emitted many times and emerges out the back of the material is called “transmitted.” The photon which was absorbed/re-emitted one or more times and emerges in the reflection direction we call “reflected.” Thus, for any single photon, it is either reflected, transmitted, or absorbed (I’m ignoring another category you might call “scattered”). For a beam of many photons, these are all happening simultaneously.
{ "domain": "physics.stackexchange", "id": 80422, "tags": "electromagnetism, reflection, absorption" }
Characteristics of plasma accelerators
Question: I've been reading about plasma accelerators but I do not understand the main differences between a plasma accelerator and a "normal" one. In the case of LHC for example, if we have free protons accelerated, wouldn't that be a plasma? In any case, do plasma accelerators have any advantages with respect to other types of accelerators? Answer: The difference between "normal" Radiofrequency (RF) accelerators and plasma-based accelerators is how the acceleration happens; it's not really to do with the beam itself. In an RF accelerator, particles are in a tube which is in vacuum and they are pushed by the electric field of radio waves. The limit to this is that the strongest possible electric field you can use is about 100 million Volts per meter. Beyond that, the electric field of the waves starts to ionise and rip apart the walls of the accelerator pipe, which disrupts the electric field accelerating the particles and can result in poor beam quality and potential collisions between the particle beam and the accelerator beam pipe. Plasma on the other hand is already a fully ionised, ripped apart and broken material, so it can sustain huge electric fields and nothing happens to it. Plasma accelerators use laser pulses (or occasionally particle beams) to create a wave inside the plasma by pushing electrons away from their starting position and allowing the protons left behind to pull them back towards where they started. This wave travels behind the laser like a wave behind a boat. If the conditions are correct, some electrons can become trapped inside the wave where they surf it, gaining a large amount of energy in a small distance. The accelerating electric field of the plasma wave can easily reach > 100 Billion Volts per meter, so to reach the same particle energy, a plasma accelerator can be 1/1000th the size of an RF accelerator.
{ "domain": "physics.stackexchange", "id": 48801, "tags": "plasma-physics, particle-accelerators" }
Simple micro-benchmarking library
Question: I'm working on a simple micro-benchmarking library in Java. This library makes it easy to benchmark multiple reference implementations. You provide the inputs and trigger the calls, the library takes care of executing N times, measuring the runs and printing the averaged results. Essentially, it works like this: Create a dedicated class for the benchmark Prepare input data in the constructor Create one method per reference implementation, annotate with @MeasureTime Add a main method to trigger the benchmark runner, with an instance of this class as parameter Run the class, find the results on stdout To set the number of warm-up iterations and iterations, use the @Benchmark annotation on the class, for example: @Benchmark(iterations = 10, warmUpIterations = 5) The annotations: @Retention(RetentionPolicy.RUNTIME) public @interface MeasureTime { int[] iterations() default {}; int[] warmUpIterations() default {}; } @Retention(RetentionPolicy.RUNTIME) public @interface Benchmark { int iterations() default BenchmarkRunner.DEFAULT_ITERATIONS; int warmUpIterations() default BenchmarkRunner.DEFAULT_WARM_UP_ITERATIONS; } The class that runs the benchmarks on a target object passed in, parameterized by annotations: import microbench.api.annotation.Benchmark; import microbench.api.annotation.MeasureTime; import microbench.api.annotation.Prepare; import microbench.api.annotation.Validate; import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.util.*; public class BenchmarkRunner { public static final int DEFAULT_ITERATIONS = 1; public static final int DEFAULT_WARM_UP_ITERATIONS = 0; private final Object target; private final int defaultIterations; private final int defaultWarmUpIterations; private final List<Method> measureTimeMethods = new ArrayList<>(); private final List<Method> prepareMethods = new ArrayList<>(); private final List<Method> validateMethods = new ArrayList<>(); public BenchmarkRunner(Object target) { 
this.target = target; Class<?> clazz = target.getClass(); Benchmark annotation = clazz.getAnnotation(Benchmark.class); if (annotation != null) { defaultIterations = annotation.iterations(); defaultWarmUpIterations = annotation.warmUpIterations(); } else { defaultIterations = DEFAULT_ITERATIONS; defaultWarmUpIterations = DEFAULT_WARM_UP_ITERATIONS; } for (Method method : clazz.getDeclaredMethods()) { if (method.getAnnotation(MeasureTime.class) != null) { measureTimeMethods.add(method); } else if (method.getAnnotation(Prepare.class) != null) { prepareMethods.add(method); } else if (method.getAnnotation(Validate.class) != null) { validateMethods.add(method); } } Collections.sort(measureTimeMethods, (o1, o2) -> o1.getName().compareTo(o2.getName())); } public static void run(Object target) { new BenchmarkRunner(target).run(); } public void run() { runQuietly(); } private void runQuietly() { try { runNormally(); } catch (InvocationTargetException | IllegalAccessException e) { e.printStackTrace(); } } private void runNormally() throws InvocationTargetException, IllegalAccessException { Map<Method, Throwable> validationFailures = new LinkedHashMap<>(); for (Method method : measureTimeMethods) { MeasureTime measureTime = method.getAnnotation(MeasureTime.class); if (measureTime != null) { try { runMeasureTime(target, method, measureTime); } catch (InvocationTargetException e) { Throwable cause = e.getCause(); if (cause instanceof AssertionError) { validationFailures.put(method, cause); printExecutionFailure(method); } else { throw e; } } } } if (!validationFailures.isEmpty()) { System.out.println(); for (Map.Entry<Method, Throwable> entry : validationFailures.entrySet()) { System.out.print("Validation failed while executing " + entry.getKey().getName() + ": "); System.out.println(entry.getValue()); } } } private void invokeMethods(Object instance, List<Method> methods) throws InvocationTargetException, IllegalAccessException { for (Method method : methods) { 
method.invoke(instance); } } private void runMeasureTime(Object instance, Method method, MeasureTime measureTime) throws InvocationTargetException, IllegalAccessException { for (int i = 0; i < getWarmUpIterations(measureTime); ++i) { invokeMethods(instance, prepareMethods); method.invoke(instance); invokeMethods(instance, validateMethods); } int iterations = getIterations(measureTime); long sumDiffs = 0; for (int i = 0; i < iterations; ++i) { invokeMethods(instance, prepareMethods); long start = System.nanoTime(); method.invoke(instance); sumDiffs += System.nanoTime() - start; invokeMethods(instance, validateMethods); } printExecutionResult(method, sumDiffs / iterations); } private void printExecutionInfo(String message, String ms) { System.out.println(String.format("%-60s: %10s ms", message, ms)); } private void printExecutionFailure(Method method) { printExecutionInfo("Validation failed while executing " + method.getName(), "-"); } private void printExecutionResult(Method method, long nanoSeconds) { printExecutionInfo("Average execution time of " + method.getName(), "" + nanoSeconds / 1_000_000); } private int getParamValue(int[] values, int defaultValue) { if (values.length > 0) { return values[0]; } return defaultValue; } private int getWarmUpIterations(MeasureTime measureTime) { return getParamValue(measureTime.warmUpIterations(), defaultWarmUpIterations); } private int getIterations(MeasureTime measureTime) { return getParamValue(measureTime.iterations(), defaultIterations); } } An example benchmark class: public class SimpleSortingDemo { private List<Integer> shuffledList; public SimpleSortingDemo() { shuffledList = new ArrayList<>(); for (int i = 0; i < 10000; ++i) { shuffledList.add(i); } Collections.shuffle(shuffledList); } public static void main(String[] args) { new BenchmarkRunner(new SimpleSortingDemo()).run(); } @MeasureTime public void bubbleSort() { BubbleSort.sort(new ArrayList<Integer>(shuffledList)); } @MeasureTime public void insertionSort() { 
InsertionSort.sort(new ArrayList<Integer>(shuffledList)); } } If you want to test drive it in your own projects, the GitHub project page explains nicely the steps to get started. I'd like a review in terms of everything, but here are some points you might want to pick on: What would you do differently? Is there a way to make the library easier to use? Is the implementation of BenchmarkRunner clear and natural? Is it adequate the way it measures the execution time? Are the annotation names intuitive and natural? (If not, can you suggest better names?) The @MeasureTime annotation returns int[] as iterations, which is sort of a dirty hack I use to treat it null by default, inheriting from @Benchmark.iterations or the global default. Is there a cleaner way to do this? Answer: I like the idea of this very small framework and it looks like a nice way to measure runtime quickly. I just have some minor comments. If-Statement for (Method method : measureTimeMethods) { MeasureTime measureTime = method.getAnnotation(MeasureTime.class); if (measureTime != null) { try { I think the if (measureTime != null) { is not needed, as this is the pre-condition for adding a method to measureTimeMethods. Code Documentation Your code is lacking any form of comments. In my humble opinion, at least every public method should be commented. While developing, comments might be useful on private methods as well. For example, to me it is not obvious (by reading the method names) what the difference between runQuietly() and runNormally() is. Comment ratio depends on your personal taste a lot though and is very dependent on the methods' names. Other Comments runQuietly() could probably be inlined. It is only used once in run(). runNormally(): Maybe rename to runMeasurements() or similar? runMeasureTime(): Maybe rename to measureMethodRuntime() or similar? getParamValue(): Maybe rename to getFirstOrDefault() or similar?
Someone with more experience implementing annotations than me should probably answer your questions about the annotation implementation, so I'll not comment on that.
{ "domain": "codereview.stackexchange", "id": 11241, "tags": "java, benchmarking" }
Generating lift by Naruto running
Question: "Naruto running" is running with your arms behind you. Many characters in the Naruto series are superhuman and can run way faster than a normal human. I thought that they run like this because they run so fast that swinging their arms will add drag instead of making them go faster. Ignoring whether that's true or not, how fast does a person have to run to create enough lift on their body to keep it always leaning forward and not tip over? (I imagine the body is like the wing of an airplane that never actually takes off because the thrust comes from the feet pushing the ground). EDIT: By leaning forward I meant having enough air resistance to push your body upwards instead of just falling to the ground when you lean forward. Basically like standing in strong winds, but you are the one moving instead of the air. (As I'm typing this I just realized that the answer might be "as fast as the speed of the wind it takes to make a person lean that far".) Answer: If you wanted to get an estimate of how fast you'd need to be moving in order to lift yourself with air deflection from running, you need to establish a couple of important parameters: Body weight $p$ Torso angle $\theta$ (defined such that $0^\circ$ would be running upright) Speed $s$ Torso length $\ell$ Torso width $w$ Air density $\rho$ Afterwards, all you need to do is a little bit of fluid mechanical force balancing.
You can estimate that your torso deflects all of the incoming air you come into contact with straight downwards, which is only a decent approximation if your torso angle is high, in which case the lift force you would feel is simply the change in vertical momentum of the air coming at you in your frame of reference: $$ \begin{align} L &= v_{\text{down}} \dot{m}_{\text{down}} \\[5px] &= v_{\text{down}} \dot{m}_{\text{front}} \\[5px] &= v_{\text{down}} \left(s w \ell \cos{\left(\theta\right)} \, \rho \right) \\[5px] &= \left(s\frac{\cos{\theta}}{\sin{\theta}}\right) \left(s w \ell \cos{\theta} \right) \\[5px] &= s^2 \cot{\theta} \, \cos{\theta} \, w \ell \rho \end{align} $$ You can see that the approximation is bad for small torso angles since it predicts that you'd shoot off into space if you ran normally $\left(\theta = 0^\circ \right) ;$ that's because air would most certainly not deflect downwards if you ran this way. To get liftoff, you want this lift to be equal to your body weight: $$L = pg$$ Let's plug in some numbers. If you weigh $p = 175$ pounds, are running with a torso angle of $\theta = 45^\circ$, have a torso length and width of $\ell = 0.5$ meters and $w = 0.4$ meters respectively, you can plug this in along with an air density of $\rho = 1.225$ kilograms per meter cubed and $g = 9.81$ meters per second squared to find what minimum speed you need to run at: $$s^2 \, \cot{\theta} \, \cos{\theta} \, w\ell\rho = pg$$ $$s^2 \times \frac{\left(0.2 \times 1.225\right)}{\sqrt{2}} \frac{\mathrm{kg}}{\mathrm{m}} = 778.45\ \mathrm{N}$$ $$s = 67.0332 \frac{\mathrm{m}}{\mathrm{s}} \approx 150\ \text{mph}$$ There are a great deal of things I haven't considered here, like the amount of body weight your feet are holding up by virtue of running, pressure changes along the vicinity of your body, etcetera. 
But you can always just slap on some safety factors and run three or four times as fast, since your speed would reduce back to the liftoff value (if not less) if your feet were lifted off the ground.
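The arithmetic above is easy to double-check with a few lines of code. Here is a small Python sketch of the same force balance with the same assumed parameters (the pound-to-kilogram conversion factor is my addition):

```python
import math

p_kg  = 175 * 0.45359237   # body weight, pounds converted to kilograms
theta = math.radians(45)   # torso angle (0 degrees = running upright)
l, w  = 0.5, 0.4           # torso length and width in meters
rho   = 1.225              # air density, kg/m^3
g     = 9.81               # gravitational acceleration, m/s^2

# Lift balance from the derivation: s^2 cot(theta) cos(theta) w l rho = p g,
# where cot(theta) * cos(theta) = cos^2(theta) / sin(theta).
coeff = (math.cos(theta) ** 2 / math.sin(theta)) * w * l * rho
s = math.sqrt(p_kg * g / coeff)

print(round(s, 1), "m/s")  # ~67 m/s, i.e. roughly 150 mph
```

The tiny discrepancy against the quoted $778.45\ \mathrm{N}$ comes from rounding in the unit conversion; the liftoff speed is insensitive to it.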
{ "domain": "physics.stackexchange", "id": 60099, "tags": "drag, aerodynamics, lift" }
ROS Answers SE migration: amcl question
Question: Hi everyone, I am new to ROS and I have a question about the amcl package. Does AMCL take dynamic obstacles into account, or is it for static obstacles only? Thanks for your help, Firat Originally posted by frt on ROS Answers with karma: 1 on 2013-03-26 Post score: 0 Answer: AMCL does not try to model dynamic obstacles. However, probabilistic methods like AMCL are generally fairly robust to things like dynamic obstacles, as long as they don't comprise most of your sensor data. For instance, a person walking by the robot while it navigates down a hallway is unlikely to cause it to lose localization -- whereas several people circling around the robot and blocking out the laser's view of the walls around it may very well cause it to lose localization. Originally posted by fergs with karma: 13902 on 2013-03-26 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13552, "tags": "navigation, amcl" }
In time series analysis, is taking a multi-period difference equivalent to a band-pass filter?
Question: For time series, a simple high-pass filter is obtained by subtracting the previous value from each value: $y(n) = x(n) - x(n-1)$ If I take a multi-period difference: $y(n) = x(n) - x(n - a)$ where $a > 1$ Is this equivalent to a band-pass filter which attenuates frequencies significantly above and below $1 / a$? If not, how can it be described in terms of high/low/band-pass filters? Answer: What you are describing is two cases of a more general form of a Comb Filter (I encourage you to go through the link, but I'll adapt to your particular case here): $$y(n) = x(n) + \alpha x(n-K) $$ with $K$ the delay in samples, and $\alpha$ the scaling factor applied to the delayed signal. In your case, you have $\alpha = -1$, which gives you $$y(n) = x(n) - x(n-K)$$ To figure out what kind of filter this is, let's move to the frequency domain: Take the $\mathcal{Z}$-transform and derive the transfer function: \begin{align*} Y(z) &= X(z) - z^{-K}X(z)= X(z)(1-z^{-K})\\\\ \implies H(z) &= \frac{Y(z)}{X(z)} = 1 - z^{-K} \end{align*} Substitute $z = e^{j\omega}$ to get the frequency response: $$H(\omega) = 1 - e^{-j\omega K}$$ At this point, you can note that for $\omega = 0$, $H(\omega) = 0$ so you can rule out "low-pass". Let's go further, and compute the magnitude response: \begin{align*} \vert H(\omega) \vert &= \vert 1 - e^{-j\omega K} \vert \\\\ &= \vert 1 - \cos{(\omega K)} + j\sin{(\omega K)} \vert \\\\ &= \sqrt{1 - 2 \cos{(\omega K)} + \cos^2{(\omega K}) + \sin^2{(\omega K})}\\\\ &= \sqrt{1 - 2 \cos{(\omega K)} + 1}\\\\ &= \sqrt{2 - 2 \cos{(\omega K)}}\\\\ &= \sqrt{2\vphantom{1 - \cos{(\omega K)}}} \cdot \sqrt{1 - \cos{(\omega K)}} \end{align*} Let's now analyze the magnitude response: $\vert H(\omega) \vert$ is periodic (since you have a cosine term). 
$\vert H(\omega) \vert = 0$ for $\omega = 2\pi / K$ since $\cos{(2\pi)} = 1$, but ALSO for $\omega = 4\pi / K$ since $\cos{(4\pi)} = 1$. As a matter of fact, $\vert H(\omega) \vert = 0$ (the nulls) for every integer $k$ such that: $$\omega = \frac{2\pi k}{K}$$ Similarly, the maxima (the peaks) happen when $\cos{(\omega K)} = -1$, at odd integers $k$ such that: $$\omega = \frac{\pi k}{K}$$ At these peaks, $\vert H(\omega) \vert = 2$. The resulting magnitude response looks like a comb, hence the term Comb Filter. Here are a few examples for different values of $K$ on a logarithmic scale, restricting ourselves to the normalized Nyquist frequency $\pi$, and normalizing by the peak value ($\vert H(\omega) \vert = 2$) so that the maxima fall at $0\,\texttt{dB}$.
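The closed-form magnitude and the null/peak locations can be sanity-checked numerically; here is a short NumPy sketch (the delay $K$ and the frequency grid are arbitrary choices of mine):

```python
import numpy as np

K = 4                                  # delay in samples (arbitrary choice)
w = np.linspace(0, np.pi, 10001)       # normalized frequency grid up to Nyquist
H = 1 - np.exp(-1j * w * K)            # H(w) = 1 - e^{-jwK}

# Closed form derived above: |H(w)| = sqrt(2 - 2 cos(wK))
assert np.allclose(np.abs(H), np.sqrt(2 - 2 * np.cos(w * K)))

# Evaluate the magnitude at a predicted null and a predicted peak.
mag = lambda x: abs(1 - np.exp(-1j * x * K))
print(mag(2 * np.pi / K))   # null at w = 2*pi*k/K: ~0
print(mag(np.pi / K))       # peak at odd multiples of pi/K: ~2.0
```

For a multi-period difference $y(n) = x(n) - x(n-a)$, setting `K = a` shows directly that the filter is a comb rather than a single band-pass: the first lobe is centered near $\pi/a$, but the response repeats up to Nyquist.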
{ "domain": "dsp.stackexchange", "id": 11433, "tags": "filters, frequency-response, time-series" }
C Logging Function
Question: I wrote this log function and it works but is there a better way to write into the file stream and console at the same time? //The function void log(int lvl, const char * format, ...) { va_list args; va_start(args, format); //If lvl < 0 the prefix will not be used if (lvl>=lINFO) { time_t now = time(NULL); struct tm tm = *localtime(&now); //Printing to the console printf("[%d-%02d-%02d %02d:%02d:%02d %s] [%s] : ", tm.tm_year+1900, tm.tm_mon+1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec, __log_name, parse_lvl(lvl) ); //Printing into the file //Checking if NULL (if yes the file won't be used) if(__log_filestream) fprintf(__log_filestream, "[%d-%02d-%02d %02d:%02d:%02d %s] [%s] : ", tm.tm_year+1900, tm.tm_mon+1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec, __log_name, parse_lvl(lvl) ); } //Printing to the console vprintf(format, args); printf("\n"); //Printing into the file if (__log_filestream) { vfprintf(__log_filestream, format, args); fprintf(__log_filestream, "\n"); } va_end(args); } Answer: You're doing printf and fprintf twice for the same arguments. So, it's replicating code. *printf is somewhat heavyweight. So, I'd do sprintf [or snprintf] to a buffer (e.g.): len = sprintf(buf,...); And then do: fwrite(buf,1,len,stdout); if (__log_filestream) fwrite(buf,1,len,__log_filestream); The overhead of using the buffer is less than calling a printf function twice. I'd repeat the same buffering for the vprintf/vfprintf. Or, better yet, just keep concatenating to the buffer and do a single [pair of] fwrite at the end. This is [yet] better performance.
And, this is especially helpful if there are multiple threads doing logging because the given log message will come out on a single line (e.g.): threadA timestamp | threadA message threadB timestamp | threadB message With two separate writes to a stream, two threads could intersperse partial parts of their message [not nearly as nice]: threadA timestamp threadB timestamp threadA message threadB message Note that this is enough to guarantee that we see nice/whole lines. But, it doesn't prevent a race for the ordering of two lines for each stream. That is, we could have [on stdout]: threadA line threadB line But, it might not prevent [on the logfile stream]: threadB line threadA line So, we could wrap the fwrite calls in a mutex. Or, we could just wrap the calls in a flockfile(stdout) / funlockfile(stdout) pairing (See: FORCE_SEQUENTIAL in the example below) If this function is the only function writing to the logfile and/or stdout, you might get even better performance by using write instead of fwrite [YMMV] Here's how I would refactor the code [and, if you're paranoid, you could use snprintf]: // The function void log(int lvl,const char *format,...) { va_list args; char buf[1000]; char *cur = buf; // If lvl < 0 the prefix will not be used if (lvl >= lINFO) { time_t now = time(NULL); struct tm tm; localtime_r(&now,&tm); // Format the prefix into the buffer cur += sprintf(cur,"[%d-%02d-%02d %02d:%02d:%02d %s] [%s] : ", tm.tm_year + 1900,tm.tm_mon + 1,tm.tm_mday, tm.tm_hour,tm.tm_min,tm.tm_sec, __log_name,parse_lvl(lvl)); } va_start(args, format); cur += vsprintf(cur,format,args); va_end(args); *cur++ = '\n'; *cur = 0; size_t len = cur - buf; // lock [either] stream to force both streams to come out "atomically" in // the same order #ifdef FORCE_SEQUENTIAL flockfile(stdout); #endif // Printing to the console fwrite(buf,1,len,stdout); // Printing into the file if (__log_filestream) fwrite(buf,1,len,__log_filestream); #ifdef FORCE_SEQUENTIAL funlockfile(stdout); #endif }
{ "domain": "codereview.stackexchange", "id": 39355, "tags": "c, file, logging" }
Kobuki Not Moving on Navigation - move_base issue
Question: Hi All, My set up is Kobuki with s/w on Raspberry Pi with RPLIDAR A2 and that feeding back to a laptop. I'm trying to get the amcl_demo to work: roslaunch turtlebot_navigation amcl_demo.launch map_file:=/home/mark/KitchenMap.map.yaml --screen The amcl seems to start without issue and reports: [ INFO] [1489096786.597987673]: Got new plan Eventually giving up with: [ WARN] [1489096805.398058019]: Rotate recovery behavior started. Which is when I would expect at least something to happen. But the robot just stays still. On the Pi side I see messages like: [ WARN] [1489097005.894847319]: Velocity Smoother : using robot velocity feedback end commands instead of last command: 0.0349276, 0, -0.3, [navigation_velocity_smoother] So something is not quite right. The only thing roswtf reports is: Found 1 error(s). ERROR The following nodes should be connected but aren't: * /move_base->/move_base (/move_base/global_costmap/footprint) * /move_base->/move_base (/move_base/local_costmap/footprint) Which I don't think is much of an issue. My rosgraph looks like this: rosgraph.png Any ideas what I've got missing or why this is happening? Many Thanks Mark Originally posted by MarkyMark2012 on ROS Answers with karma: 1834 on 2017-03-09 Post score: 0 Answer: I found this to be a topic remapping issue. Once remapped all was well. Mark Originally posted by MarkyMark2012 with karma: 1834 on 2017-10-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27269, "tags": "navigation, move-base, kobuki" }
Lightweight logging library in C
Question: I want to get some practice in writing C libraries and so started with something I can use straight away, a logging library. It is very basic and only has two levels: log everything and log only errors. Use logd (log debug) to log any low level thing and if log everything level set, that item will be logged. Use loge to log important stuff that will always be logged. Is there critical functionality missing here? Any bugs? Any usability issues? Style issues? log.h: /* lightweight logging library */ #ifndef LOG_H_ #define LOG_H_ #ifdef __cplusplus extern "C" { #endif typedef enum { LOG_ERRORS, LOG_EVERYTHING } log_level; /* configuration */ void log_init(const char* path, const char* filename, const char* file_extension); void set_log_level(log_level level); log_level get_log_level(); /* logging functions */ void logd(const char* format, ...); /* debug log */ void loge(const char* format, ...); /* error log */ /* cleanup / ancillary */ void flush_log(); void close_log(); #ifdef __cplusplus } #endif #endif /* LOG_H_ */ log.c: #include <string.h> #include <stdio.h> #include <stdarg.h> #include <sys/timeb.h> #include <time.h> #include <assert.h> #include "log.h" #ifdef __linux__ #define SEPARATOR ('/') #elif _WIN32 /* includes windows 64 bit */ #define SEPARATOR ('\\') #else #error Platform not supported #endif #define MAX_PATH_LENGTH 512 #define MAX_FILENAME 256 #define MAX_FILE_EXTN 20 #define LOG_FILENAME "program" static log_level loglevel = LOG_ERRORS; static char logpath[MAX_PATH_LENGTH] = { 0 }; static char filename[MAX_FILENAME] = LOG_FILENAME; static char file_extn[MAX_FILE_EXTN] = "log"; static FILE* fp = NULL; static const char* get_log_filename() { return filename; } static void set_log_filename(const char* name) { if (name && *name) strncpy(filename, name, MAX_FILENAME); } static void set_path(const char* path) { int len; if (path && *path != '\0') { strncpy(logpath, path, MAX_PATH_LENGTH); len = strlen(logpath); if (len > 0 && logpath[len - 1] != 
SEPARATOR) logpath[len] = SEPARATOR; } } static const char* get_path() { if (!logpath) { sprintf(logpath, ".%c", SEPARATOR); } return logpath; } static char* get_append_name(char* buf) { time_t now; time(&now); strftime(buf, 20, "%y%m%d", localtime(&now)); return buf; } static const char* get_log_filename_extension() { return file_extn ? file_extn : ""; } static void set_log_filename_extension(const char* name) { if (name && *name != '\0') strncpy(file_extn, name, MAX_FILE_EXTN); } static char* construct_full_path(char* path) { char append[20] = { 0 }; sprintf(path, "%s%s%s.%s", get_path(), get_log_filename(), get_append_name(append), get_log_filename_extension()); return path; } void log_init(const char* path, const char* filename, const char* file_extension) { char fullpath[MAX_PATH_LENGTH]; if (path && *path != '\0' && filename && *filename != '\0') { set_path(path); set_log_filename(filename); set_log_filename_extension(file_extension); fp = fopen(construct_full_path(fullpath), "a"); assert(fp != NULL); /* just in case fopen fails, revert to stdout */ if (fp == NULL) { fp = stdout; fprintf(fp, "Failed to change logging target\n"); } } else { if (fp != NULL && fp != stdout) fclose(fp); fp = stdout; } } static char* get_timestamp(char* buf) { int bytes; struct timeb start; ftime(&start); bytes = strftime(buf, 20, "%H:%M:%S", localtime(&start.time)); sprintf(&buf[bytes], ".%03u", start.millitm); return buf; } void set_log_level(log_level level) { loglevel = level; } log_level get_log_level() { return loglevel; } void logd(const char* format, ...) { char tmp[50] = { 0 }; if(loglevel > 0) { va_list args; va_start (args, format); fprintf(fp, "%s ", get_timestamp(tmp)); vfprintf (fp, format, args); va_end (args); } } void loge(const char* format, ...) 
{ char tmp[50] = { 0 }; va_list args; va_start (args, format); fprintf(fp, "%s ", get_timestamp(tmp)); vfprintf (fp, format, args); va_end (args); fflush(fp); } void flush_log() { fflush(fp); } void close_log() { if(fp != stdout) fclose(fp); fp = NULL; } An exercising program: #include "log.h" int main() { set_log_level(LOG_EVERYTHING); #ifdef __linux__ log_init("/home/acomber/Documents/code/log_test", "myprogram", "log"); #else log_init("c:\\dell", "myprogram", "log"); #endif logd("my number: %d\n", 1); logd("my string: %s\n", "string one"); loge("log session 1 finished\n"); flush_log(); close_log(); set_log_level(LOG_EVERYTHING); log_init(0, 0, 0); logd("my number: %d\n", 2); logd("my string: %s\n", "string two"); loge("log session 2 finished\n"); close_log(); set_log_level(LOG_ERRORS); log_init(0, 0, 0); logd("my number: %d\n", 3); logd("my string: %s\n", "string three"); loge("log session 3 finished\n"); close_log(); } Answer: Bug: Use of sprintf in construct_full_path does not check length It's possible for strlen(logpath) + strlen(filename) + strlen(append) + strlen(".log") to be longer than MAX_PATH_LENGTH; in this case, your call to sprintf will smash memory beyond the end of fullpath. You should at least use snprintf to avoid out-of-bounds accesses; preferably, you would ensure that the sizes all add up, by either: Allocating enough memory statically (currently, logpath could be large enough to take up the whole of fullpath), or Allocating memory dynamically to ensure the buffer will be large enough. Bug: Check if(logpath) in get_path is always true logpath is a statically-allocated array. It will never be a null pointer, so this condition will always be true (high compiler warning levels should catch this). Usability issue: SEPARATOR There are lots of platforms which are not windows or linux; exploding on those platforms because you can't work out what the path separator should be seems excessive. I would suggest wrapping that whole #ifdef ... #elif ... 
#else block in something like #ifndef SEPARATOR; that way, you can define the SEPARATOR macro using a compiler flag, and you don't have to care that you can't auto-detect this. If you were going to do that, I'd also suggest renaming SEPARATOR to PATH_SEPARATOR, as it has a kind of global scope (it's also a clearer name). Portability issue: #include <sys\timeb.h> only works on windows Strictly, I think that backslashes appearing in #include paths is undefined behaviour in some C standards. It certainly doesn't work right on Linux. On most platforms, #include <sys/timeb.h> should work if the file exists. If you were feeling really fussy about portability, you might also want to make it possible to get reasonable behaviour on systems which: Don't have sys/timeb.h Don't have access to wall-clock time Convention violation: logging to stdout Your 'default' log destination is stdout. For a well-behaved program, a more natural logging destination would be stderr. Nonsense: checking assertion conditions after an assertion The fragment: assert(fp != NULL); if (fp == NULL) { is bizarre. While it's true that the assertion could be compiled out with NDEBUG (meaning the condition is reachable), one generally works quite hard to ensure that turning off assertions does not change the behaviour of the system. Either fall back to default, or raise an error; don't do different things depending on whether or not the program is being debugged. Questionable: file extensions are ignored AFAICT, the extension parameter is just dropped on the floor; the file extension is always .log. Suggestion: use dynamic allocation You have lots of ugly fixed-size buffers for path components etc. While these are mostly probably large enough to not run into issues, there are a lot of awkward edge cases if you get near the edges of those buffers, and awkward interactions of sizes. 
Unless you have some hard performance/memory requirement, it would be much easier just to malloc a chunk of memory the size you need (which you can work out easily from the various strings you're given). Feature suggestion: accept a user-provided filehandle A client of this library might want to take advantage of the logging functions, timestamping, and log level management, while managing log location themselves. It would be very straightforward to have the option of specifying a filehandle rather than a collection of path components. Feature suggestion: consider wrapping your logging functions in macros to get location For debugging purposes, I generally find that it's very helpful to log the filename and line number of the call to a logging function (depending a little on the context you're using the logger in). If you wrap loge and logd in macros, you can get these out automatically using __FILE__ and __LINE__. For example, if you had loge as: void loge(const char *file, int line, const char *format, ...); then you could write a macro LOGE as: #define LOGE(...) loge(__FILE__, __LINE__, __VA_ARGS__) the macros __FILE__ and __LINE__ get expanded to the filename of the containing C file, and the line in that file at which the macro appears; then you can pass them through to the loge function, which could then log something like: 20:41:28.864 logtest.c:32: log session 3 finished This could be useful in some situations, although you might want an option to turn it off, or to adjust the order (lots of tools can be configured to reference <filename>:<line> pairs in various formats back to the source line, as a way of handling e.g. compiler messages). Naming consideration: loge and logd are very similar. The visual difference between loge and logd is tiny. Using more different names (e.g., log_error, log_debug), or passing the log_level as a parameter to a single log function: void log(log_level level, const char *format, ...)
would in my opinion be easier to read (you might choose to make the log level names a little shorter, e.g., LOG_ERR, LOG_ALL, in that case). Naming consideration: prefer systematic naming This is probably quite subjective, but: in the case of a module such as this one, my inclination would be to prefix all of the names with log_, like so: log_init log_set_level log_get_level log_flush log_close I find this makes it easier to cope with C's lack of 'real' namespacing. Dependency-management suggestion: include your own header first Opinions vary on this, but if you start your implementation file (e.g., log.c) by including your header file (i.e., line 1: #include <log.h>) then you can tell immediately if your header file has an accidental dependency on some type or macro introduced by another header file you're including. I prefer to make my header files self-sufficient; including them first anywhere guarantees this. Subjective layout suggestion: implement top-level interface first This is overwhelmingly subjective, but I would at least consider the overall layout of your implementation file. Personally, for a module like this one, I would prefer to forward-declare the static functions, then define them at the end. This way, the file reads top-down. In other contexts, that might be less clear, and it is more typing, so it's a judgement call.
{ "domain": "codereview.stackexchange", "id": 28376, "tags": "c, logging" }
Vector Isolation/ Rotation
Question: I have a series of vectors (current speeds) based on an ENU (East North Up) system. I would like to estimate the current speed of the water headed in a specific direction. In this example I am interested in isolating the water flowing in the direction of 150 degrees, and am looking at the best way to calculate flow speeds of water in this direction. Is it possible to rotate the y component of the flow 30 degrees (180-150=30) and use this to estimate the magnitude of flow at 150 degrees? Given u=0.0407 m/s v=-0.1392 m/s Speed=0.1451 m/s Direction = 163 degrees Can I rotate the south v vector component like so New_v= v*cos(30) + u*sin(30) New_v=-0.1392*cos(30) + 0.0407*sin(30) New_v=-0.10023 And say the velocity of water flowing at 150 degrees would be .10023 m/s? If I am way off, I'd be happy to learn a better way to do this Answer: First note that your angle is being measured in a clockwise sense with $\theta=0$ being the $y$-axis. The thing you want to calculate is the magnitude of the component of velocity along a particular axis, right? Then the way I would do it is not look at $u$ and $v$ but instead look at the Speed "$s$" and Direction "$\theta$" that you are also given. If you want to know the magnitude of the component of velocity along a direction $\phi$, compute $s \cos(\theta - \phi)$. You could do it with the $u$'s and $v$'s. Then let $c = \cos(\phi)$ and redefine $s$ to be $\sin(\phi)$. Then the answer would be $us+vc$. The idea being that you are rotating your vector by $\phi$ so that what was in the $\phi$ direction is now pointing along the $y$ axis. Then you take the $y$ component to get what was the component along the $\phi$ direction before the rotation. Note that what you did wrong is you have $\pi - \phi$ instead of $\phi$.
{ "domain": "physics.stackexchange", "id": 9492, "tags": "homework-and-exercises, vectors, rotation" }
How is momentum conserved during the short period of time during collision when there is no kinetic energy and only potential energy?
Question: We know that in an elastic collision, momentum of the system is always conserved. But during the short period of time during the collision when there is no kinetic energy and only potential energy, how is momentum conserved? Zero kinetic energy implies zero velocity, and using p=mv that implies that momentum is zero. I tried to think that momentum could be expressed in terms of impulse; I may or may not be right. Answer: . . . . during the short period of time during collision when there is no kinetic energy and only potential energy . . . . implies that momentum is zero. Think about the centre of mass of the system of the two colliding masses whose velocity does not change during the collision. The centre of mass carries the net momentum of the system and so the system always has kinetic energy. In the centre of mass frame the two masses have equal magnitude and opposite direction momentum ie net zero momentum, and so at an instant during the collision the masses can stop relative to the centre of mass frame and have no kinetic energy with energy stored as potential energy.
{ "domain": "physics.stackexchange", "id": 64954, "tags": "energy, work, collision, kinetic-theory" }
Yeast Fermentation with YPD + Apple Juice
Question: I am working on a project for my General Chemistry class. I am trying to compare the EtOH production (and the rate of EtOH production) between seven types of yeast strains by fermenting the YPD media, apple juice, or both, if necessary. I can describe the difference between the yeast strains if asked, but I don't think it is necessary right now. Right now, I have only tested plain YPD media, which is 1% yeast extract, 2% peptone and 2% d-glucose mixture. All strains yield around 8-9 mg/mL of EtOH, and the fermentation stops after ~24 hours. My next step is to test YPD + apple juice media (because the project asks for "fun" element), and I was wondering if I should use the same method for analyzing this YPD + apple juice fermentation. The method I used for YPD media is: Grow each strain (total of 7 strains) in ~5 mL YPD media at 30 degrees shaker ~250 rpm overnight. Subculture each strain into new fresh YPD media so that the new media is 50 mL and has 0.5 OD at 600 nm. Use the "special" flask designed for anaerobic growth. This flask is a 250 mL flask with a special cap. The cap is plastic at the outer radius and rubber at the inside. This allows me to grow yeast anaerobically and poke a needled syringe and retrieve small amount of sample without letting the air in. Flush all flasks with nitrogen gas for four minutes. After 6 hour, 12 hour, and 24 hour growths, use syringe to retrieve ~1.5 mL of sample. Measure OD600 of each sample. (This is used to get the number of yeast cells in the media) Centrifuge samples for 8 minutes in 14,000 rpm. Pipette 1 mL of the supernatant into a GC vial. Use HPLC to obtain concentration of ethanol. I am using refractive index detector, ion exclusion column(?) (I forgot the type of column used at this moment), and sulfuric acid as mobile phase. For YPD + apple juice fermentation, I was thinking of doing the same procedure with 50 mL YPD + 50 mL apple juice in each "special flask". Will this work? 
Will this be good for the same HPLC setting? Answer: Thank you for an especially clear and detailed question! For YPD + apple juice fermentation, I was thinking of doing the same procedure with 50 mL YPD + 50 mL apple juice in each "special flask". Will this work? I think this would work but it seems like a better comparison to your earlier experiments to keep using 50 mL of total medium, i.e., mix 25 mL of YPD with 25 mL of apple juice. That way the volume of the fermentation is the same in each experiment. Will this be good for the same HPLC setting? Probably, although it depends on the components of the apple juice you use. Have you injected just apple juice (unfermented) into your HPLC? Have you verified that this negative control injection contains no ethanol or other interfering peaks in the region of interest? If not, those are things you should do to make sure the HPLC method you have developed for YPD medium will still work in apple juice-based media.
{ "domain": "chemistry.stackexchange", "id": 3231, "tags": "analytical-chemistry, chromatography" }
To create feedback, does the peak to peak amplitude of a signal loop have to have an amplification factor (gain) that reaches zero?
Question: In technical terms, feedback occurs when the gain in the signal loop reaches "unity" (0dB gain). How to Eliminate Feedback Peak-to-peak amplitude (abbreviated p–p) is the change between peak (highest amplitude value) and trough (lowest amplitude value, which can be negative). Amplitude - Wikipedia Does that mean that, to create feedback, the signal's peak to peak amplitude must reach unity gain? Or must only some component of the signal undergo at least unity gain, perhaps some bandwidth (or recurring signal element, if that makes sense)? I thought the former, but I think I just proved that wrong by trial and error IRL. Answer: In technical terms, feedback occurs when the gain in the signal loop reaches "unity" (0dB gain). Correct. Peak-to-peak amplitude (abbreviated p–p) is the change between peak (highest amplitude value) and trough (lowest amplitude value, which can be negative). Also correct but has absolutely nothing to do with the feedback. Feedback occurs when the output of a system is fed back (as the name implies) into the input of a system. If the total round trip gain is larger than 1, feedback occurs. Let's look at a simple example: a microphone hooked up to a loudspeaker. Sound that gets into the microphone gets amplified and played back from the loudspeaker. The sound radiated from the loudspeaker is picked up by the microphone, amplified and sent to the loudspeaker, etc. This creates a loop. If the gain from the mic to the loudspeaker multiplied with the gain from the loudspeaker to the microphone is larger than 1, the signal gets larger at every round trip through the loop until it hits some non-linear limit. That's feedback. In reality this a little more complicated (depends on frequency and phase as well), but that's the high level mechanism: Feedback occurs when the closed-loop gain is larger than 1. The amplitude of the input is irrelevant: a microphone will feedback even if there is no input signal at all. 
To create feedback, does the peak to peak amplitude of a signal loop ... That question is somewhat non-sensical. The loop is a system and doesn't have an amplitude: it only has a transfer function. The type of things that have amplitude are signals. A system (like a microphone or loudspeaker) produces an output signal from an input signal but it's not a signal itself.
{ "domain": "dsp.stackexchange", "id": 11772, "tags": "amplitude, feedback" }
What causes the collapse of a Magnetic Field?
Question: It's theorized that Mars once had a magnetic field which collapsed at some point. There's also lots of speculation that a similar process will eventually happen to Earth. More recently has been the information that the earth's magnetic field is slowly dropping, presumably preparing for its 'flip' that happens every few hundred thousand years. But what causes a magnetic field to eventually collapse to the point that it ceases to be? Answer: You might get a hint of how magnetic fields are macroscopically modeled looking at the solar dynamo The solar dynamo is the physical process that generates the Sun's magnetic field. The Sun is permeated by an overall dipole magnetic field, as are many other celestial bodies such as the Earth. The dipole field is produced by a circular electric current flowing deep within the star, following Ampère's law. The current is produced by shear (stretching of material) between different parts of the Sun that rotate at different rates, and the fact that the Sun itself is a very good electrical conductor (and therefore governed by the laws of magnetohydrodynamics). The earth's magnetic field is modeled similarly: Walter M. Elsasser, considered a "father" of the presently accepted dynamo theory as an explanation of the Earth's magnetism, proposed that this magnetic field resulted from electric currents induced in the fluid outer core of the Earth. He revealed the history of the Earth's magnetic field through pioneering the study of the magnetic orientation of minerals in rocks. In particular However, unlike the field of a bar magnet, Earth's field changes over time because it is generated by the motion of molten iron alloys in the Earth's outer core. So, because the magnetic fields depend on fluid dynamics, earth's field is time dependent, and the geological studies experimentally support the dynamo model. If the fluid congeals, no currents will flow in the interior and there will be no magnetic field in the planet.
{ "domain": "physics.stackexchange", "id": 7669, "tags": "planets, magnetic-fields" }
Would there be a potential difference across a cell connected with just a wire in complete circuit?
Question: My textbook says, that across a battery or a cell there should be a potential drop equal to its e.m.f. Now consider a simple circuit with a cell of e.m.f 9 V. Let us assume a point on the negative terminal side of the cell to be of 0 potential. Then any point on the other side of the cell must be of electric potential 9 v. Now I've also learnt from sites that the potential at all points connected by a wire (or simply a conductor) remains same. Now consider any point on the wire and it is connected with the part having 9 v (with the positive terminal) and the 0 v part (the side close to negative terminal). This means that particular point must have both 9 v and 0 v at the same time instant. Unfortunately, I also read that potential at any point at any instant is unique - a point cannot have 2 potentials at the same time - implies - this circuit cannot exist. Does this mean there will be no potential difference because two potentials cannot exist here? If that is true then no current should be flowing through the battery implying that the battery doesn't spend any of its chemical energy. But even when battery is simply connected by a wire into a complete circuit it slowly dies - it spends chemical energy by converting it to electrical energy to create the potential difference but from my assumption that shouldn't happen. How do I solve this confusion? -- There should be a potential difference but there shouldn't be a potential difference. (note: potential difference means or is the alternative term for voltage) (assume the wire has zero resistance) Answer: My textbook says, that across a battery or a cell there should be a potential drop equal to its e.m.f. To be clear, you textbook is referring the potential measured across the battery terminals without any circuit connected to the terminals, i.e., the no load or open circuit potential across the terminals. 
Now I've also learnt from sites that the potential at all points connected by a wire (or simply a conductor) remains same. That would only apply if there was no resistance in the wire or conductor. With the exception of superconductors all wires and conductors have some resistance. Now consider any point on the wire and it is connected with the part having 9 v (with the positive terminal) and the 0 v part (the side close to negative terminal). This means that particular point must have both 9 v and 0 v at the same time instant. Now you are assuming that both the wire has no resistance and the battery has no resistance. All real batteries have internal resistance. When current is drawn by the wire there will be a voltage drop across the battery's internal resistance, so the voltage across the battery terminals will equal the battery emf (no load voltage) minus the voltage drop across the internal resistance. How do I solve this confusion? You solve the confusion when you realize that no conductor (except superconductors) has zero resistance and no battery is an ideal voltage source (a source having zero internal resistance). You said that a voltage equals e.m.f only when there is no current flowing. But according to Ohm's law, V = IR. If I=0 then V=0. How does pd exist then? Re-write Ohm's law as $$I=\frac{V}{R}$$ Now think about the 9 V battery terminals not connected to anything. It's the equivalent of connecting a resistor across the terminals where the value of the resistance is infinite, or $R = ∞$. Then we have $$I=\frac{emf}{∞}= \frac{9}{∞}=0$$ Hope this helps.
{ "domain": "physics.stackexchange", "id": 65561, "tags": "electricity, electric-circuits, voltage" }
if a simple pendulum is dropped in an elevator with an acceleration greater than acceleration due to gravity then what will be its frequency
Question: if a simple pendulum is dropped in an elevator with an acceleration greater than the acceleration due to gravity then what will be its frequency? We know the time period depends on the frequency. Answer: Same formula as usual, but with the substitution $$ g\rightarrow g\pm g'$$ where $g'$ is the acceleration of the elevator. The plus sign is used if the two accelerations are in different directions... Thus if the elevator is in free fall then $g-g'=0$ and the period goes to infinity.
{ "domain": "physics.stackexchange", "id": 7811, "tags": "classical-mechanics" }
Is any term within a quantum lagrangian equal to its transpose?
Question: A lagrangian is a scalar under any relevant symmetry group (at least in standard theory). This would make me think that we can take any single term within any lagrangian and transpose it without repercussions -- doing any subsequent algebra correctly (taking fermionic anti-commutation into account, for instance), of course. Is this correct? I'm not sure how the fact that the degrees of freedom here are operator-valued fields can mess with this conclusion -- even if the transposition regards only Lorentz spinor indices. For instance, consider an interaction term of a doubly-charged vector boson with a pair of same-sign charged leptons. Can we 'impose' the first equality below $$ U^{++}\bar{\ell_b^c}\gamma_\mu P_L \ell_a=(U^{++}\bar{\ell_b^c}\gamma_\mu P_L \ell_a)^T=-U^{++}\bar{\ell_a^c}\gamma_\mu P_R\ell_b $$ (where $\ell^c$ is the charge conjugate of $\ell$ and the $P$ are chirality projectors)? Answer: Generally people do not work with operator-valued Lagrangians. In fact, the only source I can think of off the top of my head that does is Weinberg's QFT volume 1, and if I recall correctly he only uses it to work out some things for QED. Working with operator-valued Lagrangians will generically get you into a lot of trouble with ordering issues. However, let me also say that, indeed, the Lagrangian is a scalar in the sense that all indices, whether they be Lorentz or internal, must be contracted in a sensible way. So necessarily, this object is invariant under the operator of "transpose" where "transpose" in defined to mean exchanging the orders of these indices. If your Lagrangian is operator-valued, it will not necessarily be invariant under transposing terms as operators. 
In general, if you are going to work with objects like the ones you're describing, I recommend you switch to using index notation as using position-based notation for indicating matrix multiplications tends to conflict with operator multiplication ordering (essentially, you're trying to use adjacency of symbols to indicate both operator multiplication and matrix multiplications). It's generally a good idea to not overload notation in this way, and it would seem that this is precisely what's causing your confusion.
{ "domain": "physics.stackexchange", "id": 76436, "tags": "quantum-field-theory, lagrangian-formalism" }
Why is PSD estimated, and not simply computed?
Question: I apologize if this comes as a basic question, but I am struggling to understand why the PSD can only be estimated and not directly computed. For example, this thread discusses several such PSD estimation methods. If I have a deterministic signal with a fixed number of samples, shouldn't I be able to directly determine its spectral information? Also, it is well known that PSD can be defined as the Fourier transform of the auto-correlation function. Isn't this calculation deterministic? Finally, in this paper the authors compute the PSD directly from the FFT coefficients as shown below. There doesn't seem to be anything stochastic about the computation, and the computation of FFT coefficients is not really an estimation, to the best of my understanding. What am I missing? Answer: Well, because in a lot of real-world problems this premise, "If I have a deterministic signal with a fixed number of samples, shouldn't I be able to directly determine its spectral information?", is just not the case. Very often, measured signals are more of a random process. A simple and common case would be to have the desired signal and some additive noise, very often Gaussian in nature. While you can capture the signal for a while and then deterministically calculate the PSD of those samples, what you get is the instantaneous PSD of these samples, which may or may not be close to the actual PSD of the whole random process that you are sampling. It all boils down to what you want to do, but in a lot of cases you want the PSD of the random process that you are sampling, and very often just capturing some samples and calculating the instantaneous PSD does not come close to it. There are many different methods for PSD estimation. Some may work better in some cases than others, some are more computationally expensive than others, and some reach a better estimate with fewer samples as input data.
In the end, it boils down to selecting the most appropriate method for your given scenario.
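To make the variance issue concrete, here is a short Python sketch (an illustrative signal and parameters of my own choosing, using NumPy/SciPy) comparing the raw periodogram of one noisy capture with Welch's averaged estimate:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 8, 1 / fs)
# One realization of a random process: a tone buried in Gaussian noise
x = np.sin(2 * np.pi * 100 * t) + rng.normal(scale=2.0, size=t.size)

# "Instantaneous" PSD of this one capture: a single, very noisy periodogram
f_p, p_raw = signal.periodogram(x, fs=fs)

# Welch's method: average periodograms of overlapping segments to cut the variance
f_w, p_welch = signal.welch(x, fs=fs, nperseg=512)
```

Plotting both shows the same 100 Hz peak and noise floor, but the Welch curve fluctuates far less around the true floor, at the cost of a coarser frequency grid.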
{ "domain": "dsp.stackexchange", "id": 1254, "tags": "power-spectral-density, estimation" }
Correct way of zero padding in time domain
Question: I am very new to signal processing and want to learn the correct way of zero padding for 'n' even and odd input signals. For example, $N=6\;\&\;M=32$ $x(n)=[a, b, c, d, e, f]\;\;\;n=0,1,\dots,N-1$ $x_{zeros}(m)=[a, b, c, d, e, f, zeros(M-N)]\;\;\;m=0,1,\dots,M-1$ -or- $x_{zeros}(m)=[left\;zeros\dots a, b, c, d, e, f,\dots right\;zeros]\;\;\;m=0,1,\dots,M-1$ -or- $x_{zeros}(m)=[a, b, c,\dots zeros\;in\;the\;middle\dots d, e, f]\;\;\;m=0,1,\dots,M-1$ What to consider when the input is $\mathrm{Real}$ or $\mathrm{Complex}$ of even and odd lengths? Answer: Zero padding changes the bin size of the DFT, resulting in a finer frequency grid. It is done by appending zeros to the end of the signal, because we are only artificially increasing the length to decrease the bin spacing, $\frac{2\pi}{N}$. An $N$-point DFT is just evaluating the DTFT at certain frequencies. If you were to append some zeros to the front of the signal, then you would be doing a time shift, and a shift in the time domain is modulation in the frequency domain: $x[n-k]\longleftrightarrow X(\omega)e^{-j\omega k}$.
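A quick NumPy check (with arbitrary stand-in values for a..f) shows that appending the zeros at the end is exactly what the n argument of the FFT does, and that both give the same denser sampling of the DTFT:

```python
import numpy as np

N, M = 6, 32
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # stands in for [a, b, c, d, e, f]

# Appending M - N zeros at the end...
X_manual = np.fft.fft(np.concatenate([x, np.zeros(M - N)]))

# ...is what np.fft.fft does internally when asked for an M-point transform
X_auto = np.fft.fft(x, n=M)
print(np.allclose(X_manual, X_auto))  # True
```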
{ "domain": "dsp.stackexchange", "id": 8278, "tags": "zero-padding" }
Escape velocity from long ladder
Question: The escape velocity of Earth is roughly $11\ \mathrm{km\,s^{-1}}$. However, what if a long ladder was built extending out of Earth's atmosphere and considerably further? Then if something were to climb up at much less than the escape velocity, what would happen when it reached the end? And what if the object that climbed the "ladder" then fired some kind of thruster/rocket and was going fast enough so that it orbited the Earth? Would it mean less energy is required to get into orbit? Answer: "Escape velocity" is really just a measure of the kinetic energy an object near the surface of the Earth would need to start with in order to just run out of energy at the point where it was infinitely far from Earth, having converted all of its initial kinetic energy to gravitational potential energy. Even if you built a giant ladder or a space elevator or whatever, the total energy required to get to orbit is exactly the same; it just comes in a less spectacular form. Rather than burning a rocket the whole way, you would be doing a slower conversion of energy into gravitational potential energy -- electricity running a motor to winch the rocket up to the top of a space elevator, or chemical energy from food as you climbed a bazillion stairs to get there, or whatever. A space elevator would be an attractive way to get a rocket into orbit or away from the Earth, because it reduces the amount of rocket fuel you need to use to get there, replacing it with some other source that is more convenient (and less explosive) to work with. But you still need the same total amount of energy to get your payload into orbit.
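For a quick sanity check on the quoted figure: setting the initial kinetic energy equal to the potential-energy gap, $\frac{1}{2}mv^2 = \frac{GMm}{R}$, gives $v=\sqrt{2GM/R}$. A few lines of Python with standard Earth values reproduce roughly 11 km/s:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m

v_escape = math.sqrt(2 * G * M / R)   # speed at which KE equals the PE gap
print(round(v_escape / 1000, 1))      # 11.2
```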
{ "domain": "physics.stackexchange", "id": 107, "tags": "newtonian-gravity, earth, escape-velocity" }
How do we know that the actual universe has no Killing vector fields?
Question: This article states the following: The infinitude of conserved energies constructed via Noether’s theorem suffers a startling reversal as soon as Special Relativity is superseded by General Relativity. There, in the generic case and certainly for the actual universe, instead of an infinitude of global time-like Killing vector fields, there are none. Is this a basic fact about GR? If so, how does one show that, in general, curved spacetime has no Killing vector fields? Answer: Local (over scales like the Solar system at least) timelike Killing vectors do exist; otherwise we could not formulate and experimentally confirm any conservation law for energy. What can be said is that the observed expansion of the spatial sections of the universe does not permit a large-scale timelike Killing vector orthogonal to those spatial sections. This is because the lines representing the histories (worldlines) of galaxies would be simultaneously geodesics and integral lines of the Killing field. A consequence of the Killing symmetry would be a stationary 3D geometry on those spatial sections with respect to the Killing time (or also the local proper time). Instead, we see an expansion: the spatial distances between galaxies increase in time. I think the only remaining possibility is that a large-scale conformal timelike Killing vector exists, as the cosmic background radiation seems to suggest...
{ "domain": "physics.stackexchange", "id": 62567, "tags": "general-relativity, spacetime, metric-tensor, symmetry, vector-fields" }
Dark energy and light red shifting
Question: When light is redshifted from distant galaxies, the photons have lost energy. When dark energy pushes objects apart, those objects have gained energy from a larger gravitational potential. Is the amount of energy that dark energy applies to push objects apart equal to the amount of energy lost because light from distant galaxies is redshifted? Answer: Yes, the energy lost by the redshifting photons and the energy gained through the expansion of the universe (as the energy density per unit volume of space is maintained with the expansion) appear to balance exactly, according to this astronomer: Is Energy Conserved When Photons Redshift In Our Expanding Universe?
{ "domain": "astronomy.stackexchange", "id": 6379, "tags": "redshift, dark-energy" }
How to do high-resolution FFT on just the lower frequencies in a signal?
Question: Description of the data and problem: I have a signal sampled at 1000 Hz. I'm low-pass filtering it at 120 Hz, and want to make spectrograms of the frequencies below this threshold. I'm using the scipy functions fftpack.fftfreq to get the frequency bin values, then fftpack.fft to do the actual transform. The signal is quite long, and I want the spectrograms to be about 5 seconds in length using 50 millisecond windows. I also filter out negative frequencies for plotting. Given the number of samples in this window (50), I get 50 Fourier coefficients. However, half of these are negative, and another portion goes between 120 Hz and 500 Hz (naturally, because the sampling rate is 1000 Hz). This leaves me with pretty low-resolution spectrograms. I only have about 6 blocks in the frequency area of interest (0, 20, 40, 60, 100, and 120 Hz), then 18 blocks showing low activity in the frequencies up to 500 Hz, since they were filtered out. Question: How could I, for instance, do the FFT specifying something like 50 frequency bins between only 0 and 120 Hz? What I tried: I tried doing something like this by using np.linspace to specify the frequencies (instead of fft.fftfreq), but this introduced some other bug, namely the spectrograms always looked like a mirror, with high power at the highest and lowest frequencies and low power in the middle of the graph, regardless of the range. I'm honestly not sure why this happens with linspace and not with fftfreq. They both return arrays of floats. Code sample included below. Thanks for any help in advance!
Cheers

X = samples_filt  # long 1-D vector of low-pass filtered samples
fs = 1000
time = .05  # window length in sec
N = int(fs * time)  # num samples per window
tot_len = 5 * fs  # 5 sec of the whole signal
X = X[:tot_len]
f = fftpack.fftfreq(N, 1.0 / fs)
# f = np.ceil(np.linspace(0, max_freq, 52)[1:51])  # introduced error described above
mask = (f > 0)  # mask for positive freqs
n_max = int(np.ceil(X.shape[0] / N))  # the number of segments of length N in the sample data
f_values = np.sum(1 * mask)  # how many values meet mask reqs
spectogram_data = np.zeros((f_values, n_max))
window = sp.signal.blackman(N)  # taper used to improve contrast of spectrogram
for n in range(0, n_max):
    subdata = X[(N * n):(N * (n + 1))]
    F = fftpack.fft(subdata * window)
    spectogram_data[:, n] = np.log(abs(F[mask]))
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
p = ax.imshow(spectogram_data, origin='lower', extent=(0, X.shape[0] / fs, 0, max(f)), aspect='auto', cmap=mpl.cm.RdBu_r)
cb = fig.colorbar(p, ax=ax)

Sample output: For instance, it would be great to have this same graph with 24 frequency bins going up to 120 Hz. Answer: There is a fundamental limit to how much linear frequency resolution you are going to get for a given window size, at least using traditional linear techniques and making no assumptions about the source signal. There are techniques that do pretty much the same as an FFT but return only a subset of frequency bins. They are typically used for cost reasons. For convenience, you might consider resampling your signal to 2x120 = 240 Hz, then doing a spectrogram with windows longer than 50 ms, depending on the temporal resolution you need. The frequency resolution is: d_f = fs/L For d_f = 120/50 = 2.4 Hz you would want something like: L = fs/d_f = 240/2.4 = 100 samples i.e. a window length of: 100/240 ≈ 0.42 seconds
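The resampling route can be sketched like this in Python (parameter choices are illustrative, and the input here is just random noise standing in for your filtered signal): decimate from 1000 Hz to 240 Hz with a polyphase filter, then use 100-sample windows so every positive bin lands in the 0-120 Hz band with 2.4 Hz spacing:

```python
import numpy as np
from scipy import signal

fs_in, fs_out = 1000, 240
rng = np.random.default_rng(1)
x = rng.normal(size=5 * fs_in)   # stand-in for 5 s of low-pass filtered samples

# 1000 Hz * 6 / 25 = 240 Hz; resample_poly low-pass filters before decimating
y = signal.resample_poly(x, up=6, down=25)

# 100-sample windows at 240 Hz -> 2.4 Hz bins, and the top bin sits at 120 Hz
f, t, Sxx = signal.spectrogram(y, fs=fs_out, window='blackman',
                               nperseg=100, noverlap=0)
```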
{ "domain": "dsp.stackexchange", "id": 10920, "tags": "fft, fourier-transform, python, spectrogram" }
How to derive the theorem about CHSH inequality in the $\mathbb{C}^2\otimes\mathbb{C}^2$ space in (Horodecki, 1995)?
Question: In the $\mathbb{C}^2\otimes\mathbb{C}^2$ space, a state can be represented as: \begin{equation} \rho=\frac{1}{4}(\mathbb{I}\otimes\mathbb{I}+\mathbf{r}\cdot\mathbf{\sigma}\otimes\mathbb{I}+\mathbb{I}\otimes\mathbf{s}\cdot\mathbf{\sigma}+\sum^3_{n,m=1}t_{nm}\sigma_n\otimes\sigma_m) \end{equation} where $\mathbf{r}$, $\mathbf{s} \in\mathbb{R}^3$ And the Bell operator associated with the CHSH inequality is given by: \begin{equation} \mathcal{B}_{CHSH}=\mathbf{\hat a}\cdot\sigma\otimes(\mathbf{\hat b}+\mathbf{\hat b}')\cdot\sigma+\mathbf{\hat a}'\cdot\sigma\otimes(\mathbf{\hat b}-\mathbf{\hat b}')\cdot\sigma \end{equation} where $\mathbf{\hat a}$, $\mathbf{\hat a}'$, $\mathbf{\hat b}$, $\mathbf{\hat b}'$ are unit vectors in $\mathbb{R}^3$, and the CHSH inequality (for local hidden-variable models) states that: \begin{equation} \mathrm{tr}(\rho\mathcal{B}_{CHSH})\leq2 \end{equation} and this paper* states that by some calculation \begin{equation} \mathrm{tr}(\rho\mathcal{B}_{CHSH})=\langle\mathbf{\hat a},T_{\mathscr{l}}(\mathbf{\hat b}+\mathbf{\hat b}')\rangle+\langle\mathbf{\hat a'},T_{\mathscr{l}}(\mathbf{\hat b}-\mathbf{\hat b}')\rangle \end{equation} where $T_{\mathscr{l}}$ is a $3\times 3$ matrix formed by $t_{nm}$. I've tried to expand every term in its matrix form, but doing so is simply not feasible. What is the proper way to arrive at this result? *Ryzard Horodecki, Paweł Horodecki, Michał Horodecki, (1995). Violating Bell Inequality by Mixed Spin- 1/2 States: necessary and sufficient condition, Physics Letter A, 200, 340-344 Answer: Huh, the calculation is actually quite simple. I am surprised I didn't notice that earlier.
By noticing that the trace of $\mathbf{e}\cdot\sigma$, $\forall \mathbf{e}\in \mathbb{R}^3$, is actually zero, and using the following property: \begin{equation} \mathrm{tr}(A\otimes B)=\mathrm{tr}(A)\mathrm{tr}(B) \end{equation} we can see that most of the terms in $\mathrm{tr}(\rho\mathcal{B}_{CHSH})$ are actually zero except for the last term, which gives: \begin{align} \mathrm{tr}(\rho\mathcal{B}_{CHSH}) &=\sum^3_{n,m=1}t_{nm}[a_n(b_m+b_m')+a_n'(b_m-b_m')]\\ &=\sum^3_{n=1}a_n(T_{\mathscr{l}}(\mathbf{\hat b}+\mathbf{\hat b}'))_n+a_n'(T_{\mathscr{l}}(\mathbf{\hat b}-\mathbf{\hat b}'))_n\\ &=\langle\mathbf{\hat a},T_{\mathscr{l}}(\mathbf{\hat b}+\mathbf{\hat b}')\rangle+\langle\mathbf{\hat a}',T_{\mathscr{l}}(\mathbf{\hat b}-\mathbf{\hat b}')\rangle \end{align}
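The identity is also easy to check numerically. The NumPy sketch below uses random $\mathbf{r}$, $\mathbf{s}$ and a random coefficient matrix (scaled small; the resulting $\rho$ is not necessarily a physical state, but the trace identity holds for any such Hermitian combination):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

def dot_sigma(v):
    """v . sigma for a real 3-vector v."""
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(7)
unit = lambda v: v / np.linalg.norm(v)
r, s = 0.1 * rng.normal(size=3), 0.1 * rng.normal(size=3)
T = 0.1 * rng.normal(size=(3, 3))           # the matrix of t_nm coefficients
a, ap, b, bp = (unit(rng.normal(size=3)) for _ in range(4))

rho = 0.25 * (np.kron(I2, I2)
              + np.kron(dot_sigma(r), I2)
              + np.kron(I2, dot_sigma(s))
              + sum(T[n, m] * np.kron(paulis[n], paulis[m])
                    for n in range(3) for m in range(3)))

B = (np.kron(dot_sigma(a), dot_sigma(b + bp))
     + np.kron(dot_sigma(ap), dot_sigma(b - bp)))

lhs = np.trace(rho @ B).real
rhs = a @ (T @ (b + bp)) + ap @ (T @ (b - bp))
print(np.isclose(lhs, rhs))  # True
```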
{ "domain": "physics.stackexchange", "id": 84180, "tags": "quantum-information, bells-inequality" }
Conjectured NP-complete problems
Question: Assume P != NP. Then there are many examples of problems in NP that are known not to be NP-complete, like 2-SAT, and many that are conjectured not to be NP-complete, like factorization. However, are there any problems that are widely conjectured but not known to be NP-complete? Answer: There are no natural problems that I am aware of that have that property. There are two problems which are "close", however. The Minimum Circuit Size Problem asks: given the truth table of a boolean function $f$ and an integer $k$, is $f$ computed by a circuit of size at most $k$? Note $MCSP \in NP$, and it is widely expected that $MCSP$ is "hard"; however, it is not known that $MCSP$ is $NP$-complete, and while it is easy to conjecture that it might be, there is also some evidence that it is not. The Unique Games Conjecture states that approximating the value of a certain combinatorial problem is $NP$-hard. This problem is not strictly in $NP$ because its decision version is a "promise problem". Nevertheless, the UGC is one of the premier open questions of computer science today and there is little consensus on whether it is true or not.
{ "domain": "cs.stackexchange", "id": 6482, "tags": "complexity-theory, reference-request, np-complete" }
Using both setInterval and setTimeout for simple image carousel
Question: I have a really basic image carousel that I just wrote. Basically I'm trying to keep it as small and light weight as possible, but I get the feeling that I've gotten way off the beaten path of how these things normally work.

function hide(element, index, array) {
    if (index > 0) {
        slides[index].setAttribute('style', 'opacity:0;');
    }
}

var carousel = document.getElementById("carousel"),
    slides = carousel.getElementsByTagName('li'),
    counter = 0,
    liList = Array.prototype.slice.call(slides);

setInterval(function() {
    slides[counter].setAttribute('style', 'opacity:1;');
    counter++;
    if (counter == slides.length) {
        counter = 0;
        setTimeout(function() {
            liList.forEach(hide);
        }, 3000); // setTimeout
    }
}, 3000); // setInterval

#carousel { padding: 0; margin: 0; position: relative; width: 315px; height: 177px; }
#carousel li { opacity: 0; list-style: outside none none; width: 315px; position: absolute; background: #fff; transition: opacity 1s; }

<ul id="carousel">
    <li style="opacity:1;">
        <img src="http://pluggedinwebdesign.com/images/labyrinth.jpg" alt="" />
    </li>
    <li>
        <img src="http://pluggedinwebdesign.com/images/RopesLittlePlanet2.jpg" alt="" />
    </li>
    <li>
        <img src="http://pluggedinwebdesign.com/images/FireRing2.jpg" alt="" />
    </li>
</ul>

Could this be trimmed down further, is this a bad way to go about something this simple, or am I just over thinking it? Answer: How about this: every 3 seconds, you hide the previous image and show the next image. This is simple. Some changes I made: I prefer .style.opacity = as opposed to setAttribute("style"). This may be personal preference, but it looks cleaner to me because it uses javascript's object access instead of setting it via a string. You can use a modulus (%) instead of checking if counter === slides.length.
var carousel = document.getElementById("carousel"),
    slides = carousel.getElementsByTagName('li'),
    counter = 0,
    liList = Array.prototype.slice.call(slides);

setInterval(function() {
    slides[counter].style.opacity = 0;       // Hide the previous image
    counter = (counter + 1) % slides.length; // Increment counter
    slides[counter].style.opacity = 1;       // Show the next image
}, 3000); // setInterval

#carousel { padding: 0; margin: 0; position: relative; width: 315px; height: 177px; }
#carousel li { opacity: 0; list-style: outside none none; width: 315px; position: absolute; background: #fff; transition: opacity 1s; }

<ul id="carousel">
    <li style="opacity:1;">
        <img src="http://pluggedinwebdesign.com/images/labyrinth.jpg" alt="" />
    </li>
    <li>
        <img src="http://pluggedinwebdesign.com/images/RopesLittlePlanet2.jpg" alt="" />
    </li>
    <li>
        <img src="http://pluggedinwebdesign.com/images/FireRing2.jpg" alt="" />
    </li>
</ul>
{ "domain": "codereview.stackexchange", "id": 10451, "tags": "javascript, optimization" }
To what extent is an x86 machine equivalent to a Turing Machine?
Question: To what extent is the abstract model of computation specified by the x86 language Turing complete? The above question is related to this question: Is C actually Turing-complete? In theoretical computer science the random-access stored-program (RASP) machine model is an abstract machine used for the purposes of algorithm development and algorithm complexity theory. The RASP is a random-access machine (RAM) model that, unlike the RAM, has its program in its "registers" together with its input. The registers are unbounded (infinite in capacity); whether the number of registers is finite is model-specific. Thus the RASP is to the RAM as the Universal Turing machine is to the Turing machine. The RASP is an example of the von Neumann architecture whereas the RAM is an example of the Harvard architecture. https://en.wikipedia.org/wiki/Random-access_stored-program_machine Answer: RIP relative addressing of the x86 language provides access to unlimited memory An abstract machine having a tape head that can be advanced in 0 to 0x7FFFFFFF increments an unlimited number of times specifies a model of computation that has access to unlimited memory. The technical name for memory addressing based on displacement from the current memory address is relative addressing. I am focusing on building a precise bridge between conventional high level concrete programming models (none of which can possibly be Turing Complete) and the closest abstract model that is Turing complete. To do this we must necessarily make at least minimal adaptations to the concrete model. One concrete model of computation requiring very little change to adapt it to become a Turing complete abstract model of computation is the 32-bit signed RIP relative addressing of the x86-64. 
If we take this machine language, get rid of all absolute addressing modes, and leave the size of memory and the width of the instruction pointer unspecified, then with these minimal changes the abstract model based on the x86-64 becomes Turing complete.
{ "domain": "cs.stackexchange", "id": 16841, "tags": "turing-machines, turing-completeness, church-turing-thesis" }
CRISPR complex in human cells?
Question: Is the CRISPR section of the DNA (where parts of viral DNA are saved) present in human cells as well, or is it just in bacterial cells? Answer: No analogues of the CRISPR-Cas system have been found in any eukaryotic species, including humans. So far, it appears to have evolved only in bacteria and archaea. Reference: Evolution of RNA- and DNA-guided antivirus defense systems in prokaryotes and eukaryotes: common ancestry vs convergence
{ "domain": "biology.stackexchange", "id": 9220, "tags": "dna, crispr" }
PDO (MySQL) connection and query class, safety and logic
Question: For the last few days I tried to figure out how to work with MySQL and PDO. Although I've tried to read a lot about the subject, there are still a lot of things I don't understand. Because of this lack of knowledge, I can't really judge this code (or other example code on-line) on its safety and logic, and therefore was hoping I could get some feedback on it. The class makes a connection and runs a query:

class Connection {

    private $username = "xxxx";
    private $password = "xxxx";
    private $dsn = "mysql:host=host;dbname=name_db";
    private $sql;
    private $DBH;

    function setSQL($sql) {
        $this->sql = $sql; // if $sql is not set it throws an error at PDOException
    }

    public function query() {
        // the connection will get established here if it hasn't been already
        if (is_null($this->DBH)) try {
            $this->DBH = new PDO(
                $this->dsn,
                $this->username,
                $this->password
            );
            $this->DBH->setAttribute(
                PDO::ATTR_ERRMODE,
                PDO::ERRMODE_EXCEPTION
            );
            // query
            $result = $this->DBH->prepare($this->sql);
            $result->execute();
            return $result->fetch(PDO::FETCH_ASSOC);
        } catch (PDOException $e) {
            echo "I'm afraid I can't do that1.";
            // file_put_contents('PDOErrors.txt', $e->getMessage(), FILE_APPEND);
        }
    }

    // clear connection and variables
    function __destruct() {
        $this->DBH = null;
    }
}

Using the class:

$sql = "SELECT * from stock WHERE id = 302";
$test = new Connection();
$test->setSQL($sql);
echo $test->query();

Answer: 1) When using PDO, you want to get the benefits of automatic parameter escaping. By wrapping your PDO class you are limiting its functionality. Check out this question for a better example: https://stackoverflow.com/questions/6366661/php-pdo-parameterised-query-for-mysql 2) Every time you run a query you are making a new PDO instance. It might be better to have an application resource pool that you can call to get a preconfigured db handle. 3) You wrote some SQL to be able to fetch a stock by ID; that should probably be functionality that is reusable. Combining 1, 2 & 3, the
code would probably look better as something like this:

class ApplicationResourcePool {

    protected static $_dbHandle;

    private static $_dbConfig = array(
        'dsn'      => '',
        'username' => '',
        'password' => '',
    );

    public static function getDbHandle() {
        if (self::$_dbHandle == null) {
            self::$_dbHandle = new PDO(
                self::$_dbConfig['dsn'],
                self::$_dbConfig['username'],
                self::$_dbConfig['password']
            );
        }
        return self::$_dbHandle;
    }
}

class StockMapper {

    protected $_dbh;

    public function __construct($dbh = null) {
        if ($dbh == null) {
            $this->_dbh = ApplicationResourcePool::getDbHandle();
        } else {
            $this->_dbh = $dbh;
        }
    }

    public function getStockById($stockId) {
        $sth = $this->_dbh->prepare("SELECT * FROM stock WHERE id = :stockId");
        $sth->bindParam(":stockId", $stockId);
        $sth->execute();
        return $sth->fetch(PDO::FETCH_ASSOC);
    }
}

Your code block then becomes:

$stockMapper = new StockMapper();
$stockData = $stockMapper->getStockById('302');
{ "domain": "codereview.stackexchange", "id": 811, "tags": "php, mysql, pdo" }
How to generate a signal whose frequency content is given?
Question: I want to generate a signal whose instantaneous frequency is given. For example, I want the frequency domain similar to this one: So I use the following code:

fs = 44100;
t = 0:1/fs:4;
f = 800 + 500*sin(2*pi*t);
x = sin(2*pi*f);
plot(t,x)

But using Adobe Audition to visualize its spectrogram, it looks different from what I hoped for. So how can I generate a signal if I already have its frequency? Answer: Your code is wrong because, when you want to get a signal whose frequency changes with time, the argument of the sine you generate is given by the integral of the frequency function you want (see https://en.wikipedia.org/wiki/Chirp). That's why, in your case, your f function should be: $f(t)=\int_0^t\left(800+500\sin(2\pi\tau)\right)d\tau=\phi_0 + 800t -\frac{500}{2\pi} \cos(2\pi t)$ If you set the zero phase $\phi_0=0$, then you would obtain the signal you are looking for. Apart from that, take into account that if you just want that signal in the positive part of your double-sided spectrum, then you should use $e^{j2\pi f(t)}$, because if you just take the sine of $f(t)$ then the signal would be replicated in the negative side of the spectrum.
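A Python rendering of this recipe (a NumPy stand-in for the MATLAB code, integrating the instantaneous frequency numerically with a cumulative sum) would be:

```python
import numpy as np

fs = 44100
t = np.arange(0, 4, 1 / fs)
f_inst = 800 + 500 * np.sin(2 * np.pi * t)  # desired instantaneous frequency in Hz

# Integrate f(t): phase(t) = 2*pi * integral of f, approximated by a cumulative sum
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.sin(phase)
```

The instantaneous frequency recovered from the phase increments then sweeps between 300 Hz and 1300 Hz, as intended.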
{ "domain": "dsp.stackexchange", "id": 4144, "tags": "matlab, frequency-spectrum" }
Set new Idents based on gene expression in Seurat and mix n match identities to compare using FindAllMarkers
Question: I am relatively new to Bioinformatics and scRNA-seq data analysis. I am using Seurat V3 to analyze a scRNA-seq dataset in R. Currently, I have merged three scRNA-seq samples from the same donor into one Seurat object, All_Samples. We'll call them Uninfected, Virus1, and Virus2: All_Samples <- merge(x = Uninfected, y = c(Virus1, Virus2)) Uninfected cells did not receive virus (i.e. express no viral genes); the other two samples were infected with a virus introducing new genes into the cells. I would like to accomplish two things. First: I want to create a new Ident for Virus1 and Virus2 samples based on expression of a viral gene of interest: "GeneA". All cells from Virus1 and Virus2 samples that have "GeneA" expression > 0.5 would be labeled "Pos", those with "GeneA" < 0.5 would be labeled "Neg" in a new column called GeneA. I am able to subset the objects based on GeneA expression in a way that applies to all samples in the object. For example: All_Samples_GeneA_Pos <- subset(All_Samples, subset = GeneA > 0.5) All_Samples_GeneA_Neg <- subset(All_Samples, subset = GeneA < 0.5) I assume I would have to modify All_Samples@meta.data based on this post and this post, but I admit I'm not entirely sure how to implement the suggested answers into my data. Second: Once I have separated the data from Virus1 and Virus2 into "Pos" and "Neg" cells for "GeneA", I want to look for differentially expressed genes between all Uninfected cells and "Pos" cells from Virus1 or Virus2. I have been using something like this to compare samples: FindMarkers(object = All_Samples, group.by = 'virus', ident.1 = "1", ident.2 = "2") But how would I do something equivalent to this: FindMarkers(object = All_Samples, ident.1 = "Uninfected", ident.2 = "Virus1_GeneA_Pos") Or maybe FindMarkers(object = All_Samples, ident.1 = "Uninfected_GeneA_Neg", ident.2 = "Virus1_GeneA_Pos") Let me know if I can clarify any points. I appreciate any help!
Answer: For the first question, you can use ifelse() to create a new column in the meta.data slot:

All_Samples@meta.data$new_column <- ifelse(
  rownames(All_Samples@meta.data) %in% colnames(All_Samples_GeneA_Pos),
  "GeneA_Pos",
  "GeneA_Neg"
)

colnames(seurat_object) provides a vector of cell names in a given Seurat object. Here, whatever cell is in the All_Samples_GeneA_Pos object would be labeled GeneA_Pos, and whatever is not, GeneA_Neg. To better control the behavior, you can use a "nested" ifelse(); you can put another ifelse() in place of the "GeneA_Neg" argument above. For the second question, the rationale is the same as above: you can define an extra column with the "identities of interest", then set these identities with SetIdent(), and in turn use two identities of interest in the differential expression call.
{ "domain": "bioinformatics.stackexchange", "id": 1349, "tags": "r, scrnaseq, seurat, covid-19, sars-cov-2" }
Translation Operator and Position Basis
Question: In Modern Quantum Mechanics by Sakurai, on page 46, while deriving the commutator of the translation operator with the position operator, he uses $$\left| x+dx\right\rangle \simeq \left| x \right\rangle.$$ But for every $\epsilon > 0$ $$\langle x+ \epsilon \,|\, x \rangle = 0.$$ Therefore this limiting process $$\lim_{\epsilon \rightarrow 0} \left| x+ \epsilon \right\rangle = \left| x \right\rangle$$ does not make sense to me. I couldn't derive the commutator relation without using these. Answer: The derivation by Sakurai is by no means mathematically rigorous, so you should expect something like your argument about the scalar product. Indeed, we have everything more or less fine until $$ [x,\mathcal{T}(\epsilon)]|z\rangle=\epsilon|z+\epsilon\rangle $$ where we want to replace $|z+\epsilon\rangle$ by $|z\rangle$ and claim that it is ok to first order in $\epsilon$. Since position eigenstates are non-normalizable, there is no measure of 'smallness' to use in our reasoning about orders. However, what makes sense is to deduce $[x,\mathcal{T}(\epsilon)]=\epsilon\mathcal{T}(\epsilon)$, which is true for any finite $\epsilon$. Here the reason why everything works nicely is that $\mathcal{T}$ is a good bounded (= continuous) operator which is defined on the whole Hilbert space of states, and is easily understood even on generalized vectors like $|x\rangle$. In fact, if you work in the coordinate representation, you can deduce this commutator working only with normalizable wavefunctions, on which the action of $x$ is defined (they remain normalizable after this action), giving completely rigorous mathematical sense to your calculation. What is different when you try to deal with Sakurai's $K$ (which you are trying to do every time you talk about infinitesimal translations) rigorously is that it is a bad (unbounded, discontinuous) operator. Indeed, in a sense, $$ K=i\left.\frac{d}{d\epsilon}\mathcal{T}(\epsilon)\right|_{\epsilon=0}. 
$$ But the only way to give sense to this formula is to define the action of $K$ on states: $$ K|\psi\rangle=i\lim_{\epsilon\to0}\frac{\mathcal{T}(\epsilon)|\psi\rangle-|\psi\rangle}{\epsilon} $$ But this limit exists only for certain good states, which we say are in the domain of $K$. In fact, if you look at $K$ in the coordinate rep, it is just $-i\frac{d}{dx}$, which is defined on the (everywhere dense) subspace of differentiable functions of the space $L_2$ of square-integrable functions. When you deal with $K$ rigorously, you have to restrict yourself to the domain of $K$ (for example, if you consider joshphysics's answer, where every formula with $K$ is restricted to the domain, it is almost a rigorous proof). However, for a reason surely related to the fact that the domain $D(K)$ of $K$ is everywhere dense (any state can be approximated by a state from $D(K)$ to any desired accuracy), a careless treatment like that of Sakurai works.
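The finite-$\epsilon$ relation $[x,\mathcal{T}(\epsilon)]=\epsilon\mathcal{T}(\epsilon)$ can be checked symbolically in the coordinate representation, where $\mathcal{T}(\epsilon)$ acts as $\psi(x)\mapsto\psi(x-\epsilon)$ (a small SymPy sketch on a generic wavefunction):

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', real=True)
psi = sp.Function('psi')

# Translation operator in the coordinate representation: (T(eps) psi)(x) = psi(x - eps)
T = lambda f: f.subs(x, x - eps)

lhs = x * T(psi(x)) - T(x * psi(x))  # [x, T(eps)] acting on psi
rhs = eps * T(psi(x))                # claimed equal to eps * T(eps) psi

print(sp.simplify(lhs - rhs))  # 0
```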
{ "domain": "physics.stackexchange", "id": 7521, "tags": "quantum-mechanics, operators, commutator, dirac-delta-distributions" }
Can the up quark still be massless?
Question: It used to be commonly discussed that the bare mass of the up quark can be $0$. This was because we can't observe its effect directly. To my knowledge the up-quark mass can only be inferred from its effect on the pion mass via chiral perturbation theory, but relating the pion mass to the up-quark mass results in some large uncertainties (see for example here for a discussion on this). However, I can't find any recent reference still discussing this possibility. Is this still allowed experimentally? Answer: This, admittedly, isn't much of an answer, as I'm merely repeating information from the Particle Data Group page about the up-quark, which I consider up-to-date. Their current combination is that $m_u = 2.3^{+0.7}_{-0.5}\,\text{MeV}$, but they warn that The $u$-, $d$-, and $s$-quark masses are estimates of so-called "current-quark masses," in a mass-independent subtraction scheme such as MS-bar. The ratios $m_u/m_d$ and $m_s/m_d$ are extracted from pion and kaon masses using chiral symmetry. The estimates of $d$ and $u$ masses are not without controversy and remain under active investigation. Within the literature there are even suggestions that the $u$-quark could be essentially massless Naively, then, the up-quark isn't massless (for a reasonable level of agreement with data), but there are indeed complications and caveats, which mean that the idea that it might be massless is alive.
{ "domain": "physics.stackexchange", "id": 24959, "tags": "mass, standard-model, quarks" }
Derivation of Raising operator of $\rm SU(2)$
Question: I'm reading a paper called "A Simple Introduction to Particle Physics Part I - Foundations and the Standard Model" and I have some questions regarding the derivation of the raising and the lowering operators of $SU(2)$. In the adjoint representation for $j=1$ the only Cantan generator is the $J^3$ matrix, so the root vectors are the following: $$t_1=1, t_2=0, t_3=-1 \ \ \text{Equation(s) (1)}$$ Since we want $[J^3,E^{\pm}]=\alpha E^{\pm}$, the raising and lowering operators need to be of this form: $$E^{\pm}=\alpha(J^1\pm iJ^2)$$ Next we evaluate: $$[E^{+},E^{-}]=2\alpha^2J^3$$ Now the paper I'm reading states the following: From equations $(1)$ and the definitions of $E^{\pm}$ we see that $\pm(t_1-t_2)=\pm 1$, so we therefore set $2\alpha^2=1 \Rightarrow \alpha=\frac{1}{\sqrt{2}}$ and we find the appropriate non carter generators: $$E^{\pm}=\frac{1}{\sqrt{2}}(J^1\pm iJ^2)$$ Question 1: During the derivation of the relations that the generators should obey, the authors state that: $$[H^a,E^{e_b}]=e^a_bE^{e_b}$$ where $H^a$ is a Cartan and $E$ a non-Cartan generator. Why can the raising operator and the lowering operator be a combination of the two non-Cartan generators $J^1$ and $J^2$? Question 2: What's the relation between the final result and equations $(1)$? While I understand what these equations mean, I don't understand how one result leads to the final result. Why do we only care about $\pm(t_1-t_2)$ and not $\pm(t_1-t_3)$ for example? Answer: I think you mean "Cartan" (after Elie Cartan, the French mathematician) rather than "Cartar"? A semisimple Lie algebra is the direct sum of the Cartan algebra, composed of mutually commuting generators, and the rest of the generators, whose skew-adjointness with respect to the Killing form shows that they can be gathered into pairs of ladder operators. I have no idea, though, what you mean by $t_1$ and $t_2$. I do not have time to try and find the paper you cite so as to figure out what they are.
If you want help here, you need to explain what is puzzling. If the Cartan algebra generators are $h_i$ (in $\mathfrak {su}(2)$ there is only one, which is $J_3$) the ladder operators ${\bf e}_{\boldsymbol \alpha}$ are the simultaneous eigenvectors of the adjoint action $$ {\rm ad}(h_i) {\bf e}_{\boldsymbol \alpha} \equiv[h_i, {\bf e}_{\boldsymbol \alpha}]= \alpha_i {\bf e}_{\boldsymbol \alpha}. $$ of the maximally commuting set $h_i$. This means that ${\bf e}_{\boldsymbol \alpha}$ changes the eigenvalues of $h_i$ by $\alpha_i$, the $i$'th components of the root vector ${\boldsymbol \alpha}$. For $\mathfrak {su}(2)$ the root vector ${\boldsymbol \alpha}$ has only one component "1" and so $J_3$ has its eigenvalue increased by unity by ${\bf e}_{\boldsymbol \alpha}=J_1+iJ_2$. For each ${\boldsymbol \alpha}$ there is a second ladder operator ${\bf e}_{-{\boldsymbol \alpha}}$, in this case it is $J_1-iJ_2$ that reduces the eigenvalue of $J_3$ by unity. The set $h_i$ and the ${\bf e}_{\pm{\boldsymbol \alpha}}$ form a basis for the whole (complexified) Lie algebra. I do not understand why you have three root vectors $t_{1,2,3}$ in your case.
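These commutation relations are easy to verify numerically in the spin-1 (adjoint) case. The sketch below is my own illustration, using the standard spin-1 matrices in the $J_3$ eigenbasis (nothing here is taken from the paper under discussion): the commutator $[J_3, J_1 \pm iJ_2]$ equals $\pm(J_1 \pm iJ_2)$, so these combinations shift the $J_3$ eigenvalue by $\pm 1$.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def comm(A, B):
    # commutator [A, B] = AB - BA for 3x3 matrices
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

s = math.sqrt(2)
# standard spin-1 matrices in the J3 eigenbasis
J3 = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
Jp = [[0, s, 0], [0, 0, s], [0, 0, 0]]   # J1 + i J2
Jm = [[0, 0, 0], [s, 0, 0], [0, s, 0]]   # J1 - i J2

Ep = [[v / s for v in row] for row in Jp]  # E+ = (J1 + i J2)/sqrt(2)
Em = [[v / s for v in row] for row in Jm]  # E- = (J1 - i J2)/sqrt(2)

# [J3, E+] = +E+ and [J3, E-] = -E- : the ladder operators shift
# the J3 eigenvalue by +1 and -1 respectively
print(comm(J3, Ep))  # equals Ep
print(comm(J3, Em))  # equals -Em
```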
{ "domain": "physics.stackexchange", "id": 72286, "tags": "angular-momentum, quantum-spin, group-theory, representation-theory, lie-algebra" }
What's the best way to keep track of changes to ROS stacks and packages?
Question: Title says it all. News feed? Run rosinstall frequently? Something else? macports has selfupdate, for example. Originally posted by Eponymous on ROS Answers with karma: 255 on 2011-03-14 Post score: 3 Answer: It depends on what OS you are on. If Ubuntu, just use debs and let the package manager handle it. If something else, running rosinstall is probably the best way to make sure you are up to date. Subscribing to the ros-users list is also an excellent way to keep track of important changes, as they are often announced there before they happen. There was talk of perhaps making something like a "ros-announce" list that would likely be moderated and would only handle important announcements (such as the change from setup.sh to setup.bash) so that they don't get lost in the sometimes high-traffic ros-users list. Originally posted by Eric Perko with karma: 8406 on 2011-03-14 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Eponymous on 2011-03-15: a ros-announce list sounds like a good idea. Or maybe just a page on the website listing updated packages and the like?
{ "domain": "robotics.stackexchange", "id": 5068, "tags": "ros, package, stacks" }
VSEPR theory, chemical bond and quantum mechanics
Question: VSEPR theory correctly predicts the shapes of many symmetry-broken molecules such as $\ce{H2O}$ and $\ce{NH3}$. Take $\ce{NH3}$ for example. In VSEPR theory, the nitrogen atom is (approximately) at the center of a tetrahedron, the three $\ce{N-H}$ bonds point to three of the four vertices of the tetrahedron, and the lone pair of nitrogen points to the $4$th vertex. But quantum mechanically speaking, the electrons should all be delocalized in the entire $\ce{NH3}$ molecule. How do I unify the two pictures to understand the concept of chemical bonds and VSEPR theory in quantum mechanics? Does VSEPR correspond to some kind of trial wave function (e.g. antisymmetrized geminal power (AGP))? Note: when I say how to understand chemical bonds in quantum mechanics, I mean that in the chemical bond description of molecules, electron pairs are localized at the bonds, while quantum mechanics again says everything can be delocalized. So it's the same discussion as VSEPR vs. QM. If there are only two atoms and one bond, the quantum mechanical meaning of the chemical bond is clear. Answer: VSEPR is a simple and generalised approach based (mainly) on empirical observation. It is a great theory for predicting, to a first-order approximation, the geometrical shapes of molecules. That is a lot harder with other methods. It is of course based on a physical foundation, not only empiricism. You might want to explore electron domains and the Pauli principle in this context. There have been a lot of post-rationalisations that cause more harm than good, because they clearly go beyond the limitations of the theory. One of the most mis-taught ones is the involvement of d-orbitals in hybridisation schemes, but that comes from a lack of understanding by some instructors, and its confusion with valence bond theory. One of the most prominent examples where it (basically) fails is outlined in my answer here: Are the lone pairs in water equivalent?
At this point it should be noted that the general topology of the electron density is quite well reproduced. With that in mind, a reconciliation of this empirical approach with quantum theory to understand bonding is dangerous, if not futile. It should be used for providing a reasonable guess for a molecular structure and its principal explanation. Anything beyond that might lead to wrong conclusions. It cannot, in any way, be used to generate a guess for a wave function, because it is not based on it. You will have to use another method for that. Often confused with VSEPR is valence bond theory (VBT). It cannot be stressed enough that the two are completely independent (even at their crudest level). There has been a lot of development in that field, and what is often taught as VBT can only be classified as a very crude approximation. It is in principle an exact theory, but carried out at the theoretical limit it is not as easily understood as what is commonly taught. (Just have a look at resonance and its misconceptions, as outlined here: What is resonance, and are resonance structures real?) Another way to describe bonding is molecular orbital theory (MOT), which already comes with the necessary property of electron delocalisation. Unfortunately, this theory is more difficult to get started in, and does not provide an easy picture to follow. The good thing about it is that it doesn't get much more complicated at its theoretical limit. At their respective theoretical limits the two approaches are equivalent (VBT describes electron correlation via resonance structures; MOT needs multiple determinants for this). Understanding bonding is not easy, and it is by far not without controversy. Even with approximate VBT or MOT there can be many misconceptions and incorrect deductions, conclusions, and rationalisations. For everything in the "grey" areas, there are opinions, interpretations, and opinions about interpretations.
In any and every case one should always be aware of the limitations of the model used. One should also be critical of the results found. One should always expect everything to be a lot more complicated than expected (case in point: CO2). If you keep all of that in mind: VSEPR is an awesome model system.
{ "domain": "chemistry.stackexchange", "id": 9537, "tags": "bond, quantum-chemistry, vsepr-theory" }
Is entanglement the only way to get mixed state that is consistent with the Schrödinger equation?
Question: If we treat our entire system (say an electron and a bunch of atoms) quantum mechanically, then all possible interactions will be unitary transformations. Thus any state that I describe will always be a pure state. But if I observe only a subspace of my system (just the spin of the electron, say), I need to trace out the rest of the space and I end up with a density matrix. If my states were separable to begin with, then my density matrix will correspond to a pure state. The only way to get a mixed state would be if the spin of my electron was entangled with the rest of the system. Right? In other words, is a mixed state always an entangled state in a higher dimension? Edit: My question is not about purification. I do not care if I can find a state in my complete Hilbert space by purification. Rather, is entanglement the only way to go from a pure state to a mixed state? Thus it isn't a duplicate of this. Answer: Purely mathematically, this is certainly true, since any mixed state in a Hilbert space $H$ can be "purified", i.e. we can exhibit a pure state in $H\otimes H$ whose partial trace is the mixed state. In "physical" terms, a mixed state doesn't need to always arise as the partial trace of a larger entangled system: If I have an electron source and I tell you that half the electrons it spits out are spin-up (with respect to some spin operator) and the other half is spin down, then you will likely model what you know about an electron in the beam by assigning it a mixed state of 50% spin-up and 50% spin-down. So this mixed state models incomplete knowledge about the pure state of the individual electrons, but it didn't arise from any sort of entanglement or larger pure state - whether or not my source internally uses entanglement to achieve this outcome is completely irrelevant for your model.
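The two cases in the question can be made concrete with a partial trace. Below is a minimal sketch of my own (not from the answer), tracing out the second qubit of a two-qubit state: a separable state leaves a pure reduced state (purity 1), while a Bell state leaves the maximally mixed state (purity 1/2).

```python
import math

def reduced_dm(psi):
    # trace out the second qubit of a two-qubit state psi[2*a + b]:
    # rho_A[i][j] = sum_b psi[2i+b] psi[2j+b]^*
    return [[sum(psi[2 * i + b] * psi[2 * j + b].conjugate() for b in range(2))
             for j in range(2)] for i in range(2)]

def purity(rho):
    # Tr(rho^2): 1 for a pure state, 1/2 for a maximally mixed qubit
    return sum(rho[i][k] * rho[k][i] for i in range(2) for k in range(2)).real

product = [1.0, 0.0, 0.0, 0.0]                       # |00>: separable
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]  # (|00> + |11>)/sqrt(2)

print(purity(reduced_dm(product)))  # 1.0 -> reduced state still pure
print(purity(reduced_dm(bell)))     # ~0.5 -> maximally mixed
```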
{ "domain": "physics.stackexchange", "id": 93409, "tags": "quantum-mechanics, quantum-information, quantum-entanglement, density-operator, quantum-states" }
What are these caterpillars eating a potato plant?
Question: I saw these caterpillars rapidly consuming a potato plant in Central Texas: They have thin, dark green-blue bodies with yellow-white longitudinal stripes and a bright rust-red head. Their body seems to be covered in fine hairs which may be difficult to see in the photo. They are about 1 cm long. On some leaves of the same plant I saw what appear to be eggs - little black balls 1-2 mm in diameter, with a shiny appearance reminiscent of black caviar (well, I suppose they are black caviar). They seem to cover only a handful of leaves, in groups of about 100 or so (rough guess). Some can be seen in the photo. Curiously enough, nearby there were tomato and strawberry plants, but they were not affected, while the potato was extensively predated (a few dozen leaves consumed). What are these caterpillars? How do you kill them? Answer: I believe this is the middle instar of a Southern Armyworm (Spodoptera eridania) larvae. Or some closely related species in the Spodoptera genus (family Noctuidae). © 2016 Wings, Worms, & Wonder The larvae undergo six instars as they grow to attain a length of about 35 mm. You can see a series of life stage pictures from Mississippi State. Armyworms are so called because you typically find 100s at a time on a single leaf or plant. According to here: Armyworms strike in the blink of an eye, and there are so many in their army that you will be finding them for weeks after the primary attack. Behavior: larvae are mostly active at night and hide on leaf undersides, in curled leaves, or in leaf litter during the day. This University of Florida site claims Young larvae feed on under surface of leaflets leaving upper epidermis intact ("windowpaned"). Older larvae consume foliage and eat large holes in fruit. Range: native to the American tropics (occurs widely in Central and South America and the Caribbean). Also found in US (principally in the Southeast but extends West to Kansas and New Mexico). It also is reported from California. 
Here's a map showing some confirmed sightings [source]: Host Plants: This species has a very broad host range, including important crops such as potatoes. I would be concerned for your nearby tomatoes, because they too are a favorite food of these insects. See here for more information, including management suggestions. Defecation vs. Eggs?? The picture in the question shows S. eridania's feces. This Mississippi State site includes a great zoomed-in picture of S. eridania's feces: © Lee Ruth Note: caterpillars don't lay eggs. Only adult moths/butterflies lay eggs. Eggs: large masses of 100-200 eggs covered with moth body scales, found on underside of leaves, hatch in 3 days. © 2016 Wings, Worms, & Wonder Here's a picture of the inside of the egg casing of the closely related S. exigua [source]: © 2016 Wings, Worms, & Wonder Finally, here's a picture of the nocturnal adult:
{ "domain": "biology.stackexchange", "id": 6780, "tags": "species-identification, botany, zoology, entomology, pest-control" }
Finding most likely tree over a semilattice
Question: If I am not mistaken, then a semilattice defines a finite set of trees, for example spanning trees. Now assume that each semilattice edge is annotated with a transition probability. In addition, let's assume that the probability of a tree over the semilattice is the product of the probabilities of all the edges that the tree uses. How do I find the most likely spanning tree, that is, the most likely hypothesis for how the nodes came to be? I am unsure whether this amounts to finding a maximum spanning tree of the semilattice. I guess this has applications in probabilistic graphical models... Answer: The maximum spanning tree is about summing weights; maximizing the probability is about multiplying weights. To convert multiplication to sums, take the logarithm. Hopefully that's enough for you to work out an algorithm for this task.
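The log trick can be sketched in a few lines. This is my own illustration, not from the answer (Kruskal's algorithm with an inline union-find, assuming edge probabilities in (0, 1]): maximizing the product of edge probabilities is the same as running a maximum spanning tree on the weights log(p).

```python
import math

def max_likelihood_spanning_tree(n, edges):
    """edges: (u, v, p) with p in (0, 1] the edge's transition probability.
    Maximizing the product of p's = maximizing the sum of log(p)'s,
    so run Kruskal in decreasing order of log(p)."""
    parent = list(range(n))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, logp = [], 0.0
    for u, v, p in sorted(edges, key=lambda e: -math.log(e[2])):
        ru, rv = find(u), find(v)
        if ru != rv:          # take the edge unless it closes a cycle
            parent[ru] = rv
            tree.append((u, v, p))
            logp += math.log(p)
    return tree, math.exp(logp)

edges = [(0, 1, 0.9), (1, 2, 0.5), (0, 2, 0.8), (2, 3, 0.7)]
tree, prob = max_likelihood_spanning_tree(4, edges)
print(tree, prob)  # picks 0-1, 0-2, 2-3 with probability 0.9*0.8*0.7
```

The log transform is safe here because log is monotone, so the edge ordering (and hence the chosen tree) is the same as ordering by p directly; it just turns the product objective into the sum objective that spanning-tree algorithms expect.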
{ "domain": "cs.stackexchange", "id": 14983, "tags": "trees, spanning-trees, lattices, graphical-models" }
Programming languages with canonical functions
Question: Are there any (functional?) programming languages where all functions have a canonical form? That is, any two functions that return the same values for every set of inputs are represented in the same way, e.g. if f(x) returned x + 1, and g(x) returned x + 2, then f(f(x)) and g(x) would generate indistinguishable executables when the program is compiled. Perhaps more importantly, where/how might I find more information on canonical representation of programs (Googling "canonical representation programs" has been less than fruitful)? It seems like a natural question to ask, and I'm afraid that I just don't know the proper term for what I am looking for. I'm curious as to whether it is possible for such a language to be Turing complete, and if not, how expressive a programming language you can have, while still retaining such a property. My background is rather limited, so I would prefer sources with fewer prerequisites, but references to more advanced sources may be cool too, as that way I'll know what I want to work towards. Answer: The extent to which this is possible is actually a major open question in the theory of the lambda calculus. Here's a quick summary of what's known: The simply-typed lambda calculus with unit, products, and function space does have a simple canonical forms property. Two terms are equal if and only if they have the same beta-normal, eta-long form. Computing these normal forms is also quite straightforward. The addition of sum types greatly complicates matters. The equality problem is still decidable (the keyword to search for is "coproduct equality"), but the known algorithms work for extremely tricky reasons and to my knowledge there is no totally satisfying normal form theorem. Here are the four approaches I know of: Neil Ghani, Beta-Eta Equality for Coproducts, TLCA 1995. Vincent Balat, Roberto Di Cosmo, Marcelo Fiore, Extensional normalisation and type-directed partial evaluation for typed lambda calculus with sums, POPL 2004.
Sam Lindley, Extensional Rewriting with Sums, TLCA 2007. Arbob Ahmad, Daniel R. Licata, and Robert Harper, A Proof-Theoretic Decision Procedure for the Finitary Lambda-Calculus. WMM 2007. The addition of unbounded types, such as natural numbers, makes the problem undecidable. Basically, you can now encode Hilbert's tenth problem. The addition of recursion makes the problem undecidable, because having normal forms makes equality decidable, and that would let you solve the halting problem.
{ "domain": "cstheory.stackexchange", "id": 1416, "tags": "computability, pl.programming-languages, functional-programming, function" }
What exactly does point-of-no-return for carbon emissions mean?
Question: I think I have heard that if carbon emissions rise a small amount above the current level, it would be a "point of no return". I heard it a long time ago, so I just searched Google and the top article says "irreversible damage". But when I searched Google for a graph of historic CO2 levels, there was something like the one below. Granted that this was not the top search result (the top one was not continuous), but if this graph is correct, the CO2 level has been historically much higher than now. If there is any irreversible damage, won't that have already happened? And the CO2 level seems to be able to go down once it has risen, so it does not seem that the current CO2 level cannot get lowered once a certain level is reached. Answer: It's a question of time scales. Yes, carbon concentrations have been higher in the past. We've also had planetary extinction events that have wiped out a significant portion of the earth's biota. The 'point of no return' speaks to the idea that, before that point, we are able to return to a pre-anthropogenic climate which we are familiar with and which is able to continue to support us (through agriculture, predictable drought & flood recurrence). Conversely, once we pass that point, there are feedback loops that will cause the climate to move towards a new 'stable' state which we are unfamiliar with and which may not support many of the things that have made our lives easy. That's not to say "the earth" will be irreparably damaged, but rather that the time scales involved are not practical, and we will have to deal with some problematic climate change effects.
{ "domain": "earthscience.stackexchange", "id": 2539, "tags": "climate-change, carbon-cycle" }
Wait for a "3rd party" element to be appended to the DOM
Question: Let's say we're writing a user script which interacts with a 3rd party site we don't control. We want to open a menu, so we trigger a click on the button and then wait 10ms before accessing the menu (which is appended to the document after the click), like this: $button = $('#button'); $button.trigger('click'); setTimeout(function() { //access the menu }, 10); This works, because it takes less than 10ms for the 3rd party site's handler to appended the menu to the DOM once the button is clicked. But it feels like bad code. Is there a better way to write this? Answer: That's a perfect job for MutationObserver. new MutationObserver(function(mutations) { this.disconnect(); // access the menu ........ }).observe($('.immediate-parent-of-the-menu')[0], {childList: true}); $button = $('#button'); $button.trigger('click'); The simplest case above works as-is if the menu is added in one mutation, otherwise you'd have to check each of the mutations array elements for addedNodes.
{ "domain": "codereview.stackexchange", "id": 21356, "tags": "javascript, jquery, event-handling, dom" }
Where do planets get energy to revolve around sun?
Question: We know that every planet in our solar system revolves around the sun in a particular orbit. But where do they get the energy to revolve around the sun? And why do they not drop into the sun, when the only force acting is the gravitational force, which is always attractive in nature? Answer: They are technically falling toward the sun. The gravitational force of the sun is what keeps them in orbit around the sun and not floating away. But they are also moving really fast. They are moving so fast that the direction in which they are attracted to the sun changes constantly, which makes them circle around it instead of actually falling into it. And since they do not encounter large amounts of friction while moving through space (it's a near-vacuum), they do not need energy to keep moving.
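The point can be made quantitative with a small sketch of my own (not from the answer), in units where GM = 1: integrating a circular orbit under gravity alone with a leapfrog (velocity Verlet) step, the total energy E = v²/2 - 1/r stays constant over a full orbit, so no energy input is needed to keep the planet moving.

```python
import math

def accel(x, y):
    # Newtonian gravity toward the origin, GM = 1
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    # total energy per unit mass: kinetic + potential
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

x, y = 1.0, 0.0      # start at r = 1
vx, vy = 0.0, 1.0    # circular-orbit speed for GM = 1, r = 1
dt = 1e-3

e0 = energy(x, y, vx, vy)
for _ in range(int(2 * math.pi / dt)):  # roughly one full orbit
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # kick
    x += dt * vx; y += dt * vy                # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # kick

print(e0, energy(x, y, vx, vy))  # both ~ -0.5: gravity alone sustains the orbit
```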
{ "domain": "physics.stackexchange", "id": 18452, "tags": "energy, planets, solar-system" }
SW to urdf converter: errors in links masses
Question: Hi, We have used the SW to urdf converter to obtain the model of a 6 dof arm robot (6 links). The converter works well but we found that the masses calculated for each link correspond to the mass of the entire robot. It seems that the same is happening with moments of inertia. Does anybody know if there is a mistake in the converter? Thanks in advance! Originally posted by Maddi on ROS Answers with karma: 11 on 2012-11-23 Post score: 1 Answer: Thanks for the question. Yes, this was a bug, you may have been the one to email me about it. It was addressed in V1.0 Build 4714.35040. Download the latest version and reinstall. Originally posted by brawner with karma: 581 on 2012-12-14 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11854, "tags": "urdf, solidworks" }
tf broadcasting and listening in the same node
Question: I am currently trying to make a perception part using tf. My robot's position is broadcast to tf, and the robot tries to read obstacles using tf. I want to put this in the same node, but then the message ""test1" passed to lookupTransform argument target_frame does not exist." is generated. With two nodes tf succeeds every time, so why does it fail within one node? I think the problem is "time", and I have tried various stamps: ros::Time(), ros::Time(0), ros::Time::now(), ros::Time::now() - ros::Duration(0.1), etc. But none of them worked. I tested with two threads in one node, which was a success. What is the problem? #include "tf/transform_listener.h" #include <ros/ros.h> #include <tf/transform_broadcaster.h> int main(int argc, char **argv) { ros::init(argc, argv, "tf_b"); tf::TransformBroadcaster br; tf::Transform transform; tf::Quaternion q; ros::NodeHandle nh; transform.setOrigin(tf::Vector3(1.0, 1.0, 1.0)); q.setRPY(0, 0, 0); ros::Rate loop_rate(10); while (ros::ok()) { ros::Time tGetTime = ros::Time::now(); transform.setRotation(q); br.sendTransform( tf::StampedTransform(transform, tGetTime, "world", "test1")); tf::TransformListener listener; try { listener.waitForTransform("/test1", "/world", tGetTime, ros::Duration(0.1)); tf::StampedTransform transform; listener.lookupTransform("/test1", "/world", tGetTime, transform); ROS_INFO_STREAM(transform.getOrigin().getX()); } catch (tf::TransformException &ex) { ROS_ERROR(ex.what()); } ros::spinOnce(); loop_rate.sleep(); } return 0; } On top of that, I've simplified the code above to make it easier to understand. Originally posted by EunsanJo on ROS Answers with karma: 3 on 2018-11-17 Post score: 0 Answer: "I think this problem is 'time'" - that might be, but perhaps not in the way you think. You create your TransformListener inside your while loop. Don't do that. It needs time to fill its buffer, and you don't give it any. Create it right after your NodeHandle, and use an AsyncSpinner.
Originally posted by gvdhoorn with karma: 86574 on 2018-11-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by EunsanJo on 2018-11-17: Thank you. You are the best. I succeeded because I put 'listener creator' outside 'while loop'! Comment by gvdhoorn on 2018-11-17: Using an AsyncSpinner might still be a good thing to do, just to avoid blocking updates to your TransformListener with long running operations.
{ "domain": "robotics.stackexchange", "id": 32065, "tags": "ros, ros-kinetic, tf-listener, tf-broadcaster, transform" }
Nomenclature of ester with cyclic substituents
Question: I have to give the name of the following compound. The name is cyclopropylmethyl cyclobutanecarboxylate. All of it makes sense, but I don't understand why the suffix carboxylate appears. Why are we writing the single carbon left unaccounted for by using carboxyl? Kindly clarify. Answer: The group -COO- is called "carboxylate" if it is included in a molecule with some organic radicals attached on its left and on its right-hand side. It is a definition, due to the fact that there is a carbon atom bound to an oxygen atom; joined together, they make "carboxy". This terminology is used when no simpler way of naming this structure exists. Examples: the structure CH3-COO- may be called methylcarboxylate, but usually it is called acetate, which is shorter. The structure C6H5COO- may be called phenylcarboxylate, but usually it is called benzoate, as it is a shorter name.
{ "domain": "chemistry.stackexchange", "id": 12979, "tags": "organic-chemistry, nomenclature" }
Inflation and CMB power spectrum
Question: So inflation is invoked to solve the horizon problem, so that every point in the CMB was in causal contact. Does this then contradict the calculation, without inflation, of the angular size of the horizon at last scattering, which is about 1 deg? That 1 deg was used to say that on scales larger than that the fluctuation is due to initial conditions, because nothing was in causal contact. Is this no longer valid with inflation then? Does it mean that on scales larger than 1 deg there should be "acoustic peaks"? Answer: To study the causal structure it is useful to use the conformal time $\eta$, \begin{equation} ds^2=dt^2-a^2(t)dl^2=a^2(\eta) (d\eta^2-dl^2) \end{equation} which may be found as \begin{equation} \eta=\int^t \frac{d\tilde{t}}{a(\tilde{t})} \end{equation} The reason is that in these coordinates the lightlike trajectories $ds^2=0$ correspond to the diagonal lines $\eta=\pm x+\mathrm{const}$. In the hot Big Bang scenario the early universe is filled with radiation, which corresponds to the equation of state $w=1/3$ and the power law for the cosmological expansion $a\propto t^{1/2}$. The conformal time in this case has a finite span since the Big Bang, $\eta>\eta_0$, where $\eta_0$ corresponds to $t\rightarrow 0$. Thus the lightlike trajectory moving into the past ends at some finite distance $\delta x=\eta-\eta_0$. This means that the causally connected region is finite and grows with time. However, for the classical solution at the inflationary stage $w<-\frac{1}{3}$ and $\eta$ is not bounded from below, even if the cosmic time $t$ since the Big Bang ($a=0$) were finite. Therefore, the causally connected region may be enormous (formally infinite for such a solution). Instead you will have a finite range for $\eta$ in the future! This reflects the accelerated expansion of the universe, which drives distant points away from each other faster than they may be connected by a light signal.
You may match these two solutions at some $t_{reh}$ - the stage when inflation ends, the inflaton decays and reheating occurs. Because this happened when $a$ was very small, the notions "causally connected since the beginning of time within the hot Big Bang scenario" and "causally connected since the end of inflation" are very close to each other. The actual horizon as we see it in the CMB corresponds to the latter. So what happens in the inflationary scenario? The observable universe originated from an extremely tiny region of space that was more or less a vacuum. This erased any primordial inhomogeneities such as the acoustic peaks you ask about. It expanded extremely fast, in fact so fast that different parts stopped influencing each other, which allowed quantum fluctuations to grow and seed future inhomogeneities. The inhomogeneities produced this way are extremely simple: almost Gaussian and almost scale-invariant (though not exactly). Then inflation stopped. Those different parts could now communicate with each other, but this process takes time, and this produces the horizon evident in the CMB. Not the true horizon from the beginning of time, but the horizon since the end of inflation.
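The contrast between the two conformal-time spans is easy to check numerically. The sketch below is my own illustration (midpoint-rule integration of $\eta=\int dt/a$ in arbitrary units, not from the answer): for the radiation era $a\propto t^{1/2}$ the span back to the Big Bang converges, while for de Sitter $a\propto e^{Ht}$ (taking $H=1$) it grows without bound into the past.

```python
import math

def conformal_span(a, t_lo, t_hi, n=100000):
    # eta(t_hi) - eta(t_lo) = integral of dt / a(t), midpoint rule
    h = (t_hi - t_lo) / n
    return sum(h / a(t_lo + (k + 0.5) * h) for k in range(n))

# radiation era, a ~ t^(1/2): the span since the Big Bang converges (-> 2 here)
for t0 in (1e-2, 1e-4, 1e-6):
    print(t0, conformal_span(lambda t: math.sqrt(t), t0, 1.0))

# inflation (de Sitter), a ~ e^t: the span grows without bound into the past
for T in (5.0, 10.0, 20.0):
    print(T, conformal_span(lambda t: math.exp(t), -T, 0.0))
```

Analytically, the radiation-era span is $2(1-\sqrt{t_0})\to 2$, while the de Sitter span is $e^{T}-1$, which diverges as the starting time is pushed back.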
{ "domain": "physics.stackexchange", "id": 92685, "tags": "cosmology, cosmological-inflation, cosmic-microwave-background" }
Is it possible to split a photon into two? And if so, how would Bohmian mechanics explain that?
Question: In standard QM, photons are waves, but in Bohmian mechanics, photons are particles being guided by waves. So, if you split the wave, do you also split the particle? How would that work? Answer: Contrary to popular belief, we cannot split a photon. Photons do not decay, and cannot be split the way you split a nucleus, for example. There is no natural decay of the photon, due to conservation of momentum and energy: if it split into two photons, their summed four-vectors would have a nonzero invariant mass. Can a photon be split? What you might be referring to, though, is called Spontaneous Parametric Down-Conversion, which is used quite frequently to produce entangled pairs of photons. Spontaneous parametric down-conversion (also known as SPDC, parametric fluorescence or parametric scattering) is a nonlinear instant optical process that converts one photon of higher energy (namely, a pump photon), into a pair of photons (namely, a signal photon, and an idler photon) of lower energy, in accordance with the law of conservation of energy and law of conservation of momentum. https://en.wikipedia.org/wiki/Spontaneous_parametric_down-conversion Now you are asking about Bohmian mechanics and how it would explain the splitting of the wavepacket. There is an interpretation that tries to do exactly that, and in this case, the explanation is that the wave packet (having only a single photon) enters a beam splitter, and splits into two smaller wavepackets. In this interpretation, one of the wavepackets has the particle inside it, and the other wavepacket is empty. https://arxiv.org/ftp/arxiv/papers/1410/1410.3416.pdf
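The four-vector argument can be checked numerically. The sketch below is my own illustration (not from the answer): the invariant mass squared of a pair of photons is positive unless they are exactly collinear, so a single massless photon cannot decay into two photons going in different directions.

```python
import math

def photon(E, theta):
    # four-momentum (E, px, py, pz) of a massless photon in the x-y plane
    return (E, E * math.cos(theta), E * math.sin(theta), 0.0)

def inv_mass2(p):
    # invariant mass squared m^2 = E^2 - |p|^2 (units with c = 1)
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

# a single photon is massless
print(inv_mass2(photon(1.0, 0.3)))   # ~0

# two non-collinear photons: the pair has m^2 = 2 E1 E2 (1 - cos(angle)) > 0,
# so it cannot match the four-momentum of one massless parent photon
pair = add(photon(1.0, 0.0), photon(2.0, 0.5))
print(inv_mass2(pair))               # > 0

# exactly collinear photons are the only massless combination
collinear = add(photon(1.0, 0.0), photon(2.0, 0.0))
print(inv_mass2(collinear))          # ~0
```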
{ "domain": "physics.stackexchange", "id": 68952, "tags": "quantum-mechanics, electromagnetic-radiation, bohmian-mechanics" }
Where should generated header files be generated to? How can I then export them with catkin?
Question: I have the following CMakeLists.txt cmake_minimum_required(VERSION 2.8.3) project(datatypes) find_package(catkin REQUIRED) catkin_package( ->INCLUDE_DIRS ${CATKIN_DEVEL_PREFIX}/${CATKIN_GLOBAL_INCLUDE_DESTINATION} # LIBRARIES rtdb_config # CATKIN_DEPENDS other_catkin_pkg # DEPENDS system_lib ) file(MAKE_DIRECTORY ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_INCLUDE_DESTINATION}) FILE(GLOB DATATYPE_RAW ${PROJECT_SOURCE_DIR}/datatypes/*) add_custom_target(${PROJECT_NAME} ALL COMMAND generate_some_header_files_to ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_INCLUDE_DESTINATION} SOURCES ${DATATYPE_RAW} ) Where the custom command generates header files to be used by other packages in the dir ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_INCLUDE_DESTINATION}. (Not sure this dir should be used!!) The problem seems to be that since this directory has not been created yet, it fails to be exported by catkin_package: catkin_package() include dir '/home/.../devel/include' is neither an absolute directory nor exists relative to '/home/.../src/rtdb/src/datatypes' Now the question: Where should generated header files be generated to? How can I then export them with catkin? Thanks in advance. Luis Originally posted by loliveira on ROS Answers with karma: 45 on 2014-01-28 Post score: 2 Answer: Your approach described in the question was almost right. As the CMake error indicates, the path you specified as catkin_package(INCLUDE_DIRS ...) must exist when the function is being invoked. Therefore you have to move the MAKE_DIRECTORY line above the function. But since the function is also responsible for defining the catkin DESTINATION variables, you need to get them explicitly beforehand using catkin_destinations(). Furthermore, you need to export the target that generates the headers so that downstream packages can cleanly depend on them being generated first.
Usually you should pass EXPORTED_TARGETS ${PROJECT_NAME}_generate_headers to catkin (but due to a bug until now you have to set the variable ${PROJECT_NAME}_EXPORTED_TARGETS instead). Last but not least you need to install the header files. The complete example would then look like this: cmake_minimum_required(VERSION 2.8.3) project(datatypes) find_package(catkin REQUIRED) catkin_destinations() file(MAKE_DIRECTORY ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_INCLUDE_DESTINATION}) file(GLOB DATATYPE_RAW ${PROJECT_SOURCE_DIR}/datatypes/*) # using a better target name for the custom target add_custom_target(${PROJECT_NAME}_generate_headers ALL COMMAND generate_some_header_files_to ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_INCLUDE_DESTINATION} SOURCES ${DATATYPE_RAW} ) set(${PROJECT_NAME}_EXPORTED_TARGETS ${PROJECT_NAME}_generate_headers) catkin_package( INCLUDE_DIRS ${CATKIN_DEVEL_PREFIX}/${CATKIN_GLOBAL_INCLUDE_DESTINATION} # instead of set(${PROJECT_NAME}_EXPORTED_TARGETS ...) # but will only work as of catkin 0.5.81 # EXPORTED_TARGETS ${PROJECT_NAME}_generate_headers ) # no need for patterns / excludes since the path should only contain the generated headers # by using the directory name with a slash at the end # the directory name and destination become more natural # (both mentioning CATKIN_PACKAGE_INCLUDE_DESTINATION) install( DIRECTORY ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_INCLUDE_DESTINATION}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} ) Originally posted by Dirk Thomas with karma: 16276 on 2014-01-28 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by loliveira on 2014-01-28: Perfect... that seems to be exactly what I'm looking for... unfortunately the good config breaks the compilation :) (my fault this time) ;) Only using catkin 0.5.79 so I'll use the first option.
{ "domain": "robotics.stackexchange", "id": 16794, "tags": "ros, catkin, headers, ros-hydro" }
Kerr frequency combs in ring resonators
Question: Is it possible to generate Kerr frequency combs using a macroscopic fiber ring resonator? Or is the phenomenon exclusive to monolithic microresonators? If this is possible, would the presence of additional spatial modes (i.e., higher powers) be an advantage? Answer: Is it possible to generate Kerr frequency combs using a macroscopic fiber ring resonator? There's nothing stopping you from doing this. Frequency combs come from bulk optics, and the switch to microresonators is a rather recent development. (On the other hand, if you want to have a macroscopic fiber ring, then you should think very carefully about how you're coupling light into and out of the resonator. Evanescent coupling works well for microscopic fibers, but it's unlikely to scale well to higher fiber diameters.) If this is possible, would the presence of additional spatial modes (i.e., higher powers) be an advantage? This is unlikely, unless you really don't care about the coherence (in which case - why are you working on a frequency comb?). The cavity resonance represents a single transverse mode, and whether this is a single eigenmode of the fiber or not is ultimately irrelevant. As to engineering advantages, this depends completely on the particulars of the system, what tradeoffs you've done elsewhere, and what your design priorities are. It's entirely possible that there are systems where it can be beneficial to use macroscopic fibers. You'd have to ask an optical engineer what those are.
{ "domain": "physics.stackexchange", "id": 59412, "tags": "optics, frequency, fiber-optics, laser-cavity" }
Could not find library corresponding to plugin
Question: Hi all. I'm trying to learn how to write rviz plugins from the book Mastering ROS for Robotics. But there is a problem I cannot solve. I created the package and then created the teleop_pad.h file here: #ifndef TELEOP_PAD_H #define TELEOP_PAD_H #include <ros/ros.h> #include <ros/console.h> #include <rviz/panel.h> class QLineEdit; namespace rviz_teleop_commander { // Every plugin must be a subclass of rviz::Panel class TeleopPanel: public rviz::Panel { // Qt signals and slots are used later; they require a QObject subclass, so the Q_OBJECT macro must be declared Q_OBJECT public: // Constructor. A QWidget instance is used in the class to build the GUI; the parent is initialized to 0 here TeleopPanel(QWidget* parent =0 ); // Override the rviz::Panel base-class functions used to save and load data in the config file; in this plugin the data is the topic name virtual void load(const rviz::Config& config); virtual void save(rviz::Config config) const; // Public slots public Q_SLOTS: // When the user enters a topic name and presses Enter, this slot is called to create a publisher for that topic void setTopic (const QString& topic) // Internal slots protected Q_SLOTS: void sendvel(); // Publish the current velocity values void update_Linear_Velocity(); // Update the linear velocity from user input void update_Angular_Velocity(); // Update the angular velocity from user input void updateTopic(); // Update the topic name from user input // Internal variables protected: // Topic name input box QLineEdit* output_topic_editor_; QString output_topic_; // Linear velocity input box QLineEdit* output_topic_editor_1; QString output_topic_1; // Angular velocity input box QLineEdit* output_topic_editor_2; QString output_topic_2; // ROS publisher used to publish the velocity topic ros::Publisher velocity_publisher_; // ROS node handle ros::NodeHandle nh_; // Currently stored linear and angular velocities float linear_velocity_; float angular_velocity_; }; } //end namespace rviz_plugin_tutorial #endif //TELEOP_PANEL_H the teleop_pad.cpp file #include <studio.h> #include <QPainter> #include <QLineEdit> #include <QVBoxLayout> #include <QHBoxLayout> #include <QLabel> #include <QTimer> #include <geometry_msgs/Twist.h> #include <QDebug> #include "teleop_pad.h" namespace rviz_teleop_commander { // Constructor: initialize variables TeleopPanel::TeleopPanel(QWidget *parent) : rviz::Panel(parent), linear_velocity_(0), angular_velocity_(0) { // Create an input box for the topic name QVBoxLayout *topic_layout = new QVBoxLayout; topic_layout->addwidget(new 
QLabel("Teleop Topic")); output_topic_editor_ = new QLineEdit; topic_layout->addwidget(output_topic_editor_); // Create an input box for the linear velocity topic_layout->addwidget(new QLabel("Linear Velocity")); output_topic_editor_1 = new QLineEdit; topic_layout->addwidget(output_topic_editor_1); // Create an input box for the angular velocity topic_layout->addwidget(new QLabel("Angular Velocity")); output_topic_editor_2 = new QLineEdit; topic_layout->addwidget(output_topic_editor_2); QHBoxLayout *layout = new QHBoxLayout; layout->addLayout(topic_layout); setLayout(layout); // Create a timer for publishing messages QTimer *output_timer = new QTimer(this); // Connect signals and slots connect(output_topic_editor_, SIGNAL(editingFinished()), this, SLOT(updateTopic())); // After a topic name is entered and Enter pressed, updateTopic() is called connect( output_topic_editor_1, SIGNAL(editingFinished()), this, SLOT( update_Linear_velocity())); // After a linear velocity is entered and Enter pressed, update_Linear_Velocity() is called connect( output_topic_editor_2, SIGNAL(editingFinished()), this, SLOT( update_Angular_velocity())); // After an angular velocity is entered and Enter pressed, update_Angular_Velocity() is called // Set the timer callback; sendVel is called periodically connect(output_timer, SIGNAL(timerout()), this, SLOT(sendVel())); output_timer->start(100); } // Update the linear velocity void TeleopPanel::update_Linear_velocity() { // Get the text from the input box QString temp_string = output_topic_editor_1->text(); // Convert the string to a float float lin = temp_string.toFloat(); // Store the current input value linear_velocity_ = lin; } void TeleopPanel::update_Angular_velocity { QString temp_string = output_topic_editor_2->text(); float ang = temp_string.toFloat(); angular_velocity_ = ang; } // Update the topic name void TeleopPanel::setTopic(const QString &new_topic) { // Check whether the topic has changed if (new_topic != output_topic_) { output_topic_ = new_topic; // If the name is empty, do not publish anything if (output_topic_ == "") { velocity_publisher_.shutdown(); } // Otherwise initialize the publisher else { velocity_publisher_ = nh.advertise<geometry_msgs::Twist>(output_topic_.toStdstring(), 1); } Q_EMIT configChanged(); } } // Publish the message void TeleopPanel::sendVel() { if (ros::ok() && velocity_publisher_) { geometry_msgs::Twist msg; msg.linear.x = linear_velocity_; msg.linear.y = 0; msg.linear.z = 0; 
msg.angular.x = 0; msg.angular.y = 0; msg.angular.z = angular_velocity_; velocity_publisher_.publish(msg); } } // Override base-class functionality void TeleopPanel::save(rviz::Config config) const { rviz::Panel::save(config); config.mapSetvalue("Topic", output_topic_); } // Override the parent-class functionality to load config data void TeleopPanel::load(const rviz::Config &config) { rviz::Panel::load(config); QString topic; if (config.mapGetString("Topic", &topic)) { output_topic_editor_->setText(topic); updateTopic(); } } } // end namespace rviz_plugin_tutorials // Declare this class as an rviz plugin #include <pluginlib/class_list_macros.h> PLUGINLIB_EXPORT_CLASS(rviz_telop_commander::TeleopPanel,rviz::Panel) //tutorial the package.xml: <?xml version="1.0"?> <package format="2"> <name>rviz_teleop_commander</name> <version>0.0.0</version> <description>The rviz_teleop_commander package</description> <maintainer email="shantengfei@todo.todo">shantengfei</maintainer> <license>TODO</license> <buildtool_depend>catkin</buildtool_depend> <build_depend>roscpp</build_depend> <build_depend>rviz</build_depend> <build_depend>std_msgs</build_depend> <build_export_depend>roscpp</build_export_depend> <build_export_depend>rviz</build_export_depend> <build_export_depend>std_msgs</build_export_depend> <exec_depend>roscpp</exec_depend> <exec_depend>rviz</exec_depend> <exec_depend>std_msgs</exec_depend> <!-- The export tag contains other, unspecified, tags --> <export> <!-- Other tools can request additional information be placed here --> <rviz plugin="${prefix}/plugin_description.xml"/> </export> </package> the plugin_description.xml <library path="lib/librviz_teleop_commander"> <class name="rviz_teleop_commander/Teleop" type="rviz_teleop_commander::TeleopPanel" base_class_type="rviz::Panel"> <description> A panel widget allowing simple diff-drive style robot base control. 
</description> </class> </library> and the CMakeLists.txt: cmake_minimum_required(VERSION 2.8.3) project(rviz_teleop_commander) find_package(Qt4 COMPONENTS QtCore QtGui REQUIRED) include(${QT_USE_FILE}) add_definitions(-DQT_NO_KEYWORDS) qt4_wrap_cpp(MOC_FILES src/teleop_pad.h ) set(SOURCE_FILES src/teleop_pad.cpp ${MOC_FILES} ) add_library(${PROJECT_NAME} ${SOURCE_FILES}) target_link_libraries(${PROJECT_NAME} ${QT_LIBRARIES} ${catkin_LIBRARIES}) I can build the package successfully. After building it I sourced the setup.bash of my workspace. But when I ran rviz and tried to add this new panel, this error came up: [ERROR] [1511090623.201457618]: PluginlibFactory: The plugin for class 'rviz_teleop_commander/TeleopPanel' failed to load. Error: Could not find library corresponding to plugin rviz_teleop_commander/TeleopPanel. Make sure the plugin description XML file has the correct name of the library and that the library actually exists. I have checked the package.xml and the name in the description XML just as I showed above, but I still couldn't find how to solve it. Could anyone help me? Thanks! Update: I have changed all telop* to teleop*, but it still doesn't work. Originally posted by tengfei on ROS Answers with karma: 88 on 2017-11-19 Post score: 0 Original comments Comment by gvdhoorn on 2017-11-19: The plugin for class 'rviz_teleop_commander/TeleopPanel' failed to load There are quite some inconsistencies in the various files you show: rviz_telop_commander/Teleop vs rviz_teleop_commander/TeleopPanel is one. I would suggest you check all of them. Comment by tengfei on 2017-11-19: Thanks gvdhoorn. I have changed all telop to teleop, but it still doesn't work. I checked several times, but still have no idea where the error is. Comment by tengfei on 2017-11-20: gvdhoorn, I rebuilt the whole example from the beginning and now it works. Maybe I still didn't find the inconsistencies in the first version. Thanks! Answer: I have rebuilt the example and now it works. 
I guess I still didn't find the inconsistencies in the first version. Originally posted by tengfei with karma: 88 on 2017-11-20 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29403, "tags": "ros, rviz, library, qt" }
How to analyze the amortized running time of indexed linked list operations using potential method?
Question: I have implemented an indexed linked list that runs (under mild assumptions) all single-element operations in $\mathcal{O}(\sqrt{n})$ time. The description is here and Java implementation is here. It's clear to me that the running time is linear in $n$ in the worst case, but I would love to see something smarter than that. The idea is to maintain a list of fingers. Each finger $f$ has two fields: (1) $f.node$ pointing to a linked list node, and (2) $f.index$ being the actual index of the node pointed to by $f.node$. Now, the finger list has size $\lceil \sqrt{n} \rceil$, and it's sorted by finger indices. Given an element index $i$, we can access it as follows: Apply C++ lower_bound to find the finger $f$ closest to $i$, and "rewind" $f.node$ to point to the $i$th node. (Set $f.index \leftarrow i$ also.) This applies to the get operation and runs in logarithmic time. Assuming that the fingers are distributed more or less evenly, rewinding will run in $\mathcal{O}(n / \sqrt{n}) = \mathcal{O}(\sqrt{n})$ time. For the insert/removeAt operations, both do (1) a finger list lookup in logarithmic time, (2) a finger node rewind in $\mathcal{O}(\sqrt{n})$, and (3) an update of the finger indices, which also runs in $\mathcal{O}(\sqrt{n})$. Answer: Suppose we are appending elements to the tail of the list, and suppose the rightmost finger has index $f$. Consider the distance between finger $f$ and the second rightmost finger $f - 1$. This is the largest gap, since the distance between adjacent fingers grows as we scan the fingers from left to right, and it is given by $$ \begin{aligned} f^2 - (f-1)^2 &= f^2 - (f^2 - 2f + 1) \\ &= 2f - 1 \\ &\overset{1}{=} 2\Big\lceil \sqrt{n} \Big\rceil -1\\ &= \Theta(\sqrt{n}). \end{aligned} $$ We conclude that appending all the elements one by one keeps the finger indices (in an asymptotic sense) evenly distributed. (Above, equality (1) holds by the data structure invariant.)
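The bound above is easy to sanity-check numerically. The sketch below (not from the original post) assumes the invariant that finger $k$ sits at index $k^2$, so the largest gap is the one computed in the derivation:

```python
import math

def max_finger_gap(n):
    # Under the (assumed) invariant that finger k points at element
    # index k^2, the largest gap is between the two rightmost fingers:
    # f^2 - (f - 1)^2 = 2f - 1 with f = ceil(sqrt(n)).
    f = math.ceil(math.sqrt(n))
    return 2 * f - 1

# The gap divided by sqrt(n) stays between small constants,
# i.e. the gap is Theta(sqrt(n)).
for n in (10, 100, 10_000, 1_000_000):
    ratio = max_finger_gap(n) / math.sqrt(n)
    assert 1.0 <= ratio <= 3.0
```

The ratio tends to 2 as $n$ grows, matching $2\lceil\sqrt{n}\rceil - 1 \approx 2\sqrt{n}$.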
{ "domain": "cs.stackexchange", "id": 20148, "tags": "heuristics, linked-lists, succinct-data-structures" }
Plow a 2D polygonal area
Question: I have a problem that is similar to this that I am trying to solve: "Given a randomly-shaped field, what is the best (fastest I guess) way to plow it? Every part of the field must be plowed, plowing outside the area is not a problem, and turning around is slower than going straight." Basically, I'm in the situation of a farmer that wants to plow as fast as possible a weirdly-shaped field with no obstacle. As far as I can tell, this is a covering problem for a simply-connected 2D space, but with some special restrictions. It seems like it should be a common enough problem, but I can't seem to be able to find anything. I guess most fields have a shape similar enough to a rectangle and thus it is not an issue. Is there any algorithm or numerical method that allows solving this problem? Thanks. Answer: Even for a near rectangular field, plowing along the long sides should be more efficient than along the short ones, but the solution isn't trivial I think. When the field slightly deviates from being perfectly rectangular, naive plowing will leave you with a triangular patch that will require a lot of turning to finish. We can start by observing that all furrows need to be parallel, both in practice and from an optimization perspective. Then a mathematical description of the problem would be that you have a set of evenly spaced parallel lines in the plane and you want to fit (i.e. rotate and translate) the polygon so that the number of intersections between it and the lines is minimized. Note that there are infinitely many infinitely long lines in this description; think of drawing the polygon on lined paper. I don't see a quick archetypal problem for this myself, but I think you could try brute-forcing it by rotating the polygon over the lines and computing the line-polygon intersections at each turn.
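The brute-force idea in the last paragraph can be sketched in Python (a sketch under assumptions, not a definitive solver: the helper names are hypothetical, only rotation is searched, and crossings are counted against lines offset by half the spacing so that no edge lies exactly on a line):

```python
import math

def furrow_crossings(poly, angle, spacing):
    # Rotate the polygon, then count boundary crossings with the
    # horizontal lines y = (k + 0.5) * spacing. Each furrow segment
    # contributes two crossings, so fewer crossings means fewer turns.
    c, s = math.cos(angle), math.sin(angle)
    ys = [px * s + py * c for px, py in poly]
    total = 0
    n = len(ys)
    for i in range(n):
        lo, hi = sorted((ys[i], ys[(i + 1) % n]))
        k_min = math.ceil(lo / spacing - 0.5)   # first line above lo
        k_max = math.floor(hi / spacing - 0.5)  # last line below hi
        total += max(0, k_max - k_min + 1)
    return total

def best_direction(poly, spacing, steps=180):
    # Brute-force search over plow directions in [0, pi).
    return min(
        (furrow_crossings(poly, a, spacing), a)
        for a in (math.pi * i / steps for i in range(steps))
    )

field = [(0, 0), (10, 0), (10, 1), (0, 1)]  # a long thin field
crossings, angle = best_direction(field, spacing=0.5)
# Plowing along the long side gives 2 furrows -> 4 boundary crossings.
```

For this rectangle the search returns angle 0 (furrows parallel to the long side), as the answer predicts; adding translation to the search is a straightforward extension.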
{ "domain": "cs.stackexchange", "id": 7802, "tags": "algorithms, optimization, computational-geometry" }
Skeleton for a command-line tool to scan domains for vulnerabilities
Question: I'm working on a new command line application in GO and was hoping I could get feedback on my design pattern, or suggestions on a better one to use. I'm still new to GO, only having used it for a handful of projects, and would value any suggestions to take better advantage of the "GO" way of doing things. The intent of the application will be to either scan a single domain, or a list of domains and then output results in either a normal format (depending on flags) containing information about vulnerable domains and suggestions, or a file only showing the domains in the initial input that are vulnerable. main.go package main import ( "flag" "fmt" "./libprojectstart" "os" ) func Banner() { fmt.Printf("%s%sprojectstart - SSL Mass Renegotiation Tester\n%s", libprojectstart.OKBLUE, libprojectstart.BOLD, libprojectstart.RESET) } func ParseCmdLine() (state *libprojectstart.State) { s := libprojectstart.InitState() flag.StringVar(&s.Domain, "d", "", "Domain to scan") flag.StringVar(&s.DomainList, "dL", "", "List of domains to scan") flag.StringVar(&s.OutputNormal, "oN", "", "File to write normal output to") flag.StringVar(&s.OutputDomains, "oD", "", "File to write successful domains to") flag.IntVar(&s.Threads, "t", 20, "Number of threads (Default: 20)") flag.BoolVar(&s.NoColour, "--no-colour", true, "Don't Use colors in output") flag.BoolVar(&s.Silent, "--silent", false, "Output successful scans only") flag.BoolVar(&s.Verbose, "v", false, "Verbose output") flag.BoolVar(&s.Usage, "h", false, "Display this message") flag.Parse() return &s } func main() { state := ParseCmdLine() if state.Silent != true { Banner() } if libprojectstart.IsUnchanged(*state) || state.Usage { Banner() flag.PrintDefaults() os.Exit(1) } if validated, error := libprojectstart.ValidateState(state); validated != true { fmt.Printf("%s[!] 
%s%s", libprojectstart.FAIL, libprojectstart.RESET, error) flag.PrintDefaults() os.Exit(1) } } state.go package libprojectstart // Contains state read in from the command line type State struct { Domain string // Domain to check for DomainList string // File location for a list of domains OutputNormal string // File to output in normal format OutputDomains string // File to output domains only to Verbose bool // Verbose prints, incl. Debug information Threads int // Number of threads to use NoColour bool // Strip colour from output Silent bool // Output domains only Usage bool // Print usage information } func InitState() (s State) { return State { "", "", "", "", false, 20, false, false, false } } func ValidateState(s *State) (result bool, error string) { if s.Domain == "" && s.DomainList == "" { return false, "You must specify either a domain or list of domains to test" } return true, "" } func IsUnchanged(s State) bool { return s == InitState() } colour.go package libprojectstart var ( HEADER = "\033[95m" OKBLUE = "\033[94m" OKGREEN = "\033[92m" WARNING = "\033[93m" FAIL = "\033[91m" BOLD = "\033[1m" UNDERLINE = "\033[4m" RESET = "\033[0m" ) The intent of this application is that the command line options, as well as the feedback being printed will grow significantly over time. I've aimed to demonstrate some of that error checking in ValidateState() as that function will grow with the project, as will the printing of colours. Open to any and all suggestions for improvements! Answer: Validation logic When checking whether the program should print the usage message or not, I find it an unusual approach to say "is the state (of initial parameter values) unchanged?" It would be more natural to check facts more directly. For example in this program the only input you really require is one or more domain names. Then that should be the determining factor for displaying the usage message: "did the user specify a domain name?" Direct, straightforward, to the point. 
Naming The current names are not very intuitive: State is state of what? It's the command line parameters. Something like CmdParams would be better. Banner is a noun, so it sounds like an object, not something that takes some action. PrintBanner would be better. Names starting with capital letters are exported (~ public API) in Go. Many if not all names in the posted code could be private, so their names should start with lowercase. Instead of InitX, when creating a new instance of X, the name NewX is more common. Performance Every time you evaluate s == InitState(), InitState() creates a new instance to perform the comparison. It would be enough to have just one such instance as a frame of reference, created as a var at top-level scope, no need to create a new instance every time. Style Many of the functions specify a name for the returned value, but don't actually use it in the function body. So those optional names can be dropped. Instead of this: return State { "", "", "", "", false, 20, false, false, false } You could take advantage of the default zero-values of strings and booleans, and write it more simply like this: return State{Threads: 20} In my experience, negative boolean variables such as NoColour are often confusing in practice. Inverting the meaning and using !Colour tends to work better. Use short boolean conditions Instead of if state.Silent != true, you can write shorter as if !state.Silent. Banner printed twice When the script is called without parameters, the banner will be printed twice. Swap these two statements to avoid that. if state.Silent != true { Banner() } if libprojectstart.IsUnchanged(*state) || state.Usage { Banner() flag.PrintDefaults() os.Exit(1) }
{ "domain": "codereview.stackexchange", "id": 30680, "tags": "beginner, console, go" }
Not able to install ROS on ubuntu 14.04.3
Question: I tried following the regular procedure but got stuck at: sudo apt-get install xserver-xorg-dev-lts-utopic mesa-common-dev-lts-utopic libxatracker-dev-lts-utopic libopenvg1-mesa-dev-lts-utopic libgles2-mesa-dev-lts-utopic libgles1-mesa-dev-lts-utopic libgl1-mesa-dev-lts-utopic libgbm-dev-lts-utopic libegl1-mesa-dev-lts-utopic The error I am getting is: The following packages have unmet dependencies: libegl1-mesa-dev-lts-utopic : Depends: libegl1-mesa-lts-utopic (= 10.3.2-0ubuntu1~trusty2) but it is not going to be installed Depends: libegl1-mesa-drivers-lts-utopic (= 10.3.2-0ubuntu1~trusty2) but it is not going to be installed libgbm-dev-lts-utopic : Depends: libgbm1-lts-utopic (= 10.3.2-0ubuntu1~trusty2) but it is not going to be installed libgl1-mesa-dev-lts-utopic : Depends: libgl1-mesa-glx-lts-utopic (= 10.3.2-0ubuntu1~trusty2) but it is not going to be installed libgles1-mesa-dev-lts-utopic : Depends: libgles1-mesa-lts-utopic (= 10.3.2-0ubuntu1~trusty2) but it is not going to be installed libgles2-mesa-dev-lts-utopic : Depends: libgles2-mesa-lts-utopic (= 10.3.2-0ubuntu1~trusty2) but it is not going to be installed libqt5feedback5 : Depends: libqt5multimedia5 (>= 5.0.2) but it is not going to be installed libqt5quick5 : Depends: libqt5gui5 (>= 5.2.0) but it is not going to be installed E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. Please Help!! Originally posted by niksguru on ROS Answers with karma: 1 on 2015-11-27 Post score: 0 Answer: Your question needs a bit more explanation. Did you experience dependency issues while installing ROS? The regular procedure is to skip sudo apt-get install xserver-xorg-dev-lts-utopic mesa-common-dev-lts-utopic libxatracker-dev-lts-utopic libopenvg1-mesa-dev-lts-utopic libgles2-mesa-dev-lts-utopic libgles1-mesa-dev-lts-utopic libgl1-mesa-dev-lts-utopic libgbm-dev-lts-utopic libegl1-mesa-dev-lts-utopic if you do not have dependency issues during the ROS installation. 
There is a clear warning on the installation page which says "If you are using Ubuntu Trusty 14.04.2 and experience dependency issues during the ROS installation, you may have to install some additional system dependencies. Do not install these packages if you are using 14.04, it will destroy your X server " After sudo apt-get update you can proceed to sudo apt-get install ros-xxx-desktop-full (replace xxx with indigo or jade depending on what distro you want to install) and if you have any dependency issues during this stage, you can go back to fixing the dependencies. Originally posted by Willson Amalraj with karma: 206 on 2015-11-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by tfoote on 2016-05-08: Make sure to read this section closely: http://wiki.ros.org/indigo/Installation/Ubuntu#Installation-1
{ "domain": "robotics.stackexchange", "id": 23100, "tags": "ros" }
How to compute the number of centroids for K-means clustering algorithm given minimal distance?
Question: I need to cluster my points into an unknown number of clusters, given the minimal Euclidean distance R between any two clusters. Any two clusters that are closer than this minimal distance should be merged and treated as one. I could implement a loop starting from two clusters and going up until I observe a pair of clusters that are closer to each other than my minimal distance. The upper bound of the loop is the number of points we need to cluster. Are there any well-known algorithms and approaches to estimate the approximate number of centroids from the set of points and the required minimal distance between centroids? I am currently using FAISS under Python, but with the right idea I could also implement it in C myself. Answer: Yes, the silhouette method (which is implemented in sklearn as silhouette_score) is commonly used to assess the quality of clusters produced by any clustering algorithm (including $k$-means or any hierarchical clustering algorithm). Roughly, you can compute the silhouette value for different $k$, then you would pick the $k$ with the highest silhouette value.
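To make the silhouette idea concrete, here is a minimal pure-Python sketch of the standard silhouette computation (sklearn.metrics.silhouette_score implements the same definition; the toy points and labelings below are illustrative assumptions, not from the question):

```python
import math

def silhouette(points, labels):
    # Mean silhouette value: for each point, a = mean distance to its
    # own cluster, b = mean distance to the nearest other cluster,
    # s = (b - a) / max(a, b). Singleton clusters score 0.
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    total = 0.0
    for p, l in zip(points, labels):
        own = clusters[l]
        if len(own) == 1:
            continue
        a = sum(math.dist(p, q) for q in own if q is not p) / (len(own) - 1)
        b = min(
            sum(math.dist(p, q) for q in members) / len(members)
            for m, members in clusters.items() if m != l
        )
        total += (b - a) / max(a, b)
    return total / len(points)

# Two well-separated blobs: the 2-cluster labeling scores higher than
# an arbitrary 3-cluster split, so k = 2 would be chosen.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
good = [0, 0, 0, 1, 1, 1]
bad = [0, 0, 1, 1, 2, 2]
assert silhouette(pts, good) > silhouette(pts, bad)
```

In practice you would run your clustering for each candidate $k$ (e.g. with sklearn's KMeans or a FAISS index), score each labeling this way, and keep the $k$ with the highest score.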
{ "domain": "ai.stackexchange", "id": 1219, "tags": "machine-learning, k-means, clustering" }
A violation of the Maximum Work Theorem?
Question: Consider the PV diagram I've drawn at the end of this post. Consider the processes drawn thereon: (1-2-3) is an isobaric step followed by an isochoric step, while (1-3) is an adiabatic expansion (I couldn't figure out how to flip the arrow, but NB that I am not interested in discussing a cycle here and (1-3) is meant to be interpreted as going from 1 to 3). My question is as follows. I recently learned (Callen Chapter 4.5) about the maximum work theorem, which gives the maximum work that can be extracted from a given system in taking it between two specified states, while there is access to some fixed "reversible heat source" (RHS) and fixed "reversible work source" (RWS) (on which the work is done). It was proven -- and I agree with the proof naturally -- that the maximum such work obtains from a reversible $\Delta S_{tot} = 0$ process with respect to the overall, composite system of three subsystems. Now in this specific case, fix the RHS and RWS. The adiabatic process (1-3) is clearly reversible, since it is quasistatic and since there is no heat transfer. Thus, by the theorem, it must supply the maximum work in going between the two states. But the process (1-2-3) evidently supplies more work (area under the PV diagram), so where am I going wrong? Obviously, one of the hypotheses entering the maximum work theorem must be violated, but I can't imagine which one. Edit: I believe the correct resolution is to note that, in fact, both lead to precisely the same work done on the RWS (as they must by the MWT); this fact is obscured in this particular case because the direct work done by the system differs between the two paths. 
That is, the difference in area between the two paths corresponds to different values of $W$ in the process, but we note again that $W \neq -W_{RWS}$ in general; instead, we have $Q + W = -Q_{RHS} - W_{RWS}$ and so $W_a \neq W_b$ does not at all contradict the possibility that $W_b + Q_b + Q_{RHS,b} = -W_{RWS,b} = - W_{RWS,a} = W_a + Q_a + Q_{RHS,a}.$ Answer: No, you are just misreading what the theorem says. Along the isentropic path 1->3 you have no entropy or heat transfer, and thus the maximum work is whatever you get. Along path 1->2->3 there would be a need for infinitely many heat reservoirs at different temperatures, from which some heat is absorbed. That heat is being converted to work, and that is why the limit of maximum work along path 1->2->3 is a little greater than that of path 1->3 done isentropically.
{ "domain": "physics.stackexchange", "id": 95677, "tags": "thermodynamics, work, entropy, heat-engine" }
Applying Ampere's law in situation with non-physical $E$-field?
Question: On an exam I was given this question: Suppose an electric field in a region with no current $(\textbf{J}=\textbf{0})$ is given by $\textbf{E}(t,x,y,z) = \sin(\omega t)\hat{\textbf{k}}$ and $C$ is the circle of radius $a$ in the $xy$-plane oriented counterclockwise when looking down the $z$-axis. Determine the value of $$ \oint_C \textbf{B}\cdot d\mathbf{\ell} $$ as a function of time. The intent of the question is obviously to use Ampere's law to find that $$ \oint_C \textbf{B}\cdot d\mathbf{\ell} = \epsilon_0\mu_0\iint_S \frac{\partial\textbf{E}}{\partial t}\cdot d\textbf{S} = \epsilon_0\mu_0\,\omega\cos(\omega t)\iint_S dS = \epsilon_0\mu_0\pi a^2\omega\cos(\omega t), $$ where $S$ is the disc of radius $a$ in the $xy$-plane centered at the origin. However, if this electric field were to satisfy Maxwell's equations, we should have that the magnetic field is constant, since $$ \frac{\partial\textbf{B}}{\partial t} = -\nabla\times \textbf{E} = 0, $$ and so the circulation should be constant with respect to time as well. I brought this concern up afterward with the instructor, but they weren't able to provide a satisfying answer to the dilemma. Is there a way to make sense of this, or should I not bother? Answer: In a region with no current or charge density, the electric and magnetic fields obey the wave equation $$\left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \right) \mathbf E(\mathbf r, t) = 0$$ This result follows directly from Maxwell's equations (you are most likely familiar with this derivation; if not, it is straightforward and can be found e.g. here). Obviously the electric field you were given does not satisfy the wave equation, which means that it is not part of a viable solution to Maxwell's equations. 
In other words, there is no magnetic field $\mathbf B(\mathbf r,t)$ such that $\mathbf E(\mathbf r,t)$ and $\mathbf B(\mathbf r,t)$ together satisfy all four of Maxwell's equations at the same time, which means that the question (which refers to the magnetic flux) has no meaningful solution. This is a mistake (or possibly a deliberate omission) on the part of your instructor, likely stemming from a desire to simplify the calculation for you. A Possible Fix What s/he could have written is something like $$\mathbf E(\mathbf r,t) = E_0 \sin(\omega t) \cos(\frac{\omega}{c} y) \hat{\mathbf k}$$ This electric field does satisfy the wave equation, and corresponds (with the overall sign fixed by Faraday's law) to a magnetic field given by $$\mathbf B(\mathbf r,t) = -\frac{E_0}{c} \cos(\omega t) \sin(\frac{\omega}{c}y) \hat{\mathbf i}$$ The trouble with something like this is that it makes that integral rather messy. However, our calculation can be simplified by assuming that the loop size is very small compared to the wavelength of this standing wave, i.e. $\frac{\omega a}{c} \ll 1$. 
Taylor expanding to second order gives us $$\mathbf E(\mathbf r,t) = E_0 \sin(\omega t)\left(1 - \frac{\omega^2 y^2}{2c^2}\right) \hat{\mathbf k}$$ and $$\mathbf B(\mathbf r,t) = -\frac{E_0}{c} \cos(\omega t) \left(\frac{\omega y}{c}\right)\hat{\mathbf i}$$ Performing the flux integral yields $$\iint \frac{\partial \mathbf E}{\partial t} \cdot d\mathbf S = \omega E_0\cos(\omega t) \int_0^{2\pi} \int_0^a r \left(1 - \frac{\omega^2}{2c^2}r^2 \sin^2(\theta)\right) dr d\theta$$ $$ = \omega E_0\cos(\omega t)\left( \pi a^2 - \frac{\omega^2}{2c^2}\pi \frac{a^4}{4}\right)= \pi a^2 \omega E_0 \cos(\omega t)\left( 1 - \frac{1}{8}\left[\frac{\omega a}{c}\right]^2\right) $$
{ "domain": "physics.stackexchange", "id": 62667, "tags": "electromagnetism, magnetic-fields, electric-fields, maxwell-equations" }
Understanding follow_joint_trajectory setup
Question: I can't figure out how MoveIt interfaces with the hardware controller. Any help or references welcome. I tried the following: I created Python action servers that handle action messages sent by MoveIt for each move group in a new package named my_robot_controller. These are dummy SimpleActionServers I modified from the actionlib tutorial. I named them my_move_group_controller.py. They print to terminal using rospy.loginfo. I edited the controllers.yaml file in my_robot_moveit/config using the same my_move_group names. Following the MoveIt! tutorial on creating the controller launch file, I created a file my_robot_description_moveit_controller_manager.launch inside the my_robot_moveit package (created using the setup assistant). According to the SRDF file in that same package/config, the robot's name is my_robot_description, as that's where the URDF lives. I created a my_robot_moveit.launch file to start rviz and moveit (like demo.launch but enabling reading from joint_states). I set fake_execution to false. rosrun my_robot_moveit my_robot_description_moveit_controller_manager.launch. This command runs and terminates without error. rosrun my_robot_moveit my_robot_moveit.launch. This launches rviz and I can visualize the robot and plan trajectories, but it seems like moveit can't find the controllers: Relevant terminal output [ INFO] [1499120779.494268914]: Starting scene monitor [ INFO] [1499120779.498303784]: Listening to '/move_group/monitored_planning_scene' [ INFO] [1499120779.499209320]: waitForService: Service [/get_planning_scene] has not been advertised, waiting... [ INFO] [1499120781.117743014]: Waiting for arm6dof_controller/follow_joint_trajectory to come up [ INFO] [1499120784.515498443]: Failed to call service get_planning_scene, have you launched move_group? 
at /tmp/binarydeb/ros-kinetic-moveit-ros-planning-0.9.8/planning_scene_monitor/src/planning_scene_monitor.cpp:486 [ INFO] [1499120784.889705013]: No active joints or end effectors found for group ''. Make sure you have defined an end effector in your SRDF file and that kinematics.yaml is loaded in this node's namespace. [ INFO] [1499120784.890079419]: No active joints or end effectors found for group 'arm6dof'. Make sure you have defined an end effector in your SRDF file and that kinematics.yaml is loaded in this node's namespace. [ INFO] [1499120784.892088492]: No active joints or end effectors found for group 'arm6dof'. Make sure you have defined an end effector in your SRDF file and that kinematics.yaml is loaded in this node's namespace. [ INFO] [1499120784.892679679]: Constructing new MoveGroup connection for group 'arm6dof' in namespace '' QObject::connect: Cannot queue arguments of type 'QVector<int>' (Make sure 'QVector<int>' is registered using qRegisterMetaType().) QObject::connect: Cannot queue arguments of type 'QVector<int>' (Make sure 'QVector<int>' is registered using qRegisterMetaType().) 
[ INFO] [1499120786.117870928]: Waiting for arm6dof_controller/follow_joint_trajectory to come up [ERROR] [1499120791.118052218]: Action client not connected: arm6dof_controller/follow_joint_trajectory [ INFO] [1499120796.145848142]: Waiting for base_gripper_controller/follow_joint_trajectory to come up [ INFO] [1499120801.146001310]: Waiting for base_gripper_controller/follow_joint_trajectory to come up [ERROR] [1499120806.146111888]: Action client not connected: base_gripper_controller/follow_joint_trajectory [ INFO] [1499120811.254410130]: Waiting for ee_gripper_controller/follow_joint_trajectory to come up [ERROR] [1499120814.906571120]: Unable to connect to move_group action server 'move_group' within allotted time (30s) [ INFO] [1499120814.906992117]: Constructing new MoveGroup connection for group 'arm6dof' in namespace '' [ INFO] [1499120816.254549199]: Waiting for ee_gripper_controller/follow_joint_trajectory to come up [ERROR] [1499120821.254703646]: Action client not connected: ee_gripper_controller/follow_joint_trajectory [ INFO] [1499120821.358256875]: Returned 0 controllers in list [ INFO] [1499120821.371200102]: Trajectory execution is managing controllers I'm probably missing the point, but the tutorials don't explain how to set this up properly and I can't figure out how to resolve these errors. I'm using the dynamixel_motor package to control my motors (through topics) so any tips on how to do that properly would also be welcome. Originally posted by achille on ROS Answers with karma: 464 on 2017-07-03 Post score: 1 Answer: I could figure it out using the example files posted in this discussion. It is specific to the dynamixel motors, but gives a minimal example of a working configuration. Originally posted by achille with karma: 464 on 2017-07-04 This answer was ACCEPTED on the original site Post score: 0
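For reference, the shape of controllers.yaml that the MoveIt simple controller manager expects is roughly the following (the controller name must match the namespace your action server actually advertises, and the joint names below are placeholders that must match your URDF):

```yaml
controller_list:
  - name: arm6dof_controller           # MoveIt waits on <name>/<action_ns>
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    default: true
    joints:
      - joint_1                        # placeholder joint names
      - joint_2
      - joint_3
```

The "Waiting for arm6dof_controller/follow_joint_trajectory to come up" message means MoveIt is probing for an action server on exactly that concatenated namespace, so a dummy SimpleActionServer must be created with that full name.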
{ "domain": "robotics.stackexchange", "id": 28276, "tags": "ros, microcontroller, moveit, follow-joint-trajectory" }
Lidar and Sonar do not update the costmap
Question: Hi, I have Lidar and Sonar readings and they clearly work as they see the obstacles: But as you can see, the local costmap does not show the obstacle and the robot just hits the object in front of it. Here is my common_costmap_params:

obstacle_range: 2.5
raytrace_range: 3.0
robot_radius: 0.07
#footprint: [[0.3, 0.15], [0.3,-0.15], [-0.3, -0.15], [-0.3, 0.15]]
#footprint_padding: 0.1

static_layer:
  enabled: true
  map_topic: map
  subscribe_to_updates: true

inflation_layer:
  enabled: true
  inflation_radius: 1.75 # 1.45
  cost_scaling_factor: 2.58

obstacle_layer:
  enabled: true
  obstacle_range: 2.5
  raytrace_range: 3.0
  inflation_radius: 0.2
  track_unknown_space: true
  observation_sources: laser
  laser: {data_type: LaserScan, sensor_frame: laser_link, clearing: true, marking: true, topic: /scan}

range_sensor_layer:
  clear_threshold: 0.46
  mark_threshold: 0.98
  no_readings_timeout: 2.0
  topics: ["/sonar"] # ["sonar1","sonar2"] for multiple

And my local_costmap_params:

local_costmap:
  global_frame: map
  robot_base_frame: base_footprint
  update_frequency: 5.0
  publish_frequency: 2.0
  transform_tolerance: 5 # 0.25 seconds of latency, if greater than this, planner will stop
  static_map: true
  rolling_window: true # Follow robot while navigating
  width: 4.0
  height: 4.0
  origin_x: 0 #-1.5
  origin_y: 0 #-1.5
  resolution: 0.03
  plugins:
    - {name: static_layer, type: "costmap_2d::StaticLayer"}
    - {name: range_sensor_layer, type: "range_sensor_layer::RangeSensorLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
    - {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}

Originally posted by stevemartin on ROS Answers with karma: 361 on 2019-01-29 Post score: 0 Answer: Hi, your local map shouldn't be static; I'd remove the static layer plugin from the local_costmap_params and see what happens. Originally posted by Syrine with karma: 57 on 2019-07-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32366, "tags": "ros-melodic" }
#P- vs PP-Completeness
Question: Suppose $A$ is any #P-complete problem. Now, $A$ is modified to obtain a decision problem $A'$ not by asking whether there is a solution but whether at least half of the potential solutions are actually true solutions. Question: Is $A'$ PP-complete? This works if $A$ is #Sat, since MajSat is PP-complete. My approach so far: If $A$ is #P-complete, then there is a reduction from #Sat. So it should be possible to adapt this reduction to obtain a reduction from MajSat to $A'$. However, I ran into the problem that the reduction from #Sat to $A$ often requires a Turing (or Cook) reduction, and it seemed far from clear how to obtain a many-one reduction from it. Answer: Not necessarily. Imagine the following Fake-#SAT problem: possible solutions are extended by one bit, and all vectors with this bit set are solutions. That is, the number of satisfying assignments for the new problem is $2^n+f$, where $f$ is the number of satisfying assignments for the original #SAT problem ($0\le f\le 2^n$). The problem remains #P-complete; however, the corresponding "majority" problem (formulated as $\ge$ vs $<$) always has the answer "yes".
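A tiny brute-force illustration of the construction (the 3-variable formula is an arbitrary example): over the $2^{n+1}$ extended assignments, the solution count is $2^n+f$, which always clears the majority threshold of $2^n$.

```python
from itertools import product

# An arbitrary toy formula over n = 3 variables (any formula works here)
def phi(x):
    return (x[0] or x[1]) and not x[2]

n = 3
f = sum(1 for x in product([0, 1], repeat=n) if phi(x))

# Fake-#SAT: add one bit b; every assignment with b = 1 counts as a solution
fake = sum(1 for x in product([0, 1], repeat=n) for b in (0, 1) if b == 1 or phi(x))

# count is 2^n + f, so the majority question over 2^(n+1) assignments is always "yes"
print(f, fake)
```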
{ "domain": "cstheory.stackexchange", "id": 3214, "tags": "cc.complexity-theory, complexity-classes, counting-complexity, reductions, probabilistic-complexity" }
What is the difference between A2C and Q-Learning, and when to use one over the other?
Question: I'm trying to get an accurate answer about the difference between A2C and Q-Learning. And when can we use each of them? Answer: The major difference between A2C and Q-Learning is what the algorithms learn. In A2C, and policy gradient algorithms in general, the policy is directly parameterised, i.e. we have $\pi_\theta (a|s)$. The parameters $\theta$ are typically optimised to maximise an objective that is a proxy for the expected returns $\mathbb{E}_{\pi_\theta}\left[\sum_{i=0}^\infty \gamma^i r(s_i, a_i)\right]$. Usually this proxy is the value function $v_{\pi_\theta}(s)$; the policy gradient theorem shows us that the derivative of this function wrt the policy parameters is $\nabla_\theta v_{\pi_\theta}(s) = \mathbb{E}_{\pi_\theta}\left[G_t \nabla_\theta \log \pi_\theta (A | S)\right]$. In Q-Learning, we instead learn the Q-function: $Q(s, a) = \mathbb{E}_\pi \left[r(s, a) + \gamma v_\pi(s') \right]$. In a tabular MDP we can maintain exact estimates for every state-action pair, but if the state space is continuous/too large then we typically rely on function approximation, and so the Q-function would be parameterised in a similar way to the policy above. Now, Q-Learning (in the tabular case) can be shown to converge to the Q-function under the optimal policy, i.e. the value of taking an action in a given state and thereafter following the optimal policy. We can then define a deterministic policy using this by $\pi(s) = \arg\max_{a\in \mathcal{A}} Q_{\pi^*}(s, a)$ where $\pi^*$ is the optimal policy and $\mathcal{A}$ is the action space of the MDP. Note that in Actor-Critic methods we also learn a value function, similar to how the Q-function is learned in Q-learning. Another major difference between the two algorithms is that Q-Learning is an off-policy algorithm; that is, the policy that the values are learnt for does not necessarily correspond to the policy that collects the data. 
A2C is an on-policy algorithm, so the data must correspond to data collected by the policy. This has disadvantages in data-hungry deep learning setups, but A3C can help overcome this by using many workers that share the parameters of the current policy to obtain many on-policy trajectories, so the amount of data being used would be similar to that of an off-policy algorithm. Finally, I want to point out that policy gradient algorithms don't necessarily have to be on-policy -- Soft Actor Critic and Deterministic Policy Gradient are both examples of off-policy algorithms.
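To make the off-policy point concrete, here is a minimal tabular Q-learning loop on a made-up 2-state, 2-action MDP (the dynamics and all constants are invented for illustration). Note that the TD target uses $\max_b Q(s', b)$ regardless of which action the behaviour policy actually takes next:

```python
import random

random.seed(0)
alpha, gamma, eps = 0.1, 0.9, 0.1            # step size, discount, exploration
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(s, a):
    # toy dynamics: only action 1 in state 1 pays off
    if s == 1 and a == 1:
        return 0, 1.0                         # (next state, reward)
    return (s + a) % 2, 0.0

s = 0
for _ in range(5000):
    # epsilon-greedy behaviour policy
    if random.random() < eps:
        a = random.choice((0, 1))
    else:
        a = max((0, 1), key=lambda b: Q[(s, b)])
    s2, r = step(s, a)
    # off-policy TD target: max over next actions, not the action taken next
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in (0, 1)) - Q[(s, a)])
    s = s2
```

After training, the greedy policy derived from Q takes action 1 in state 1, even though the data was collected by the exploratory behaviour policy.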
{ "domain": "ai.stackexchange", "id": 3530, "tags": "reinforcement-learning, comparison, q-learning, advantage-actor-critic" }
What are the advantages or disadvantages of Owl?
Question: Owl is the numerical library for OCaml: https://github.com/ryanrhymes/owl It is supposed to be an equivalent of NumPy and also to have capabilities of TensorFlow. Any insights on why it should be used or why it shouldn't? Answer: In a nutshell, it is promising but it falls short on multiple points. The research below explains where it is better/worse than NumPy and TensorFlow: http://pligor.tumblr.com/post/166198475026/owl-an-ocaml-numerical-library-research-by
{ "domain": "datascience.stackexchange", "id": 2080, "tags": "tensorflow, numerical" }
Why is a relativistic quantum theory of a finite number of particles impossible?
Question: In Dyson's book Advanced Quantum Mechanics, he said "These two examples (the discovery of antimatter and meson) are special cases of the general principle, which is the basic success of the relativistic quantum theory, that A Relativistic Quantum Theory of a Finite Number of Particles is Impossible." However, when we calculate the Feynman diagram of a particle collision process, the number of particles should be finite. So my question is why a relativistic quantum theory of a finite number of particles is impossible. What does it mean explicitly? Does it mean that we need to consider an infinite number of harmonic oscillators when we quantize a free field? Answer: Because a pair of a particle and an anti-particle can be created from the vacuum, infinitely many such pairs can be created from the vacuum. So when you consider relativistic quantum theory, it's impossible to only consider a finite number of particles. When you calculate a Feynman diagram, you are actually only doing perturbation theory to some order. If you want to calculate the exact result by Feynman diagrams, then you need to consider an infinite number of Feynman diagrams, because you can keep adding loops to the diagrams. And that means that you need to consider an infinite number of particles. When quantizing a field, we do have to consider an infinite number of harmonic oscillators. What I mean by this is that if you look at the expression for the quantum field, say, a scalar field, you will find that it's a superposition of infinitely many creation and annihilation operators.
{ "domain": "physics.stackexchange", "id": 24561, "tags": "quantum-mechanics, quantum-field-theory" }
How do porcupines keep from pricking each other while mating?
Question: How do porcupines keep from pricking each other while mating? It seems like they would constantly be scratching each other. Answer: The female stood with the tail held sharply to one side, and the quills on the back lying very flat. The male stood on his hind legs, while the front legs grasped the sides of the female. There was no repetition of the act. The male's urethra is 115-120 mm long, and his penis is 75 mm, so he doesn't need to be as close to the female as one might think. The retractor muscles are attached to the lower portion of the pelvis, which is likely well below the level at which the male's quills end. Apparently, a dozen or so porcupines will den up in close quarters for the winter, so one can probably guess they have a good sense of whether their spines are hurting another, but that's only conjecture. I suppose the take-home message is, nature will find a way. Struthers, P.H. (1928) Breeding Habits of the Canadian Porcupine (Erethizon dorsatum). Journal of Mammalogy, 9(4), 300-308. http://www.jstor.org/pss/1374084 Mirand, E.A., Shadle, A.R. (1953). Gross anatomy of the male reproductive system of the porcupine. Journal of Mammalogy, 34(2), 210-220. http://www.jstor.org/pss/1375622
{ "domain": "biology.stackexchange", "id": 168, "tags": "zoology, ethology, sex, antipredator-adaptation, rodents" }
Value of features is zero in Decision tree Classifier
Question: I used CountVectorizer and TfidfVectorizer separately to vectorize text, which is 100K reviews, and passed the vector data to a decision tree classifier. Upon using the feature_importances_ attribute of DecisionTreeClassifier, the feature importance values for all my features are just 0.0. But with the same dataset, I'm able to find feature importances for naive Bayes and logistic regression by using feature_prob for naive Bayes and the coef_ attribute for logistic regression. Other things I tried: 1. I tried changing ngram_range in CountVectorizer 2. I tried limiting/not limiting the min_df and max_features parameters passed to CountVectorizer But couldn't make it work. Any help is appreciated. Code:

positivereviews = df[df.Score == 1]
negativereviews = df[df.Score == 0]

countvect = CountVectorizer(stop_words='english')
positivebow = countvect.fit(positivereviews.CleanedText[0:100000])
pos_xtrain = positivebow.transform(positivereviews.CleanedText[100000:200000])
pos_y = positivereviews.Score[100000:200000]

clf = DecisionTreeClassifier(max_depth=3, min_samples_split=2)
clf.fit(pos_xtrain, pos_y)

def show_most_informative_features(vectorizer, clf, n=20):
    feature_names = vectorizer.get_feature_names()
    coefs_with_fns = sorted(zip(clf.feature_importances_, feature_names))
    top = coefs_with_fns[:-(n + 1):-1]
    for (coef_1, fn_1) in top:
        print("\t%.4f\t%-15s" % (coef_1, fn_1))

Counter(clf.feature_importances_)
Counter({0.0: 63514})

1) positivereviews is the dataframe which has features CleanedText and Score, where CleanedText is text which is preprocessed and Score is always 1 in this set, since I split positive and negative reviews using Score 2) I also searched online for this problem, but couldn't find any instance of this issue EDIT1: Can it be that, since we are dealing with a categorical feature in this problem, I am overfitting badly and hence do not see any value for any features? Thanks in advance!! 
Answer: I'm pretty sure that your feature importances are 0 because your classifier isn't doing any classifying. From the code, it looks like you're training only on positive examples, and giving the fit function a label vector that consists entirely of 1s. The classifier has no information; the decision rule is just "when given an example, predict 1". There's no way to measure which features are most strongly associated with the label because they're all equally associated - there's only one label, so there's no way to associate the features with anything else. Is there a reason you're not using the negative examples? It seems like you have the dataframe available. When you ran naive bayes and logistic regression, did you also give those models only the positive examples?
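The zero importances follow directly from how tree importances are computed: importance is accumulated impurity decrease, and a node containing only one class already has zero Gini impurity, so no split can decrease anything. A tiny illustration (pure Python, no sklearn needed):

```python
# Gini impurity of a label multiset: 1 - sum_c p_c^2
def gini(labels):
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini([1, 1, 1, 1]))   # 0.0 -- a single-class node: nothing to split on
print(gini([1, 1, 0, 0]))   # 0.5 -- mixed classes leave impurity to reduce
```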
{ "domain": "datascience.stackexchange", "id": 4359, "tags": "machine-learning, decision-trees, feature-extraction" }
How to train ML algorithm with multiple values in target data?
Question: I am new to data science and machine learning and looking for some help. I am trying to train a machine with following data set: Here, L3 is the target variable. As it can be seen that the target variable contains a set of possible values, I am wondering how I can train the machine with such a data. In my test data, I might want to predict any of the one or more values given one value. Example test case : 20 c aa 'aa' is given value and I want to predict other values based on the values 20, c and 'aa'. Is it possible to do what I am trying to achieve? Thank you. Answer: Because your test cases may have values from L3 as input, this strikes me as more of a recommender system; consider encoding each possible value in L3 as a new binary column (thinking of them as products that each row/user has purchased / expressed interest in), and look into different kinds of recommender systems to see what seems most appropriate for your data. (I'm not too familiar with recommenders. AIUI, these are all clustering algorithms, but with different notions of distance and focusing on either the products or the users. sklearn does not appear to have any built-ins specifically for recommendations, but you could use it for an underlying model. There are also some other Python packages specifically for recommenders, but I can't recommend any [see what I did there?].)
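One way to start on the encoding the answer suggests is to turn each possible L3 value into its own binary column (a multi-hot encoding). The column names and values below are invented stand-ins for the table in the question:

```python
# Invented stand-ins for the question's rows: L1/L2 are ordinary features,
# L3 holds a *set* of target values per row
rows = [
    {"L1": 20, "L2": "c", "L3": {"aa", "bb"}},
    {"L1": 35, "L2": "d", "L3": {"bb"}},
]

vocab = sorted(set().union(*(r["L3"] for r in rows)))   # ['aa', 'bb']
encoded = [
    {"L1": r["L1"], "L2": r["L2"],
     **{"L3_" + v: int(v in r["L3"]) for v in vocab}}
    for r in rows
]
print(encoded[0])   # {'L1': 20, 'L2': 'c', 'L3_aa': 1, 'L3_bb': 1}
```

The binary columns can then be fed to whatever recommender or multi-label model is chosen.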
{ "domain": "datascience.stackexchange", "id": 5104, "tags": "machine-learning, python, data" }
Maximum angular velocity to stop in one rotation with a known torque
Question: I have an object I can rotate with a given torque. I would like to stop applying torque once I've reached a defined maximum rotational speed. The maximum rotational speed should be defined so that applying maximum torque will stop the rotation of the object within one rotation. If I know my torque and moment of inertia, how can I find the maximum rotational velocity that will allow me to stop the object in one rotation? Time is whatever is needed. I've tried finding the angular acceleration required to stop the object, but that leaves me with the time variable. In all the equations I've tried, I'm left with a time variable as well as the maximum angular velocity. Answer: To stop the object you must do work. For a constant torque perpendicular to the moment arm, the work it does is equal to $\tau\cdot\Delta\theta$, and you want $\Delta\theta\leq2\pi$. It should be obvious that the greatest angular velocity that a torque $\tau$ can stop is the one that takes the full $2\pi$ radians to stop. In a rotating system, the rotational kinetic energy is given by $E_r=\frac12I\omega^2$ (a direct analogue of $E_K=\frac12mv^2$). Now consider work-energy equivalence.
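Following the hint through: setting the stopping work equal to the rotational kinetic energy, $\tau \cdot 2\pi = \frac12 I \omega_{\max}^2$, gives $\omega_{\max} = \sqrt{4\pi\tau/I}$. A quick check with made-up numbers:

```python
import math

def omega_max(tau, inertia):
    """Largest angular velocity a constant torque tau can stop in one rotation."""
    # tau * 2*pi = 0.5 * I * omega^2  =>  omega = sqrt(4*pi*tau / I)
    return math.sqrt(4 * math.pi * tau / inertia)

tau, I = 2.0, 0.5                  # hypothetical N*m and kg*m^2
w = omega_max(tau, I)
print(round(w, 3))                 # about 7.09 rad/s
```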
{ "domain": "physics.stackexchange", "id": 8072, "tags": "homework-and-exercises, rotational-dynamics, torque, rotation" }
Is Gauss law applicable for infinite surfaces?
Question: In the statement of Gauss's law, if we happen to consider a surface at infinity, what should we expect to get, keeping in mind that the electric field is 0 at infinity? Answer: Let's consider a few cases here. There are two rough methods we can use here: either we consider the entire space directly, or we can take some limit process. In the first case, this is simple enough. The boundary of $\mathbb{R}^3$ is the empty set $\varnothing$. Then we have \begin{equation} \int_{\varnothing} \vec{E} \cdot d\vec{S} = \frac{1}{\varepsilon_0}\int_{\mathbb{R}^3} \rho(x) d^3x \end{equation} The first term is zero, the second term is the total charge $Q / \varepsilon_0$. Therefore, $Q = 0$. Why is that? The Gauss theorem is a specific form of Stokes' theorem, which is defined specifically for functions with compact support, which means that in our case, $\vec{E}$ should be zero outside of a finite region of space. This cannot be true if $Q \neq 0$, because, for instance, by the shell theorem, any non-zero static charge distribution will generate an EM field like the Coulomb field outside of a volume containing it, so the field will never be zero outside of a compact region. Therefore, for any electric field where it makes sense to use Gauss's law on all of space, the total charge must be zero. The second method is to simply consider sequences of volumes, i.e. we have a sequence $(\Omega_n)$ such that $\lim \Omega_n = \mathbb{R}^3$. What happens here will depend on exactly what sequence of shapes and charge distribution you're dealing with, using the usual theorems from analysis, but let's consider three cases here. First, let's consider a point charge at the center. Then we have \begin{eqnarray} \int_{S_{0, R}} \vec{E} \cdot d\vec{S} &=& \iint E_r R^2 \sin \theta d\theta d\varphi\\ &=& 4 \pi \frac{q}{4\pi \varepsilon_0 R^2} R^2\\ &=& \frac{q}{\varepsilon_0} \end{eqnarray} This is independent of the radius, therefore taking the limit is fairly trivial here. 
Now let's consider the case of a charge distribution without compact support. Let's pick for instance a (standard) normal distribution \begin{eqnarray} \rho(r, \theta, \varphi) &=& \sqrt{2} 2 Q \frac{e^{-r^2}}{(2\pi)^{3/2}} \end{eqnarray} The total charge is $Q$, and the charge in a ball is \begin{eqnarray} Q_R &=& \int_S \int_{0}^R \sqrt{2} 2 Q \frac{e^{-r^2}}{(2\pi)^{3/2}} r^2 \sin \theta dr d\theta d\varphi\\ &=& \frac{8\sqrt{2} \pi Q}{(2\pi)^{3/2}} \int_{0}^R r^2 e^{-r^2} dr\\ &=& \frac{4\sqrt{2} Q}{(2\pi)^{1/2}} \frac{1}{4} (\sqrt{\pi} \textrm{erf}(R) - 2 e^{-R^2} R) \\ &=& \frac{Q}{(\pi)^{1/2}} (\sqrt{\pi} \textrm{erf}(R) - 2 e^{-R^2} R) \\ \end{eqnarray} $\mathrm{erf}$ converges to $1$, the second term to $0$, so that our charge converges to $Q$. By symmetry, we get \begin{eqnarray} \iint_{S_{0, R}} \vec{E} \cdot d\vec{S} &=& \iint_{S_{0, R}} E_r R^2 \sin \theta d\theta d\varphi\\ &=& 4\pi E_r R^2 \end{eqnarray} Therefore, \begin{eqnarray} E_r &=& \frac{Q}{4\pi \varepsilon_0 r^2 (\pi)^{1/2}} (\sqrt{\pi} \textrm{erf}(r) - 2 e^{-r^2} r) \\ \end{eqnarray} Everything converges nicely here too; we have $E_r$ asymptotically equivalent to Coulomb's law. As a last example, take a constant charge distribution, \begin{eqnarray} \rho(r, \theta, \varphi) &=& \rho \end{eqnarray} By symmetry, the electric field should not point in any particular direction, and therefore should be zero, but for any volume, \begin{eqnarray} \int_\Omega \rho(x) d^3 x = V(\Omega) \rho \end{eqnarray} This is the old problem of the Newtonian cosmology for a universe with constant density. It has been attempted in many different ways, with a variety of limiting processes and approximations, but there are no good solutions for this (this is related to the fact that such a universe cannot have the appropriate boundary condition $\phi = 0$ at infinity). 
Therefore, the verdict is that yes, you can use Gauss's theorem for infinite volumes, but be very careful, because it will only work for very specific cases. In particular, to have a well-defined Poisson equation (ie for the potential to be $\Delta \phi = \rho$), we should be able to define $\phi(\infty) = 0$.
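For the Gaussian example above, the enclosed-charge formula can be checked numerically against a direct integration of $\rho \cdot 4\pi r^2$; all of this is just arithmetic on the answer's formulas, and the radii below are arbitrary:

```python
import math

Q = 1.0   # total charge (arbitrary normalisation)

# Closed form derived above: Q_R = (Q/sqrt(pi)) * (sqrt(pi)*erf(R) - 2*R*exp(-R^2))
def q_enclosed(R):
    return Q / math.sqrt(math.pi) * (math.sqrt(math.pi) * math.erf(R) - 2 * R * math.exp(-R * R))

# Direct midpoint-rule integral of rho(r) * 4*pi*r^2 from 0 to R
def q_numeric(R, n=20000):
    rho0 = math.sqrt(2) * 2 * Q / (2 * math.pi) ** 1.5
    dr = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += rho0 * math.exp(-r * r) * 4 * math.pi * r * r * dr
    return total
```

The two agree, and the enclosed charge converges to the total charge $Q$ as the radius grows, as the answer states.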
{ "domain": "physics.stackexchange", "id": 65815, "tags": "electrostatics, electric-fields, gauss-law" }
Why is the coupling of the matter field with the inflaton field neglected at the inflationary epoch?
Question: The simplest model of inflation involves a hypothetical, yet undiscovered, single scalar field $\phi$ called the inflaton. The action of this scalar field minimally coupled to gravity is given by $$S[\phi,g]=\int d^4x\sqrt{-g}\left[\frac{1}{2}\mathcal{R}+\frac{1}{2}g_{\mu\nu}\partial^\mu\phi \partial^\nu\phi-V(\phi)-\mathcal{L}_{\phi-\text{matter}}\right]$$ $$=S_{EH}+S_{\phi}+S_{\phi-\text{matter}}$$ Why is it that in the discussion of inflation, one often neglects the coupling of the matter field with $\phi$, i.e., $S_{\phi-\text{matter}}$?
{ "domain": "physics.stackexchange", "id": 40971, "tags": "quantum-field-theory, particle-physics, cosmology, cosmological-inflation, beyond-the-standard-model" }
Showing an item in a shopping cart
Question: I'm updating an old project from jQuery 1.4 to jQuery 1.7. Can this be simplified, thus reducing the amount of code, and perhaps in such reduction, have it improved? Having the following code:

var cartItemDel = '<td class="del"><img src="components/themes/default/img/icons/cross-small.png" width="16" height="16" alt="X"><input type="hidden" class="lid" name="lineID'+ id +'" value="'+ id +'"><span class="hidden token">'+token+'</span></td>',
    cartItemName = '<td><input class="CartItemId" type="hidden" name="id_'+ id +'" value="'+ id +'"><span class="CartItemName">'+ name +'</span></td>',
    cartItemPrice = '<td><input class="CartItemRef" type="hidden" name="ref_'+ ref +'" value="'+ ref +'"><span class="CartItemPrice"><span class="lineSum">'+ price +'</span>&euro;</span></td>';

$(".cart-table tbody")
    .append('<tr>'+cartItemDel+cartItemName+cartItemPrice+'</tr>');

I've updated it to:

/*
 * ADD NEW LINE - First TD
 */
var $newLineIcon = $("<img/>", {
    src : "components/themes/default/img/icons/cross-small.png",
    width : "16",
    height : "16",
    alt : "X"
});
var $newLineLID = $("<input/>", {
    type : "hidden",
    class : "lid",
    name : "lineID"+ lid,
    value : lid
});
var $newLineToken = $("<span/>", {
    class : "hidden token",
    text : token
});
var $newLine_firstTD = $("<td/>", { class : "del"})
    .append($newLineIcon)
    .append($newLineLID)
    .append($newLineToken);

/*
 * ADD NEW LINE - Second TD
 */
var $newLinePID = $("<input/>", {
    type : "hidden",
    class : "CartItemId",
    name : "id_"+ id,
    value : id
});
var $newLinePNAME = $("<span/>", {
    class : "CartItemName",
    text : name
});
var $newLine_secondTD = $("<td/>")
    .append($newLinePID)
    .append($newLinePNAME);

/*
 * ADD NEW LINE - Third TD
 */
var $newLinePREF = $("<input/>", {
    type : "hidden",
    class : "CartItemRef",
    name : "ref_"+ ref,
    value : ref
});
var $newLinePPRICE = $("<span/>", {
    class : "CartItemPrice",
    html : "&euro;"
})
    .prepend($("<span/>", {class : "lineSum", text : price}));
var $newLine_thirdTD = $("<td/>")
    .append($newLinePREF)
    .append($newLinePPRICE);

$("<tr/>")
    .append($newLine_firstTD)
    .append($newLine_secondTD)
    .append($newLine_thirdTD)
    .appendTo($(".cart-table tbody"));

Answer: +1 to this code looking fine as is (with caveats and one issue). Issue: You should always quote class as it is a reserved word for future use in JavaScript. Don't take this as me advocating you use it or saying it is ready for production1, but I'm using JsRender in production to ease much of the pain you are seeing/imagining here. I think the central issue here is that this type of Javascript invokes a hard dependency on the structure of the page and so we look at it with a bit of queasiness that it rightly deserves. If your visual designer comes along and decides that the shopping cart needs to be in a dropdown list where you can drag (or swipe) items out of or into, now you have to go and make a whole bunch of changes all over the place. Unfortunately I don't think it is reasonable in a modern website to expect both that the html has no dependencies on the Javascript ("page works without javascript") and that the Javascript has no dependencies on the design of the html ("javascript works equally well on a totally different design that happens to have a couple of the same named elements"). I personally am comfortable ignoring the former to get much closer to an ideal on the latter. That said, my working environment is such that I have very little control over the redesigns that happen all too often in the products I work on. At a minimum I would strive to make sure this code stayed together inside its own function at or near the top of your javascript (or in its own global/namespaced function in the html file itself). This code has very little to do with your logic and a lot to do with your site design. Therefore it doesn't belong sitting alongside your page logic. On minification (for this code in particular) I am less concerned. 
A solid minifier is going to combine your var statements, rename the variables and possibly inline them and then remove whitespace. If you were concerned about this, you could inline everything yourself (and you would lose a little on the readability), but the gzip of this even uncompressed is only about 480 bytes (closure compiler adds some stuff before it estimates) so at most you are going to gain maybe 100 bytes (closure advanced gets 140 but the result will not work; I can manually do it and get 145 off; none of these numbers will mean anything in terms of page performance for your site). FWIW, your original code was only 332 bytes zipped (smaller than my manual attempt) and compressed (manually) down to 297. 
{ "domain": "codereview.stackexchange", "id": 2024, "tags": "javascript, jquery, dom, e-commerce" }
Partial order, total order, and version order in transaction histories
Question: In Section 3.1.2 "Transaction Histories" of the PhD thesis by Atul Adya [1]: A history $H$ over a set of transactions consists of two parts: (1) a partial order of events $E$ that reflects the operations (e.g., read, write, abort, commit) of those transactions, and (2) a version order, $\ll$, that is a total order on committed object versions. The author gives a comment on "the partial order of events" (Page 36): "For convenience, we will present history events in our examples as a total order that is consistent with the partial order. Furthermore, wherever possible in our examples, we make this total order be consistent with the real-time ordering of events in a database system." Question 1: Why can we present the partial order of events as a total order of them? Because a partial order may imply multiple total orders consistent with it, which one to choose? Does the choice matter for later definitions and theorems? Some comments on "the total version order" (Page 36) are as follows: [Added (01-10-2015)] "The version order in a history $H$ can be different from the order of write or commit events in $H$. This flexibility is needed to allow certain optimistic and multi-version implementations where it is possible that a version $x_i$ is placed before version $x_j$ in the version order even though $x_i$ is installed in the committed state after $x_j$ is installed." And, "The system chooses the version order for each object." Question 2: What does it mean for the database system to choose the version order? And how? Is this implementation-dependent? [1] Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions by Atul Adya, 1999 Answer: for what it concerns Question 1: there is a theoretical result from 1930 [1] that states that any partial order $S$ can be extended to a total order $S'$ which contains $S$. 
This result is known as the "Order Extension Principle"; the proof uses the Axiom of Choice (I am not aware of an alternative proof that avoids it). The paper is written in French, but you can find a proof at https://www.proofwiki.org/wiki/Order-Extension_Principle. When Adya states that he chooses the real-time order of events, the best guess is that he assumes an implementation of the database; in this case, every history $H$ corresponds to an execution of the database, where events are totally ordered; such an order is the real-time order. More specifically, by requiring that the partial order $<$ in a history $H$ be extended by the real-time (total) order, Adya imposes that whenever $e_1 < e_2$ in $H$, the instant of time $t_1$ at which the event $e_1$ takes place (in the execution of the database that leads to $H$) is less than the instant of time $t_2$ at which $e_2$ takes place. Choosing the real-time order of events to extend the partial order of a history $H$ is needed if one wants to prove that a concrete implementation satisfies an abstract specification (i.e. a set of properties that a history has to satisfy). Turning to Question 2: it seems to me that Adya wants to stress that he allows the implementation of a database to access a version of an object which precedes (in the version order) the latest version installed. In practice, choosing an earlier version of an object could be either the result of the database not being able to access the latest version (e.g. the lost-update anomaly in causal consistency), or the user explicitly requesting an older version of an object (e.g. accessing an older revision in an SVN repository). Hope this helps, Andrea Cerone. [1] Edward Szpilrajn, "Sur l'extension de l'ordre partiel", Fundamenta Mathematicae, 1930
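A follow-up remark: since the histories in Adya's examples are finite, the Order Extension Principle does not actually require the Axiom of Choice there; a topological sort constructively produces a total order consistent with the partial order. Here is a minimal sketch in Python using Kahn's algorithm (the event names and ordering pairs below are made up for illustration, not taken from the thesis):

```python
from collections import defaultdict, deque

def extend_to_total_order(events, pairs):
    """Extend a finite partial order, given as pairs (a, b) meaning a < b,
    to a consistent total order via Kahn's topological sort."""
    succ = defaultdict(list)
    indeg = {e: 0 for e in events}
    for a, b in pairs:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(e for e in events if indeg[e] == 0)
    total = []
    while queue:
        e = queue.popleft()
        total.append(e)
        for f in succ[e]:
            indeg[f] -= 1
            if indeg[f] == 0:
                queue.append(f)
    if len(total) != len(events):
        raise ValueError("relation contains a cycle; not a partial order")
    return total

# Hypothetical history fragment: w1(x) precedes c1 and r2(x); r2(x) precedes c2.
# c1 and r2(x) are incomparable, so the extension picks one of several valid orders.
history = extend_to_total_order(
    ["w1(x)", "c1", "r2(x)", "c2"],
    [("w1(x)", "c1"), ("w1(x)", "r2(x)"), ("r2(x)", "c2")],
)
```

Any of the valid linearizations is a legitimate presentation of the history; which one you pick does not affect the partial order itself, which is why Adya can freely display a total order that happens to match real time.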
{ "domain": "cs.stackexchange", "id": 3935, "tags": "terminology, database-theory" }
steam engines and reciprocating engines
Question: I am going to make a couple of statements: 1) The four-stroke reciprocating engine has one power stroke in every two complete revolutions of the crankshaft. 2) The two-stroke reciprocating engine has one power stroke in every full revolution of the crankshaft. 3) A single-acting steam engine has one working stroke in each revolution. 4) A double-acting steam engine has two working strokes in each revolution. I have a problem with the last two statements; specifically, I do not understand how the piston returns in a single-acting steam engine after completing its forward stroke. And the double-acting steam engine has two power strokes in each revolution! Then it must have more power than a four-stroke reciprocating engine, yet we rarely see double-acting steam engines in everyday use. Why is that? Someone please help me out of this confusion. Answer: In the case of a single-acting steam engine, the crankshaft is connected to a flywheel, which urges the crank to keep turning after the power stroke is finished. This returns the piston to its starting position. Once the engine is running at normal speed, the flywheel keeps it going and all is well. Here is how a double-acting steam piston works. The piston sits in the center of a cylinder. Each end of the cylinder has a steam inlet, and those inlets are connected to a valve, operated by the crankshaft, which feeds steam into one or the other end of the cylinder while opening the unfed end to the air. The piston is connected to the crankshaft by a rod that emerges through one end of the cylinder through a tight seal. So, for example, steam enters the left end of the cylinder, pushing the piston to the right and letting spent steam escape from the right end. The crankshaft then turns and the valve reverses itself, sending steam into the right end of the cylinder and venting it from the left end.
This was a very common arrangement in steam locomotives, where lots of power was required in a compact space.
{ "domain": "physics.stackexchange", "id": 61071, "tags": "thermodynamics, heat-engine" }
Sun-like star in our Milky Way?
Question: Is there any well-known star, or one we can see with the naked eye, in our Milky Way that is at least 90% Sun-like (mass, radius, spectrum, luminosity)? The important thing is it must not be a binary. It's okay if it's not visible to the naked eye, because our Sun's absolute magnitude is only about +4.8! Answer: There's a whole Wikipedia page about it: https://en.wikipedia.org/wiki/Solar_analog If you don't want Alpha Cen A, then 18 Sco might be the one.
{ "domain": "astronomy.stackexchange", "id": 1458, "tags": "star, milky-way, star-systems" }