Why is a vector defined as one straight line?
Question: I'm just starting my studies, and for many things I don't know at the outset whether the motivation comes from the intuition of concrete thinking or the logic of abstract thinking; this is the case with the definition of vectors. I understood that vectors are composed of a direction and a magnitude, which represent the difference between two points, and mathematically speaking it makes intuitive sense for that to be a line. But why are vectors defined exactly as a line when, applied to physical concepts, what happens between those two points is not necessarily a line? Answer: A vector is not a line. Nor is it "something that has magnitude and direction". A vector is instead a type of mathematical object that can be used to represent things with magnitude and direction, or even just direction. That is, vectors come first; "magnitude and direction" comes second. In fact, in the most general sense of vectors, vectors do not have "magnitude" in an absolute sense, and even their sense of "direction" is fairly weak. What a vector "is" is an element of a vector space. That may seem really unhelpful, but vector spaces are designed specifically to make it easy and useful to work with quantities where we need to encode magnitude and direction information in a single object. A slightly less abstract way to look at it: vector spaces generalize the idea of a list of numbers: $$\langle v_1, v_2, \cdots, v_n\rangle$$ where we can add those numbers in an elementwise fashion, as well as multiply them all by a single specific number (called a "scalar"). This lets us encode magnitude and direction in a neat way, because if we interpret that list of numbers as a list of Cartesian coordinates, we can say the direction represented is that of an arrow from the origin to the point given by the numbers we have so taken as coordinates (i.e.
the point $(v_1, v_2, \cdots, v_n)$, where we have used different bracketing styles to distinguish the point from the vector), and the distance from the origin to the point is the magnitude. That's what you're seeing when you see "lines" or "arrows". But the vector itself is just this list of numbers, not the line or arrow, which is only a pictorial representation of how the magnitude and direction information are encoded. The relation between these two ideas is that when you work through the essential properties that "adding and multiplying in elementwise fashion" entail, you can prove that the only spaces with all those properties are "equivalent to" such a space of number lists, possibly infinitely long (though things get tricky in that case). (This is described in linear algebra texts as "there is one vector space of each dimension up to isomorphism", where "isomorphism" here means the equivalence in question.) So when vectors get used in physics, we are not using lines; we are using a purely abstract object that is in some cases usefully interpreted as describing a line, but in other circumstances may not be. And we may draw it using a little arrow or similar symbol when we want to convey or visualize the encoded magnitude and direction information.
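The "list of numbers with elementwise addition and scalar multiplication" picture can be made concrete with a small sketch (mine, not from the original answer):

```python
import math

def add(u, v):
    """Elementwise addition of two coordinate lists."""
    return [a + b for a, b in zip(u, v)]

def scale(c, v):
    """Multiply every component by the scalar c."""
    return [c * a for a in v]

def magnitude(v):
    """Distance from the origin to the point with these coordinates."""
    return math.sqrt(sum(a * a for a in v))

u = [3.0, 4.0]
print(add(u, [1.0, -1.0]))   # [4.0, 3.0]
print(scale(2.0, u))         # [6.0, 8.0]
print(magnitude(u))          # 5.0
```

Nothing here is a "line": the arrow from the origin to the point (3, 4) is just one way to draw what these three operations encode.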
{ "domain": "physics.stackexchange", "id": 79557, "tags": "vectors, geometry" }
Complexity of finding a perfect matching in directed graphs
Question: To the best of my knowledge, finding a perfect matching in an undirected graph is NP-hard. But is this also the case for directed and possibly cyclic graphs? I guess there are two possibilities to define whether two edges are incident to each other, which would also result in two possibilities to define what is allowed in a perfect matching: Allowed in the perfect matching are only edges that do not share a common start or end point, i.e. antiparallel edges are not allowed in the perfect matching. Antiparallel edges are allowed in the perfect matching, i.e. selecting edge $v_i\rightarrow v_j$ does not exclude selecting $v_j\rightarrow v_i$. From what I've read so far, option 1 is the typical generalization for directed graphs, but is finding a perfect matching NP-hard for both cases? Answer: You can find the maximum matching in all three cases in polynomial time. It follows that it's possible to check whether there exists a perfect matching and, if so, to find one. Finding a maximum matching in an undirected graph can be done in polynomial time, so it is not NP-hard (unless P = NP). See https://en.wikipedia.org/wiki/Matching_(graph_theory)#Algorithms_and_computational_complexity. If you define the problem according to option 1, then finding a maximum matching in a directed graph $G$ is equivalent to finding a maximum matching in the corresponding undirected graph (where you just ignore the edge directions) -- and thus can also be done in polynomial time. If you use option 2, then the following algorithm can be used to find a maximum matching in polynomial time: Given a directed graph $G=(V,E)$, construct a weighted undirected graph $G'=(V,E')$ with edge set defined as follows: If $G$ has an edge $v \to w$ but not its reverse $w \to v$, add $(v,w)$ to $E'$ with weight 1. If $G$ has an edge $v \to w$ and also its reverse $w \to v$, add $(v,w)$ to $E'$ with weight 2. Find the maximum-weight matching in $G'$, i.e., the matching whose weight is maximized. 
Now turn this into a set of edges in the original directed graph. If this process selected an edge $(v,w)$ and $G$ contains both $v\to w$ and $w\to v$, add both to the matching in $G$. If it selected an edge $(v,w)$ and $G$ contains only one of $v \to w$, $w\to v$ but not the other, add the one you can. This runs in polynomial time and finds the maximum matching in $G$. A simpler way to check whether $G$ has a perfect matching (i.e., one that covers all the vertices) is to construct the undirected graph $G'$ (you can ignore the weights) and check whether $G'$ has a perfect matching.
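The option 2 reduction above can be sketched in Python. For brevity, the maximum-weight matching step here is an exhaustive search over edge subsets; in practice you would use a polynomial-time method such as Edmonds' blossom algorithm (e.g. networkx.max_weight_matching). Vertex labels are assumed sortable; function names are mine:

```python
from itertools import combinations

def max_weight_matching_bruteforce(edges):
    """Exhaustive max-weight matching over (v, w, weight) triples.
    Stands in for the polynomial-time blossom algorithm."""
    best, best_w = [], 0
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            used, ok = set(), True
            for (v, w, _) in subset:
                if v in used or w in used:   # edges must be vertex-disjoint
                    ok = False
                    break
                used.update((v, w))
            if ok:
                total = sum(wt for (_, _, wt) in subset)
                if total > best_w:
                    best_w, best = total, list(subset)
    return best

def max_matching_option2(directed_edges):
    """Option 2 reduction: weight 2 if both directions exist, else 1,
    then map the undirected matching back to directed edges."""
    es = set(directed_edges)
    undirected = {}
    for (v, w) in es:
        key = (min(v, w), max(v, w))
        undirected[key] = 2 if (w, v) in es else 1
    matching = max_weight_matching_bruteforce(
        [(v, w, wt) for (v, w), wt in undirected.items()])
    result = []
    for (v, w, _) in matching:
        if (v, w) in es:
            result.append((v, w))
        if (w, v) in es:
            result.append((w, v))
    return result
```

For example, with edges 1→2, 2→1, 2→3, the antiparallel pair (weight 2) beats the single edge 2→3, so both directed edges between 1 and 2 end up in the matching.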
{ "domain": "cs.stackexchange", "id": 7188, "tags": "complexity-theory, np-complete, np-hard" }
How does the human body metabolize gasoline?
Question: A Chinese man has been drinking gasoline to relieve his pain for 25 years. How does the human body metabolize gasoline? Also, what are the side effects of gasoline? Answer: Just to add an answer to the 'how does the body process gasoline?' portion of the question, the liver and kidney would be doing most of the work of removing the stuff from the system once it was absorbed in the digestive tract. The liver does most of the processing of toxins and their removal from the blood, and would tend to do the most work in removing hydrocarbons from gasoline. It has enzymes that oxygenate toxins (add oxygens), which makes them more soluble in the blood, usually less toxic, and also removable from the body by the liver or the kidney. In the case of gasoline, the compounds are likely to be just as toxic. The kidney works by actively filtering out excess water and mostly water-soluble wastes like oxygenated hydrocarbons. Kidney damage occurs when gasoline is ingested in excess. This may be due to the toxicity of the gasoline, but also to the compounds the liver is producing. Gasoline will tend to be fat-soluble too, so it will leave the system more slowly, even after being processed by the liver (benzene and toluene in gasoline will tend to become phenols, which are quite toxic and fat-soluble). http://www.ncbi.nlm.nih.gov/pubmed/3379185
{ "domain": "biology.stackexchange", "id": 254, "tags": "human-biology, metabolism, toxicology" }
Diluting 4 % NaOCl to get a 1 % solution
Question: Assume that one has a 1 litre bottle of liquid bleach containing $4~\%$ sodium hypochlorite. If I wish to dilute it down to $1~\%$ sodium hypochlorite (to use as disinfectant), would adding 3 litres of water to it be sufficient? This should yield 4 litres of $1~\%$ sodium hypochlorite. And that should be the same as taking $\pu{250ml}$ of the liquid bleach and then adding $\pu{750ml}$ of water, to yield 1 litre of $1~\%$ sodium hypochlorite. Now the liquid bleach also contains a stabiliser to reduce the rate of decomposition of $\ce{NaOCl}$. How will diluting the solution affect the action of the stabiliser? Answer: There is a simple dilution formula: $\mathrm{C_iV_i=C_fV_f}$, which is valid for all concentration units. The "i" and "f" indicate initial and final concentrations or volumes. Using this formula, you can see that you don't need to prepare buckets full of 1% bleach. All you need to decide is your final volume and final concentration. As you highlighted, the issue is that you cannot prepare a 1% solution and expect it to last for a day; it is better to prepare it on an as-needed basis. The stabilizer in bleach is sodium hydroxide itself. Upon dilution, of course, the base is also diluted and the pH is reduced. One can look at the distribution diagram of "bleach" as a function of pH (yes, this is from a Handbook of Surfactants; it is a 4000-page volume - books are still useful).
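The dilution formula can be turned into a two-line calculation (a sketch of mine; units are just percent and millilitres):

```python
def stock_volume_ml(c_initial, c_final, v_final_ml):
    """Dilution formula C_i * V_i = C_f * V_f, solved for V_i:
    volume of stock needed to make v_final_ml at c_final."""
    return c_final * v_final_ml / c_initial

# 1 L of 1% bleach from a 4% stock:
v_stock = stock_volume_ml(4.0, 1.0, 1000.0)   # 250.0 mL of 4% bleach
v_water = 1000.0 - v_stock                    # 750.0 mL of water
```

This confirms the 250 ml + 750 ml figure in the question, and the same function tells you how little stock you need for any smaller batch.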
{ "domain": "chemistry.stackexchange", "id": 13888, "tags": "aqueous-solution, decomposition" }
Schema.org microdata code check
Question: I updated my code based on the feedback from my previous question. There is one for the home page and one for the product pages: Home page code: <div itemscope itemtype="http://schema.org/TravelAgency"> <span itemprop="name">NAME OF TRAVEL AGENCY</span> <link itemprop="url" href="HOME PAGE URL"> <span itemprop="description">SHORT DESCRIPTION OF THE TRAVEL AGENCY</span> </div> Product page code: <body itemscope itemtype="http://schema.org/MedicalWebPage"> <link itemprop="author" href="http://schema.org/TravelAgency" /> <link itemprop="url" href="HOME PAGE URL"> <link itemprop="specialty" href="http://schema.org/PlasticSurgery" /> <meta itemprop="aspect" content="treatment"/> <span itemprop="name">TITLE OF PAGE</span> <span itemprop="description">SHORT DESCRIPTION OF PAGE CONTENT</span> <img itemprop="image" src="NAME OF IMAGE ON THIS PAGE.jpg" alt="ALT NAME OF IMAGE"/> <span itemprop="offers" itemscope itemtype="http://schema.org/Offer"> The cost of <span itemprop="itemOffered">NAME OF TREATMENT</span> <span itemprop="alternateName">ALTERNATE NAME OF TREATMENT</span> is <span itemprop="priceCurrency" content="EUR">€</span><span itemprop="price" content="600.00">600.00</span> </span> </body> Answer: Home page (TravelAgency) Looks correct. Note that you could (if you want) use this item on every page of your website, not only on the home page. Typically, you could always use this item for marking up the header of each page. By using its url property, you always link back to the same URL (i.e., your home page’s URL), so consumers can understand that it’s always the same TravelAgency entity. Product page (MedicalWebPage) Your use of the properties specialty, aspect, name, description, and image is correct. author property The value of the author property needs to be the URL of the author, not the URL of the Schema.org type. 
Schema.org expects an Organization/Person as value, so you could make this explicit by using: <span itemprop="author" itemscope itemtype="http://schema.org/TravelAgency"> <link itemprop="url" href="HOME PAGE URL" /> </span> If you don’t want to use so "much" markup, you could also simply link directly: <link itemprop="author" href="HOME PAGE URL" /> However, most sites have a link to the home page anyway (i.e., in the logo or the navigation), so you could reuse this link instead of duplicating it with link. url property The value of the url property would have to be the URL of this MedicalWebPage, not the URL of your homepage. It doesn’t hurt to have it, but it isn’t really needed in the first place (as it’s just the URL of the current document anyway). You could, if you want, combine this with a possible rel-canonical in your head. offers property Using the offers property on MedicalWebPage would mean that you are offering this web page (e.g., for sale), as its definition says (bold emphasis mine): An offer to provide this item […] This is probably not what you want to say, or is it? Offer itemOffered property The itemOffered property expects a Product as value. This is not required (using text, like you do, is allowed), but if you want to go the extra mile, make it a Product item: <span itemprop="itemOffered" itemscope itemtype="http://schema.org/Product"> <span itemprop="name">NAME OF TREATMENT</span> </span> span element can’t have a content attribute In HTML5/Microdata, only the meta element can have a content attribute, so it’s not allowed on span. Instead of <span itemprop="priceCurrency" content="EUR">€</span>, you might want to use: <meta itemprop="priceCurrency" content="EUR">€ And instead of <span itemprop="price" content="600.00">600.00</span>: <span itemprop="price">600.00</span>
{ "domain": "codereview.stackexchange", "id": 11743, "tags": "html5, microdata" }
Sensing when a network is available
Question: The code below just senses when the network is available and enables/disables a button on the UI. If I am registered to the NetworkAvailability event for the life of the program, will it cause a memory leak? I heard it might but didn't understand why. System.Net.NetworkInformation.NetworkChange.NetworkAvailabilityChanged += new System.Net.NetworkInformation.NetworkAvailabilityChangedEventHandler(NetworkChange_NetworkAvailabilityChanged); delegate void EnableCallback(); void NetworkChange_NetworkAvailabilityChanged(object sender, System.Net.NetworkInformation.NetworkAvailabilityEventArgs e) { EnableSync(); //throw new NotImplementedException(); } private void EnableSync() { if (this.btnSync.InvokeRequired) { EnableCallback methodCallback = new EnableCallback(EnableSync); this.Invoke(methodCallback, new object[] { }); } else { if (btnSync.Enabled == true) btnSync.Enabled = false; else btnSync.Enabled = true; } } Answer: Well, if it's for the life of the program as you say, then no. When the program ends, all memory will be released. Second, this all looks like one class. So root references will be removed when the class is GC'd and no memory leak will occur. Lastly, write if (btnSync.Enabled == true) btnSync.Enabled = false; else btnSync.Enabled = true; as btnSync.Enabled = !btnSync.Enabled; Much more concise.
{ "domain": "codereview.stackexchange", "id": 4752, "tags": "c#, memory-management, event-handling" }
Capabilities of Distributed Processing in ROS
Question: I have 2 questions about how distributed processing works: Is it possible to selectively choose which computer/single board computer is used for different packages? For example, can the OpenCV package be run on board 1 while packages related to SLAM are run on board 2? So, is it possible to tell ROS where each package should be run and just have a default board identified that is used to run all packages that aren't specifically identified? Would that just mean that one would start different processes on different boards - just wondering how this is done. For packages that are very computationally intense, is it possible to split the processing of a package like OpenCV to run on multiple boards? Originally posted by d7x on ROS Answers with karma: 53 on 2015-07-26 Post score: 0 Answer: I think it is important to understand that packages are nothing special: they are just a convenient way to distribute one or several nodes (and related files) that happen to have some kind of relationship (ie: operate on the same data, on the same concepts and / or same domain model). Nodes are nothing but ordinary Linux/OSX/(sometimes) Windows programs, where the authors have chosen to use the ROS middleware (ie: software that takes care of distributing the messages) to communicate with other programs that have been written that way. One of the specific advantages of using the ROS middleware is that nodes become agnostic to where they are running: within the same process (nodelets), on the same machine (nodes) or distributed over multiple machines connected through a TCP/IP network (still nodes). An immediate consequence of this fact is that how nodes are deployed (ie: where they are run and in what way) becomes a configuration-phase decision, not an implementation one. A more thorough explanation of this can be found on the wiki/Concepts page and its various subpages. 
Coming back to your questions: Is it possible to selectively choose which computer/single board computer is used for different packages? I hope it is clear that you are completely responsible for how you deploy your nodes. There is no automated system for this, so use your knowledge of / insight into your application's requirements and constraints, resource usage of the involved nodes and the hardware they need to have access to. So, is it possible to tell ROS where each package should be run [..]? Yes. You can set up launch files to launch nodes on different machines, identified by their hostnames (see wiki/roslaunch/XML/machine and wiki/Roslaunch tips for large projects - Machine tags and Environment Variables). [..] and [..] have a default board identified that is used to run all packages that aren't specifically identified? I haven't used it, but the default attribute seems to allow you to express just that: "Sets this machine as the default to assign nodes to". For packages that are very computationally intense, is it possible to split the processing of a package like OpenCV to run on multiple boards? I hope it is also clear now that 'splitting a package' is not really something that makes sense: the only thing you could possibly do is run the different nodes provided by a package on different cpus/machines. Obviously that is only possible if the package contains multiple nodes in the first place. But if it does, then it should be possible. You have to keep in mind though that messaging isn't free: it uses cpu time, memory and network bandwidth. In addition it introduces latency (time between sending and receiving). For things like image processing the cost might be such that it completely negates any advantage you may gain from distributing your computations. So, yes, it is possible, but it doesn't necessarily lead to better performance. It's always a trade-off. Edit: be sure to have a properly working network setup. See wiki/MultipleMachines for that. 
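The machine-tag mechanism referred to above can be sketched in a launch file; the hostnames, users, package and node names below are placeholders, not taken from the original post:

```xml
<launch>
  <!-- board 1 runs everything not explicitly assigned (default="true") -->
  <machine name="board1" address="board1.local" user="ros"
           env-loader="/opt/ros/melodic/env.sh" default="true" />
  <!-- board 2 is reserved for the SLAM node -->
  <machine name="board2" address="board2.local" user="ros"
           env-loader="/opt/ros/melodic/env.sh" />

  <!-- no machine attribute: lands on the default machine (board1) -->
  <node pkg="image_proc" type="image_proc" name="image_proc" />

  <!-- explicitly pinned to board2 -->
  <node machine="board2" pkg="gmapping" type="slam_gmapping" name="slam" />
</launch>
```

Note that roslaunch starts the remote nodes over SSH, which is why each machine entry needs a user and an env-loader script to set up the ROS environment on that host.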
Originally posted by gvdhoorn with karma: 86574 on 2015-07-26 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 22292, "tags": "ros" }
Change in Electric Flux on Rotating a Circular Ring
Question: A circular ring of radius r made of a non-conducting material is placed with its axis parallel to a uniform electric field. The ring is rotated about a diameter through 180 degrees. Does the flux increase, decrease, or not change? My opinion: A circular ring is not itself associated with any area (it encloses an area, but it itself has no contribution). So the flux in any case should be 0. However, on googling I found several answers dealing with area, all varying, and I failed to get a correct explanation. Is it wrong to assume the ring to have no thickness, i.e., no area? And please explain what the result of the question would have been. Answer: When dealing with a changing electric or magnetic field, problems usually refer to the area enclosed by a ring or a frame. It is OK to assume that the ring has no thickness. Does the flux increase, decrease or not change? Flux is defined as a dot product of the area and the field, here an electric field: $\Phi =\mathbf E \cdot \mathbf A$, or $\Phi = EA\cos\theta$, where $\theta$ is the angle between the electric field and the area normal. Since the angle $\theta$ changes in time (because the ring rotates), the flux changes periodically as $\cos\theta$ does: its magnitude decreases for $\theta \in \left(k\pi, \displaystyle \frac{\pi}{2} + k\pi\right)$ and increases for $\theta \in \left(\displaystyle\frac{\pi}{2} + k\pi, \pi + k\pi\right)$. So whether it increases or decreases depends on the phase of rotation.
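The formula $\Phi = EA\cos\theta$ can be checked with a few lines (a sketch of mine; note that the full 180-degree rotation about a diameter flips the area normal, so the flux changes sign while keeping its magnitude):

```python
import math

def flux(E, r, theta):
    """Electric flux through the disc bounded by a ring of radius r:
    Phi = E * A * cos(theta), with A = pi * r**2."""
    return E * math.pi * r**2 * math.cos(theta)

# rotating the ring 180 degrees about a diameter: theta goes 0 -> pi
before = flux(1.0, 1.0, 0.0)        # +pi
after = flux(1.0, 1.0, math.pi)     # -pi: sign flips, magnitude unchanged
```

In between, the flux passes through 0 at theta = pi/2, which is where its magnitude switches from decreasing to increasing.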
{ "domain": "physics.stackexchange", "id": 68371, "tags": "electrostatics, gauss-law" }
Harmonics on Hyperbolic space
Question: I would like to know if there exists an analogue for hyperbolic space of the so-called spherical harmonics, which play a major role in the construction of quantum states in a hydrogen atom. In other words, are there 'hyperbolic harmonics', and how trivial is it to obtain them? Answer: As I understood the question, you would like to find solutions $u$ of the hyperbolic Laplace equation $\Delta_{h} u=0$ that are harmonic homogeneous polynomials, such that every solution of the hyperbolic Laplace equation can be represented as a sum of these hyperbolic harmonic homogeneous polynomials. Every hyperbolic harmonic function can be represented as some combination of ordinary spherical harmonics; the analogues you are asking about are hyperbolic spherical harmonics, which are solutions of the hyperbolic Laplacian and also homogeneous polynomials. For further reading, try: Audrey Terras, Harmonic Analysis on Symmetric Spaces and Applications, vol. I and II; Peter Buser, Geometry and Spectra of Compact Riemann Surfaces; Isaac Chavel, Eigenvalues in Riemannian Geometry.
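For concreteness, here is the operator in question in one standard model (this formula is standard, though not stated in the original answer): in the upper half-plane model $\mathbb{H}^2 = \{(x, y) : y > 0\}$ with metric $ds^2 = (dx^2 + dy^2)/y^2$, the hyperbolic Laplacian is $$\Delta_h = y^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right),$$ and the 'hyperbolic harmonics' asked about are solutions of $\Delta_h u = 0$, or, for the closer quantum-mechanical analogy with spherical harmonics, eigenfunctions $\Delta_h u = \lambda u$.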
{ "domain": "physics.stackexchange", "id": 56029, "tags": "spherical-harmonics" }
What are the qubit capabilities of Microsoft Azure?
Question: I've always worked on IBM Cloud when I needed to deploy a quantum application to NISQ devices. However, though not a hardware expert, I was asked to explore Microsoft Azure, which I see offers access to the IonQ and Quantinuum platforms. I'm however struggling to understand their capabilities. For instance, here I can clearly see, for each device, how many qubits it has and what type of processor it is. I'm looking for something similar on the Azure cloud. Answer: As @Sam_QC mentioned, through Azure it is possible to access several external providers that Azure is working with. It looks like there is no single organized document with all the desired data inside, as there is for IBM, but I'll post here what I have found. Here (updated 09/06/2022) it is written that there are 2 available providers - IonQ and Quantinuum - both providing real quantum hardware and simulators, and if we care about real quantum computers only, there are 3 available: IonQ: one 11-qubit trapped-ion quantum computer available. Here are the most detailed specs that I found. Quantinuum: two trapped-ion quantum computers are available. The H1-1 model has 20 qubits, and the H1-2 model has 12 qubits. Here you can find more detailed specs, and here you can find some more information. In this documentation (updated 01/08/2022) it looks like several more providers have been added, but without further information besides that (I guess it is very fresh; if someone reading this can elaborate on this, it would be great). The important part: Quantinuum: Trapped-ion system with high-fidelity, fully connected qubits, low error rates, qubit reuse, and the ability to perform mid-circuit measurements. IONQ: Dynamically reconfigurable trapped-ion quantum computer for up to 11 fully connected qubits, that lets you run a two-qubit gate between any pair. Pasqal: Neutral atom-based quantum processors operating at room temperature, with long coherence times and impressive qubit connectivity. 
You can pre-register today for Azure Quantum’s private preview of Pasqal. Rigetti: Gate-based superconducting processors will be available in Azure Quantum soon and utilize Quantum Intermediate Representation (QIR) to enable low latency and parallel execution. You can pre-register today for Azure Quantum’s private preview of Rigetti. Quantum Circuits, Inc: Full-stack superconducting circuits, with real-time feedback that enables error correction, encoding-agnostic entangling gates. You can pre-register today for Azure Quantum’s private preview of QCI.
{ "domain": "quantumcomputing.stackexchange", "id": 4055, "tags": "azure-quantum" }
Where does the energy of light go, when it red-shifts?
Question: When talking about the expansion of the universe, it is said that it can be proven by the redshifting of light (as the Doppler effect alone would require faster-than-light speeds to produce this redshift). I am an amateur, so I am not sure I am correct, but here is what I think: redshifting increases the wavelength of the light, and higher wavelength = lower frequency = less energy. So, if my assumptions are correct, where does this energy from the light go? If not, where did I make an incorrect assumption? Answer: The problem is that conservation of energy is a slippery concept in General Relativity. There are arguments back and forth, but most people accept that conservation of energy is only a local law - it applies only to a local inertial frame and cannot be applied to the universe as a whole. However, in an expanding universe it is very difficult to identify any inertial frames, and certainly not ones that encompass a cosmologically significant volume. What this means is that if you make a local "box" small enough that it is not affected by the expansion of the universe, then energy conservation will apply. But of course in such a box, a photon would enter and leave with the same energy, because the box is unaffected by the expansion of the universe, and so there would be no redshifting of the photon.
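To make the energy assumption in the question explicit (a standard relation, not part of the original exchange): a photon's energy is $$E = h\nu = \frac{hc}{\lambda},$$ so stretching the wavelength by a redshift factor $1+z$ divides the photon's energy by $1+z$. The asker's chain "higher wavelength = lower frequency = less energy" is therefore correct; the subtlety lies entirely in what "conservation" means over cosmological distances.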
{ "domain": "astronomy.stackexchange", "id": 1882, "tags": "expansion, redshift" }
Parallel Joints in ROS
Question: I work on a team that creates a humanoid robot, and we are currently making the much-procrastinated change to port over to ROS. However, the robot has multiple parallel joints in the legs, and I believe that URDF cannot define such a robot model (correct me if I'm wrong). I have an SDF model of the robot, but I'm not able to find any good resources on how to control a robot through SDF, how to spawn it in Gazebo, and what limitations I may face in the future because of this. I was hoping someone could enlighten me about what exactly I'm getting into here, since content related to this is sparse. Thanks in advance! For reference, we work on ROS Melodic and Gazebo 9. The robot was modelled in SolidWorks. Originally posted by favre49 on ROS Answers with karma: 1 on 2019-08-26 Post score: 0 Answer: There isn't an ideal solution, but one workaround involves mimic joints; see the URDF mimic joint tutorial. Originally posted by David Lu with karma: 10932 on 2019-08-26 This answer was ACCEPTED on the original site Post score: 0
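For reference, a mimic joint (the workaround mentioned in the answer) couples one joint's position to another's in URDF, which is how closed kinematic loops are commonly approximated in a tree-structured model; the joint and link names below are hypothetical, not from the original robot:

```xml
<joint name="knee_follower" type="revolute">
  <parent link="thigh_right"/>
  <child link="shin_right"/>
  <origin xyz="0 0 -0.3" rpy="0 0 0"/>
  <axis xyz="0 1 0"/>
  <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  <!-- position = multiplier * knee_driver + offset -->
  <mimic joint="knee_driver" multiplier="1.0" offset="0.0"/>
</joint>
```

Keep in mind this is only a kinematic approximation of the parallel linkage; simulators may additionally need a plugin to enforce the mimic constraint dynamically.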
{ "domain": "robotics.stackexchange", "id": 33691, "tags": "ros, gazebo, urdf, ros-melodic, sdf" }
(solved) How to glue syringes?
Question: I found tons of answers on how to glue with a syringe, but I need to glue syringes together so they are airtight, and glue some silicone tube there too, and I cannot find a hint on how to do it. Does anybody know what glue sticks to a syringe (eventually dissolving it a little)? Answer: I used just a hot soldering rod and it went well. I needed to keep the heat low, but was able to melt the plastic together so it was airtight. Just to be sure, I used hot glue afterward to reinforce the connections and make them smoother. Not exactly nice looking, but it works well enough.
{ "domain": "robotics.stackexchange", "id": 1358, "tags": "robotic-arm" }
Implementation in JS to find if N can be written as X^Y, N < 100
Question: For the past year I have been mainly working on JS, and so have started implementing algorithm questions in JS. Can it be implemented in a better way? /** * Implementation to find if a number can be expressed as x^y * x is assumed to be >=0 */ var canBeExpressedAsXraisedToY = function (n) { if (n == 0 && n == 1) { console.log("Supplied 0 or 1"); return true; } else if (n > 1) { var x = 2, prod = 1,y = 1; while (prod < n) { while (prod < n ) { prod = Math.pow(x,y); console.log('prod:',prod,'y:', y ,'x:',x) if (prod == n) { console.log('it can be expressed as x:', x, 'in terms of:', y); return true; break; } y++; } x++; } console.log("the number cannot be expressed as x ^ y "); return false; } else { // to handle negative cases . console.log('the number given is not supported '); return false; } } console.log(canBeExpressedAsXraisedToY(4)); console.log(canBeExpressedAsXraisedToY(67)); console.log(canBeExpressedAsXraisedToY(64)); Answer: The algorithm is wrong: canBeExpressedAsXraisedToY(81) returns false. Of course any n has the trivial solution x=n and y=1. I assume you want to require y>1, but you didn't state that anywhere, and in fact initialize y=1 in your code. Initializing prod=1 is sloppy programming; you should structure the loops to compute prod before testing it. That is pretty much a brute-force exhaustive search. For large n, a much better strategy would be to analyze the prime factors of n instead.
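The prime-factor strategy suggested at the end of the answer can be sketched as follows (in Python rather than the asker's JS, and with my own convention for n < 4 mirroring the asker's intent for 0 and 1): n > 1 is a perfect power x^y with y > 1 exactly when the gcd of the exponents in its prime factorization exceeds 1.

```python
from math import gcd

def can_be_expressed_as_power(n):
    """True if n == x**y for integers x >= 0, y >= 2."""
    if n < 4:
        return n in (0, 1)   # 0 = 0**2 and 1 = 1**2; 2 and 3 have no such form
    exponents = []
    p, m = 2, n
    while p * p <= m:        # trial division up to sqrt of the remainder
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            exponents.append(e)
        p += 1
    if m > 1:
        exponents.append(1)  # leftover prime factor, exponent 1
    g = 0
    for e in exponents:
        g = gcd(g, e)
    return g > 1             # e.g. 81 = 3**4 -> [4]; 12 = 2**2 * 3 -> [2, 1]
```

This handles the case the review flagged (81), and it runs in O(sqrt(n)) time instead of the nested search.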
{ "domain": "codereview.stackexchange", "id": 24115, "tags": "javascript, node.js, ecmascript-6" }
Types of radioactive decay
Question: Besides alpha, beta, and gamma, are there any other types of radioactive decay? If so, what are they? Is there any type of radioactive decay that is more powerful than gamma? Answer: Here's a list of types of radioactive decay. The most notable types of decay that are not among the classic three involve the direct emission of a free proton or a neutron, the emission of atomic clusters other than helium nuclei (alpha particles), the absorption of one of the innermost shell electrons into the nucleus, or spontaneous fission of the unstable nucleus. Regarding the question about gamma ray energies, it is first important to notice that "gamma ray" is a generic term that refers to any electromagnetic energy (photons) coming from nuclear or subatomic-particle decays, and has no direct reference to the energy of the photon (though in the past it did). For example, a nuclear excited state of thorium-229, $^{229m}\ce{Th}$, decays through the emission of gamma rays with an energy of 7.6 eV, which corresponds to photons in the ultraviolet region of the electromagnetic spectrum. Also, the answer depends on exactly what you mean by "power". Assuming you're comparing kinetic energies, Wikipedia says gamma rays from nuclear decay rarely have energy above 10 MeV, and another source says electrons from beta decay usually have energies up to 4 MeV. The energies are all of the same order of magnitude, which makes sense because all the particles are coming from the same source (an unstable nucleus). Theoretically gamma rays can have a slightly higher kinetic energy than other decay particles because photons have zero rest mass, therefore all the decay energy is converted into kinetic energy, whilst for electrons 511 keV is "wasted" on creating the rest mass (and even higher "wastes" for heavier particles such as protons/neutrons/alpha particles etc). 
I think one can also consider that some energy is used up in "climbing out" of the electrical potential and residual strong nuclear potential wells of the nucleus, neither of which applies to gamma ray photons (though positively charged species would in fact gain energy, being electrically accelerated away from the nucleus by repulsion, but this contribution is probably smaller than the nuclear force one). However, different types of nucleus have different allowed decay paths, and depending on complicated details the most energetic beta particles or proton/neutron emissions might have a slightly higher energy than the most energetic nuclear decay gamma ray. As a final note, it is also possible to create even more energetic photons than any that could be produced via nuclear decay by putting electrons in a sufficiently powerful particle accelerator. Any charged particle under acceleration/deceleration loses some energy in the form of electromagnetic radiation. For ultrarelativistic charged particles, the photons emitted can be of extremely high energy. These are a type of x-ray termed "hard x-rays" (more energetic than so called "soft x-rays"). Edit: Correction regarding energy lost in rest masses of particles from decay. Energy is used up in the creation of electrons, but protons/neutrons/clusters already exist inside the nucleus, and so are not created. These particles could never be produced directly as a single one would have upwards of 930 MeV, far above typical nuclear decay energies.
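A quick sanity check of the $^{229m}\ce{Th}$ figure quoted above (my arithmetic, not from the original answer): using $\lambda = hc/E$ with $hc \approx 1240~\text{eV nm}$, $$\lambda \approx \frac{1240~\text{eV nm}}{7.6~\text{eV}} \approx 163~\text{nm},$$ which indeed lies in the (vacuum) ultraviolet rather than in the conventional gamma-ray range, illustrating that "gamma ray" labels the origin of the photon, not its energy.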
{ "domain": "chemistry.stackexchange", "id": 804, "tags": "atoms, nuclear, radioactivity" }
Chernoff bound for weighted sums
Question: Consider $X = \sum_i \lambda_i Y_i^2$, where $\lambda_i > 0$ and $Y_i$ is distributed as a standard normal. What kind of concentration bounds can one prove on $X$, as a function of the (fixed) coefficients $\lambda_i$? If all the $\lambda_i$ are equal then this is a Chernoff bound. The only other result I am aware of is a lemma from a paper of Arora and Kannan ("Learning mixtures of arbitrary Gaussians", STOC'01, Lemma 13), which proves concentration of the form $\Pr[X < E[X] - t] < \exp(-t^2/(4 \sum_i \lambda_i^2))$, i.e., the bound depends on the sum of the squares of the coefficients. The proof of their lemma is analogous to the usual proof of the Chernoff bound. Are there other "canonical" such bounds, or a general theory of which functions of the $\lambda_i$'s are such that their largeness ensures good exponential concentration (here, the function was simply the sum of the squares)? Maybe some general measure of entropy? A more standard reference for the Arora-Kannan lemma would also be great, if it exists. Answer: The book by Dubhashi and Panconesi collects together many such bounds, more numerous than can be listed here. If you find that hard to access immediately, there's an online survey of Chernoff-like bounds by Chung and Lu.
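As a rough empirical illustration (my own sketch, not from the Arora-Kannan paper), one can Monte Carlo the lower tail of $X=\sum_i \lambda_i Y_i^2$ for an arbitrary choice of coefficients and compare it against the quoted bound $\exp(-t^2/(4\sum_i \lambda_i^2))$:

```python
# Monte Carlo check of Pr[X < E[X] - t] <= exp(-t^2 / (4 * sum(l_i^2))).
# The coefficients below are arbitrary; E[X] = sum_i lam_i since E[Y_i^2] = 1.
import math
import random

random.seed(0)
lam = [0.5, 1.0, 2.0, 0.25]
mean = sum(lam)
t = 2.0

trials = 100_000
hits = 0
for _ in range(trials):
    x = sum(l * random.gauss(0.0, 1.0) ** 2 for l in lam)
    if x < mean - t:
        hits += 1

empirical = hits / trials
bound = math.exp(-t**2 / (4 * sum(l * l for l in lam)))
print(empirical, bound)
assert empirical <= bound   # the observed tail respects the bound
```

The bound here is quite loose for this small example, but the simulation at least confirms the direction of the inequality.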
{ "domain": "cstheory.stackexchange", "id": 22, "tags": "chernoff-bound" }
Custom indexOf() without String methods
Question: I created my own indexOf function. I was wondering if anyone could help me come up with a way to make it more efficient. I am practicing for interviews, so the catch is that I cannot use any String methods. I believe the runtime of this method is O(n^2) with space of O(n). Correct me if I am wrong. Also, I want to ensure the program runs safely and correctly; the only test case I can think of is the length comparison.

public static int myIndexOf(char[] str, char[] substr) {
    int len = str.length;
    int sublen = substr.length;
    int count = 0;
    if (sublen > len) {
        return -1;
    }
    for (int i = 0; i < len - sublen + 1; i++) {
        for (int j = 0; j < sublen; j++) {
            if (str[j+i] == substr[j]) {
                count++;
                if (count == sublen) {
                    return i;
                }
            } else {
                count = 0;
                break;
            }
        }
    }
    return -1;
}

Answer:

Complexity

Pedantically, the time-complexity is \$ O( m \times n ) \$, where m is str.length and n is substr.length. This matters when \$ \left| m-n \right| \$ is large. The Space complexity is \$ O(1) \$. You do not allocate any size-based memory structures.

Safety

It all looks good. There are no threading issues, no leaks, no problems.

Correctly

Nope, I don't like the lack of neat handling for invalid inputs.... you should be null-checking, etc. Getting a raw 'NullPointerException' looks bad. Edit: Note that Josay has pointed out that your code (and my code below) produce different behaviour to String.indexOf() when the search term is the empty-string/empty-array.

Alternative

I think your code is fine, but... I tend to use loop break/continue more than most... and, this saves a bunch of code in this case... Also, for readability, I often introduce a limit variable when the loop-terminator can be complicated.... Consider the following loops which do not need the count variable:

int limit = len - sublen + 1;
searchloop:
for (int i = 0; i < limit; i++) {
    for (int j = 0; j < sublen; j++) {
        if (str[j+i] != substr[j]) {
            continue searchloop;
        }
    }
    return i;
}
return -1;
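For what it's worth, here is the reviewer's break/continue structure transcribed into Python (my sketch, not part of the original answer). Python has no labeled continue, but the for/else idiom expresses the same control flow: the else branch runs only when the inner loop finishes without a break.

```python
# Naive substring search without library string methods, mirroring the
# loop-break version from the answer above.
def my_index_of(s, sub):
    n, m = len(s), len(sub)
    if m > n:
        return -1
    for i in range(n - m + 1):
        for j in range(m):
            if s[i + j] != sub[j]:
                break            # mismatch: try the next starting offset
        else:
            return i             # every character matched
    return -1

assert my_index_of("hello world", "world") == 6
assert my_index_of("aaab", "aab") == 1
assert my_index_of("abc", "xyz") == -1
```

Note that this version returns 0 for an empty search term, matching Java's String.indexOf rather than the original char-array code.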
{ "domain": "codereview.stackexchange", "id": 6542, "tags": "java, performance, algorithm, strings, search" }
Isolating a specific gene (specifically TRAV* series of genes) for sequencing
Question: I'm trying to figure out (for pedagogical purposes) the right way to isolate and PCR the regions coding the T-cell receptor. My understanding is that I would need to use restriction enzymes that target regions near the start and end of the gene. I was looking at a bunch of different restriction enzymes on this site: https://www.addgene.org/mol-bio-reference/restriction-enzymes/ What I'm not sure about is: how do you pick the combination that will specifically cut out that gene and not any of the other ones? It seems like the restriction enzyme recognition sites only consist of a few base pairs, and I would guess they can slice at many, many sites. Furthermore, once it is cut, how do I isolate just the DNA for the gene I want to sequence as opposed to the rest of the junk DNA? Answer: Restriction enzymes are usually only used when you do molecular cloning using plasmids or other forms of shorter DNA molecules, since - like you said - using them on genomic DNA would cut it into many thousands of pieces. Instead what you can do is perform your PCR directly on the genomic DNA (or potentially cDNA, if you want to get rid of introns). Doing PCR on genomic DNA is often a bit tricky (i.e. you'll likely have to try different conditions to get a product), but generally possible. If you want to analyse the whole coding region it might be better to extract mRNA instead of DNA and reverse transcribe it, so that you can amplify the coding region as one block that doesn't contain introns - PCR on cDNA is also easier than on genomic DNA. Another thing you need to look out for is that T-cell receptors (similar to antibodies) are different in every T cell. This means that you can't use Sanger sequencing, which will only give you a proper result if your input has the exact same sequence. To get the sequences of a mix of T cells you would need to use next-gen sequencing.
{ "domain": "biology.stackexchange", "id": 8978, "tags": "dna-sequencing" }
Path loss effect on BER
Question: Does including log distance path loss to a communication system cause a drastic difference to the BER of the system? In other words, if I have a simulation for a communication system without path loss, does the BER vs SNR curve differ drastically when I add path loss or should both the BER vs SNR curves(with and without path loss) look similar? Answer: Generally speaking, the BER only depends on the SNR. If the received signal power $P_\mathrm{S}$ is reduced due to path loss, while the noise power $P_\mathrm{N}$ remains constant, then the SNR will be reduced, because the SNR is defined as $$ \gamma = \frac{P_\mathrm{S}}{P_\mathrm{N}} $$ Consequently, the BER will be increased and the answer to your first question is: yes. However, this should not influence your BER vs SNR curves. Thus the answer to your second question is: no. If your simulation behaves differently, i.e. if introducing a path loss "shifts" your BER curves, then the SNR is probably not calculated correctly. I suggest that you estimate the received signal power $\hat P_\mathrm{S}$ at the receiver (just before the sampler). Then the power of the additive noise should be calculated by $$ P_\mathrm{N}=\frac{\hat P_\mathrm{S}}{\gamma}. $$ Then add noise with mean power $P_\mathrm{N}$ to the received signal. With this setup, your BER/SNR curves should be independent of path loss.
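A minimal simulation sketch (mine, not from the original answer; it assumes real-baseband BPSK over AWGN with hard-decision detection) illustrating the answer's prescription: if the noise power is derived from the received signal power, the path-loss gain drops out of the BER entirely.

```python
# BER of +/-1 BPSK scaled by a path-loss gain g. Setting the noise power to
# P_N = P_S_hat / snr (the answer's recipe) makes the BER independent of g.
import math
import random

random.seed(1)

def ber_bpsk(snr_linear, gain, n_bits=100_000):
    p_s_hat = gain ** 2                        # received signal power
    noise_sigma = math.sqrt(p_s_hat / snr_linear)
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((-1.0, 1.0))
        rx = gain * bit + random.gauss(0.0, noise_sigma)
        if (rx > 0) != (bit > 0):
            errors += 1
    return errors / n_bits

snr = 10 ** (6 / 10)                           # 6 dB
b_no_loss = ber_bpsk(snr, gain=1.0)
b_loss = ber_bpsk(snr, gain=0.1)               # 20 dB of path loss
print(b_no_loss, b_loss)                       # both ~2.3%, the curves coincide
```

If instead the noise power were fixed while the gain shrank, the measured SNR would drop and the curve would appear shifted, exactly the failure mode the answer describes.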
{ "domain": "dsp.stackexchange", "id": 2819, "tags": "signal-analysis, digital-communications, dsp-core" }
How long will it take for a damped spring to reach a certain point?
Question: I've written a spring simulation for a UI in JavaScript, and everything is going great, users are able to throw UI elements all over the place and have them slide right where they need to go with a little wiggle. However I'm trying to chain spring simulations together, and I'd like to know what time a spring will cross its equilibrium point so that I can ready the next animation in the chain to start at the right time and velocity. How can I solve this formula for $t$ so I can know when the spring will cross a certain point? I'd like to know for all types of springs, but practically I'm only working with slightly underdamped ones. My current formula for finding an underdamped spring's displacement from its equilibrium is $$ f(t) = e^{-ct/2m} \left( d_0 \cos\left(\frac{\sqrt{4mk - c^2}\,t}{2m}\right) + \frac{2mv_0+cd_0}{\sqrt{4mk-c^2}} \sin\left(\frac{\sqrt{4mk - c^2}\,t}{2m}\right) \right) $$ where: $m$: the mass of the springing object, $k$: the stiffness of the spring, the spring constant, $c$: the damping coefficient, $d_0$: the springing object's initial distance from its equilibrium at $t = 0$, $v_0$: the springing object's velocity once the user lets go of it at $t = 0$, and $t$ for time. Can I solve this mess for $t$ rather than $f(t)$? Answer: There is an analytical solution, if you bring the equation to the form $$ x = R \exp(-\zeta \theta) \cos \left(\theta \sqrt{1-\zeta^2}+\psi \right) $$ The solution for $x=0$ is $$ \theta = \frac{ \frac{\pi}{2} ( 2 i -1) - \psi }{\sqrt{1-\zeta^2}} $$ where $i=1,2,3 \ldots \infty$. The time and angle are related as $\theta = \omega_n t$, with $\omega_n^2 = \frac{k}{m}$ and damping ratio $\zeta = \frac{c}{2 m \omega_n}$. The amplitude is $R = \sqrt{A^2+B^2}$ and phase $\psi =-\arctan(A/B)$ where $A$ is the coefficient of $\sin()$ and $B$ the coefficient of $\cos()$.
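The closed-form crossing times above can be implemented directly. Here is my own sketch of that formula for the underdamped case, checked against the asker's displacement expression $f(t)$:

```python
# Zero-crossing times of an underdamped spring, from the accepted formula:
# t_i = (pi/2 * (2i - 1) - psi) / omega_d,  psi = -atan2(A, B).
import math

def crossing_times(m, k, c, d0, v0, n=3):
    """First n times t > 0 at which the spring crosses x = 0."""
    wn = math.sqrt(k / m)
    zeta = c / (2 * m * wn)
    assert zeta < 1, "formula applies to the underdamped case only"
    wd = wn * math.sqrt(1 - zeta**2)                  # damped angular frequency
    B = d0                                            # coefficient of cos
    A = (2 * m * v0 + c * d0) / math.sqrt(4 * m * k - c**2)  # coeff of sin
    psi = -math.atan2(A, B)
    times, i = [], 1
    while len(times) < n:
        t = (math.pi / 2 * (2 * i - 1) - psi) / wd
        if t > 0:
            times.append(t)
        i += 1
    return times

def f(t, m, k, c, d0, v0):
    """The asker's displacement formula, for cross-checking."""
    wd = math.sqrt(4 * m * k - c**2) / (2 * m)
    A = (2 * m * v0 + c * d0) / math.sqrt(4 * m * k - c**2)
    return math.exp(-c * t / (2 * m)) * (d0 * math.cos(wd * t) + A * math.sin(wd * t))

params = dict(m=1.0, k=4.0, c=0.4, d0=1.0, v0=0.0)
for t in crossing_times(**params):
    assert abs(f(t, **params)) < 1e-9                 # each time really is a zero
```

The chained animation can then be started at the first returned time, with the velocity read off from the derivative of $f$ at that instant.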
{ "domain": "physics.stackexchange", "id": 81904, "tags": "newtonian-mechanics, mass, friction, spring, oscillators" }
viewing rgb and depth from kinect camera in ubuntu 10.10
Question: I don't get any data (rgb image, depth image, rviz) from the kinect. I'm running on Ubuntu 10.10, diamondback; basically what I did is: 1) I set up my computer to accept software from ROS.org 2) I ran roscore on a new terminal by typing: roscore 3) I installed openNI kinect by typing: sudo apt-get install ros-diamondback-openni-kinect 4) I installed openNI camera by typing: rosmake openni_camera 5) I launched the openNI driver by typing: roslaunch openni_camera openni_node.launch 6) I ran rviz by typing: rosrun rviz rviz 7) I ran the rgb camera by typing: rosrun image_view image_view image:=/camera/rgb/image_color Please help me fix it. Originally posted by bassim on ROS Answers with karma: 1 on 2011-10-25 Post score: 0 Original comments Comment by bassim on 2011-11-04: I got them from ROS.org, if you think that they are wrong, could you please tell me what are the right instructions? Comment by tfoote on 2011-10-25: Where did you get these instructions from? These are not the standard way to install the kinect drivers. Answer: If you would like to view images coming from the kinect camera, follow the instructions posted here: http://www.ros.org/wiki/openni_kinect and here: http://www.ros.org/wiki/openni_camera which are summarized as follows: install openni_kinect: sudo apt-get install ros-electric-openni-kinect run openni: roslaunch openni_launch openni.launch run image viewer: rosrun image_view image_view image:=/camera/rgb/image_color This video tutorial for the TurtleBot might also be helpful: http://www.ros.org/wiki/turtlebot/Tutorials/Looking%20at%20Camera%20Data Originally posted by mmwise with karma: 8372 on 2011-10-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by mmwise on 2011-11-07: you should see a line like this in the output ... 
[ INFO] [1320711146.846538470]: Number devices connected: 1 Comment by mmwise on 2011-11-07: can you copy and paste the output from that terminal into your question Comment by bassim on 2011-11-07: no , I didnt get any error ! Comment by mmwise on 2011-11-04: are there any errors in the terminal where you called roslaunch openni_camera openni_node.launch? Comment by bassim on 2011-11-04: when I typed on a new terminal : rostopic hz/camera/rgb/image_color . I got this : WARNING: topic [/camera/rgb/image_color] does not appear to be published yet Comment by mmwise on 2011-11-04: @bassim do you see any errors in the terminal when you roslaunch openni_camera openni_node.launch .. can you rostopic hz /camera/rgb/image_color Comment by mmwise on 2011-11-02: @bassim what happens when you follow these instructions. do you see any console output? Any error messages? You need to provide more information so that we can help you
{ "domain": "robotics.stackexchange", "id": 7088, "tags": "kinect, openni-kinect" }
Is there a quick way to calculate the derivative of a quantity that uses Einstein's summation convention?
Question: Consider $F_{\mu\nu}=\partial_{\mu}A_\nu-\partial_\nu A_\mu$. I am trying to understand how to quickly calculate $$\frac{\partial(F_{\mu\nu}F^{\mu\nu})}{\partial (\partial_\alpha A_\beta)}$$ without expanding the multiplication in terms of $A$. Clearly if we do not have indices, this can be done very quickly with a glance, yet when there are indices, multiplication factors come into play; in this case we get a factor of $4$ in front of the usual derivative. Is there a way to glance at it and get the correct answer immediately? Answer: The product rule (Leibniz rule) applies to functional derivatives, so we have $$ \frac{\partial(F_{\mu\nu}F^{\mu\nu})}{\partial (\partial_\alpha A_\beta)} = 2 F^{\mu \nu} \frac{\partial(F_{\mu\nu})}{\partial (\partial_\alpha A_\beta)} = 2 F^{\mu \nu}\left( \delta^{\alpha}_{\mu} \delta^{\beta}_{\nu} - \delta^{\alpha}_{\nu} \delta^{\beta}_{\mu}\right) = 4 F^{\alpha \beta}. $$ The step where I actually take the derivative of $F_{\mu \nu}$ with respect to $\partial_\alpha A_\beta$ does involve writing out the field strength in terms of the derivatives, but it's less complicated in this version.
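The factor of $4$ can be verified numerically (my own cross-check, not part of the original answer): treat $\partial_\alpha A_\beta$ as an arbitrary $4\times4$ array, build $F_{\mu\nu}F^{\mu\nu}$ with the Minkowski metric, and compare a finite-difference gradient against $4F^{\alpha\beta}$.

```python
# Finite-difference check that d(F_uv F^uv)/d(d_a A_b) = 4 F^{ab}.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # metric, signature (+,-,-,-)

def scalar(dA):
    F = dA - dA.T                          # F_{mu nu} = d_mu A_nu - d_nu A_mu
    F_up = eta @ F @ eta                   # raise both indices with eta
    return np.sum(F * F_up)                # contract: F_{mu nu} F^{mu nu}

rng = np.random.default_rng(0)
dA = rng.standard_normal((4, 4))
F_up = eta @ (dA - dA.T) @ eta

eps = 1e-6
grad = np.zeros((4, 4))
for a in range(4):
    for b in range(4):
        bump = np.zeros((4, 4))
        bump[a, b] = eps
        grad[a, b] = (scalar(dA + bump) - scalar(dA - bump)) / (2 * eps)

assert np.allclose(grad, 4 * F_up, atol=1e-4)   # matches 4 F^{alpha beta}
```

Since the scalar is quadratic in $\partial A$, the central difference is exact up to roundoff, so the agreement is tight.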
{ "domain": "physics.stackexchange", "id": 91916, "tags": "homework-and-exercises, electromagnetism, special-relativity, field-theory, differentiation" }
Proof for BFS and DFS equivalence
Question: I'm trying to prove (by induction) that BFS is equivalent to DFS, in the sense that they return the same set of visited nodes, but I'm stuck in the middle of some of the cases. Let $G$ be a directed graph and $u \in V(G)$. We want to prove that $ BFS(G,u) = DFS(G,u)$. $\text{BASE CASE}$ $V(G) = \{u\}$ $BFS(G, u) = \{u\} \quad DFS(G,u) = \{u\}$ $BFS(G,u) = DFS(G,u) \quad \text{QED}$ $\text{INDUCTIVE HYPOTHESIS}$ $BFS(G,u) = DFS(G,u) \;\;\forall G, u \in V(G)$. $\text{INDUCTIVE STEP}$ $(i)\;G' = (V(G) \cup\{v\}, E(G))$ We know that $BFS(G',u) = BFS(G,u)$ if $u \ne v$ (and so does $DFS$), because $v$ is disjoint from the rest of the graph, and so the proof follows from the hypothesis. But what if $u = v$? $(ii)\;G' = (V(G), E(G) \cup \{(v,w)\}) \quad v,w \in V(G)$ How can I use the hypothesis here? Answer: There is not much hope in proving $BFS(G,u) = DFS(G,u)$ directly by mathematical induction on the number of nodes in $G$ or on the degree of $u$. The problem is that as an induction hypothesis that equality does not capture the "right" kind of information or cover the "right" cases that are useful for the induction step. Approach one: the explicit description as reachable nodes Instead, you can try proving separately that each side is equal to the set $R$ of reachable nodes from $u$, that is, $R=\{v\in V(G)\mid \mbox{ there is a directed path from } u \mbox{ to } v \}$. More specifically, $$R=\{u\}\cup\left\{v\in V(G)\mid \mbox{ there exist } u_0, u_1, \cdots,u_n \text{ such that }u_0=u, u_n=v, u_i\in V(G) \text{ for } 0\le i\le n \text{ and }(u_i,u_{i+1})\in E(G)\text{ for } 0\le i\lt n\right\}$$ You can prove the case of $DFS$ by mathematical induction on the total number of nodes of $G$. You can prove the case of $BFS$ by mathematical induction on the distance of $v$ to $u$. Or use whatever as you see fit. 
Approach two: the characterization as nodes closed under neighbourhood A set of nodes $S$ is said to be closed under neighbourhood if for any node $n\in S$, $S$ contains the adjacent nodes of $n$. That is, if $n\in S$ and $(n, m)\in E(G)$, then $m\in S$. Here are the critical observations on both $BFS$ and $DFS$. Lemma 1. $DFS(G,u)$ is closed under neighbourhood. Proof: It becomes obvious once we check what $DFS$ does when it discovers a new node. Lemma 2. $BFS(G,u)$ is closed under neighbourhood. Proof: It becomes obvious once we check what $BFS$ does when it pops out a node from the queue. Lemmas 1 and 2 suggest considering the minimal set of nodes that contains $u$ and is closed under neighbourhood. Name it $C(G,u)$. It is enough to prove that both $BFS(G,u)$ and $DFS(G,u)$ are equal to $C(G,u)$. We have shown both contain $C(G,u)$. It is easy to verify "the set of nodes visited so far are contained in $C(G,u)$" is an invariant of $BFS$ on $G$ starting from $u$. The same holds for $DFS$.
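The equivalence the answer proves can be spot-checked mechanically (my sketch, not part of the original answer): run iterative BFS and DFS from the same start node on random directed graphs and confirm the visited sets always agree.

```python
# BFS (queue) and DFS (stack) both compute the reachable set C(G, u).
import random
from collections import deque

def bfs(adj, u):
    seen, q = {u}, deque([u])
    while q:
        n = q.popleft()
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                q.append(m)
    return seen

def dfs(adj, u):
    seen, stack = set(), [u]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj.get(n, ()))
    return seen

random.seed(7)
for _ in range(100):
    nodes = range(12)
    adj = {n: [m for m in nodes if m != n and random.random() < 0.15]
           for n in nodes}
    assert bfs(adj, 0) == dfs(adj, 0)    # same visited set every time
```

The visit *orders* differ, of course; only the sets coincide, which is exactly the statement being proved.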
{ "domain": "cs.stackexchange", "id": 12647, "tags": "graphs, graph-traversal" }
Sign of gravitational force
Question: I'm reading Lanczos's The variational principles of mechanics, and on pp. 80-81 there is an example involving a system made up of $n$ rigid bars, freely jointed at their end points, and the two free ends of the chain being suspended. The coordinates are chosen so that the $x$ axis is horizontal, and the $y$ axis is pointed vertically downwards. If the rectangular coordinates of the end points of the bars are denoted by $(x_k, y_k)$ and the length of the bars is denoted by $l_k$, then the expression for the potential energy will be of the form $$ \frac{g}{2} \sum_{k=0}^{n-1} (y_k + y_{k+1}) l_k .$$ My problem with this is: the way I've understood things so far, the potential should have a negative sign because going "down", that is, going in the direction of the force of gravity, should decrease the value of the potential function. But in this example, the opposite appears to be true: going down increases the value of the potential. What am I getting wrong here? Answer: I) Yes, it appears that the sentence [...] the $y$-axis vertically downwards [...] in Ref. 1 p. 81 should have been [...] the $y$-axis vertically upwards [...] II) Let us also mention that Ref. 1 p. 29 eq. (17.9) introduces a function $U$ to be minus the potential energy, however, this $U$ seems unrelated to above. References: C. Lanczos, The variational principles of mechanics, 1949.
{ "domain": "physics.stackexchange", "id": 14606, "tags": "classical-mechanics, lagrangian-formalism, potential-energy, variational-principle, statics" }
Is only ~1% of the physical CPU space really used for computing?
Question: In a Talk by Herb Sutter C++ and Beyond 2012: Herb Sutter - atomic Weapons 1 of 2 at 46:30 the point is made that only around 15% of the physical space (or the transistors) on a CPU do actual processing while the rest is basically just caches. Further it is mentioned that of those 15%, only around 1% account for actual processing while the rest contains the logic for e.g. out of order execution, branch prediction, pipelining etc. This is the slide shown: In the bottom right corner the source is shown to be from 2004 (the talk itself is from 2012). Now my question is: Is this statement true for modern processors as well or did the ratio change significantly in either direction? And are there any more sources that show this (either for old or for new CPUs)? When I tried to find more information I mostly found articles about cache sizes and how it matters or not matters but that is not what I am interested in. Answer: I don't have any numbers for you, but that's partly because I don't know what Herb Sutter meant by "actual processing". What I think he meant is, essentially, ALUs. So you have to consider what you're comparing with. This is a die shot of the Intel 8086, a top-of-the-line CPU from 1978. What I want you to notice about this is that about 2/3 of the die is control logic, and only about 1/3 (over on the left) is the data path, and about half of that is the register file. There is a lot of die space on the right-hand side that isn't occupied by wires, transistors, and pads, so this may be a little tricky to estimate, but it seems to me that only about 10-15% of the die space seems to be used for "actual processing", if by that you mean ALUs. So that 1% figure is 10% of 10%. When you put it that way, it doesn't seem like such a small number, at least to me. Alternatively, let's say you do count the register file as "actual processing". 
Computation doesn't really "happen" inside it (not in a modern CPU, anyway), but it's pretty important; if you got rid of cache, the CPU might go much slower, but if you got rid of the register file, it wouldn't work at all. Then consider that modern register files are multi-ported for superscalar execution, and expanded for register renaming, and this all consumes die area. How much of that is it fair to assign to "actual computation"? The Itanium may be a bit of a special case. Just consider its user-visible register set, which is quite something: 128 64-bit general-purpose registers. (Or possibly 127; register 0 was hardwired to the value zero as with MIPS.) 128 82-bit floating point registers. (Again, two are hardwired to constants.) 128 64-bit special purpose "application registers". 8 64-bit registers just used for the targets of indirect branches. 64 1-bit predicate registers. A whole bunch of other stuff that you may not think of as "registers" (e.g. performance monitoring counters), but is still part of the "stored state" when you context switch. The Itanium also has an unusual approach to register spilling. When you have so many registers, you almost never need to spill registers except when performing function calls. But the Itanium avoids that too by managing general and floating-point registers as a stack (similar to SPARC), and spilling those "stack frames" implicitly behind the scenes, automatically during spare memory cycles. Is this "actual processing"? Arguably not, but it's not just a performance issue, it is part of the CPU's user-visible semantics. The Itanium is unusual in how much of the CPU implementation is exposed to the programmer. Other CPUs go the opposite route: a lot of the non-"actual processing" circuitry there is to hide the caches and the speculative execution. Well, up until you get a Spectre-like vulnerability that exposes the detail by accident... A few things have changed since 2004. 
Increasing transistor density influences the design of functional units, sometimes in strange ways. Consider an integer adder. One of the things we learn early on is how to manage ripple carry as the width of an adder gets larger, through look-ahead. Modern thinking is to use a fast single-cycle circuit which works on 95% of addition problems, and then use a second cycle to "repair" the answer if it turned out to have a difficult carry pattern. You can think of this as yet another form of speculative execution, so it's getting much more difficult to see where that stops and "actual processing" begins. Another thing that's changed is the rise of SIMD. Wide vector units allow CPU designers to add more arithmetic and logic for essentially the same amount of instruction decoding overhead; one add instruction might trigger 4-8 parallel additions. But it does mean that some busses also need to be wider. Finally, many CPUs have ubiquitous integrated GPUs, which complicate the "processing vs overhead" calculation further. One last thing to remember is that people have to design and manufacture these CPUs, and CPU design is also influenced by what we can physically build. When you have 6 billion transistors to place, it's inevitable that they are designed from units that can be designed once and then replicated a lot, whether that's putting multiple cores on a die or putting in large arrays of cache memory. The fabrication process is not perfect, either; a lot of CPUs are thrown away because they failed testing, and others are packaged and sold as lower-grade chips (e.g. lower speed, fewer cores) if they only partially failed. Some kinds of circuit are easier to test than others, and at least some of the die space is used by test infrastructure which is never used outside the factory.
{ "domain": "cs.stackexchange", "id": 20322, "tags": "computer-architecture, cpu, cpu-cache" }
How does the damping constant relate to mass?
Question: (Moderator note: this question is not answered by a different post here) In damped harmonic motion, I'm led to believe that the equation of motion in a mass-spring system is as follows $$x = A e^{-\lambda t} \cos(\omega t)$$ After researching, I couldn't find a clear - explicit - relationship between $\lambda$ and the mass of the object. I am aware of the identity relating the natural frequency to $\lambda$, but the frequency is influenced by $\lambda$ itself. Some people say that $\lambda$ is proportional to $\sqrt{m}$, and some say that it is proportional to $1/\sqrt{m}$. What is the correct relationship between mass and $\lambda$ in damped harmonic motion? Is it a power relationship, a linear relationship, a square root relationship... etc? Please do tell me why as well. P.S. If you had to, please keep the calculus to a minimum. I'm only a senior high school student. Answer: Questions that say essentially "Some people report A, whereas other people report B" without identifying the sources are difficult to address because (1) those people may be flat wrong or (2) they may be right for certain scenarios (but not this one) or (3) they may be right but being misinterpreted, etc. Putting that aside, if I do a quick online search for underdamped oscillation equation, the first three sources that come up (1, 2, 3) are all consistent and unambiguous: $\lambda=\frac{c}{2m}$, where $c$ is the (constant) damping factor, i.e., the coupling coefficient between the speed of mass $m$ and the corresponding damping resistance in units of force. So $\lambda \sim\frac{1}{m}$, not $\frac{1}{\sqrt{m}}$ or $\sqrt{m}$. A dimensional analysis provides further reassurance: $c$ has units of [force]/([distance]/[time])=[mass]/[time], and $m$ has units of [mass], so $\lambda$ has units of 1/[time], which combines with $t$'s units of [time] to correctly provide a nondimensional argument in the exponential function.
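The $\lambda = c/(2m)$ relationship can also be seen numerically without any calculus (my own sketch, not part of the original answer): integrate $m\ddot x = -kx - c\dot x$ step by step, read the decay rate off successive peak amplitudes, and observe that doubling the mass halves $\lambda$.

```python
# Estimate the amplitude decay rate of a damped mass-spring system by
# time-stepping and measuring successive peaks (where velocity changes sign).
import math

def decay_rate(m, k, c, dt=1e-3, t_max=40.0):
    x, v, t = 1.0, 0.0, 0.0
    peaks = []                              # (time, amplitude) at local maxima
    prev_v = v
    while t < t_max:
        a = (-k * x - c * v) / m
        v += a * dt                          # semi-implicit Euler step
        x += v * dt
        t += dt
        if prev_v > 0 and v <= 0:            # + to - velocity: a peak
            peaks.append((t, x))
        prev_v = v
    (t1, x1), (t2, x2) = peaks[0], peaks[-1]
    return math.log(x1 / x2) / (t2 - t1)     # slope of the log-amplitude decay

c, k = 0.3, 2.0
lam1 = decay_rate(m=1.0, k=k, c=c)           # expect c/(2m) = 0.15
lam2 = decay_rate(m=2.0, k=k, c=c)           # expect c/(2m) = 0.075
print(lam1, lam2)
```

Successive peaks occur at the same phase each period, so their ratio isolates the $e^{-\lambda t}$ envelope cleanly.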
{ "domain": "physics.stackexchange", "id": 86503, "tags": "newtonian-mechanics, friction, harmonic-oscillator, spring, oscillators" }
Is poison still poisonous after its 'expiration date'?
Question: As we all know, any poison is just a chemical compound. And as discussed in the question: Chemicals-do-have-an-expiry-date! So, my question is: Is poison still poisonous after its 'expiration date'? And is every poison always as poisonous as just after it was synthesised? Please feel free to update with the suitable tag(s). Answer: It depends on what the poison is. If we take the colloquial use of the word and include toxins and venoms, many are things like proteins that will certainly denature or otherwise degrade, eventually becoming harmless. e.g. tetrodotoxin, ricin, botulinum, etc. I would expect that type of poison to have the shortest shelf-life as they are relatively fragile. Many other poisons are small organic molecules. These can often be degraded by oxidation in air, exposure to UV, hydrolysis etc. and would include things like nicotine and nerve agents like sarin and VX. Many nerve agents, have shelf lives of a few years and research has actually been done to extend them for use in munitions. Several metals are known to be poisonous (like lead, mercury, and cadmium) and are problematic because they are toxic in not only their elemental forms, but also in inorganic and organic compounds. There may be a great difference in toxicity of the different forms, (see elemental mercury vs methylmercury), but most forms remain at least somewhat toxic. These may last a very long time because reactions likely to occur under normal conditions may not render them safe, e.g. a chunk of cinnabar ($\ce{HgS}$ mineral) sitting on your desk will not undergo any significant change to render it safe, even on a geological timescale.
{ "domain": "chemistry.stackexchange", "id": 8280, "tags": "everyday-chemistry, toxicity" }
Why wouldn’t the COM change position due to internal forces acting on objects inside a trolley?
Question: 7.3 A child sits stationary at one end of a long trolley moving uniformly with a speed $V$ on a smooth horizontal floor. If the child gets up and runs about on the trolley in any manner, what is the speed of the CM of the (trolley + child) system? This is where I have got a bit of a problem. I do know that the COM would remain unaffected by internal forces in its system, but according to my textbook those internal forces are supposed to cancel each other out. In the problem the internal forces seem to come from the child inside the trolley, but I don’t see how those forces could cancel out and not affect the position of the COM. I have tried to think about this in another way: if the COM’s position is mathematically defined to be dependent on the mass and relative position of the objects in the system, why wouldn’t it change when the child (an object of the system) changes their position by running around the trolley? Answer: Because of Newton's Third Law. If the child exerts a force on the trolley, then the trolley must exert an equal and opposite force back on the child. This means that the accelerations of the trolley and child respectively must satisfy $$ m_t a_t = - m_c a_c. $$ If we integrate both sides of the equation over some period of time we get $$ m_t (v_t - V) = -m_c (v_c - V) $$ where $V$ is the initial velocity. Rearranging we see that $$ \frac{m_t v_t + m_c v_c}{m_t + m_c} = V $$ i.e., the center of mass is still moving at $V$ regardless of the forces the child and the trolley exert on each other.
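The integration step in the answer can be mimicked with a tiny momentum-bookkeeping loop (my own sketch, with an arbitrary made-up force history): Newton's third law pairs the child's push with the trolley's reaction, and the centre-of-mass velocity never moves off $V$.

```python
# Apply an arbitrary internal force F(t) between child and trolley and verify
# that the centre-of-mass velocity stays at the initial V throughout.
import math

m_child, m_trolley, V = 30.0, 70.0, 2.0
v_child = v_trolley = V
dt = 1e-3
for step in range(5000):
    F = 40.0 * math.sin(0.7 * step * dt)   # child pushes off the trolley floor
    v_child += (F / m_child) * dt           # reaction accelerates the child
    v_trolley += (-F / m_trolley) * dt      # action decelerates the trolley
    v_com = (m_child * v_child + m_trolley * v_trolley) / (m_child + m_trolley)
    assert abs(v_com - V) < 1e-9            # COM speed stays at V every step
```

Both individual velocities wander, but the mass-weighted average is pinned at $V$, which is the point of the exercise.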
{ "domain": "physics.stackexchange", "id": 93583, "tags": "homework-and-exercises, newtonian-mechanics, reference-frames, momentum" }
Unclear point in derivation of action-value function
Question: I did not understand how the third equality follows from the second equality. Could some expert explain? Answer: $q_{\pi}(s,a)$ is defined in the S&B RL book as follows: we define the value of taking action $a$ in state $s$ under a policy $\pi$, denoted $q_{\pi}(s,a)$, as the expected return starting from $s$, taking the action $a$, and thereafter following policy $\pi$. Therefore, similar to the state value, the highlighted step here is trying to represent your action value's $\mathbb{E}_{\pi}[G_{t+1}|S_t=s,A_t=a]$ component by the state values of all possible entering states $s'$ after taking action $a$ and following the same policy $\pi$ thereafter. As for the other component involving $R_{t+1}$, you probably already know it's determined only by the environment under the above conditions.
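A tiny numeric illustration may help (my own sketch, with a made-up two-action MDP, not from S&B): the step in question is just $q_\pi(s,a) = \sum_{s'} p(s'\mid s,a)\,\big(r(s,a,s') + \gamma\, v_\pi(s')\big)$, i.e. the expected next reward plus the discounted value of whichever state you land in.

```python
# Expand E_pi[G_{t+1} | S_t=s, A_t=a] over the successor states s'.
gamma = 0.9
v = {"s1": 5.0, "s2": 1.0}                        # state values under pi
transitions = {                                    # (prob, s', reward) triples
    ("s0", "left"):  [(0.8, "s1", 1.0), (0.2, "s2", 0.0)],
    ("s0", "right"): [(1.0, "s2", 2.0)],
}

def q(s, a):
    return sum(p * (r + gamma * v[s2]) for p, s2, r in transitions[(s, a)])

print(q("s0", "left"))    # 0.8*(1 + 0.9*5) + 0.2*(0 + 0.9*1) = 4.58
print(q("s0", "right"))   # 1.0*(2 + 0.9*1) = 2.9
```

Nothing about the policy appears inside `q` itself; the policy only enters through the values $v_\pi(s')$ used for the successor states, which is exactly what the third equality expresses.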
{ "domain": "ai.stackexchange", "id": 4172, "tags": "reinforcement-learning, sutton-barto" }
Self-adjoint and nonpositive differential operators
Question: I recently stumbled over a statement in a geophysics paper (PDF here). They have a wave equation which they formulate as $$ \frac{1}{v_0}\frac{\partial^2}{\partial t^2} \begin{pmatrix}p \\ r\end{pmatrix} = \begin{pmatrix} 1+2\epsilon&\sqrt{1+2\delta}\\\sqrt{1+2\delta} &1\end{pmatrix} \begin{pmatrix}G_{\bar x \bar x}+G_{\bar y \bar y}&0\\0&G_{\bar z \bar z} \end{pmatrix} \begin{pmatrix} p\\r \end{pmatrix}\tag{20} $$ and they claim that To achieve stability, the rotated differential operators $G_{\bar x \bar x}$, $G_{\bar y \bar y}$, and $G_{\bar z \bar z}$ should be self-adjoint and nonpositive definite as are the second-order derivative operators ($\tfrac{\partial^2}{\partial x^2}$, $\tfrac{\partial^2}{\partial y^2}$ and $\tfrac{\partial^2}{\partial z^2}$). (see eq. 14 and statement under eq. 20). They also claim that self-adjointness and nonpositivity of the differential operators of the wave equation are necessary to conserve the energy in this system, and that if they were not self-adjoint, numerical instabilities occur. We have solved the problem by introducing the self-adjointness to the operator matrices in equation 20 to make sure that energy is conserved during the wave propagation to avoid amplitude blowup in the modeling. In this case the wave equation consists of two coupled elliptical PDEs. What happens in general, when some operators are not bounded and linear? Unfortunately I don't have the mathematical background to understand this statement. Answer: This has much to do with the possible eigenvalues of the operators. Normal operators on a Hilbert space are closely analogous to complex numbers, with the adjoint taking the role of the conjugate; these relations are typically inherited directly by the operator's eigenvalues. Thus, if a linear operator $L$ has an eigenfunction $f$ with eigenvalue $\lambda$, $$Lf=\lambda f,$$ then saying "$L$ is self-adjoint" means that $L^\dagger=L$ which translates to $\lambda^\ast=\lambda$, i.e. 
that $\lambda$ be real. Similarly, $L$ being nonpositive implies that $\lambda\leq0$. In your case, the behaviour can be reduced to an equation of the form $$ \frac{\partial^2 p}{\partial t^2}(x,t)=\hat Lp(x,t), $$ where $\hat L$ is some differential operator. In general the solution will not be of this form, but you can take a first stab at the problem by inserting an eigenfunction of the differential operator for the spatial dependence. That is, you use the trial solution $p(x,t)=p_0(x)T(t)$, where $\hat Lp_0=\lambda p_0$. This hugely simplifies the time-propagation equation, which reduces to the solvable form $$ \frac{\partial^2 }{\partial t^2}T=\lambda T. $$ While the solutions of this equation are formally all the same (i.e. $T(t)=T_+e^{\sqrt{\lambda}t}+T_-e^{-\sqrt{\lambda}t}$) regardless of what $\lambda$ is, the behaviour will be very different and depend, sometimes sensitively, on $\lambda$: If $\lambda>0$, then at least one of the exponentials $e^{\pm\sqrt{\lambda}t}$ will blow up. If $\lambda$ has an imaginary part, however small, then one of the two square roots $\pm\sqrt{\lambda}$ will have a positive real part, and the corresponding contribution to $T(t)$ will oscillate with a blowing-up amplitude. If $\lambda$ is negative or zero, then both roots $\pm\sqrt{\lambda}$ will be imaginary or zero, and both exponentials will be completely oscillatory and have bounded amplitude for all time. It is clear that only the third case is consistent with conservation of energy. In terms of the differential operator, it corresponds to a condition of self-adjointness (i.e. $\lambda\in\mathbb R$) and nonpositivity (i.e. $\lambda\leq0$) of the operator.
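The three cases above are easy to see numerically. The sketch below (my own illustration, with made-up sample eigenvalues) evaluates the closed-form solution $T(t)=\cosh(\sqrt{\lambda}\,t)$ of $T''=\lambda T$ with $T(0)=1$, $T'(0)=0$, and compares amplitudes:

```python
import cmath

def evolve(lam, t):
    """Closed-form T(t) = cosh(sqrt(lam) * t), solving T'' = lam * T
    with initial conditions T(0) = 1, T'(0) = 0."""
    s = cmath.sqrt(lam)
    return cmath.cosh(s * t)

t = 10.0
print(abs(evolve(4.0, t)))          # lam > 0: exponential blow-up
print(abs(evolve(4.0 + 0.1j, t)))   # complex lam: oscillation with growing amplitude
print(abs(evolve(-4.0, t)))         # lam <= 0: bounded oscillation, |T| <= 1
```

Only the nonpositive real eigenvalue keeps the amplitude bounded for all time, which is the numerical-stability statement in the paper.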
{ "domain": "physics.stackexchange", "id": 11955, "tags": "mathematical-physics, operators, hilbert-space" }
Electromagnetic field and continuous and differentiable vector fields
Question: We have notions of derivative for continuous and differentiable vector fields. Operations like curl, divergence, etc. have well-defined, precise notions for these fields. We know electrostatic and magnetostatic fields aren't actually well behaved. They blow up at the sources and have discontinuities, and yet we use the same mathematical formulations for them as we would for a continuous and differentiable vector field. Why is this done? Why are the laws of electromagnetism (Maxwell's equations) expressed in the so-called differential forms when clearly that mathematical theory is not perfectly consistent with the electromagnetic field? Why not use a new mathematical structure? Is there a resource which can help me overcome these issues without handwaving at particular instances when the methods seem to give wrong results? Also, one of my major concerns is that, given a charge distribution, the Maxwell equations in differential form will always give a nicely behaved continuous and differentiable vector field solution. But the integral form (alone, not satisfying the differential form) can give a discontinuous solution as well, leading to two different answers for the same configuration of charges; hence there is an inconsistency. For example, there is a discontinuous solution for the boundary condition of a 2D surface: the perpendicular component of the electric field is discontinuous. (Maybe it is just an approximation and the field is actually continuous, but because we cannot solve the differential equation we give such an approximation; this isn't mentioned in the textbooks.) Answer: One of the major issues that seems to be going on here is the notion of point and surface structures in our 3D world. When we define electrostatic fields by a distribution of point charges, we are being somewhat non-physical. If we keep zooming in on an electron, it's going to start not looking like a point charge anymore. 
Consider the Darwin Term in the Fine Structure Hamiltonian. The "rapid quantum oscillation smearing out the charge" removes the idea of a stationary point charge (albeit for the proton). What's more important in electrostatics is to say: in what region does our field need to be valid? The answer is only the region in which we're doing physics. To a good approximation, the electron behaves like a point charge as long as you're not on top of it. Our point-like charge distribution gives a field which is valid and a good approximation pretty much all the way down to the point itself. This doesn't need to be a problem though. Let us compare with an example from GR: In the normal derivation of the Schwarzschild Metric in GR, we're only concerned with the region outside the spherical body. If the Schwarzschild radius of the body lies outside the physical boundary of the spherical body, then our solution starts producing strange behaviours, and that's great, but we never try to go into the body itself using this metric. There's a region we're concerned with and we stick to it and it's all fine. There's a similar issue with surface charges. Physically, you cannot confine charge to the plane. You can do a pretty good job approximating the plane, but random quantum behaviour puts a limit in place. We have to realise that the model is not a perfect representation of the world. But, at the level we're usually looking at, the normal E-Field is pretty much discontinuous across a boundary and our theory is the limit in which it is discontinuous. That doesn't mean it isn't useful. If we start going right up to that boundary, our model is going to break down. As an aside, a spherical conductor is not a uniform distribution of matter. If it were, it would be a mathematical ball, and the Banach-Tarski paradox would have some very interesting things to say about that conductor. 
If we're going to say let's throw away this theory because the field isn't defined everywhere, I'd say we should have thrown it away sooner because of Banach-Tarski. If we stick with Maxwell's Electrodynamics then we need to study it for itself to make sure we're always self-consistent. You mention the electrostatic energy derivation given in Griffiths' text in a comment. I think you're talking about the Electric Potential calculation and the choice of reference point. If the charge distribution extends to infinity, we cannot use the point at infinity as the zero reference in calculating potential because the potential blows up at infinity. This is fundamental to the theory we use. It is equivalent to trying to use the location of a point charge as the zero. We have to use the theory as is. If I remember correctly, Griffiths goes on to say that such problems do not occur in the real world because infinite distributions do not exist, which brings a small amount of peace. But you have to ask yourself if you're really surprised when unhelpful things happen because you're playing with mathematical curiosities. You ask about an alternative that doesn't have these issues? We don't use Maxwell's Electrodynamics to calculate electromagnetic cross-sections when colliding electrons. We use QED. In QED, the electrons don't have an Electric field like they do in Maxwell's. Electrons go in, something happens, electrons come out. That something is the exchange of virtual photons: the first electron excites the background field, and the excitation - the photon - propagates and then interacts with the other electron. There are many different 'paths' via which this can happen and we need to sum over them etc. Let's not get bogged down with Quantum Field Theory though, because you don't need to be an expert to know it's littered with infinities. So should we use the full standard model lagrangian to do everything? Well no. It's probably worth taking a look at the two big reasons why. 
Firstly, it's not a theory of everything; it doesn't do gravity. Secondly, the computational demands of the dynamics of the 3 quarks + gluon plasma (+ whatever else is hanging around through pair production) are somewhat vast, never mind what's going on in my glass of water at the quark level. If we want to say something useful about my glass of water, we have a look at what assumptions we can make and find a simpler theory we can actually work with. Really, what you've stumbled on to is the nasty truth of physics. We're used to hearing it all the time, but usually we don't realise quite what it means and how far reaching it is. Physics is about modelling the universe. Newton's Law of gravity is a model. It works in the weak field limit, but GR is "better". We accept it's not 100% but we know it's pretty darn accurate under certain conditions, and it's a hell of a lot easier to deal with. Here it's obvious. But in the same sense, GR is wrong, the standard model of particle physics is wrong etc. There are some fundamental assumptions being made and we have to restrict ourselves to problems where the assumptions hold, or we go and win a Nobel Prize.
{ "domain": "physics.stackexchange", "id": 16046, "tags": "electromagnetism, mathematical-physics, vector-fields, calculus" }
FtpEasyTransfer - .NET5 Worker Service for Easy FTP Sync'ing
Question: I've created a simple worker service, which uses FluentFTP to sync either files or directories from one ftp client to another, or simply to a local machine, depending on how appsettings.json is configured. Overall I'm pretty happy with the code, but there's certain points I find the code repeats itself or looks a bit messy. This is the first micro-service I've finished, would love some thoughts on how I can improve it. public class FtpWorker : IFtpWorker { private readonly ILogger<FtpWorker> _logger; private TransferSettingsOptions _options; private string _localDirectory; public FtpWorker(ILogger<FtpWorker> logger) { _logger = logger; } public async Task RunAsync(TransferSettingsOptions options) { _options = options; _localDirectory = _options.LocalPath; if (!_options.LocalPathIsFile) { Directory.CreateDirectory(_options.LocalPath); } switch (DetermineRunMode()) { case RunMode.DownloadDir: await RunDownloadDirAsync(); break; case RunMode.DownloadFile: await RunDownloadFileAsync(); break; case RunMode.UploadDir: await RunUploadDirAsync(); break; case RunMode.UploadFile: await RunUploadFileAsync(); break; case RunMode.SyncDirs: await RunSyncDirsAsync(); break; case RunMode.SyncFile: await RunSyncFileAsync(); break; default: break; }; } private async Task RunDownloadDirAsync() { if (_options.Source is not null && !string.IsNullOrWhiteSpace(_options.Source.Server)) { try { await DownloadDirectoryFromSourceAsync(); } catch (Exception ex) { _logger.LogError("Exception in RunDownloadDirAsync: {Message}", ex.Message); } } } private async Task RunDownloadFileAsync() { if (_options.Source is not null && !string.IsNullOrWhiteSpace(_options.Source.Server)) { try { await DownloadFileFromSourceAsync(); } catch (Exception ex) { _logger.LogError("Exception in RunDownloadDirAsync: {Message}", ex.Message); } } } private async Task RunUploadDirAsync() { throw new NotImplementedException(); } private async Task RunUploadFileAsync() { if (_options.Destination != null && 
!string.IsNullOrWhiteSpace(_options.Destination.Server)) { try { await UploadFileToDestinationAsync(); } catch (Exception ex) { _logger.LogError("Exception in RunUploadFileAsync: {Message}", ex.Message); } } else { _logger.LogError("Destination or DestinationServer empty in RunUploadFile"); } } private async Task RunSyncDirsAsync() { if (_options.Source != null || !string.IsNullOrWhiteSpace(_options.Source.Server)) { try { await DownloadDirectoryFromSourceAsync(); } catch (Exception ex) { _logger.LogError("Exception in DownloadFromSource: {Message}", ex.Message); } } else { _logger.LogDebug("No source configured."); } foreach (var opt in _options.ChangeExtensions) { ChangeFileExtensions(opt); } if (_options.Destination != null || !string.IsNullOrWhiteSpace(_options.Destination.Server)) { try { await UploadDirectoryToDestinationAsync(); } catch (Exception ex) { _logger.LogError("Exception in UploadToDestination: {Message}", ex.Message); } } else { _logger.LogDebug("No destination configured."); } } private Task RunSyncFileAsync() { throw new NotImplementedException(); } private async Task<List<FtpResult>> DownloadDirectoryFromSourceAsync() { var token = new CancellationToken(); using (var ftp = new FtpClient(_options.Source.Server, _options.Source.Port, _options.Source.User, _options.Source.Password)) { ftp.OnLogEvent += Log; await ftp.ConnectAsync(token); var rules = new List<FtpRule> { new FtpFileExtensionRule(true, _options.Source.FileTypesToDownload) }; var results = await ftp.DownloadDirectoryAsync(_options.LocalPath, _options.Source.RemotePath, FtpFolderSyncMode.Update, FtpLocalExists.Skip, FtpVerify.None, rules); if (_options.Source.DeleteOnceDownloaded) { foreach (var download in results) { if (download.IsSuccess && download.Type == FtpFileSystemObjectType.File) { await ftp.DeleteFileAsync(download.RemotePath); } } } foreach (var download in results) { if (download.IsFailed) { _logger.LogWarning("Download of {Name} failed: {Exception}", download.Name, 
download.Exception); } } return results; } } private async Task<FtpStatus> DownloadFileFromSourceAsync() { var token = new CancellationToken(); using (var ftp = new FtpClient(_options.Source.Server, _options.Source.Port, _options.Source.User, _options.Source.Password)) { ftp.OnLogEvent += Log; await ftp.ConnectAsync(token); var overwriteExisting = _options.Source.OverwriteExisting ? FtpLocalExists.Overwrite : FtpLocalExists.Skip; string localPath = _options.Destination.RemotePath; if (!_options.LocalPathIsFile) { var fileName = Path.GetFileName(_options.Source.RemotePath); localPath = $"{_options.LocalPath}/{fileName}"; } var result = await ftp.DownloadFileAsync(localPath, _options.Source.RemotePath, overwriteExisting); if (_options.Source.DeleteOnceDownloaded) { if (result.IsSuccess()) { try { await ftp.DeleteFileAsync(_options.Source.RemotePath, token); } catch (Exception ex) { _logger.LogWarning("Error deleting {RemotePath}: {Message}", _options.Source.RemotePath, ex.Message); } } } return result; } } private void ChangeFileExtensions(ChangeExtensionsOptions options) { foreach (var file in Directory.GetFiles(_localDirectory, $"*.{options.Source}")) { var newFileName = @$"{_localDirectory}\{Path.GetFileNameWithoutExtension(file)}.{options.Target}"; try { File.Move(file, newFileName, true); } catch (Exception ex) { _logger.LogWarning("Moving file {file} failed: {Message}", file, ex.Message); } } } private async Task<FtpStatus> UploadFileToDestinationAsync() { var token = new CancellationToken(); using (var ftp = new FtpClient(_options.Destination.Server, _options.Destination.Port, _options.Destination.User, _options.Destination.Password)) { ftp.OnLogEvent += Log; await ftp.ConnectAsync(token); var overwriteExisting = _options.Destination.OverwriteExisting ? 
FtpRemoteExists.Overwrite : FtpRemoteExists.Skip; string remotePath = _options.Destination.RemotePath; if (!_options.Destination.RemotePathIsFile) { var fileName = Path.GetFileName(_options.LocalPath); remotePath = $"{_options.Destination.RemotePath}/{fileName}"; } var result = await ftp.UploadFileAsync(_options.LocalPath, remotePath, overwriteExisting); if (_options.Destination.DeleteOnceUploaded) { if (result.IsSuccess()) { try { if (_options.LocalPathIsFile) { File.Delete(_options.LocalPath); } } catch (Exception ex) { _logger.LogWarning("Error deleting {LocalPath}: {Message}", _options.LocalPath, ex.Message); } } } return result; }; } private async Task<List<FtpResult>> UploadDirectoryToDestinationAsync() { var token = new CancellationToken(); using (var ftp = new FtpClient(_options.Destination.Server, _options.Destination.Port, _options.Destination.User, _options.Destination.Password)) { ftp.OnLogEvent += Log; await ftp.ConnectAsync(token); var results = await ftp.UploadDirectoryAsync(_options.LocalPath, _options.Destination.RemotePath, FtpFolderSyncMode.Update, FtpRemoteExists.Skip, FtpVerify.None); if (_options.Destination.DeleteOnceUploaded) { foreach (var upload in results) { if (upload.IsSuccess) { try { File.Delete(upload.LocalPath); _logger.LogInformation("File deleted: {LocalPath}", upload.LocalPath); } catch (Exception ex) { _logger.LogWarning("Error deleting file {LocalPath}: {Message}", upload.LocalPath, ex.Message); } } } } foreach (var upload in results) { if (upload.IsFailed) { _logger.LogWarning("Upload of {LocalPath} failed: {Exception}", upload.LocalPath, upload.Exception); } } return results; } } private RunMode DetermineRunMode() { if (_options.LocalPathIsFile) { _logger.LogDebug("Local Path: {LocalPath} is file, RunMode determined as UploadFile", _options.LocalPath); return RunMode.UploadFile; } else if (_options.Source is not null && _options.Destination is not null) { if (_options.Source.RemotePathIsFile) { _logger.LogDebug("Source & 
Destination defined, Source.RemotePath: {RemotePath} is file, RunMode determined as SyncFile", _options.Source.RemotePath); return RunMode.SyncFile; } else { _logger.LogDebug("Source & Destination defined, Source.RemotePath: {RemotePath} is directory, RunMode determined as SyncDirs", _options.Source.RemotePath); return RunMode.SyncDirs; } } else if (_options.Source is null && _options.Destination is not null) { if (_options.Destination.RemotePathIsFile) { _logger.LogDebug("Only Destination defined, Destination.RemotePath: {RemotePath} is file, RunMode determined as UploadFile", _options.Destination.RemotePath); return RunMode.UploadFile; } else { _logger.LogDebug("Only Destination defined, Destination.RemotePath: {RemotePath} is directory, RunMode determined as UploadDir", _options.Destination.RemotePath); return RunMode.UploadDir; } } else { if (_options.Source.RemotePathIsFile) { _logger.LogDebug("Only Source defined, Source.RemotePath: {RemotePath} is file, RunMode determined as DownloadFile", _options.Source.RemotePath); return RunMode.DownloadFile; } else { _logger.LogDebug("Only Source defined, Source.RemotePath: {RemotePath} is directory, RunMode determined as DownloadDir", _options.Source.RemotePath); return RunMode.DownloadDir; } } } I'm curious if there's an easier way to handle the using statements, as I seem to be passing in params in the same way every time, is there a better way? Answer: Quick remarks: I notice you always use ex.Message. However, what if you've got an InnerException? I'd recommend an approach like this (I've copied the code of the method below; note that you can adapt the string.Join to your own liking, of course). Also, I'd also recommend to log the entire stack trace as well, in case you run into an exception where the message doesn't tell you enough. 
public static string Execute(Exception exc) { var messages = new List<string>(); do { messages.Add(exc.Message); exc = exc.InnerException; } while (exc != null); return string.Join(" - ", messages); } In several places you use new FtpClient(_options.Source.Server, _options.Source.Port, _options.Source.User, _options.Source.Password). Move this to a method and call that method. Same with new FtpClient(_options.Destination.Server, _options.Destination.Port, _options.Destination.User, _options.Destination.Password). Matter of fact, if _options.Source and _options.Destination are the same type (which I'd expect, but you haven't posted this class), I'd recommend a method that accepts this class as a parameter and returns an FtpClient. DetermineRunMode() is too noisy and repetitive and inelegant for me. I'd favor an approach where you'd determine various factors (e.g. whether both Source and Destination are defined) and at the end compile a message, e.g. var sourceIsNotNull = _options.Source is not null; var destinationIsNotNull = _options.Destination is not null; var message = (sourceIsNotNull && destinationIsNotNull) ? "Source & Destination defined" : sourceIsNotNull ? "Only Source defined" : "Only Destination defined"; Maybe you could have a method for each "factor" e.g. which is defined, what the RunMode is, whether RemotePath is a file or a folder,..., perhaps even move all that to a separate class (called RunModeRetriever or alike). (Also, considering you're doing a return, I don't think all those elses are even necessary.) You check too much in the methods themselves. In RunDownloadDirAsync you already know that _options.Source is not null (because that is checked in DetermineRunMode and RunDownloadDirAsync is called due to the result of that method call) so there is no need to be "extra careful": this only adds noise to your logic. 
You aren't even consistent: you log _logger.LogError("Destination or DestinationServer empty in RunUploadFile");, but there is no equivalent for string.IsNullOrWhiteSpace(_options.Source.Server). You should move all those checks before executing DetermineRunMode, perhaps even rethink that logic into a "data check" class which looks at the TransferSettingsOptions, verifies all the necessary data is in there, returns a "fail" if there is required data missing combined with a report of what is missing, and returns a "success" when all is OK, and perhaps also determines the RunMode while doing all those checks. Or perhaps you should group the data checks, return a report, and based on that report either you stop execution of the method and report to the user why you have stopped execution, or you continue the execution by determining the RunMode (using the data in the data check report), and then call the relevant method without having to worry that a certain setting is missing. The important point is to group your functionality, that way you can remove redundant code and improve the logic flow of the method. Each method has a try...catch. Why not put the try...catch around the switch (DetermineRunMode()) instead? Just make sure to include the RunMode when you log any eventual Exception.
{ "domain": "codereview.stackexchange", "id": 40854, "tags": "c#, .net-5" }
Is it theoretically possible to prevent or deter a star from becoming a black hole?
Question: Suppose NASA decides too many black holes are being produced in a year, so it decides to do something about that. Given our current scientific knowledge, is there theoretically any way NASA could accomplish this non-trivial feat? Answer: As long as it's not a black hole yet, you can always pull mass out of the star until it's too light to collapse into a singularity. Physically accomplishing that is far outside the range of current energy capacities, but there's nothing particularly hard about it from a physics standpoint. Just slam a high speed object into the star's edge and the momentum will knock some nuclear soup past escape velocity. Repeat for a few billion years. Problem solved.
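A rough back-of-the-envelope estimate (my own illustration, using round placeholder numbers for a heavy progenitor star, not figures from any study) shows why "pull mass out of the star" is so far beyond current capacities: lifting even one solar mass off the surface to infinity takes an enormous amount of energy.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
M_star = 25 * M_sun  # a heavy progenitor that could collapse to a black hole
R_star = 1e10        # m, rough radius assumed for a large star

dm = M_sun           # mass we want to remove
# Energy to move dm from the surface to infinity: G * M * dm / R
E = G * M_star * dm / R_star
print(f"{E:.2e} J")  # of order 1e41-1e42 J, many orders beyond annual human energy use
```

The numbers are only order-of-magnitude placeholders, but the conclusion is robust: nothing forbids it physically, and everything forbids it practically.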
{ "domain": "physics.stackexchange", "id": 24612, "tags": "black-holes" }
AdS/CFT with circle boundary
Question: I am having a hard time understanding the holographic geometry of a CFT on a circle. Say we have a CFT on a circle of perimeter $L$. And it extends in the interior to a gravitational solution, say a BTZ black hole or AdS. Consider the case of constant time. On one side, the perimeter of the circle is finite $=L$ and on the other side the interior geometry has a radius of infinite range $r\in[0,\infty)$. Can someone explain how this is possible? Answer: What you're seeing is that if we limit ourselves to the simplest definition of "boundary", then AdS doesn't have one. Taking $z \to 0$ in Poincare co-ordinates \begin{equation} ds^2 = \frac{R^2}{z^2} \left [ dz^2 + \eta_{\mu\nu} dx^\mu dx^\nu \right ] \end{equation} for instance doesn't give the flat metric until we also strip off an overall factor that would diverge. Therefore what AdS does have is a conformal boundary. The gravitational solutions in it determine a CFT on an equivalence class of metrics related by a Weyl transformation. This can be seen well in the notes https://arxiv.org/abs/1608.04948 by Penedones. Starting from Euclidean $AdS_{d + 1}$ defined as the hyperboloid \begin{equation} -X_0^2 + X_1^2 + \dots + X_{d + 1}^2 = -R^2, \end{equation} he discusses different ways to parameterize it. The metrics obtained this way just differ by co-ordinate choices from the bulk perspective. But looking at the conformal boundary, you can get flat space, cylinders and spheres to name a few. We can see that this is inevitable by going to asymptotically large $X_i$ where the hyperboloid becomes the null cone \begin{equation} -P_0^2 + P_1^2 + \dots + P_{d + 1}^2 = 0 \end{equation} where the conformal group acts linearly. Nullity is only one constraint in a $d + 2$ dimensional space so, to describe $CFT_d$, we also have to choose a section of this cone. There are many sources including https://arxiv.org/abs/1107.3554 which discuss this construction not in a holographic context. 
But they are implicitly focusing on CFTs in $\mathbb{R}^d$ so they choose \begin{equation} (P_+, P_-, P_\mu) = (1, x^2, x^\mu). \end{equation} This is called the Poincare section which matches up nicely with the choice to use Poincare co-ordinates in AdS. But a compact boundary, such as the one you're describing, will require a different choice which is the one for global AdS.
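That the Poincare section always lands on the null cone can be checked mechanically. The little sketch below is my own illustration, assuming the common lightcone convention $P^2 = -P_+ P_- + \sum_\mu P_\mu^2$ (sign conventions vary between sources):

```python
import random

def p_squared(x):
    """Norm of the embedding vector (P_+, P_-, P_mu) = (1, x^2, x^mu)
    in the lightcone convention P^2 = -P_+ P_- + sum_mu P_mu^2."""
    x2 = sum(c * c for c in x)
    p_plus, p_minus, p_mu = 1.0, x2, x
    return -p_plus * p_minus + sum(c * c for c in p_mu)

for _ in range(5):
    x = [random.uniform(-10, 10) for _ in range(4)]  # a point in R^d, here d = 4
    print(abs(p_squared(x)) < 1e-9)  # the section lies on the null cone
```

By construction $-P_+ P_- + x^2 = -x^2 + x^2 = 0$, so every point of the section is null, as required for it to describe a $CFT_d$.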
{ "domain": "physics.stackexchange", "id": 80042, "tags": "general-relativity, conformal-field-theory, ads-cft, holographic-principle" }
Difference between measurement, state and system in quantum mechanics
Question: This question refers to the following point made in Susskind's book Quantum Mechanics- The Theoretical Minimum: In the classical world, the relationship between the state of a system and the result of a measurement on that system is very straightforward. In fact, it’s trivial. The labels that describe a state (the position and momentum of a particle, for example) are the same labels that characterize measurements of that state. To put it another way, one can perform an experiment to determine the state of a system. In the quantum world, this is not true. States and measurements are two different things, and the relationship between them is subtle and nonintuitive. I'm not sure I understand the last line since it seems to imply that states and measurements are not "different things" in the classical realm. Are both of them the "same" in the sense that they both refer to a point in the system's phase space? Of course, any particular state would uniquely specify a measurement in classical mechanics and conversely a set of measurements would uniquely specify a state. Such a correspondence doesn't exist in quantum mechanics. So is that what the author means by "states and measurements are two different things"? Finally, what does the author mean by "labels"? Do they simply refer to the values of the various degrees of freedom of the system? Now, coming to the next part: Attached to the electron is an extra degree of freedom called its spin. [...] We can and will abstract the idea of a spin, and forget that it is attached to an electron. The quantum spin is a system that can be studied in its own right. Why is the spin being called a "system"? Isn't a system supposed to be something physical instead of a mathematical abstraction? (And it's defined as a degree of freedom in the first place- from what I understand, a degree of freedom is meant to characterize a physical system.) 
Answer: Are both of them the "same" in the sense that they both refer to a point in the system's phase space? Yes, this is how we identify a point in the system's phase space. We look for what measurements uniquely define a state, and the results of these measurements are used to describe that state. This only works because if the system is at a specific point in phase space, it will - classically - always yield the same measurement. This is related to: Finally, what does the author mean by "labels"? Do they simply refer to the values of the various degrees of freedom of the system? Yes. For a free particle, these labels might be "position" (times 3), and "momentum" (times 3). Since these labels uniquely define the state, and we can also perform measurements on these quantities that - given the same state - will always yield the same result, we can use the results of the measurements to uniquely define a point in phase space. That is, to label them with the measurement results. Such a correspondence doesn't exist in quantum mechanics. So is that what the author means by "states and measurements are two different things"? Exactly. Because things are no longer deterministic (for non-commuting operators, anyway), we can no longer use the results of measurements to define a state, because even if two systems are in the same quantum state (ignoring the Pauli principle for a second), if we measure all degrees of freedom for both systems, the results will almost certainly vary. So the results of all possible measurements are no longer suitable to identify a state. Some measurements are still usable as identifiers ("labels"), if the system is in a certain state - a so-called eigenstate of this operator. These are then called quantum numbers. Complications arise because some measurements influence each other - that is when they do not commute. Whenever this is the case, e.g. position and momentum, the system cannot be in an eigenstate of both of these operators. 
This is why we can no longer use all possible measurements as identifiers. Furthermore, a general state is a linear combination of eigenstates of some set of operators, which also leads to varying outcomes of measurements. Why is the spin being called a "system"? This is a bit of hair-splitting over terminology. Usually, a system is some physical part of the universe that we want to investigate and describe, and it interacts with (or is maybe isolated from) the environment. In this specific case it means that we can decouple the description of some physical thing (the spin) from a different physical thing (the electron) it is usually attached to.
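The point that identical quantum states need not give identical measurement records can be made concrete with a spin-1/2 toy model (my own sketch, using the standard Born-rule probabilities for a $|{+}z\rangle$ state):

```python
import random

# Two systems prepared in the identical state |+z>.
# Measuring S_z is deterministic; measuring S_x is not.

def measure_sz():
    """|+z> is an eigenstate of S_z: the outcome is always +1 (a usable 'label')."""
    return +1

def measure_sx():
    """|+z> = (|+x> + |-x>)/sqrt(2): outcomes +1 and -1 each with probability 1/2."""
    return random.choice([+1, -1])

record_a = [measure_sx() for _ in range(20)]
record_b = [measure_sx() for _ in range(20)]
print(record_a == record_b)  # almost certainly False, despite identical states
print(all(measure_sz() == +1 for _ in range(20)))  # the S_z eigenvalue is a quantum number
```

The $S_z$ outcome can serve as a quantum number precisely because the state is an eigenstate of that operator; the $S_x$ record cannot, which is why "all possible measurements" no longer label a quantum state.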
{ "domain": "physics.stackexchange", "id": 42565, "tags": "quantum-mechanics, hilbert-space, measurements, measurement-problem, quantum-states" }
Why map frame is not continuous?
Question: In rep-0105, it says "The map frame is not continuous, meaning the pose of a mobile platform in the map frame can change in discrete jumps at any time." In my opinion, the map frame (in a navigation application) is a global coordinate frame, its three axes are continuous, and the robot pose in that frame can change over time. Is that right? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2012-02-02 Post score: 1 Answer: The map frame is the result of a localization node. Typical global localization algorithms such as SLAM are absolute, but movement in the map frame is not continuous, due to sensor noise for instance. Therefore, it is better not to inject these poses directly into the robot command. In contrast, the odometry provided by the PR2 wheel encoders, for instance, is continuous but becomes less and less precise over time. See robot_pose_ekf for instance as a way to merge both strategies. So yes, you are right, the map frame provides a global position of the robot base link. However, depending on your localization algorithm this position may "jump". Originally posted by Thomas with karma: 4478 on 2012-02-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Thomas on 2012-02-02: map is a frame which does not move. The transformation linking map to base_link changes over time and is updated by a localization node. Comment by tfoote on 2012-02-02: It would be better to word it as "Movement in the map frame is not continuous." Comment by sam on 2012-02-02: So the map frame (a coordinate frame) is fixed, not moved or changed by SLAM or the localization node, and the only thing that moves is the robot pose, right?
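The "smooth odometry, jumpy map pose" distinction can be illustrated with a toy one-dimensional simulation (this is my own sketch, not ROS code; drift rate and update interval are made-up numbers):

```python
# Odometry drifts smoothly; the localization correction applied in the map
# frame arrives only at discrete update times, so the map pose jumps.
true_x = [0.1 * i for i in range(50)]                    # robot moves 0.1 m/step
odom_x = [x + 0.005 * i for i, x in enumerate(true_x)]   # continuous but drifting

map_x = []
correction = 0.0
for i, x in enumerate(odom_x):
    if i % 10 == 0:                  # localization update every 10 steps
        correction = true_x[i] - x   # snap back toward the true pose
    map_x.append(x + correction)

steps = [map_x[i + 1] - map_x[i] for i in range(len(map_x) - 1)]
print(min(steps), max(steps))  # increments are uneven: the pose jumps at updates
```

Between updates the map pose advances by the same smooth increment as odometry; at each update the correction makes the increment suddenly different, which is exactly the discontinuity rep-0105 warns about.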
{ "domain": "robotics.stackexchange", "id": 8088, "tags": "navigation, mapping" }
How does the CHC Algorithm deal with child populations with lower fitness?
Question: I am basing my question on the pseudocode for the CHC Adaptive Search Algorithm by Eshelman given in this answer by deong: delta = k/4 # k = chromosome length while not done create new child population for i = 1 to n/2 # n = population size select p1, p2 from population without replacement if hamming_distance(p1, p2) > delta c1, c2 = HUX crossover(p1, p2) insert c1, c2 into child pop end if end for if child pop is empty delta = delta - 1 else take best n individuals from union of parent and child populations as next population end if if delta < 0 keep one copy of best individual in population generate n-1 new population members by flipping 35% of the bits of the best individual delta = k/4 end if How does this algorithm deal with situations where the child population has lower fitness than the parent population? Consider a population of two parents with a large enough Hamming distance to produce children. Now if both children have a lower fitness than their parents, doesn't the survivor selection ("take best n") select the same parents again, resulting in an infinite loop? Answer: CHC never has a next generation's population with lower fitness than the current one due to the use of a truncation selection step. It combines the parents and offspring, sorts by fitness, and takes the best $n$ individuals. So if including any child would lower the fitness of the population, it would just select the parents as the next generation and the average fitness would stay the same. In principle, you could indefinitely select the same two parents over and over, continually producing inferior offspring. But generally you're running CHC with a population size of 50 or more, so you don't expect to continually pair the same two parents. Also, it's technically an infinite loop anyway, as most GAs are. In practice you define some stopping condition based on evaluation count or wall clock time or whatever.
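The truncation selection step described above is a one-liner. The sketch below (my own illustration, representing individuals by their fitness values only) shows that when the children are all worse, the parents simply survive unchanged, so mean fitness never decreases:

```python
def truncation_select(parents, children, n):
    """CHC-style survivor selection: keep the best n individuals
    from the union of the parent and child populations."""
    return sorted(parents + children, reverse=True)[:n]

# Children happen to be worse than every parent.
parents = [9.0, 8.0, 7.0, 6.0]
children = [3.0, 2.0]
next_pop = truncation_select(parents, children, len(parents))
print(next_pop)  # [9.0, 8.0, 7.0, 6.0] -- the parents survive unchanged
assert sum(next_pop) >= sum(parents)  # mean fitness can never decrease
```

This is "elitist" selection taken to the extreme: the loop isn't infinite because fitness drops, only because nothing forces it to stop; progress resumes when a child beats the worst survivor, and the delta/restart mechanism handles stagnation.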
{ "domain": "cstheory.stackexchange", "id": 4399, "tags": "genetic-algorithms" }
What helps us carry things around?
Question: Say I move from point A to point B. I carry an apple in my hand. What causes the apple to move from point A to B? In other words, what force is responsible for keeping the apple stuck to my hand and accelerating it with the same acceleration as me? Is it friction or normal reaction? What is the work done by each? Please mention the sign too. Answer: Puk is correct, but I'd like to elaborate on his answer a little bit. If you just place the apple on your hand (and don't grip it), the friction from your hand is responsible for moving the apple. Let's assume that you, your hand, and the apple are moving to the right. Now think of it this way: when the hand begins moving, the apple is not moving. So, if you take the hand as your reference frame, you could say the apple is trying to move to the left, and therefore there will be a static friction opposing this, pointing in the opposite direction (to the right). This static friction is what causes the apple to stay on the hand as you continue moving. Lastly, adding to Puk's answer about the net work, if you stop moving (and the apple stops moving), the net work you did will be zero. This is because when you initially begin to move, you accelerate the apple by applying a positive force in the right direction. This force multiplied by the positive distance travelled while it is being applied is the work done during this period. But then, when you slow down to a stop, you're applying a friction force in the negative/left direction, so the work done would be this negative force multiplied by a still-positive distance moved during this deceleration. The work done during acceleration and deceleration then cancels out, making the net work zero.
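The cancellation in the last paragraph can be condensed with the work-energy theorem (a sketch; $m$ and $v$, the apple's mass and cruising speed, are introduced here only for illustration). The friction force does $+\tfrac{1}{2}mv^{2}$ of work while the apple speeds up from rest and $-\tfrac{1}{2}mv^{2}$ while it slows back to rest, so

$$W_{\text{net}} = W_{\text{acc}} + W_{\text{dec}} = \tfrac{1}{2}mv^{2} - \tfrac{1}{2}mv^{2} = 0.$$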
{ "domain": "physics.stackexchange", "id": 60252, "tags": "homework-and-exercises, newtonian-mechanics, friction, everyday-life, free-body-diagram" }
Sending a string with a certain path
Question: I want to send a string with the following path, and I want to verify if this is the right way to do that in JavaScript. Is this the right way to use the return in an if/else, or is there a shorter way? Is indexOf being used correctly? https://test.com/plugins/plug/pluginr/poc/fact function isExt(path) { if(path.indexOf("/plug/pluginr/")) { return true; } else { return false; } } Answer: Your code is not correct. In case the string doesn't contain the substring /plug/pluginr/, it would still return true, since indexOf returns an integer giving the position of the first occurrence. If there's no occurrence of the substring, it returns -1, which evaluates to true in JavaScript. You'd better use this instead: function isExt(path){ return path.indexOf("/plug/pluginr/") > -1; }
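The same truthiness pitfall can be demonstrated in Python, whose str.find likewise returns the match position or -1 (a sketch; the is_ext helper mirrors the corrected JavaScript, and the example paths are hypothetical):

```python
path = "/plug/pluginr/poc/fact"

pos = path.find("/plug/pluginr/")   # 0: the substring starts at index 0
missing = path.find("/nope/")       # -1: substring absent

# Using the raw return value as a boolean misfires in both directions:
assert bool(missing) is True   # "not found" (-1) is truthy
assert bool(pos) is False      # a match at index 0 is falsy

# The robust check compares against -1 explicitly, as in the answer:
def is_ext(path):
    return path.find("/plug/pluginr/") > -1
```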
{ "domain": "codereview.stackexchange", "id": 11295, "tags": "javascript" }
Java basic user account registration
Question: I made a very simple user account registration program that works in the terminal using no external libraries other than java.util.Scanner. I want to know what improvements I can make to this code to make it more efficient and more readable: private static void newAccount(){ boolean next = false; System.out.println("Welcome to the User Account registration."); Scanner scan = new Scanner(System.in); String username = null; String password = null; String email = null; while(!next){ System.out.println("Username:"); username = scan.nextLine(); if(username.length() < 3 || username.length() > 20){ System.out.println("Your username must be between 3-20 characters in length"); } else { next = true; } } next = false; while(!next){ System.out.println("Password:"); password = scan.nextLine(); if(password.length() < 8){ System.out.println("Your password must be at least 8 characters in length"); } else { next = true; } } next = false; while(!next){ System.out.println("Email:"); email = scan.nextLine(); if(!email.contains("@")){ System.out.println("Your email is invalid"); } else { next = true; } } next = false; while(!next){ System.out.println("Confirm Email:"); String confirmEmail = scan.nextLine(); if(!confirmEmail.equals(email)){ System.out.println("This email does not match the email above"); } else { next = true; } } next = false; while(!next){ System.out.println("Does the following information look the same? (Y or N)\nUsername: " + username + "\nPassword: " + password + "\nEmail: " + email); String response = scan.nextLine(); if(response.equals("Y") || response.equals("y")){ System.out.println("Congratulations on completing the User Account Registration!"); next = true; } else if(response.equals("N") || response.equals("n")){ newAccount(); } } } Answer: The logic to request the user data is fairly similar. 
It can be extracted to a method (in pseudocode): inputInvalid = true while (inputInvalid) { print(inputToBeRequested) input = requestInput() if (!isValid(input)) print(errorMessage) else inputInvalid = false } Now, you just need to fill in the gaps. For this, you could use a consumer/supplier approach, or the one I'd rather go with, as I think it makes the code more readable, the method template pattern: import java.util.Scanner; public class UserRegistration { private String username; private String password; private String email; public static void main(String[] args) { new UserRegistration().newAccount(); } public void newAccount() { UsernameRetriever usernameRetriever = new UsernameRetriever(); PasswordRetriever passwordRetriever = new PasswordRetriever(); EmailRetriever emailRetriever = new EmailRetriever(); ConfirmationRetriever confirmationRetriever; do { username = usernameRetriever.requestData(); password = passwordRetriever.requestData(); email = emailRetriever.requestData(); confirmationRetriever = new ConfirmationRetriever(username, password, email); } while (!confirmationRetriever.isResponseYes(confirmationRetriever.requestData())); System.out.println("Registered user with: " + "Username=" + username + ", " +" Password=" + password + ", " + "Email=" + email); } static abstract class UserDataRetriever { private static Scanner scanner = new Scanner(System.in); public String requestData() { String input; boolean inputIsInvalid; do { System.out.println(getDataRequestMessage()); input = scanner.nextLine(); inputIsInvalid = !isValid(input); if (inputIsInvalid) { System.out.println(getInvalidInputMessage()); } } while (inputIsInvalid); return input; } protected abstract String getInvalidInputMessage(); protected abstract String getDataRequestMessage(); protected abstract boolean isValid(String input); } static class UsernameRetriever extends UserDataRetriever { @Override protected String getInvalidInputMessage() { return "Your username must be between 3-20 
characters in length"; } @Override protected String getDataRequestMessage() { return "Username:"; } @Override protected boolean isValid(String username) { return username.length() >= 3 && username.length() <= 20; } } static class EmailRetriever extends UserDataRetriever { private boolean confirmEmail = false; private String email; @Override public String requestData() { email = super.requestData(); confirmEmail = true; super.requestData(); return email; } @Override protected String getInvalidInputMessage() { return confirmEmail ? "This email does not match the email above" : "Your email is invalid" ; } @Override protected String getDataRequestMessage() { return confirmEmail ? "Confirm Email:" : "Email:"; } @Override protected boolean isValid(String email) { return confirmEmail ? this.email.equals(email) : email.contains("@"); } } static class PasswordRetriever extends UserDataRetriever { @Override protected String getInvalidInputMessage() { return "Your password must be at least 8 characters in length"; } @Override protected String getDataRequestMessage() { return "Password:"; } @Override protected boolean isValid(String password) { return password.length() >= 8; } } static class ConfirmationRetriever extends UserDataRetriever { private String username; private String password; private String email; public ConfirmationRetriever(String username, String password, String email) { this.username = username; this.password = password; this.email = email; } @Override protected String getInvalidInputMessage() { return "Please, answer with just 'Y' or 'N'"; } @Override protected String getDataRequestMessage() { return "Does the following information look the same? 
(Y or N)\n" + "Username: " + username + "\n" + "Password: " + password + "\n" + "Email: " + email; } @Override protected boolean isValid(String response) { return isResponseYes(response) || isResponseNo(response); } public boolean isResponseYes(String response) { return response.equals("Y") || response.equals("y"); } public boolean isResponseNo(String response) { return response.equals("N") || response.equals("n"); } } } Now, I would personally have each class on their own file. Rather than them being inner classes in UserRegistration. But I wrote it like that so that by simply copy-pasting, the code can be run. Also, we could argue that the code is now way longer. However, that is not necessarily worse. The way the code is written now, if you just read the newAccount method, it is fairly easy to tell what is going on. The UserDataRetriever has a very simple and specific purpose, and each of its children are really easy to debug. Say, for example, the email validation needs to be changed now. Then, you just need to look for the isValid(String) method in EmailRetriever, rather than having to read through the newAccount method and look for where the email is validated.
{ "domain": "codereview.stackexchange", "id": 41555, "tags": "java" }
What is hydraulic diffusivity?
Question: I'm having trouble grasping which properties of the soil are described by hydraulic diffusivity (sometimes referred to as soil-moisture diffusivity, which is based on the diffusion equation). What does hydraulic diffusivity describe and how is it related to hydraulic conductivity? Answer: So let's get through some definitions. I will not discuss the derivations, but you can look them up in the source I provided if you want to. We find ourselves in a porous medium, so we will always have some volume-filling factor $\Theta$ of water in rock. (copyright K. Roth, Heidelberg University) Then we can start with the hydraulic conductivity, which can be defined as $$K = \frac{\Theta \left< \kappa \nabla P \right>}{\mu}$$ Here we have quite some notation to clarify: $\Theta$ is the local filling fraction of water in an aquifer; $\mu$ is the molecular diffusivity; and the brackets $\left< ... \right>$ denote the macroscopic average over the microscopic aquifer structure, which is given by the permeability $\kappa$ and the local pressure gradient. The permeability itself is a way to encapsulate how easily water can flow through a given geometry. This is the most rigorous definition I could find in my notes. The lecture, however, is unavailable on the internet, so you'll have to take my word for this. I just mention this as I don't know whether you're more interested in the theoretical side or the pragmatic. The pragmatic approach is usually to simply state Darcy's law $$ q = - K \frac{dh}{dz} $$ relating the water mass flux $q = \rho v$ to the gradient of hydraulic head. We see now that a flow velocity will result from a pressure gradient, but strongly modified through the microscopic properties encapsulated in $K$. The hydraulic diffusivity you are searching for is then written as $$D = \frac{K}{S}$$ where $S$ is the water storage fraction, with $S \sim \Theta$ (not strictly correct).
So in a way this just cancels the $\Theta$ in $K$, and $D$ appears in the pressure diffusion equation: $$ \partial_t P = D \Delta P $$ and therefore $D$ gives the local diffusion speed of pressure disturbances (which is, interestingly, not the speed of sound $c_s$ in aquifers!). We could now say, for educational purposes, that the difference between $K$ and $D$ is only $\Theta$. This makes sense when we look at the context where those two parameters appear: $K$ gives the strength of mass flow for the fluid; the better filled the volume is, the more mass will flow. Pressure, however, does not care for the amount of water that flows, as it exists everywhere water is sitting at a given moment, so the coefficient for its propagation does not depend on the filling of the volume. You can look up the derivations, for example, here. Tell me in the comments if anything is unclear.
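To make the relation $D = K/S$ concrete, here is a small numerical sketch (the values of $K$ and $S$ are hypothetical, chosen only for illustration), together with one explicit finite-difference step of the 1-D pressure diffusion equation $\partial_t P = D\,\partial_z^2 P$ from the answer:

```python
K = 1e-5   # hydraulic conductivity (hypothetical value)
S = 0.3    # storage fraction, S ~ Theta (hypothetical value)
D = K / S  # hydraulic diffusivity appearing in the pressure equation

# One explicit step for a pressure pulse on a 1-D grid: D sets how
# fast the disturbance leaks into the neighbouring cells.
P = [0.0, 0.0, 1.0, 0.0, 0.0]   # pressure pulse at the centre
dx, dt = 1.0, 1.0
P_next = [P[i] + D * dt / dx**2 * (P[i - 1] - 2 * P[i] + P[i + 1])
          for i in range(1, len(P) - 1)]
# The central value decays and the neighbours pick up a share
# proportional to D, i.e. a larger D means faster relaxation.
```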
{ "domain": "earthscience.stackexchange", "id": 262, "tags": "hydrology, soil, soil-moisture" }
What neuro-motor diseases cause the lower esophageal sphincter (LES) to malfunction?
Question: Please note: This question is neither homework nor seeking medical advice. I'm simply asking for a factual, objective, biological explanation of the various neuro-motor diseases/illnesses that can act as the underlying cause of LES malfunction. The primary function of the lower esophageal sphincter (LES) is to prevent stomach acid from backwashing into the esophagus. This sphincter is an involuntary muscle, and so, for it to malfunction (being unable to fully or properly close), it must be the result of an underlying neuro-motor (autonomic) disease or condition. Which such conditions attack this sphincter/valve, thereby causing it to malfunction and not operate properly? How exactly does such a condition affect the sphincter biologically/neurologically? Update: Looks like the vagus nerve (10th cranial) innervates this sphincter, so I guess the question is: What neuro-motor diseases attack the vagus nerve or cause it to malfunction? Answer: This is called achalasia. According to UpToDate, the mechanisms of this disease are generally unknown. What is known, however, is that the esophagus has a neural system called the myenteric plexus, and if these ganglion cells in the esophagus are damaged in any way (e.g. physically, followed by Wallerian degeneration, the primary process of degeneration after physical damage to neurons), it can lead to lower esophageal dysfunction, as these neurons regulate the inhibition of the contraction of this sphincter muscle. Loss of this inhibition leads to a closed sphincter. Yes, you are right in that the vagus nerve is the primary route from the myenteric plexus to the central nervous system, so damage here might result in dysfunction as well. Histological studies have reported fewer myenteric neurons in people with achalasia, and inflammatory cells in the region, perhaps suggesting an infectious or autoimmune etiology 1. Additionally, from the UpToDate article, Chagas disease has been associated with this disease.
Chagas disease is a parasitic disease caused by the species Trypanosoma cruzi, and these parasites multiply within cells, causing them to burst. Check out the pathophysiology of achalasia on websites such as UpToDate. 1 "Histopathologic features in esophagomyotomy specimens from patients with achalasia." Goldblum JR, Rice TW, Richter JE. Gastroenterology. 1996;111(3):648. EDIT: I answered the opposite question (achalasia is a failure of the LES to open, whereas you asked about a failure to close), but the principles still apply. Here is a wonderful resource on this topic ("Physiology of esophageal motility," http://www.nature.com/gimo/contents/pt1/full/gimo3.html). It includes a diagram showing the vagal neural pathway and the main neurotransmitters responsible for each action (relaxation and contraction).
{ "domain": "biology.stackexchange", "id": 4433, "tags": "human-anatomy, muscles, neurology, gastroenterology, autonomic-nervous-system" }
Project Euler #54 - Poker Streams
Question: This challenge posted by Durron597 intrigued me, and inspired me to answer his question, and also to determine whether a more functional approach was available for poker hand ranking. The problem description is: The file, poker.txt, contains one-thousand random hands dealt to two players. Each line of the file contains ten cards (separated by a single space): the first five are Player 1's cards and the last five are Player 2's cards. You can assume that all hands are valid (no invalid characters or repeated cards), each player's hand is in no specific order, and in each hand there is a clear winner. How many hands does Player 1 win? I decided to use a cascading most-significant-bits type approach to rank hands against each other. In other words, calculate a unique score for each hand. The actual score does not matter; the only reason for the score is to be a relative ranking against another hand. The hand with the larger score wins. The difference between the scores is not important. In order to accomplish this, I broke a Java long value into 8 4-bit segments.

7777 6666 5555 4444 3333 2222 1111 0000
   |    |    |    |    |    |    | --> Lowest ranked card
   |    |    |    |    |    | -------> Second lowest card
   |    |    |    |    | ------------> Third lowest card
   |    |    |    | -----------------> Second highest card
   |    |    | ----------------------> Highest ranked card
   |    | ---------------------------> SMALLSAME - Rank of low pair, if any
   | --------------------------------> LARGESAME - Rank of largest group, if any
-------------------------------------> NAMENIBBLE - Hand type value

There are 13 card ranks, which fit quite nicely in the 16 available values in a nibble. I ordered the cards as 2 through ace, with the values (in hex) of 2 through E. The hand classifications in the highest nibble are a bit more complicated.
The overall classification uses a little trick of bit manipulation too, so I present it in bit (and decimal) format, with a hexadecimal example too:

Type Dec Example  Description
==== === ======== =======================================================
0000   0 000DA742 High card only -> King, 10, 7, 4, 2 (no flush)
0001   1 140DA442 One pair -> King, 10, 4, 4, 2
0010   2 2D4DD442 Two pair -> King, King, 4, 4, 2
0011   3 340DA444 Three of a kind -> King, 10, 4, 4, 4
0100   4 400DCBA9 Straight -> King, Queen, Jack, 10, 9 (no flush)
1000   8 800DA742 Flush -> King, 10, 7, 4, 2
1001   9 94DDD444 Full House -> King, King, 4, 4, 4
1010  10 A40D4444 Four of a kind -> King, 4, 4, 4, 4
1100  12 C00DCBA9 Straight Flush -> King, Queen, Jack, 10, 9
1100  12 C00EDCBA Royal Flush -> Ace, King, Queen, Jack, 10

Notice how the Straight and the Flush bits are 'toggles', and also notice that the Royal Flush is nothing special, just a straight flush starting with an Ace. Some other notes... no hand with any pairs, triples, or quads could ever be a straight or a flush. Using this system, I can relatively easily shift a card's details around in a way that just slots everything into position. Any hand with a higher score than another hand will automatically win. Hands with the same score are a tie. So, the following code is just a way to sort a hand into a bitwise vector using a few tricks to accomplish the task. As an example, it reads the data from the Project Euler website, or from the specified input file, if given. I have tried to use Java 8 streams and lambdas where they make sense.
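As a quick sanity check of the packing scheme, the nibble layout can be reproduced in a few lines (a Python sketch, not part of the original Java; the first example hand matches the 000DA742 high-card row):

```python
def pack(nibbles):
    """nibbles[0] = lowest card ... nibbles[7] = hand-type nibble."""
    score = 0
    for pos, val in enumerate(nibbles):
        score |= val << (4 * pos)
    return score

# High card King, 10, 7, 4, 2: card values 2,4,7,A,D (hex) ascending,
# no group nibbles, hand-type nibble 0.
high_card = pack([0x2, 0x4, 0x7, 0xA, 0xD, 0x0, 0x0, 0x0])
```

The same call with the group nibbles filled in (SMALLSAME = D for the pair of kings, LARGESAME = 4 for the triple, type 9) reproduces the 94DDD444 full-house row.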
import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URI; import java.net.URL; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.Paths; import java.util.IntSummaryStatistics; import java.util.List; import java.util.stream.Collectors; import java.util.stream.IntStream; import java.util.stream.LongStream; import java.util.stream.Stream; public class HandSome { private static final boolean DEBUG = Boolean.getBoolean("DEBUG"); /* 7777 6666 5555 4444 3333 2222 1111 0000 | | | | | | | --> Lowest ranked card | | | | | | -------> Second Lowest card | | | | | ------------> Third lowest card | | | | -----------------> Second highest card | | | ----------------------> Highest ranked card | | ---------------------------> SMALLSAME - Rank of low pair, if any | --------------------------------> LARGESAME - Rank of largest group, if any -------------------------------------> NAMENIBBLE - Hand type value */ // Where to shift important information // - Hand category in most significant. // - rank of most important group (4 of a kind, 3 of a kind, // 3 group in full house, highest pair rank) // - rank of the lesser group (low pair in full house, or 2 pairs) // Remaining lower bits in number represent the individual cards. 
private static final int NAMENIBBLE = 7; // bits 28-31 private static final int LARGESAME = 6; // bits 24-27 private static final int SMALLSAME = 5; // bits 20-23 private static int lookupRank(char c) { switch (c) { case '2' : return 0; case '3' : return 1; case '4' : return 2; case '5' : return 3; case '6' : return 4; case '7' : return 5; case '8' : return 6; case '9' : return 7; case 'T' : return 8; case 'J' : return 9; case 'Q' : return 10; case 'K' : return 11; case 'A' : return 12; } throw new IllegalArgumentException("No such card '" + c + "'."); } private static final int[] REVERSE = { 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 }; // These constants are carefully selected to ensure that // - STRAIGHT is > 3-of-a-kind // - STRAIGHT and FLUSH are less than 4-of-a-kind and full-house. // - STRAIGH + FLUSH (12) is better than others. private static final int STRAIGHT = 4; private static final int FLUSH = 8; // groups representing : // HIGH_CARD, 1_PAIR, 2_PAIR, 3_OF_A_KIND, FULL_HOUSE, 4_OF_A_KIND private static final int[] GROUPSCORE = { 0, 1, 2, 3, 9, 10 }; private static final int[] GROUPS = { groupHash(new int[]{ 1, 1, 1, 1, 1 }), groupHash(new int[]{ 1, 1, 1, 2 }), groupHash(new int[]{ 1, 2, 2 }), groupHash(new int[]{ 1, 1, 3 }), groupHash(new int[]{ 2, 3 }), groupHash(new int[]{ 1, 4 }) }; private static final int groupHash(int[] group) { int ret = 0; for (int i = group.length - 1; i >= 0; i--) { ret |= group[i] << (3 * i); } return ret; } private static final boolean isStraight(int[] ranks) { // true if there are 5 distinct cards // and the highest is 4 larger than the lowest. IntSummaryStatistics stats = IntStream.of(REVERSE) .filter(i -> ranks[i] != 0).summaryStatistics(); return stats.getCount() == 5 && stats.getMax() - stats.getMin() == 4; } private static long shiftCard(long base, int id, int nibble) { // Represent cards nicely in the long base. 
// card 0 (the lowest rank), is shifted to 2, and so on, so that // the ranks of 0 through 12 become hex 2 through E with 10, // Jack, Queen, King, and Ace being represented as A, B, C, D, E // Don't offset the values highest nibble, those are not cards. int offset = nibble == NAMENIBBLE ? 0 : 2; return base | ((long) (id + offset) << (nibble * 4)); } /** * Process an input hand (5 cards) and return a long value that * can be used to compare the value of one hand against another * @param hand The 5 cards to rank * @return the long value representing the hand score. */ public static long scoreHand(List<String> hand) { if (hand.size() != 5) { throw new IllegalArgumentException("Illegal hand " + hand); } // sort the cards we are holding in ascending order of rank. int[] holding = hand.stream().mapToInt(c -> lookupRank(c.charAt(0))) .sorted().toArray(); int[] countRanks = new int[13]; IntStream.of(holding).forEach(r -> countRanks[r]++); // filter and sort the group counts. int countSummary = groupHash(IntStream.of(countRanks).filter(c -> c > 0) .sorted().toArray()); // match the counts against those things that matter final int group = IntStream.range(0, GROUPS.length) .filter(i -> GROUPS[i] == countSummary) .findFirst().getAsInt(); // record each card as values in the low nibbles of the score. long score = IntStream.range(0, 5) .mapToLong(i -> shiftCard(0, holding[i], i)).sum(); // record any group rankings in to the score in the high nibble. score = shiftCard(score, GROUPSCORE[group], NAMENIBBLE); // for no-cards-the-same, look for a flush. if (group == 0 && hand.stream().mapToInt(c -> c.charAt(1)).distinct().count() == 1) { score = shiftCard(score, FLUSH, NAMENIBBLE); } // for no cards the same, look for a straight (could also be a flush) if (group == 0 && isStraight(countRanks)) { score = shiftCard(score, STRAIGHT, NAMENIBBLE); } // if there are cards the same, record the groups in descending // relevance in the mid-tier nibbles. 
if (group != 0) { int[] scounts = IntStream .of(4, 3, 2) .flatMap( c -> IntStream.of(REVERSE).filter( i -> countRanks[i] == c)).limit(2) .toArray(); score = shiftCard(score, scounts[0], LARGESAME); if (scounts.length > 1) { score = shiftCard(score, scounts[1], SMALLSAME); } } if (DEBUG) { System.out.printf("Hand %s scores as %8X\n", hand, score); } return score; } public static long compareHands(String hand) { // Convert the String to separate cards List<String> cards = Stream.of(hand.split(" ")).collect( Collectors.toList()); long handA = scoreHand(cards.subList(0, 5)); long handB = scoreHand(cards.subList(5, 10)); return handA - handB; } public static BufferedReader readSource(String[] args) throws IOException { if (args.length > 0) { return Files.newBufferedReader(Paths.get(args[0])); } URL url = URI.create( "https://projecteuler.net/project/resources/p054_poker.txt") .toURL(); return new BufferedReader(new InputStreamReader(url.openStream())); } public static long countPlayer1Wins(Path path) throws IOException { try (BufferedReader reader = Files.newBufferedReader(path)) { return reader.lines().mapToLong(hands -> compareHands(hands)) .filter(diff -> diff > 0).count(); } } public static void main(String[] args) throws IOException { final long[] times = new long[1000]; final long[] results = new long[1000]; final Path source = Paths.get(args.length == 0 ? "p054_poker.txt" : args[0]); for (int i = 0; i < times.length; i++) { long nano = System.nanoTime(); results[i] = countPlayer1Wins(source); times[i] = System.nanoTime() - nano; } System.out.println(LongStream.of(results).summaryStatistics()); System.out.println(LongStream.of(times).mapToDouble(t -> t / 1000000.0).summaryStatistics()); } } If you want, you can enable the debug output by setting -DDEBUG=true on the java commandline (VM argument, not program argument). 
When you run with debug you get output like:

Hand [KS, 7H, 2H, TC, 4H] scores as    DA742
Hand [2C, 3S, AS, AH, QS] scores as 1E0EEC32
Hand [8C, 2D, 2H, 2C, 4S] scores as 32084222
Hand [4C, 6S, 7D, 5S, 3S] scores as 40076543
Hand [TH, QC, 5D, TD, 3C] scores as 1A0CAA53
Hand [QS, KD, KC, KS, AS] scores as 3D0EDDDC
Hand [4D, AH, KD, 9H, KS] scores as 1D0EDD94
Hand [5C, 4C, 6H, JC, 7S] scores as    B7654
Hand [KC, 4H, 5C, QS, TC] scores as    DCA54

Answer: The implementation is really slick. I find the code difficult to review, because you've deliberately set out to present a demonstration of a functional approach. Making the code more elegant may end up disguising the point.... That said, there are improvements available that should not interfere with the implementation. More Abstractions: public static void main(String[] args) throws IOException { final long[] times = new long[1000]; final long[] results = new long[1000]; final Path source = Paths.get(args.length == 0 ? "p054_poker.txt" : args[0]); for (int i = 0; i < times.length; i++) { long nano = System.nanoTime(); results[i] = countPlayer1Wins(source); times[i] = System.nanoTime() - nano; } System.out.println(LongStream.of(results).summaryStatistics()); System.out.println(LongStream.of(times).mapToDouble(t -> t / 1000000.0).summaryStatistics()); } In this method, you have at least three different ideas: your calculator -- the featured element of your demonstration, an instrumentation harness around it, and a data provider. I'd prefer an example that teases those ideas apart. public static void main(String args[]) throws IOException { PokerScoringCalculator calculator = new PokerScoringCalculator(); InstrumentationHarness testHarness = new InstrumentationHarness(calculator); TestDataFactory testDataFactory = new TestDataFactory(args); testHarness.run(testDataFactory.getSource()); } public static class InstrumentationHarness { private final PokerScoringCalculator target; ...
public void run(Path source) { final int testIterationCount = 1000; final long[] times = new long[testIterationCount]; final long[] results = new long[testIterationCount]; for (int i = 0; i < times.length; i++) { long nano = System.nanoTime(); results[i] = calculator.countPlayer1Wins(source); times[i] = System.nanoTime() - nano; } ... } } Further refactoring here might reveal a Clock abstraction, and a Reporter abstraction that is separate from the TestHarness itself.... Another example public static long compareHands(String hand) { // Convert the String to separate cards List<String> cards = Stream.of(hand.split(" ")).collect( Collectors.toList()); long handA = scoreHand(cards.subList(0, 5)); long handB = scoreHand(cards.subList(5, 10)); return handA - handB; } Buried in here, you've got a parser, a comparator, and the scoring encoder. // These constants are carefully selected to ensure that // - STRAIGHT is > 3-of-a-kind // - STRAIGHT and FLUSH are less than 4-of-a-kind and full-house. // - STRAIGH + FLUSH (12) is better than others. // groups representing : // HIGH_CARD, 1_PAIR, 2_PAIR, 3_OF_A_KIND, FULL_HOUSE, 4_OF_A_KIND Don't these comments just scream that there's an enumeration waiting to be discovered? private static final int[] GROUPSCORE = { 0, 1, 2, 3, 9, 10 }; private static final int[] GROUPS = { groupHash(new int[]{ 1, 1, 1, 1, 1 }), groupHash(new int[]{ 1, 1, 1, 2 }), groupHash(new int[]{ 1, 2, 2 }), groupHash(new int[]{ 1, 1, 3 }), groupHash(new int[]{ 2, 3 }), groupHash(new int[]{ 1, 4 }) }; These arrays do not communicate that score:9 is bound to group[2,3]. It's not even obvious that they should be the same size! There really ought to be a builder here, which takes score pattern pairs as inputs, and gives you the arrays you need at the end. 
You could go plain: builder.add(new int[] {2,3}, 9); Reversing the order of the arguments gives you an opportunity to get cute: builder.add(9, 2, 3); But I don't approve of that approach -- it makes it look like these are all the same thing. It looks a little bit better with the enumeration: builder.add(FULL_HOUSE, 2, 3); I think it's a bit weird that the patterns are described in ascending order. In natural language, the three-of-a-kind comes first, so I would rather see the logic written that way: builder.add(FULL_HOUSE, 3, 2); If you really wanted to make things readable, you might go with a fluent interface here: builder.forPattern(3,2).scoreAs(FULL_HOUSE); Readability again: switch (c) { case '2' : return 0; case '3' : return 1; case '4' : return 2; case '5' : return 3; case '6' : return 4; case '7' : return 5; case '8' : return 6; case '9' : return 7; case 'T' : return 8; case 'J' : return 9; case 'Q' : return 10; case 'K' : return 11; case 'A' : return 12; } Why not "23456789TJQKA".indexOf(c)? Although, as before, there's an argument that the more natural representation is "AKQJ...", which gets reversed. Magic Numbers: In spades. Choosing two of the easy ones: if (hand.size() != 5) { throw new IllegalArgumentException("Illegal hand " + hand); } ... long score = IntStream.range(0, 5) .mapToLong(i -> shiftCard(0, holding[i], i)).sum(); ... long handA = scoreHand(cards.subList(0, 5)); long handB = scoreHand(cards.subList(5, 5+5)); Those are all the same "5". int[] countRanks = new int[13]; switch (c) { case '2' : return 0; case '3' : return 1; case '4' : return 2; case '5' : return 3; case '6' : return 4; case '7' : return 5; case '8' : return 6; case '9' : return 7; case 'T' : return 8; case 'J' : return 9; case 'Q' : return 10; case 'K' : return 11; case 'A' : return 12; } private static final int[] REVERSE = { 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 }; Those are all the same "13".
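The reviewer's indexOf suggestion is easy to verify (a sketch in Python, using find in place of Java's indexOf; the behaviour matches the original switch, including the exception on an unknown card):

```python
RANKS = "23456789TJQKA"

def lookup_rank(c):
    """One string lookup replaces the thirteen-case switch."""
    rank = RANKS.find(c)          # position doubles as the rank value
    if rank < 0:
        raise ValueError("No such card '%s'." % c)
    return rank
```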
{ "domain": "codereview.stackexchange", "id": 12373, "tags": "java, programming-challenge, playing-cards, stream, rags-to-riches" }
How is quantum error applied to the qubits?
Question: I am trying to check the way Qiskit has the noise implemented. I have read how this is theoretically done using quantum channels (see Nielsen and Chuang, chapter 8) and I want to verify if Qiskit follows the same procedure. I have started with the bit-flip error using the pauli_error function. Finally it creates a QuantumError object which can be added to the NoiseModel object. However, when adding this you have to use the add_all_qubit_quantum_error function and I do not really understand how this works. It is supposed to add the quantum error to all qubits when given a set of gates as an argument, as in the following example:

# Example error probabilities
p_reset = 0.03
p_meas = 0.1
p_gate1 = 0.05

# QuantumError objects
error_reset = pauli_error([('X', p_reset), ('I', 1 - p_reset)])
error_meas = pauli_error([('X', p_meas), ('I', 1 - p_meas)])
error_gate1 = pauli_error([('X', p_gate1), ('I', 1 - p_gate1)])
error_gate2 = error_gate1.tensor(error_gate1)

# Add errors to noise model
noise_bit_flip = NoiseModel()
noise_bit_flip.add_all_qubit_quantum_error(error_reset, "reset")
noise_bit_flip.add_all_qubit_quantum_error(error_meas, "measure")
noise_bit_flip.add_all_qubit_quantum_error(error_gate1, ["u1", "u2", "u3"])
noise_bit_flip.add_all_qubit_quantum_error(error_gate2, ["cx"])
print(noise_bit_flip)

However, I do not know when the bit-flip is applied. Is it applied every time a gate appears that can be decomposed into any of the gates in the list ["u1", "u2", "u3"]? What I need to know is how often it is applied and with which criterion. Thank you for your help! Answer: The Qiskit simulator behaves like this for your code snippet: whenever it encounters one of the u1, u2, or u3 gates (in the compiled circuit), it first applies the gate; then it performs an X gate according to the provided probability (0.05 in our case).
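The semantics described in the answer can be mimicked with a toy Monte Carlo model (a sketch only; a classical bit stands in for the qubit and the identity for the ideal u-gate, so this illustrates the order of operations and the flip frequency, not Qiskit's internals):

```python
import random

def noisy_gate(bit, p_flip, rng):
    """Apply the ideal gate first (identity here), then X with p_flip."""
    # ...the ideal u1/u2/u3 gate would act on `bit` at this point...
    if rng.random() < p_flip:
        bit ^= 1   # the X error, applied after the gate
    return bit

rng = random.Random(0)          # fixed seed for reproducibility
p = 0.05
trials = 100_000
flips = sum(noisy_gate(0, p, rng) for _ in range(trials))
rate = flips / trials           # empirically close to p = 0.05
```

So the criterion is simply: one independent Bernoulli(0.05) flip per occurrence of a listed gate in the compiled circuit.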
{ "domain": "quantumcomputing.stackexchange", "id": 1240, "tags": "qiskit, programming, noise" }
Generating a few layers from 3D laser scanner
Question: I would like to generate several layers of points that I obtain from 3D laser scanner. Will it be possible to do this without building 3D point clouds? Any suggestion how to go about this. -alfa- Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2011-07-15 Post score: 0 Original comments Comment by alfa_80 on 2011-07-18: Yes, you got it right. Thanks..@ Martin Günter, i think you had better copy and paste your answer in the box below, so I can tick your answer as "right" or at least the one that I was hunting.. Comment by Martin Günther on 2011-07-17: What do you mean by "several layers of points"? Do you mean all points recorded by the 3D laser scanner lying on several 2D planes that are horizontal to the ground? You could do that without explicitly building 3D point clouds, using trigonometry, but why? I think point clouds would be easiest. Answer: What do you mean by "several layers of points"? Do you mean all points recorded by the 3D laser scanner lying on several 2D planes that are horizontal to the ground? You could do that without explicitly building 3D point clouds, using trigonometry, but why? I think point clouds would be easiest. Originally posted by Martin Günther with karma: 11816 on 2011-07-18 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6149, "tags": "ros, data" }
Maxwell's equation in free space from wave equations of electric and magnetic field
Question: How to go from the wave equations of electric and magnetic field and $$ \boldsymbol{\nabla}\cdot \mathbf E = 0 \quad \text{ and } \quad \ 0 = \boldsymbol{\nabla}\cdot\mathbf B, $$ to the remaining two Maxwell's equation in free space? I am unable to do this, for a long time now. Please help. :) Is it even possible to do it? The question arose from the following: Are Wave equations equivalent to Maxwell equations in free space? Answer: No, this is impossible. A simple counter-example is the fields \begin{align} \mathbf E(\mathbf r,t) & = E_0 \hat{\mathbf e}_x \cos(kz-\omega t)\\ \mathbf B(\mathbf r,t) & = 0, \end{align} i.e. a plane-wave electric field and a vanishing magnetic field. This satisfies both force-field wave equations as well as both transversality conditions, but it breaks both the Faraday-Lenz and the Ampere-Maxwell laws. The core intuition that this counter-example captures is that the basis that you've laid out simply does not include enough information that relates the electric to the magnetic field (specifically: it doesn't provide any such information at all) for the curl equations to be reconstructed.
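The counter-example can be sanity-checked numerically with finite differences (a quick sketch; the values of E0, k and c are arbitrary):

```python
import math

E0, k, c = 1.0, 2.0, 3.0
w = c * k                 # dispersion relation, so Ex solves the wave equation

def Ex(z, t):
    # E = E0 x-hat cos(kz - wt), with B = 0 everywhere
    return E0 * math.cos(k * z - w * t)

h = 1e-4
z0, t0 = 0.3, 0.7

def d2(f, x):
    # central second difference
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# wave equation d2Ex/dz2 = (1/c^2) d2Ex/dt2: satisfied
wave_lhs = d2(lambda z: Ex(z, t0), z0)
wave_rhs = d2(lambda t: Ex(z0, t), t0) / c**2

# Faraday-Lenz: (curl E)_y = dEx/dz should equal -(dB/dt)_y = 0: violated
curlE_y = (Ex(z0 + h, t0) - Ex(z0 - h, t0)) / (2 * h)
```

wave_lhs matches wave_rhs to finite-difference precision, but curlE_y is of order E0 k, nowhere near the zero demanded by the vanishing B.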
{ "domain": "physics.stackexchange", "id": 56519, "tags": "electromagnetism, waves, electromagnetic-radiation, maxwell-equations" }
Frequency Response with Delta Function?
Question: I am trying to find the frequency response and the magnitude of the frequency response of the following system impulse response: $$h[n] = 2\delta [n] + 2\delta [n-1]$$ I understand that, through the DTFT: $$H(\omega ) = 2(1 + e^{-j\omega})$$ However, I am unsure how to find the magnitude of the frequency response. The solution is: $$|H(\omega )| = 4 \left| \cos \left(\frac{\omega }{2} \right) \right|$$ What steps are needed in order to convert the above frequency response to the magnitude of the frequency response? Answer: The long road computes the modulus of the DTFT. This works in general, but can be tedious. When the formula possesses some symmetry, like here (the two coefficients are the same), you can more efficiently factor a term, so that you recover a known Euler formula for the sine or the cosine, in the shape of $e^{j\nu}+e^{-j\nu}$ or $e^{j\nu}-e^{-j\nu}$. Here, factoring by $e^{-j\omega/2}$ yields $2e^{-j\omega/2}(e^{j\omega/2}+e^{-j\omega/2})$, from which the result follows easily.
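The factoring identity is easy to check numerically (a quick sketch using only the standard library):

```python
import cmath
import math

def H(w):
    # DTFT of h[n] = 2*delta[n] + 2*delta[n-1]
    return 2 * (1 + cmath.exp(-1j * w))

# after factoring e^{-jw/2}: H(w) = 4 e^{-jw/2} cos(w/2), so |H(w)| = 4|cos(w/2)|
ws = [-math.pi + 2 * math.pi * i / 64 for i in range(65)]
max_err = max(abs(abs(H(w)) - 4 * abs(math.cos(w / 2))) for w in ws)
```

max_err is at machine precision over the whole interval [-pi, pi].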
{ "domain": "dsp.stackexchange", "id": 4275, "tags": "fourier-transform, frequency-response, dtft" }
How is memory address applied to address pins in RAM chips
Question: I'm reading this book about assembly, however I got stuck on one part: I can't seem to comprehend how a memory address is 'applied' to the address pins. This is the image from the book. What I know here is the data pin applies voltage on the memory cell's input or output pin. And for the select pin the book explained that a binary code address is applied to the address pins. I imagined this part where there is only 1 address pin and 'something' is encoding the address to it; however, looking at the diagram there are a lot of address pins for a few memory locations and they are connected to each other. So my question here is: how is the address "applied" to the address pins? Which part of the chip applies the address to the address pins? Because the book only said "You apply this address to the address pins". By the way, the book is titled "Assembly Language Step-by-Step - Programming with Linux 3rd edition" in case somebody who might've read it before can explain it to me.. Thanks in advance! Answer: Actually those 4 dots mean repeat. This means there are $2^{20} = \text{0x100000} = 1{,}048{,}576$ memory cells, each with a line from the decoder. A few too many to draw them all. Each of the lines from the decoder to a cell is an enable line. If not enabled, the memory cell will do nothing (and leave any output lines to the data bus at high impedance). If enabled, the memory cell will read and write as you would expect.
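Functionally, the decoder is just a one-hot selector: the n address pins carry the bits of one number, and exactly one of the 2^n enable lines goes high. A minimal Python sketch of the idea (the function is illustrative, not from the book):

```python
def decode(address, n_bits):
    """An n_bits-to-2**n_bits one-hot address decoder: exactly one
    enable line goes high for the cell selected by the address bits."""
    lines = [0] * (2 ** n_bits)
    lines[address] = 1
    return lines

# a 20-bit address bus selects one of 2**20 = 1,048,576 cells;
# here a 3-bit toy example: address 0b101 = 5 enables cell number 5
enable = decode(0b101, 3)
```

All cells share the same address pins; it is the decoder's enable lines, one per cell, that single out which cell actually responds.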
{ "domain": "cs.stackexchange", "id": 11364, "tags": "computer-architecture" }
Ranking lines of a file according to number of occurrences
Question: I recently dived into Ruby, so I want to improve this code. Is this good enough? def ranksystem logfile = IO.readlines("some_logfile.log") logfile.each do |value| value.gsub!(/[\n]+/, "") end logfile.delete("") logcount = Hash.new(0) logfile.each do |v| logcount[v] += 1 end logarray = logcount.sort_by {|k,v| v}.reverse finalreturn = "" logarray.each_with_index do |value, index| finalreturn = finalreturn + "Rank #{index+1} : #{value[1]} (#{value[0]})\n" end finalreturn end This code reads each line in a log file like this: apple orange apple apple apple orange orange banana banana and converts like this: Rank 1 : 4 (apple) Rank 2 : 3 (orange) Rank 3 : 2 (banana) Answer: "Cool" enough? It seems fine, if procedural. Depends what you mean by cool. I might combine a few steps. I'd also refactor and not put everything into one method, since each separate "chunk" does very distinct, and different, things. Separation makes testing easier. The below isn't necessarily any better (and the chunk that creates the report may actually be worse--I'm not sure; it'd be less efficient), but it provides some alternate avenues to explore further. def get_log_lines name logfile = IO.readlines(name) logfile.collect { |l| l.gsub(/\n+/, "")}.reject { |l| l == "" } end def ranksystem loglines = get_log_lines "some_logfile.log" logcount = Hash[loglines.group_by {|l| l}.collect {|k, v| [v.size, k]}] i = 0 logcount.keys.sort.reverse.collect { |n| i += 1; "Rank #{i}: #{n} (#{logcount[n]})" }.join("\n") end
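As an aside, the same group-count-sort pipeline is a near one-liner in other languages too; e.g. a Python sketch for comparison (not a review of the Ruby), using collections.Counter:

```python
from collections import Counter

def rank_report(lines):
    """Count non-empty stripped lines and render a ranked report."""
    counts = Counter(line.strip() for line in lines if line.strip())
    return "\n".join(f"Rank {i} : {n} ({item})"
                     for i, (item, n) in enumerate(counts.most_common(), 1))

report = rank_report(["apple", "orange", "apple", "apple", "apple",
                      "orange", "orange", "banana", "banana"])
```

Counter.most_common already does the count-then-sort-descending step, so no manual hash or reverse is needed.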
{ "domain": "codereview.stackexchange", "id": 11933, "tags": "ruby, sorting, io" }
Lagrangian mechanics not relying on time or independent of time
Question: If neither the potential energy nor the kinetic energy depends on time, then the Lagrangian is explicitly independent of time. I find this statement a little bit odd, because velocity is distance over time; or maybe I missed something, please clarify. Answer: It seems relevant to mention the importance of distinguishing between explicit, implicit, and total time-dependence. The Lagrangian $L=L(q,v,t)$ depends implicitly on time via the position $q$ and the velocity $v$. The total time derivative of the Lagrangian $L=L(q,v,t)$ is $$\underbrace{\frac{dL}{dt}}_{\text{total $t$-derivative}}~=~\underbrace{\frac{\partial L}{\partial t}}_{\text{expl. $t$-derivative}} + \dot{q}\frac{\partial L}{\partial q} + \dot{v}\frac{\partial L}{\partial v}.$$ See also e.g. this Phys.SE post and links therein. In particular, the Lagrangian can depend on time even if it does not depend explicitly on time.
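A concrete sketch of the distinction (harmonic oscillator with m = k = 1, my choice of units): the Lagrangian has no explicit t in its argument list, so the partial derivative with respect to t vanishes, yet its value along a trajectory still changes in time because q and v do.

```python
import math

def L(q, v):
    # L(q, v) with no explicit time argument: partial dL/dt = 0
    return 0.5 * v**2 - 0.5 * q**2

def q(t):
    return math.cos(t)    # a solution of the equation of motion

def v(t):
    return -math.sin(t)

t0, h = 0.4, 1e-6
# total time derivative along the trajectory, by central difference
dL_dt = (L(q(t0 + h), v(t0 + h)) - L(q(t0 - h), v(t0 - h))) / (2 * h)
# analytically, dL/dt = sin(2 t) for this trajectory: nonzero
```

So "explicitly independent of time" means only that the third slot of L(q, v, t) is absent, not that L is constant along the motion.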
{ "domain": "physics.stackexchange", "id": 31864, "tags": "classical-mechanics, lagrangian-formalism, time" }
Moment of inertia graph
Question: To find the moment of inertia of an object, we take each individual mass element and multiply it by the square of its radius, and then find the sum of all these products. If the object is continuous, the integral ∫ r^2 dm is used. If we plot a graph with the y axis being r^2 and the x axis being m, then we get the plot of the graph that we are integrating. In this case, what does the x axis (m) represent? Answer: It's better to think of your integral as a sum at first rather than the area under a curve. The integrand is not a function of $m$ exactly. This can be seen better in the discrete case: $$I=\sum_im_ir_i^2$$ If you wanted to plot each $r_i^2$, the plot would depend on how you label your masses (which mass is $m_1$, which is $m_2$, etc.), so there wouldn't be a unique "plot" to make. $r^2$ isn't a function of $m$; it is a "function" of the index. When we move to the continuous case the idea is the same. We are just adding up the squared distance from some axis for each mass element, weighted by the mass of each part. We have a freedom in the "order" in which we add these up. However, in one dimension the typical method involves replacing $\text d m$ with $\lambda \text d x$, where $\lambda$ is the linear mass density. Then you can use an ordering of your masses from smallest to largest $x$ values, where you would then be plotting $\lambda(x)\left(r(x)\right)^2$. Using this method, you can see how your moment of inertia is the area under this curve.
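For instance, for a uniform rod of mass M and length L rotating about one end (so λ = M/L and r(x) = x, my choice of example), the Riemann sum over mass elements ordered by x reproduces the textbook result I = ML²/3:

```python
M, Lrod = 2.0, 3.0        # arbitrary example values
lam = M / Lrod            # linear mass density lambda = M/L
N = 100_000
dx = Lrod / N

# sum of lambda(x) * r(x)^2 * dx, ordering the mass elements by x
# (midpoint of each slice)
I = sum(lam * ((i + 0.5) * dx) ** 2 * dx for i in range(N))
exact = M * Lrod**2 / 3
```

The sum converges to the exact integral as the slices shrink, which is exactly the "area under the λ(x) r(x)² curve" picture.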
{ "domain": "physics.stackexchange", "id": 54532, "tags": "rotational-dynamics, reference-frames, moment-of-inertia" }
Arduino publish Odom
Question: Hi, can an Arduino MEGA 2560 publish odometry values (nav_msgs/Odometry.msg)? I am using an H-bridge that is connected to the Arduino, and the Arduino is connected to a Raspberry Pi. I'm using rosserial communication between the Arduino and the Raspberry Pi. What is advised? thanks Originally posted by mateusguilherme on ROS Answers with karma: 125 on 2019-11-08 Post score: 0 Answer: Yes, you can publish any message type from your arduino. You need to write a .ino script to do so. Make sure you include the following lines: #include <ros.h> #include <nav_msgs/Odometry.h> at the top before the line #include "Arduino.h" Edit: The rosserial library even has an example sketch named Odom. You can check it out. Originally posted by parzival with karma: 463 on 2019-11-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mateusguilherme on 2019-11-09: I was able to implement the Arduino odometry, but the rate was very low, around 8 Hz. So I made the Arduino publish geometry_msgs/Twist and calculate odometry in a Python node on the Raspberry Pi (got rates around 100 Hz). I think the nav_msgs/Odometry message is too much for the Arduino to process. thanks Comment by parzival on 2019-11-09: Yes, it is better to keep arduino scripts light, mostly for input and output, and let the main computer handle the computation to be done with that data.
{ "domain": "robotics.stackexchange", "id": 33990, "tags": "arduino, navigation, odometry, ros-kinetic" }
Formation of Life on various planets
Question: Is there any theory that suggests every creature (including bacteria and fungi) on our planet is formed from the atoms of our planet? If that is true, then is it possible that other organisms will take form on another planet and evolve accordingly? Answer: Yes, every creature is composed of atoms, the same atoms that compose the whole Earth and its contents. In addition, atoms up to iron in the periodic table were formed in the cores of stars as nuclear-reaction side products and were scattered across space in supernova explosions. Atoms heavier than iron were formed in other processes, such as neutron star collisions. That's why we (all creatures) are sometimes called simply "star dust": we are remnants of dead stars. The planet formation process is fairly well understood and must be common across the whole universe. I believe that the formation of life on Earth is not unique at all; it must also follow some rules. Given that some exoplanet has the same starting conditions as the primordial Earth, it's natural to expect the emergence and evolution of life as on Earth. However, the exact evolutionary path on an exoplanet can be somewhat different from that on Earth, because evolution is a sporadic process. We don't even have a guarantee that, if we could roll back time and repeat the whole cycle of life's evolution on Earth, it would finally produce the same humans that exist now. Maybe we would look similar, or maybe the evolved intelligent creature would have different ancestors than apes. For example, dinosaurs were quite evolved creatures, adapted to their environment as well as they could be. Some interesting facts about the apex predator Tyrannosaurus: It had a binocular range of 55 degrees, surpassing that of modern hawks. It had 13 times the visual acuity of a human. It could see objects at a distance of up to 6 km (a limiting far point). Another dinosaur, Stenonychosaurus, had the biggest brain among dinosaurs of that time (it could be called the "genius" of dinosaurs). It's been calculated that it had a cerebrum-to-brain volume ratio between 31.5% and 63%.
{ "domain": "physics.stackexchange", "id": 65356, "tags": "biology, exoplanets" }
What is the difference between sinew and tendon?
Question: I wonder what the difference between sinew and tendon is. I searched for it but didn't get any clear answer: https://www.quora.com/Whats-the-difference-between-sinew-and-tendon: They are often used interchangably but to be technical, a tendon connects a muscle to a bone. The term sinew also seems to include ligaments which connect bones to bones. It is however, not a medical term. https://answers.yahoo.com/question/index?qid=20070410182936AA2Kcnk Sinew is another word for muscle which is the functional unit of movement. Tendons connect bone to muscle to make movement possible: The muscle contracts and pulls the bone that it's connected to Source(s) http://the-difference-between.com/tendon/sinew : Sinew is a synonym of tendon. As nouns the difference between sinew and tendon is that sinew is (anatomy) a cord or tendon of the body while tendon is (anatomy) a tough band of inelastic fibrous tissue that connects a muscle with its bony attachment. http://www.thefreedictionary.com/sinew sinew = tendon https://en.wikipedia.org/wiki/Tendon A tendon (or sinew) is a [...] Is sinew a synonym for tendon, and if not what is the difference? Answer: They get used somewhat interchangeably, which blurs the lines on the definitions. When I had my anatomy classes, sinews were regarded as an inclusive class, which included both ligaments and tendons. For the breakdown: Tendon: Fibrous tissue that connects muscle to bone. Ligament: Fibrous tissue that connects bone to bone. Sinew: Includes both of the above.
{ "domain": "biology.stackexchange", "id": 9624, "tags": "terminology, tendons" }
Given regular expression construct regex for the complement language
Question: Disclaimer: this is my uni assignment, which is rated comparatively low, thus I assume that the answer should be simple. Hints are appreciated (as opposed to direct answers). Write an algorithm which accepts a regular expression $r$ and produces a regular expression for the complement language $\overline{L[r]}$. I think it is reasonable to assume that all operations I need to consider are just the concatenation, union and Kleene star. For simplicity, I assumed the alphabet to be $\{a,b,c\}$. I also think that I can invert individual operations as follows: $a^* \to (a^*(b+c))^+$. $(a+b) \to (\epsilon+(c(a+b+c)^*))$. $ab \to \epsilon+((aa+b+c)(a+b+c)^*)$. But attempting to combine these operations produces wrong results, for example: $$ a^*b^* \to \\ (a^*(b+c))^+(b^*(a+c))^+ \to \\ (\epsilon+((a^*(b+c))^+(a^*(b+c))^++b+c)(a+b+c)^*)(\epsilon+((b^*(a+c))^++a+c)(a+b+c)^*)) $$ doesn't seem to do what I expect it to because, for example, it will match the empty string on both sides. Should I perhaps consider transforming the regexp into a DFA and "inverting" the DFA instead? Answer: The problem with your attempt is that you've only looked at what happens when the regexp to transform is a simple one. For example, you've looked at what the complement of $a^*$ looks like. But in order to write a compositional complementation algorithm, you need to figure out how to compute a regular expression that recognizes the complement of the language of $r^*$, for an arbitrary regular expression $r$. The bad news is that your approach to building a complement for $a^*$ does not generalize easily. Consider for example the regular expression $(a + b + c)$. A regular expression that recognizes its complement language is $\epsilon + (a + b + c) (a + b + c) (a + b + c)^*$ — but this isn't really useful to know if you're looking for the complement of $(a + b + c)^*$, which is the empty language. You can't reach the complement of $(a+b+c)^*$ from starring something related to the complement of $(a+b+c)$.
You can approach this problem by adding a complement operator to regular expressions, in which case the algorithm you're looking for is an algorithm to remove the complement operator from such generalized regular expressions. It turns out that this brings you close to an open problem, the generalized star height problem: it's unknown whether you can eliminate nested stars in such generalized regular expressions, which shows that the interaction between the star operator and the complement operator is poorly understood. Your idea of transforming the regular expression into a DFA and back is a good one. It's perfectly reasonable to switch to a different representation of an object when the operation you're interested in is easier there. In fact, this is the usual algorithm to find a regular expression for the complement language of a regular expression: Build a DFA that recognizes the language of the regular expression. Transform the DFA into one that recognizes the complement language. Build a regular expression whose language is that of the second DFA. Step 2 is very easy: just swap accepting and non-accepting states. Note that this doesn't work on an NFA!
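Step 2 of that recipe really is a one-liner once the DFA is complete. A toy Python sketch (my own tuple encoding of a DFA, purely illustrative; step 3, converting the DFA back to a regular expression, e.g. by state elimination, is the laborious part):

```python
def accepts(dfa, word):
    """Run a complete DFA on a word."""
    states, alphabet, delta, state, finals = dfa
    for ch in word:
        state = delta[(state, ch)]   # total transition function
    return state in finals

def complement(dfa):
    """Step 2: swap accepting and non-accepting states of a complete DFA."""
    states, alphabet, delta, start, finals = dfa
    return (states, alphabet, delta, start, states - finals)

# toy DFA over {a,b,c}: accepts words with an even number of a's
states = {"even", "odd"}
delta = {("even", "a"): "odd", ("odd", "a"): "even",
         ("even", "b"): "even", ("odd", "b"): "odd",
         ("even", "c"): "even", ("odd", "c"): "odd"}
dfa = (states, {"a", "b", "c"}, delta, "even", {"even"})
codfa = complement(dfa)   # accepts words with an odd number of a's
```

The NFA caveat shows up immediately if you try the same swap on a nondeterministic machine: a word can have both an accepting and a rejecting run, so flipping the final states does not flip the language.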
{ "domain": "cs.stackexchange", "id": 5185, "tags": "formal-languages, regular-languages, automata, regular-expressions" }
Peacock species identification
Question: A few weeks ago I encountered a peacock while driving to a friend's house in Central KY, USA. I don't believe I've ever seen a peacock in KY, and when attempting to ID [just through Google searches], I could only find restaurants & musical artists that contain "peacock" in their name. Can someone help me ID this bird? And, are peacocks wild in KY? Have they always been, and are just rare? Aside from species ID, any insights about this bird and its prevalence within KY would be most appreciated. Answer: That is the male of the Indian peafowl species (Pavo cristatus), also called the blue peafowl or common peafowl. The male is called a peacock and the female a peahen, collectively peafowl, although often "peacocks" is used instead of peafowl. The hen does not have the impressive tail of the peacock (although she does have a similar crest on the head). They are not native to Kentucky, or anywhere else in the Americas, but have been widely introduced (because of their appearance). There are only 3 species of peafowl, the green peafowl being similar and native to southeast Asia, and the Congo peafowl being much more subdued and native to Africa.
{ "domain": "biology.stackexchange", "id": 7880, "tags": "species-identification, species-distribution" }
Concerned about snow and dirt getting into the drive chain based on my design
Question: I am working on building an autonomous snow thrower similar to the one in this video. I tore an old RX73 tractor down to just the chassis, by removing the engine, transmission, mower deck, brakes ... basically everything. Here are pictures of the Top of Frame, and the Underside of Frame after I cleaned it a bit. In terms of steering and control, differential drive robots are dead simple to control vs. Ackermann steering (traditional car steering). It's often preferred in robotics to use two motors, one connected to each back wheel, and use a free caster wheel in the front, or use a chain to connect each back wheel to the front wheel on its same side. Another popular method is all-wheel drive by connecting a motor to each wheel but using the same DC output for the front and back motors on each side. Seeing as I already have two 250W motors, and chains, it seems like the easier option is to use a chain to connect the front and back wheels on each side, rather than buying more motors. I am concerned, however, that with this design the chain is exposed to the ground, seeing as the underside of the chassis is not enclosed. I imagine this could potentially cause snow to build up somewhere in that mechanism. If you look at the video and how the chassis is connected to the thrower, there is a pair of linear actuators with arms, so I can also switch out the thrower for the mower deck and use this robot to mow the lawn next summer. So I am also concerned about dirt, mud and grass getting stuck. My questions are: Are my concerns valid, or just lack of experience? If valid, what's a good way to prevent that from happening? What would you change about the drive mechanism? An additional two motors is not out of the question; they're approx $40 each Answer: Let's first think about the snow blower function.
Depending on where you live, and if there is salt in the environment (maybe from road-cleaning trucks), the exposed chains' grease will collect salty dust and particles from the pavement and form a sticky paste, making it hard to keep the chains from rusting. The chains tend to throw this stuff onto the undercarriage too and cause damage. As for attaching a mower, again an exposed chain is bound to get jammed in small surface roots and old dead plant sticks. You could pick heavy-duty chains and deal with occasional cleaning and maintenance, but your plan for robotic control means you don't want to constantly observe the machine. So your idea of using separate motors for the front wheels makes more sense.
{ "domain": "engineering.stackexchange", "id": 4523, "tags": "mechanical-engineering, structural-engineering" }
Logging observable values while extending service
Question: I'm trying to extend a service function which returns an Observable<T>. The idea is to extend the function so I can add logging functionality whenever the observable emits a new value. fromEvent<T>(eventName: string): Observable<T> { super.fromEvent(eventName).subscribe( function next(data) { console.group(); console.log('----- SOCKET INBOUND -----'); console.log('Action: ', eventName); console.log('Payload: ', data); console.groupEnd(); } ); return super.fromEvent(eventName); } How can I make my function more efficient? Because whenever fromEvent() is invoked, a subscription is created and an observable is returned from the base class. As I understand it, all subscriptions should at some point be unsubscribed. Or is there any other way you could implement logging functionality on an observable that listens to a specific event? Answer: One way to achieve what you want is to use the .do() and maybe .catch() operators. The .do() operator taps on a target stream and executes a function on each non-error event in the stream. It is designed specifically for this purpose, which you can define more generically as side effects: This method can be used for debugging, logging, etc. of query behavior by intercepting the message stream to run arbitrary actions for messages on the pipeline. -- RxJs docs Notice that .do(): does NOT affect the original Observable stream in any way. This function itself returns an Observable of the same type as the original one. does NOT subscribe to the original Observable, so the .subscribe() invocation is required down the road. does NOT react to error events in the original Observable. Therefore, you may want to use .catch() to log the errors.
fromEvent<T>(eventName: string): Observable<T> { return super .fromEvent(eventName) .do(eventData => { console.group(); console.log('----- SOCKET INBOUND -----'); console.log('Action: ', eventName); console.log('Payload: ', eventData); console.groupEnd(); }) .catch(error => { console.group(); console.error('----- SOCKET INBOUND -----'); console.error('Action: ', eventName); console.error('Error: ', error); console.groupEnd(); throw error; }); }
{ "domain": "codereview.stackexchange", "id": 28816, "tags": "logging, typescript, redux, angular-2+, rxjs" }
Gas halo of our Milky Way Galaxy
Question: This question relates to a diffuse hot gas halo of our Milky Way Galaxy. I've read that there is a hot diffuse halo of gas surrounding our Galaxy (NED, Caltech). I was wondering why such a halo can exist. Why doesn't it collapse to a disk shape? Is it because the gas itself is still hot and so remains largely unaffected by the Galaxy potential? Answer: The scale height of gas in a disk (if it were in equilibrium) is roughly $kT/mg$, where $T$ is the temperature, $g$ is the gravitational field, $m$ the mean mass of a gas particle, and $k$ the Boltzmann constant. If we assume most of the mass is in a thin disk, then Gauss's law for gravitation tells us that $g = 2\pi G \sigma$, where $\sigma$ is the mass per unit area in the disk. According to Rix & Bovy, $\sigma \simeq 70 M_{\odot}$ pc$^{-2}$ at the location of the Sun (http://arxiv.org/abs/1309.0809). If we assume hydrogen gas, then the effective particle mass is that of a proton, and this means the gas scale height is $$ H = 4300 \left(\frac{T}{10^6\ K}\right)\ pc$$ Thus gas hotter than a million degrees will have a very substantial scale height and is not expected to be confined to the Milky Way disk.
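Plugging in numbers (standard constants, with the Rix & Bovy surface density quoted above) reproduces the scale height in the formula:

```python
import math

k_B   = 1.380649e-23   # Boltzmann constant, J/K
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_p   = 1.6726e-27     # proton mass, kg
M_sun = 1.989e30       # solar mass, kg
pc    = 3.0857e16      # parsec, m

sigma = 70 * M_sun / pc**2        # disk surface density at the Sun, kg/m^2
g = 2 * math.pi * G * sigma       # field of an infinite thin sheet
T = 1e6                           # gas temperature, K

H_pc = k_B * T / (m_p * g) / pc   # scale height H = kT/(m g), in parsecs
```

H_pc comes out near 4.3 kpc for million-degree hydrogen, so such gas is simply not confined to the thin disk.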
{ "domain": "astronomy.stackexchange", "id": 1224, "tags": "gas" }
Did cyanobacteria commit evolutionary suicide by producing oxygen?
Question: I heard this story from one of my friends: Cyanobacteria can convert carbon dioxide into oxygen through photosynthesis, so the concentration of oxygen in the atmosphere kept going up over a long period after their appearance. Eventually, that oxygen poisoned the cyanobacteria through some mechanism, which effectively killed many of them. As a side effect, the oxygen produced led to the evolution of fishes and other fantastic creatures, including human beings. I wonder whether it is true. After some searching, I found that cyanobacteria can perform photosynthesis. However, I can't tell whether oxygen is poisonous to them. Answer: Oxygen is, indeed, highly toxic to cells, due to its oxidizing power, i.e., the ability to remove electrons from other substances. Since the first carbon-reducing organisms started to increase the amount of molecular oxygen in the atmosphere, more than 2 billion years ago, life on Earth has been dramatically changed by this compound: you can even consider that aerobic respiration appeared originally as a mechanism for reducing the toxic molecular oxygen to innocuous water. Today, organisms that use water to reduce carbon dioxide, releasing oxygen as a by-product, have molecular mechanisms (such as aerobic respiration) to protect them from the very oxygen they produce. But your question is quite interesting: how did the first photosynthetic organisms protect themselves against the oxygen they started producing? This is a catch-22. An interesting solution was proposed by a team of geobiologists from Caltech: There was a small amount of molecular oxygen in the ocean's water before the appearance of photosynthesis. This small amount of oxygen was able to promote the evolution of biochemical mechanisms protecting organisms from its toxicity. Some of those organisms, then, were capable of developing photosynthesis and protecting themselves against the huge amount of oxygen that they started releasing.
According to the team: Low levels of peroxides and molecular oxygen generated during Archean and earliest Proterozoic non-Snowball glacial intervals could have driven the evolution of oxygen-mediating and -using enzymes and thereby paved the way for the eventual appearance of oxygenic photosynthesis. Source: Liang, M., Hartman, H., Kopp, R.E., Kirschvink, J.L. and Yung, Y.L. (2006) "Production of hydrogen peroxide in the atmosphere of a snowball earth and the origin of oxygenic photosynthesis", Proceedings of the National Academy of Sciences, 103(50), pp. 18896–18899. doi: 10.1073/pnas.0608839103. EDIT: according to your comment, all you want to know is "whether cyanobacteria are able to survive oxygen". Well, that's even easier to answer: cyanobacteria can perform aerobic respiration. That means that they can easily use protons and electrons obtained from organic matter to reduce molecular oxygen to H2O: Cyanobacteria... are among the very few groups that can perform oxygenic photosynthesis and respiration simultaneously in the same compartment, and many cyanobacterial species are able to fix nitrogen. Therefore, they can survive and prosper under a wide range of environmental conditions. Source: Photosynthesis and Respiration in Cyanobacteria
{ "domain": "biology.stackexchange", "id": 6447, "tags": "evolution, microbiology, photosynthesis" }
How to build a PDA that accepts strings which have odd numbers of a and even numbers of b and has only 2 states?
Question: How can I draw a PDA that has only 2 states and accepts strings with an odd number of a's and an even number of b's? The alphabet is {a,b}, and the PDA has to have only 2 states. Answer: Let me remind you that, in fact, for every context-free grammar there exists an equivalent PDA that has only a single state, using acceptance by empty stack. If we need to use PDA acceptance by final state, then two states are required, as we need a working state and a separate accepting state. The single-state construction is given on Wikipedia's page for PDA and is called the expand-match construction for "parsing" context-free languages. The state here is called $1$. $(1,\varepsilon ,A,1,\alpha )$ for each rule $\displaystyle A\to \alpha $ (expand) $\displaystyle (1,a,a,1,\varepsilon )$ for each terminal symbol $a$ (match) Note this uses the classical definition of PDA where the automaton starts with a fixed single element on its stack. A certain textbook has decided to change this basic definition and starts with the empty stack, where an extra state is needed to push the initial stack symbol. (Horrible: this textbook is actually quite common and continues to spoil young kids.) When restricted to an FSA, the construction is actually quite simple: the PDA keeps the state on the stack as its only element. Accepting states of the FSA may be popped from the PDA, so the PDA can accept (by empty stack) whenever the FSA can accept.
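The expand-match construction is easy to animate. Below is a toy simulator of a single-state PDA accepting by empty stack (my own encoding; the grammar S → aSb | ε is just an example — for the odd-a/even-b language you would plug in a grammar for that language instead):

```python
# A one-state PDA (acceptance by empty stack) built by expand-match
# from a CFG. Example grammar (illustrative): S -> a S b | epsilon.
RULES = {"S": ["aSb", ""]}

def accepts(word):
    """Simulate the nondeterministic PDA by exploring (position, stack)."""
    seen = set()   # visited configurations, to avoid re-exploring

    def step(pos, stack):
        if (pos, stack) in seen:
            return False
        seen.add((pos, stack))
        if not stack:                # empty stack: accept iff input consumed
            return pos == len(word)
        top, rest = stack[0], stack[1:]
        if top in RULES:             # expand: replace nonterminal by a rhs
            return any(step(pos, tuple(rhs) + rest) for rhs in RULES[top])
        # match: pop a terminal against the next input symbol
        return pos < len(word) and word[pos] == top and step(pos + 1, rest)

    return step(0, ("S",))           # initial stack holds the start symbol
```

The machine never changes state: all the "memory" lives on the stack, exactly as in the construction above.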
{ "domain": "cs.stackexchange", "id": 9886, "tags": "automata, pushdown-automata" }
Optimizing code for Project-Euler Problem #23
Question: I'm working on Project Euler's problem #23, which is Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers I came up with this algorithm. Find all abundant numbers under 28123 (I defined Numeric#abundant?); this is slow and could be faster if it skipped primes, but is fairly fast (less than 4 secs): abundant_numbers = (12..28123).select(&:abundant?) Find all numbers that can be expressed as the sum of 2 abundant numbers: inverse_set = abundant_numbers.each.with_index.inject([]) do |a,(n,index)| a.concat( abundant_numbers[index..abundant_numbers.size-1] .take_while { |i| (n+i) <= 28123 }.map { |i| n+i } ) end.to_set Then subtract these from all the integers under 28123 and sum the rest: solution_set = (1..28123).to_set - inverse_set solution_set.reduce(:+) Benchmarked: ▸ time ruby 0023.rb real 0m20.036s user 0m19.593s sys 0m0.352s ▸ rvm use 2.0.0 ▸ time ruby 0023.rb Solution: 4*****1 real 0m7.478s user 0m7.348s sys 0m0.108s It works, but it's a little bit slow: it takes about 20 secs to solve, and I hear people say it can be solved within milliseconds. I'm sure many of you will have a quick insight into what I have missed. Answer: Your idea is perfectly fine (subtracting all sums of pairs from the candidate range), but I would write it differently: xs = (1..28123) abundants = xs.select(&:abundant?) solution = (xs.to_set - abundants.repeated_combination(2).to_set { |x, y| x + y }).sum With a similar idea, this is probably faster (but also a bit less declarative): xs = (1..28123) abundants = xs.select(&:abundant?).to_set solution = xs.select { |x| abundants.none? { |a| abundants.include?(x - a) } }.sum
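For what it's worth, the fast solutions usually make two changes: sieve the proper-divisor sums for the whole range at once instead of testing each number, and mark pair sums in a flat array with early exit. A Python sketch of that shape (not a translation of the Ruby above):

```python
LIMIT = 28123

# sieve the sums of proper divisors for every n up to LIMIT at once
dsum = [0] * (LIMIT + 1)
for d in range(1, LIMIT // 2 + 1):
    for m in range(2 * d, LIMIT + 1, d):
        dsum[m] += d

abundants = [n for n in range(12, LIMIT + 1) if dsum[n] > n]

# mark every number expressible as a sum of two abundant numbers
is_sum = bytearray(LIMIT + 1)
for i, a in enumerate(abundants):
    if 2 * a > LIMIT:
        break                  # no partner b >= a can fit any more
    for b in abundants[i:]:
        s = a + b
        if s > LIMIT:
            break              # abundants is sorted, later b only grow
        is_sum[s] = 1

answer = sum(x for x in range(1, LIMIT + 1) if not is_sum[x])
```

The sieve replaces per-number divisor scans with one pass over multiples, and the early `break`s keep the pair loop close to the number of pairs that actually matter.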
{ "domain": "codereview.stackexchange", "id": 3551, "tags": "ruby, programming-challenge, primes" }
Renormalization is a Tool for Removing Infinities or a Tool for Obtaining Physical Results?
Question: Quoting Wikipedia: renormalization is any of a collection of techniques used to treat infinities arising in calculated quantities. Is that true? To me, it seems better to define renormalization as a collection of techniques for adjusting the theory to obtain physical results. I'll explain. According to Wilson's renormalization group, a quantum field theory always inherently has a cutoff parameter, so in any case integrals should be done only up to the cutoff, so there are no infinite quantities. Yet the results are still not consistent with observation if you don't renormalize the calculations (e.g. using counterterms). Am I correct? Is it true that the usual presentation of renormalization as a tool for removing divergences is a misinterpretation of its true purpose? Answer: You're totally right. The Wikipedia definition of renormalization is obsolete, i.e. it refers to the interpretation of these techniques that was believed prior to the discovery of the Renormalization Group. While the computational essence (and results) of the techniques hasn't changed much in some cases, their modern interpretation is very different from the old one. The process of guaranteeing that results are expressed in terms of finite numbers is known as regularization, not renormalization, and integrating only up to a finite cutoff scale is a simple example of a regularization. However, renormalization is an extra step we apply later in which a number of calculated quantities are set equal to their measured (and therefore finite) values. This of course cancels the infinite (calculated) parts of these quantities (I mean parts that would be infinite without the regularization), but for renormalizable theories, it cancels the infinite parts of all physically meaningful predictions, too. However, renormalization has to be done even in theories where no divergences arise.
In that case, it still amounts to a correct (yet nontrivial) mapping between the observed parameters and the "bare" parameters of the theory. The modern, RG-based interpretation of these issues changes many subtleties. For example, the problem with a non-renormalizable theory is no longer the impossibility of cancelling the infinities. The infinities may still be regulated away by a regularization, but the real problem is that we introduce an infinite number of undetermined finite parameters during the process. In other words, a non-renormalizable theory becomes unpredictive (infinite input is needed to make it predictive) for all questions near (and above?) its cutoff scale, where its generic interactions (higher-order terms) become strongly coupled.
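A standard textbook illustration of the two steps (my addition, not from the answer itself): in $\phi^4$ theory the one-loop mass shift, regularized with a hard Euclidean cutoff $\Lambda$, is finite but cutoff-dependent, and renormalization then trades the bare mass for the measured one:

```latex
% Regularization: the one-loop tadpole with a hard cutoff \Lambda is finite
\delta m^2 \;=\; \frac{\lambda}{2}\int^{\Lambda}\frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2+m_0^2}
\;=\; \frac{\lambda}{32\pi^2}\left[\Lambda^2 - m_0^2\,\ln\!\Big(1+\frac{\Lambda^2}{m_0^2}\Big)\right].
% Renormalization: impose m_phys^2 = m_0^2 + \delta m^2(\Lambda), i.e. fix the
% physical mass to its measured value; predictions written in terms of m_phys
% are then cutoff-independent. Note this second step is needed even when
% \delta m^2 happens to be finite.
```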
{ "domain": "physics.stackexchange", "id": 8433, "tags": "quantum-field-theory, renormalization, regularization" }
Why is this grammar an LL(2) grammar?
Question: I had a question regarding LL($k$) grammars. I came across a problem that I attempted to solve, but my answer varied from the solution and I wasn't sure why. $$L = \{a^{n + 2}b^mc^{n + m}\ :\ n \ge 1,\ m \ge 1\}$$ The grammar I came up with is: $$ \begin{align} S & \rightarrow aaA \\ A & \rightarrow aAc\ \vert\ bBc \\ B & \rightarrow bBc\ \vert\ bc \\ \end{align} $$ According to the solution, this given language has an LL($2$) grammar, but the answer I came up with was LL($4$). My reasoning is that since the start of the language has at least three $a$'s, you need to check the fourth position in order to see which production to use afterwards. Then you can "slide" the window one position to the right to check subsequent productions. Why is it considered LL($2$)? Thank you. Answer: An LL parse produces a leftmost derivation, which means that at each point in the derivation, the leftmost non-terminal must be replaced by one of its productions. The issue is to decide which production to use. A grammar is LL(k) if the production to be used in the leftmost derivation can always be determined by examining only the terminals prior to the first non-terminal and the $k$ following terminals in the input. (These will be the next $k$ symbols in the input if we're doing a left-to-right parse; the terminals prior to the non-terminal will already have been input.) In your grammar, neither $S$ nor $A$ present any problems at all. $S$ only has one production, so there is no need to examine any terminal at all to make a decision. $A$ has two productions, but their first terminal differs so it is trivial to predict which one to select: if the next input symbol is $a$, select the first alternative; if it is $b$, select the second one. $B$ also has two productions, but both of them start with a $b$. So we need to look at least one symbol further in the input. Now, what is the second symbol in a derivation starting with each production for $B$? 
For $B \to b c$, the second symbol is clearly $c$. In $B \to b B c$, the second symbol must be the first symbol in some derivation of $B$. But, as we've just observed, every derivation of $B$ starts with $b$. So the second symbol in $B \to b B c$ must be a $b$. With that, we have a complete decision procedure: If the non-terminal to expand is $S$: choose the production $S \to a a A$ If the non-terminal to expand is $A$: If the next input is $a$: choose the production $A \to a A c$ If the next input is $b$: choose the production $A \to b B c$ If the non-terminal to expand is $B$: If the next two input symbols are $bb$ choose the production $B \to b B c$ If the next two input symbols are $bc$ choose the production $B \to b c$ If there is no non-terminal left Report success If none of the above rules apply Report failure The longest sequence of terminals we need to examine in any of those rules is 2. So the grammar is LL(2).
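The decision procedure above translates directly into a table-driven predictive parser. A small Python sketch of my own (not the answerer's), using at most two symbols of lookahead exactly as in the rules listed:

```python
def parse(word):
    """Predictive LL(2) parse of the grammar from the question:
    S -> a a A ;  A -> a A c | b B c ;  B -> b B c | b c
    Returns True iff `word` is derived by the grammar."""
    stack = ['S']              # leftmost nonterminal sits on top of the stack
    i = 0                      # current position in the input
    while stack:
        top = stack.pop()
        look = word[i:i + 2]   # up to 2 symbols of lookahead
        if top in 'abc':       # terminal on the stack: must match the input
            if i < len(word) and word[i] == top:
                i += 1
            else:
                return False
        elif top == 'S':
            stack += ['A', 'a', 'a']      # S -> a a A (pushed reversed)
        elif top == 'A':                  # 1 symbol of lookahead suffices
            if look[:1] == 'a':
                stack += ['c', 'A', 'a']  # A -> a A c
            elif look[:1] == 'b':
                stack += ['c', 'B', 'b']  # A -> b B c
            else:
                return False
        elif top == 'B':                  # here 2 symbols are needed
            if look == 'bb':
                stack += ['c', 'B', 'b']  # B -> b B c
            elif look == 'bc':
                stack += ['c', 'b']       # B -> b c
            else:
                return False
    return i == len(word)      # success iff all input was consumed
```

For example, `parse("aabbcc")` succeeds while `parse("aabc")` fails at the `B` expansion, just as the decision procedure predicts.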
{ "domain": "cs.stackexchange", "id": 16314, "tags": "formal-languages, context-free, formal-grammars" }
What is the runtime complexity if a DNF formula is not PAC-learnable: $O(2^N)$ or $O(2^{2^N})$?
Question: Valiant's theory of PAC learning looks at the tradeoff between expected error and algorithm runtime in different classes of learning problems. In particular, a lot of his analysis focuses on which classes of disjunctive normal form (DNF) formulae are learnable in polynomial time while minimizing expected error (PAC learnable). However, many classes of DNF formulae are not PAC learnable. This means that learning takes an exponential amount of time based on the number of variables $N$ in the formula, i.e. the algorithm runtime is $O(2^N)$. I assume that the DNF formula that is learned in this case is not the minimal DNF formula, since it is NP-hard to find one, meaning the complexity is $O(2^{2^N})$. If so, then this is confusing to me. Say there are $2^{10}-5$ samples from $f$, and the algorithm has to predict the remaining 5 bits. If my understanding of PAC learning is correct, there is an algorithm that should be able to accurately predict the 5 bits in $O(2^N)$ time. However, it also seems to me that to accurately predict the 5 bits the algorithm has to figure out which of the $2^{5}$ different ways of filling in the bits has the shortest DNF formula, due to Occam's razor. But, if the algorithm has to figure out the shortest DNF formula in each of the $2^5$ permutations, then it has to perform logic minimization, which has a runtime of $O(2^{2^N})$. Then, that means that if a DNF formula is not PAC learnable, its runtime is super-exponential, not exponential. So, which is it? Does a DNF being not PAC learnable mean the algorithm's runtime is $O(2^N)$ or $O(2^{2^N})$? Answer: There's a misconception here. PAC learning doesn't require the learner to find the shortest formula. It merely requires the learner to find any formula that has sufficiently low generalization error. If there are multiple formulas that meet that criterion, the learner is allowed to output any of them -- not necessarily the shortest.
Take a closer look at the formal definition, and hopefully you'll see how that follows from the definition.
{ "domain": "cs.stackexchange", "id": 9186, "tags": "machine-learning" }
FizzBuzz from a file
Question: This was found on CodeEval. Challenge: Write a program that prints out the final series of numbers where those divisible by X, Y and both are replaced by “F” for fizz, “B” for buzz and “FB” for fizz buzz Specifications: Your program should accept a file as its first argument. The file contains multiple separated lines; each line contains 3 numbers that are space delimited. The first number is the first divider (X), the second number is the second divider (Y), and the third number is how far you should count (N). You may assume that the input file is formatted correctly and the numbers are valid positive integers. your output should print out one line per set. Ensure that there are no trailing empty spaces in each line you print. Sample Input: 3 5 10 2 7 15 Sample Output: 1 2 F 4 B F 7 8 F B 1 F 3 F 5 F B F 9 F 11 F 13 FB 15 My Implementation: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; public class FizzBuzz { public static void main(String[] args) throws FileNotFoundException { File file = new File(args[0]); Scanner fileScanner = new Scanner(file); while (fileScanner.hasNextLine()) { printFizzBuzz(fileScanner.nextLine()); } } public static void printFizzBuzz(String line) { int fizz = Integer.parseInt(line.split(" ")[0]); int buzz = Integer.parseInt(line.split(" ")[1]); int limit = Integer.parseInt(line.split(" ")[2]); StringBuilder sb = new StringBuilder(); for (int i = 1; i <= limit; i++) { if (i % fizz == 0) { sb.append("F"); } if (i % buzz == 0) { sb.append("B"); } else { if (i % fizz != 0) { sb.append(i); } } if (i < limit) { sb.append(" "); } } System.out.println(sb.toString()); } } Answer: If you extract the logic that builds the output string to its own method buildFizzBuzz, you'll get something testable: public static String buildFizzBuzz(int fizz, int buzz, int limit) { StringBuilder sb = new StringBuilder(); // ... 
return sb.toString(); } public static void printFizzBuzz(String line) { int fizz = Integer.parseInt(line.split(" ")[0]); int buzz = Integer.parseInt(line.split(" ")[1]); int limit = Integer.parseInt(line.split(" ")[2]); System.out.println(buildFizzBuzz(fizz, buzz, limit)); } And you can add some unit tests, for example: @Test public void test_3_5_20() { assertEquals("1 2 F 4 B F 7 8 F B 11 F 13 14 FB 16 17 F 19 B", buildFizzBuzz(3, 5, 20)); } @Test public void test_2_7_15() { assertEquals("1 F 3 F 5 F B F 9 F 11 F 13 FB 15", buildFizzBuzz(2, 7, 15)); } Having these unit tests, which pass at the moment, now we can refactor a bit safely, knowing that if anything breaks, the tests will break. It's not great to have a check on the limit twice in the loop, in the loop condition and also in the loop body. It would be better to eliminate the condition from the body, always appending ' ', and cutting off the space at the end. It's not great to have a check on i % fizz twice. It would be better to use a boolean to track if fizz or buzz were already appended. Putting these two points together, the implementation becomes: public static String buildFizzBuzz(int fizz, int buzz, int limit) { StringBuilder sb = new StringBuilder(); for (int i = 1; i <= limit; i++) { boolean shouldAppendNum = true; if (i % fizz == 0) { sb.append("F"); shouldAppendNum = false; } if (i % buzz == 0) { sb.append("B"); shouldAppendNum = false; } if (shouldAppendNum) { sb.append(i); } sb.append(" "); } sb.setLength(sb.length() - 1); return sb.toString(); }
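One further simplification worth noting (sketched in Python rather than Java, so treat it as illustrative): collecting the tokens and joining them removes the trailing-space bookkeeping entirely:

```python
def build_fizz_buzz(fizz, buzz, limit):
    """Same rules as the Java version: F for multiples of fizz,
    B for multiples of buzz, FB for both, the number otherwise."""
    def token(i):
        s = ("F" if i % fizz == 0 else "") + ("B" if i % buzz == 0 else "")
        return s or str(i)   # fall back to the number when neither divides i
    # join inserts separators only *between* tokens, so no trailing space
    return " ".join(token(i) for i in range(1, limit + 1))
```

This reproduces the sample outputs, e.g. `build_fizz_buzz(2, 7, 15)` gives `"1 F 3 F 5 F B F 9 F 11 F 13 FB 15"`.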
{ "domain": "codereview.stackexchange", "id": 11653, "tags": "java, programming-challenge, io, fizzbuzz" }
Does String Theory predict things like Unruh effect and Hawking radiation?
Question: I've seen the other post about this, but the answer only discusses Unruh effect rather than String theory. Hawking radiation and Unruh effect solidify fields as the universe's fundamental objects. Particles become a relative concept that everyone defines differently depending on the eigenstates of their fields. However, string theory treats particles as the fundamental entities, as we arrive at string theory by quantising a theory of classical string-like particles. String theory implies that field theory is just an effective calculational tool to predict the low energy behavior of particles, like the same way hydrodynamics predicts the zoomed out behavior of fluids (a description that breaks down at the atomic scale). I saw Weinberg's book also argue along the same lines. He said something like "any relativistic quantum theory can be described as a field theory at low energies", meaning that fields are just an effective calculational tool. Does this mean that the existence/non-existence of Unruh effect/Hawking radiation will prove whether particles or fields are more fundamental? Or does String theory also predict the Unruh effect somehow? Answer: Many effective field theories we know about arise as low energy limits in string theory. These include Yang-Mills in the case of open strings and Einstein gravity in the case of closed strings. Since the Unruh effect is a very general consequence of QFT in curved spacetime, this is enough to say it will arise in string theory. Similarly, a large number of researchers are confident that string theory will not just include Hawking radiation (as it certainly does) but solve the related puzzles about what black hole microstates really are. The "does string theory predict" questions where the answer might be negative pertain to details about the standard model like the number of generations, the rarity of proton decay and a mass hierarchy with neutrinos at one end and quarks at the other. 
It also makes sense to ask "does string theory uniquely predict" these things because the number of consistent vacua might be extremely large. However, I share your concern that most introductions to the subject are too steeped in the language of the 1960s when string theory and quantum field theory were genuine competitors. They were both proposed methods of doing quantum mechanics relativistically and it took several years for QFT to "win" at modelling the events accessible to particle colliders. Indeed, fields are the most fundamental objects while particles have been relegated to a concept that only makes sense perturbatively. So where does that leave string theory? For that, we should note that approaches based on quantizing a worldsheet action have only ever gotten us as far as perturbative string theory. There are non-perturbative proposals which include matrix models, string field theory and (most pleasing to me) holographic duality with a QFT in a lower number of dimensions. These formulations do not make reference to fundamental strings. So ultimately, I think a string theorist should not believe in strings any more than a modern particle physicist believes in particles.
{ "domain": "physics.stackexchange", "id": 90142, "tags": "quantum-field-theory, particle-physics, string-theory, hawking-radiation, unruh-effect" }
svn co https://code.ros.org/svn/wg-ros-pkg/branches/trunk_cturtle/stacks/executive_trex \ executive_trex command is not working. Please help
Question: I was trying to install trex on my computer, but it is not being installed: the command above, from the link http://wiki.ros.org/trex/Tutorials/Getting%20started, does not seem to be working, and I don't see any console output for quite some time. Please help me. Originally posted by Jaskirat Singh on ROS Answers with karma: 1 on 2017-06-19 Post score: 0 Answer: code.ros.org has long since been taken off-line. You might want to search github for a fork / clone of that package. Edit: a quick search doesn't show anything that seems related. trex_executive is also listed on the Abandoned page on the wiki, so I expect that that package is no longer available. Originally posted by gvdhoorn with karma: 86574 on 2017-06-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Jaskirat Singh on 2017-06-20: So if I have to install trex, what should I do now? Comment by gvdhoorn on 2017-06-20: I'm afraid I can't help you with that. You could try and search with Google or some other search engine to see if you can find someone who has a fork / copy of it somewhere.
{ "domain": "robotics.stackexchange", "id": 28148, "tags": "ros" }
Reversibility of Hamiltonian dynamics
Question: I'm trying to understand a very basic property of Hamiltonian dynamics. I don't have a physics background but I do know some mathematics. I want to understand why negating the momentum is equivalent to reversing time in Hamiltonian dynamics. Suppose I have a Hamiltonian $H(q, p)$ which satisfies $H(q, p)=H(q, -p)$. The Hamiltonian equations of motion are $$ \dot{p} = -\nabla_q H(q, p) ~~~~~~~~ \dot{q} = \nabla_p H(q, p) $$ That if $(q(t), p(t))$ satisfies Hamilton's equations then so does $(q(-t), -p(-t))$ seems to be an oft-quoted fact in many works on classical mechanics. But how do I convince myself that it is true? Answer: If we set $p\rightarrow -p$ then, using $H(q,p)=H(q,-p)$, the Hamilton equations look like $$-\dot p=-\nabla_qH(q,p)~~~~~~~~~~~\dot q=\nabla_{-p}H(q,p)=-\nabla_pH(q,p)$$ Now we use the fact that $-\dot f=-\partial f/\partial t=\partial f/\partial (-t)$ (for $f$ either $q$ or $p$) to get the modified equations $$\frac{\partial p}{\partial(-t)}= -\nabla_qH(q,p)~~~~~~~~~~~\frac{\partial q}{\partial(-t)}=\nabla_pH(q,p) $$ These are the same equations, albeit with $t\rightarrow-t$. Thus we see that Hamilton's equations are symmetric under time reversal, and that this can be brought about by setting $p\rightarrow-p$.
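A numerical sanity check of this argument (my addition, not part of the original answer): integrate Hamilton's equations with the leapfrog scheme, which is exactly time-reversible in exact arithmetic, then flip the momentum, integrate forward again, and recover the initial state. The Hamiltonian $H = p^2/2 + q^4/4$ is even in $p$, as required:

```python
def leapfrog(q, p, dH_dq, dt, steps):
    """Leapfrog (kick-drift-kick) integration of q' = p, p' = -dH/dq
    for a unit-mass particle. Requires steps >= 1."""
    p -= 0.5 * dt * dH_dq(q)      # initial half kick
    for _ in range(steps - 1):
        q += dt * p               # drift
        p -= dt * dH_dq(q)        # full kick
    q += dt * p                   # final drift
    p -= 0.5 * dt * dH_dq(q)      # final half kick
    return q, p

# H(q, p) = p**2/2 + q**4/4, so dH/dq = q**3 (even in p, as required)
dH_dq = lambda q: q**3

q0, p0 = 1.0, 0.3
q1, p1 = leapfrog(q0, p0, dH_dq, dt=0.01, steps=1000)
# flip the momentum and run *forward* again: the trajectory retraces itself,
# so we land back on (q0, -p0) up to floating-point round-off
q2, p2 = leapfrog(q1, -p1, dH_dq, dt=0.01, steps=1000)
```

Here `abs(q2 - q0)` and `abs(p2 + p0)` are both at the round-off level, which is the momentum-flip statement made concrete.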
{ "domain": "physics.stackexchange", "id": 64569, "tags": "classical-mechanics, hamiltonian-formalism, phase-space, reversibility, time-reversal-symmetry" }
Why are actinides not commonly included in rare earth metals?
Question: According to the German Wikipedia, the rare earth metals include all elements of the third side group except actinium, and all lanthanides. Zu den Metallen der Seltenen Erden gehören die chemischen Elemente der 3. Nebengruppe des Periodensystems (mit Ausnahme des Actiniums) und die Lanthanoide – insgesamt also 17 Elemente. https://de.wikipedia.org/wiki/Metalle_der_Seltenen_Erden Since actinides (like lanthanides) should be part of the third side group, why aren't they considered part of the rare earth metals? Answer: It comes down to the history of their discovery, natural occurrence and applications. It's a fascinating story that you can read about here: http://www.vanderkrogt.net/elements/rareearths.php The rare earths (i.e. lanthanides) were discovered over a few decades in the late 1800s. They were laboriously separated from "rare earths" - the name used back then to describe the rare minerals (silicates, oxides, phosphates, etc.) that included these elements. These are all naturally occurring rocks. You can literally pick up from the earth a mineral that has rare earths in it. They are also stable (apart from Pm). Therefore, they are used in industry. There are rare earth mines, rare earth magnets, rare earth lasers. You get the stuff from the earth. Actinides were mostly discovered in the early to mid 1900s, after rare earths were a "thing". Other than U and Th, you do not find them in nature. You do not mine them from the earth, and they do not exist in "earths". Their radioactivity precludes their use in industrial applications other than a few niche uses. Although their chemical properties are similar to the lanthanides, there are many differences. They are simply not rare earths. Almost, but not quite. I'll also add that the definition of "rare earths" is a loose one. The broad chemical definition includes the lanthanides La–Lu, Y and Sc. As far as geologists care, Sc is not a rare earth element, and Y is only "honorary".
Industry and mining usually consider the lanthanide oxides as REO (rare earth oxides) and use REY (rare earths + yttrium) when they are talking also about Y, because it's not obvious to them that it is a rare earth.
{ "domain": "chemistry.stackexchange", "id": 8609, "tags": "terminology, elements, rare-earth-elements" }
What are the most common deep reinforcement learning algorithms and models apart from DQN?
Question: Recently, I have completed Atari Breakout (https://arxiv.org/pdf/1312.5602.pdf) with DQN. Similar to DQN, what are the most common deep reinforcement learning algorithms and models in 2020? It seems that DQN is outdated and policy gradients are preferred. Answer: There are several common deep reinforcement algorithms and models apart from deep Q networks (or deep Q learning). I will list some of them below (along with a link to the paper that introduced them), but note that some of these may not be state-of-the-art (at least, not anymore, and it's likely that all of these will be replaced in the future). Double DQN (DDQN) (2015) Duelling DQN (2015) Trust Region Policy Optimization (TRPO) (2015) Deep Deterministic Policy Gradient (DDPG) (2016) Asynchronous Advantage Actor-Critic (A3C) (2016) Hindsight Experience Replay (HER) (2017) Proximal policy optimization (PPO) (2017) Twin Delayed Deep Deterministic policy gradient algorithm (TD3) (2018) Soft Actor-Critic (SAC) (2018) For an exhaustive overview of deep RL algorithms and models, maybe take a look at this pre-print Deep Reinforcement Learning (2018) by Yuxi Li.
{ "domain": "ai.stackexchange", "id": 1852, "tags": "reinforcement-learning, reference-request, deep-rl" }
Compute conditional median of PANDAS dataframe
Question: I am new to Python/Pandas. Consider the following code: import pandas as pd import numpy as np df = pd.DataFrame({'Time': [0.0, 1.0, 2.0, 0.0, 1.0, 2.0, 0.0, 2.0, 0.0, 1.0, 2.0], 'Id': [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4], 'A': [10, 15, np.NaN, 11, 16, 25, 10, 15, 9, 14, 19]}) print(df) Output: A Id Time 0 10.0 1 0.0 1 15.0 1 1.0 2 NaN 1 2.0 3 11.0 2 0.0 4 16.0 2 1.0 5 25.0 2 2.0 6 10.0 3 0.0 7 15.0 3 2.0 8 9.0 4 0.0 9 14.0 4 1.0 10 19.0 4 2.0 I want to add a column Feature_1 which, for each row of the dataframe, compute the median of column A for ALL the values which have the same Time value. This can be done as follows: df['Feature_1'] = df.groupby('Time')['A'].transform(np.median) print(df) Output: A Id Time Feature_1 0 10.0 1 0.0 10.0 1 15.0 1 1.0 15.0 2 NaN 1 2.0 19.0 3 11.0 2 0.0 10.0 4 16.0 2 1.0 15.0 5 25.0 2 2.0 19.0 6 10.0 3 0.0 10.0 7 15.0 3 2.0 19.0 8 9.0 4 0.0 10.0 9 14.0 4 1.0 15.0 10 19.0 4 2.0 19.0 My problem is now to compute another feature, Feature_2, which for each row of the dataframe, compute the median of column A for OTHER values which have the same Time value. I was not able to vectorize this, so my solution with a for loop: df['feature_2'] = np.NaN for i in range(len(df)): current_Id = df.Id[i] current_time = df.Time[i] idx = (df.Time == current_time) & (df.Id != current_Id) if idx.any(): df['feature_2'][i] = df.A[idx].median() print(df) Output: A Id Time Feature_1 Feature_2 0 10.0 1 0.0 10.0 10.0 1 15.0 1 1.0 15.0 15.0 2 NaN 1 2.0 19.0 19.0 3 11.0 2 0.0 10.0 10.0 4 16.0 2 1.0 15.0 14.5 5 25.0 2 2.0 19.0 17.0 6 10.0 3 0.0 10.0 10.0 7 15.0 3 2.0 19.0 22.0 8 9.0 4 0.0 10.0 10.0 9 14.0 4 1.0 15.0 15.5 10 19.0 4 2.0 19.0 20.0 This is working but it is very slow as my dataframe has 1 million rows (but only four different IDs). Is it possible to vectorize the creation of Feature_2 ? I hope, I am clear enough. Live code can be found here. 
Answer: So, you want to get the medians of the groups by removing each value from the group in turn: group => individual removal of values NaN [ ] NaN NaN NaN 25.0 => 25.0 [ ] 25.0 25.0 15.0 15.0 15.0 [ ] 15.0 19.0 19.0 19.0 19.0 [ ] median 19.0 19.0 17.0 22.0 20.0 An other way of doing, beside manually reconstructing the group without the current value for each value, is to build the above intermediate matrix and ask for the median on each column. This will return a Series of length the length of the group, which is supported by SeriesGroupBy.transform. The steps to get the desired result are: build the matrix by repeating the input group as many time as its length; fill the diagonal of the matrix with NaNs; ask for the median by row/column depending on how you built the matrix. The function that can be fed to transform may look like: def median_without_element(group): matrix = pd.DataFrame([group] * len(group)) np.fill_diagonal(matrix.values, np.NaN) return matrix.median(axis=1) An other advantage of this approach is that you are able to reuse the same groups of elements and so cut on the need to recompute them again and again: import numpy as np import pandas as pd def median_without_element(group): matrix = pd.DataFrame([group] * len(group)) np.fill_diagonal(matrix.values, np.NaN) return matrix.median(axis=1) def compute_medians(dataframe, groups_column='Time', values_column='A'): groups = dataframe.groupby(groups_column)[values_column] dataframe['Feature_1'] = groups.transform(np.median) dataframe['Feature_2'] = groups.transform(median_without_element) if __name__ == '__main__': df = pd.DataFrame({ 'Time': [0.0, 1.0, 2.0, 0.0, 1.0, 2.0, 0.0, 2.0, 0.0, 1.0, 2.0], 'Id': [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4], 'A': [10, 15, np.NaN, 11, 16, 25, 10, 15, 9, 14, 19], }) compute_medians(df) print(df)
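The matrix trick is easy to check in isolation with plain NumPy, outside of pandas (a hypothetical helper name, using the `Time == 2.0` group from the question):

```python
import numpy as np

def leave_one_out_medians(values):
    """Median of the group with the i-th element removed, for every i:
    tile the group into a square matrix, NaN out the diagonal,
    and take the NaN-ignoring median of each row."""
    a = np.asarray(values, dtype=float)
    matrix = np.tile(a, (len(a), 1))     # one copy of the group per row
    np.fill_diagonal(matrix, np.nan)     # remove the i-th element in row i
    return np.nanmedian(matrix, axis=1)  # nanmedian skips NaNs, like pandas

print(leave_one_out_medians([np.nan, 25.0, 15.0, 19.0]))
# -> 19, 17, 22, 20: the Feature_2 values of the Time == 2.0 rows
```

Note that `nanmedian` also skips the NaN already present in the data, matching the default NaN handling of `Series.median` in pandas.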
{ "domain": "codereview.stackexchange", "id": 30611, "tags": "python, performance, statistics, pandas" }
Computer "Ports", port scans and daemons
Question: If I do a port scan from the internet, do I hit a bundle of wires that are physical ports into the motherboard (like a PCIe port)? Or do I actually go into the operating system and interact with the virtual or 'logical' ports of the OS? I am trying to understand what the difference is between these ports, and why, if they are logical, I would be able to scan any of them from the internet, because it implies my bytes are passing over hardware to reach these 'logical' ports inside the CPU and cache, rather than on the peripheral hardware (before they are rejected)… This feels wrong... But if they are actually physical, then where are they? And, if I have a daemon listening on a port that is "opened", does this mean that the program is listening to bytes that pass over a physical component on the motherboard, exterior to the cache-ram-cpu area? Answer: The ports referred to in a "port scan" are logical. Consider the fact that all network communication to a computer typically is over a single physical connection, usually Ethernet. There are multiple simultaneous conversations happening routinely. Think about how the network traffic for fetching email in one window, a web browser in another window, various apps accessing servers elsewhere, and all those viruses sending your passwords and bank account details over to my servers all work independently without getting each other's traffic. At the TCP and UDP level, this is done with virtual ports. Servers are waiting on "well known ports" to accept connections. For example, port 25 is usually reserved for requesting new connections of an SMTP server. Once a connection is established, it is assigned a free port. Subsequent traffic then references that port to identify the connection. A port scan essentially (this is over-simplified, but that's all that's needed for the basic concept) sends a connection request to every port (they are in a 16-bit namespace) to see which ones the server machine reacts to.
Some services of particular operating systems and versions are known to have servers on particular ports. Some of these are known to have vulnerabilities. One purpose of a port scan is to find out which ports your machine reacts to, and whether any of them might be exploited to make the machine do unintended things.
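The "send a connection request to every port" idea described above is exactly a TCP connect scan, and it only ever touches those logical TCP ports, never a physical connector. A minimal Python sketch (illustrative only; scan only hosts you own):

```python
import socket

def is_open(host, port, timeout=0.5):
    """TCP connect scan of one logical port: attempt a full three-way
    handshake; if the connection is accepted, something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, timed out, unreachable, ...
        return False

def scan(host, ports):
    """Return the subset of `ports` that accept connections."""
    return [p for p in ports if is_open(host, p)]
```

For example, `scan("127.0.0.1", range(20, 30))` would report which of those ports have a listener on the local machine.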
{ "domain": "engineering.stackexchange", "id": 2138, "tags": "computer-engineering, computer-hardware" }
Display element coordinate system using Ansys Mechanical APDL
Question: Is there any way to display the element coordinate system in the ANSYS Mechanical APDL GUI (the same way the global coordinate system is displayed)? Answer: Under the menu PlotCtrls > Symbols, there is a dialog where the option ESYS Element coordinate sys you are looking for is available. As previously answered, it's also available via the following command: /psymb,esys,1 ! or 0 to remove it. You can run /help,/psymb for more information.
{ "domain": "engineering.stackexchange", "id": 1613, "tags": "ansys, ansys-apdl" }
Which unit of measurement does SetForce() use?
Question: Hello everyone, I have a model of a quadrotor in Matlab, and to stabilize it at a fixed height I need a thrust of 6.3 N. I also have a simple model of the same quadrotor in Gazebo controlled through a PD controller, and it appears that the force applied to stabilize it is 62.7. Is it possible that the force is computed in units of 100 mN? Originally posted by erpa on ROS Answers with karma: 111 on 2011-10-03 Post score: 3 Answer: There is a ROS standard for that. It should be Newtons. Perhaps that model is not in compliance with REP-0103? Originally posted by joq with karma: 25443 on 2011-10-03 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by erpa on 2011-10-03: I didn't know about the ROS standard, but it seems that the model is still compliant with REP-0103.
{ "domain": "robotics.stackexchange", "id": 6844, "tags": "ros" }
Code runs Speed Test CLI using the python wrapper and then stores values within CSV
Question: I wanted to make use of the speedtest-cli while also incorporating pandas into the project. My initial goal, which I am posting here, was to take the values of the speed test and incorporate them into a CSV. I've been learning about Pandas and figured it would be a good way to accomplish the project. import speedtest import pandas as pd import os # Initial Speedtest variables s = speedtest.Speedtest() servers = [] # File path variables localpath = '' filepath = '' filename = '' # Functions # def speedtest(): """Makes use of the speed-test-cli python wrapper to get speed test data and returns in a dictionary""" s.get_servers(servers) s.get_best_server() s.download() s.upload() s.results.share() results_dict = s.results.dict() return results_dict def convert_mbps(arg): arg = arg / 1000000 return arg def csv_upload(localpath, filepath, filename): """Attempts to append a csv file and, if none exists, creates one in a specified path""" try: df2 = pd.read_csv(localpath) df2 = df2.append(df) df2.to_csv(os.path.join(filepath, filename), index=False) except OSError: df.to_csv(os.path.join(filepath, filename), index=False) # Speedtest and convert to Data Frame result = speedtest() df = pd.io.json.json_normalize(result) df2 = None # Uploads CSV csv_upload(localpath, filepath, filename) I've tried as best I could to document throughout my code, as well as to not statically specify parameters within functions, something which I've been bad about in the past. I've also tried to incorporate some exception handling, though I believe this needs more work in my code. I've tested the code within a cron job, and it functions exactly as I would like it to. Future developments will surround analyzing data collected from the CSV, trying to determine trends in bandwidth or latency as a function of time of day. Any feedback would be most appreciated. Answer: Don't use globals.
E.g., you refer to df and df2 in csv_upload, why not make them parameters? Use the standard if __name__ == '__main__': guard around the script's main body. It's common to just accept a flat file path, and expect the calling code to perform any os.path.join in the creation of said path. "upload" sounds like network activity, but you're saving the data to disk. This should probably follow a more typical naming scheme of to_csv or save or dump or something of that sort. This seems like a prime candidate for being a class. I'd suggest something like the following: import speedtest import pandas as pd import os class MySpeedtest: def __init__(self, servers, initial_data_path=None): # Initial Speedtest variables self.s = speedtest.Speedtest() self.servers = servers if initial_data_path is not None: self.df = pd.read_csv(initial_data_path) else: self.df = pd.DataFrame() def run_test(self): """Runs the speed test via the speed-test-cli python wrapper and appends the result to the DataFrame""" self.s.get_servers(self.servers) self.s.get_best_server() self.s.download() self.s.upload() self.s.results.share() results_dict = self.s.results.dict() self.df = self.df.append(pd.io.json.json_normalize(results_dict)) @staticmethod def convert_mbps(arg): return arg / 1000000 # Actually, this should probably be "arg / (2**20)" # 1 MB = 2**20 B def to_csv(self, path): """Attempts to append a csv file and, if none exists, creates one in a specified path""" self.df.to_csv(path, index=False) if __name__ == '__main__': # TODO: populate a list of servers, define localpath/filepath/filename tester = MySpeedtest(servers, localpath) tester.run_test() tester.to_csv(os.path.join(filepath, filename))
{ "domain": "codereview.stackexchange", "id": 30631, "tags": "python, python-3.x, csv, pandas, benchmarking" }
Gravity without force of gravity. What does it mean?
Question: As we know, the gravitation force in General Relativity is an apparent (fictitious) phenomenon. Let's look at this phenomenon in more detail. A brief preface From the Landau-Lifshitz "Volume 2. The Classical Theory of Fields" (LL2) (Problem 1 in §88), we can see the equation of motion (EOM) and an expression for the gravitation force (we put $c=1$): EOM (also known as $\frac{dp}{dt} =F$ or $ma = F$): \begin{equation} \sqrt{1-v^2} \frac{d}{ds}\frac{v^{\alpha}}{\sqrt{1-v^2}} + \lambda^{\alpha}_{\beta\gamma}\frac{mv^{\beta}v^{\gamma}}{\sqrt{1-v^2}} = f^{\alpha} \end{equation} Gravitation force (also known as $F = G\frac{mM}{r^2}$): \begin{equation} f_{\alpha} = \frac{m}{\sqrt{1-v^2}}\left[ -\frac{\partial \ln\sqrt{h}}{\partial x^{\alpha}} + \sqrt{h} \left(\frac{\partial g_{\beta}}{\partial x^{\alpha}} - \frac{\partial g_{\alpha}}{\partial x^{\beta}}\right){v^{\beta}}\right] \end{equation} where $h = g_{00}$, $g_{\alpha} = -\frac{g_{0\alpha}}{g_{00}}$. As one can see, the origin of the gravitation force in GR is mostly due to the curvature of time, because it depends on the metric component $g_{00}$. In LL2 this force is expressed in 3-vector form, which looks similar to the Lorentz force: \begin{equation}\label{3Force} \vec{F} = \frac{m}{\sqrt{1-v^2}} \left( -\vec\nabla\ln\sqrt{h} + \left[\vec{v}\times\left[ \sqrt{h}\vec\nabla\times\vec{g}\right] \right] \right), \end{equation} Now consider the EOM. Its LHS consists of two terms. The second term contains the spatial Christoffel symbols $\lambda^{\alpha}_{\beta\gamma}$, which also depend on the metric.
An attempt to ask a question Let's consider some kind of spherically symmetric metric (for now we are not interested in the distribution of matter that caused it): $$ds^2 = dt^2 - \frac{dr^2}{1-\frac{2M}{r}} - r^2(d\theta^2 + \sin^2\theta d\phi^2).$$ Nonzero components of $\Gamma^i_{jk}$: \begin{align} \Gamma^r_{rr} &= -\frac{M}{r^2}\frac{1}{1-\frac{2M}{r}},\\ \Gamma^r_{\theta\theta} &= 2M - r\\ \Gamma^r_{\phi\phi} &= (2M - r)\sin^2\theta\\ \Gamma^{\theta}_{r\theta} &= \Gamma^{\phi}_{r\phi} = \frac1r\\ \Gamma^{\theta}_{\phi\phi} &= -\sin\theta\cos\theta \\ \Gamma^{\phi}_{\theta\phi} &= \cot\theta, \end{align} $\Gamma\text{'s} = \lambda\text{'s}$. The nonzero Riemann tensor components: \begin{align} R_{r\theta r\theta} & = \frac{M}{r}\frac{1}{1-\frac{2M}{r}} \\ R_{r\phi r\phi} & = \frac{M}{r}\frac{\sin^2\theta}{1-\frac{2M}{r}} \\ R_{\theta\phi\theta\phi} &= - 2Mr\sin^2\theta \end{align} This metric is only spatially curved. Scalar curvature $R = 0$. According to the gravitation force expression, we conclude there is no force ($f_{\alpha} = 0$) in such a metric. Then (in the case $\theta =\frac\pi 2$, $\frac{d^2\theta}{ds^2} = 0$) the EOM has the simple form: \begin{align} \frac{d^2r}{ds^2} &= \frac{M}{r^2}\frac{1}{1-\frac{2M}{r}}\frac{(v^r)^2}{1-v^2} + (r-2M)\frac{(v^{\phi})^2}{1-v^2}\\ \frac{d^2\phi}{ds^2} &= \frac1r \frac{v^r v^{\phi}}{1-v^2}. \end{align} Due to the nonzero values of $\lambda^{\alpha}_{\beta\gamma}$ in the EOM, the geodesics are not straight lines in such a metric. If a particle has zero initial velocity, there will be no force of attraction to the origin. But if the particle has a radial velocity, it will be attracted to the origin. Thus, we have gravity without a force of gravity. How to correctly judge the reality of gravity, by the existence of the force $f_{\alpha}$ or by geodesic deviation? What is the meaning of the force of gravity in LL2?
Answer: The geodesic equation tells us: $$ \frac{d^2x^\alpha}{d\tau^2} = -\Gamma^\alpha{}_{\mu\nu}u^\mu u^\nu $$ The reason we often say the acceleration is due to the curvature in time is because in everyday life the four-velocity is dominated by the time component i.e. $u^t \gg u^r, u^\theta, u^\phi$. That means to a first approximation we can ignore the terms in $u^tu^a$ and $u^au^b$ ($a$ = $r$, $\theta$ or $\phi$) and consider only the term in $u^tu^t$. Then the only significant Christoffel symbols are $\Gamma^a{}_{tt}$. But you've chosen a metric for which (in the coordinates being used) $\Gamma^a{}_{tt}=0$. That means you are quite correct that a stationary object will not accelerate in space. Only objects moving in the spatial coordinates will accelerate in space. This isn't that unusual. The Ellis wormhole has exactly the same property. If you're interested I discuss this in my answer to How do spatial curvature and temporal curvature differ? (actually that is close to a duplicate of this question). So I guess the answer to your question is that the gravitational acceleration is due to the curvature in time in most circumstances but not all. You have managed to choose one of the exceptional cases. One last comment, be careful about statements like: This metric is only spatially curved Remember that the Christoffel symbols are not tensors and we can always choose a coordinate system that makes them all zero i.e. the normal coordinates. In your coordinates it's certainly true that the curvature is spatial, but that's just down to your choice of coordinates and it wouldn't be true of a different coordinate choice.
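The answer's claim that $\Gamma^a{}_{tt}=0$ for this metric (so a stationary particle does not accelerate) can be checked mechanically. A sketch with sympy, using the textbook Christoffel formula rather than any dedicated GR package; the coordinate ordering and symbol names are my own choices:

```python
import sympy as sp

t, r, theta, phi, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, theta, phi]

# the spatially curved metric from the question (signature +---)
g = sp.diag(1, -1/(1 - 2*M/r), -r**2, -r**2*sp.sin(theta)**2)
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d]
        * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
        for d in range(4)))

# Gamma^r_tt vanishes: a particle at rest feels no radial acceleration
print(christoffel(1, 0, 0))   # 0

# Gamma^r_rr matches the question's -M/r^2 * 1/(1 - 2M/r)
print(sp.simplify(christoffel(1, 1, 1) + (M/r**2)/(1 - 2*M/r)))   # 0
```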
{ "domain": "physics.stackexchange", "id": 48132, "tags": "general-relativity, forces, metric-tensor, geodesics" }
SQL JOIN QUERY with more than one items in common: Find the franchise pairs that operate together in more than one location
Question: I have a table called dfmt that lists the location, revenue and franchise. I want to find the franchise pairs that operate together in more than one location. So far, I have a query that finds the franchise pairs that operate in the same location: select T1.fr, T2.fr2 from dfmt T1 join (select fr as fr2, loc as loc2 from dfmt) as T2 on T1.fr < T2.fr2 and T1.loc = T2.loc2 order by loc; I do not know how to go from here to find the franchise pairs that operate together in more than one location. Another query that may be useful is one that finds the franchise that generates the maximum revenue in more than one location. select fr, count(*) from tst2 where rev in (select max(rev) from tst2 group by loc) group by fr having count(*)>1; Answer: Something like this - use GROUP BY to gather all the "franchise pairs" and count how many locations each pair shares. SELECT X.fr, X.fr2, COUNT(X.loc) as count FROM ( select T1.fr, T2.fr2, T1.loc from dfmt T1 join (select fr as fr2, loc as loc2 from dfmt) as T2 on T1.fr < T2.fr2 and T1.loc = T2.loc2 ) AS X GROUP BY fr, fr2 HAVING count > 1; NOTE: This relies on the inner query having distinct results - i.e. you can't have, say, "Best Western" and "Raddison" twice for the same location.
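The answer's query can be exercised end to end against a small in-memory database. A sketch using Python's sqlite3 with invented rows (the COUNT alias is renamed cnt here, since count shadows the SQL function name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dfmt (loc TEXT, rev INTEGER, fr TEXT);
    INSERT INTO dfmt VALUES
        ('NYC', 10, 'A'), ('NYC', 20, 'B'),
        ('LA',  15, 'A'), ('LA',  25, 'B'),
        ('SF',  30, 'A'), ('SF',  12, 'C');
""")

# same shape as the answer's query: self-join on location with fr < fr2
# to form each pair once, then HAVING keeps pairs sharing > 1 location
rows = conn.execute("""
    SELECT X.fr, X.fr2, COUNT(X.loc) AS cnt
    FROM (SELECT T1.fr, T2.fr2, T1.loc
          FROM dfmt T1
          JOIN (SELECT fr AS fr2, loc AS loc2 FROM dfmt) AS T2
            ON T1.fr < T2.fr2 AND T1.loc = T2.loc2) AS X
    GROUP BY X.fr, X.fr2
    HAVING cnt > 1
""").fetchall()

print(rows)   # [('A', 'B', 2)] -- only the A/B pair shares two locations
```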
{ "domain": "codereview.stackexchange", "id": 41077, "tags": "sql, join" }
Comparing the experimental and calculated UV/vis spectra for ethene
Question: I am trying to compute UV/Vis absorption spectrum for Ethene using Gaussian 09. I created the ethene molecule in GaussView 5, and cleaned it. Then I optimized the molecule and checked if I got the minimum energy conformation with the following: # freq=noraman cphf=noread b3lyp/6-31g(d) geom=connectivity formcheck integral=grid=ultrafine scf=maxcycle=1000 Next I performed the energy calculation as follows: # td b3lyp/6-31g(d) geom=connectivity As a result I got a single peak with the maximum at 134 nm (see picture below) which seems to be way off from what I googled around on the Internet, which is like 173 nm or 180 nm. Can anyone please help me figure out if I have done it correctly and whether I am way off the experimental data? Answer: In short, there are two obvious problems with the setup OP uses for TD-DFT calculations: B3LYP functional is not a good choice for TD-DFT. 6-31G(d) basis is usually too small. At M06-2X/Def2-TZVP level I get a maximum at ~160 nm, which, taking the accuracy of the TD-DFT approximations into account, is close enough to the experimental value.
{ "domain": "chemistry.stackexchange", "id": 6387, "tags": "quantum-chemistry, computational-chemistry, spectroscopy" }
Filtering performance on Poisson noise with quadratic data-fidelity
Question: We recently performed some work on signal filtering/component separation (sparse signal/trend/noise). The cost function contains: A quadratic data fidelity term, Some smoothed $\ell_1$ terms for sparsity and positivity promotion. This was initially designed for Gaussian noise, and performed satisfactorily. Reviewers asked to test the algorithm on the same signals with Poisson noise, and it performed well also. In many works concerned with Poisson-Gaussian noise mixtures, authors usually use more involved penalty functions and/or use variance stabilization transforms (such as Anscombe's). Presenting the work to different signal processing folks, several colleagues (signal and image persons equally) said they were "not surprised" that a simple quadratic term would work well with Poisson noise too. Since I am less a theoretician than a frequentist, and a bayesian (much) less than a frequentist, I do not understand the reason behind the relatively good behavior with both Gaussian and Poisson perturbations. May a reader offer practical or theoretical hints behind this "non surprise"? Answer: For large intensities / large "bins", i.e. "areas for which events are counted and accumulated", Poisson processes lead to nearly Gaussian distributed individual values -- basically, without trying to derive this, I think that's the application of the CLT on a lot of realizations of a point process. EDIT: Shameless plug: I really really like the wikipedia page on "Relationships among probability distributions"; notice the arrow between the Poisson and the normal distribution:
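The CLT intuition in the answer is easy to see numerically: for large rate $\lambda$, Poisson samples behave like $\mathcal{N}(\lambda, \lambda)$, which is exactly the regime where a quadratic (Gaussian) data-fidelity term is a reasonable model. A quick sketch with numpy; the rate and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1000.0                      # large intensity / large bins
samples = rng.poisson(lam, size=200_000)

# Poisson(lam) has mean lam and variance lam; for large lam it is
# close to the Gaussian N(lam, lam)
print(samples.mean())             # ~ 1000
print(samples.std())              # ~ sqrt(1000) ~ 31.6

# the skewness of Poisson(lam) is 1/sqrt(lam): small for large lam,
# so the distribution is nearly symmetric, like a Gaussian
z = (samples - lam) / np.sqrt(lam)
print(np.mean(z**3))              # ~ 0.03
```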
{ "domain": "dsp.stackexchange", "id": 3096, "tags": "noise, filtering, optimization" }
Magnetic moments of tetrahedral Cobalt (II) and (III) complexes
Question: From the spin-only formula we can predict that for tetrahedral cobalt(II) complexes $$\mu_{eff} = 3.87 \mu_B $$ This ignores orbital angular momentum effects, which result in higher magnetic moments for the tetrahedral halide complexes (Hund's 3rd rule results in spin-orbit coupling "together"). Why do the Co(II) cyanide and Co(III) alkyl complexes have lower magnetic moments than the spin-only formula predicts? I have quoted values of 2.15 $\mu_B$ and ~3 $\mu_B$ for these cases. Answer: The question isn't really clear exactly what complexes you are referring to. However, cyanide is quite a high field ligand, so perhaps the Co(II) cyanide compound is low spin while the 3.87 value is for high spin (S = 3/2). I don't see how it could be low spin if it is really tetrahedral, but isn't $\ce{Co(CN)_4^2-}$ square planar? For Co(III) alkyl, alkyl ligands can also be relatively high field, and it is at least theoretically possible to have a low-spin (S = 1) tetrahedral Co(III) complex. However, it is rare for tetrahedral complexes to be low spin because the ligand field splitting energy is lower than for octahedral. If those are not the reasons, see "Iron, Cobalt, and Nickel Complexes having Anomalous Magnetic Moments", Quarterly Reviews, Chemical Society, volume 22, pp 457-498.
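The spin-only value quoted in the question comes from $\mu_{so} = \sqrt{n(n+2)}\ \mu_B$ for $n$ unpaired electrons; a quick Python check of the numbers being compared:

```python
import math

def spin_only_moment(n_unpaired):
    """Spin-only magnetic moment in Bohr magnetons: sqrt(n(n+2))."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

# high-spin d7 Co(II): three unpaired electrons
print(round(spin_only_moment(3), 2))   # 3.87

# one unpaired electron (e.g. low-spin d7) gives the much lower value
# that the ~2.15 Bohr magneton cyanide figure is closer to
print(round(spin_only_moment(1), 2))   # 1.73

# two unpaired electrons, for comparison (the answer's S=1 Co(III) case)
print(round(spin_only_moment(2), 2))   # 2.83
```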
{ "domain": "chemistry.stackexchange", "id": 2392, "tags": "coordination-compounds, transition-metals, magnetism" }
Dehydration of primary alcohols without a β-hydrogen
Question: What happens in dehydrations of primary alcohols (in presence of an acid catalyst at high temperatures) which don't have a β-hydrogen? For example, in the case of neopentyl alcohol, will it dehydrate as it doesn't have a β-hydrogen? Obviously, an E2 reaction will not be possible, but can it react via an E1 reaction, with a rearrangement in the carbocation step? Answer: Yes. Most dehydration reactions proceed via the E1 mechanism in acidic conditions, because that is the most common way for them to proceed. A proton from the acidic medium attacks the lone pair of oxygen, and this in turn leads to the cleavage of the C-O bond, as it is weaker than the C-H bond. This leads to the formation of a carbocation on the carbon atom. As you said, this undergoes rearrangement before a proton is released and a pi bond forms.
{ "domain": "chemistry.stackexchange", "id": 14215, "tags": "organic-chemistry, reaction-mechanism, elimination" }
Derivative of mean anomaly w.r.t true anomaly
Question: I am trying to work through the workings in this paper. At one point (Eq. 10) the authors define the usual mean anomaly, $\beta$, the true anomaly, $\psi$, and the eccentric anomaly $u$ in the usual way: $$ \beta = u - e \sin u \, \, ; \, \, \cos \psi = \frac{\cos u - e}{1 - e \cos u}$$ for eccentricity $e$. Later on in the paper (Eq. 17), the authors declare that, given the definitions presented for the various anomalies, $$ d \beta = \frac{(1-e^2)^{3/2}}{(1 + e \cos \psi)^2} d \psi$$ however, I am struggling to reproduce the equation. Can anyone shed some light on how this is derived? Thanks Answer: Differentiating $$\beta=u-e\sin{u}\tag{1}$$ gives $$\frac{d\beta}{du}=1-e\cos{u}\tag{2}.$$ Differentiating $$\psi=\cos^{-1}\left(\frac{\cos u-e}{1-e\cos u}\right)\tag{3}$$ gives $$\frac{d\psi}{du}=\frac{(1-e^2)^{1/2}}{1-e\cos u}\tag{4}.$$ Dividing (2) by (4) gives $$\frac{d\beta}{d\psi}=\frac{(1-e\cos u)^2}{(1-e^2)^{1/2}}\tag{5}.$$ Solving (3) for $\cos u$ gives $$\cos u=\frac{\cos\psi+e}{1+e\cos\psi}\tag{6}$$ so $$1-e\cos u=\frac{1-e^2}{1+e\cos\psi}\tag{7}.$$ Substituting this into (5) gives the desired result $$\frac{d\beta}{d\psi}=\frac{(1-e^2)^{3/2}}{(1+e\cos\psi)^2}\tag{8}.$$ Fortunately all of these intermediate results are relatively simple expressions.
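The algebra in the answer can be sanity-checked numerically by computing $d\beta/d\psi$ through the eccentric anomaly with finite differences and comparing against the closed form (8); the sample values of $e$ and $u$ below are arbitrary:

```python
import math

def dbeta_dpsi_numeric(e, u, h=1e-6):
    """Finite-difference d(beta)/d(psi), both parameterized by u."""
    beta = lambda u: u - e * math.sin(u)
    psi = lambda u: math.acos((math.cos(u) - e) / (1 - e * math.cos(u)))
    return (beta(u + h) - beta(u - h)) / (psi(u + h) - psi(u - h))

def dbeta_dpsi_closed(e, u):
    """Equation (8), with cos(psi) substituted from (6)."""
    cospsi = (math.cos(u) - e) / (1 - e * math.cos(u))
    return (1 - e**2)**1.5 / (1 + e * cospsi)**2

# check over a grid of eccentricities and eccentric anomalies in (0, pi)
for e in (0.1, 0.5, 0.9):
    for u in (0.3, 1.2, 2.5):
        assert abs(dbeta_dpsi_numeric(e, u) - dbeta_dpsi_closed(e, u)) < 1e-5
print("Eq. (8) matches the finite-difference derivative")
```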
{ "domain": "physics.stackexchange", "id": 62761, "tags": "newtonian-mechanics, astrophysics, orbital-motion, astronomy" }
Gradient of a line integral of a vector field
Question: I need some advice on how to perform the gradient of a line integral of a vector field. My problem refers to the Aharonov-Bohm Effect as it is discussed in the QM book from David Griffiths, as it follows in the image linked here $\rightarrow$ https://i.stack.imgur.com/mYrxp.jpg I don't understand how the gradient (nabla operator in $R$) is acting on the integral in g. Answer: Whoever wrote that is either being very loose and unusual with notation or misstating what's happening. It is true that the derivative of $f(x-y)$ w.r.t. the argument is the negative of the derivative w.r.t. a variable appearing with a minus sign ($x-y$ being such a combination): $$f'(x-y) \equiv \frac{df(x-y)}{d(x-y)} $$ $$ \implies f'(x-y) = - \frac{\partial f(x-y)}{\partial y}$$ While that's correct (and hence the parenthetical statement "$\nabla_R = -\nabla$ when acting on a function of $r-R$" is also correct), that is not what's happening here. $g= \int_{R}^r A(r') \cdot dr'$ is not a function of $r-R$. What's happening is just that the derivative of an integral with respect to its upper limit is simply the integrand evaluated at that limit, and is the negative of that for the lower limit: $$\frac{\partial (\int_{a}^b f(x)dx)}{\partial a}=-f(a)$$ It's not clear what is meant by $\nabla_R =-\nabla$. It could be saying $$\nabla_R g(r-R) = -\nabla_{(r-R)} g(r-R)$$ where the RHS is w.r.t. the argument, usually just written $\nabla g$. Or it could also mean $$\nabla_R g(r-R) =-\nabla_r g(r-R) $$ Because both are true. But again, that is not the explanation for the relationship giving $\nabla_R (\int_{R}^r A) = -A(R)$. The explanation is the derivative of an integral with respect to its limit, technically from the fundamental theorem of calculus.
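The one-dimensional version of the key fact, $\partial_a \int_a^b f(x)dx = -f(a)$, is easy to confirm numerically; a sketch with an arbitrary smooth integrand:

```python
import math

def integral(f, a, b, n=10_000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = math.cos
a, b = 0.7, 2.0

# central finite difference of g(a) = integral_a^b f(x) dx
eps = 1e-4
g = lambda a: integral(f, a, b)
deriv = (g(a + eps) - g(a - eps)) / (2 * eps)

print(deriv)           # ~ -0.7648, i.e. -cos(0.7) = -f(a)
```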
{ "domain": "physics.stackexchange", "id": 82511, "tags": "quantum-mechanics, electromagnetism, vector-fields, berry-pancharatnam-phase, aharonov-bohm" }
Reactivity series of metal
Question: I have learned about metals and their reactivity series. However, why do metals have different reactivities? What factors produce the different reactivity rates of metals? All I can think of is the number of electrons causing it. Answer: I suppose that with reactivity you mean a metal's tendency to become oxidized. There are several reasons and it depends on the metals you are analyzing. If you are referring to alkali or alkaline earth metals, reactivity increases as you go down the group. They all have a large oxidation potential since it's relatively easy to remove an electron (or two) from its orbital. As you move down the group, atoms are larger and the valence electrons will be less attracted by the nucleus. Therefore, they are easier to remove and the metal will be more reactive. For d-block transition metals (most of them are considered "transition" metals) it's a whole different story. There are many factors that contribute to their reactivity and it's harder to specify general trends. Many of them (especially the ones on the second and third transition rows) can be oxidized to different oxidation states depending on the conditions. You need to look at their electron configurations and reduction potentials to start rationalizing their properties. Depending upon the electron configuration, they will be able to lose (or even gain) fewer or more electrons and with different ease. I don't know your background with chemistry, so I won't write a more detailed explanation, but if you clarify that you have a more concrete interest in any explanation, just ask.
{ "domain": "chemistry.stackexchange", "id": 11254, "tags": "reactivity" }
Git commit-msg URL shortener
Question: I have just written my first git hook script. It is very simple: it finds any URLs in the commit message and uses the Google URL shortener to rewrite the URL nicely. It is located here. I feel it could be improved immensely (as it is my first) and would love to have your input. #! /bin/bash message=`cat $1` shorten=`sed -ne 's/.*\(http[^"]*\).*/\1/p' $1` echo "Shortening Url $shorten ...." new_url=`curl -s https://www.googleapis.com/urlshortener/v1/url \-H 'Content-Type: application/json' \-d "{'longUrl': '$shorten'}"` latest=`echo $new_url | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["id"]';` final=${message/$shorten/$latest} echo $final > $1 Answer: First of all, the `cmd` form of running commands and capturing their output is deprecated. Use the recommended, modern way everywhere: $(cmd) Safety Be careful with spaces in filenames. These commands will break: message=`cat $1` shorten=`sed -ne 's/.*\(http[^"]*\).*/\1/p' $1` It will be safer like this: message=$(cat "$1") shorten=$(sed -ne 's/.*\(http[^"]*\).*/\1/p' "$1") The pattern there in the sed is not very safe. The text "add support for http protocol" will match. I think you want to be more strict, maybe something like: longUrl=$(sed -ne 's/.*\(https\{0,1\}:\/\/[^"]*\).*/\1/p' "$1") Note: if you have GNU sed (you're in Linux, or have gsed), then instead of the tedious https\{0,1\} you can simplify as https\?, though it's less portable. And what if there are multiple URLs in the script? It will fail. You probably want to loop over the results of the sed. Or take a lazier approach and just ensure that sed will always produce at most one line: longUrl=$(sed -ne 's/.*\(https\{0,1\}:\/\/[^"]*\).*/\1/p' "$1" | head -n 1) What if there are no matches? An if would be good, as in that case you won't need the curl call, and it's better to not overwrite a file if you don't really have to. When you echo $somevar, whitespaces inside like tabs and newlines would be replaced by spaces.
To prevent that, quote the variable: echo "$message" > "$1" Naming Your variable names are not so good: shorten is the output of sed, a long url. Something like longUrl would have been better. newUrl is the output of curl, a json text. Something like json would have been better. Unnecessary things In the curl command, you don't need the backslashes in \-H and \-d flags. And instead of saving the output of the curl and then echo-ing to python, it would be better to skip that intermediary variable and just pipe it directly, like this: shortUrl=$(curl -s https://www.googleapis.com/urlshortener/v1/url -H 'Content-Type: application/json' -d "{'longUrl': '$longUrl'}" | python -c 'import json, sys; print(json.load(sys.stdin)["id"])') Also notice in that python that I skipped the intermediary obj variable and just used the ["id"] directly on json.load(...). Suggested implementation #! /bin/bash message=$(cat "$1") longUrl=$(sed -ne 's/.*\(https\{0,1\}:\/\/[^"]*\).*/\1/p' "$1" | head -n 1) if test "$longUrl"; then echo "Shortening Url $longUrl ..." shortUrl=$(curl -s https://www.googleapis.com/urlshortener/v1/url -H 'Content-Type: application/json' -d "{'longUrl': '$longUrl'}" | python -c 'import json, sys; print(json.load(sys.stdin)["id"])') message=${message/$longUrl/$shortUrl} echo "$message" > "$1" fi This is still not perfect, because it doesn't handle multiple urls. I might add that later, gotta go now. Online shell checker This site is pretty awesome: http://www.shellcheck.net/# Copy-paste your script in there, and it can spot many mistakes that are easy fix.
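The multiple-URL case the review leaves open is mostly a text-munging problem, which is arguably easier outside the shell. A sketch in Python (the goo.gl API used by the original hook has since been shut down, so shorten below is a deterministic stand-in you would replace with a real service call):

```python
import re

URL_RE = re.compile(r'https?://[^\s"]+')

def shorten(url):
    # stand-in for a real shortener API call (e.g. via urllib);
    # the Google URL shortener used in the original hook no longer exists
    return "https://sho.rt/" + str(abs(hash(url)) % 10_000)

def shorten_message(message):
    """Replace every URL in a commit message with its shortened form."""
    return URL_RE.sub(lambda m: shorten(m.group(0)), message)

msg = ("Fix parser, see https://example.com/very/long/path?q=1 "
       "and https://example.org/another/long/one")
out = shorten_message(msg)
print(out)   # both URLs replaced, rest of the message untouched
```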
{ "domain": "codereview.stackexchange", "id": 8626, "tags": "bash, url, git" }
How to set Rviz to 60 Hz?
Question: Hello, I am trying to set Rviz to 60 Hz according to David Gossow's post here: http://ros-users.122217.n3.nabble.com/Oculus-Rift-Integration-in-RViz-td4020193.html How can I do so? Thanks! Originally posted by kubiak54 on ROS Answers with karma: 43 on 2016-01-05 Post score: 3 Answer: If you're using a newer version of ROS such as Indigo or Jade, the ability to set the frame rate is built into rviz. In the "Displays" panel (usually on the left side of the window), within "Global Options", there should be an option for "Frame Rate", which defaults to 30. Change this value to 60. Originally posted by ahendrix with karma: 47576 on 2016-01-05 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by kubiak54 on 2016-01-05: Doh I was totally blind haha, thanks.
{ "domain": "robotics.stackexchange", "id": 23352, "tags": "rviz" }
Problems running the ccny_rgbd package
Question: I'm trying to get ccny_rgbd working on a turtlebot. It seems to be kind of working but I think it's still not quite working as it should. On the turtlebot I launch: roslaunch ccny_openni_launch openni.launch publish_cloud:=true I get a complaint in the terminal about a missing camera calibration file but it still seems to run anyway. Then on the workstation I launch: roslaunch ccny_rgbd vo+mapping.launch I run the corresponding rViz config file. At this point I did get an rgbd image in rViz but it takes a very long time, several minutes. I left it running for a while and it did update the rgbd image but the update rate was very slow. Again several minutes per image. Other times it doesn't give me the rgbd image at all after waiting up to 10min. I'm sure this isn't normal but I'm not really sure where to even start trying to fix it. Maybe my computer is just too slow for this? I also tried to record a bag file to try and process the data later but ran into a problem with the TF data being old. Not sure what's going on there either. It seems to run fine when I'm not trying to record the bag file. Originally posted by jd on ROS Answers with karma: 62 on 2013-03-10 Post score: 0 Original comments Comment by Ivan Dryanovski on 2013-03-10: Can you post information about your setup? What's your processor? Is anything running over a wifi network? Comment by jd on 2013-03-10: The turtlebot has a dual intel atom 1.86Ghz processor. The workstation I'm worried about. It's an acer laptop with an AMD triple core at 2.1Ghz. I've tried it on two networks; both seem to give the same result. I do have an unused router that I could use to set up a network just for turtlebot. Answer: The turtlebot processor might be a little slow for the VO. Try it with QVGA data. Post the output of your console here, and I will try to help you further. Alternatively, you can try streaming the data out of the turtlebot and running the VO on the laptop.
You might run into network issues, so again, setting to QVGA is recommended to reduce the bandwidth. Originally posted by Ivan Dryanovski with karma: 4954 on 2013-03-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by jd on 2013-03-14: Thanks. I actually was testing it again today before I saw your comment so I don't have any output right now. I connected the workstation and turtlebot directly to my router to see if the network was a problem and it works much better now. I'll try lowering the resolution now too.
{ "domain": "robotics.stackexchange", "id": 13283, "tags": "ros, ccny-rgbd" }
Linear Momentum in General Relativity
Question: My question is, does a particle moving in a straight line at constant velocity through empty space create "frame dragging" that would tend to entrain other bodies in the direction of its motion, via the $T_{0k}$ terms in the stress energy tensor? Is there a metric that describes this case? My thought process was that this situation, from the point of view of a test particle, is indistinguishable from the Kerr metric very close to a rotating body, if we only consider an instant in time, when the test particle acquires linear momentum tangential to the spherical body. Edit: I suppose in the frame of the "moving" body, that momentum does not exist. But that just changes my question to: how do the $T_{0k}$ terms affect the local spacetime metric in general? A related point, if a particle has a very high linear momentum, its total energy, which can be thought of as "relativistic mass," should create gravitational effects. How does this manifest in different frames? I know this seems like multiple questions, but I think they are all facets of the same question. Answer: Is there a metric that describes this case? Yes. It is the Schwarzschild metric (valid outside of the gravitating body if we are talking about something like a star). When written in the form where the linear motion of the gravitating body is explicit, the metric is usually referred to as the boosted Schwarzschild metric. Of course, boosted Schwarzschild is related to the usual form of Schwarzschild metric via a diffeomorphism; however, this is a large diffeomorphism and the vector field generating it does not vanish at infinity, so arguably, this metric does describe a distinct physical situation. For more details and references on the boosted Schwarzschild metric see this answer of mine.
… does a particle moving in a straight line at constant velocity through empty space create "frame dragging" that would tend to entrain other bodies in the direction of its motion, via the $T_{0k}$ terms in the stress energy tensor? The Wikipedia page does reference linear frame-dragging effects. Consider the following situation: a test particle is initially at rest and a mass $M$ (let's call it a star) flies by it. In the reference frame of the particle, the gravitational field of the star has both gravitoelectric and gravitomagnetic components. The particle would accelerate toward the star and, after gaining a velocity component toward it, would interact with the gravitomagnetic field of the star (via an analog of the Lorentz force), gaining a velocity component parallel to the velocity of the star. This effect could be described as “entraining”. Also, if the test body is equipped with a gyroscope, it would “wobble” during the star's flyby, analogously to the Lense–Thirring precession. Keep in mind, however, that a too literal, mechanistic interpretation of this frame–dragging via some kind of “ether” could be unsatisfactory (see Rindler, 97). Additionally, the extreme manifestations of frame–dragging associated with the strong gravity of rotating black holes, such as the ergosphere, the Penrose process, and the Blandford–Znajek process, all have analogues for a linearly moving nonrotating black hole (see Penna, 2015). The energy powering such processes is the kinetic energy of the moving black hole. One could formulate the laws of black hole mechanics that include the linear momentum. I suppose in the frame of the "moving" body, that momentum does not exist. But that just changes my question to: how do the $T_{0k}$ terms affect the local spacetime metric in general? GR is a nonlinear theory, so generally one cannot attribute physical effects to specific components of a tensor in a given reference frame.
Moreover, since there are constraints on the stress–energy tensor (in the form of its conservation law), we cannot just take the components $T_{0k}$ of, say, a star moving with constant velocity and plug them alone into the Einstein equations (even linearized ones), since those components by themselves do not satisfy the constraints. Similarly in electromagnetism, one cannot ask, What is the EM field of a current at a given point? (but one can talk of the EM field of a current loop or the EM field of a charge moving along a given trajectory, since in these cases the constraints are satisfied). A related point, if a particle has a very high linear momentum, its total energy, which can be thought of as "relativistic mass," should create gravitational effects. How does this manifest in different frames? Tensor objects describing gravitational fields would transform according to the rules of general covariance. When the linearized approximation is valid, we could organize our reference frames using (approximately) cartesian coordinates and just use special relativistic transformation rules for perturbations of the flat Minkowski background globally.
{ "domain": "physics.stackexchange", "id": 88379, "tags": "general-relativity, spacetime, momentum, metric-tensor, stress-energy-momentum-tensor" }
Why does the negative reward function in LQR encourage convergence to the origin?
Question: I was reading Stanford's CS 229 materials on Linear Quadratic Regulation (LQR) (Lecture note 13, YouTube Lecture 18, around minute 36), and it mentions that: [...] the quadratic formulation of the reward is equivalent to saying that we want our state to be close to the origin. For example, if $U_t = I_n$ (the identity matrix) and $W_t = I_d$, then $R_t = -\Vert(s_t)\Vert^2-\Vert(a_t)\Vert^2$, meaning that we want to take smooth actions (small norm of $a_t$) to go back to the origin (small norm of $s_t$). Why is that? Why does a smaller norm mean being closer to the origin? $s_t$ at any time point is just an $n$-dimensional vector which can take any value, right? Answer: Minimizing $s^TUs=s^Ts=\|s\|^2$ minimizes the distance from the origin, because the norm of a vector is its Euclidean distance from the origin: $$\|s\|= \sqrt{s_1^2+s_2^2+\dots+s_n^2} = \sqrt{(s_1-0)^2+(s_2-0)^2+\dots+(s_n-0)^2}$$
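The identity the answer relies on, $s^T I s = \|s\|^2$, together with the fact that the norm is the distance to the zero vector, is a one-liner to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
s = rng.normal(size=n)
U = np.eye(n)                      # the U_t = I_n case from the question

quadratic_cost = s @ U @ s         # s^T U s
norm_sq = np.linalg.norm(s) ** 2   # squared Euclidean norm

print(np.isclose(quadratic_cost, norm_sq))               # True

# the norm is literally the distance from s to the origin (zero vector)
dist_from_origin = np.sqrt(np.sum((s - np.zeros(n)) ** 2))
print(np.isclose(np.linalg.norm(s), dist_from_origin))   # True
```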
{ "domain": "cs.stackexchange", "id": 10824, "tags": "optimization, machine-learning, dynamic-programming" }
Is it possible to heat plasma to the point where there is no electrical resistance?
Question: I was discussing Ohm's Law with my teacher and asked about what would happen if the resistance of a circuit was zero, to which he replied that the wire would melt. This got me wondering about plasma/ionised gas and whether or not it would have zero/negative resistance. Answer: The conductivity of plasmas is very high, though not infinite. In a metal wire the applied voltage accelerates the conduction electrons, but the electrons collide with and scatter off the atoms that make up the crystal lattice of the metal. This transfers energy from the electrons to the metal and the metal heats up as a result. The energy lost as heat is what leads to resistance. In a plasma applying a voltage causes the electrons to accelerate, but because a plasma has a much lower density than a metal the scattering and consequent energy loss of those electrons is very low. There will still be some scattering because electrons will scatter off the positive ions and any unionised gas molecules and indeed each other. So although the resistance is low it won't be zero.
{ "domain": "physics.stackexchange", "id": 21113, "tags": "electricity, plasma-physics" }
Top wiki pages as an app
Question: I recently did an interview task. I was rejected because of bad code quality. There were two tasks; here I will present the first, and the second will be posted later. This project is available on GitHub First: Table view Please create an iPhone application that will present the list of most popular wikis from wikia.com Requirements The application should include one screen using UITableView Each row in the table should contain: wiki title, wiki url, wiki thumbnail image API calls should not block the UI Notes UI Code should be done in code, no *.xib or *.storyboard files The application can have additional screens The application should be "release ready" (shall not use any private API, etc.) The applicant can use any library that will help him/her in achieving the expected result Please implement this in Objective-C only WikiApi.h (5 lines) #import <UIKit/UIKit.h> @interface WikiApi : NSObject +(void)fetchTop10WithThumbnailDownload:(void (^)(NSString *key,UIImage *thumbnail))onThumbnailDownload complete:(void (^)(NSDictionary* json))complete; +(void)fetchThumbnail:(NSString*)link complete:(void (^)(UIImage *thumbnail))complete; @end WikiApi.m (106 lines) #import "WikiApi.h" @implementation WikiApi +(void)fetchTop10WithThumbnailDownload:(void (^)(NSString *key,UIImage *thumbnail))onThumbnailDownload complete:(void (^)(NSDictionary* json))complete { NSURL *url = [[NSURL alloc]initWithString:@"http://www.wikia.com/wikia.php?controller=WikisApi&method=getList&lang=en&limit=10"]; NSURLRequest*request = [[NSURLRequest alloc]initWithURL:url]; //Request list of wikis [[[NSURLSession sharedSession] dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { if (error != NULL) { NSLog(@"%@",[error localizedDescription]); return; } NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error]; //Getting ids for details request NSMutableArray *ids = [[NSMutableArray alloc]init]; for (NSDictionary 
*wikia in json[@"items"]) { NSString *wikiaid = [[NSString alloc]initWithFormat:@"%@",wikia[@"id"]]; [ids addObject:wikiaid]; } //Fetching Details [self fetchDetails:ids complete:^(NSDictionary *links) { //Async downloading thumbnails for (NSString *key in [links allKeys]) { NSString *link = links[key]; [self fetchThumbnail:link complete:^(UIImage *thumbnail) { //pass thumbnail and key to complete block. Called each time a thumbnail is downloaded onThumbnailDownload(key,thumbnail); }]; } }]; //Pass wikis data to complete block complete(json); }] resume]; } +(void) fetchDetails:(NSArray*)fetchedWikias complete:(void (^)(NSDictionary *links))complete { //Building string for detail request, one request for all ids NSString *preUrl = [self buildURLForDetails:fetchedWikias]; NSURL *url = [[NSURL alloc]initWithString:preUrl]; NSURLRequest *request = [[NSURLRequest alloc]initWithURL:url]; [[[NSURLSession sharedSession]dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { if (error != NULL) { NSLog(@"%@",[error localizedDescription]); } NSDictionary *fetchedDetails = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error]; //Getting links from results and passing them to block NSDictionary *links = [self linksFromDetails:fetchedDetails[@"items"]]; complete(links); }] resume]; } +(void)fetchThumbnail:(NSString*)link complete:(void (^)(UIImage *thumbnail))complete { //Fetch thumbnail with cachePolicy returnCacheDataElseLoad NSURL *url = [[NSURL alloc]initWithString:link]; NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url cachePolicy:NSURLRequestReturnCacheDataElseLoad timeoutInterval:60.0]; [[[NSURLSession sharedSession] dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { if (error != NULL){ NSLog(@"%@",[error localizedDescription]); } else { //If thumbnail is ok then pass it to block UIImage *img = [[UIImage alloc]initWithData:data]; if (img.CGImage 
!= NULL) { complete(img); } } }] resume]; } +(NSString*)buildURLForDetails:(NSArray*)fetchedWikias{ NSMutableString *buildedURL = [[NSMutableString alloc]initWithString:@"http://www.wikia.com/wikia.php?controller=WikisApi&method=getDetails&ids="]; for (NSInteger i=0 ; i<[fetchedWikias count]; i++) { NSString *wId = [fetchedWikias objectAtIndex:i]; [buildedURL appendString:wId]; if (i < [fetchedWikias count]-1) { [buildedURL appendString:@","]; } else { [buildedURL appendString:@"&height=400&width=400"]; } } return buildedURL; } +(NSDictionary*)linksFromDetails:(NSDictionary*)fetchedDetails { NSMutableDictionary *linksForReturn = [[NSMutableDictionary alloc]init]; for (NSString *key in [fetchedDetails allKeys]) { NSString *link = fetchedDetails[key][@"image"]; [linksForReturn setValue:link forKey:key]; } return linksForReturn; } @end WikiaTableViewCell.h (10 lines) #import <UIKit/UIKit.h> @interface WikiaTableViewCell : UITableViewCell @property (readwrite,retain) UILabel *title; @property (readwrite,retain) UILabel *url; @property (readwrite,retain) UIImageView *thumbnail; @property (readwrite) long idNumber; @end WikiaTableViewCell.m (38 lines) #import "WikiaTableViewCell.h" @implementation WikiaTableViewCell @synthesize title,url,thumbnail,idNumber; -(id)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) { [self setupPositions:frame]; } return self; } -(void)setupPositions:(CGRect)frame { int x = frame.size.width * 0.375; int y = 8; title = [[UILabel alloc]initWithFrame:CGRectMake(x, 8, frame.size.width - x, 22)]; int secondY = y + title.font.pointSize + 8; url = [[UILabel alloc]initWithFrame:CGRectMake(x, secondY, frame.size.width - x, 22)]; int thumbnailSize = frame.size.height; thumbnail = [[UIImageView alloc]initWithFrame:CGRectMake(2, y, thumbnailSize, thumbnailSize- (y + 8))]; thumbnail.contentMode = UIViewContentModeScaleAspectFit; [self addSubview:title]; [self addSubview:url]; [self addSubview:thumbnail]; } - (void)awakeFromNib { 
// Initialization code } - (void)setSelected:(BOOL)selected animated:(BOOL)animated { [super setSelected:selected animated:animated]; } @end Top10WikiaTableViewController.h (5 lines) #import <UIKit/UIKit.h> @interface Top10WikiaTableViewController : UITableViewController @end Top10WikiaTableViewController.m (127 lines) #import "Top10WikiaTableViewController.h" #import "WikiaTableViewCell.h" #import "WikiApi.h" @interface Top10WikiaTableViewController () { NSArray *wikias; NSMutableDictionary *thumbnails; NSMutableDictionary *cells; } @end @implementation Top10WikiaTableViewController CGFloat CELL_HEIGHT = 100.0; - (instancetype)init { self = [super init]; if (self) { self.title = @"Top 10"; } return self; } - (void)viewDidLoad { [super viewDidLoad]; wikias = [[NSArray alloc]init]; thumbnails = [[NSMutableDictionary alloc]init]; cells = [[NSMutableDictionary alloc]init]; self.tableView.contentInset = UIEdgeInsetsMake(20, 0, 0, 0); } -(void)viewWillAppear:(BOOL)animated{ [super viewWillAppear:animated]; [self fetchData]; } - (void) fetchData { [WikiApi fetchTop10WithThumbnailDownload:^(NSString *key, UIImage *thumbnail) { dispatch_async(dispatch_get_main_queue(), ^{ [thumbnails setObject:thumbnail forKey:key]; [self updateCellThumbnail:key]; }); } complete:^(NSDictionary *json) { dispatch_async(dispatch_get_main_queue(), ^{ wikias = json[@"items"]; [self.tableView reloadData]; }); }]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; } //Updates -(void)updateCellThumbnail:(NSString*)key { WikiaTableViewCell *cell = [self cellWithId:key]; NSIndexPath *indexPath = [self.tableView indexPathForCell:cell]; if (indexPath != nil){ cell.thumbnail.image = [thumbnails objectForKey:key]; } } -(WikiaTableViewCell*)cellWithId:(NSString*)key{ return [cells objectForKey:key]; } -(NSDictionary*)linksFromDetails:(NSDictionary*)fetchedDetails { NSMutableDictionary *linksForReturn = [[NSMutableDictionary alloc]init]; for (NSString *key in [fetchedDetails allKeys]) { 
NSString *link = fetchedDetails[key][@"image"]; [linksForReturn setValue:link forKey:key]; } return linksForReturn; } #pragma mark - Table view data source - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [wikias count]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { CGRect frame = [UIScreen mainScreen].bounds; WikiaTableViewCell *cell = [[WikiaTableViewCell alloc]initWithFrame:CGRectMake(0, 0, frame.size.width, CELL_HEIGHT)]; NSDictionary *wikia = wikias[indexPath.row]; cell.title.text = wikia[@"name"]; cell.url.text = wikia[@"domain"]; cell.idNumber = [wikia[@"id"] longValue]; cell.thumbnail.image = [self thumbnailForId:cell.idNumber]; [self addCellToArrayIfNeeded:cell]; return cell; } //Adding cell if that cell is not already in dictionary -(void)addCellToArrayIfNeeded:(WikiaTableViewCell*)cell { NSString *key = [[NSString alloc]initWithFormat:@"%ld",[cell idNumber]]; if ([cells objectForKey:key] == nil) { [cells setObject:cell forKey:key]; } } //If thumbnail exists for id then return it -(UIImage*)thumbnailForId:(long)wId { return [thumbnails valueForKey:[[NSString alloc]initWithFormat:@"%ld",wId]]; } -(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath{ CGFloat height = CELL_HEIGHT; return height; } @end Questions What comments would you give in a code review of the presented code? In your opinion, did I fulfill the requirements? Now I see that there is a lack of comments at the top of every function. There is also a lack of well-organised structure in those classes, like class { public fields private fields inits public methods private methods } Do you think that this is the case? Summary I also did some tests but I think that there is no reason to put them here. You can find them on GitHub (link above). 
Answer: The first thing I would have done in accomplishing this task is something big that you skipped, and that omission probably really cost you. Each row in the table should contain: wiki title, wiki url, wiki thumbnail image One of the absolute first things I would have done would be to create a class to represent the "wiki" object. @interface WikiPage : NSObject @property NSNumber *wikiId; @property NSString *title; @property NSURL *url; @property UIImage *thumbnail; @end We then create an array of these objects for our table view's data source to deal with. As for the code you actually wrote, I want to just comment on your WikiApi class for now. It's not a particularly good sign that we've implemented only class methods. Instead, we should make an instantiable WikiPageFetcher perhaps. And rather than passing blocks, let's set up a delegate. For that, we need a protocol. Something like this: @protocol WikiPageFetcherDelegate <NSObject> @required - (void)wikiFetcher:(WikiPageFetcher *) wikiFetcher didFetchWikiPage:(WikiPage *)wikiPage; @optional - (void)wikiFetcher:(WikiPageFetcher *) wikiFetcher didFailWithError:(NSError *)error; @end You'll notice here that we're going to pass a WikiPage object one at a time. Each time we finish parsing and downloading the thumbnail for an individual page, we're going to pass it back. By doing this, we can actually dynamically update our table view and add the results one at a time as they come in instead of waiting for the whole thing to complete. Additionally, we could add a whole separate protocol method for the completion of the thumbnail download. That way we can quickly inform the delegate that we've parsed the link (shouldn't take long) and come back a few seconds later to pass the image back. Importantly, internally we'll farm the work out to multiple threads. Each of the top 10 (or however many) wiki links we're grabbing will be handled by its own thread, so we should be able to get our results slightly quicker. 
It's not a good sign that we've hard-coded the number of results we're requesting. Yes, they specified 10... but you'll have specifications like this throughout your career. And then the specification will change. So make the code more versatile and prepare ahead to be asked for a different number. I think this is enough to get you started on a revision before I go into too much more specific detail. The gist of this is, my end usage should look something like this: @interface MyTableViewController() <WikiPageFetcherDelegate> @property NSMutableArray *wikiPages; @property WikiPageFetcher *wikiFetcher; @end @implementation MyTableViewController - (void)viewDidLoad { [super viewDidLoad]; self.wikiPages = [NSMutableArray array]; self.wikiFetcher = [WikiPageFetcher wikiPageFetcherWithDelegate:self]; } - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; [self refresh]; } - (void)refresh { [self.wikiFetcher loadTopWikiPages:10]; } - (void)wikiFetcher:(WikiPageFetcher *)wikiFetcher didFetchWikiPage:(WikiPage *)wikiPage { [self.wikiPages addObject:wikiPage]; NSIndexPath *indexPath = [NSIndexPath indexPathForRow:(self.wikiPages.count - 1) inSection:0]; [self.tableView insertRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationAutomatic]; } I consider it important to separate out the refresh method, because you may want to implement a pull-to-refresh or let the user refresh in other ways, or refresh automatically every 5 minutes, etc., so this is just good practice for all sorts of other applications. This is particularly true when refresh does more than just a single method call. Once you can get things to this point, you'll be ready for another round of reviews. Make sure that the delegate methods are called on the main thread but all the downloading stuff remains in the background.
{ "domain": "codereview.stackexchange", "id": 14045, "tags": "performance, objective-c, interview-questions, ios" }
Create a bag with multiple topics
Question: I'm a newbie in ROS. I have a folder containing images and a file containing IMU data with corresponding timestamps. How can I put all of them into a bag with multiple topics? Originally posted by trinamntn08 on ROS Answers with karma: 11 on 2017-08-21 Post score: 0 Answer: Take a look at the rosbag Python API example. That writes two msg types to two different topics (chatter and numbers). Just make sure to use the correct message types (from sensor_msgs: sensor_msgs/Imu and sensor_msgs/Image), and, depending on whether the nodes that will be receiving these messages need it, to embed a CameraInfo topic with the intrinsics of the camera that was used as well. Originally posted by gvdhoorn with karma: 86574 on 2017-08-21 This answer was ACCEPTED on the original site Post score: 0
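Before writing the bag you also have to interleave the two offline sources in timestamp order, since a bag is written as one chronological stream. A hedged sketch (the topic names, CSV layout and message-filling details are my assumptions, not from the answer):

```python
# Merge timestamped IMU rows and image files into one ordered stream,
# which is the order the messages should be written to the bag in.
def merge_streams(imu_rows, image_files):
    """imu_rows: [(t_sec, row), ...]; image_files: [(t_sec, path), ...].
    Returns one list of (t_sec, topic, payload) sorted by timestamp."""
    events = [(t, "/imu/data", row) for t, row in imu_rows]
    events += [(t, "/camera/image_raw", p) for t, p in image_files]
    return sorted(events, key=lambda e: e[0])

# Writing the bag itself then follows the rosbag Python API example, roughly:
#
#   import rosbag, rospy, cv2
#   from cv_bridge import CvBridge
#   from sensor_msgs.msg import Imu
#   bridge = CvBridge()
#   with rosbag.Bag("output.bag", "w") as bag:
#       for t, topic, payload in merge_streams(imu_rows, image_files):
#           stamp = rospy.Time.from_sec(t)
#           if topic == "/imu/data":
#               msg = Imu()               # fill orientation / angular_velocity /
#               msg.header.stamp = stamp  # linear_acceleration from the CSV row
#           else:
#               msg = bridge.cv2_to_imgmsg(cv2.imread(payload), "bgr8")
#               msg.header.stamp = stamp
#           bag.write(topic, msg, stamp)
```

The commented part needs a ROS environment; the merging itself is plain Python.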
{ "domain": "robotics.stackexchange", "id": 28650, "tags": "rosbag" }
Less pollution: moving hurricane debris to other regions for use, or burning?
Question: When a big hurricane hits, it can create debris on the scale of $\mathrm{10^8\ yd^3}$. Cities in Florida, Texas, and other affected areas are struggling to hire enough trucks and drivers to pick it up quickly. But aside from that, I noticed many of the areas have started to burn the debris once it starts building up. That got me wondering... typically mulch is modestly pricey, and when free mulch is offered, it often goes quickly. So assuming a fair portion of debris is mulchable and is of interest to other areas, and that we can acquire typical transportation resources, then we'll set up transfer from collection sites to those other regions rather than burning it. What would be the net pollution result? If removed for mulch and such: trucking pollution + decomposition (- trees saved locally??) If burned: the burning pollution. Obviously it's about approximation rather than exact figures; it's probably hard to appraise the different byproducts from burning versus decomposition, and a lot probably depends upon the way it is burned. But as a whole, can we get a rough estimate of comparable quantities/damage done... is it less pollution/damage even to truck it an average of 3000 miles? 1000 miles? 100 miles? 10 miles? Should it be burned on the spot (if done safely)? I'd think there's got to be some way to get a very rough idea. Certainly the best option if viable might be leaving it in place to decompose. But considering how upset people are getting at having debris around these parts a month later, exclude that option from the possibilities. Trucking or burning, how do they compare? Answer: As the question was changed, my answer attempts to evaluate only the difference between burning and transporting. Please correct my values if my quickly found sources are inaccurate or you find more representative ones. I know there are quite a few unwritten assumptions that simplify this problem. 
On average, transporting via truck emits 161.8 $\frac{\text{g CO}_2}{\text{short-ton-mile}}$ (according to this Environmental Defense Fund handbook). A short ton equals 2,000 lbs. The amount of $CO_2$ equivalent from burning wood is $1900 + 200 + 70 = 2170\ \frac{\text{g CO}_2\text{-equiv.}}{\text{kg}}$ from carbon dioxide, nitrous oxide and methane respectively (according to this article; the link to the source of these numbers fails, so please comment if they aren't correct). Now we can write an equation out of this and solve for the distance in kilometres at which transporting produces an equal amount of carbon dioxide equivalent: $$\frac{2170\ \frac{\text{g CO}_2\text{-eq.}}{\text{kg}}}{161.8\ \frac{\text{g CO}_2}{\text{short-ton-mile}} \cdot 0.0011\ \frac{\text{short ton}}{\text{kg}} \cdot 0.6213\ \frac{\text{mile}}{\text{km}}} \approx 20\,000\ \text{km}$$ This is quite a large number in my opinion; I would've expected less. So quite a long distance is needed before transporting produces more $CO_2$ pollution than burning.
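The arithmetic above can be checked directly (same inputs as the answer, assuming both emission figures are in grams):

```python
# Break-even distance: how far can 1 kg of wood be trucked before the
# trucking CO2 matches the CO2-equivalent of burning that kilogram?
burn_g_co2e_per_kg = 1900 + 200 + 70        # g CO2-equivalent per kg of wood burned
truck_g_co2_per_short_ton_mile = 161.8      # g CO2 per short-ton-mile (EDF figure)
short_ton_per_kg = 0.0011                   # 1 kg ≈ 0.0011 short tons
mile_per_km = 0.6213

truck_g_co2_per_kg_km = (truck_g_co2_per_short_ton_mile
                         * short_ton_per_kg * mile_per_km)
break_even_km = burn_g_co2e_per_kg / truck_g_co2_per_kg_km
print(f"break-even distance ~ {break_even_km:.0f} km")
```

The result lands just under 20,000 km, matching the rounded figure in the equation.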
{ "domain": "earthscience.stackexchange", "id": 1224, "tags": "air-pollution" }
Given a set of LTL formulas, on which states of the Kripke structure do they hold?
Question: I'm currently learning about LTL and CTL formulas, and to get a better understanding I try to manually interpret the formulas over a given Kripke structure. Since I'm not 100% sure my results are correct, I would appreciate it if anyone could verify them. Task: I show on which states each given LTL formula holds. Some LTL notation notes: $X$ equals $\bigcirc$, $G$ equals $\Box$, $F$ equals $\diamond$. $Fc = \{\}$ My interpretation: $Fc$ means that on all paths $c$ holds sometime in the future. Since all paths come along $t4$ it doesn't hold for any state. $G(b \vee c) = \{\}$ My interpretation: For all paths, globally $b$ or $c$ holds. $G(Fb) = \{t0, t1, t2, t3, t4, t5, t6\}$ My interpretation: For all paths it holds globally that eventually $b$ will be true. $G(b \Rightarrow (Xa \Rightarrow Xb)) = \{t0, t1, t2, t3, t4, t5, t6\}$ My interpretation: Since $Xa \Rightarrow Xb$ is true for every state, the implication $b \Rightarrow (Xa \Rightarrow Xb)$ must hold for all states too, since $? \Rightarrow \text{true}$ is always true. $a U (b U c) = \{t1, t3, t4, t5\}$ My interpretation: The following paths are valid: aaaaabbbc, bbbbc, c, ccc. Therefore the states $t1, t3, t4, t5$ are valid. So can anybody confirm my results? Answer: A couple of comments: 1. Note that "in the future" is not strict, i.e. $Fb$ is also satisfied whenever $b$ itself holds. 2. Looks fine to me. 3. Also OK, though I'm not sure if your interpretation would be very enlightening in a more complicated Kripke structure. 4. "$Xa\Rightarrow Xb$ is true for every state" is a CTL-ism. What you want to say is that for every state along every path, if $b$ holds in this state and $a$ holds in the next state, then $b$ also holds in the next state. Is this the case? 5. Here you want some finite number (possibly $0$) of $a$s, followed by some finite number (possibly $0$) of $b$s, followed by a $c$. Is this what you see along every path starting in the states you give?
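The nested Until in the last formula can be checked mechanically. A small sketch of my own, using finite cut-off traces rather than infinite paths: $\varphi\,U\,\psi$ holds at position $i$ iff $\psi$ holds at some $j \ge i$ and $\varphi$ holds at every position before $j$.

```python
# Evaluate phi U psi at position i on a finite trace of label sets.
def holds_until(trace, i, phi, psi):
    for j in range(i, len(trace)):
        if psi(trace, j):
            return True      # psi reached, with phi at all earlier positions
        if not phi(trace, j):
            return False     # phi broke before psi was reached
    return False             # psi never reached on this finite trace

a = lambda tr, i: "a" in tr[i]
b = lambda tr, i: "b" in tr[i]
c = lambda tr, i: "c" in tr[i]
b_until_c = lambda tr, i: holds_until(tr, i, b, c)

# "a*b*c"-shaped path: some a's, then some b's, then a c -- satisfies a U (b U c).
good = [{"a"}, {"a"}, {"b"}, {"b"}, {"c"}]
# A path where neither b U c holds early nor a bridges the gap -- fails.
bad = [{"a"}, set(), {"c"}]

print(holds_until(good, 0, a, b_until_c))  # True
print(holds_until(bad, 0, a, b_until_c))   # False
```

This matches the reading in comment 5: finitely many $a$s, then finitely many $b$s, then a $c$ (and a lone $c$ at the start also satisfies it).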
{ "domain": "cs.stackexchange", "id": 4507, "tags": "logic, model-checking, linear-temporal-logic, temporal-logic" }
Formation of Cosmic Microwave Background
Question: It is said that the cosmic microwave background (CMB) was formed when the universe was 379,000 years old. How is this calculated? Answer: We observe the temperature of the CMB as a ~2.7 K blackbody, but that's the redshifted version. The CMB is also known as the "surface of last scattering": at the point of recombination, when nuclei and electrons combined to form neutral atoms, the universe went from opaque to transparent. This happens at a temperature of ~3000 K. From this we can estimate the redshift (z ~ 1100) of the CMB, which corresponds to an age given our cosmological model.
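The redshift estimate is a one-liner (my own back-of-the-envelope version of the answer): a blackbody's temperature redshifts as $T_{\rm obs} = T_{\rm emit}/(1+z)$, so comparing ~3000 K at emission with ~2.7 K today gives $z$ directly.

```python
# T_obs = T_emit / (1 + z)  =>  z = T_emit / T_obs - 1
T_recombination = 3000.0  # K, roughly where hydrogen becomes neutral
T_cmb_today = 2.725       # K, observed CMB temperature

z = T_recombination / T_cmb_today - 1
print(f"z ~ {z:.0f}")
```

Turning $z \approx 1100$ into the 379,000-year age then requires integrating the Friedmann equation for a specific cosmological model, which is where the precise number comes from.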
{ "domain": "astronomy.stackexchange", "id": 531, "tags": "cosmology, big-bang-theory, cosmic-microwave-background" }
High CPU and memory usage iterating over 300K objects
Question: I am trying to run a series of keywords against a series of categories, and within those categories there are some options. So I have ended up nesting a forEach inside a forEach with an every call inside, and when dealing with a lot of entries node consumes way too much memory. When dealing with 300K objects from a 29MB csv converted to a JSON file and processed, pm2 monitor says node peaks at 4GB RAM usage and 200% CPU. Is this normal? Here is an example of the code with a minimal data sample const keywords = [ { Keyword: 'foo', URL: 'https://www.facebook.co.uk' }, { Keyword: 'foo', URL: 'https://www.twitter.co.uk/blue' }, { Keyword: 'faa', URL: 'https://www.facebook.co.uk/twitter' }, { Keyword: 'faa', URL: 'https://www.apple.co.uk/green' } ] const categories = [ { name: 'Tech', options: [ { method: 'include', regex: 'facebook' }, { method: 'exclude', regex: 'twitter' } ] }, { name: 'Green', options: [ { method: 'include', regex: 'green' } ] } ] keywords.forEach((obj) => categories.forEach(({name, options}) => obj[name] = options.every(({method, regex}) => method === 'include' ? obj.URL.includes(regex) : !obj.URL.includes(regex)))) console.log(keywords) Answer: Performance Whenever possible avoid strings. Why? JS strings are immutable. Passing a string to a function or assigning it to a variable requires the string to be copied[*1]. This adds a lot of processing overhead (memory allocation, assignment, GC, and string iteration) which can be avoided. [*1] See update at bottom of answer. For example, using your function function process(keywords, categories) { keywords.forEach(obj => categories.forEach(({name, options}) => obj[name] = options.every(({method, regex}) => method === 'include' ? obj.URL.includes(regex) : !obj.URL.includes(regex)) ) ); } The two inner loops create new strings for each iteration. 
name in the outer loop categories.forEach(( and method and regex in the loop options.every(({ As these strings are all stored as references in objects, there is no need to copy the strings to new variables. Just use the references directly as follows... function process(keywords, categories) { keywords.forEach(obj => categories.forEach(cat => obj[cat.name] = cat.options.every(opt => opt.include ? obj.URL.includes(opt.regex) : !obj.URL.includes(opt.regex)) ) ); } Avoid state strings Using strings to store simple states is much slower than using simpler types like boolean or number. For example, you use the expression opt.method === "include" to check the type of test to do on URL. The negative (false) for opt.method === "include" (method = "exclude") is quick, as the compare fails on the first character "e" !== "i". However, a match needs to iterate over all 7 characters to return true, and JS has no shortcut to help check the match (the strings include and exclude are even the same length). As there are only two states, include or exclude, you can use a boolean state. For example, the option objects can be options: [ { include: true, regex: 'facebook' }, // includes { include: false, regex: 'twitter' } // excludes ] And then the inner test has constant (and fast) complexity opt.include ? obj.URL.includes(opt.regex) : !obj.URL.includes(opt.regex)) If you cannot store the boolean in the categories data as it arrives (e.g. from JSON), process the options once, outside the process function function optimizeCats(categories) { categories.forEach(cat => cat.options.forEach(opt => opt.include = opt.method === 'include' )); } Having to do so will of course reduce the overall gain. Reduce scope searches Using node (V8) means that there is an additional overhead for each scope step crossed to reach a variable outside the current scope. In your code the outer loop keywords.forEach(obj puts the variable obj 2 scope steps above its use in the inner loop. 
obj.URL.includes( As (I assume) the keywords greatly outnumber the categories, reducing the scope distance to obj will give another worthwhile performance gain. This can be done by swapping the order of the first two outer loops. function process(keywords, categories) { categories.forEach(cat => keywords.forEach(obj => obj[cat.name] = cat.options.every(opt => opt.method === 'include' ? obj.URL.includes(opt.regex) : !obj.URL.includes(opt.regex)) ) ); } Further optimizations All of the above should give up to a 15% performance gain and an unknown but worthwhile reduction in memory use. (This concerns only the processing; I don't know how you handle the JSON string.) There are likely many more optimizations; however, these will depend very much on what is being stored in both data structures and how the results are expected to be used. Update Correction: after comments and some research, it turns out strings are not copied (duplicated) when assigned; rather, a map reference (hash) to the unique string within the global context represents the string.
{ "domain": "codereview.stackexchange", "id": 42758, "tags": "javascript, node.js" }
Computational Complexity of Promise Monotone 1 in 3 SAT
Question: Given a 3SAT problem where each clause is a Monotone Clause (i.e. each clause has all positive or all negative literals). Moreover, we are promised that each solution for the above SAT problem (if any exists at all) will be of the form 1 in 3 SAT. Monotone 1 in 3 SAT is NP Complete, but with the additional promise above, does the problem still remain NP Complete (it is obviously in NP)? Answer: The trivial answer is no, as promise problems cannot be (by definition) in NP, and thus cannot be NP complete. The basic complexity classes, e.g. $P$ and $NP$, refer only to decision problems. However, if you want to relate your promise problem to the standard classes, you can say it is NP-hard in the sense that if it can be solved in polynomial time for inputs in the promise, then $P=NP$. To see why the above holds, let us introduce some notations. Let $C$ be the set of all monotone clause 3CNF, for which all satisfying assignments are 1 in 3. We shall call this set "the promise". Now your problem is, given an input in $C$, to determine whether it lies in $L_{yes}=C\cap SAT$, or in $L_{no}=C\setminus L_{yes}=C\cap\overline{SAT}$. Given an instance $\varphi$ to monotone clause 1 in 3 SAT, transform it to a monotone clause 3CNF $\psi$ such that $\psi\in C$, and in addition every 1 in 3 satisfying assignment for $\varphi$ also satisfies $\psi$. You can construct $\psi$ by adding clauses which negate conjunctions of pairs of literals in the same clause. Since $\psi\in C$ for every possible $\varphi$, and in addition $\varphi$ has a one in three satisfying assignment iff $\psi\in L_{yes}$, then solving the promise problem in polynomial time allows you to determine whether $\varphi$ has a one in three satisfying assignment, and thus $P=NP$. The above basically shows that you can reduce an NP complete problem to your promise problem, but when talking about promise problems the definition of reduction should be slightly adjusted. 
Let us say that $f$ is a reduction from $L$ to the promise problem $L_{yes}\cup L_{no}$ if $Im(f)\subseteq L_{yes}\cup L_{no}$ and for every $x$ it holds that $x\in L\iff f(x)\in L_{yes}$. This is equivalent to saying that $x\in L\Rightarrow f(x)\in L_{yes}$ and $x\notin L\Rightarrow f(x)\in L_{no}$ (note that not being in $L_{yes}$ does not imply being in $L_{no}$).
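The construction can be sanity-checked by brute force on a small instance (my own sketch; the 2-literal "ban" clauses would be padded to width 3 in a real 3CNF, which is omitted here):

```python
from itertools import combinations, product

def build_psi(phi):
    """phi: list of 3-tuples of variable names (all-positive clauses).
    For each pair of literals in a clause, forbid both being true."""
    return [frozenset(p) for cl in phi for p in combinations(cl, 2)]

def satisfies_psi(phi, pair_bans, assign):
    # each original clause needs at least one true literal...
    if not all(any(assign[v] for v in cl) for cl in phi):
        return False
    # ...and no banned pair may be simultaneously true
    return all(not all(assign[v] for v in ban) for ban in pair_bans)

phi = [("x", "y", "z"), ("y", "z", "w")]
variables = sorted({v for cl in phi for v in cl})
bans = build_psi(phi)

# psi's models are exactly the assignments that make EXACTLY ONE literal
# true in every clause of phi -- i.e. the 1-in-3 satisfying assignments.
for bits in product([False, True], repeat=len(variables)):
    assign = dict(zip(variables, bits))
    one_in_three = all(sum(assign[v] for v in cl) == 1 for cl in phi)
    assert satisfies_psi(phi, bans, assign) == one_in_three
print("psi's models coincide with the 1-in-3 assignments of phi")
```

So every satisfying assignment of $\psi$ is 1-in-3 for $\varphi$'s clauses, which is what places $\psi$ in the promise set $C$.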
{ "domain": "cs.stackexchange", "id": 10774, "tags": "complexity-theory, np-complete" }
augmenting room magnetic field for smartphone sensors
Question: Is it possible to enhance (or redirect) the earth's magnetic field in a room or house so that one can write a small program that makes smartphones with hall-effect sensors detect more reliably in which direction they are pointing? I presume a fridge magnet won't do the job... Answer: This question might be better suited to the Physics StackExchange, but the back-of-the-envelope calculation goes like this: The Earth's magnetic field is 31.869 µT, and a refrigerator magnet is 5000 µT. So the refrigerator magnet will very readily affect the magnetometer (not usually called a hall-effect sensor) on a smartphone. However, you will run into two very serious problems: 1. The strength of a magnet drops off rapidly with distance; a small magnet is effectively a dipole, whose field falls off roughly as the inverse cube of distance. You can verify this with a handheld compass: see how close your magnet needs to be in order for it to pull the needle away from magnetic north. 2. The magnetic field curves around the magnet. For the earth, this is not a major concern because you are (more or less) standing on the surface where the lines run (more or less) straight from north to south. For your room-sized magnetic field to work, you would need to establish a magnet far enough away that its field lines would have minimal curvature through the area that you want to measure, and strong enough that those field lines would have enough strength to dominate the 31.869 µT provided by Earth's magnetic field.
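An order-of-magnitude sketch of the first problem (my own numbers: I assume the 5000 µT reading is taken right at the magnet's surface, about 1 cm from its centre, and treat the magnet as an ideal dipole):

```python
# A dipole field falls off as 1/r^3 along its axis:
#     B(r) = B0 * (r0 / r)^3   =>   r = r0 * (B0 / B)^(1/3)
B_magnet_uT = 5000.0   # field near the magnet, at r0 ~ 1 cm
B_earth_uT = 31.869    # ambient field to beat
r0_cm = 1.0

r_cm = r0_cm * (B_magnet_uT / B_earth_uT) ** (1 / 3)
print(f"magnet field fades to Earth's level at only ~{r_cm:.1f} cm")
```

With these assumptions the fridge magnet dominates the Earth's field for only a few centimetres, which is why a room-scale scheme needs a far larger (and more distant) source.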
{ "domain": "robotics.stackexchange", "id": 1086, "tags": "sensors, hall-sensor" }
Why does the TF object detection API need so few pictures?
Question: I am wondering why the TF object detection API needs so few picture samples for training, while regular CNNs need many more. What I read in tutorials is that the TF object detection API needs around 100-500 pictures per class for training (is that true?), while regular CNNs need many more samples, like tens of thousands or more. Why is that? Answer: I guess that they need so little data because their models are already trained on huge datasets, and they are just transferring the learning (using those pre-trained models as a starting point).
{ "domain": "ai.stackexchange", "id": 1332, "tags": "tensorflow, object-detection" }