Describing state machines mathematically
Question: The short paper "Computer Science and State Machines" by Leslie Lamport seems quite strange to me. On the one hand, I am surprised to see that an important hardware protocol called "two-phase handshake" can be derived from a trivial program, simply by mathematical substitution. On the other hand, I think that this example is (and should be) chosen deliberately. What I doubt is its generality. If this method (i.e., describing state machines mathematically and deriving a protocol from its specification by mathematical substitution) is so fresh that researchers have not developed a general theory, I would like to see more examples. My question is straightforward: Is this derivation a coincidence? Could anyone offer more examples or related references? The derivation of the "two-phase handshake" protocol from a trivial program: The trivial program mentioned above just alternately performs the $\mathcal{P}$ and $\mathcal{C}$ operations: $\mathcal{X}: \textrm{ loop } \mathcal{P} \textrm{ } ; \textrm{ } \mathcal{C} \textrm{ endloop}$ By introducing a variable $pc$ to represent the "program counter", $\mathcal{X}$ can be described as the following state machine: $Init_{\mathcal{X}} \triangleq (pc = 0) \land Init_{\mathcal{PC}}$ $Next_{\mathcal{X}} \triangleq \big( (pc = 0) \land \mathcal{P} \land (pc' = 1) \big) \lor \big( (pc = 1) \land \mathcal{C} \land (pc' = 0) \big)$ where $Init_{\mathcal{X}}$ denotes the set of initial states; $Next_{\mathcal{X}}$ specifies the next-step transition. $Init_{\mathcal{PC}}$ specifies the initial values of the variables in $var_{\mathcal{PC}}$ involved in $\mathcal{P}$ and $\mathcal{C}$. The primed variable ($pc'$ here) represents the value of its unprimed counterpart ($pc$ here) in the next state. The two-phase handshake protocol can be described as follows, where $p$ and $c$ are initially equal.
\begin{eqnarray} \mathcal{Y} : & \textrm{ process } & Prod: \textrm{ whenever } p = c \textrm{ do } \mathcal{P} \textrm{ } ; \textrm{ } p = p \oplus 1 \textrm{ end} \\ & \Arrowvert & \\ & \textrm{ process } & Cons: \textrm{ whenever } p \neq c \textrm{ do } \mathcal{C} \textrm{ } ; \textrm{ } c = c \oplus 1 \textrm{ end} \end{eqnarray} Note that process $Prod$ reads $c$ and writes $p$ while $Cons$ reads $p$ and writes $c$. It is not hard to find out that $\mathcal{Y}$ alternately performs $\mathcal{P}$ and $\mathcal{C}$. The protocol $\mathcal{Y}$ can also be described as a state machine: $Init_{\mathcal{Y}} \triangleq (p = c) \land Init_{\mathcal{PC}}$ $Next_{\mathcal{Y}} \triangleq Prod \lor Cons$ $Prod \triangleq (p = c) \land \mathcal{P} \land (p' = p \oplus 1) \land (c' = c)$ $Cons \triangleq (p \neq c) \land \mathcal{C} \land (c' = c \oplus 1) \land (p' = p)$ The amazing observation is: $\mathcal{Y}$ can be obtained from $\mathcal{X}$ by substituting $p \oplus c$ for $pc$ in their state machines. Answer: There is a general theory here, which was introduced into CS by Robin Milner, which Lamport did not go into. A state machine is generally given as a triple $(Q \in \mathrm{Set}, q \in Q, f \in I \times Q \to \mathcal{P}(O \times Q))$, consisting of a state set $Q$, an initial state $q$, and a transition relation $f$. Now, suppose we have two automata $(Q, q, f)$ and $(T, t, g)$. Let's ask a question: when are these automata equivalent? If all we are going to do is take the machines and send them inputs and listen to the outputs, we don't want to require the state sets to be the same (for example, if $(T, t, g)$ is the result of running a DFA minimization algorithm on $(Q, q, f)$). It turns out that the right notion of equivalence is bisimulation. 
The idea is that we take two state machines to be the same if we can produce a relation $R \subseteq Q \times T$ such that $(q, t) \in R$, and such that whenever $(q, t) \in R$, then for all $i \in I$ and $o \in O$: for all $q' \in Q$, if $(o, q') \in f(i, q)$, then there is a $t' \in T$ such that $(o, t') \in g(i, t)$ and $(q', t') \in R$; for all $t' \in T$, if $(o, t') \in g(i, t)$, then there is a $q' \in Q$ such that $(o, q') \in f(i, q)$ and $(q', t') \in R$. This says that the two machines are equivalent if we can find any relation such that (a) the initial states are related, (b) from related states, any I/O action the first machine can take can be mimicked by the second machine in a way that keeps you in the relation, and (c) similarly, the first machine can mimic anything the second can do. Lamport's notation with primes is a way of concisely describing input-output relations. In this case, the state sets are the program counter of the first program, and the two program counters of the second program. The bisimulation relation is $R(pc, (p, c)) \triangleq pc = p \oplus c$, and then the bisimulation conditions follow trivially (since the relation expressions are equal under the substitution). The general theory at work is the theory of coinduction. State machines are representations of corecursively defined sets, and bisimulations tell you when two state machines are representations of the same potentially-infinite object. Incidentally, this paper is written in a rather polemical style. Equally polemically, I'll point out that the style he advocates in this paper simply fails should you ever need to verify part of a program in isolation -- for example, if you want to prove a library implementation correct, or verify a program using higher-order functions (or even just dynamic linking). But both of us will use coinduction, nonetheless.
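To make the bisimulation concrete, here is a small sketch (not from Lamport's paper) that enumerates the states of $\mathcal{X}$ and $\mathcal{Y}$, abstracts $\mathcal{P}$ and $\mathcal{C}$ to bare transition labels (their internal state is ignored), and checks exhaustively that $pc = p \oplus c$ really is a bisimulation:

```python
# Sketch: verify that R(pc, (p, c)) := (pc == p XOR c) is a bisimulation
# between the machines X and Y, with P and C abstracted to labels.

def steps_X(pc):
    # X: loop P ; C endloop, with pc as the program counter
    return [("P", 1)] if pc == 0 else [("C", 0)]

def steps_Y(state):
    p, c = state
    if p == c:                      # Prod: whenever p = c do P ; p := p (+) 1
        return [("P", (p ^ 1, c))]
    else:                           # Cons: whenever p /= c do C ; c := c (+) 1
        return [("C", (p, c ^ 1))]

def related(pc, state):
    p, c = state
    return pc == (p ^ c)            # the substitution pc -> p (+) c

def is_bisimulation():
    if not related(0, (0, 0)):      # initial states (pc = 0, p = c) related
        return False
    pairs = [(pc, (p, c)) for pc in (0, 1) for p in (0, 1) for c in (0, 1)
             if related(pc, (p, c))]
    for pc, st in pairs:
        # every move of X must be matched by Y, landing in related states
        for lbl, pc2 in steps_X(pc):
            if not any(l == lbl and related(pc2, st2) for l, st2 in steps_Y(st)):
                return False
        # and vice versa
        for lbl, st2 in steps_Y(st):
            if not any(l == lbl and related(pc2, st2) for l, pc2 in steps_X(pc)):
                return False
    return True

print(is_bisimulation())  # True
```

The check is trivial here precisely because, after the substitution, the guard and update expressions of the two machines coincide.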
{ "domain": "cstheory.stackexchange", "id": 2820, "tags": "reference-request, pl.programming-languages, formal-methods, examples" }
Why can so little digital information be stored on a cassette tape?
Question: I had heard that tape is still the best medium for storing large amounts of data. So I figured I can store a relatively large amount of data on a cassette tape. I was thinking of a little project to read/write digital data on a cassette tape from my computer sound card just for the retro feeling. (And perhaps read/write that tape with an Arduino too). But after reading up about it for a bit it turns out that they can store very small amounts of data. With baud rates varying between 300 and 2400, something between ~200KB and ~1.5MB can be stored on a 90 minute (2x45min) standard cassette tape. Now I have a lot of problems with understanding why that is. 1- These guys can store 90 minutes of audio. Even if we assume the analog audio quality on them was equivalent to 32Kbps that's about 21MB of data. I have a hard time believing what I listened to was 300bps quality audio. 2- I read about the Kansas City standard and I can't understand why the maximum frequency they're using is 4800Hz yielding a 2400 baud. Tape (according to my internet search) can go up to 15KHz. Why not use 10KHz frequency and achieve higher bauds? 3- Why do all FSK modulations assign a frequency spacing equal to baud rate? In the Kansas example they are using 4800Hz and 2400Hz signals for '1' and '0' bits. In MFSK-16 spacing is equal to baud rate as well. Why don't they use a MFSK system with a 256-element alphabet? With 20Hz space between each frequency the required bandwidth would be ~5KHz. We have 10KHz in cassette tape so that should be plenty. Now even if all our symbols were the slowest one (5KHz) we would have 5k * 8 = 40000 bits per second. That's 27MB of data. Not too far from the 21MB estimation above. 4- If tape is so bad then how do they store Terabytes on it? Answer: I had heard that tape is still the best medium for storing large amounts of data. well, "best" is always a reduction to a single set of optimization parameters (e.g. cost per bit, durability, ...)
and isn't ever "universally true". I can see, for example, that "large" is already a relative term, and for a small office, the optimum solution for backing up "large" amounts of data is a simple hard drive, or a hard drive array. For a company, backup tapes might be better, depending on how often they need their data back. (Tapes are inherently pretty slow and can't be accessed at "random" points.) So I figured I can store a relatively large amount of data on a cassette tape. Uh, you might be thinking of a Music Cassette, right? Although that's magnetic tape, too, it's definitely not the same tape your first sentence referred to: It's meant to store an analog audio signal with low audible distortion for playback in a least-cost cassette player, not for digital data with low probability of bit error in a computer system. Also, Music Cassettes are a technology from 1963 (small updates afterwards). Trying to use them for the amounts of data modern computers (even arduinos) deal with sounds like you're complaining your ox cart doesn't do 100 km/h on the autobahn. But after reading up about it for a bit it turns out that they can store very small amounts of data. With baud rates varying between 300 and 2400, something between ~200KB and ~1.5MB can be stored on a 90 minute (2x45min) standard cassette tape. Well, so that's a lot of data for when music-cassette-style things were last used with computers (the 1980s). Also, where do these data rates come from? That sounds like you're basing your analysis on 1980's technology. These guys can store 90 minutes of audio. Even if we assume the analog audio quality on them was equivalent to 32Kbps that's about 21MB of data. 32 kb/s of what, exactly?
If I play an Opus Voice, Opus Music or MPEG 4 AAC-HE file with a target bitrate of 32 kb/s next to the average audio cassette, I'm not sure the cassette will stand much of a chance, unless you want the "warm audio distortion" that cassettes bring – but that's not anything you want when transporting digital data. You must be very careful here, because audio cassette formulations are optimized for specific audio properties. That means your "perceptive" quality has little to do with the "digital data capacity". I have a hard time believing what I listened to was 300bps quality audio. Again, you're comparing apples to oranges. Just because someone 40 to 50 years ago built a 300 bits per second modem that could reconstruct binary data from audio cassette-stored analog signals, doesn't mean 300 bps is the capacity of the music cassette channel. That's like saying "my Yorkshire Terrier can run 12 km/h on this racetrack, therefore I can't believe Formula 1 cars can do 350 km/h on it". I read about the Kansas City standard and I can't understand why the maximum frequency they're using is 4800Hz yielding a 2400 baud. Tape (according to my internet search) can go up to 15KHz. Why not use 10KHz frequency and achieve higher bauds? Complexity, and low quality of implementation and tapes. I mean, you're literally trying to argue that what was possible in 1975 is representative of what is possible today. That's 45 years in the past; they didn't come anywhere near theoretical limits. Why do all FSK modulations assign a frequency spacing equal to baud rate? They don't. Some do. Most modern FSK modulations don't (they're minimum shift keying standards, instead, where you choose the spacing to be half the symbol rate). In the Kansas example they are using 4800Hz and 2400Hz signals for '1' and '0' bits. In MFSK-16 spacing is equal to baud rate as well. Again, 1975 != all things possible today. Why don't they use a MFSK system with a 256-element alphabet?
With 20Hz space between each frequency the required bandwidth would be ~5KHz. We have 10KHz in cassette tape so that should be plenty. Now even if all our symbols were the slowest one (5KHz) we would have 5k * 8 = 40000 bits per second. That's 27MB of data. Not too far from the 21MB estimation above. Well, it's not that simple, because your system isn't free from noise and distortion, but as before: Low cost. They simply didn't. If tape is so bad then how do they store Terabytes on it? You're comparing completely different types of tapes, and tape drives: This 100€ LTO-8 data backup tape vs this cassette tape type, of which child me remembers buying 5-packs at the supermarket for 9.99 DM, which, given retail overhead, probably means the individual cassette was in the < 1 DM range for business customers: and this 2500€ tape drive stuffed with bleeding edge technology and a metric farkton of error-correction code and other fancy digital technology vs this 9€ cassette thing that is a 1990's least-cost design using components available since the 1970s, which is actually currently being cleared from Conrad's stock because it's so obsolete: At the end of the 1980s, digital audio became the "obvious next thing", and that was the time the DAT cassette was born, optimized for digital audio storage: These things, with pretty "old-schooley" technology (by 2020 standards) store 1.3 GB when used as data cassettes (that technology was called DDS but soon parted from the audio recording standards). Anyway, that already totally breaks with the operating principles of the analog audio cassette as you're working with: in the audio cassette, the read head is fixed, and thus, the bandwidth of the signal is naturally limited by the product of spatial resolution of the magnetic material and the head and the tape speed.
There are electronic limits to the first factor, and very mechanical ones to the second (can't shoot a delicate tape at supersonic speeds through a machine standing in your living room that's still affordable, can you). In DAT, the reading head is mounted on a rotating drum, mounted at a slant to the tape – that way, the speed of the head relative to the tape can be greatly increased, and thus, you get more data onto the same length of tape, at very moderate tape speeds (audio cassette: ~47 mm/s, DAT: ~9 mm/s). DAT is a digital format by design. This means zero focus was put into making the amplitude response "sound nice despite all imperfections"; instead, extensive error correction was applied (if one is to believe this source, concatenated Reed-Solomon codes of an overall rate of 0.71) and 8b-10b line coding (incorporating further overhead, that should put us at an effective rate of 0.5). Note how they do line coding on the medium: This is bits-to-tape, directly. Clearly, this leaves room for capacity increases, if one were to use the tape as the analog medium it actually is, and combine that ability with the density-enabling diagonal recording, to use the tape more like an analog noisy channel (and a slightly nonlinear one at that) than a perfect 0/1 storage. Then, you'd not need the 8b-10b line coding. Also, while re-designing the storage, you'd drop the concatenated RS channel code (that's an interesting choice, sadly I couldn't find anything on why they chose to concatenate two RS codes) and directly go for much larger codes – since a tape isn't random access, an LDPC code (a typically 10000s of bits large code) would probably be the modern choice. You'd incorporate neighbor-interference cancellation and pilots to track system changes during playback.
In essence, you'd build something that is closer to a modern hard drive on a different substrate than it would be to an audio cassette; and lo and behold, suddenly you have a very complex device that doesn't resemble your old-timey audio cassette player at all, but the modern backup tape drive I've linked to above.
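The question's arithmetic can be checked in a few lines. This is a back-of-envelope sketch that ignores framing, error correction, and every other real-world overhead:

```python
# Idealized capacity estimates for a 90-minute cassette: every second of
# tape carries the raw bit rate, nothing else.

SECONDS = 90 * 60                       # one 90-minute cassette

def capacity_bytes(bits_per_second):
    return bits_per_second * SECONDS / 8

# Kansas City-style rates: 300..2400 baud, 1 bit per symbol
print(capacity_bytes(300) / 1e3)        # ~202.5 KB
print(capacity_bytes(2400) / 1e6)       # ~1.62 MB

# "audio quality equivalent to 32 kb/s" over 90 minutes
print(capacity_bytes(32_000) / 1e6)     # ~21.6 MB

# hypothetical 256-ary MFSK: 8 bits/symbol at 5000 symbols/s
print(capacity_bytes(5000 * 8) / 1e6)   # ~27 MB
```

These numbers reproduce the ~200KB/~1.5MB, ~21MB, and ~27MB figures from the question; the gap between the first pair and the last two is exactly the gap between 1975 implementations and the channel's raw potential discussed in the answer.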
{ "domain": "dsp.stackexchange", "id": 9606, "tags": "audio, modulation, frequency-modulation, fsk" }
Is $y[n]=x[n] * x[n^2]$ invertible?
Question: Is the following system invertible or not? $$y[n]=x[n] * x[n^2]$$ where $*$ stands for the aperiodic convolution operator. I have not been able to find a mathematically sufficient argument for it... Answer: The system is nonlinear (bilinear in $x$), with a nonlinear law (square) on indices. Odds are the system is not invertible. One can try to prove it in its full generality, or try to find a counterexample. Let us try the lazy way, using properties at hand. The system is bilinear. Hence, the output for $k x[n]$, $k\in \mathbb{R} $, will be $k^2 y[n]$. As a result, since $(-k)^2 = k^2$, $k x[n]$ and $-k x[n]$ have the same output. If they are nonzero, they differ yet yield the same result. Verdict: non-invertible. This is a first way. Another is possible, along the same line, using the indices $n$ and $n^2$. Fat32 just did it, using prime delays. With more generality: take some signal $x[n]$, assuming that: there exists at least one non-square index $n_+$, for which $x[n_+] \neq 0$: for instance $x[-2] =1$ (no negative number is a perfect square), $x$ is zero at all square indices: $x[0] = x[1] = x[4] = x[9]=\cdots = 0$. Then $x[n^2]$ is identically zero, while $x[n]$ is not, and the convolution is zero.
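The first argument is easy to check numerically. This sketch treats signals as finite lists indexed from $n = 0$ (an assumption made for illustration only) and shows that $x$ and $-x$ produce identical outputs:

```python
# Two distinct inputs, one output: y = conv(x[n], x[n^2]) cannot be inverted.

def conv(a, b):
    # aperiodic (full linear) convolution of two finite sequences
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def system(x):
    # x[n^2] sampled at n = 0, 1, 2, ... while the index stays in range
    x_sq = [x[n * n] for n in range(len(x)) if n * n < len(x)]
    return conv(x, x_sq)

x = [1, 2, 3, 4, 5]
neg = [-v for v in x]
print(system(x) == system(neg))   # True: same output from different inputs
```

Since every term of the output is a product of two samples of the input, negating the input flips both factors and leaves every product, and hence the whole output, unchanged.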
{ "domain": "dsp.stackexchange", "id": 5930, "tags": "discrete-signals, linear-systems, system-identification, inverse" }
Multiple PIDs in quadcopter
Question: I am wondering what the use is of two PID loops to control a quadcopter. One PID for stability control and another PID for rate control. Why can't you just use one PID per axis to control a quadcopter where you take the current and desired angle as input and motor power as the output? Answer: Because it is an under-actuated system. You cannot directly control the linear velocity with only one PID. To move in 3D space you need to control the linear velocity. So the first loop uses the angular velocity (or the attitude itself) as control input to control the linear velocity, while the second loop uses the torques or rpms of the fans (the real commands) to achieve that angular velocity: a backstepping-like effect.
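The cascaded structure can be sketched per axis as follows. The gains and the `PID`/`control` names are made up for illustration and do not come from any specific flight stack; the point is only that the outer (stability) loop's output is the setpoint of the inner (rate) loop:

```python
# Sketch of one axis of a cascaded controller: angle error -> desired
# angular rate (outer loop), rate error -> motor torque command (inner loop).

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

angle_pid = PID(kp=4.0, ki=0.0, kd=0.0)     # stability (outer) loop
rate_pid  = PID(kp=0.1, ki=0.02, kd=0.001)  # rate (inner) loop

def control(angle_sp, angle, rate, dt):
    rate_sp = angle_pid.step(angle_sp - angle, dt)   # desired angular rate
    torque  = rate_pid.step(rate_sp - rate, dt)      # command to the motors
    return torque

print(control(angle_sp=0.1, angle=0.0, rate=0.0, dt=0.01))
```

A single angle-to-motor PID collapses these two loops into one; the cascade lets the fast inner loop reject disturbances at the rate level before they ever show up as angle error.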
{ "domain": "robotics.stackexchange", "id": 597, "tags": "quadcopter, pid" }
Why exactly is the feature on Io called a volcano, and the feature on Enceladus called a geyser?
Question: News of the long-lived and not unusual cloud on Mars associated with 21 km tall Olympus Mons has people commenting. This tweet by planetary scientist Dr. Tanya Harrison says I don't understand the conspiracy theory contingent that's going off re: the clouds over Arsia Mons. What possible reason would we have to hide volcanic eruptions on Mars? Heck, NASA has shared plenty of pics of volcanoes erupting on Io (shown here) & geysers on Enceladus! (see the tweet for the video) Why exactly is the feature on Io called a volcano, and the feature on Enceladus called a geyser? I understand they are substantially different, but I'm asking about how the geological classification works on bodies with different materials and temperatures than those on Earth. Answer: The difference between a volcano and a geyser is defined by what is being ejected. Volcanoes eject lava, which is molten rock. Molten rock that stays underground is called magma, but when it reaches the surface, usually via a volcano, it is called lava. A rock is composed of minerals, such as sulphides and silicates. The volcanoes on Io eject lava that is "composed of various forms of elemental sulfur. The colouration of the flows was found to be similar to its various allotropes". Some volcanoes on Earth also eject elemental sulfur, such as Mount Ijen in Indonesia. Geysers on Earth typically eject water or steam. But elsewhere in the solar system cryogeysers exist; they eject "volatiles, together with entrained dust or ice particles, without liquid". On Enceladus, the geysers eject water, ice particles "and smaller amounts of other components (such as carbon dioxide, nitrogen, ammonia, hydrocarbons and silicates)". Unlike in lava from volcanoes, the silicates ejected from geysers on Enceladus are a minor component of the ejected material. In addition to volcanoes and geysers, there are cryovolcanoes, also known as ice volcanoes, on some bodies in the solar system. 
Like geysers, they eject volatiles "such as water, ammonia or methane, instead of molten rock. Collectively referred to as cryomagma, cryolava or ice-volcanic melt, these substances are usually liquids and can form plumes, but can also be in vapour form. After eruption, cryomagma is expected to condense to a solid form when exposed to the very low surrounding temperature."
{ "domain": "earthscience.stackexchange", "id": 1579, "tags": "volcanology, geysers" }
Optimizing splay tree
Question: I wrote a splay search tree from the algorithm description, debugged it, and now I want to find out how I can optimize it; maybe I have some obvious errors, and I will be glad if someone shows them to me. Java code: private final class SplayTree{ private Node root; private void keepParent(Node v){ if(v.l != null) v.l.p = v; if(v.r != null) v.r.p = v; } private void rotate(Node parent, Node child){ Node gparent = parent.p; if(gparent != null){ if(gparent.l != null && gparent.l.k == parent.k) gparent.l = child; else gparent.r = child; } if(parent.l != null && parent.l.k == child.k){ Node tmp = child.r; child.r = parent; parent.l = tmp; }else{ Node tmp = child.l; child.l = parent; parent.r = tmp; } keepParent(child); keepParent(parent); child.p = gparent; } private Node splay(Node node){ if(node == null) return null; while (node.p != null){ Node parent = node.p; Node gparent = parent.p; if(gparent == null){ rotate(parent, node); }else{ if(gparent.l != null && gparent.l.k == parent.k && parent.l != null && parent.l.k == node.k){ rotate(gparent, parent); rotate(parent, node); }else if(gparent.r != null && gparent.r.k == parent.k && parent.r != null && parent.r.k == node.k){ rotate(gparent, parent); rotate(parent, node); }else{ rotate(parent, node); rotate(gparent, node); } } } return node; } private Node find(int key){ Node node = root, prev = null; while (node != null){ prev = node; if(node.k == key) break; else if(key < node.k) node = node.l; else node = node.r; } if(node == null) { node = prev; if(node != null) this.root = node; } else this.root = node; return splay(node); } public long sum(int l, int r){ sum = 0; Node root = this.root; while (root != null){ if(root.k >= l && root.k <= r) break; else if(root.k < l) root = root.r; else root = root.l; } if(root == null) return sum; Queue<Node> queue = new ArrayDeque<>(); queue.add(root); Node node; while ((node = queue.poll()) != null){ if(node.k >= l && node.k <= r) sum += node.k; if(node.l != null)
queue.add(node.l); if(node.r != null) queue.add(node.r); } return sum; } public Node[] split(int key){ if(this.root == null) return new Node[]{null, null}; Node subRoot = find(key); if(subRoot.k < key){ Node right = subRoot.r; if(right != null) right.p = null; subRoot.r = null; return new Node[]{subRoot, right}; }else{ Node left = subRoot.l; if(left != null) left.p = null; subRoot.l = null; return new Node[]{left, subRoot}; } } public Node insert(int key){ if(root == null) return this.root = new Node(key); Node prev = null; while (root != null){ prev = root; if(root.k == key) return splay(root); else if(key < root.k) root = root.l; else root = root.r; } root = prev; Node node = new Node(key); if(key < root.k){ root.l = node; node.p = root; }else{ root.r = node; node.p = root; } return this.root = splay(node); } public Node merge(Node l, Node r){ if(r == null) return l; if (l == null) return r; l = maximum(l); l.r = r; r.p = l; return l; } public Node remove(int key){ Node root = find(key); if(root == null || root.k != key) return root; if(root.l != null) root.l.p = null; if(root.r != null) root.r.p = null; return this.root = merge(root.l, root.r); } public Node maximum(Node root){ while(root.r != null) root = root.r; return splay(root); } } private final class Node{ int k; Node l; Node r; Node p; Node(int k) { this.k = k; } } It would also be interesting to know if there are any mistakes in the style of the code. I will be glad to any help Answer: There's two things that I noticed. This first being a potential change in structure in find. You could handle the else branch instead of (effectively) setting a flag indicating whether you used a break. This keeps similar logic together, but at the cost of introducing another exit point to your function, which can be considered bad in some styles. 
private Node find(int key){ Node node = root, prev = null; while (node != null){ prev = node; if(node.k == key){ this.root = node; return splay(node); } else if(key < node.k) node = node.l; else node = node.r; } if(prev != null) this.root = prev; return splay(prev); } The other potential change I want to point out is in splay. You have some inconsistency in your if nesting. You should probably stick to all else ifs; it makes it easier to draw comparisons between the 4 cases, and reduces unnecessary indentation. You should also consider wrapping the two particularly long conditionals, so that you can control where they wrap if they are opened on an 80 character-wide terminal, and can abuse the symmetry within each conditional to help expose any copy/paste errors. private Node splay(Node node){ if(node == null) return null; while (node.p != null){ Node parent = node.p; Node gparent = parent.p; if(gparent == null){ rotate(parent, node); }else if(gparent.l != null && gparent.l.k == parent.k && parent.l != null && parent.l.k == node.k){ rotate(gparent, parent); rotate(parent, node); }else if(gparent.r != null && gparent.r.k == parent.k && parent.r != null && parent.r.k == node.k){ rotate(gparent, parent); rotate(parent, node); }else{ rotate(parent, node); rotate(gparent, node); } } return node; }
{ "domain": "codereview.stackexchange", "id": 33766, "tags": "java, algorithm, tree" }
Are there any open source SAT solvers with UNSAT core extraction algorithm built in?
Question: Just like the title says. I need to use a SAT solver on a series of CNF formulas but not only do I need an answer of the type satisfiable/unsatisfiable but also some subset of clauses whose conjunction is still unsatisfiable (the unsatisfiability core). I have read somewhere that it is possible to hack the code of some SAT solver (like MiniSAT) to produce an UNSAT core, but I don't feel competent enough to actually do it. I was hoping that somebody else has already done it and has uploaded the code somewhere, but either that is not the case or I suck at Googling. Answer: MUSer2 is currently probably the tool of choice for extracting minimal unsatisfiable cores (MUS). It treats the SAT solver as a black box, so you can plug in any solver you want. If you want a solver with MUS extraction capability, the newest version of PicoSAT comes with a utility tool PicoMUS that does the job.
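For intuition about what core extraction does, here is a toy deletion-based MUS extractor. It is not how MUSer2 or PicoMUS work internally (they use far more efficient incremental techniques), and the brute-force SAT check only scales to a handful of variables; it is purely a sketch of the idea:

```python
# Deletion-based MUS: drop each clause in turn; if the remainder is still
# UNSAT, the clause was redundant, otherwise it belongs to the core.
# Clauses are lists of nonzero ints (DIMACS-style literals).

from itertools import product

def satisfiable(clauses):
    # brute-force SAT check over all assignments (tiny formulas only)
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def mus(clauses):
    assert not satisfiable(clauses)
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial):
            core = trial          # clause i was not needed; drop it
        else:
            i += 1                # clause i is necessary; keep it
    return core

# (x1) & (-x1 | x2) & (-x2) & (x3): the last clause is irrelevant
print(mus([[1], [-1, 2], [-2], [3]]))   # [[1], [-1, 2], [-2]]
```

The result is minimal in the sense that removing any remaining clause makes the formula satisfiable, which is exactly the notion of MUS the tools above compute.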
{ "domain": "cs.stackexchange", "id": 21576, "tags": "reference-request, satisfiability, sat-solvers" }
When is the big-O relation preserved under exponentiation?
Question: Suppose that $f, g$ are functions from the positive integers to the positive reals. Under what circumstances will $\log f(n)=O(\log g(n))$ imply $f(n)=O(g(n))$? It's easy to see that this isn't always true: If $f(n)=3^n$ and $g(n)=2^n$ then we know that $3^n\notin O(2^n)$, but taking logs gives $\log3^n = n\log 3$ and $\log 2^n=n\log 2$ and it's obvious that $n\log 3=O(n\log 2)$. The reason is also clear: If there exists a $c>0$ such that $\log f(n)\le c\,\log g(n)$ then all we can say is that $f(n)\le (g(n))^c$, which won't necessarily allow one to conclude $f(n)=O(g(n))$. When, though, can we correctly make the inference? Answer: It is at least sufficient to have $\log f(n) \in o(\log g(n))$. This gives us that for sufficiently large $N$, we have $\log f(n) \leq \log g(n)$ for all $n > N$. Hence immediately we have that for $c=1, n > N$, $f(n) \leq c\cdot g(n)$ and ergo $f(n) \in O(g(n))$. This also suggests the condition that if we have that $\log f(n) \leq c\cdot \log (g(n))$ where $c \leq 1$ for sufficiently large $n$, then we also get $f(n) \in O(g(n))$.
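The two regimes can be illustrated numerically (a quick sketch, not a proof):

```python
# Case 1: f = 3^n, g = 2^n. The ratio log f / log g is a constant > 1,
# yet f/g blows up: log f = O(log g) does NOT give f = O(g).
import math

for n in (10, 20, 40):
    log_ratio = (n * math.log(3)) / (n * math.log(2))
    print(round(log_ratio, 3), 3**n / 2**n)   # constant ~1.585, exploding ratio

# Case 2: f = n, g = 2^n. Here log f = log n is o(log g) = o(n log 2),
# matching the sufficient condition, and indeed f/g -> 0.
for n in (10, 20, 40):
    print(math.log(n) / (n * math.log(2)), n / 2**n)
```

The first loop shows the counterexample from the question; the second shows a pair satisfying the answer's sufficient condition, where the conclusion $f(n) = O(g(n))$ does hold.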
{ "domain": "cs.stackexchange", "id": 4286, "tags": "asymptotics, landau-notation" }
Why is quantum tunneling most significant between states of equal energy?
Question: Why is it that quantum tunneling is only significant between states of nearly equal energy (as claimed here: 'Since tunnelling is significant only between states of nearly equal energy, tunnelling is unlikely in such instances.')? Answer: First, to answer your question. Tunneling in Quantum Mechanics is, of course, about the transfer of something through a potential barrier, which would be impossible in classical mechanics. The rate of tunneling depends strongly on the mass of the particle, the height of the potential barrier and the width of the barrier. The paper that you mention in your question describes how enzymes can make reactions go faster by increasing the rate of tunneling for electrons and protons (hydrogen nuclei - or in effect ionized H atoms). The paper suggests that the key mechanism is by reducing the width of potential energy barriers. For an electron or proton to tunnel through a barrier and remain on the far side of it (and not return) it should have a quantum state to fit into on the other side of the barrier. If there is not a quantum state with similar energy on the other side of the barrier then the probability of the electron or proton being located on the far side of the barrier becomes very small. (Strictly speaking there will be individual quantum states that span space on both sides of the potential energy barrier). If there is an available state with a different energy on the other side of the barrier then, as well as tunneling, energy would have to be gained or lost (possibly by photon absorption or emission) to allow the particle to get to the other side of the barrier, and the probability for this is very low. Finally, from your comment, you are interested in magnetization. I am not sure that I understand how magnetization relates to tunneling, unless proton/electron transfer from molecule to molecule is important. Edit: more information about two states with similar energy on each side of the barrier.
[Figure taken from http://www.scielo.br/img/fbpe/jbchs/v08n5/a19fig04.gif is about restricted motion inside a molecule.] In the figure, below the top of the central barrier, there would be a single state on each side of the barrier, but these two states (one on the left and one on the right) mix with each other to make two states that exist at the same time on both sides of the potential barrier. So the particle can pass from one side of the barrier to the other on a single quantum state. Thus if we have two states of similar energy on each side of the barrier, they mix together to make two states which exist on both sides of the potential, and there is no need for energy loss/gain as the system changes from one side to the other.
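As a rough illustration of the sensitivity mentioned at the start of the answer, here is a simple rectangular-barrier estimate, $T \approx e^{-2\kappa L}$ with $\kappa = \sqrt{2m(V-E)}/\hbar$; the barrier numbers (1 eV, 0.1 nm) are made up for illustration:

```python
# WKB-style transmission through a rectangular barrier (E < V), showing the
# strong dependence of tunneling on particle mass and barrier width.
import math

HBAR = 1.054571817e-34      # J*s
EV   = 1.602176634e-19      # J
M_E  = 9.1093837015e-31     # electron mass, kg
M_P  = 1.67262192369e-27    # proton mass, kg

def transmission(mass, barrier_ev, energy_ev, width_m):
    kappa = math.sqrt(2 * mass * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# 1 eV below the barrier top, 0.1 nm wide
print(transmission(M_E, 1.0, 0.0, 1e-10))   # electron: appreciable
print(transmission(M_P, 1.0, 0.0, 1e-10))   # proton: drastically smaller
# doubling the width squares the suppression factor
print(transmission(M_E, 1.0, 0.0, 2e-10))
```

The proton's ~1836x larger mass enters under a square root in the exponent, which is why enzymes narrowing the barrier (reducing the width) matters so much more for proton transfer than for electrons.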
{ "domain": "physics.stackexchange", "id": 17722, "tags": "quantum-mechanics, quantum-tunneling" }
Complexity theory when an oracle is part of the input
Question: The most common way in which oracles occur in complexity theory is as follows: A fixed oracle is made available to, say, a Turing machine with certain limited resources, and one studies how the oracle increases the computational power of the machine. There is, however, another way in which oracles sometimes occur: as part of the input. For example, suppose I want to study algorithms for computing the volume of a given high-dimensional polytope. Classically, the polytope would need to be specified by providing a list of its facets or some other explicit representation. However, we can also pose the problem of computing the volume of a polytope that is specified by a volume oracle, that takes the coordinates of a point in space as input and outputs "yes" if and only if the given point lies inside the polytope. Then we can ask what computational resources are needed to compute the volume of a polytope that is specified in this manner. In this particular case we have the very nice polynomial time approximation scheme of Dyer, Frieze, and Kannan and, interestingly from the complexity theory point of view, a proof that randomness helps in an essential way for this problem, in that no deterministic algorithm can perform as well as the Dyer-Frieze-Kannan algorithm. Is there a systematic way to study the complexity theory of problems in which oracles are provided as part of the input? Does it somehow reduce to the usual theory of complexity classes with oracles? My guess is no, and that because there are too many different ways that an oracle could be supplied as part of the input, every problem of this sort has to be handled in an ad hoc manner. However, I would be happy to be proved wrong on this point. Answer: It's called Type-2 Complexity Theory. There's a paper by Cook, Impagliazzo and Yamakami that ties it nicely to the theory of generic oracles.
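To make the "oracle as part of the input" setup concrete, here is a sketch in which the algorithm interacts with the body only through a membership oracle. It uses naive Monte Carlo rejection sampling, which is nothing like the Dyer-Frieze-Kannan scheme (that one needs much more machinery to stay efficient in high dimension), but it shows the interface:

```python
# Estimate the volume of a convex body given only a membership oracle,
# by sampling a bounding box and counting oracle hits.
import math, random

def volume_estimate(oracle, dim, bound, samples=200_000, seed=0):
    rng = random.Random(seed)
    box_volume = (2 * bound) ** dim
    hits = sum(
        oracle([rng.uniform(-bound, bound) for _ in range(dim)])
        for _ in range(samples)
    )
    return box_volume * hits / samples

# membership oracle for the cross-polytope |x1| + ... + |xd| <= 1
def cross_polytope(point):
    return sum(abs(x) for x in point) <= 1

d = 3
est = volume_estimate(cross_polytope, d, bound=1.0)
print(est, 2**d / math.factorial(d))   # estimate vs exact volume 8/6
```

Note that complexity here is naturally measured in oracle queries, which is exactly the kind of resource accounting that the question is asking a systematic theory for.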
{ "domain": "cstheory.stackexchange", "id": 1135, "tags": "cc.complexity-theory, oracles" }
How to use waitForMessage properly?
Question: Hello everyone ! I am trying to use the function ros:topic::waitForMessage in order to get one time a nav_msgs::Path message. As waitForMessage is overloaded, I am using the following definition : boost::shared_ptr<M const> ros::topic::waitForMessage (const std::string &topic, ros::NodeHandle &nh ) Here is the extract of my main where is located the function : int main(int argc, char *argv[]){ ros::init(argc, argv, "create_path"); ros::NodeHandle create_path; nav_msgs::Path path = ros::topic::waitForMessage("/path_planned/edge", create_path); ros::Rate loop_rate(1); while (ros::ok()){ loop_rate.sleep(); } return 0; } My problem is that I don't get how to use properly this function. When I catkin_make this code, I get these errors : /home/stagiaire019/astek_ws/src/coverage_path_planning/src/create_path.cpp: In function ‘int main(int, char**)’: /home/stagiaire019/astek_ws/src/coverage_path_planning/src/create_path.cpp:63:86: error: no matching function for call to ‘waitForMessage(const char [19], ros::NodeHandle&)’ nav_msgs::Path &path = ros::topic::waitForMessage("/path_planned/edge", create_path); ^ In file included from /opt/ros/kinetic/include/ros/ros.h:55:0, from /home/stagiaire019/astek_ws/src/coverage_path_planning/src/create_path.cpp:1: /opt/ros/kinetic/include/ros/topic.h:135:28: note: candidate: template<class M> boost::shared_ptr<const M> ros::topic::waitForMessage(const string&, ros::NodeHandle&) boost::shared_ptr<M const> waitForMessage(const std::string& topic, ros::NodeHandle& nh) ^ /opt/ros/kinetic/include/ros/topic.h:135:28: note: template argument deduction/substitution failed: /home/stagiaire019/astek_ws/src/coverage_path_planning/src/create_path.cpp:63:86: note: couldn’t deduce template parameter ‘M’ nav_msgs::Path &path = ros::topic::waitForMessage("/path_planned/edge", create_path); ^ coverage_path_planning/CMakeFiles/create_path.dir/build.make:62: recipe for target 
'coverage_path_planning/CMakeFiles/create_path.dir/src/create_path.cpp.o' failed make[2]: *** [coverage_path_planning/CMakeFiles/create_path.dir/src/create_path.cpp.o] Error 1 CMakeFiles/Makefile2:1371: recipe for target 'coverage_path_planning/CMakeFiles/create_path.dir/all' failed make[1]: *** [coverage_path_planning/CMakeFiles/create_path.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j4 -l4" failed After this error, I changed the line of waitForMessage in my program. There is the new line : nav_msgs::Path path = ros::topic::waitForMessage<nav_msgs::Path>("/path_planned/edge", create_path); I tried to catkin_make again and I got this new error : /home/stagiaire019/astek_ws/src/coverage_path_planning/src/create_path.cpp: In function ‘int main(int, char**)’: /home/stagiaire019/astek_ws/src/coverage_path_planning/src/create_path.cpp:63:67: error: conversion from ‘boost::shared_ptr<const nav_msgs::Path_<std::allocator<void> > >’ to non-scalar type ‘nav_msgs::Path {aka nav_msgs::Path_<std::allocator<void> >}’ requested nav_msgs::Path path = ros::topic::waitForMessage<nav_msgs::Path>("/path_planned/edge", create_path); ^ coverage_path_planning/CMakeFiles/create_path.dir/build.make:62: recipe for target 'coverage_path_planning/CMakeFiles/create_path.dir/src/create_path.cpp.o' failed make[2]: *** [coverage_path_planning/CMakeFiles/create_path.dir/src/create_path.cpp.o] Error 1 CMakeFiles/Makefile2:1371: recipe for target 'coverage_path_planning/CMakeFiles/create_path.dir/all' failed make[1]: *** [coverage_path_planning/CMakeFiles/create_path.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j4 -l4" failed I don't understand how this works, so if anyone know how to use properly waitForMessage, I will be pleased. Thanks Originally posted by antonin_haulot on ROS Answers with karma: 215 on 2018-06-12 Post score: 7 Answer: You need to understand shared pointers. 
Here is another question that shows a similar solution. Originally posted by knxa with karma: 811 on 2018-06-12 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by antonin_haulot on 2018-06-13: It worked ! Thank you ! I've edited my question to show the solution at the end. Comment by jayess on 2018-06-13: @antonin_haulot can you please create an answer containing your solution rather than putting it in your question? Comment by antonin_haulot on 2018-06-19: @jayess done !
{ "domain": "robotics.stackexchange", "id": 31004, "tags": "ros-kinetic" }
Why do we need to normalise states in quantum field theory?
Question: In QM it's obvious that we need to normalise quantum states since their inner product squared represents a probability. This normalization leads to physical states in QM being represented by 'rays' of vectors in the Hilbert space (i.e. with an arbitrary overall phase). However in QFT, the inner product of the states no longer has this probability meaning, so why do we need to normalise states (e.g. choosing to normalise $|p\rangle = \sqrt{2 E_p} a_p^\dagger |\Omega\rangle$)? What would break if we did not normalise the states? Also, what would be wrong with a state $k | p \rangle$ representing a different physical state to $| p \rangle$? Is it just so that the inner product $\langle q | p \rangle$ is well-defined, even if it no longer represents a probability amplitude? Answer: QM and QFT are probabilistic (hence unitary) theories, and in each theory a given outcome has a specific probability in a given physical state. One well-studied special case of QM is a $1$-particle system; but just because that's not how the states in QFT look, it doesn't mean we don't still have a notion of unitarity, which is all normalization demands. We can then recover probability distributions from suitably computed expectations, in much the same manner as in QM. For example, the equivalent of $\langle m|\hat{H}|n\rangle$ becomes $\langle p|\hat{H}|q\rangle$; in both cases, complete knowledge of the matrix elements gives the probability distribution of the vacuum's classical Hamiltonian $H$ when observed. (All the footnotes and asterisks about normal ordering, how we make a field theory's Hamiltonian finite over $\Bbb R^3$ and so on are left to the reader.)
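For reference, with the normalization the question quotes and the common continuum commutator $[a_p, a_q^\dagger] = (2\pi)^3 \delta^{(3)}(\mathbf{p} - \mathbf{q})$ (an assumed but standard convention), the single-particle inner product works out to:

```latex
\langle p | q \rangle
  = \sqrt{2E_p}\,\sqrt{2E_q}\;\langle\Omega|\, a_p a_q^\dagger \,|\Omega\rangle
  = 2E_p\,(2\pi)^3\,\delta^{(3)}(\mathbf{p} - \mathbf{q})
```

The factor $2E_p$ is what makes this combination Lorentz-invariant, which is the usual motivation for that particular choice of normalization.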
{ "domain": "physics.stackexchange", "id": 85206, "tags": "quantum-field-theory, hilbert-space, s-matrix-theory, quantum-states, normalization" }
Double Digestion with Restriction Enzymes Using Different Buffers
Question: I am currently working on preparing a 9 kb sequence of DNA for restriction digestion into the pBAD30 expression vector. There are very few restriction enzymes that do not have a restriction site located on my insert, and since I am using 2 restriction enzymes in my digestion, I had little choice in choosing my restriction enzymes. The only two restriction enzymes that will work for me are Xmal and KpnI. XmaI uses CutSmart buffer while KpnI uses NEB buffer. The efficiency of XmaI in CutSmart buffer is 100% while the efficiency of KpnI in CutSmart buffer is 50%. Would it be easier to perform a double digestion using CutSmart buffer and 2x KpnI than to perform two consecutive digestions? Answer: In my experience, a double digestion in CutSmart buffer will work perfectly well. The reaction may proceed slower, but incubate it a little longer and run a gel after the digestion - you'll see whether it has worked. The other answerers unfortunately did not mention that the best way is to check the restriction products yourself on a gel. Cut out the fragment from the gel you need (and purify) if you want to avoid undigested plasmid. See if the plasmid fragment sizes correspond to the correct cutting sites. Basic lab practice and it takes very little time! Beware of star activity - ask a technician to sequence your fragments or plasmids to be sure it's all going well (this is good lab practice), in case you see unexpected bands on your gel that may indicate unwanted restriction activity. This is not so common, though :)
{ "domain": "biology.stackexchange", "id": 9543, "tags": "molecular-biology, lab-techniques, methods, restriction-enzymes" }
Photon interactions with photovoltaic cells
Question: I was wondering how different energy photons interact with the electrons in a semiconductor in a PV cell. If the photon has less energy than the band gap, then the photon passes through and does not interact with the semiconductor, right? Does it just keep traveling until it hits a material beneath the PV cell that can absorb it? If the photon has more energy than the band gap, then where does the excess energy go after the electron has been promoted to the conduction band? Is this heat energy? Answer: If the photon has less energy than the bandgap, it will not be absorbed and the material is theoretically transparent to this wavelength, as you can see here: http://www.pveducation.org/pvcdrom/materials/optical-properties-of-silicon If the energy is (a little) higher than the bandgap energy, it can still be absorbed, as seen here: https://www2.pvlighthouse.com.au/resources/courses/altermatt/The%20PV%20Principle/Absorption%20of%20light.aspx The absorption, however, is not fully efficient, and the excess energy will be lost as heat, as you said, during the process in which the light is transformed into electrical energy.
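The cutoff wavelength for absorption follows directly from the bandgap. A back-of-envelope check (the 1.12 eV bandgap for crystalline silicon is an assumed textbook value, not from the question):

```python
# Photons with wavelength longer than hc/E_g cannot bridge the bandgap and
# pass through un-absorbed.
h = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e8      # speed of light, m/s
E_g = 1.12            # assumed bandgap of crystalline silicon, eV

cutoff_nm = h * c / E_g * 1e9   # cutoff wavelength in nm, roughly 1107 nm
```

Anything in the infrared beyond roughly 1100 nm is therefore invisible to a silicon cell, consistent with the transparency curves linked in the answer.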
{ "domain": "physics.stackexchange", "id": 42088, "tags": "photons, electrons, photoelectric-effect, solar-cells" }
Distinguishing a biased coin with a small set of tests
Question: Say we have a "coin" $f : [n] \to \{\pm 1\}$ so that either $f$ is balanced, or $f$ is $\epsilon$-far from being balanced. It's a classic result that sampling $O(1/\epsilon^2)$ random points of $f$ is enough to determine if $f$ is biased. That is, if $\mathcal{F} = \binom{[n]}{O(1/\epsilon^2)}$ is the collection of all subsets of size $O(1/\epsilon^2)$, then with probability at least $2/3$ over a random choice of a set $S$ from $\mathcal{F}$, the bias of $S$ will be within $\epsilon/2$ of the true bias of $f$ (correctly determining if $f$ is balanced/unbalanced). Are there smaller families $\mathcal{F}$ that have the same property? For example, is it possible to find a collection of poly$(n, 1/\epsilon)$ sets where $2/3$ of the sets have bias close to the bias of $f$, for any $f$ either balanced or $\epsilon$-far from balanced? Answer: Yes. Such families are called "averaging samplers", and there are plenty of constructions for them. You can find more information about them (and about the more general notion of sampler) in this survey. The notion of averaging samplers was introduced in this paper, which also showed that they are equivalent to randomness extractors. There are many constructions of extractors, and you can use any of them to construct samplers. Here are a few relatively simple ways to construct such families:
- Take a random family, as pointed out by Emil Jerabek in the comments.
- Construct a family using a $k$-wise independent hash function, as pointed out in the comments by D.W.
- Construct a family by considering an expander graph over the vertex set $[n]$ and taking the neighborhood of radius $r$ around every vertex (for some sufficiently large value of $r$) -- see for example this lecture. Alternatively, take the family obtained from the set of random walks of length $r$ on the graph -- see for example this lecture.
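The $O(1/\epsilon^2)$ sampling bound in the question can be sanity-checked with a tiny Monte Carlo sketch. The constant 16, the two test functions, and the thresholds below are illustrative assumptions of mine, not from the question:

```python
import random

# m = O(1/eps^2) uniform samples of f estimate its bias to within eps/2
# with constant probability.
def empirical_bias(f, n, eps, rng):
    m = int(16 / eps ** 2)                       # O(1/eps^2) sample size
    return sum(f(rng.randrange(n)) for _ in range(m)) / m

rng = random.Random(0)
n, eps = 10_000, 0.2
balanced = lambda x: 1 if x % 2 == 0 else -1     # bias exactly 0
far = lambda x: 1 if x % 10 < 7 else -1          # bias 0.4, i.e. well past eps
est_balanced = empirical_bias(balanced, n, eps, rng)
est_far = empirical_bias(far, n, eps, rng)
```

With $m = 400$ samples the standard deviation of each estimate is about $0.05$, so the balanced and far cases separate clearly; the point of the question is whether a much smaller, explicit family of sample sets can replace the fully random choice.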
{ "domain": "cstheory.stackexchange", "id": 4581, "tags": "property-testing" }
Call a service from WaypointTaskExecutor plugin
Question: Hello there I am developing my own WaypointTaskExecutor plugin to be executed at waypoints (using nav2's waypoint_follower). Is there a way to call a service or at least publish some data on a topic, from within my plugin? Originally posted by sdudiak01 on ROS Answers with karma: 27 on 2022-03-25 Post score: 1 Answer: Did you check the photo_at_waypoint source code there? I think you can base your solution on it. @stevemacenski wrote about it (1, 2, 3) as an example plugin for processing. Originally posted by ljaniec with karma: 3064 on 2022-03-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by sdudiak01 on 2022-03-25: Thanks, that's exactly what i was looking for Comment by stevemacenski on 2022-03-28: Yay examples!
{ "domain": "robotics.stackexchange", "id": 37528, "tags": "ros2" }
How to edit target_link_libraries
Question: I'm trying to use the boost package in the Eclipse platform. In this line of code: boost::this_thread::disable_interruption di; Although boost-1.41.0-17 is installed and I have included boost/thread.hpp in the code, I receive this error: undefined reference to 'boost::this_thread::disable_interruption::disable_interruption() I'm using CentOS 6 and the gcc compiler. Does anyone know what's causing this problem? Originally posted by modaei on ROS Answers with karma: 1 on 2013-05-06 Post score: 0 Answer: The boost::thread library is one of the boost libraries that is not contained only in header-files. Some of the boost::thread code also resides in binary library files that are installed on the system with the rest of the boost installation. The error message you're getting indicates that the linker is unable to find the implementation code for the disable_interruption() method. It's probably located in the binary boost::thread library file, so you'll need to tell the linker to also link this library in with the rest of your project. For ROS, this linking is typically configured in the package's CMakeLists.txt file. The exact call you'll use is different depending on whether you're using rosbuild or catkin:
- For rosbuild, you can add rosbuild_link_boost(my_exe thread)
- For catkin, you can add find_package(Boost REQUIRED COMPONENTS thread) followed by target_link_libraries(my_exe ${Boost_LIBRARIES})
If you're able to build your package from the command line, but are only having problems within Eclipse, then the problem may be related to your Eclipse configuration. See here for the recommended steps to configure and run Eclipse with ROS.
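For the catkin case, those two calls fit into a CMakeLists.txt fragment roughly like this (a sketch: the target name my_exe and the source path are placeholders, not from the original question):

```cmake
# Link the binary boost::thread library into the target.
find_package(Boost REQUIRED COMPONENTS thread)
include_directories(${Boost_INCLUDE_DIRS})
add_executable(my_exe src/my_exe.cpp)
target_link_libraries(my_exe ${Boost_LIBRARIES})
```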
Edit: fix reference to ${Boost_LIBRARIES}, thanks to @"Dirk Thomas" Originally posted by Jeremy Zoss with karma: 4976 on 2013-05-07 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Dirk Thomas on 2013-05-09: For CMake (catkin) the variable is ${Boost_LIBRARIES} - not the short name stated in the answer, Comment by ZiyangLI on 2015-08-02: What if I only want to link thread lib?
{ "domain": "robotics.stackexchange", "id": 14088, "tags": "ros" }
Can an object reverse its direction of acceleration even though it continues to move in the same direction?
Question: Can anyone please explain this to me, along with day to day examples? Answer: A pendulum is a day to day example of this. Watch a pendulum swinging from left to right as it passes the mid point: the acceleration always points towards the mid point, so as the pendulum passes through the mid point the acceleration reverses direction but the velocity does not.
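For a concrete check, model the bob near the midpoint as a simple harmonic oscillator (a small-angle assumption; the amplitude and frequency below are arbitrary):

```python
import math

# x(t) = A*cos(w*t), so v(t) = -A*w*sin(w*t) and a(t) = -w*w*x(t).
A, w = 1.0, 2.0
t_cross = math.pi / (2 * w)   # instant the bob passes the midpoint (x = 0)

def v(t):
    return -A * w * math.sin(w * t)

def a(t):
    return -w * w * A * math.cos(w * t)

before, after = t_cross - 0.01, t_cross + 0.01
same_direction = v(before) * v(after) > 0   # velocity keeps its sign...
flipped = a(before) * a(after) < 0          # ...while the acceleration reverses
```

Just before and just after the crossing the velocity has the same sign, while the acceleration (proportional to $-x$) changes sign: the object reverses its direction of acceleration while continuing to move the same way.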
{ "domain": "physics.stackexchange", "id": 14991, "tags": "kinematics" }
Why is my implementation of Q-learning not converging to the right values in the FrozenLake environment?
Question: I am trying to learn tabular Q learning by using a table of states and actions (i.e. no neural networks). I was trying it out on the FrozenLake environment. It's a very simple environment, where the task is to reach a G starting from a source S avoiding holes H and just following the frozen path which is F. The $4 \times 4$ FrozenLake grid looks like this SFFF FHFH FFFH HFFG I am working with the slippery version, where the agent, if it takes a step, has an equal probability of either going in the direction it intends or slipping sideways perpendicular to the original direction (if that position is in the grid). Holes are terminal states and a goal is a terminal state. Now I first tried value iteration which converges to the following set of values for the states [0.0688909 0.06141457 0.07440976 0.05580732 0.09185454 0. 0.11220821 0. 0.14543635 0.24749695 0.29961759 0. 0. 0.3799359 0.63902015 0. ] I also coded policy iteration, and it also gives me the same result. So I am pretty confident that this value function is correct. 
Now, I tried to code the Q learning algorithm, here is my code for the Q learning algorithm def get_action(Q_table, state, epsilon): """ Uses e-greedy to policy to return an action corresponding to state Args: Q_table: numpy array containing the q values state: current state epsilon: value of epsilon in epsilon greedy strategy env: OpenAI gym environment """ return env.action_space.sample() if np.random.random() < epsilon else np.argmax(Q_table[state]) def tabular_Q_learning(env): """ Returns the optimal policy by using tabular Q learning Args: env: OpenAI gym environment Returns: (policy, Q function, V function) """ # initialize the Q table # # Implementation detail: # A numpy array of |x| * |a| values Q_table = np.zeros((env.nS, env.nA)) # hyperparameters epsilon = 0.9 episodes = 500000 lr = 0.81 for _ in tqdm_notebook(range(episodes)): # initialize the state state = env.reset() if episodes / 1000 > 21: epsilon = 0.1 t = 0 while True: # for each step of the episode # env.render() # print(observation) # choose a from s using policy derived from Q action = get_action(Q_table, state, epsilon) # take action a, observe r, s_dash s_dash, r, done, info = env.step(action) # Q table update Q_table[state][action] += lr * (r + gamma * np.max(Q_table[s_dash]) - Q_table[state][action]) state = s_dash t += 1 if done: # print("Episode finished after {} timesteps".format(t+1)) break # print(Q_table) policy = np.argmax(Q_table, axis=1) V = np.max(Q_table, axis=1) return policy, Q_table, V I tried running it and it converges to a different set of values which is following [0.26426802 0.03656142 0.12557195 0.03075882 0.35018374 0. 0.02584052 0. 0.37657211 0.59209091 0.15439031 0. 0. 0.60367728 0.79768863 0. ] I am not getting, what is going wrong. The implementation of Q learning is pretty straightforward. I checked my code, it seems right. Any pointers would be helpful. Answer: I was able to solve the problem. 
The main issue for non-convergence was that I was not decaying the learning rate appropriately. I put a decay rate of $-0.00005$ on the learning rate lr, and subsequently Q-Learning also converged to the same value as value iteration.
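A sketch of the fix described above; the answer only quotes the constant $0.00005$, so the linear per-episode schedule and the floor here are my assumptions:

```python
def decayed_lr(lr0, episode, decay=0.00005, floor=0.01):
    """Linearly decay the learning rate each episode, never going below a
    small floor so late updates still move the Q-values slightly."""
    return max(lr0 - decay * episode, floor)

# e.g. inside the training loop: lr = decayed_lr(0.81, episode)
```

Decaying the step size is the standard stochastic-approximation condition for tabular Q-learning to converge; a constant lr of 0.81 keeps the estimates bouncing around the fixed point instead of settling on the value-iteration solution.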
{ "domain": "ai.stackexchange", "id": 1432, "tags": "reinforcement-learning, q-learning, value-iteration, policy-iteration, frozen-lake" }
Is the spectral norm of a Boolean function bounded by the degree of its Fourier expansion?
Question: Let $f: \{-1,1\}^n \rightarrow \{-1,1\}$ be a Boolean function. The Fourier expansion of $f$ is $$f(T) = \sum_{S \subseteq [n]} \widehat{f}(S)\ \chi_S(T)$$ where $\widehat{f}(S)$ are real numbers and $\chi_S(T)=\prod_{i \in S} T_i$ is a parity function. Let $d$ be the degree of the Fourier expansion of $f$, i.e. $d= \max_{\widehat{f}(S)\neq 0} |S|$. By Parseval's identity we have $$\sum_{S \subseteq [n]} \widehat{f}(S)^2=1$$ I am looking for a bound on $$\sum_{S \subseteq [n]} |\widehat{f}(S)|$$ I think it is bounded by $d$. But I have neither a proof nor a counterexample for this claim. Can someone provide a proof or give a counterexample? Answer: It is a standard fact that if $f:\{-1,1\}^n \to \{-1,1\}$ is a function of Fourier degree $d$, then its Fourier coefficients are multiples of $2^{-d+1}$. In particular, every non-zero coefficient must be at least $2^{-d+1}$ in absolute value. Therefore, by Parseval, there are at most $2^{2(d-1)}$ non-zero coefficients, and so, by the Cauchy-Schwarz inequality, the spectral norm of $f$ is at most $$\sum_{S}|\hat{f}(S)| \leq \sqrt{2^{2(d-1)}}\sqrt{\sum_{S}\hat{f}(S)^2} = 2^{d-1}.$$ This bound is tight. For example, the complete binary decision tree of depth $d$ has spectral norm $2^{d-1}$. This can be shown, e.g., by induction on $d$. The address function also has the maximal possible spectral norm.
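A brute-force check on a tiny example of my own: AND on two bits (in the $\pm 1$ convention) has degree $d = 2$ and meets the $2^{d-1}$ bound exactly.

```python
from itertools import product

def fourier_coeffs(f, n):
    """All 2^n Fourier coefficients of f: {-1,1}^n -> {-1,1}."""
    coeffs = {}
    for S in product([0, 1], repeat=n):      # S as an indicator vector
        total = 0
        for x in product([-1, 1], repeat=n):
            chi = 1
            for i in range(n):
                if S[i]:
                    chi *= x[i]              # parity chi_S(x)
            total += f(x) * chi
        coeffs[S] = total / 2 ** n
    return coeffs

# AND: +1 only when both inputs are +1.
and2 = lambda x: 1 if x[0] == 1 and x[1] == 1 else -1
coeffs = fourier_coeffs(and2, 2)
spectral_norm = sum(abs(v) for v in coeffs.values())  # equals 2 = 2^(d-1)
```

All four coefficients come out as $\pm 1/2$, i.e. multiples of $2^{-d+1} = 1/2$ as the standard fact says, and their absolute values sum to exactly $2^{d-1} = 2$.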
{ "domain": "cstheory.stackexchange", "id": 2279, "tags": "co.combinatorics, boolean-functions, fourier-analysis" }
Protein PTM site prediction
Question: Is there any in silico analysis method to predict post-translational modification sites on a given protein? Answer: There are actually a lot of these sites available; I have used some of the ones listed below. Additionally, there are huge lists of other services available in this field from ExPASy, which you can find here, and from the Center for Biological Sequence Analysis, which can be found here.
- ExPASy - FindMod
- The Eukaryote Linear Motif resource for Functional Sites in Proteins
- Phosida
{ "domain": "biology.stackexchange", "id": 3571, "tags": "bioinformatics, proteins" }
PV solar panel is producing more than is possible according to its datasheet?
Question: I have a client in Belgium that has 16 DM400-M10-54HBB panels on his roof. On Saturday the 2nd of December 2023, the PV output was 3.95 kW, according to the log of the Huawei SUN2000 4.6KTL-L1 5 kVA inverter. This means 247 W per panel. This is the house, with the panels directed to the south-east: However, the irradiance was only 250-300 W/m2 that day, according to multiple measurements across the country (clear blue skies according to my naked eye). This is to be expected because we only get around 1100-1200 W/m2 in summer and now it is winter. However, according to the datasheet, there is approximately 600 W/m2 needed to produce this amount of power (yellow line is drawn by me to indicate the produced peak power on that day). How is this possible? Datasheet source: https://www.elektrototaalmarkt.nl/amfile/file/download/file/3868/product/41786/ The data in the Huawei logs seems accurate. Answer: The answer from fromwastetowind is correct that solar irradiance is published for a flat horizontal surface. I entered the time, latitude, and longitude for Brussels into the NOAA ESRL Solar Position Calculator. This gave a solar elevation of 16.87 degrees. Not knowing your customer's panel angles, I just assumed a perfect angle 90 degrees to the incoming radiation (the best possible production). The best way to think about this calculation is that 0.29 m^2 of solar radiation is spread out over the 1 m^2 that the pyranometers measured. The 861 watts per meter squared is a reasonable number because the radiation has to travel through more atmosphere at such a low solar elevation. And the roughly 600 watts per meter squared you figured from the spec sheet is reasonable for the not perfect angle on the fixed array. Also note that in some cases mixed cloud cover (as shown in the OP photo) can cause an increase in local irradiance. The light that was headed to other locations on the earth's surface gets diffused and a portion of that light can end up at the solar array being analyzed.
This can be a big problem for closed loop solar hot water systems, and an additional safety factor has to be considered when sizing the thermal load dump. Article on mixed cloud cover irradiation spikes.
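The elevation-angle arithmetic in the answer can be checked in a few lines (the 16.87 degree elevation and the roughly 861 W/m^2 figure are the ones quoted above):

```python
import math

# A horizontal pyranometer reading of ~250 W/m^2 at a solar elevation of
# 16.87 degrees corresponds to a much larger flux on a surface facing the
# sun directly: the beam's 0.29 m^2 cross-section is smeared over 1 m^2.
elevation_deg = 16.87
horizontal = 250.0  # W/m^2 measured on a flat, horizontal surface
direct = horizontal / math.sin(math.radians(elevation_deg))  # ~861 W/m^2
```

So the panels, tilted much closer to the incoming beam than the horizontal reference surface, can plausibly see the ~600 W/m^2 the datasheet requires even when the horizontal measurement reads only 250-300 W/m^2.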
{ "domain": "engineering.stackexchange", "id": 5401, "tags": "electrical-engineering, solar-energy, photovoltaics" }
Could discoid galaxies be expanding?
Question: I understand that astronomers once thought that the material in the disc of a galaxy was moving around the galactic centre (where most of the mass was thought to be) in roughly circular orbits. The circular motions would be explained by the combination of inertia and gravitational acceleration towards the centre. This follows the Kepler/Newton model which describes/explains the orbits of planets around the Sun. In a Kepler/Newton system the tangential or transverse velocity of a low-mass orbiting object depends on its radial distance from the high-mass object at the centre. Objects further out must have lower velocities so that the weaker centrally-directed acceleration at that distance can keep the object moving in a roughly circular track. In the centre of the galaxy the velocities are low and increase rapidly with distance away from the centre. This is understandable as the mass of the central bulge of the galaxy is not concentrated at the centre, it is spread out over a relatively large volume. However it is now well-known that the measured velocities of visible material in the disks of discoidal galaxies do not vary as expected for a Kepler/Newton system. At a certain distance where the velocities would be expected to start decreasing (as per curve A) they either level out or continue increasing at a slow rate (as per curve B). The current explanation is that a large mass of Dark Matter extends throughout the galaxy in such a way as to produce non-Keplerian behavior. QUESTION But isn't there another possible explanation of the non-Keplerian velocity profile, namely that the ordinary detectable material (e.g. gas, dust, stars) in the disk is not completely gravitationally bound to the galactic centre and is actually gradually moving outwards along a spiral path? Answer: No, for several reasons:
- There are no radial velocities in the required range observed, for example in the Milky Way galaxy.
- The reservoir of stars moving out from the centre would quickly be drained, so you would need to magically generate them at the centre -- this is much worse than postulating dark matter.
- You'd expect huge amounts of stars and gas that have moved out to large radii.
The only serious alternative to dark matter is some modification to Einstein's/Newton's theory of gravity. There are several candidates, but none is really convincing (though arguably, dark matter is not too convincing either).
{ "domain": "astronomy.stackexchange", "id": 630, "tags": "galactic-dynamics" }
Is E1cB possible in the reaction of cyclopentadiene with acetone in presence of ethanolate?
Question: When cyclopentadiene 1 and acetone 2 are reacted with sodium ethoxide in ethanol, what product is obtained? I was told that the correct product 3 is formed by nucleophilic addition of cyclopentadienyl anion to the carbonyl group. However, when I thought about it more, I wondered why an E1cB reaction to form the dimethyl fulvene 4 isn't possible in this case. I believe it should be possible because the aromaticity of the cyclopentadienyl carbanion allows the rate-determining deprotonation of 3 to proceed under strongly basic conditions. On the other hand, there would be more ring strain in the more unsaturated compound 4. How do I decide if E1cB is possible or not here? Answer: I believe you were right to conclude E1cB would occur. Both of your considerations are exactly on point. The amount of ring strain is minimal -- if we think that it's OK to deprotonate to form an sp2-like carbon in the first place, then we've already decided that the added ring-strain is not a huge issue. Of course, the added conjugation and the entropic component of eliminating a hydroxide ion also help to drive the reaction forward. A literature source that confirms our thinking is shown below, for your specific example no less, and it looks like there are many derivatives for which the elimination is observed. [1] I'll go further to note that Scifinder didn't turn up any reaction results for your alcohol product, or any similar derivatives, under equilibrium basic conditions. This suggests that elimination is actually quite likely, and that your answer key is wrong. It is, however, possible to isolate the alcohol if acid-base equilibrium between the alkoxide intermediate and solvent/conjugate-acid is minimized.[2] References Ragauskas, A. J.; Stothers, J. B. 13C magnetic resonance studies. 119. Tricyclo[3.3.0.0] and [3.3.1.0]octanones from substituted norbornenones via cyclopropanation and homoketonization. Can. J. Chem. 1985, 63 (11), 2961–2968. DOI: 10.1139/v85-491.
Smits, G.; Audic, B.; Wodrich, M. D.; Corminboeuf, C.; Cramer, N. A β-Carbon elimination strategy for convenient in situ access to cyclopentadienyl metal complexes. Chem. Sci. 2017, 8 (10), 7174–7179. DOI: 10.1039/C7SC02986A.
{ "domain": "chemistry.stackexchange", "id": 10421, "tags": "organic-chemistry, reaction-mechanism, elimination" }
Limiting depth kinect sees in rgbdslam?
Question: Is there any way I can limit the depth to which I record data in rgbdslam? What I mean by this is, I don't want to accumulate points that are far away from the camera. My main goal is to use rgbdslam to scan objects, but it involves a lot of human post-processing to isolate the object and the file sizes are very big. It would be nice if there was a way I could reduce the human element and reduce the size of these point clouds altogether. Is there any way this can be done while recording data online? Or is the only way to do this to scan the pcd file afterwards and remove points that are far away (those with a high (x^2 + y^2 + z^2)^0.5 value)? Originally posted by Shark on ROS Answers with karma: 241 on 2011-04-21 Post score: 0 Answer: How about adding a pcl_ros/PassThrough nodelet? I haven't used RGBDslam since it first came out, but I would think you should be able to remap the input topic of RGBDslam to the output of the PassThrough filter, which can be set up to limit the Z-height (depth) of the cloud. There is a very brief tutorial on the PassThrough filter on the wiki: http://www.ros.org/wiki/pcl_ros/Tutorials/PassThrough%20filtering Originally posted by fergs with karma: 13902 on 2011-04-22 This answer was ACCEPTED on the original site Post score: 1
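A hedged sketch of the PassThrough suggestion as a launch file; the input topic name and the 1.5 m cutoff are assumptions, not from the original thread:

```xml
<!-- Run a standalone pcl/PassThrough nodelet that clips points beyond 1.5 m
     along z; remap rgbdslam's input to this node's output topic. -->
<launch>
  <node pkg="nodelet" type="nodelet" name="passthrough"
        args="standalone pcl/PassThrough">
    <remap from="~input" to="/camera/depth_registered/points"/>
    <rosparam>
      filter_field_name: z
      filter_limit_min: 0.0
      filter_limit_max: 1.5
    </rosparam>
  </node>
</launch>
```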
{ "domain": "robotics.stackexchange", "id": 5424, "tags": "ros, slam, navigation, depth" }
Why specific heat at constant pressure is greater than specific heat at constant volume?
Question: I know the relation between specific heat at constant volume and at constant pressure, and I also know how to calculate it. Thing is, I don't understand the concept: I want to know why, at constant pressure, the specific heat is always greater than at constant volume. Answer: At constant volume, all the heat that goes into the system goes into raising the temperature of the system, and no external work is done. At constant pressure, some of the heat goes into expanding the system, which does external work, and therefore leaves less energy available for raising the temperature.
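For an ideal gas this difference is exactly the gas constant per mole (Mayer's relation), which makes the answer quantitative:

```python
# At constant pressure part of the heat does p*dV expansion work, so
# Cp = Cv + R per mole for an ideal gas.
R = 8.314            # J/(mol K)
Cv = 1.5 * R         # monatomic ideal gas (translational degrees of freedom)
Cp = Cv + R          # always larger than Cv, by exactly R
```

For a monatomic ideal gas this gives the familiar ratio $\gamma = C_p/C_v = 5/3$.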
{ "domain": "physics.stackexchange", "id": 33021, "tags": "thermodynamics" }
How did the Milky Way quasar not disrupt life on Earth?
Question: According to most sources of information I have found (a Quora answer and books), when galaxies become quasars, they destroy all life in their host galaxy, as they output so much radiation that they can disintegrate DNA and do lots of bad stuff, like destroying the ozone layer. But according to some information I have found, such as this site, the Milky Way itself was a quasar, or at least an active galaxy, 6,000,000 years ago, when massive gas streams were blasted from Sagittarius A*, the Milky Way's supermassive black hole. This explains the enormous Fermi Bubbles found lingering above our galaxy, and the abnormal amount of gamma rays being found in that region. Quasars are known to be so powerful that they make GRBs seem like an insignificant firework. The proof that the Milky Way was a quasar is the Fermi Bubbles, giant bubbles of gas found lingering over our galaxy. According to studies, the Fermi Bubbles emit an unusually high amount of gamma rays. This is way too much radiation to be produced by dark-matter annihilation. Studies suggest that the reason for the Fermi Bubbles being so radioactive and energetic is that they may be the remnants of a quasar event, when too much gas fell into Sgr A* and was blasted out in jets, which formed the Fermi Bubbles. This also explains why they are emitting so many gamma rays, due to the rapid ionization of so much gas by this quasar event. According to this IOPscience source, temperatures in the Fermi Bubbles may range from $10^6$ K to $10^8$ K. Also, it has been shown that the "cool" gas in the Northern Fermi Bubbles has been clocked at $2 \cdot 10^6$ K. This may have been a spectacular sight in the prehistoric night sky, but terrible news for life, due to intense irradiation of hard gamma-rays and X-rays. But at that time, the Earth already had advanced life on it, and even apes were present at the time.
If the Milky Way was a quasar, then it should have output so much radiation into the environment that it would have destroyed the ozone layer. Also, the radiation would have sterilized the planet to a lifeless rock, and it would have taken several millions, or even billions, of years for Earth to recover from such a mass extinction. And yet, everything on Earth seems normal. The ozone layer is intact, humans haven't mutated into horrendous creatures, nor is our planet fried to a crisp. Nothing is out of place. I cannot get my head around this: quasars are supposed to destroy all life in the host galaxy, not spare life. It should have turned Earth into a barren wasteland, destroyed even the hardiest microbes, and destroyed the ozone layer, even stripping the Earth of its own atmosphere, if it was powerful enough. How could life have survived on Earth when the Milky Way was a quasar 6 million years ago? Answer: An active galactic nucleus doesn't emit energy equally in all directions. It may form "jets", and if you are looking into the AGN at the right angle, and the nucleus is active enough, then you see a quasar. Being too close to the jets of a quasar would make complex life hard on a planet. But there are three things that may have protected Earth. We are not too close, being in a spiral arm of the galaxy. We were not in the direct line of any jets. And the nucleus might not have been especially active. So it simply isn't the case that an AGN will necessarily sterilize the galaxy. The Milky Way AGN was never strong, close, and well-positioned enough to destroy life. And we know this simply because we are here. Your internet link is about being "hit by a quasar", which is rather unclear; it certainly doesn't talk about merely sharing a galaxy with an AGN.
{ "domain": "astronomy.stackexchange", "id": 6787, "tags": "black-hole, supermassive-black-hole, life, radiation, galactic-center" }
Perform sum of squared numbers without using for loop
Question: I have to find the sum of the numbers after squaring them; in the given input, the negative numbers need to be ignored. The input format is: the first line is the number of test cases, followed by N as the number of test inputs on a new line, followed by N space-separated numbers. The task needs to be done using Go without using a for loop. Output needs to be printed on a new line for each test case. 3 4 3 -1 1 14 5 9 6 -53 32 16 6 3 -4 12 2 5 7 Below is my attempt to achieve the task using recursion. package main import "fmt" func findSquareSum(x int, a []int, iteration int) { var num int if x > 0 { fmt.Scanf("%d", &num) if num > 0 { a[iteration-1] += num * num } findSquareSum(x-1, a, iteration) } } func readTest(x int, a []int){ var input int if x > 0{ fmt.Scanf("%d", &input) findSquareSum(input,a,x) readTest(x-1,a) } } func printResult(x int, a []int){ if (x > 0){ fmt.Println(a[x-1]) printResult(x-1,a) } } func main() { var test int fmt.Scanf("%d", &test) a := make([]int, test) readTest(test,a) printResult(test,a) } My question is: as there is no base case in the recursive function, how is the program working as expected? Answer: There is a base case... it's the line if x > 0 { compare to the for loop version for x=input: x>0: x=x-1 { findSquareSum(input,a,x) sets x=input if x > 0 { is the condition check findSquareSum(x-1, a, iteration) decrements x also consider the modified function func findSquareSum(x int, a []int, iteration int) { var num int if x <= 0 { return } fmt.Scanf("%d", &num) if num > 0 { a[iteration-1] += num * num } findSquareSum(x-1, a, iteration) } now it should look more like the base cases we are used to seeing. It should also be clear that this is equivalent to the asker's version (unless my Go syntax is wrong!) similarly func readTest(x int, a []int){ var input int if x <= 0 { return } fmt.Scanf("%d", &input) findSquareSum(input,a,x) readTest(x-1,a) }
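For illustration, here is the same guarded-base-case idea in Python rather than the asker's Go, contrasting the recursion with the loop it mirrors (a sketch, not the original code; it works on an in-memory list instead of Scanf input):

```python
def sum_of_squares(nums):
    """Iterative version: square and sum only the positive numbers."""
    total = 0
    for n in nums:
        if n > 0:
            total += n * n
    return total

def sum_of_squares_rec(nums, i=0):
    """Recursive version with an explicit base case (no input left)."""
    if i >= len(nums):
        return 0  # base case, equivalent to the guard `if x > 0` inverted
    n = nums[i]
    contribution = n * n if n > 0 else 0
    return contribution + sum_of_squares_rec(nums, i + 1)

# First sample test case "3 -1 1 14": 9 + 1 + 196 = 206
```

As in the answer, inverting the guard into an early `return` makes the base case explicit without changing behaviour.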
{ "domain": "codereview.stackexchange", "id": 41285, "tags": "recursion, go" }
How do I loop over this code?
Question: I have 139 samples; In R for each sample I run below codes for example for sample 1 sample_1 = whichSignatures(tumor.ref = sigs.input, signatures.ref = signatures.nature2013, sample.id = 1, contexts.needed = TRUE, tri.counts.method = 'genome') a1=(sample_1[[1]]) > class(sample_1) [1] "list" > a1=(sample_1[[1]]) > head(a1) Signature.1A Signature.1B Signature.2 Signature.3 Signature.4 Signature.5 Signature.6 Signature.7 Signature.8 1 0.5370653 0 0 0 0 0 0 0 0 Signature.9 Signature.10 Signature.11 Signature.12 Signature.13 Signature.14 Signature.15 Signature.16 Signature.17 1 0 0 0 0 0 0 0 0 0.3383947 Signature.18 Signature.19 Signature.20 Signature.21 Signature.R1 Signature.R2 Signature.R3 Signature.U1 Signature.U2 1 0.1123409 0 0 0 0 0 0 0 0 > dim(a1) [1] 1 27 and for sample 2 sample_2 = whichSignatures(tumor.ref = sigs.input, signatures.ref = signatures.nature2013, sample.id = 2, contexts.needed = TRUE, tri.counts.method = 'genome') a2=(sample_2[[1]]) And finally binding a1,a2,...a139 by sig=rbind(a1,a2) > print(sig) Signature.1A Signature.1B Signature.2 Signature.3 Signature.4 Signature.5 Signature.6 Signature.7 Signature.8 1 0.5370653 0 0 0 0 0 0 0 0 2 0.7162292 0 0 0 0 0 0 0 0 Signature.9 Signature.10 Signature.11 Signature.12 Signature.13 Signature.14 Signature.15 Signature.16 Signature.17 1 0 0 0 0 0 0 0 0 0.3383947 2 0 0 0 0 0 0 0 0 0.2837708 Signature.18 Signature.19 Signature.20 Signature.21 Signature.R1 Signature.R2 Signature.R3 Signature.U1 Signature.U2 1 0.1123409 0 0 0 0 0 0 0 0 2 0.0000000 0 0 0 0 0 0 0 0 > sigs.input is a common R object here Doing these manually is too painful; Can you please help me how I do that automatically in R? 
I am not sure, but maybe something like a loop or whatever you think is possible. Thanks. Answer: list.of.results <- lapply(seq(INSERT_TOTAL_NUMBER_OF_SAMPLES_HERE),function(sample.id){ whichSignatures(tumor.ref = sigs.input, signatures.ref = signatures.nature2013, sample.id = sample.id, contexts.needed = TRUE, tri.counts.method = 'genome')[[1]] }) sig <- do.call(rbind,list.of.results)
{ "domain": "bioinformatics.stackexchange", "id": 1160, "tags": "r, loop" }
Which policy do I need to use in updating the Q function?
Question: A policy function can be of two types: deterministic policy and stochastic policy. A deterministic policy is of the form $\pi : S \rightarrow A$. A stochastic policy is defined using conditional probability distributions, and I generally remember it as $\pi: S \times A \rightarrow [0,1]$. (I personally don't know whether the function prototype is correct or not.) I am guessing that both types of policies can be used for Q learning. As one can read from this answer, both the reward and the policy function are needed to implement the $Q$-learning algorithm: In addition to the RF, you also need to define an exploratory policy (an example is the $\epsilon$-greedy), which allows you to explore the environment and learn the state-action value function $\hat{q}$. I have no doubt about the necessity of the reward function, as it is obvious from the updating equation of $Q$. As for the usage of the policy, you can find it in line 5 of the pseudocode provided in the answer: Choose $a$ from $s$ using policy derived from $Q$ One can notice that the policy is used for computing $Q$, and updating $Q$ also needs a policy. Hence I concluded that the correct statement for line 5 of the pseudocode has to be Choose $a$ from $s$ using policy derived from $Q$ updated so far Is my conclusion true? If not, how is it possible to break that cyclic dependency between the policy and the $Q$ function? Answer: I am going to stick with Q learning here to keep things simple. Most value-based reinforcement learning used for optimal control will have some statement similar to: Choose $a$ from $s$ using policy derived from $Q$ First, yes this is always the current Q function or Q table, evaluated for the state of interest.
When you are choosing the agent's best guess at optimal actions, then this derivation of policy is fixed: $$\pi(s) = \text{argmax}_a Q(s,a)$$ This matches your form for a deterministic policy (although it is always possible to express a deterministic policy as a stochastic one with probability 1 of choosing its selected action). In Q-learning this policy is the target policy that you are currently learning the value of. When it comes to taking actions in the environment to gain new observations, you do not use the target policy, because it does not explore. Instead you use a different behaviour (or exploring) policy. It is important for Q learning to work in theory that this policy is "soft" - that it has some non-zero chance of selecting any action. A popular choice for the behaviour policy is to use $\epsilon$-greedy, which is a stochastic policy that selects a random action with probability $\epsilon$, otherwise it selects the greedy policy. The greedy policy is definitely "derived from Q", so the $\epsilon$-greedy is too. In fact it is not 100% necessary to use a "policy derived from Q" for the behaviour policy for Q learning to work. A completely random policy can work, for instance. The learning rate is better though - often much better - if current highest action value estimates are selected more often. This allows the agent to explore state-action pairs close to its best guess at optimal. There are a few other ways to derive behaviour policies from the Q table. There is an unwritten assumption in the pseudocode that this will be done in a way that favours the higher-valued actions.
You can come up with any method that creates a stochastic policy function from Q values and has the following traits: there is a chance of selecting any action; there is a higher chance of selecting the current highest-valued actions; and optionally, the preference for the highest-valued actions becomes stronger as the agent becomes better at the task. If you can do this, then Q learning should work well. It is still sometimes a challenge to find the balance point between exploring enough to learn new things about the environment, yet doing so close to what is currently known to be best. Regarding this: Choose $a$ from $s$ using policy derived from $Q$ updated so far Yes, although most sources do not spell that out in full, relying on the use of $Q$ as a variable/data structure to imply it. The target policy in Q learning is not directly the optimal policy (that is not possible unless you already know it), but the best guess at what would maximise expected return given the updates to Q so far. This keeps shifting as more knowledge is obtained.
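The two roles of the policy can be sketched in tabular form; a minimal Python sketch (the dict-based Q table and function names are illustrative, not from the original answer):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Behaviour policy derived from the current Q table: with probability
    epsilon pick any action (exploration), otherwise act greedily on the
    current estimates (exploitation)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(Q, s, a, r, s2, actions, alpha=0.5, gamma=0.9):
    """Q-learning update: the target policy is the greedy one, which
    appears only inside the max over next-state action values."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```

Note the asymmetry the answer describes: `epsilon_greedy` (behaviour) is used to pick the action actually taken, while the `max` in `q_update` is the greedy target policy "derived from Q updated so far".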
{ "domain": "ai.stackexchange", "id": 2899, "tags": "reinforcement-learning, q-learning, terminology, off-policy-methods, exploration-strategies" }
Why is the strong nuclear force > electrostatic repulsion?
Question: In a nucleus there is a gravitational force between the nucleons and also electrostatic repulsion between the protons, and since electrostatic repulsion >> gravitational attraction, it follows that there must be an additional attractive force acting on the nucleons or else there is nothing stopping them from flying apart. So if we let the gravitational and electrostatic forces be $g$ and $e$, respectively, and denote the additional attractive force by $x$, then we would need $g+x=e$ (because the attractive and repulsive forces must balance). This gives $x=e-g<e$, implying that the additional attractive force must be less than electrostatic repulsion. Since the strong nuclear force must be part of $x$, we then have strong nuclear force $\le x<e$. But in my revision guide, it says that the strong nuclear force is more than electrostatic repulsion, which seems counter-intuitive according to the above. Please explain! Answer: Consider the Earth-Moon system. They are subject to an attractive force (gravitation) and to no repulsive forces (neglecting solar tides, anyway), yet they stay at a nearly constant distance from one another because of their dynamics. A static analysis of this system would prompt us to postulate some repulsive force holding the bodies apart (and you can find it by using a non-inertial frame of reference: it is the centrifugal pseudoforce). The lesson is that static analysis will break when applied to dynamic systems. You are trying to analyze the nucleus in terms of statics when it is a dynamic system (and moreover a dynamic quantum system). As nuclear particles are confined to a limited region in space, they necessarily acquire a larger range of momenta as a consequence of the commutator between position and momentum (we can wave our hands and say "Heisenberg Uncertainty Principle" if you want a shorter label for this effect).
{ "domain": "physics.stackexchange", "id": 21870, "tags": "electrostatics, strong-force" }
SASS 960 fluid grid
Question: I'm currently converting a CSS 960 fluid grid to SASS. How can I improve my current implementation? My column classes .two.columns are getting a bit unruly. Is there a better way to write them? // Variables $width: 960px; .container { position: relative; width: $width; margin: 0 auto; padding: 0; .column, .columns { float: left; display: inline; margin-left: 10px; margin-right: 10px; } } .row { margin-bottom: 20px; } // Nested Column Classes .column.alpha, .columns.alpha { margin-left: 0; } .column.omega, .columns.omega { margin-right: 0; } // 960 GRID // (1 * (960 / 16)) - (2 * 10) .container { .one { &.column, &.columns { width: 40px; } } .two.columns { width: 100px; } .three.columns { width: 160px; } .four.columns { width: 220px; } .five.columns { width: 280px; } .six.columns { width: 340px; } .seven.columns { width: 400px; } .eight.columns { width: 460px; } .nine.columns { width: 520px; } .ten.columns { width: 580px; } .eleven.columns { width: 640px; } .twelve.columns { width: 700px; } .thirteen.columns { width: 760px; } .fourteen.columns { width: 820px; } .fifteen.columns { width: 880px; } .sixteen.columns { width: 940px; } .one-third.column { width: 300px; } .two-thirds.column { width: 620px; } .offset-by-one { padding-left: 60px; } .offset-by-two { padding-left: 120px; } .offset-by-three { padding-left: 180px; } .offset-by-four { padding-left: 240px; } .offset-by-five { padding-left: 300px; } .offset-by-six { padding-left: 360px; } .offset-by-seven { padding-left: 420px; } .offset-by-eight { padding-left: 480px; } .offset-by-nine { padding-left: 540px; } .offset-by-ten { padding-left: 600px; } .offset-by-eleven { padding-left: 660px; } .offset-by-twelve { padding-left: 720px; } .offset-by-thirteen { padding-left: 780px; } .offset-by-fourteen { padding-left: 840px; } .offset-by-fifteen { padding-left: 900px; } } Answer: You could use a for loop to have it generate the columns for you, like... 
$grid-column: 16; $grid-gutter: 10px; $column-width: 50px; .column { position: relative; display: inline; float: left; margin-right: ($grid-gutter / 2); margin-left: ($grid-gutter / 2); } @for $n from 1 through $grid-column { .grid-#{$n} { @extend .column; width: ($column-width * $n) + ($grid-gutter * ($n - 1)); } } @for $n from 1 through $grid-column - 1 { .offset-#{$n} { padding-left: ($column-width * $n) + ($grid-gutter * $n); } } That would output .column, .grid-1, .grid-2, .grid-3, .grid-4, .grid-5, .grid-6, .grid-7, .grid-8, .grid-9, .grid-10, .grid-11, .grid-12, .grid-13, .grid-14, .grid-15, .grid-16 { position: relative; display: inline; float: left; margin-right: 5px; margin-left: 5px; } .grid-1 { width: 50px; } .grid-2 { width: 110px; } .grid-3 { width: 170px; } .grid-4 { width: 230px; } .grid-5 { width: 290px; } .grid-6 { width: 350px; } .grid-7 { width: 410px; } .grid-8 { width: 470px; } .grid-9 { width: 530px; } .grid-10 { width: 590px; } .grid-11 { width: 650px; } .grid-12 { width: 710px; } .grid-13 { width: 770px; } .grid-14 { width: 830px; } .grid-15 { width: 890px; } .grid-16 { width: 950px; } .offset-1 { padding-left: 60px; } .offset-2 { padding-left: 120px; } .offset-3 { padding-left: 180px; } .offset-4 { padding-left: 240px; } .offset-5 { padding-left: 300px; } .offset-6 { padding-left: 360px; } .offset-7 { padding-left: 420px; } .offset-8 { padding-left: 480px; } .offset-9 { padding-left: 540px; } .offset-10 { padding-left: 600px; } .offset-11 { padding-left: 660px; } .offset-12 { padding-left: 720px; } .offset-13 { padding-left: 780px; } .offset-14 { padding-left: 840px; } .offset-15 { padding-left: 900px; } etc..
{ "domain": "codereview.stackexchange", "id": 3189, "tags": "sass" }
How will the charge be distributed between a charged rod and a neutral sphere, and what will be the charge on each after contact?
Question: When a charged sphere (charge = Q1) and an uncharged sphere of the same radius are brought into contact, the final charge on both spheres will be Q1/2. But what will happen if objects of different shapes (one charged and the other uncharged) are brought into contact with each other? What will be the magnitude of charge on both of them? Answer: Short Answer: It depends on the shape and size of both the conductors. The principle which is followed is that the charge will flow in such a way that the final electrostatic potential of both the conductors becomes equal after we touch them. More about charges and fields here.
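The equal-potential condition can be made concrete for the special case of two conducting spheres held far apart, where each sphere's potential is approximately $kQ/R$, so equality forces $Q_1/R_1 = Q_2/R_2$. A small illustrative sketch (the function name and the far-apart assumption are mine, not from the answer):

```python
def share_charge(Q_total, R1, R2):
    """Split Q_total between two distant conducting spheres so their
    potentials kQ/R become equal: Q1/R1 == Q2/R2, hence the charge
    divides in proportion to the radii."""
    Q1 = Q_total * R1 / (R1 + R2)
    Q2 = Q_total * R2 / (R1 + R2)
    return Q1, Q2

# Equal radii recover the half-and-half split from the question:
# share_charge(10.0, 1.0, 1.0) -> (5.0, 5.0)
```

For general shapes there is no closed form like this; the split is set by the capacitance of each conductor, which is why the answer says it "depends on the shape and size".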
{ "domain": "physics.stackexchange", "id": 94837, "tags": "homework-and-exercises, electrostatics, charge" }
Time scaling of a shifted function
Question: Suppose we have a delta at time $t_0$ in continuous time: $\delta(t-t_0)$. If we want to move it to $t_0/2$, can we scale the time and squeeze by 2 instead of shifting? What does time scaling of a shifted function look like? Is it $f(a(t-t_0))$ or $f(at-t_0)$? Answer: Properties or effects of mathematical objects (operators, systems, functions, etc.) generally depend on hypotheses they are supposed to possess (or axioms they obey). Here, one of the tags the OP used is very important: continuous-signals, and a mention in the question as well: "shifted function". Words are important. The Dirac $\delta$ is not a function. And within the realm of functions, it cannot be considered continuous (with standard continuity), in the sense of topological continuity (note to self: ask to rename continuous-signals to continuous-time-signal). The Dirac $\delta$ is a kind of continuous analogue of the discrete Kronecker delta. Continuous here means that it is defined on continuous time. Its properties are odd, compared to classical functions. For the formula the OP considers, a useful property is: if $g$ is a continuously differentiable function with a real root at $t_0$, and its derivative (well-defined) does not vanish there, one could write: $$\delta(g(t)) = \frac{\delta(t-t_0)}{|g'(t_0)|}\,.$$ which applies well here. For details, see Dirac delta, Composition with a function.
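Applying the composition rule to the question's scaling case makes the answer concrete (a worked sketch, assuming $a \neq 0$):

```latex
% g(t) = a(t - t_0) has its single root at t = t_0 with g'(t_0) = a, so
\delta\big(a(t - t_0)\big) = \frac{\delta(t - t_0)}{|a|}.
% Scaling around the shift therefore cannot move the impulse: it stays
% at t_0. To place it at t_0/2 one must shift, or scale before shifting:
\delta(2t - t_0) = \tfrac{1}{2}\,\delta\big(t - t_0/2\big).
```

So of the two candidate forms, $f(a(t-t_0))$ keeps the feature at $t_0$, while $f(at-t_0)$ moves it to $t_0/a$.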
{ "domain": "dsp.stackexchange", "id": 9133, "tags": "continuous-signals, scaling" }
What is pre-concentration (context: Breath analysis)?
Question: What is the meaning of pre-concentration, both in general and specifically in this context: a) "In addition to consistent sampling protocols,.. pre-concentration, and analysis of breath samples require standard methodology, calibration standards..." b) "Offline breath analysis involves some form of pre-concentration of analytes followed by a separation step using high-resolution GC-MS based detection" Answer: Pre-concentration just refers to the process of concentrating a sample before analysis, so that trace components won't be overlooked. If you're looking for something that's only present at a few parts-per-million, you could easily miss it if you used the raw sample, so ideally you want a way to separate off either large, common sources of noise/error, or to just extract the exact components you're looking for. A quick look suggests, for example, that a key first component to remove in breath analysis is moisture, as some volatile components will dissolve readily in the water and make it harder to detect them. Anyway, this paper on a few pre-concentration and detection techniques might be informative: Mochalski P, Wzorek B, Sliwka I, Amann A; Chromatogr B Analyt Technol Biomed Life Sci. 2009 Jul 1;877(20-21):1856-66.
{ "domain": "chemistry.stackexchange", "id": 8474, "tags": "analytical-chemistry, chromatography" }
How and why did the mouth and nasal cavity evolve separately?
Question: My initial objection is that the nose filters air, and the mouth is for eating but is also used for breathing; plus, they both are used to create sounds. What is the cause and reason in this case? Why do we need two "holes" when one is not good enough? Answer: Neither the nostrils nor the mouth originally evolved for breathing. Fish have (two pairs of) nostrils which they use to smell and mouths which they use to eat, but they breathe through their gills. Some lobe-finned fishes (the ancestors to tetrapods) evolved a connection between the posterior nostrils and the oral cavity called choanae. A fossil called Kenichthys is a transitional form in this development. The evolutionary reasons behind this development are not particularly well understood. (This should not be surprising when discussing something that happened in the Devonian. It's only recently that the discovery of Kenichthys ended the controversy of whether choanae in tetrapods are homologous to posterior nostrils in fish.) Basal reptiles have nostrils and mouths and can breathe through both of them, but do not have a separation between the oral and nasal cavities. The next important development is a secondary palate (which separates the nasal and oral cavities). This allows animals to continue breathing while swallowing food. Animals without such a separation must hold their breath while swallowing. This ability is certainly useful in many situations, and unsurprisingly several solutions to this problem have evolved in different lineages.
{ "domain": "biology.stackexchange", "id": 404, "tags": "evolution, human-anatomy, language" }
Why Ti/Tv ratio?
Question: I'm interested in the transition/transversion (Ti/Tv) ratio: In substitution mutations, transitions are defined as the interchange of the purine-based A↔G or pyrimidine-based C↔T. Transversions are defined as the interchange between two-ring purine nucleobases and one-ring pyrimidine bases. (Wang et al, Bioinformatics, 31 (3), 2015) How exactly does this ratio imply false positives? Too high ==> high false positive rates? Or too low? Why is the expected value for random substitutions for the Ti/Tv ratio 0.5? If the ratio is expected to be 2.10 for WGS, but I get 3.00 what does that mean? What if I get 1.00? Answer: I prefer to use "ts/tv" for the transition-to-transversion ratio. This abbreviation has been used in phylogenetics. When NGS came along, some important developers started to use "ti/tv", but I am still used to the old convention. Why is the expected value for random substitutions for the Ti/Tv ratio 0.5? There are six types of base changes. Two of them are transitions: A<->G and C<->T, and the other four types are transversions. If everything were random, you would expect to see twice as many transversions – ts:tv=2:4=0.5. If the ratio is expected to be 2.10 for WGS The expected ratio is not "2.10 for WGS". It is 2–2.10 for human across the whole genome. You see this number when you align the human genome to chimpanzee or when you focus on an accurate subset of human variant calls. However, in other species, the expected ts/tv may be very different. Also, this number is correlated with GC content. You get a higher ts/tv in high-GC regions, or in coding regions which tend to have higher GC, too. Partly as a result, it is hard to say exactly what the expected ts/tv is. but I get 3.00 what does that mean? What if I get 1.00? If you get 3.00, your callset is highly biased. If you get 1.00, your callset has a high error rate.
Suppose the exact ts/tv is $\beta$ and you observe ts/tv $\beta'\le\beta$, you can work out the fraction of wrong calls to be (assuming random errors have ts/tv=0.5) $$ \frac{3(\beta-\beta')}{(1+\beta')(2\beta-1)} $$ This is of course an approximation because $\beta$ is not accurate in the first place and because errors are often not random, so their ts/tv is not really 0.5. How exactly does this ratio imply false positives? Too high ==> high false positive rates? Or too low? Too low ==> high false positive rate; too high ==> bias. In practice, you rarely see "too high" ts/tv.
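The formula above is easy to turn into a quick sanity check on a callset; a small sketch (the function name is mine, and the caveats about non-random errors apply):

```python
def wrong_call_fraction(beta, beta_obs):
    """Approximate fraction of wrong calls, given the expected ts/tv
    (beta) and the observed ts/tv (beta_obs <= beta), assuming random
    errors have ts/tv = 0.5:  3(b - b') / ((1 + b')(2b - 1))."""
    return 3 * (beta - beta_obs) / ((1 + beta_obs) * (2 * beta - 1))

# e.g. expected 2.1 genome-wide but observed 1.0:
# 3 * 1.1 / (2.0 * 3.2) = 3.3 / 6.4 ~ 0.52, i.e. roughly half the
# calls would be errors, matching "ts/tv of 1.00 => high error rate".
```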
{ "domain": "bioinformatics.stackexchange", "id": 577, "tags": "variant-calling" }
Mapping input vectors of variable length to output vectors of variable lengths with dummy variables
Question: I have a general question about supervised ANNs that map inputs to outputs. It is possible to vary the length of the input and output vectors by inserting some dummy variables that will not be considered in the mapping (or will be mapped to other dummy variables). So basically the mapping should look like this (v: value, d: dummy) Input vector 1 $[v,v,v,v,v] \rightarrow$ Output vector 1 $[v,v,v,v,v]$ Input vector 2 $[v,v,v,v,v]\rightarrow$ Output vector 2 $[v,v,v,v,v]$ Input vector 3 $[v,v,v,d,d] \rightarrow$ Output vector 3 $[v,v,v,d,d]$ Input vector 4 $[v,v,d,d,d] \rightarrow$ Output vector 4 $[v,v,d,d,d]$ Input vector 5 $[v,d,d,d,d] \rightarrow$ Output vector 5 $[v,d,d,d,d]$ The input and output vectors have a length of 5 with 5 values. However, sometimes only a vector of size e.g. 3 (which is basically a vector of length 5 with 2 dummy variables) should be mapped to an output vector of length 3. So after training the ANN should know that if it for example gets an input vector of length 3 it should produce an output vector of length 3. Is something like this generally possible with ANNs or other machine learning approaches? If so, what type of ANN or machine learning approach can be used for this? I'll appreciate every comment. Reminder: Can anybody give me more insights into this? Answer: This should be possible but I've never seen it done in practice. Whether or not this will even actually work is unclear to me and will be highly dependent both on your training data and choice of loss. I'd take a step back and look into the literature to see if you can't find a more established approach to your problem, perhaps with RNNs. That being said, I believe the following should do what you're asking. Consider network $N$ to be a dense neural net with $k$ layers, $N_i$ to be the $i$th layer of $N$, $L$ to be the max length of the input, and $V$ to be the number of terms in the input (the number of $v$s). 
To accomplish what you want in the above scenario, you can add three additional layers to $N$: $N_{k+1}$, $N_{k+2}$, and $N_{k+3}$: $N_{k+1}$ is a simple dense layer that has $L$ neurons and takes as input the output of $N_k$. This layer can be skipped if layer $N_k$ already has $L$ neurons. $N_{k+2}$ takes as input the output of $N_{k+1}$ and takes the Hadamard product (elementwise multiplication) of it with a second input, a binary vector of length $L$ with a prefix of $V$ $1$s and a suffix of $L-V$ $0$s. For example if $L=5$ and $V=2$, you would supply as the second input the vector $[1, 1, 0, 0, 0]$, which effectively "zeroes out" the third, fourth, and fifth positions. $N_{k+3}$ is your new output layer, which also has $L$ neurons. $N$ can now be trained and, given the target data is in the proper format to achieve this, it should output results like in your question.
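The masking step in $N_{k+2}$ is just an elementwise product with a binary vector; a minimal NumPy sketch (the helper name and the fixed $L$ are illustrative, not part of the answer):

```python
import numpy as np

L = 5  # max vector length, as in the question

def mask_output(dense_out, V):
    """Hadamard product of a length-L activation vector with a binary
    mask that keeps the first V positions and zeroes the L-V dummies."""
    mask = np.array([1.0] * V + [0.0] * (L - V))
    return dense_out * mask

# e.g. for V=2 only the first two activations survive; the three dummy
# slots are forced to zero regardless of what the dense layer produced.
```

In a real framework this would be a multiply layer taking the mask as a second model input, so the mask can differ per sample.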
{ "domain": "ai.stackexchange", "id": 3035, "tags": "neural-networks, training" }
if-elseif chain. Is there a better pattern?
Question: I'm working on a TCP Server using .NET and the following code is my first attempt to process the received data (from the socket) to create a packet to deliver. Basically, the packet has a header (7 bytes) where the first 3 bytes are always 0x12 0x34 0x89 and the next 4 bytes are the message length. For example: 0x12 0x34 0x89 0x00 0x00 0x00 0x02 0x20 0x20 The packet then is a length-2 packet and the sent data is 0x20 0x20. To process this I am using an if-elseif chain, which I don't like. Is there a better pattern to take away that if-elseif chain? public void ProcessIncomingData(byte[] buffer, int length) { for (int i = 0; i < length; i++) { ProcessByte(buffer[i]); } } private void ProcessByte(byte b) { if (_status == PacketStatus.Empty && b == FirstHeaderByte) { _status = PacketStatus.FirstHeaderReceived; } else if (_status == PacketStatus.FirstHeaderReceived && b == SecondHeaderByte) { _status = PacketStatus.SecondHeaderReceived; } else if (_status == PacketStatus.SecondHeaderReceived && b == ThridHeaderByte) { _status = PacketStatus.ThirdHeaderReceived; } else if (_status == PacketStatus.ThirdHeaderReceived || _status == PacketStatus.ReceivingPacketLenght) { const int sizeOfInt32 = sizeof (int); const int sizeOfByte = sizeof (byte); _packetLength |= b << (sizeOfInt32 - ++_packetLenghtOffset) * sizeOfByte; _status = _packetLenghtOffset < 4 ? PacketStatus.ReceivingPacketLenght : PacketStatus.ReceivingData; } else if (_status == PacketStatus.ReceivingData) { _packet.Add(b); var receivedByteCount = _packet.Count; if (receivedByteCount == _packetLength) { var packetData = new byte[_packet.Count]; _packet.CopyTo(packetData); var receivedPacketHandler = PacketReceived; if(receivedPacketHandler != null) { receivedPacketHandler(this, new PacketReceivedEventArgs{ Packet = packetData }); } ResetControlVariables(); } } } Answer: Simple pattern: Set up a hash table or map that maps the incoming byte to a function that returns the correct response.
(C++ pseudocode - can be done in C# as well - don't have time now for all the implementation details:) _statusType responseFunc(_statusType, byte); map<byte, responseFunc> responseMap; private void ProcessByte(byte b) { _status = responseMap[_status](_status, b); } Although you have some nested conditions, etc., your handler functions can deal with those conditions - but you may have to rethink your design and set up some classes that properly represent and understand your conditions. A long switch case or 'if else if' is invariably the result of poor design. You need to think carefully about your architecture. Another thing to consider is abstracting your conditions a bit and setting up an enumeration or class hierarchy to represent them. They can then be used as the key for your map or a parameter to your handler functions. Yes - this is work - but clean design does require some effort. If you don't like long conditional statements (and you shouldn't), put in the effort to clean them up. You should think along those lines - maps, dictionaries, enumerations, class hierarchies, factories, etc. - whenever you run into a situation that seems to require an extended 'if else if' or 'switch case'.
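The map-of-handlers idea translates to most languages. A minimal runnable Python sketch, keyed on the question's 0x12 0x34 0x89 header (state names and handlers are hypothetical, covering only the header-matching states for brevity):

```python
# Each handler takes the current byte and returns the next state name.
def on_empty(b):  return "first" if b == 0x12 else "empty"
def on_first(b):  return "second" if b == 0x34 else "empty"
def on_second(b): return "length" if b == 0x89 else "empty"
def on_length(b): return "length"  # length-byte accumulation would go here

HANDLERS = {
    "empty":  on_empty,
    "first":  on_first,
    "second": on_second,
    "length": on_length,
}

def process_byte(state, b):
    """Dispatch replaces the if-elseif chain: one lookup, one call."""
    return HANDLERS[state](b)

# Feeding the three magic bytes walks empty -> first -> second -> length:
state = "empty"
for b in (0x12, 0x34, 0x89):
    state = process_byte(state, b)
```

Adding a new state becomes a new entry in the dict rather than another `else if` branch, which is the maintainability win the answer is pointing at.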
{ "domain": "codereview.stackexchange", "id": 4141, "tags": "c#, design-patterns, socket" }
A fixed-depth characterization of $TC^0$? $NC^1$?
Question: This is a question about circuit complexity. (Definitions are at the bottom.) Yao and Beigel-Tarui showed that every $ACC^0$ circuit family of size $s$ has an equivalent circuit family of size $s^{poly(\log s)}$ of depth two, where the output gate is a symmetric function and the second level consists of $AND$ gates of $poly(\log s)$ fan-in. This is a fairly remarkable "depth collapse" of a circuit family: from a depth 100 circuit you can reduce the depth to 2, with only a quasi-polynomial blowup (and one fancy but still restricted gate at the top). My question: is there any known way to express a $TC^0$ circuit family, similarly? More ambitiously, what about an $NC^1$ circuit family? Potential answers would have the form: "Every $TC^0$ circuit of size $s$ can be recognized by a depth-two family of size $f(s)$, where the output gate is a function of type $X$ and the second level of gates have type $Y$". It doesn't have to be depth-two, any sort of fixed-depth result would be interesting. Proving that every $TC^0$ circuit can be represented in depth 3 by a circuit consisting of only symmetric function gates would be very interesting. Some minor observations: If $f(n)=2^n$ the answer is trivial for any Boolean function (we can express any function as an $OR$ of $2^n$ $AND$s). For concreteness, let's require $f(n) = 2^{n^{o(1)}}$. The answer is also trivial if either $X$ or $Y$ is allowed to be an arbitrary function computable in $TC^0$... :) I'm obviously interested in "simpler" functions, whatever this means. It's a bit slippery to define because there are symmetric function families which are uncomputable. (There are unary languages which are uncomputable.) If you like, you may simply replace $X$ and $Y$ with symmetric functions in the statement, however I'd be interested in any other neat choices of gates. 
(Now for some brief recollections of notation: $ACC^0$ is the class recognized by a family of unbounded fan-in constant-depth circuits with $AND$, $OR$, and $MOD_m$ gates for a constant $m > 1$ independent of the circuit size. A $MOD_m$ gate returns $1$ iff the sum of its inputs is divisible by $m$. $TC^0$ is the class recognized by constant-depth circuits with $MAJORITY$ gates of unbounded fan-in. $NC^1$ is the class recognized by logarithmic-depth circuits with $AND$, $OR$, $NOT$ gates of bounded fan-in. It is known that $ACC^0 \subseteq TC^0 \subseteq NC^1$ when the circuit size is restricted to be polynomial in the number of inputs.) Answer: Here is a slight expansion of my comment to the answer by Boaz. Agrawal, Allender and Datta in their paper On $TC^0$, $AC^0$, and Arithmetic Circuits give a characterization of $TC^0$ in terms of arithmetic circuits. Namely, they show that a language $A$ is in $TC^0$ if and only if there is a function $f$ in $\sharp AC^0$ and an integer $k$ such that $x \in A$ if and only if $f(x) = 2^{|x|^k}$. Note that $\sharp AC^0$ is a special form of constant-depth arithmetic circuit over $Z$ (only constants 0 and 1 are allowed, and variable inputs can be $x_i$ or $1-x_i$). Given that, as Boaz points out in his answer, there is a non-trivial depth reduction for arithmetic circuits, this might be something to look into.
{ "domain": "cstheory.stackexchange", "id": 326, "tags": "cc.complexity-theory, circuit-complexity, upper-bounds" }
ROS2 launch external (non ROS) program
Question: Hello, I am looking for a possibility to start a non-ROS program in ROS2 with a launchfile. For ROS1 there is aliencontrol. Is there something like this for ROS2? Or is it possible to start external programs from the Python launchfile? Thank you! Jan Originally posted by JanWeber on ROS Answers with karma: 1 on 2021-09-16 Post score: 0 Answer: Take a look at the ExecuteProcess Action (https://github.com/ros2/launch/blob/master/launch/launch/actions/execute_process.py#L89) In fact, the Node Action you're probably already using in the python launch script wraps ExecuteProcess. ExecuteProcess(cmd=['ping', '192.168.1.1'], output='screen'), Originally posted by ChuiV with karma: 1046 on 2021-09-22 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by flo on 2021-09-23: please consider ticking the answer to be correct if it has solved your problem. Comment by 130s on 2023-04-07: Concrete example that uses ExecuteProcessAction can be found in the official tutorial https://docs.ros.org/en/rolling/Tutorials/Intermediate/Launch/Using-Event-Handlers.html
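Putting the snippet above into a complete launch file might look like the following sketch (the external command 'my_external_tool' is a placeholder; running this fragment requires a ROS 2 environment, since it is consumed by 'ros2 launch' rather than executed directly):

```python
# Hypothetical launch file, e.g. my_pkg/launch/external_tool.launch.py
from launch import LaunchDescription
from launch.actions import ExecuteProcess


def generate_launch_description():
    return LaunchDescription([
        # Start a non-ROS program alongside your ROS 2 nodes.
        # 'my_external_tool' and its flag are placeholders.
        ExecuteProcess(
            cmd=['my_external_tool', '--verbose'],
            output='screen',
        ),
    ])
```

You would then start it with `ros2 launch my_pkg external_tool.launch.py`; Node actions for regular ROS 2 nodes can sit in the same LaunchDescription list.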
{ "domain": "robotics.stackexchange", "id": 36922, "tags": "ros2, roslaunch" }
Does d/d(spinor) anticommute with a spinor?
Question: Weyl spinors anticommute (see, e.g. Why isn't the anticommutativity of spinors sufficient as "spin-statistics-theorem"?). If we consider the derivative with respect to a Weyl spinor, $\frac{d}{d\chi}$, does this derivative anticommute with a given Weyl spinor $\chi$? To be explicit, consider the product $$ \frac{d}{d\chi} \chi f(x) \underbrace{=}_{\text{product rule}} \left(\frac{d}{d\chi} \chi \right ) f(x) \pm \chi \left(\frac{d}{d\chi} f(x) \right ) ,$$ where $f(x)$ is an arbitrary object such that $\frac{d}{d\chi} f(x) \neq 0$. Which sign here is correct? Does the product rule for spinors involve a minus sign instead of a plus sign? Answer: Actually there are two different definitions of $\frac{d}{d\psi}$ for a spinor $\psi$, depending on whether it acts from the right or from the left. It obeys the calculus of Grassmann algebra (anticommuting numbers). If you take the definition of $\frac{d}{d\psi}$ acting from the left, then if $f(x)$ is a commuting number (a c-number, in the usual terminology), it should be \begin{equation} \frac{d}{d\chi} \chi f(x) \underbrace{=}_{\text{product rule}} \left(\frac{d}{d\chi} \chi \right ) f(x) + \chi \left(\frac{d}{d\chi} f(x) \right ) \end{equation} whereas if $f(x)$ contains an odd number of spinors, it is a Grassmann number and the sign should be minus.
{ "domain": "physics.stackexchange", "id": 44692, "tags": "quantum-field-theory, fermions, differentiation, spinors, grassmann-numbers" }
Macrophage pathogen fixation
Question: Overly simplified, macrophages recognise pathogenic patterns and endocytose anything that matches them. That also works on bacteria, which are quite often very mobile. What if a bacterium was just randomly twitching around when a macrophage recognised it - would it be possible for it to "swim away" (as a reaction to the macrophage or just randomly)? Or do macrophages employ some sort of fixation mechanism to keep anything they recognise close until it is engulfed? Or is engulfment itself simply that quick? Answer: As an alternative explanation I refer to this video: Crawling Neutrophil Chasing a Bacterium. In addition to a high affinity between the target bacterium and the macrophage, sensing the bacterium will also trigger amoeboid movement to chase down the pathogen.
{ "domain": "biology.stackexchange", "id": 423, "tags": "cell-biology, immunology" }
AprilTag vs Aruco markers
Question: AprilTag and Aruco are both popular methods for vision-based fiducial marker pose estimation. What are the pros and cons of each fiducial marker pose estimation system? Answer: Aruco (as implemented in OpenCV) pros Easy to set up (with readily available aruco marker generator, opencv & ros implementation, etc.) fewer false detections (with default parameters) cons Newer versions of aruco are GPL-licensed, hence opencv is stuck on an old implementation of aruco from when it was still BSD. More susceptible to rotational ambiguity at medium to long ranges More tuning parameters More computationally intensive AprilTag (as implemented in apriltag_ros) pros BSD license Fewer tuning parameters Works fairly well even at long range Used by NASA More flexible marker design (e.g. markers not necessarily square) Less computationally intensive Native support for tag bundles, where multiple tags are combined to form 1 tag (using multiple tags at different orientations effectively eliminates the rotational ambiguity problem) cons less straightforward to set up (no opencv implementation AFAIK, only ros implementation, slightly more steps to obtain markers) more false detections (with default parameters)
{ "domain": "robotics.stackexchange", "id": 2050, "tags": "computer-vision, pose" }
A strange insect
Question: I saw this ant-like insect in Brazil, close to Rio de Janeiro. It was around 2 cm long. I tried to use google's search by image, but no luck. Does anyone know the name of the species? Answer: Insects like this are commonly called "velvet ants", but they are wasps, not ants. This insect is in the family Mutillidae, and it's called Hoplocrates cephalotes. (https://www.inaturalist.org/taxa/629302-Hoplocrates-cephalotes/browse_photos)
{ "domain": "biology.stackexchange", "id": 8483, "tags": "species-identification, entomology" }
Design with Context, Table and Functions
Question: Each class of my project has several inputs and outputs. Each input, like each output, depends on its own set of value-datatype pairs. I need to provide a mechanism to forward data streams from one instance to another, with the ability to split those streams and to combine several streams into one. There are a lot of such classes, and they can hardly be grouped by their input or output type. Splitting and combining (and maybe other operations, like filtering or simple math) will provide the interaction between these classes. I came up with a Table class, which can store and manage a two-dimensional array of objects and can also send rows to another Table, using maps on columns (that's how data streams split). It could be one-dimensional, but having a data buffer available is useful. Each class becomes a Context with a collection of such tables. The typical working algorithm for a Context is to: 1. initialize tables, 2. handle input table inserts, 3. unbox and process data, 4. insert results into output tables Everything was OK with that Table, but one day, while coding a new Context, I felt that data storage and transfer is not the only thing a Context must do. When an external infrastructure is hiding behind the Context, it needs to be connected/disconnected/opened/loaded etc. And that looks like I need to add a Procedure or even a Function to Context. This is my first huge project and I have no one to help. Questions I care about: this design looks like implementing an internal programming language, is that correct? Can you show examples? this design looks like implementing an in-memory database, is that correct? Can you show examples? is it normal not to use interfaces on data types if there are a lot of them? can you propose a more effective and elegant design for my problem? are there similar design patterns? Maybe some code will help to understand what I need. This is a very simplified version.
In the real project there are a lot of Context realisations, more sugar to access and create Tables, more logic on managing internal collections. public abstract class Context { public Table this[string name] { get { /* ... */ } } public IEnumerable<Table> Tables { get; } protected void AddTable(Table t) { /* ... */ } } public class Table { object[][] rows; // Two-dimensional for buffering rows public Table(string name /* + columns initialisation info */ ) { /* ... */ } public void PushRow(params object[] row) { /* ... */ } // insert new row and send it to subscribers public void AddSubscriber(Table dst, params int[] columnMap) { AddSubscriber(dst.PushRow, columnMap); } public void AddSubscriber(Action<object[]> receiver, params int[] columnMap) { /* ... */ } } public class Function { // Do I need this in Context, if, for example, Mathematician must be Opened/Closed? } class Mathematician : Context { public Mathematician() { // input table Table t = new Table("Numbers"); AddTable(t); t.AddSubscriber(Count); // output tables AddTable(new Table("Average")); AddTable(new Table("Variance")); AddTable(new Table("StDev")); } void Count(object[] row) { double newNumber = (double)row[0]; // exception here if data streams connection is not correct double avg = 0; double variance = 0; double stdev = 0; /* ... computations begin ... */ /* Tables can be accessed, their buffer can be used */ // Finally, publish results this["Average"].PushRow(avg); this["Variance"].PushRow(variance); this["StDev"].PushRow(stdev); } } Possible usage. Imagine that we have another context Secretary : Context which will receive data from Mathematician to write it somewhere and Device : Context to provide Mathematician with new numbers. Note, they're independent of each other.
static class Program { public static void Main(string[] args) { Context m = new Mathematician(); Context d = new Device(); Context s = new Secretary(); d["Sensor1"].AddSubscriber(m["Numbers"], /* map must correspond to Device internals */); m["StDev"].AddSubscriber(s["Console"], /* ... */); // Now each sensor1 signal in Device context will be sent to Mathematician, and the results of computations will be sent to Secretary, which can write it to console // We can start this system manually like that d["Sensor1"].PushRow( /* ... signal data ... */); } } Answer: I think the single biggest issue with your design is that it's not type-safe and relies on strings a lot. C# is a statically-typed language; you should take advantage of that and let the compiler check for possible errors. This means: Making Table generic, so that a row is an object of a specific type, instead of object[]. If you need to “map” data between different types, you can use lambdas. Tables in a specific context should be properties, instead of accessing them by a string. You should separate input tables and output tables into separate types (at least on the outside). Mathematician should be able to read from the Numbers table, but others should only write to it. It's a good idea to enforce this by making Numbers something like InputTable<T>. In general, your requirements are quite similar to TPL Dataflow; you should consider looking at it for inspiration, or even use it in your implementation (though it's .Net 4.5 only).
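The "make Table generic" suggestion can be sketched in Python for illustration (a C# Table<T> with Action<T> subscribers maps over directly; all names here are invented):

```python
from typing import Callable, Generic, List, TypeVar

T = TypeVar("T")
U = TypeVar("U")


class Table(Generic[T]):
    """A typed row stream: each row is a value of one specific type,
    instead of an untyped object[] array keyed by column positions."""

    def __init__(self) -> None:
        self.rows: List[T] = []            # buffered rows
        self._subscribers: List[Callable[[T], None]] = []

    def push(self, row: T) -> None:
        # store the row, then fan it out to every subscriber
        self.rows.append(row)
        for receiver in self._subscribers:
            receiver(row)

    def subscribe(self, receiver: Callable[[T], None]) -> None:
        self._subscribers.append(receiver)

    def subscribe_mapped(self, dst: "Table[U]", f: Callable[[T], U]) -> None:
        # a lambda replaces the positional column map, and the type
        # checker verifies that f converts T rows into U rows
        self.subscribe(lambda row: dst.push(f(row)))
```

A quick use: `sensor.subscribe_mapped(stdev_table, lambda reading: reading.value)` wires two differently typed tables together, with the conversion made explicit at the wiring site rather than hidden in an integer column map.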
{ "domain": "codereview.stackexchange", "id": 3552, "tags": "c#, object-oriented, design-patterns" }
Why is there no Green's function when no gauge is chosen (Linearized Gravity)
Question: I'm working on linearized gravity, and have come to the point where you need to choose a gauge to simplify the Einstein equation for the perturbative field. Reading a paper, I came across the claim "The Green's function does not exist without a gauge choice", but unfortunately nothing else was provided in support of the claim. I am wondering: is this because the equation without a gauge choice does not provide a unique solution, or is it something else? Building on this, does the existence of a unique solution guarantee the existence of a Green's function? Does it work the other way around as well? Thanks Answer: The field equation in linearized gravity is $$ \partial^2 h^{\mu \nu} + \partial^\mu \partial^\nu h - \partial_\lambda \partial^\nu h^{\mu \lambda} - \partial_\lambda \partial^\mu h^{\nu \lambda} - \eta^{\mu \nu} \partial^2 h + \eta^{\mu \nu} \partial_\lambda \partial_\sigma h^{\lambda \sigma} = - \kappa T^{\mu \nu}$$ You can write it in a form convenient for extracting the Green's function: $$L^{\mu \nu \alpha \beta} h_{\alpha \beta} = - \kappa T^{\mu \nu}$$ One would want to invert $L$, but the problem is that $$\mathbf{det} (L) = 0$$ which is a manifestation of gauge invariance: $h^{\mu \nu}$ is not uniquely determined by $T^{\mu \nu}$. Different gauges give different values of $h^{\mu \nu}$ for the same $T^{\mu \nu}$.
{ "domain": "physics.stackexchange", "id": 34873, "tags": "general-relativity, greens-functions, linearized-theory" }
How can I deal with data that is on the format "Image + single number"?
Question: Let's say I have a data set where every sample is an image of a landscape and a temperature associated with the landscape. How do I incorporate the temperature into my convolutional neural network for classifying whether the data is, e.g., a winter or summer landscape? Can I simply add the temperature as a feature after the feature-learning part of the network? I cannot find similar questions, but maybe this has a name that I am not aware of? Answer: After the convolutional part you will need to add a normal, dense layer. Concatenate the temperature to this layer and add some more layers if necessary, to allow more interactions between the temperature and the image. This part wouldn't necessarily need to be very deep, because the convolutional part can hopefully learn to represent the image features in a way that already combines nicely with the temperature.
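Shape-wise, the concatenation described above looks like this NumPy sketch (the feature sizes are made up; in a real network the concatenation would happen inside the model, e.g. with a framework's concatenate layer, and the result would feed into further dense layers):

```python
import numpy as np

batch = 8
# stand-in for the output of the conv + flatten part of the network
conv_features = np.random.randn(batch, 128)
# one scalar temperature per image, shaped as a column
temperature = np.random.randn(batch, 1)

# join along the feature axis: each sample now carries 128 image
# features plus 1 temperature feature
combined = np.concatenate([conv_features, temperature], axis=1)
print(combined.shape)  # (8, 129)
```

The dense layers after this point see the temperature as just one more input feature, so they can learn interactions between it and the learned image representation.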
{ "domain": "datascience.stackexchange", "id": 1809, "tags": "machine-learning, feature-selection, convolutional-neural-network" }
Experimental determination of pH
Question: I am trying to determine the experimental pKa for two weak acids that were titrated against 0.20 M NaOH. I have read elsewhere that you can take the point where the graph becomes steep, divide the volume of base added at that point by two, and read off the pH at that half-volume as the pKa - but how do I choose that point, since it may not be obvious where the graph becomes steep? From the graph below I can see that the pKa for acetic acid should be close to the calculated theoretical value of 4.76 and the Tris-HCl pKa should be approximately 8.3, but there must be a better way than just guessing from a graph. My textbook doesn't explain how to experimentally find the pKa, just that it's the point where $[A^-]/[HA] = 1$. I am hoping someone can give me an equation to work with or guide me in the right direction. Thank you, Answer: So I think what you heard is about the right idea. The flat region is your buffering region, and isn't super helpful to deduce the pKa. Adding base consumes the weak acid, and because it is weak, you know you are mostly consuming the [HA] form, and equilibrating back to around the pKa. This is why your pH doesn't change much around the pKa. As an example, if you had 10 mmol of HA to start, then at the pKa point you would have 5 mmol of HA and 5 mmol of A-. Now, if you keep adding base, at some point you will essentially consume the remaining 5 mmol of HA. Then, there will be a negligible amount left and it can't buffer anymore - i.e. your pH will change rapidly because you are adding strong base. Here you will have ~0 mmol of HA, and ~10 mmol of A-. Note that you have twice the amount of A- now. The pKa will have been at the point where you had half of this. Experimentally, I know two simple ways. Using a pH meter and no indicator, you have to measure the sharp region more carefully (i.e., drop by drop), because you want to be able to find the exact point where the pH changes the fastest. By approximating the derivative, i.e.
$$ \frac{d}{d(V_{base})} pH(x_i) \approx \frac{pH(x_{i+1})-pH(x_{i-1})}{x_{i+1}-x_{i-1}}$$ you can find the point of greatest change, or the equivalence point. Obviously, better data in that region will give you more clarity. I used to do one titration quickly so that I would know the approximate region where I would need to be and then slowly titrate the rest. The other method, i.e. using an indicator solution and then just very carefully reaching the equivalence point essentially gives you the same information, and I know people who can very quickly do this. But in general, this would not be as accurate.
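As a sketch of that derivative procedure, with entirely made-up titration data (the volumes and pH values below are illustrative, not real measurements), the central-difference formula locates the steepest point, and half of that volume gives the read-off point for the pKa; a careful analysis would take finer readings near the jump and interpolate:

```python
# volumes of base added (mL) and measured pH -- illustrative numbers only
volumes = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
ph      = [3.0, 4.2, 4.6, 4.9, 6.0, 10.5, 11.2]

# central differences d(pH)/dV at the interior data points
slopes = [
    (ph[i + 1] - ph[i - 1]) / (volumes[i + 1] - volumes[i - 1])
    for i in range(1, len(volumes) - 1)
]

# equivalence point: the volume where the slope is largest
i_eq = max(range(len(slopes)), key=lambda i: slopes[i]) + 1
v_eq = volumes[i_eq]
print(v_eq)  # 8.0 -- steepest rise in this made-up data

# half-equivalence volume: the pH there approximates the pKa
v_half = v_eq / 2
print(v_half)  # 4.0
```

With real data you would then look up (or interpolate) the measured pH at the half-equivalence volume, and that pH is your experimental pKa.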
{ "domain": "chemistry.stackexchange", "id": 10762, "tags": "ph, titration" }
Most popular element with count in Array with Java
Question: Here is the input array of int values Example: { 5, 4, 6, 5, 1, 0, 7, 7, 3, 5 } Find the most repeated element with its count, example output: 5 - repeated 3 times Here is my code after some improvisations to improve the complexity from O(n) class RepeatedEntry implements Comparable<RepeatedEntry> { private int number; private int count; public void setNumber(int number) { this.number = number; } public void setCount(int count) { this.count = count; } public int getNumber() { return number; } public int getCount() { return count; } @Override public int compareTo(RepeatedEntry obj) { return obj.count - this.count; } } public class MaxCountOfDuplicate { public static RepeatedEntry findMostPopularItem(List<Integer> inputList) { Collections.sort(inputList); Set<RepeatedEntry> resultSet = new TreeSet<>(); for (int i = 0; i < inputList.size(); i++) { int thisEle = inputList.get(i); int lastIndexOfThisEle = inputList.lastIndexOf(thisEle); int repeatedCount = (lastIndexOfThisEle - i) + 1; if (repeatedCount != 1) { RepeatedEntry repeatedEntry = new RepeatedEntry(); repeatedEntry.setNumber(thisEle); repeatedEntry.setCount(repeatedCount); resultSet.add(repeatedEntry); } } return resultSet.iterator().next(); } /* Given an input array find the duplicate element with max count. */ public static void main(String[] args) { Integer[] x = { 5, 4, 6, 5, 1, 0, 7, 7, 3, 5 }; // input array RepeatedEntry finalResult = findMostPopularItem(Arrays.asList(x)); System.out.println("Most repeated element and its count is -> \n"); System.out.println(finalResult.getNumber() + " - repeated " + finalResult.getCount() + " times"); } } Here are my relevant questions. I am sure the time complexity is not O(n) - is it O(n^2)? How do I calculate the time complexity, and what is it in this case? And coming to space complexity: is creating RepeatedEntry objects in a loop a performance issue?
I have assumed the input array to be an Integer wrapper; if it were strictly int primitives, the conversions and complexities would increase, I guess. I thought of storing the result in a HashMap but avoided it, as it needs further processing to fetch the output; any answer using a HashMap to improve the complexity would help me learn too, as would the best possible solutions. Answer: (Useful: What is the difference between O, Ω, and Θ?) How do I calculate time complexity and what is it in this case? Algorithmic time complexity, as we commonly understand it, is a measure of how an algorithm scales in terms of its input. We're usually interested in asymptotic complexity, meaning we want to see how it performs in the generalized, large-scale cases. It is the answer to the question: "What if we make it bigger?" How you calculate it is fairly straightforward: you figure out the number of steps your algorithm needs compared to your input size, and you crib out the scalars: Θ(4n² + 25n + 842) = Θ(n²) ; Θ(28n * 6 log (18n)) = Θ(n log n) ... with the understanding that, as n approaches very large numbers, all scalars become insignificant. (This isn't entirely accurate, but it's good enough for envelope-and-fingers calculations.)
Here is a line-per-line of your code: // annotated with time complexity, with n = inputList.size() public static RepeatedEntry findMostPopularItem(List<Integer> inputList) { L1: Collections.sort(inputList); // Θ(n log n) hopefully L2: Set<RepeatedEntry> resultSet = new TreeSet<>(); // Θ(1) L3: for (int i = 0; i < inputList.size(); i++) { // Θ(n) L4: int thisEle = inputList.get(i); // Θ(1) L5: int lastIndexOfThisEle = inputList.lastIndexOf(thisEle); // Θ(n) L6: int repeatedCount = (lastIndexOfThisEle - i) + 1; // Θ(1) L7: if (repeatedCount != 1) { // Θ(1) L8: RepeatedEntry repeatedEntry = new RepeatedEntry(); // Θ(1) L9: repeatedEntry.setNumber(thisEle); // Θ(1) L10: repeatedEntry.setCount(repeatedCount); // Θ(1) L11: resultSet.add(repeatedEntry); // Θ(log n) L12: } L13:} L14:return resultSet.iterator().next(); // Θ(log n) } L3 * L5 gives you an algorithmic time complexity approached by Θ(n²) (consider the case where no numbers are repeated). If the input is already sorted, you can find the top in a Θ(n) fashion: E maxItem = null; int maxLength = 0; int start, end; for ( start = 0, end = 1; end < inputList.size(); end++ ) { // Θ(n) if ( !inputList.get(end).equals(inputList.get(start)) ) { // Θ(1) if ( end - start > maxLength ) { // Θ(1) maxItem = inputList.get(start); // Θ(1) maxLength = end - start; // Θ(1) } start = end; // Θ(1) } } // case: maxItem is top/last element if ( end - start > maxLength ) { // Θ(1) maxItem = inputList.get(start); // Θ(1) maxLength = end - start; // Θ(1) } return new RepeatedEntry(maxItem, maxLength); // Θ(1) ... but sorting itself is a Θ(n log n) operation in the ideal case. I thought of storing the result in HashMap but avoided as it needs further processing to fetch the output, any answer using HashMap improving complexity would help me learn too and best possible solutions as well. 
Map<E, Integer> frequency = new HashMap<>(); E maxItem = null; int maxLength = 0; for ( E item : inputList ) { // Θ(n) int count = frequency.merge(item, 1, Integer::sum); // Θ(1) [!] if ( count > maxLength ) { // Θ(1) maxItem = item; // Θ(1) maxLength = count; // Θ(1) } } return new RepeatedEntry(maxItem, maxLength); // Θ(1) The exclamation mark at Map.merge is because, while HashMap provides access in asymptotically constant time, the actual constant time may be significant, and it depends on practical factors like load factor, bucket sizes, and hash spread.
{ "domain": "codereview.stackexchange", "id": 28020, "tags": "java, algorithm, complexity, cyclomatic-complexity" }
What is the difference between entangled spins and classical coins?
Question: I'm trying to understand why entanglement has no classical analog. I imagine the usual well-known example: Charlie prepares two spins in an entangled state. Alice and Bob measure them using their own detectors. Why can't the results be explained as for a classical pair of coins (head/tail)? My understanding so far is that if Alice and Bob measured only the z components, the two situations (the quantum one and the classical analog) would really be indistinguishable: when Alice measures -1, Bob gets +1 and vice versa. What is the difference if they measure the x components of their spins instead? They will still find they have opposite values, won't they? So why couldn't this be explained again with the classical coin analog? What rules out the hypothesis that the results may have been produced in the very first moment of the entanglement creation? Answer: Actually both quantum and classical correlations are zero when Alice's and Bob's detectors measure the z and x components, respectively. The above is clearly shown in the beginning of the naive view of an experimentalist. More surprising results arise at smaller angles. I've found the easiest explanation to understand from Wikipedia: Start with one setting exactly opposite to the other. All the pairs of particles give the same outcome (each pair is either both spin up or both spin down). Now shift Alice's setting by one degree relative to Bob's. They are now one degree off being exactly opposite to one another. A small fraction of the pairs, say f, now give different outcomes. If instead we had left Alice's setting unchanged but shifted Bob's by one degree (in the opposite direction), then again a fraction f of the pairs of particles turns out to give different outcomes. Finally consider what happens when both shifts are implemented at the same time: the two settings are now exactly two degrees away from being opposite to one another.
By the mismatch argument, the chance of a mismatch at two degrees can't be more than twice the chance of a mismatch at one degree: it cannot be more than 2f. So the measurements in my original question (along the x and z axes) do not result in non-classical predictions. However it has been noted that my "description applies to the singlet state, not to a general entangled state, for which the details of the correlations can be different". Indeed, taking into account that the singlet state-vector |sing⟩ is a maximally entangled superposition of the two vectors |ud⟩ and |du⟩, defined exactly as $|sing⟩ = \frac{1}{\sqrt 2} (|{↑↓}⟩-|{↓↑}⟩)$, the x components anticorrelate. In that case the paradox would consist in the fact that if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it too is measured. Anyway, measurements along single axes alone can't be used to rule out a possible classical explanation (local realism) - in other words, initial conditions (hidden-variable distributions) exist such that measurements along the same axis are anticorrelated, while measurements along different axes have zero correlation - so, once more, the Bell inequality is meaningful at smaller angles between Alice's and Bob's detectors (as already discussed in the first part of this answer).
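The quoted small-angle argument can be checked numerically. Assuming the standard singlet prediction - with the axes referenced so that a zero relative angle means perfect agreement, the mismatch fraction at relative angle θ is sin²(θ/2) - the quantum value at two degrees exceeds the classical bound of twice the one-degree value:

```python
import math


def quantum_mismatch(theta_deg):
    # singlet-state prediction (axes referenced so theta = 0
    # means the two detectors always agree)
    theta = math.radians(theta_deg)
    return math.sin(theta / 2) ** 2


m1 = quantum_mismatch(1.0)  # mismatch fraction f at one degree
m2 = quantum_mismatch(2.0)  # mismatch fraction at two degrees

# local hidden variables require m2 <= 2 * m1; quantum mechanics
# predicts roughly m2 = 4 * m1 at small angles, since sin^2 grows
# quadratically -- a violation of the bound
print(m2 > 2 * m1)  # True
```

At one and two degrees the fractions are tiny, but the ratio (about 4 rather than at most 2) is what the Bell-type argument exploits.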
{ "domain": "physics.stackexchange", "id": 41795, "tags": "quantum-mechanics, quantum-spin, quantum-entanglement" }
How can a haploid plant be bisexual?
Question: According to Wikipedia: Meiosis is a specialized type of cell division that reduces the chromosome number by half. This process occurs in all sexually reproducing single-celled and multicellular eukaryotes, including animals, plants, and fungi. Because the number of chromosomes is halved during meiosis, gametes can fuse (i.e. fertilization) to form a diploid zygote that contains two copies of each chromosome, one from each parent. So it means that a haploid plant body will give rise to either a male or a female sex organ that produces gametes to form a zygote, thus completing the alternation of generations. We also know that Chara, a monoecious plant, can be dioecious, which means that the haploid plant body produces both antheridia and archegonia, which produce gametes by mitosis. Coming to my question: "HOW CAN A PRODUCT OF MEIOSIS BEFORE FERTILISATION BE BISEXUAL?" Does this imply that reduction division is not responsible for sex differentiation? If so, then what is responsible for sex differentiation in plants like Chara, where a haploid produces both gametes? An explanation will be appreciated. Answer: Meiosis does not determine sexual form. Eukaryotes use meiosis and fertilization to recombine genes to form new combinations. Meiosis does produce haploid cells from diploid cells, but that has nothing much to do with the sexual forms involved. In the case of the algal genus Chara, the organism's life cycle is entirely haploid except for the single-celled zygote formed during fertilization; this is called a haplontic life cycle. Being haploid does not require an organism to be of one sex (or any sex), so there is no difficulty in being monoecious and having both male and female structures on a single organism.
The reason for having two kinds of gametes, sperm and ova, is simply practical: the specialized job of the sperm is to move to another gamete, and the job of the ovum is to provide a maximum amount of nutrition for the future zygote (which means larger size and hence limited mobility).
{ "domain": "biology.stackexchange", "id": 5401, "tags": "botany, meiosis, sexual-reproduction, genetics" }
Recover an array from its insertion indices
Question: An array is built starting with an empty array, and then a sequence of insertions: insert $a_1$ at index $z_1=1$ insert $a_2$ at index $z_2$ insert $a_3$ at index $z_3$ ... and so on. When we insert element $a_i$ at index $z_i$, the result is that $a_i$ is now at index $z_i$, whereas everything before index $z_i$ is unchanged and everything after has its index increased by 1. (With one-based indexing) E.g., the sequence $(3,1), (5,2), (1,2)$ gives $[3]$ then $[3,5]$ then $[3,1,5]$. All of the instructions will make sense, i.e., $1 \leq z_i\leq i$. My question is about how to calculate the final array. The naive approach would be to start with an empty array, and literally obey the instructions; but in the language I program in, insertions have a worst-case time complexity of $O(\# $elements shifted$)$; if, for example, all the insertion indices were $1$, this would result in $O(N^2)$ time. Supposing we have access to all the instructions simultaneously, how can we calculate the final array in faster than $O(N^2)$ time? I would be happy with a name if this problem is well studied. I did make the following (family of) observations: the element which ends up at index 1 was the last one which arrived with index 1. the element which ends up at index 2 arrived with index 1 or 2. If it arrived with index 2, then none came after it with either index 1 or 2. If it arrived with index 1, then exactly one came after it with index 1, after which came none with index 2. the element which ends up at index 3 arrived with index 1, 2, or 3. If it arrived with index 3, then none came after it with index 1, 2 or 3. If it arrived with index 2, then exactly one element with index 1 or 2 followed it, then none after that with index 1, 2, or 3. If it arrived with index 1, then it was followed by one with index 1, then one with index 1 or 2, and then none after that with index 1, 2, or 3. ... and so on.
However I can't think of the algorithms or data structures that would make this information useful. Update: There are two $O(n\log n)$ solutions below: the first one, which I accepted, that uses an AVL tree, and one which I learned of afterward that uses a segment tree, and is somewhat simpler. Answer: Preliminaries You can augment an AVL tree to support all the usual operations plus the following: Shift$(a,b)$ increases all keys $k \ge a$ by $b \ge 0$ in $O(\log n)$ time (where $n$ is the number of elements in the tree). To do so, add a value $x_v$ to each node $v$ in the tree. This value represents an offset to be added to all keys stored in the subtree rooted at $v$. The Search, Insert, and Shift operations, along with the required rotations, can be implemented as follows (I won't be using the Delete operation, but it can also be implemented). Search The search operation works as usual except that you now keep track of the cumulative offset in the path from the current node to the root. Insert To insert a node with key $k$, use the search operation to find the position where a node with key $k$ would need to be placed and the cumulative offset $\overline{x}$ up to that point. Add a leaf in that position and store its key as $k - \overline{x}$. Perform the necessary rotations to rebalance the tree (see the sequel). Rotations To perform a right rotation on $u$, let $v$ be its left child. "Push down" the offset of $u$ as follows: increment the stored key of $u$ by $x_u$, add $x_u$ to the offsets of the children of $u$, and set $x_u$ to $0$. Similarly, "push down" the offset of $v$. Perform the rotation as usual. Left rotations are symmetric. Shift$(a,b)$. Find the node $u$ with key $a$ or, if no such node exists, find its successor (if the successor doesn't exist either, we are done). Increase the stored key of $u$ by $b$. If $u$ has a right child $v$ then increase $x_v$ by $b$ as well. Walk from $u$ to the root of the tree.
Every time you walk to a vertex $w$ from its left child, increase the key of $w$ by $b$ and the offset $x_z$ of the right child $z$ of $w$ by $b$ (if $z$ exists). Solving your problem Keep an augmented AVL tree $T$ and consider the operations one at a time. At the end of the generic $i$-th step, the tree will contain $i$ nodes that collectively store the elements of the first $i$ operations. Each node $u$ is associated with one element of the array. The key of $u$ is exactly the position of $u$'s element in the array, as of the $i$-th operation, while the element's value is stored as satellite data in $u$. When the operation $(a_i, z_i)$ is to be processed do a Shift$(z_i, 1)$ operation on $T$. Then, insert a new node with key $z_i$ and satellite data $a_i$ in $T$. At the end of the process you can traverse the tree and recover the final position (the node's key) of each array element (the node's satellite data). The total time required is $O(n \log n)$.
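Neither answer spells out the segment-tree alternative mentioned in the update. Here is a sketch of one common way to get the same $O(n \log n)$ bound offline (my own reconstruction, not necessarily the solution the update refers to): process the operations in reverse, and give the element of operation $i$ the $z_i$-th slot of the final array that is still unclaimed, found with a Fenwick (binary indexed) tree:

```python
def final_array(ops):
    """ops: list of (value, index) pairs with 1-based indices, 1 <= z_i <= i."""
    n = len(ops)
    tree = [0] * (n + 1)  # Fenwick tree over slots 1..n; each free slot counts 1

    def add(i, v):
        while i <= n:
            tree[i] += v
            i += i & -i

    def kth_free(k):
        # Smallest position p such that slots 1..p contain exactly k free slots.
        pos, step = 0, 1 << n.bit_length()
        while step:
            nxt = pos + step
            if nxt <= n and tree[nxt] < k:
                pos, k = nxt, k - tree[nxt]
            step >>= 1
        return pos + 1

    for i in range(1, n + 1):
        add(i, 1)  # initially every slot is free

    result = [None] * n
    # Later insertions shift earlier ones, so process in reverse:
    # operation i's element ends up in the z_i-th slot still unclaimed.
    for value, z in reversed(ops):
        p = kth_free(z)
        result[p - 1] = value
        add(p, -1)  # slot p is now taken
    return result
```

For the question's example, final_array([(3, 1), (5, 2), (1, 2)]) returns [3, 1, 5], matching the naive simulation.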
{ "domain": "cs.stackexchange", "id": 15679, "tags": "algorithms, arrays" }
Create circuit from qiskit json format
Question: When you run a job on an IBM device you can download a json file which contains among other things the description of the circuit. Is there a simple way in qiskit to create a QuantumCircuit object from this file without writing your own parser? Below is an example for the content of this file. {"config": {"n_qubits": 5, "memory_slots": 3}, "header": {"qubit_labels": [["q", 0], ["q", 1], ["q", 2], ["q", 3], ["q", 4]], "n_qubits": 5, "qreg_sizes": [["q", 5]], "clbit_labels": [["meas", 0], ["meas", 1], ["meas", 2]], "memory_slots": 3, "creg_sizes": [["meas", 3]], "name": "circuit-103", "global_phase": 2.356194490192345, "metadata": {}}, "instructions": [{"name": "rz", "params": [1.5707963267948966], "qubits": [0]}, {"name": "sx", "qubits": [0]}, {"name": "rz", "params": [1.5707963267948966], "qubits": [0]}, {"name": "rz", "params": [1.5707963267948966],... Answer: The format that the IBM Quantum API is currently using is Qobj (or Quantum object) which is documented here: https://arxiv.org/abs/1809.03452 and the current schema definitions for it are located here: https://github.com/Qiskit/ibm-quantum-schemas That's just the background information on the format though; to answer the details of your question, Qiskit supports loading this and creating a circuit from a Qobj payload via its Qobj class and the disassemble() function. Assuming you have a local file job.json the basic workflow would be something like:

import json
from qiskit.assembler.disassemble import disassemble
from qiskit.qobj import QasmQobj

with open('job.json') as fd:
    qobj_dict = json.load(fd)
qobj = QasmQobj.from_dict(qobj_dict)
circuits, run_config, headers = disassemble(qobj)

Where circuits is the list of QuantumCircuit objects equivalent to what's in the qobj (a qobj can contain more than one circuit). run_config and headers are dictionaries containing the configuration and metadata contained in the qobj.
{ "domain": "quantumcomputing.stackexchange", "id": 4624, "tags": "qiskit, quantum-circuit, programming" }
Why is the change of heat non zero in a isothermal process?
Question: I was reading the definition of heat capacity and it says that $$\Delta Q = C \ \Delta T$$ (Introduction to Statistical Physics - K. Huang) So my question is: if we consider an isothermal process, because temperature remains the same, $\Delta Q$ would be zero. And $\Delta U= \Delta W$ by the first law, but that's wrong too because of what I have read on the internet. So, what am I doing wrong? Why do you have in an isothermal process $W=Q$ and not $U=W$? Answer: Suppose you take an ideal gas as your system. Then according to the equipartition theorem its total internal energy would be $\frac{1}{2}k_B T$ per degree of freedom per molecule. So if $f$ is the number of degrees of freedom then the total internal energy of your $N$-molecule ideal gas system would be $$\frac{f}{2}Nk_BT$$ or $$\frac{f}{2}nRT$$ So as you can see the total internal energy only depends on the temperature. And moreover if $\Delta T$ is the change in temperature in a process then the corresponding change in total internal energy would be $$\Delta U = \frac{f}{2}nR\Delta T$$ As in an isothermal process temperature remains constant, $\Delta T= 0$. So $$\Delta U =0$$ And the first law of thermodynamics becomes $$W= -Q$$ (PS: Also, I want to point out a flaw in your question: even when $T$ remains the same, $\Delta Q$ need not be zero. Think of what happens in a phase-change situation, i.e. latent heat.)
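To make this concrete, here is the standard worked case (not from the quoted text) of a reversible isothermal expansion of an ideal gas, with the convention that $W$ is the work done by the gas, so that $\Delta U = Q - W$:

$$\Delta U = 0 \;\Longrightarrow\; Q = W = \int_{V_i}^{V_f} p\,dV = \int_{V_i}^{V_f} \frac{nRT}{V}\,dV = nRT\ln\frac{V_f}{V_i} \neq 0$$

With the opposite convention ($W$ = work done on the gas, $\Delta U = Q + W$) the same result reads $W = -Q$, which is the form used in the answer above: heat flows in and an equal amount of work flows out, keeping $U$ fixed.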
{ "domain": "physics.stackexchange", "id": 51048, "tags": "thermodynamics" }
Testing General Relativity
Question: Ever since Einstein published his GR theory in 1916, there have been numerous experimental tests to confirm its correctness--and has passed with flying colors. NASA and Stanford have just announced that their Gravity Probe B activity has confirmed GR's predicted geodetic and frame-dragging effects. Are there any other facets of GR that need experimental verification? Answer: Sure there are. The theory has been tested within only a teeny tiny part of the range of its predictions. For example it predicts gravitational redshift in the range of 0% (no redshift) to 100% (black hole), but experiments to date have shown a maximum gravitational redshift less than 0.01%. It matters less how many tests of GR are done than how extensively those tests cover the range of what GR predicts. While we have little experimental data to definitively show that GR is the correct theory of gravity, we do know that it leads to major problems for physics, like its breakdown at gravitational singularities, its incompatibility with quantum mechanics, and the black hole information loss paradox. A competing theory of gravity that is confirmed by all experimental tests of GR to date need not have any of those problems, indicating that a lot more testing of GR is warranted.
{ "domain": "physics.stackexchange", "id": 942, "tags": "general-relativity, experimental-physics" }
How to load a controller from control_manager?
Question: Dear all, I'm new to ros_control and I'm investigating the possibility to use it to build a hardware interface and a controller for a 6-DOF human exoskeleton. To begin, I wrote a simple hardware interface as the one in: https://github.com/ros-controls/ros_control/wiki/hardware_interface I registered a "joint state interface" and a "joint effort interface" for all my joints and created the hardware-interface class. In my main program I created the controller_manager passing the created object as explained in the same tutorial. I could compile the code and run the control loop so created; when it is running I can see the /controller_manager services through "rosservice list". My question is: if I want to use some "effort_controllers/JointPositionController" for my joints, how should I load them? As a beginning, I built a yaml configuration file and a launch file to load a joint controller as follows:

yaml file:

robot: # what should robot be for? Package name? Namespace?
  rHip_position_controller:
    type: effort_controllers/JointPositionController
    joint: rHip
    pid: {p: 100.0, i: 0.01, d: 10.0} # random example values

and launcher:

<rosparam file="$(find biomot_control)/config/biomot_config.yaml" command="load" />
<node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" args="rHip_position_controller" />

But when I launch it, it blocks at the step: Loading controller: rHip_position_controller. Am I missing any step or doing something wrong? Also running "rosrun controller_manager controller_manager list-types" blocks without prompting any error or answer. Thank you very much for your attention. Originally posted by MarcoB on ROS Answers with karma: 28 on 2015-03-11 Post score: 0 Answer: But when I launch it, it blocks at the step: [..] [..] Also running "rosrun controller_manager controller_manager list-types" blocks [..] This reminds me of Unresponding control node on the ros-sig-robot-control list.
Without access to your actual code: make sure you service ros_control callbacks somewhere, as if those are not processed, it cannot correctly handle incoming service requests, which typically results in the hangs you are describing. Originally posted by gvdhoorn with karma: 86574 on 2015-03-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by MarcoB on 2015-03-11: Thank you, that seems to be my problem. I fixed with a ros::spinOnce() in my control loop, I'm not sure that's the correct way to do it. Now I'm facing another problem, when loading:"Could not find parameter robot_description on param serv." I made a urdf file but I can't figure out how to load it. Comment by MarcoB on 2015-03-11: EDIT: I had to start a dedicate thread running ros::spin as explained in the linked discussion. with position interfaces now controllers can load fine, to run effort interfaces I still need to understand how to get a proper xml description file from the urdf file. Comment by gvdhoorn on 2015-03-12: You might be interested in the ros_control_boilerplate package by Dave Coleman. Comment by MarcoB on 2015-03-12: Very interesting, thank you very much!
{ "domain": "robotics.stackexchange", "id": 21111, "tags": "ros, hardware-interface, controller-manager" }
Generate collision resistant identifiers with two-way hashing
Question: Objective: We want to generate a unique and reproducible identifier for a given slice of bytes and avoid collisions. High-level idea: Compute $$fK := hash(k_1) \quad\text{and}\quad sK := hash(k_1^{-1})$$ where $k_i^{-1}$ is the reversed bitstring of $k_i$. We then proceed to set our identifier to be: $$<value\_of\_fK>\_<value\_of\_sK>$$ Assumption: For two keys $k_1$ and $k_2$: if $hash(k_1) = hash(k_2)$ and $hash(k_1^{-1}) = hash(k_2^{-1})$, then $k_1 = k_2$, where our hash function is FNV1a. Hypothesis: For any given input bitstring, we will have a unique identifier and no collisions. I would like to have your input on this. Answer: Your assumption is probably false. It can only be true if the size of the hash is at least half the size of the key. If you want to generate an absolutely unique identifier, why not use the key itself? Another option, if you want an alphanumeric identifier, is to use Base64 (which also uses a few punctuation marks). If you just want a practically unique identifier, you can just use a hash function – any industry-standard one would do. That's exactly their goal. Since it doesn't seem you care about cryptographic security, you can just use MD5 or SHA1.
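The answer's two options can be sketched in a few lines of Python (function names are mine, purely illustrative):

```python
import base64
import hashlib

def exact_id(key: bytes) -> str:
    # Collision-free by construction: the identifier is a reversible
    # (Base64) encoding of the key itself.
    return base64.urlsafe_b64encode(key).decode("ascii")

def practical_id(key: bytes) -> str:
    # Practically unique: collisions exist in principle (the output is
    # shorter than long keys), but finding one for SHA-256 is infeasible.
    return hashlib.sha256(key).hexdigest()
```

The first identifier is injective because Base64 decoding recovers the key; the second is only "practically" unique, which is exactly the trade-off the answer describes.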
{ "domain": "cs.stackexchange", "id": 7661, "tags": "strings, correctness-proof, hash-tables, hashing" }
ROSLaunch and SRC
Question: Hi there, I'm new to ROS. I'm trying to understand how ROS works. I already did the tutorials but I really can't understand how launch files work. I mean, sometimes I don't know how the launch files are connected with the logical part included in the src/ folder. Thank you all! Originally posted by julimen5 on ROS Answers with karma: 17 on 2017-09-13 Post score: 0 Original comments Comment by jayess on 2017-09-13: Welcome! I think that in order to get a better response, you should ask at least two separate questions. One about the launch files/source code connection and the other about RPLIDAR and SLAM as these are two different subjects. You can edit this question and then ask a new one as well. Answer: First, a very brief explanation of launch files and roslaunch. According to the wiki entry for roslaunch: roslaunch is a tool for easily launching multiple ROS nodes locally and remotely via SSH... Launch files provide a way, among other things, to launch a node (or nodes) with a single command: roslaunch <package-name> <launch-file.launch> Let's say that your package is called my_package with a node called my_node saved in the src folder. You can run (i.e., launch) your node by creating a launch file called my_node.launch like so:

<launch>
  <node pkg="my_package" type="my_node" name="my_node"/>
</launch>

and run it using roslaunch my_package my_node.launch The code inside of the launch file is just XML. The launch tag tells ROS that this is a launch file (they don't have to have a .launch file extension) and the node tag tells roslaunch to run a node. Here is what the node tag's attributes mean: pkg: the package your node is in type: the executable's name (for C++ this is the compiled executable; for Python it is the filename, so you would include the file extension) name: this is what you want ROS Master to call your node. This is useful for giving nodes meaningful names and using multiple instances of the same node in the same namespace.
You can also configure nodes via parameters and include launch files within launch files (within launch files...) to create a very complex system. In fact, some ROS packages can be composed of nothing but launch files from other packages! Launch files are one of the many powerful features of ROS because they make it very easy to configure your system and they promote extensibility and reuse. There are many more features of launch files and I recommend that you read through the wiki to get a better understanding of them. This wiki entry for roslaunch does a good job of explaining the roslaunch XML format and the book A Gentle Introduction to ROS has a chapter on how to use launch files as well. Edit: TL;DR: catkin is used to compile the code into executables (or, for Python, you chmod +x it) and then you tell roslaunch what executable to execute with the node tag and its attributes. Longer explanation: For the "logical" part: roslaunch will run executables from a package. It knows what executable you want to run through the node tag's type attribute. You put the executable's name as the value for type and roslaunch knows that you want to run an executable with the same name (from the package that you specified with the pkg attribute). This can be from C++, Python, or whatever supported language was used to write the node. The code in the src folder (or wherever it is located) is not going to be executed by roslaunch; only executables can be run. Originally posted by jayess with karma: 6155 on 2017-09-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by julimen5 on 2017-09-14: Well, thanks for the whole explanation. But the thing I don't understand is the "logical" part I mentioned. Most of the packages included in ROS have .cpp/.h/.py files, there you have a lot of code and I thought the launch files are executing those .cpp/.h/.py files. Comment by julimen5 on 2017-09-14: Now I get it. I didn't know about the "type".
I really appreciate your help!! Comment by jayess on 2017-09-14: No problem, glad to help.
{ "domain": "robotics.stackexchange", "id": 28845, "tags": "slam, navigation, lidar" }
What is induction-induction?
Question: What is induction-induction? The resources I found are: the HoTT book, at the end of chapter 5.7. nLab's article a paper called Inductive-inductive definitions this blog post also mentions inductive-inductive types The first two references are too brief for me, and the latter two are too technical. Can anyone explain it in layman's term? It would be better if there's Agda code. Answer: Supplemental 2016-10-03: I mixed up induction-induction and induction-recursion (not the first time I did that!). My apologies for the mess. I updated the answer to cover both. I find the explanations in the Forsberg & Setzer's paper A finite axiomatisation of inductive-inductive definitions illuminating. Induction-recursion An inductive-recursive definition is one in which we define a type $A$ and a type family $B : A \to \mathsf{Type}$ simultaneously in a special way: $A$ is defined as an inductive type. $B$ is defined by recursion on $A$. Crucially, the definition of $A$ may use $B$. Without the third requirement, we could first define $A$ and then separately $B$. Here is a baby example. Define $A$ inductively to have the following constructors: $a : A$ $\ell : \big(\sum_{x : A} B(x)\big) \to A$ The type family $B$ is defined by $B(a) = \mathtt{bool}$ $B(\ell(x, f)) = \mathtt{nat}$. So, what is in $A$? First of all we have an element $$a : A.$$ Because of that, there is a type $B(a)$ which is defined to be $\mathtt{bool}$. Therefore, we can form two new elements $$\ell(a, \mathtt{false})$$ and $$\ell(a, \mathtt{true})$$ in $A$. Now we have $B(\ell(a, \mathtt{false})) = B(\ell(a, \mathtt{true})) = \mathtt{nat}$, so we can also form for every $n : \mathtt{nat}$ the elements $$\ell(\ell(a, \mathtt{false}), n) : A$$ and $$\ell(\ell(a, \mathtt{true}), n) : A$$ We can keep going like this. 
The next stage will be that since $$B(\ell(\ell(a, \mathtt{true}), n)) = \mathtt{nat}$$ there is for every $m : \mathtt{nat}$ the element $$\ell(\ell(\ell(a, \mathtt{true}), n), m) : A$$ and the element $$\ell(\ell(\ell(a, \mathtt{false}), n), m) : A$$ We can keep going. A little bit of thinking reveals that $A$ is more or less two copies of lists of natural numbers, sharing a common empty list. I will leave it as an exercise to figure out why. Induction-induction An inductive-inductive definition also defines a type $A$ and simultaneously a type family $B : A \to \mathsf{Type}$: $A$ is defined inductively $B$ is defined inductively, and it may refer to $A$ Crucially, $A$ may refer to $B$. It is important to understand the difference between induction-recursion and induction-induction. In induction-recursion we define $B$ by providing equations of the form $$B(\mathsf{c}(\ldots)) = \cdots$$ where $\mathsf{c}(\ldots)$ is a constructor for $A$. In an inductive-inductive definition we define $B$ by providing constructors for forming the elements of $B$. Let us reformulate our previous example as induction-induction. First we have the inductively given type $A$: $a : A$ $\ell : \big(\sum_{x : A} B(x)\big) \to A$ The type family $B$ is defined by the following constructors: $\mathsf{Tru} : B(a)$ $\mathsf{Fal} : B(a)$ if $x : A$ and $y : B(x)$ then $\mathsf{Zer} : B(\ell(x,y))$ if $x : A$ and $y : B(x)$ and $z : B(\ell(x,y))$ then $\mathsf{Suc}(z) : B(\ell(x,y))$. As you can see, we gave rules for generating the elements of $B$ which amount to saying that $B(a)$ is (isomorphic to) the booleans, and $B(\ell(x,y))$ is (isomorphic to) the natural numbers.
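The question asks for Agda code, which this answer does not provide; as a rough substitute, here is a tiny Python sketch (my own encoding, not from any of the cited papers) that mimics the staged generation of the first, inductive-recursive example above. Elements of $A$ are encoded as "a" or ("l", x, y), and nat is truncated to $\{0,1,2\}$ so each stage stays finite:

```python
def B(x):
    # B(a) = bool ; B(l(x, y)) = nat (truncated to {0, 1, 2} for enumeration)
    return (False, True) if x == "a" else (0, 1, 2)

def stages_of_A(depth):
    """Elements of A reachable with at most `depth` applications of l."""
    elems = {"a"}
    for _ in range(depth):
        # Each stage closes under l(x, y) for every x built so far, y : B(x).
        elems |= {("l", x, y) for x in elems for y in B(x)}
    return elems
```

stages_of_A(1) adds $\ell(a,\mathtt{false})$ and $\ell(a,\mathtt{true})$; stages_of_A(2) additionally adds $\ell(\ell(a,b),n)$ for each boolean $b$ and each truncated $n$ — exactly the stages listed above.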
{ "domain": "cs.stackexchange", "id": 7406, "tags": "type-theory, induction, inductive-datatypes" }
Can a flame start on water (video included)?
Question: Can a flame start on a wet piece of paper immersed in water? The video is shown below (not the full video): https://youtu.be/Ey3z8z4Hxtc?t=1h27m29s This is not in English so you might not understand anything. In the video they simply say it is demons or whatever that starts the fire, which is silly to me, of course. My questions: What kind of compounds can we use (or did the person in the video use)? What do they do in movies, TV shows, magic shows, or even the Olympics, to start flames on water? What changes the color of a flame? For example, it was pink in the video. Answer: Watch this video from 1:50 onwards to see the reactions of sodium and potassium with water. https://www.youtube.com/watch?v=H6ZDiRIvc2E This video, 0:56, shows gasoline (petrol) burning on water. (Note that the batteries have nothing to do with it). https://www.youtube.com/watch?v=Bs13jBWqFjA
{ "domain": "chemistry.stackexchange", "id": 4047, "tags": "everyday-chemistry, water, combustion" }
Parsing cells containing Line Feed Characters
Question: Link to sanitized xls on dropbox if test data is needed. Essentially the reports I work with aren't bad - the issue is the way it exports to Excel - with the problem being that these cells are filled with LF characters breaking apart the data entries in the cells (usually a listing of employees in the format empID / emp name). There's really no rhyme or reason as to where it places the LFs - sometimes there are three in a row. A lot of the time for analysis I need to use this data, but first I need each person to have their own data (the reports get a lot bigger). Since I'm constantly writing and rewriting ways to do it, I figured I'd give it a shot at CR. I'm sure there's plenty to be improved. One note - apparently when you set a range to an InputBox range and the user hits cancel, it errors before assigning anything into the range. I could not find any other way to handle it, so I put it in its own function to avoid any other errors that occur. All one module. The top module would be called. I know the licensing conflict here, no need to mention it.
Option Explicit
'==========================================
'MIT License
'Copyright (c) <2016> <Raymond Wise> <https://github.com/RaymondWise/Excel-Workday-Report-Parser> @raymondwise
'==========================================

Public Sub ParseColumnFromWorkday()

    Dim lastRow As Long
    lastRow = 1

    Dim workingRange As Range
    Set workingRange = UserSelectRange(lastRow)
    If workingRange Is Nothing Then
        Exit Sub
    End If

    Application.ScreenUpdating = False

    Dim workingColumn As Long
    workingColumn = workingRange.Column

    Dim currentRow As Long
    Dim cellToParse As Range
    Dim stringParts() As String

    For currentRow = lastRow To 2 Step -1
        Set cellToParse = Cells(currentRow, workingColumn)
        stringParts = Split(cellToParse, vbLf)
        If Len(Join(stringParts)) = 0 Then GoTo SkipLoop
        cellToParse.Value = stringParts(0)
        Dim i As Long
            For i = 1 To UBound(stringParts)
                If Len(stringParts(i)) > 0 Then
                    cellToParse.EntireRow.Copy
                    cellToParse.EntireRow.Insert shift:=xlDown
                    cellToParse.Offset(-1) = stringParts(i)
                End If
            Next i
SkipLoop:
    Next currentRow

    Application.CutCopyMode = False
    Application.ScreenUpdating = True

End Sub

Supporting Cast

Private Function UserSelectRange(ByRef lastRow As Long) As Range

    Set UserSelectRange = Nothing
    Dim columnToParse As Range
    Set columnToParse = GetUserInputRange
    If columnToParse Is Nothing Then Exit Function
    If columnToParse.Columns.Count > 1 Then
        MsgBox "You selected multiple columns. Exiting.."
        Exit Function
    End If

    Dim columnLetter As String
    columnLetter = ColumnNumberToLetter(columnToParse)

    Dim result As String
    result = MsgBox("The column you've selected to parse is column " & columnLetter, vbOKCancel)
    If result = vbCancel Then
        MsgBox "Process Cancelled."
        Exit Function
    End If

    lastRow = Cells(Rows.Count, columnToParse.Column).End(xlUp).Row
    Set UserSelectRange = Range(Cells(2, columnToParse.Column), Cells(lastRow, columnToParse.Column))

End Function

Private Function GetUserInputRange() As Range
'This is segregated because of how excel handles canceling a range input
    Dim userAnswer As Range
    On Error GoTo inputerror
    Set userAnswer = Application.InputBox("Please select a single column to parse", "Column Parser", Type:=8)
    Set GetUserInputRange = userAnswer
    Exit Function
inputerror:
    Set GetUserInputRange = Nothing
End Function

Private Function ColumnNumberToLetter(ByVal selectedRange As Range) As String
    Dim columnLetter As String
    Dim rowBeginningPosition As Long
    rowBeginningPosition = InStr(2, selectedRange.Address, "$")
    columnLetter = Mid(selectedRange.Address, 2, rowBeginningPosition - 2)
    ColumnNumberToLetter = columnLetter
End Function

This isn't posted on the repo yet, just wanted to hit the gauntlet here first.

Answer: This "guard clause" does not need to be a block:

If workingRange Is Nothing Then
    Exit Sub
End If

Inlining the Exit Sub makes it clearer that it's intended to be a "quick sanity check" and not something that's meant to eventually grow with special handling and additional code (like a block does) - in fact, it would be consistent with what you have in other places:

If columnToParse Is Nothing Then Exit Function

Indentation is uncalled for here:

Dim i As Long
    For i = 1 To UBound(stringParts)

The declaration of i and the For loop are technically at the same "level", and should be lined up.

Dim i As Long
For i = 1 To UBound(stringParts)

Looking again at this validation part:

Set columnToParse = GetUserInputRange
If columnToParse Is Nothing Then Exit Function
If columnToParse.Columns.Count > 1 Then
    MsgBox "You selected multiple columns. Exiting.."
    Exit Function
End If

I think this might actually be better off in an error handler.
Set columnToParse = GetUserInputRange
If columnToParse Is Nothing Then Err.Raise ParseError.InputRangeIsNothing
If columnToParse.Columns.Count > 1 Then Err.Raise ParseError.MultipleColumnsSelected

Where ParseError could be a Private Enum that defines error codes for your implementation, typically starting with vbObjectError + 42. And then the body of the procedure can focus on the "happy path" - while the error handler can Select Case on the error number, and handle as needed:

Case ParseError.InputRangeIsNothing
    Resume CleanExit
Case ParseError.MultipleColumnsSelected
    MsgBox "Multiple columns are selected. Please select only one.", vbExclamation
Case Else
    MsgBox "An error has occurred: " & Err.Description, vbCritical
    Resume CleanExit

If Len(Join(stringParts)) = 0 Then GoTo SkipLoop

This is pretty much the only acceptable use for a GoTo instruction - simulating a Continue statement. But before doing that, I'd fix the indentation:

For currentRow = lastRow To 2 Step -1
    Set cellToParse = Cells(currentRow, workingColumn)
    stringParts = Split(cellToParse, vbLf)
    If Len(Join(stringParts)) = 0 Then GoTo SkipLoop
    cellToParse.Value = stringParts(0)
    Dim i As Long
    For i = 1 To UBound(stringParts)
        If Len(stringParts(i)) > 0 Then
            cellToParse.EntireRow.Copy
            cellToParse.EntireRow.Insert shift:=xlDown
            cellToParse.Offset(-1) = stringParts(i)
        End If
    Next i
SkipLoop:
Next currentRow

...and add some breathing space... and heck, I'd pay the price for the extra nesting and remove that GoTo.
For currentRow = lastRow To 2 Step -1
    Set cellToParse = Cells(currentRow, workingColumn)
    stringParts = Split(cellToParse, vbLf)
    If Len(Join(stringParts)) > 0 Then
        cellToParse.Value = stringParts(0)
        Dim i As Long
        For i = 1 To UBound(stringParts)
            If Len(stringParts(i)) > 0 Then
                cellToParse.EntireRow.Copy
                cellToParse.EntireRow.Insert shift:=xlDown
                cellToParse.Offset(-1) = stringParts(i)
            End If
        Next i
    End If
Next currentRow

...and then I'd extract a small private method for it:

Private Sub WhateverThisDoes(stringParts() As String, ByVal cellToParse As Range)
    cellToParse.Value = stringParts(0)
    Dim i As Long
    For i = 1 To UBound(stringParts)
        If Len(stringParts(i)) > 0 Then
            cellToParse.EntireRow.Copy
            cellToParse.EntireRow.Insert shift:=xlDown
            cellToParse.Offset(-1) = stringParts(i)
        End If
    Next i
End Sub

...which removes the nesting in the outer loop, and leaves you with smaller functions that do fewer things:

If Len(Join(stringParts)) > 0 Then WhateverThisDoes stringParts, cellToParse
{ "domain": "codereview.stackexchange", "id": 19623, "tags": "vba, excel" }
Synthesis of a weinreb amide from an acid
Question: The following shows the conversion of an acid to a Weinreb amide: It was taken from the IChO 2015 preparatory problem set, problem 23. In this two-step synthesis, I am slightly puzzled by the first step. The rationale behind first forming the ester is that we would like to synthesise an acid derivative which can react with the amine to form the amide, since direct conversion of an acid to an amide is rather difficult. However, I wonder why the t-butyl ester is formed instead of, say, a methyl ester, since the former sterically hinders the attack of the amine in the subsequent step. In fact, conversion to an acid chloride first (e.g. using thionyl chloride), instead of an ester, would seem to be more optimal since acid chlorides would be more reactive with the amine. In summary, I would like to know why ester formation is preferred over acid chloride formation in the first step and also why a rather bulky ester is preferred over a much less bulky one. Regarding the former, I believe it could be that the formation of the $\ce{HCl}$ by-product may hinder the 2nd reaction by protonating the amines. Regarding the latter, I believe there may be some electronic factors that need to be considered? Answer: Your question is easily answered by the fact that the conditions of Me3COCl + starting acid give a mixed anhydride, not an ester. The t-butyl group is chosen because the hindrance it brings makes reaction at the desired C=O carbon more likely. Further reading about mixed anhydride coupling is here
{ "domain": "chemistry.stackexchange", "id": 11788, "tags": "synthesis, amines" }
hector_slam how to disable tf?
Question: Hi, I'm trying to get odometry from hector_slam by using a Hokuyo laser, because the odometry that I'm getting from the wheel encoders is not so good. So I configured hector_slam and I'm getting good odometry data, but the problem is that despite setting pub_map_odom_transform to false it's still publishing the transformation from map to odom, and I don't want that because amcl is already publishing it. I want only the odometry data, not the tf. Can someone tell me how to deal with it? Here is the launch file I'm using to get odometry from hector_slam:

<launch>
  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">
    <remap from="map" to="hector_map"/>
    <remap from="scanmatch_odom" to="odom_hector"/>
    <remap from="initialpose" to="trash_initialpose"/>
    <param name="tf_map_scanmatch_transform_frame_name" value="odom" />
    <param name="pub_map_odom_transform" value="false"/>
    <param name="pub_odometry" value="true" />
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
    <param name="output_timing" value="false"/>
    <param name="use_tf_scan_transformation" value="true"/>
    <param name="use_tf_pose_start_estimate" value="false"/>
    <param name="scan_topic" value="scan"/>
    <!-- Map size / start point -->
    <param name="map_resolution" value="0.050"/>
    <param name="map_size" value="2048"/>
    <param name="map_start_x" value="0.5"/>
    <param name="map_start_y" value="0.5" />
    <!-- Map update parameters -->
    <param name="update_factor_free" value="0.4"/>
    <param name="update_factor_occupied" value="0.9" />
    <param name="map_update_distance_thresh" value="0.4"/>
    <param name="map_update_angle_thresh" value="0.06" />
    <remap from="map" to="maphector"/>
  </node>
  <node pkg="topic_tools" type="throttle" name="map_throttle" args="messages map 0.1 throttled_map" />
</launch>

Originally posted by kesuke on ROS Answers with karma: 58 on 2018-07-01 Post score: 0 Answer: There is another parameter pub_map_scanmatch_transform which enables the publishing of tf from map to the
frame named in param "tf_map_scanmatch_transform_frame_name". So just add <param name="pub_map_scanmatch_transform" value="false"/> to disable that. Originally posted by kartikmohta with karma: 308 on 2018-07-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by kesuke on 2018-07-02: @kartikmohta thank you so much it's working
{ "domain": "robotics.stackexchange", "id": 31143, "tags": "navigation, mapping, hector-slam, hector-mapping, ros-indigo" }
Is a layer of gas with sufficiently large optical thickness really radiating as a black body?
Question: Can a parcel of gas with a large value of optical thickness really radiate like a black body? I have in mind a simple (most likely oversimplified) model which yields $$I_\nu = I_\nu(0) e^{-\tau_\nu} + I_\nu^B [1-e^{-\tau_\nu}]$$ It says that the radiation of a layer of optical thickness $\tau_\nu$ is composed of: the radiation entering on one side, attenuated by absorption; and the intrinsic radiation, also attenuated by absorption. I have seen this equation in several books. But I do not understand the quantity $J_\nu$ in the equation $$dI_\lambda = -k_\lambda \rho I_\lambda\, ds + j_\lambda \rho\, ds$$ which is replaced after introducing the source function $J_\lambda=j_\lambda/k_\lambda$ by $$\frac{dI_\lambda(s)}{k_\lambda \rho\, ds} = -I_\lambda(s) + J_\lambda(s)$$ In my textbook, a few sentences after having introduced this, it is said that $$J_\lambda(s) = B_\lambda^B(T)$$ where $B_\lambda^B(T)$ is the Planck function. I cannot follow this argument: it would mean that an absorbing gas with sufficiently big optical thickness radiates to the outside according to Planck's law - but by what argument is this assumption justified? See here the original text: Answer: The expression $J_{\nu} = B_\nu$ (where $J_\nu$ is the source function) is true for a source of thermal radiation that is in local thermodynamic equilibrium, i.e. where the populations of the various energy levels and the speeds of particles are determined by a single temperature. This is just Kirchhoff's law of thermal radiation. If that is the case, then you can see from the equation of radiative transfer (the first equation you have written) that when $\tau_\nu$, the optical depth, becomes large, the specific intensity $I_\nu \rightarrow B_\nu$. So it is indeed true that if you have a sufficiently thick slab (and by this we mean optically thick, with large $\tau_\nu$), then the specific intensity from the slab will approach the Planck function value.
Of course, for the emergent spectrum to resemble the Planck function, this must be true at all frequencies. The emergent spectrum will in fact be the Planck function at the temperature at which the optical depth to the surface is $\sim 2/3$ and the radiation can escape. Note that the emission of a blackbody spectrum is not a property of the gas/material alone; it is a combination of thermal emission (the $J_\nu = B_\nu$ part) and the geometry (size and density) of the system. Edit: Incidentally, the notation you are using is very confusing. Usually $S_\nu$ is used for the source function and $J_\nu$ is the mean specific intensity. The confusing thing is that for a blackbody, $S_\nu = B_\nu = I_\nu = J_\nu$!
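The limiting behaviour described above can be illustrated numerically. This is a sketch with made-up numbers ($I_\nu(0)=2$ and $B_\nu=5$ are arbitrary in these units), showing the slab solution approaching the Planck value as the optical depth grows:

```python
import math

# Slab solution with a constant source function:
#   I(tau) = I(0) * exp(-tau) + B * (1 - exp(-tau))
def slab_intensity(I0, B, tau):
    return I0 * math.exp(-tau) + B * (1.0 - math.exp(-tau))

for tau in (0.01, 1.0, 10.0):
    print(tau, slab_intensity(I0=2.0, B=5.0, tau=tau))
# As tau grows, the emergent intensity approaches B, the Planck value,
# regardless of the incident intensity I(0).
```

For small $\tau$ the output stays close to the incident intensity; by $\tau = 10$ it is indistinguishable from $B$.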
{ "domain": "physics.stackexchange", "id": 92415, "tags": "thermodynamics, radiation, thermal-radiation, absorption" }
What type of bird is this?
Question: Can someone identify this white-coloured bird? Location: unknown (image taken from someone's profile pic). Answer: From the picture, it looks like Cacatua sulphurea [Source: Wikipedia] The yellow-crested cockatoo (Cacatua sulphurea), also known as the lesser sulphur-crested cockatoo, is a medium-sized (approximately 34 cm long) cockatoo with white plumage, bluish-white bare orbital skin, grey feet, a black bill, and a retractile yellow or orange crest. The sexes are similar. [Source] Habitat and ecology: It mostly inhabits evergreen, deciduous, monsoon and semi-evergreen forests. It nests in tree cavities or a pre-existing hole made by another species. (Nandika et al.) Observations made on Masakabing Island suggest that its favourite diet includes male fruits of Artocarpus communis, flowers and fruits of Cocos nucifera, young leaves and flowers of Ceiba petandra, mangroves and male fruits of Brassus sudaica. (Metz et al.) It mostly nests on C. nucifera, A. communis, C. petandra, Tamarindus indica and Avicennia sp. (Nandika et al.) [Source] Why is it not Cacatua galerita? The plumage is different and the skin around the eyes is whiter. [Source: In description part] In the picture of the OP there is a blue ring around the eyes, and it is also known as the blue-eyed cockatoo. [Source: In the description part]
{ "domain": "biology.stackexchange", "id": 7146, "tags": "species-identification, ornithology" }
Is the format of this PDO mysql statement correct
Question: I have been writing simple PHP programs, trying to adhere to the PSR standards for writing code. However, they do not seem to have any standards for writing MySQL PDO statements. For example, I have written the MySQL statement; $sql = "INSERT INTO `users`( `name`, `phone`, `city`, `date_added` )VALUES( :name, :phone, :city, :date)"; Is this a correct way of formatting? Or is there anything I can do to enhance it or follow some guidelines similar to PSR on how to write these types of statements? Answer: I think this is somewhat a matter of preference. What I try to do when writing SQL in a code context is adhere to the following guidelines: Make sure all lines of code are <= 80 characters in length (especially for longer queries). Keep the SQL definition on separate lines of code from the surrounding programming language syntax, with the query not indented (to help give you more room to not exceed 80 characters on a line). Try to break up lines of the SQL query along where different clauses within the query begin/end. Where a clause portion of the query would exceed 80 characters on a line, break across lines, indenting to indicate that this is a logical continuation of the clause begun on a previous line. When you have longer lists of columns/values (by longer I mean maybe 5 or more) put each column value on its own line to make it more readable. This is like you have done in your example, except in your example's case I don't know that I would put each field/value on its own line. I oftentimes use heredoc/nowdoc syntax on longer queries. For your example query, I would probably write it like this:

$sql = "
    INSERT INTO `users`
        (`name`, `phone`, `city`, `date_added`)
    VALUES
        (:name, :phone, :city, :date)
";

Here you can clearly see the following progression with each line: The first line clearly indicates this is an insert query operating against the users table.
The second line defines the columns you are operating against. The third line introduces a new SQL clause (in this case, that you are now specifying the values for the insert). The fourth line is written in a similar manner to the second line, so it is really easy to correlate the insert values to the columns. Let's expand your example. Let's say I had 10 columns I was going to insert. In this case you could do this:

$sql = "
    INSERT INTO `users`
        (`name`, `phone`, `city`, `date_added`, `column5`,
        `column6`, `column7`, `column8`, `column9`, `column10`)
    VALUES
        (:name, :phone, :city, :date, :column5,
        :column6, :column7, :column8, :column9, :column10)
";

But I would probably go ahead and break each column/value on its own line in this case because I find the following to be more readable:

$sql = "
    INSERT INTO `users` (
        `name`,
        `phone`,
        `city`,
        `date_added`,
        `column5`,
        `column6`,
        `column7`,
        `column8`,
        `column9`,
        `column10`
    ) VALUES (
        :name,
        :phone,
        :city,
        :date,
        :column5,
        :column6,
        :column7,
        :column8,
        :column9,
        :column10
    )
";

Let's say we have a single WHERE clause condition added (on an UPDATE, say, since an INSERT ... VALUES query does not take a WHERE clause). Perhaps that looks like this:

$sql = "
    UPDATE `users`
    SET `name` = :name, `phone` = :phone, `city` = :city
    WHERE `somefield` = 'some value'
";

Or multiple WHERE conditions:

$sql = "
    UPDATE `users`
    SET `name` = :name, `phone` = :phone, `city` = :city
    WHERE `somefield` = 'some value'
        AND `someotherfield` = 'some other value'
";

So at the end of the day, it really is preference as to what you (and any peers you may be working with) find to be a style that you want to adopt. I almost always stay away from "in-lining" SQL into other language code like this:

$pdo->prepare("INSERT INTO `users` (`name`, `phone`, `city`, `date_added`) VALUES (:name, :phone, :city, :date)");

Having a SQL statement in a variable makes it easier to debug the code and provide more meaningful error messages, IMO.
{ "domain": "codereview.stackexchange", "id": 21248, "tags": "php, mysql, pdo" }
What is the best hardware interface system for a new robot project?
Question: I am developing an educational robot program at a community college using ROS as the software platform and am looking for the best hardware interface for my design. I need something that is easy to implement so that students can add/remove functionality with minimal software implications. My robots will contain laptop computers with Ethernet, USB and Bluetooth capabilities. So far I am looking at the RoNeX as a possibility, but would like some feedback on whether this is the best approach. I have also looked at simply buying USB GPIO modules, but I want to keep the custom software to a minimum, so simple integration with ROS would be a great feature. We will be interfacing all kinds of analog and digital sensors, encoders, cameras and such. If anyone has experience in this area, advice would be greatly appreciated. Thanks, Jonathan West Southwest Indian Polytechnic Institute Originally posted by JWest on ROS Answers with karma: 66 on 2014-07-06 Post score: 1 Answer: I'd recommend using an Arduino for hardware interfacing, as this appears to be the platform that currently has by far the biggest user community and makes basic interfacing very easy (and easily google-able if something does not work). Interfacing with ROS can be done using rosserial_arduino or ros_arduino_bridge. I don't want to disparage the use of RoNeX, which looks like a very capable approach, but its use of EtherCAT suggests that it is meant for more involved (and expensive) applications. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-07-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ccapriotti on 2014-07-07: Just to add my 2 cents, regarding Arduino: from a student POV, it is great, because you can learn how sensors work (digital are either up or down, analog ranging from 0 to 1023), voltage dividers, and all that good stuff.
Comment by JWest on 2014-07-07: Thanks for the comments, Ok, I like the idea of the cheaper and more accessible Arduinos. In my application, I will have a base rover platform with the laptop, batteries, and motor control. The students will design and build "experiments" or additional components to add to the base platform. Will ROS be able to effectively work with several (perhaps 6 or 7) arduinos all connected through a USB hub? If so, then this is probably the way I will go since each student's project can have its own arduino and can be a stand-alone, testable (and affordable) subsystem. Comment by ccapriotti on 2014-07-07: Unless there is a ROS limitation to the number of serial ports to be connected, I do not foresee any issues. Programming the Arduinos will be simpler, and your students will be able to focus on learning programming, physical interface and electronic, and ROS will be in the BG, overseeing all.
{ "domain": "robotics.stackexchange", "id": 18531, "tags": "ros, usb, ethercat" }
Updating RViz display without clock being published (use_sim_time is true)
Question: I am playing back a bag file and need to be able to move things around while the bag is on pause. I have an interactive marker that broadcasts its pose as a transform. I also have a laser scan in the marker's transform frame. RViz does update the transform tree display and the interactive marker, but the laser scan only moves to the new location when I resume the bag playback. The laser scan is published by a node (not by the rosbag) based on the wall clock. To force RViz to update the scan I tried to change the laser scan's time stamp. I tried to publish to /clock myself, incrementing the timestamp while the bag is on pause (I remapped the rosbag's /clock). Nothing worked. Edit: The laser scan I want to move in the RViz display is published with a ros::WallTimer callback (its frequency does not depend on the simulation clock). It is stamped based on the last /clock message from the bag. I was hoping to figure out how to get the laser scan to be redrawn at the correct location according to the updated tf tree, without resuming the bag playback. The goal is to be able to get some measurements done on the recorded data using the convenience of the RViz infrastructure. Originally posted by mmedvede on ROS Answers with karma: 221 on 2013-01-22 Post score: 0 Answer: Setting /use_sim_time true should be done globally. Having some nodes run with wall clock time makes no sense. If your laser driver publishes data to the same ROS graph, make sure it uses simulated time, as well. Originally posted by joq with karma: 25443 on 2013-01-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mmedvede on 2013-01-23: Thank you for the answer. I should have specified I use a wallTimer callback, and I do not use wall time to stamp any of the messages I send out when the bag playback is on pause. Comment by joq on 2013-01-23: The problem is that rviz uses ros::Time, which does not change while rosbag is paused. I can't think of any way around that.
Comment by mmedvede on 2013-01-23: I did try to publish my own time to /clock. It was slightly incremented compared to the one that rosbag published. The rosbag /clock has been remapped to /clock2, as it keeps publishing the clock even when playback is on hold. Comment by joq on 2013-01-23: Fighting with rosbag over who generates simulated time does not sound very promising. Are you guaranteed that rosbag will never resume after the pause? Comment by mmedvede on 2013-01-23: No, rosbag would resume after the pause. This is a problem if my own clock ran ahead of the rosbag's. The thing is, even publishing my own clock does not make RViz redraw. I wonder if RViz needs the full tf tree with the newer stamps along with the new clock to redraw. Comment by joq on 2013-01-23: Probably so, tf is very sensitive to missing parts of the tree. Until messages can be fully transformed, rviz does not see them. Comment by mmedvede on 2013-01-23: My tf tree does not have any parts missing and is fully connected at all times. In any case, what I want seems to be too convoluted to implement with RViz right now. Thank you Comment by joq on 2013-01-23: The parts need not be missing, just not up to date. I think you are right that it will not work.
{ "domain": "robotics.stackexchange", "id": 12536, "tags": "rviz" }
Remove duplicates of first elements of pairs
Question: A list of pairs is given using Python. I want to remove duplicate occurrences of the first element of each pair, leaving only one occurrence of each first element paired with the highest second element it comes with. I am looking for an efficient solution that returns the pairs sorted. >>> myfunc([[2, 0], [3, 0], [4, 0], [5, 0], [3, 2], [4, 0], [5, 3], [5, 3]]) [[2, 0], [3, 2], [4, 0], [5, 3]] Answer: You can do this pretty easily in $O(n \log n)$ time by using a hash table (such as Python's dict) to map from the first value in a pair to the largest second value it's been seen with.

def myfunc(pairs):
    mapping = {}
    for key, value in pairs:
        if key not in mapping or value > mapping[key]:
            mapping[key] = value
    return sorted(mapping.items())

Hash table lookups take $O(1)$ time on average, so eliminating the duplicates in the loop takes $O(n)$ time. Sorting is (asymptotically) slower, taking $O(n \log n)$ time.
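The same result can also be obtained by sorting first and then grouping, which has the same $O(n \log n)$ bound. A sketch (the function name `myfunc_sorted` is made up here; `itertools.groupby` does the duplicate elimination, and it requires the sorted input it gets):

```python
from itertools import groupby
from operator import itemgetter

def myfunc_sorted(pairs):
    # Sort pairs lexicographically, then keep the largest second
    # element within each run of equal first elements.
    result = []
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        result.append([key, max(second for _, second in group)])
    return result

print(myfunc_sorted([[2, 0], [3, 0], [4, 0], [5, 0],
                     [3, 2], [4, 0], [5, 3], [5, 3]]))
# → [[2, 0], [3, 2], [4, 0], [5, 3]]
```

This trades the dictionary for a single pass over the sorted data, and returns lists (matching the question's expected output) rather than tuples.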
{ "domain": "cs.stackexchange", "id": 10279, "tags": "algorithms, lists" }
Enlarging the action of $C$, $P$ and $T$
Question: I am now studying QFT using Schwartz's book, and I am going through the part discussing how the charge conjugation, parity, and time-reversal operators acting on various objects should look. He starts with how the charge conjugation operator acts on spinors: $$C : \psi \rightarrow -i \gamma_{2} \psi^{*}$$ and then enlarges the action of $C$ to the other objects so that invariance of the Lagrangian is attained. However, I find this logic to be somewhat strange. For example, suppose we have a Lagrangian $$\mathcal{L} = \phi^{*}(\Box + m^{2}) \phi + \frac{\lambda}{3!} \phi^{3}.$$ Then after imposing a transformation such that $\phi \rightarrow - \phi$, we see that $\mathcal{L}$ is not invariant under this. Then we just simply say that this Lagrangian does not have this symmetry and go on. But the argument on the action of $C$, $P$ and $T$ doesn't go this way. We first declare that the Lagrangian we are interested in must be invariant under $C$, $P$ and $T$, and adjust their action on various objects to make that happen. For example, Schwartz says that the QED interaction term $eA_{\mu} \bar{\psi} \gamma^{\mu} \psi$ should be invariant under charge conjugation, and since $$C : \bar{\psi} \gamma^{\mu} \psi \rightarrow - \bar{\psi} \gamma^{\mu} \psi,$$ he declares that $C: A_{\mu} \rightarrow -A_{\mu}$. (Schwartz, Quantum Field Theory and the Standard Model, p. 194-195) Why can we do this for $C$, $P$ and $T$? Why do we not do this for others? For example, for $$\mathcal{L} = \phi (\Box + m^{2} ) \phi + \frac{\lambda}{3!} \phi^{3},$$ and the transformation $\phi \rightarrow -\phi,$ why do we not extend the transformation to $\lambda \rightarrow -\lambda$, so that the Lagrangian $\mathcal{L}$ is invariant under it? Answer: When folks talk about symmetry transformations, usually it's the fields (and in some cases, coordinates as well) that are transformed, e.g. $\psi$, $\phi$, and $A_\mu$.
And in your case of $$ \mathcal{L} = \phi (\Box + m^{2} ) \phi + \frac{\lambda}{3!} \phi^{3} $$ the coupling $\lambda$ is NOT a field and thus can NOT be transformed. One workaround is to promote $\lambda$ to a field that can transform properly; then you regain the symmetry. One historical example: Steven Weinberg promoted the mass parameter $m$ of the standard-model fermions to the Higgs field (more precisely, the Higgs field multiplied by the Yukawa coupling) to accommodate electroweak symmetry.
{ "domain": "physics.stackexchange", "id": 95254, "tags": "field-theory, cpt-symmetry" }
cannot install ros-electric-desktop-full right now
Question: I cannot install ros-electric-desktop-full at the moment. I am not sure whether the repo is now invalid or something is wrong on my end. The result I get is many unmet dependencies, but when I browse the repo in a web browser the packages are still there! Can anyone tell me what I am supposed to do, please? Originally posted by JrManit on ROS Answers with karma: 1 on 2011-10-21 Post score: 0 Original comments Comment by tfoote on 2011-10-22: Please provide more information about what architecture and OS you are running. Answer: What OS and repo are you using? Please quote your sources.list and show the console output of your problems. If you don't provide more information we can't help you; please see the support guidelines at http://www.ros.org/wiki/Support Based on your outputs it looks like you're using lucid. http://packages.ros.org/ros/ubuntu/dists/lucid/main/binary-amd64/Packages shows consistent versions of opencv. As does http://packages.ros.org/ros/ubuntu/dists/lucid/main/binary-i386/Packages Originally posted by tfoote with karma: 58457 on 2011-11-03 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by MichaelHsu170 on 2011-11-15: Excuse me, but may I ask when the problem mentioned in http://answers.ros.org/question/2887/libopencv23-dev-conflict could be resolved, please?
{ "domain": "robotics.stackexchange", "id": 7053, "tags": "ros, dependencies" }
BPS state and annihilation of SUSY charges
Question: As we all know, the anticommutator of one set of supercharges in massive extended supersymmetry is something like $$\{b_\alpha, b_\beta^\dagger \} = \delta_{\alpha \beta} (M-\sqrt{2} Z).$$ My question is that everyone says it is obvious that BPS states annihilate half of the supersymmetric charges. Why is this so? It may be trivial but I don't know how. Maybe it is just because the annihilation operator acts on the lowest-energy state and annihilates BPS states? But why only half? Can anyone clear that up? Answer: Take for example $\mathcal{N}=2$ supersymmetry. The algebra, including a central charge $Z$, is given by $$\{Q_\alpha^a,Q^\dagger_{\dot{\alpha}b}\}=2\sigma^\mu_{\alpha\dot{\alpha}}P_\mu \delta^a_b$$ $$\{Q_\alpha^a,Q_\beta^b\}=2^{3/2}\epsilon_{\alpha\beta}\epsilon^{ab}Z$$ $$\{Q^\dagger_{\dot{\alpha} a},Q^\dagger_{\dot{\beta} b}\}=2^{3/2}\epsilon_{\dot{\alpha}\dot{\beta}}\epsilon_{ab}Z.$$ We can now define $$a_\alpha=\frac12\left(Q_\alpha^1+\epsilon_{\alpha\beta}\left(Q_\beta^2\right)^\dagger\right)$$ and $$b_\alpha=\frac12\left(Q_\alpha^1-\epsilon_{\alpha\beta}\left(Q_\beta^2\right)^\dagger\right),$$ which reduces the algebra to $$\{a_\alpha,a_\beta^\dagger\}=\delta_{\alpha\beta}(M+\sqrt{2}Z)$$ and $$\{b_\alpha,b_\beta^\dagger\}=\delta_{\alpha\beta}(M-\sqrt{2}Z).$$ A BPS state satisfies $M=\sqrt{2}Z$, hence the second part of the algebra reduces to $$\{b_\alpha,b_\beta^\dagger\}=0.$$ This tells us that the operators in the second half of the algebra, which now vanishes, only generate states of zero norm. This is how you should understand the statement you were asking about.
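To spell out the zero-norm step the answer states verbally: for any BPS state $|\psi\rangle$, the expectation value of the vanishing anticommutator is a sum of two non-negative norms,

```latex
\langle\psi|\{b_\alpha,b_\alpha^\dagger\}|\psi\rangle
  = \big\|b_\alpha^\dagger|\psi\rangle\big\|^{2}
  + \big\|b_\alpha|\psi\rangle\big\|^{2} = 0
  \quad\Longrightarrow\quad
  b_\alpha|\psi\rangle = b_\alpha^\dagger|\psi\rangle = 0,
```

so exactly half of the supercharges (the $b$ oscillators, but not the $a$ ones, whose anticommutator $M+\sqrt{2}Z$ stays non-zero) act trivially on BPS states.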
{ "domain": "physics.stackexchange", "id": 13329, "tags": "supersymmetry" }
What's the Noether charge associated with Kaehler invariance of SuGra?
Question: What is the Noether charge associated with Kahler invariance of supergravity (SUGRA)? As the question is rather tangential to what I need to do, I have not tried explicitly calculating it myself, but I'm sure that I'm not the first one to wonder. Answer: There is none. The issue at hand is that the Kaehler invariance is just that - an invariance, not a continuous symmetry of the fields. Most prominently, the superpotential must transform as $$ \mathcal W \to \mathcal W e^{-h} $$ A general superpotential that leads to consistent theories is $$ \mathcal W =\frac{1}{2} m_{ab} \phi^a \phi^b + \frac{1}{3} Y_{abc} \phi^a \phi^b \phi^c $$ with at least one of the $m_{ab}$ and $Y_{abc}$ non-zero. From this it is obvious that no transformation of the fields $\phi^a$ exists such that $\mathcal W \to \mathcal W e^{-h}$ without redefining the couplings. Thus, there is a Kaehler invariance, which involves a redefinition of the couplings and has value on its own (e.g. on non-simply connected internal spaces, the Kaehler potential might only be defined locally, with definitions on different charts being equal up to Kaehler transformations $\mathcal K' = \mathcal K + f(\phi) + \bar f(\bar\phi)$), but this is not a symmetry in the sense of Noether's theorem.
{ "domain": "physics.stackexchange", "id": 10409, "tags": "research-level, supersymmetry, symmetry, noethers-theorem, supergravity" }
What are aryl chlorides currently used for?
Question: In a recent paper by Gustafson et al. (DOI: 10.1021/acs.orglett.6b02650) the authors open with the argument that aryl chlorides are both versatile synthetic handles and common functionalities in drug discovery. However, they fail to give examples of current use in industry and pharmaceutical research. I know of only one reaction that aryl chlorides are used for, and that would be the Ullmann biaryl (ether) coupling. Are there any other ones for which aryl chlorides are a good starting material? And then, about the second part of the sentence: Why are aryl chlorides a common functionality in drug discovery? Is it simply that compounds are chlorinated and then compared to their unchlorinated analogues? All in all, please provide examples for the quoted statement above. Answer: Synthetic handles: for organometallic couplings (Sonogashira, Suzuki...), there is a selectivity ($\ce{aryl-I} > \ce{aryl-OTf} >\ce{aryl-Br} >\ce{aryl-Cl}$) that can be crucial in total synthesis. You can make the other position ($\ce{aryl-I}$ or whatever) react first, then use modified conditions suitable for aryl chloride couplings (plenty of examples on this website). Or perform a halogen exchange before coupling your chloro position. Common functionalities: halogen bonds can change binding affinities. Even though they are weaker than hydrogen bonds, they are specific (see C. Bissantz et al., J. Med. Chem., 2010, 53 (14), 5061-5084). They can make interactions with electrophiles, nucleophiles, or with themselves (e.g. $\ce{C-X...O}$ $\ce{sp^2}$ or $\ce{X...X}$). One example of a molecule designed with chloroaryl moieties is represented here (see Furet et al., Bioorg. Med. Chem. Lett. 2012, 22 (10), 3498-3502, which shows the docking model). This is an inhibitor which blocks the binding pocket of the regulator protein MDM2. The 6-chloroindolyl moiety fills a subpocket (Trp 23), while the p-chlorophenyl fills another one (Leu 26).
The chlorine-chlorine interaction between the 6-chloroindolyl moiety and the tryptophan residue in the protein was a key point in the design of this inhibitor, strongly enhancing the binding affinity.
{ "domain": "chemistry.stackexchange", "id": 7440, "tags": "organic-chemistry, drugs" }
Minimizing the MSE with symmetric scalar quantizers
Question: Let $X$ be a scalar random variable with pdf $f(x)=e^{-2|x|}$. Among the class of symmetric 3-point scalar quantizers, find one which minimizes the MSE in quantizing $X$. Compute the MSE in this case. I find that $f(x)$ is the Laplacian distribution with $\mu = 0, b = 1/2$. By using the Lloyd-Max scalar quantizer, I can find that the thresholds are $[-1/2, 1/2]$ and the quantization levels are $[-1, 0, 1]$. Now, I need to find $d=MSE=E\left[(X-\hat{X})^2\right]$, but I can't compute the mean-squared error during integration. How can I make progress to find the MSE? Answer: $$MSE=E\left[(X-\hat{X})^2\right]$$ From the definition of the expectation operator, $$ \int_{-\infty}^{\infty}{(x - \hat{X})^2 f_X(x)} dx$$ Split the integral into the different quantization regions. $$ \int_{-\infty}^{-\frac{1}{2}}{(x - (-1))^2 e^{2x}} dx + \int_{-\frac{1}{2}}^{\frac{1}{2}}{(x - (0))^2 e^{-2|x|}} dx + \int_{\frac{1}{2}}^{\infty}{(x - (1))^2 e^{-2x}} dx $$ $$ \int_{-\infty}^{-\frac{1}{2}}{(x + 1)^2 e^{2x}} dx + 2\int_{0}^{\frac{1}{2}}{x^2 e^{-2x}} dx + \int_{\frac{1}{2}}^{\infty}{(x - 1)^2 e^{-2x}} dx $$ $$ \frac{1}{8}e^{-1} + \left(\frac{1}{2} - \frac{5}{4}e^{-1} \right) + \frac{1}{8}e^{-1} $$ $$ \frac{1}{2} - e^{-1} \approx 0.132 $$
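The closed form can be sanity-checked by integrating the expectation numerically. This is a sketch; the midpoint rule and the integration range $[-20, 20]$ are arbitrary choices (the truncated tail mass $e^{-40}$ is negligible):

```python
import math

def f(x):
    # Laplacian pdf with mu = 0, b = 1/2:  f(x) = exp(-2|x|)
    return math.exp(-2.0 * abs(x))

def qhat(x):
    # Symmetric 3-level quantizer: thresholds +-1/2, levels -1, 0, 1
    if x < -0.5:
        return -1.0
    if x > 0.5:
        return 1.0
    return 0.0

def mse(lo=-20.0, hi=20.0, n=200_000):
    # Midpoint-rule approximation of E[(X - qhat(X))^2]
    h = (hi - lo) / n
    return sum((x - qhat(x)) ** 2 * f(x) * h
               for x in (lo + (i + 0.5) * h for i in range(n)))

print(mse())               # ~0.1321
print(0.5 - math.exp(-1))  # closed form: 0.13212...
```

The numerical value agrees with $\frac{1}{2} - e^{-1}$ to several decimal places, which is a quick way to catch sign or limit errors in the hand integration.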
{ "domain": "engineering.stackexchange", "id": 5144, "tags": "electrical-engineering, signal-processing" }
Equations of motion for two point masses
Question: The following system of two particles connected by a massless rod rotates about an axis perpendicular to the plane of the system. The top mass $m_{r}$ is at a distance $L_{r}$ from the rotation point and the bottom mass $m_{l}$ is at a distance $L_{l}$ from the rotation point. $L_{r}+L_{l}=L$ where $L$ is the length of the rod. If I apply Newton's second law along the $\theta$ direction to each mass I get $$-m_{r}g\cos\theta=m_{r}L_{r}\frac{d^{2}\theta}{dt^{2}}\tag{1}$$ $$m_{l}g\cos\theta=m_{l}L_{l}\frac{d^{2}\theta}{dt^{2}}\tag{2}$$ Usually, though, Newton's law for rotational motion, $\tau_{external}=Id^{2}\theta/dt^{2}$ where $\tau_{external}$ is the torque due to the gravitational forces and $I$ is the moment of inertia of the two particles, is applied, which leads to $$g\cos\theta[m_{l}L_{l}-m_{r}L_{r}]=(m_{l}L_{l}^{2}+m_{r}L_{r}^{2})\frac{d^{2}\theta}{dt^{2}}\tag{3}$$ I can see how to get (3) by combining (1) and (2) but I cannot see how to get (1) and (2) from (3). Are equations (1) and (2) correct? For example they seem to imply that $$g\cos\theta=L_{l}\frac{d^{2}\theta}{dt^{2}}=L_{r}\frac{d^{2}\theta}{dt^{2}}$$ i.e. that $L_{l}=L_{r}$? Answer: The rod will exert a tangential force on the particles. Your equations (1) and (2) then become $$N_{r}-m_{r}g\cos\theta=m_{r}L_{r}\frac{d^{2}\theta}{dt^{2}}$$ and $$-N_{l}+m_{l}g\cos\theta=m_{l}L_{l}\frac{d^{2}\theta}{dt^{2}}$$ If you now multiply the first equation by $L_{r}$ and the second by $L_{l}$, add them, and use the fact that $N_{r}L_{r}-N_{l}L_{l}=0$, you get equation (3). How do we know that $N_{r}L_{r}-N_{l}L_{l}=0$? One argument is that (3) is the equation you get if you use the Lagrangian formalism, and (3) implies that $N_{r}L_{r}-N_{l}L_{l}=0$. The Lagrangian is $$L=\frac{1}{2}m_{r}L_{r}^{2}\left(\frac{d\theta}{dt}\right)^{2}+\frac{1}{2}m_{l}L_{l}^{2}\left(\frac{d\theta}{dt}\right)^{2}-m_{r}gL_{r}\sin\theta+m_{l}gL_{l}\sin\theta$$ and from the Lagrange equation $\partial L/\partial \theta - \frac{d}{dt}\,\partial L/\partial \dot{\theta}=0$ you get exactly (3).
A more "physical" explanation of why $N_{l}L_{l}=N_{r}L_{r}$ is that, because the rod is massless, the net torque due to the forces $N_{l},N_{r}$ acting on the rod must be zero.
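A quick numerical consistency check of the above (a sketch with made-up masses and lengths): solve the torque equation (3) for $\ddot\theta$, back out the rod forces $N_r$ and $N_l$ from the per-mass equations in the answer, and confirm that the net torque of the rod forces about the pivot vanishes:

```python
import math

# Arbitrary illustrative values
m_r, m_l = 1.3, 2.7        # masses
L_r, L_l = 0.8, 0.5        # distances from the pivot
g, theta = 9.81, 0.4       # gravity, angle (radians)

# Angular acceleration from the torque equation (3)
I = m_l * L_l**2 + m_r * L_r**2
alpha = g * math.cos(theta) * (m_l * L_l - m_r * L_r) / I

# Rod forces from the per-mass equations including N_r, N_l
N_r = m_r * (L_r * alpha + g * math.cos(theta))
N_l = m_l * (g * math.cos(theta) - L_l * alpha)

# Net torque of the rod forces about the pivot should vanish
print(N_r * L_r - N_l * L_l)  # ~0 (floating-point noise only)
```

Any choice of masses and lever arms gives the same cancellation, which is the algebraic content of the massless-rod condition.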
{ "domain": "physics.stackexchange", "id": 36443, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics" }
Convert R RNA-seq data object to a Python object
Question: I have done some work in R and would like to try a Python tool. What is a good way to import the data (and its annotations etc.) as a Python object? I am particularly interested in converting a Seurat object into an AnnData object. (Either directly or as a Python object which can be converted into an AnnData.) Answer: A simple solution for converting Seurat objects into AnnData, as described in this vignette: library(reticulate) seuratobject_ad <- Convert(from=seuratobject, to="anndata", filename="seuratobject.h5ad") Alternatively, one can use Loom, "a file format designed to store and work with single-cell RNA-seq data". In R, install loomR: devtools::install_github(repo = "mojaveazure/loomR") Convert from a Seurat object to loom: pfile <- Convert(from = pbmc_small, to = "loom", filename = "pbmc_small.loom") pfile$close() # close loom objects when finished using them. Then import the loom object in Python using loompy, or directly as AnnData via scanpy.api.read_loom. Alternatively, see feather. Or export as some text format (csv, json) and then import into Python.
{ "domain": "bioinformatics.stackexchange", "id": 1976, "tags": "rna-seq, r, python" }
Can this phenomenon be explained in ground frame?
Question: Suppose there is a disc rotating about its centre with constant angular velocity. The surface of the disc is rough, and there is a small block at some distance from the centre (at rest initially). What happens to the block in the ensuing motion can be explained in the rotating frame using the centrifugal and Coriolis forces, but in the ground frame, how does one explain what force causes the block to move in a straight path, finally leaving the disc? Answer: TL;DR When both the box and the disc are at rest, there are two (significant) forces acting on the box: (i) the weight from the gravitational force, and (ii) the normal force from the contact with the disc surface. If the disc starts rotating, there are two possible scenarios: (i) if there is no friction force, the box remains at the same position as seen by a ground observer; (ii) if there is friction force, the box rotates together with the disc up until the disc's (angular) velocity is large enough to overcome the static friction force between the box and the disc surface, after which the box leaves the disc. Below I discuss these two scenarios, both seen by a ground observer. I would suggest always analyzing free-body diagrams (the forces that act on the body) from an inertial reference frame. Otherwise, you will just get confused by non-existent forces such as a centrifugal force. No friction between box and disc surface: The only two forces acting on the box are the gravitational force (the weight) exerted by the Earth and the normal force exerted by the disc surface. Since the box does not move in the vertical direction, these two forces are equal in magnitude and opposite in direction. Even if the disc rotates, the box remains at rest, because there are still only two forces exerted on the box. There is friction between box and disc surface: Imagine both box and disc rotate at some angular velocity $\omega$.
There are three forces exerted on the object: (i-ii) weight and normal force as already discussed, and (iii) static friction force exerted by the disc surface in direction parallel to the surface. Since the box rotates about an axis that goes through disc center, the net force on the box is (always) directed towards the center of rotation (study uniform circular motion to understand why): $$\vec{F}_\text{net} = m \frac{v^2}{R} \hat{r} = m \omega^2 R \hat{r}$$ where $v$ is linear velocity, $\omega$ is angular velocity, $R$ is distance of the box from the center of rotation, and $\hat{r}$ is a unit vector that points towards the center of rotation. This net force equals vector sum of all forces that are exerted on the object: $$\vec{F}_\text{net} = \vec{w} + \vec{n} + \vec{f}_s$$ where $\vec{w}$ and $\vec{n}$ are weight and normal force which cancel as already discussed, and $\vec{f}_s$ is static friction force. From this it is obvious that the static friction provides a radial force component which makes the box rotate together with the disc. In other words, the static friction opposes relative motion between the box and the disc surface. However, the static friction force has a maximum value it can take which depends on the coefficient of static friction $\mu_s$. If the velocity ($v$ or $\omega$) is too large such that the net force is larger than the maximum static friction force, the box starts slipping on the disc surface and eventually falls off.
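The slipping condition in the last paragraph can be made quantitative: the box stays on the disc as long as $m\omega^2 R \le \mu_s m g$, and the mass cancels. A small sketch with assumed values for $\mu_s$ and $R$ (not taken from the question):

```python
import math

def omega_max(mu_s, R, g=9.81):
    # Largest angular velocity before static friction is exceeded:
    #   m * omega_max^2 * R = mu_s * m * g
    #   =>  omega_max = sqrt(mu_s * g / R)
    return math.sqrt(mu_s * g / R)

print(omega_max(mu_s=0.5, R=0.2))  # ~4.95 rad/s
```

Above this angular velocity the required centripetal force exceeds the maximum static friction, and the box starts to slide, exactly the point at which the ground-frame description switches from circular to (nearly) straight-line motion.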
{ "domain": "physics.stackexchange", "id": 86385, "tags": "newtonian-mechanics, reference-frames, centrifugal-force, coriolis-effect" }
What causes hot temperature extremes?
Question: Recently I looked into the hottest (and coldest, but in another question) natural places. I found this to be a "cloud of searing gas that is surrounding a swarm of galaxies clustered together five billion light-years away in the constellation of Virgo". This place has a temperature of 300 million degrees Celsius, which is super hot even compared to the temperature at our Sun's core. A quick search for the cause gives: A group of galaxies collided violently with another swarm of galaxies at a speed of 2,500 miles per second. https://www.skymania.com/wp/hottest-spot-in-universe-found/ However, what mechanism exactly causes this temperature? Friction? Or something else? Would stars there be fusing stuff from the outside? Answer: Within that galaxy cluster, things become chaotic as these galaxies are flung around but kept in orbit by their strong gravity. And as Steven Linton mentioned, the galaxies' organized motion is turned primarily into disorganized motion within the material that is stuck together, which is what we know as heat. So the atoms of gas in these clouds collided and bounced off in all directions, where they hit other atoms, until the two clouds had formed one, with huge random internal motions (i.e. very hot).
{ "domain": "astronomy.stackexchange", "id": 3556, "tags": "temperature" }
How fast would you have to go to claim you saw the red light as green due to the Doppler Effect?
Question: Driving toward it, I suspect you would have to be going at relativistic speeds. Calculating this, it appears to be around 1/6 the speed of light when driving directly at it, but since you are on the road and it is in the air, I can't calculate the angle well. This could make for a good excuse to the cops :) Answer: Randall Munroe answered this question in this article Let me (try) to quote: How fast would you have to go in your car to run a red light claiming that it appeared green to you due to the Doppler Effect? —Yitzi Turniansky As expected, quoting and mathjaxing are two things that do not go together well, the rest of this post should be considered a quote: $$\frac{\lambda_\text{red}}{\lambda_\text{green}}=\sqrt{\frac{1+\frac{v_\text{car}}{c}}{1-\frac{v_\text{car}}{c}}}$$ $$v_\text{car}=\frac{c\,\left(\lambda_\text{red}^2-\lambda_\text{green}^2\right)}{\lambda_\text{green}^2+\lambda_\text{red}^2}\approx\frac{1}{6}c$$ Solely for your amusement I'd like to point out how happy I felt when finally getting this formula right and how quickly that feeling diminished when I realized I could have simply copy & pasted the mathjax code from the page source. duuuh
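Plugging typical wavelengths into the last formula reproduces the ≈ c/6 figure (the 620 nm / 520 nm values below are my own assumption for "red" and "green"):

```python
c = 299_792_458               # speed of light, m/s
red, green = 620e-9, 520e-9   # assumed red/green wavelengths, metres

# v = c * (red^2 - green^2) / (green^2 + red^2)
v = c * (red**2 - green**2) / (green**2 + red**2)
print(f"required speed: {v:.3e} m/s = {v / c:.3f} c")
```

Slightly different choices of "red" and "green" wavelengths shift the answer a little, but it stays in the neighborhood of one-sixth of the speed of light.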
{ "domain": "physics.stackexchange", "id": 21471, "tags": "visible-light, speed-of-light, doppler-effect" }
When the Sun goes red giant, will it "vaporize" away a significant fraction of Jupiter and Saturn moons?
Question: Since Jupiter and Saturn's moons are composed of a rock+ice mix. For example, Callisto is 50% rock and 50% ice. When the Sun finally goes red giant, could it melt a significant portion of the 50% of Callisto that is ice? And over what timescale? Answer: The Sun's luminosity is going to increase by a factor of a thousand or more, so objects at 30 or so AU will receive as much illumination as Earth does now. That is right around the perihelion of Pluto, so I think ice moons will be fairly literally toast. That's even assuming that the swelling is sufficiently slow and spherically symmetric, and the significant mass loss that occurs is such that their orbits aren't disrupted, which might be generous. The entire red giant phase only lasts a few million years, so the transition between phases is instantaneous on astronomical timescales. I'm not sure about human timescale, though. If there are any sentient, corporeal beings living on Earth in five billion years, they better monitor the Sun at all times.
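The "30 or so AU" figure follows from inverse-square dilution of the increased luminosity: flux matches Earth's current flux at a distance of $\sqrt{L/L_\odot}$ AU. A quick sketch (the factor of 1000 is the answer's round number):

```python
import math

# Flux scales as L / d^2. With L in solar luminosities and d in AU,
# the flux equals Earth's present-day flux when d = sqrt(L).
def earth_flux_distance_au(luminosity_solar):
    return math.sqrt(luminosity_solar)

print(earth_flux_distance_au(1000))   # ~31.6 AU, near Pluto's perihelion
```

So at a thousandfold luminosity increase, the "habitable flux" distance moves out past Saturn's orbit (≈ 9.5 AU), leaving the icy moons well inside it.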
{ "domain": "physics.stackexchange", "id": 3078, "tags": "solar-system" }
Quasi concavity of max-relative entropy?
Question: The max-relative entropy between two states is defined as $$D_{\max }(\rho \| \sigma):=\log \min \{\lambda: \rho \leq \lambda \sigma\}.$$ It is known that the max-relative entropy is quasi-convex. That is, for $\rho=\sum_{i \in I} p_{i} \rho_{i}$ and $\sigma=\sum_{i \in I} p_{i} \sigma_{i}$ where $\rho_i, \sigma_i$ are quantum states and $p$ is a probability vector, it holds that $$D_{\max }(\rho \| \sigma) \leq \max _{i \in I} D_{\max }\left(\rho_{i} \| \sigma_{i}\right).$$ Is there a lower bound for $D_{\max }(\rho \| \sigma)$ in terms of $D_{\max }(\rho_i \| \sigma_i)$? Answer: No, this is not possible. Consider $\rho_1 = \sigma_2 = \vert 0\rangle\langle 0 \vert$ and $\rho_2 = \sigma_1 = \vert 1\rangle\langle 1 \vert$. Then, $$D_{\max}(\rho_i\|\sigma_i) = \infty\quad \text{for } i = 1,2.$$ Let $p_i = (1/2, 1/2)$ and you see that $D_{\max}(\rho\|\sigma) = 0$.
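The counterexample can be checked numerically. Using the characterization $D_{\max}(\rho\|\sigma)=\log_2 \lambda_{\max}\!\left(\sigma^{-1/2}\rho\,\sigma^{-1/2}\right)$ (valid when $\mathrm{supp}\,\rho \subseteq \mathrm{supp}\,\sigma$; this rewriting is standard but not stated in the answer), the equal mixtures give $\rho=\sigma=I/2$ and hence $D_{\max}=0$, even though each individual term is infinite. A sketch:

```python
import numpy as np

def d_max(rho, sigma):
    # D_max = log2 of the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2};
    # assumes supp(rho) is contained in supp(sigma), otherwise D_max = infinity.
    w, V = np.linalg.eigh(sigma)
    inv_sqrt = V @ np.diag(1 / np.sqrt(w)) @ V.conj().T
    return np.log2(np.linalg.eigvalsh(inv_sqrt @ rho @ inv_sqrt).max())

rho1 = np.diag([1.0, 0.0])   # |0><0|
rho2 = np.diag([0.0, 1.0])   # |1><1|
rho = sigma = 0.5 * rho1 + 0.5 * rho2   # both mixtures equal I/2
print(d_max(rho, sigma))     # ≈ 0, while each D_max(rho_i || sigma_i) is infinite
```

This makes the point concrete: no lower bound in terms of the individual $D_{\max}(\rho_i\|\sigma_i)$ can hold.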
{ "domain": "quantumcomputing.stackexchange", "id": 3247, "tags": "information-theory, entropy, max-entropy, relative-entropy" }
How would a quantum radar detect the state/information change of a remote, unobserved particle?
Question: The Department of National Defence is investing in a technology they are coining quantum radar. The premise is that they would create an entangled pair of photons, and shoot one out into the air while monitoring the other one. Before asking a question here a few years ago about what/how quantum entanglement is/works, my hope was that entangled particles could work this way, because if this is how entanglement worked, one could achieve faster than light communication. But my understanding of entanglement now has been clarified to be more of a shared secret, that it may have uses in assuring authenticity, or detection of eavesdropping, specifically because the entangled particles share a secret and are not locked together. That is, changing the state of an entangled particle does not change the state of its partner; it simply decouples them so they are no longer entangled. However, the descriptions of these proposed 'quantum radar' systems now make me question yet again whether I have an accurate understanding of quantum entanglement. So I guess my question is, yet again, does changing the state/information of an entangled particle have any effect at all on its partner(s)? Answer: To answer your final question, no. There is no measurement you can make on your local photon that will tell you anything about what has happened to the remote photon. The idea of the project you've mentioned is not to shoot one into the air while monitoring just the other one. Big picture, they want to shoot one into the air, and then, if it reflects off of something (like a stealth plane), detect it and measure both photons. By looking at correlations between the incoming photons and the photons that remained in the lab, they can more easily separate background noise (which was never entangled with your photons, and so should be uncorrelated) from signal (which still has some correlations with your photons). Relevant paper on this topic: Quantum Illumination.
{ "domain": "physics.stackexchange", "id": 49143, "tags": "quantum-entanglement" }
Running two subscribers concurrently in the same node
Question: I am working on a problem and need to use the most recent pose from amcl/future (my self-designed topic) and publish it as a tf between two frames. I made two subscribers for: amcl_pose/future and clock. The subscriber for amcl_pose/future updates a pose struct inside my code whenever it receives a new pose estimate from amcl; the subscriber for clock ignores the clock message and publish the most recent pose from the structure to tf (basically use it to publish tf in a high rate). The problem is that the amcl_cb function is never called by ROS. I believe it might be a concurrency issue because the two callbacks share the same data structure. What is the "correct" way to implement this? My code snippet is provided. **Note: the pose published is not the amcl estimated pose. ** **The parent_frame and child_frame are only to simplify the snippet, so it is not related. ** void amcl_cb(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr& p){ pose.header = p->header; pose.pose = p->pose; } void clock_cb(const rosgraph_msgs::Clock::ConstPtr& dummy){ tf::TransformBroadcaster broadcaster; broadcaster.sendTransform( tf::StampedTransform( tf::Transform( tf::Quaternion(pose.orientation.x, pose.orientation.y, pose.orientation.z, pose.orientation.w), tf::Vector3(pose.position.x, pose.position.y, pose.position.z)), ros::Time::now(),"parent_frame", "child_frame")); } int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_publisher"); ros::NodeHandle n; ros::Subscriber amcl_sub = n.subscribe<geometry_msgs::PoseWithCovarianceStamped>("amcl_pose/future",1000, amcl_cb); ros::Subscriber clock_sub = n.subscribe<rosgraph_msgs::Clock>("clock",1000, clock_cb); ros::spin(); } Originally posted by alex_f224 on ROS Answers with karma: 28 on 2018-06-21 Post score: 0 Original comments Comment by mgruhler on 2018-06-21: This is just a guess, but... amcl does, by Default, not have a Topic that is called amcl_pose/future. Do you remap amcl_pose to amcl_pose/future? 
Additionally, it does not make sense to use the clock to republish this. Either use a ros::Timer.. Comment by mgruhler on 2018-06-21: .., if you want this published with another, static, frequency, or publish this directly from the amcl_cb. Last but not least, are you sure your frames are called parent_frame and child_frame? But I might be missing something important. So maybe elaborate why you chose that approach? Comment by alex_f224 on 2018-06-21: Thanks for asking. No, it is not the amcl pose. In fact, I used the amcl_pose to make some prediction, which publishes a topic I call amcl_pose/future. Comment by alex_f224 on 2018-06-21: The parent_frame and child_frame are only to simplify the snippet, so it is not related. Comment by gvdhoorn on 2018-06-21:\ I believe it might be a concurrency issue because the two callbacks share the same data structure. No, that is most likely not the case here. ros::spin() implements a single-threaded callback queue, so calls to your two callbacks are serialised. Comment by pengjiawei on 2018-06-21: try this while(n.ok()){ subscriber; ros::spinOnce(); } Comment by gvdhoorn on 2018-06-21: What does subscriber do? And what is it? Answer: Problem solved. Just to let everyone know. Firstly, there is no "bug" in the code; it is just that amcl does not publish pose unless the robot moved. Therefore, amcl_cb was never called. I changed my code using the ros::Timer suggestion by @mig to publish tf. It works fine at the end. Originally posted by alex_f224 with karma: 28 on 2018-06-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 31056, "tags": "ros, navigation, ros-kinetic, callback, amcl" }
Is there a more advanced keyboard teleop controller for pr2
Question: There is a keyboard teleoperation program in pr2_teleop package. But it only provides very basic movement on the base. I am writing a more advanced keyboard controller, just like the Joystick teleoperation controller did, but I was wondering if there is already some similar keyboard controller there to save my work~ Anyone knows this? Thank you Originally posted by vincent on ROS Answers with karma: 311 on 2011-08-09 Post score: 0 Answer: There is a more advanced teleop package for the PR2 called pr2_teleop_general. That allows you to control the arms, head and base. Originally posted by cmansley with karma: 198 on 2011-08-09 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by vincent on 2011-08-09: thank you~
{ "domain": "robotics.stackexchange", "id": 6379, "tags": "ros, teleoperation" }
4-bit input, 5-bit output, logical right shift by 2, which is the correct set of 5 output bits?
Question: Suppose I have the following inputs: 1110 1111 If I perform a logical right shift by 2 on each, are the 5-bit outputs these: 00111 00111 or these: 01110 01111 If it's neither, then I'd appreciate an explanation of why and what I've done wrong! Answer: There are two ways to do a shift: either the "new" digit (the leftmost, msb) is set to zero, or it is set to the digit that was there before. The former is usually called logical shift while the latter preserves the sign of the number (in case you use the complement representation) and is called arithmetic shift. But sometimes both are just called shift, and one needs to clarify the desired behavior. Since you ask about buffers of length 5 bits, the msb is anyway 0, so logical shift = arithmetic shift, and I'd say the 2-bit right shift is: 01110 -> 00111 -> 00011 01111 -> 00111 -> 00011
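The same result falls out of Python's right-shift operator, which behaves as a logical shift for non-negative integers (a quick check, not part of the original answer):

```python
# Zero-extend each 4-bit input to 5 bits, then shift right by 2;
# format(..., '05b') pads the result back to 5 binary digits.
for x in (0b01110, 0b01111):
    print(format(x >> 2, '05b'))   # both print 00011
```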
{ "domain": "cs.stackexchange", "id": 3968, "tags": "boolean-algebra" }
How does global planner in teb local planner tutorial work
Question: Previously I asked a question about how to activate the global_plan_viapoint_sep for the tutorial, and now I have encountered another problem while exploring the teb techniques: how does the global planner in the tutorial work? When I tried to go through the content in each file, it appeared that the tutorial supplies the points and creates a path related to those points, but how does this happen? Does this planner use any global planner to generate a simple route, then use the teb local planner to optimize the route? Or is it the function of teb to generate the global path? Originally posted by samping on ROS Answers with karma: 33 on 2016-05-20 Post score: 0 Answer: The global planner is completely independent of a particular local planner implementation and vice versa. You might have a look at the launch files in the teb_local_planner_tutorials source. Here move_base is loaded with the common global planner: <param name="base_global_planner" value="global_planner/GlobalPlanner" /> <param name="planner_frequency" value="1.0" /> But you can also use Navfn or any custom planner. But if you want to use the global path following mode (which is what I guess from your description, since it mentions the global_plan_viapoint_sep parameter), the only prerequisite is that the global planner provides a sequence of points rather than a single goal point. This should usually be the case. So how does it work?: The global planner provides a path to the goal. In each sampling interval (according to the controller rate), the local planner (teb, dwa, ...) takes a subset of the global plan (usually the part that is contained in the local costmap) into account for local planning (think of a receding horizon). Let's denote the current goal of this horizon as virtual/local goal. And yes, the teb_local_planner optimizes this initial route w.r.t. time-optimality by default. In that case the teb_local_planner usually shortens the path to the current virtual goal.
In some applications the user might prefer to follow the global plan more strictly rather than always taking the fastest path to the virtual goal. This is addressed in the tutorial you are talking about. The planner iterates over the intermediate points of the global plan from the global planner along the current horizon to the virtual goal. Each intermediate point that is separated by a certain distance from its predecessor (parameter global_plan_viapoint_sep) is taken into account as an attractor during optimization. I call these attractors via-points since our optimized trajectory should try to reach each of them. Of course, the trajectory is still time-optimal, but not w.r.t. the straight line between start and goal, but by incorporating via-points. The strength of attraction at each via-point can be adjusted according to weight_viapoint. The default setting is not very strict. You can find further information and explanations in the tutorial. If something is missing, let me know or modify the content. I hope that this small description answers your questions. Edit: The first part of the tutorial does not consider any global plan. You can feed in via-points using rviz (publish point button) just to get familiar with the optimization process and the weight_viapoint parameter. Originally posted by croesmann with karma: 2531 on 2016-05-20 This answer was ACCEPTED on the original site Post score: 6
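For reference, the via-point behaviour described above is driven by just the two parameters mentioned; a minimal, illustrative parameter-file snippet (parameter names are from the planner, the values are untuned examples only):

```yaml
# Illustrative teb_local_planner settings (example values, not tuned):
TebLocalPlannerROS:
  global_plan_viapoint_sep: 0.5   # min spacing [m] between extracted via-points
  weight_viapoint: 1.0            # attraction strength during optimization
```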
{ "domain": "robotics.stackexchange", "id": 24706, "tags": "ros, teb-local-planner" }
Can you find the mass of solvent with mass of solute, volume of solution, and solution density?
Question: If you have, for example, $\pu{10 g}$ of a solute dissolved into $\pu{100 ml}$ of a solution, and the solution density is $\pu{1.5 g ml^-1}$, can you just multiply $100\ \mathrm{ml} \times 1.5\ \mathrm{\frac{g}{ml}}$, then subtract the $\pu{10 g}$ of solute from the result to get the mass of the solvent in grams? Does that work or am I trying to do this wrong? Answer: That works! In most chemistry (but not nuclear reactions!), mass is always conserved, so $$m_\text{solution} = m_\text{solvent} + m_\text{solutes}$$ Mass is also equal to density times volume, so $$m_\text{solution} = \rho_\text{solution} V_\text{solution} = m_\text{solvent} + m_\text{solutes}$$ Solving for $m_\text{solvent}$: $$m_\text{solvent} = \rho_\text{solution} V_\text{solution} - m_\text{solutes}$$ Which is exactly the calculation you did.
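Carrying out the numbers from the question as a one-line check:

```python
# m_solvent = rho * V - m_solute, with the question's values
density, volume, m_solute = 1.5, 100, 10   # g/mL, mL, g
m_solvent = density * volume - m_solute
print(m_solvent)   # 140.0 grams of solvent
```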
{ "domain": "chemistry.stackexchange", "id": 2757, "tags": "physical-chemistry, solutions" }
A contradiction between Biot-Savart and Ampère-Maxwell Laws?
Question: I came across a problem that I cannot get my head around. Consider two very small spherical metallic balls given charges $+Q$ and $-Q$. Assume that both can be approximated as point charges. Now, they are connected by a straight, finite, conducting wire. A current will flow in the wire until the charges on both balls become zero. Consider a point P on the perpendicular bisector of the wire, at a distance $r$ from the wire. My goal is to find the magnetic field at point P, when the current in the wire is $i$. The following figure illustrates the mentioned situation. I will now use the Ampère-Maxwell equation to obtain an expression for the field. I have constructed a circular loop of radius $r$ around the wire, to use the Ampère-Maxwell Law. Firstly, one must notice that the two charges produce an electric field everywhere in space. And since the balls are getting discharged, the electric field is actually changing. I have calculated the electric flux through the surface when the charges on the balls are $+q$ and $-q$ below. Now, for the final substitution... So I have obtained a neat result after all! But, I realized there was a problem. Let me use the Biot-Savart Law to find the magnetic field created only due to the current in the wire. This is a relatively easier calculation since the formula for the field due a finite current carrying straight wire is already known. The answer turns out to be the same. First of all, is the answer correct? If not, where did I go wrong? This is what I cannot understand. The Biot-Savart Law gives you the magnetic field created merely due to the current flowing in a conducting wire. On the other hand, the Ampère-Maxwell Law gives you the net field due to the current carrying wire and due to the induced magnetic field (caused by the changing electric field). So how is it that I get the same answer in both cases? The Biot-Savart Law cannot account for induced fields, right? 
Why does there seem to be an inconsistency in the two laws? Have I missed something, or used a formula where it is not applicable? Answer: The short answer is that the case of a finite wire violates one of the postulates of magnetostatics, which is that $\nabla \cdot \vec j = 0$. In cases when the current has sinks, the Biot-Savart law is not equivalent to the Ampere law, but to a "magnetostationary" Maxwell-Ampére law. Hence, in this rather special case you get the same result from the Biot-Savart as well as the Maxwell-Ampére law. It is not by coincidence. Sketch of a proof follows, for gory details see Griffiths' Introduction to electrodynamics. (Some details are on wikipedia) The Biot-Savart law can be equivalently written as $$ \vec B(\vec{r}) = \frac{\mu}{4 \pi} \nabla \times \int \frac{\vec{j}(\vec{r}')}{|\vec{r}-\vec{r}'|} d^3 r' $$ It is important to remember that the curl acts only on the unprimed r. We can now take a curl of the above equation, use the curl curl formula, realize that the laplacian of $1/r$ is proportional to the delta function, and use a few other tricks to obtain $$\nabla \times \vec{B} =\frac{\mu}{4 \pi} \nabla( \int \frac{ \nabla' \cdot \vec{j}}{|\vec{r}-\vec{r}'|} d^3 r' ) + \mu \vec j$$ Where the primed nabla acts on the primed r. When the current is divergenceless, we then simply obtain the Ampere law in differential form. However, if the current has a nonzero divergence and satisfies charge conservation we have $\nabla \cdot \vec{j}= -\partial_t \rho$. If we then assume that $\partial_t B=0$ ("magnetostationarity") the Gauss' law and other laws of electrostatics are unchanged. 
Then, we can use the electrostatic solution of the electric potential $\phi$ to easily derive that $$ \frac{\mu}{4 \pi} \nabla( \int \frac{-\partial_t \rho}{|\vec{r}-\vec{r}'|} d^3 r' )=-\mu \epsilon \nabla( \partial_t \phi) $$ But we can also switch the order of derivatives and use $\vec E = -\nabla \phi$ to finally see that in the special "magnetostationary" case with conservation of charge the Biot-Savart law will be equivalent to the full Maxwell-Ampère law $$\nabla \times \vec{B} = \mu (\vec j + \epsilon \partial_t \vec E )$$ Note that the practical occurrence of cases where the Maxwell contribution to the Ampère law is non-negligible while the Faraday induction is negligible is very rare. One should thus understand the "magnetostationary" validity of the Biot-Savart law as more of a curiosity. For instance in your case the charges would have to be very large and the conductor between them a very bad conductor. As the two spheres become connected, an electromagnetic wave emerges. Only once the wave leaves the system and the stationary current is well established can we use the Biot-Savart law.
{ "domain": "physics.stackexchange", "id": 39402, "tags": "electromagnetism" }
AttributeError: 'list' object has no attribute 'values'
Question: While implementing a code, getting the below error: AttributeError Traceback (most recent call last) <ipython-input-22-d35ba980e0c5> in <module> 23 optimizer.zero_grad() 24 ---> 25 output = model(X_tr) 26 optimizer.zero_grad() 27 loss = loss_func(output) ~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), <ipython-input-20-fe528190183d> in forward(self, x) 13 14 def forward(self,x): ---> 15 h = self.f(x) 16 return self.g(h) ~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), <ipython-input-18-f3b96ecb25f3> in forward(self, x) 34 x = self.fc(x[:, -1, :]) 35 # out.size() --> 100, 10 ---> 36 x = self.hybrid(x) 37 return T.cat((x, 1 - x), -1) ~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), <ipython-input-6-f66b87fe8d6c> in forward(self, input) 42 43 def forward(self, input): ---> 44 return HybridFunction.apply(input, self.quantum_circuit, self.shift) <ipython-input-6-f66b87fe8d6c> in forward(ctx, input, quantum_circuit, shift) 8 ctx.quantum_circuit = quantum_circuit 9 ---> 10 expectation_z = ctx.quantum_circuit.run(input[0].tolist()) 11 result = torch.tensor([expectation_z]) 12 ctx.save_for_backward(input, result) <ipython-input-4-39b4287471c5> in run(self, thetas) 30 result = job.result().get_counts() 31 ---> 32 counts = np.array(list(result.values())) 33 print('counts', counts) 34 print('result.values', 
result.values()) AttributeError: 'list' object has no attribute 'values' But the output of result after printing is: RESULT [{'1': 49, '0': 51}, {'1': 58, '0': 42}, {'1': 48, '0': 52}, {'0': 53, '1': 47}, {'0': 51, '1': 49}, {'0': 51, '1': 49}, {'1': 58, '0': 42}, {'0': 47, '1': 53}, {'1': 54, '0': 46}, {'0': 47, '1': 53}, {'0': 41, '1': 59}, {'0': 52, '1': 48}, {'1': 47, '0': 53}, {'1': 52, '0': 48}, {'0': 52, '1': 48}, {'1': 50, '0': 50}, {'0': 41, '1': 59}, {'1': 54, '0': 46}, {'1': 55, '0': 45}, {'1': 44, '0': 56}, {'1': 61, '0': 39}, {'1': 53, '0': 47}, {'0': 40, '1': 60}, {'1': 59, '0': 41}, {'0': 60, '1': 40}, {'0': 56, '1': 44}, {'1': 46, '0': 54}, {'0': 46, '1': 54}, {'0': 53, '1': 47}, {'1': 52, '0': 48}, {'0': 49, '1': 51}, {'1': 55, '0': 45}, {'1': 51, '0': 49}, {'1': 48, '0': 52}, {'0': 53, '1': 47}, {'0': 56, '1': 44}, {'1': 53, '0': 47}, {'0': 51, '1': 49}, {'0': 45, '1': 55}, {'1': 47, '0': 53}, {'1': 55, '0': 45}] Answer: Result.get_counts() method returns a dictionary if the Job contains only one circuit. If, however, your job contains multiple circuits, it will return a list of dictionaries. To avoid this issue, you will need to either specify the circuit index: # Counts of the first circuit: result = job.result().get_counts(0) or loop over the circuits: for _dict in job.result().get_counts(): print(_dict)
{ "domain": "quantumcomputing.stackexchange", "id": 4154, "tags": "qiskit, programming, quantum-gate" }
Python text based game room to room movement
Question: I'm new to coding and working on a text based game moving from room to room. The code works in pycharm but according to an instant feedback program I entered it into, it gave some tips on it and I am not exactly sure how to improve upon it. Here is what it gave me: 1. The Great Hall string is a key in the main dictionary. Revise code so it is not a key in the main dictionary. 2. It said it can not classify my code (not sure what that means) — do I need a def main() command? 3. Consolidate multiple print commands into one function. 4. Make condition simpler by using non-complex conditions. With simple operators that don't look for a contain within the string. 5. Better practice to use while True: and the break reserved word to stop the loop when you require it to stop. # data setup rooms = {'Great Hall': {'name': 'Great Hall', 'South': 'Bedroom', 'North': 'Dungeon', 'West': 'Library', 'East': 'Kitchen'}, 'Bedroom': {'name': 'Bedroom', 'East': 'Cellar', 'North': 'Great Hall'}, 'Cellar': {'name': 'Cellar', 'West': 'Bedroom'}, 'Library': {'name': 'Library', 'East': 'Great Hall'}, 'Kitchen': {'name': 'Kitchen', 'West': 'Great Hall', 'North': 'Dining Room'}, 'Dining Room': {'name': 'Dining Room', 'South': 'Kitchen'}, 'Dungeon': {'name': 'Dungeon', 'South': 'Great Hall', 'East': 'Gallery'}, 'Gallery': {'name': 'Gallery', 'West': 'Dungeon'}, } directions = ['North', 'South', 'East', 'West'] current_room = rooms['Great Hall'] # game loop while True: # display current location print() print('You are in the {}.'.format(current_room['name'])) # get user input command = input('\nWhat direction do you want to go? 
').strip() # movement if command in directions: if command in current_room: current_room = rooms[current_room[command]] else: # bad movement print("You can't go that way.") # Exit game elif command.lower() in ('q', 'quit'): break # bad command else: print("I don't understand that command.") Answer: Overall I don't think the code is that bad, considering what is the purpose; but if you want to expand your game I think the greatest issue is the data structure. It'd be very easy to misplace some room or call it in a different way if you have to repeat yourself every time, so I suggest you to use a class to represent a Room: class Room: name: str north: 'Room' east: 'Room' south: 'Room' west: 'Room' def __init__(self, name, north=None, east=None, south=None, west=None): self.name = name self.north = north self.east = east self.west = west self.south = south if north: north.south = self if east: east.west = self if south: south.north = self if west: west.east = self def go_to(self, direction): if direction in ['north','east','south','west']: return getattr(self, direction) else: return None def __str__(self): return self.name I'm using types because is very helpful to find out problems before it's too late, in Python if you have a recursive type (like Room) you need to use the quotes, but it just a syntactic notation. By default a room does not have any north,east,south,west Room (hence the =None) but the important thing that happens in the __init__ is that when you add a room it automatically set the opposite direction. So if a room goes to another by east, the other room does the opposite by west. Thus we are reducing errors in directions. If this will not be the case for some room, you will be able to override that using another class (SelfLockingRoom that closes all the other directions when you enter, for example). The __str__ method is just to handle the display of the room to the room itself; at one point you could have more than just the name to display. 
I'm adding also a go_to method, it is the room responsibility to decide where to go given the direction; in the future you could have a TrapRoom that extends Room and in case of direction "East" will do nasty things, for example. The getattr is just to avoid having to write if direction=='north': return self.north but it would be the same and maybe even clearer. Actually, if the game develops further, you may need more complex rules to decide what do to given a direction so probably you will need a House class: class House: current_room: Room rooms: list[Room] # at the moment is not used but could be in the future? def __init__(self, current_room: Room, rooms: list[Room]): self.current_room = current_room self.rooms = rooms def go_to(self, direction): if next_room := self.current_room.go_to(direction): self.current_room = next_room return next_room At the moment is not very useful, but I think it's going to be. To initialize the house you just create the rooms: house_rooms = [cellar := Room('Cellar'), library := Room('Library'), dining_room := Room('Dining Room'), gallery := Room('Gallery'), bedroom := Room('Bedroom', east=cellar), dungeon := Room('Dungeon', east=gallery), kitchen := Room('Kitchen', north=dining_room), great_hall := Room('Great Hall', south=bedroom, north=dungeon, west=library, east=kitchen)] house = House(current_room=great_hall, rooms=house_rooms) (As you can see I'm not repeating names or directions) The game loop is OK, we can collect some helpful methods, just in case you want to expand them in the future: def prompt(text: str): print(text) def ask(text: str) -> str: return input(text).strip() and this would be your game loop (I'm always lowering the direction so you can write East east or EAST). commands = { "directions": ['north', 'south', 'east', 'west'], "quit": ['q', 'quit'] } def game_loop(): prompt(f"\nYou are in the {house.current_room}") command = ask('\nWhat direction do you want to go? 
').lower() if command in commands["directions"]: if not house.go_to(command): prompt("You can't go that way.") elif command in commands["quit"]: return False else: prompt("I don't understand that command.") return True if __name__ == '__main__': while game_loop(): pass
{ "domain": "codereview.stackexchange", "id": 41083, "tags": "python" }
How to represent linear regression in a decision tree form
Question: I have read that decision trees can represent any hypothesis and are thus completely expressive. So how do we represent the hypothesis of linear regression in the form of a decision tree? I am referring to the equation w0 + w1.x1 + w2.x2 + .... + wn.xn = yn Answer: The basic idea would be to divide up your feature space in small multi-dimensional intervals, and then assign to each point in a given interval the average value that your linear regression model has in that interval. This is something you can do with a tree. This is similar in spirit to approximating (in 1D space) the function $y = x$ with a "piecewise constant", "staircase-like" function like http://mathworld.wolfram.com/NearestIntegerFunction.html: you could divide your 1D space in equal intervals (e.g. of length 1), and you'd assign to each interval the average value that the function $y = x$ has in it. Note that such a "piecewise constant" function can be defined as a tree: suppose you wanted to know the "tree" approximation of $y = x$ for $x^* = \pi = 3.14159..$, then you could do: Is $x^* > 0$? Yes - Is $x^* < 0.5$? No (if it was, I would have approximated $x^*$ with 0) - - Is $x^* < 1.5$? No (if it was, I would have approximated with 1) - - - Is $x^* < 2.5$? No (if it was...) - - - - Is $x^* < 3.5$? Yes -> I approximate $x^*$ with $3$ (3 is the average of $y = x$ between 2.5 and 3.5). Note that as the size of the intervals shrinks (in "tree language", as you grow your tree more and more), the approximation becomes better and better. Also, rather than a decision tree one would speak of a regression tree in this context. To generalize this idea, just imagine that you could carry out a similar procedure for any linear function in 1D $y = ax + b$ using a "staircase-like" function, and in N dimensions for any linear function in N-D $y = a_0 + a_1 x_1 + \cdots + a_n x_n$.
Actually, you don't have to restrict yourself to linear functions: as you said, trees are more flexible still (you just need to appropriately assign the values to each interval of your feature space)!
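The staircase construction described above can be sketched in a few lines of Python (a toy illustration; the function name and midpoint-averaging scheme are mine, not from the answer):

```python
import math

def staircase(f, x, width):
    """Piecewise-constant 'regression tree' approximation of f: return the
    value of f at the midpoint of the width-sized interval containing x
    (for a linear f, this equals the average of f over that interval)."""
    k = math.floor(x / width)        # index of the leaf interval holding x
    midpoint = (k + 0.5) * width
    return f(midpoint)

# Approximating y = x at x = pi: shrinking the intervals (i.e. growing the
# tree deeper) makes the approximation better.
coarse = staircase(lambda t: t, math.pi, 1.0)   # leaf [3, 4) -> value 3.5
fine = staircase(lambda t: t, math.pi, 0.1)     # leaf [3.1, 3.2) -> ~3.15
```

The same idea extends to N dimensions by splitting on one coordinate per tree node, which is exactly what a regression tree does.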
{ "domain": "datascience.stackexchange", "id": 4374, "tags": "machine-learning, decision-trees, linear-regression" }
Vigenere square cypher decryption
Question: I'm in the process of learning python and programmed this exercise in decrypting the Vigenere square cypher as practice. Please comment on best practices, efficiency, or more pythonic ways to do things. #!/usr/bin/env python """Functions for encrypting and decrypting text using the Vigenere square cipher. See: http://en.wikipedia.org/wiki/Vigen%C3%A8re_cipher IC stands for Index of Coincidence: http://en.wikipedia.org/wiki/Index_of_coincidence """ from __future__ import division from collections import Counter from math import fabs from string import ascii_lowercase from scipy.stats import pearsonr from numpy import matrix from os import system #Define some constants: LETTER_CNT = 26 ENGLISH_IC = 1.73 #Cornell English letter frequecy #http://www.math.cornell.edu/~mec/2003-2004/cryptography/subs/frequencies.html ENGLISH_LETTERS = 'etaoinsrhdlucmfywgpbvkxqjz' ENGLISH_FREQ = [0.1202, 0.0910, 0.0812, 0.0768, 0.0731, 0.0695, 0.0628, 0.0602, 0.0592, 0.0432, 0.0398, 0.0288, 0.0271, 0.0261, 0.0230, 0.0211, 0.0209, 0.0203, 0.0182, 0.0149, 0.0111, 0.0069, 0.0017, 0.0011, 0.0010, 0.0007] ENGLISH_DICT = dict(zip(list(ENGLISH_LETTERS), ENGLISH_FREQ)) MAX_LEN = 10 #Maximum keyword length def scrub_string(str): """Remove non-alphabetic characters and convert string to lower case. """ return ''.join(ch for ch in str if ch.isalpha()).lower() def string_to_numbers(str): """Convert str to a list of numbers giving the position of the letter in the alphabet (position of a = 0). str should contain only lowercase letters. """ return [ord(ch) - ord('a') for ch in str] def numbers_to_string(nums): """Convert a list of numbers to a string of letters (index of a = 0); the inverse function of string_to_numbers. """ return ''.join(chr(n + ord('a')) for n in nums) def shift_string_by_number(str, shift): """Shift the letters in str by the amount shift (either positive or negative) modulo 26. 
""" return numbers_to_string((num + shift) % LETTER_CNT for num in string_to_numbers(str)) def shift_string_by_letter(str, ch, direction): """Shift the letters in str by the value of ch, modulo 26. Right shift if direction = 1, left shift if direction = -1. """ assert direction in {1, -1} return shift_string_by_number(str, (ord(ch) - ord('a') + 1) * direction) def chunk_string(str): """Add a blank between each block of five characters in str.""" return ' '.join(str[i:i+5] for i in xrange(0, len(str), 5)) def crypt(text, passphrase, which): """Encrypt or decrypt the text, depending on whether which = 1 or which = -1. """ text = scrub_string(text) passphrase = scrub_string(passphrase) letters = (shift_string_by_letter(ch, passphrase[i % len(passphrase)], which) for i, ch in enumerate(text)) return ''.join(letters) def IC(text, ncol): """Divide the text into ncol columns and return the average index of coincidence across the columns. """ text = scrub_string(text) A = str_to_matrix(scrub_string(text), ncol) cum = 0 for col in A: N = len(col) cum += (sum(n*(n - 1) for n in Counter(col).values()) / (N*(N - 1)/LETTER_CNT)) return cum/ncol def keyword_length(text): """Determine keyword length by finding the length that makes IC closest to the English plaintext value of 1.73. """ text = scrub_string(text) a = [fabs(IC(text, ncol) - ENGLISH_IC) for ncol in xrange(1, MAX_LEN)] return a.index(min(a)) + 1 def correlation(letter_list): """Return the correlation of the frequencies of the letters in the list with the English letter frequency. """ counts = Counter(letter_list) text_freq = [counts[ch]/len(letter_list) for ch in ascii_lowercase] english_freq = [ENGLISH_DICT[ch] for ch in ascii_lowercase] return pearsonr(text_freq, english_freq)[0] def find_keyword_letter(letter_list): """Return a letter of the keyword, given every nth character of the ciphertext, where n = keyword length. 
""" str = ''.join(letter_list) cors = [correlation(shift_string_by_number(str, -num)) for num in xrange(1, LETTER_CNT + 1)] return ascii_lowercase[cors.index(max(cors))] def find_keyword(ciphertext, keyword_length): """Return the keyword, given its length and the ciphertext.""" A = str_to_matrix(scrub_string(ciphertext), keyword_length) return ''.join( [find_keyword_letter(A[j]) for j in xrange(keyword_length)]) def str_to_matrix(str, ncol): """Divide str into ncol lists as in the example below: >>> str_to_matrix('abcdefghijk', 4) [['a', 'e', 'i'], ['b', 'f', 'j'], ['c', 'g', 'k'], ['d', 'h']] """ A = [list(str[i:i + ncol]) for i in xrange(0, len(str), ncol)] stub = A.pop() B = matrix(A).T.tolist() for i, ch in enumerate(stub): B[i] += ch return B def test_functions(): """Unit tests for functions in this module.""" assert(shift_string_by_number('unladenswallow', 15) == 'jcapstchlpaadl') assert(shift_string_by_letter('unladenswallow', 'M', -1) == 'ngetwxglpteehp') assert(chunk_string('terpsichorean') == 'terps ichor ean') assert(crypt('Hello world!', "mypassword", 1) == 'udbmhplgdh') assert(crypt('udbmhplgdh', "mypassword", -1) == 'helloworld') assert(round(correlation('ganzunglabulich'), 6) == 0.118034) assert(scrub_string("I'm not Doctor bloody Bernofsky!!") == 'imnotdoctorbloodybernofsky') assert(string_to_numbers('lemoncurry') == [11, 4, 12, 14, 13, 2, 20, 17, 17, 24]) assert(numbers_to_string([11, 4, 12, 14, 13, 2, 20, 17, 17, 24]) == 'lemoncurry') assert(round(IC('''QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV''', 5) , 2) == 1.82) assert(keyword_length('''QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV''') == 5) assert(str_to_matrix('abcdefghijk', 4) == [['a', 'e', 'i'], ['b', 'f', 'j'], ['c', 'g', 'k'], ['d', 'h']]) if __name__ == '__main__': 
print 'Calculating...' with open ("plaintext.txt", "r") as infile: plaintext = infile.read().replace('\n', ' ') passphrase = 'Moby Dick' ciphertext = crypt(plaintext, passphrase, 1) kw_len = keyword_length(ciphertext) kw = find_keyword(ciphertext, kw_len) print 'Keyword length is {0}.'.format(kw_len) print 'The keyword is {0}.'.format(kw) system("""bash -c 'read -s -n 1 -p "Press any key print the decrypted text..."'""") print crypt(ciphertext, kw, -1) Answer: Overall this is well documented, well written code. There are a number of things I may have written differently, but they are primarily matters of personal style. But I still found some things I want to call out that might be somewhat problematic, or at least worth examining: scrub_string might not do what you want. Notably, "\xe9".isalpha() is True with my default settings, but your code probably does not handle LATIN SMALL LETTER E WITH ACUTE. shift_string_by_letter seems like it might be better described as passing in the letter that a should become (or the letter that becomes a). As a related comment, make sure you are aware that assert lines are removed when python is run with -O or -OO so you cannot depend on them to catch run-time errors. Your use here is correct, as passing anything other than 1 or -1 is a programming error instead of a run-time error. crypt uses modulus to match what itertools could make simpler. With cycle and izip it can become this: letters = (shift_string_by_letter(ch, p, which) for ch, p in izip(text, cycle(passphrase))) IC doesn't appear to follow the same naming conventions as your other functions. It's short, and upper-case, possibly an abbreviation. In __main__, the use of system and bash is unusual. I would replace this with raw_input("Press enter to print the decrypted text...") (or input(...) in python 3) to avoid the indirection that needlessly prevents this from working on systems without bash. 
In several places, constructs like ord(ch) - ord('a') or chr(n + ord('a')) help you convert between letters and indices. It might help either performance or readability to set up two dictionaries to do the conversion as a lookup, say to_index[ch] or from_index[n]. This would have a nice effect of catching unsupported characters more explicitly by raising a KeyError if one is encountered. As a general guideline, when you start to prioritize performance over readability, one of the lowest hanging fruits in python is function calls. While this can lead towards manually "inlining" other functions in order to save a few cycles, it can also lead towards other interesting approaches. For example, if the text for crypt is long enough, it's plausible that making up to 26 dictionaries mapping all 26 characters would allow for faster running code overall, despite the set-up time. The resulting code might look like this: shift_map = { p : { ch : shift_string_by_letter(ch, p, which) for ch in ascii_lowercase } for p in passphrase } letters = (shift_map[p][ch] for ch, p in izip(text, cycle(passphrase))) But that strategy would fail utterly for short lengths of text due to the costs of setting up the dictionary. It's also possible that all of this would be lost in the noise of using generator expressions. This pontification highlights the need to profile the scenarios for which you actually want your code to perform well, and then to be creative about how you fix them.
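The itertools suggestion from the review can be seen standalone in the sketch below (Python 3, so zip replaces izip; note the helper here uses the conventional a = shift 0, unlike the off-by-one shift in the reviewed code):

```python
from itertools import cycle

def shift_letter(ch, key_ch):
    """Shift lowercase ch forward by the alphabet index of key_ch (a = 0)."""
    return chr((ord(ch) - ord('a') + ord(key_ch) - ord('a')) % 26 + ord('a'))

def vigenere_encrypt(text, passphrase):
    # cycle() repeats the passphrase endlessly; zip() pairs each plaintext
    # letter with the next key letter, replacing the manual
    # passphrase[i % len(passphrase)] indexing.
    return ''.join(shift_letter(ch, p) for ch, p in zip(text, cycle(passphrase)))
```

For example, vigenere_encrypt("hello", "abc") gives "hfnlp", since the key letters a, b, c shift by 0, 1, 2 respectively.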
{ "domain": "codereview.stackexchange", "id": 5500, "tags": "python, vigenere-cipher" }
Why does charge not flow through my body when I touch one lead of a battery?
Question: Why does charge not flow through my body when I touch one lead of a battery? I think of it like this: a battery's leads have some potential relative to ground. When I touch one lead there is a potential difference between me and one lead. Why does current not flow through my body? Answer: Why does current not flow through my body? Actually, current (charge) does flow in your example! As you have correctly identified, a battery terminal is on certain potential with respect to the ground, and you (or your hands) are on the same potential as the ground. When you touch the battery terminal, the charge will flow to equalize the two potentials. Since capacitance of battery terminals to ground is very small, not "much charge" is needed to equalize the potentials. In the schematic below, each battery terminal has a certain (small) capacitance to ground. Initially, their voltage to ground is $$\varphi_1 - \varphi_0 = V_1 \qquad \text{and} \qquad \varphi_2 - \varphi_0 = V_1 + V_\text{BAT}$$ Now when you touch the positive terminal (close the switch), the current starts flowing from the capacitor through a resistor (combination of skin, body, shoes etc.) which is in $\text{M}\Omega$ range. Once the capacitor is discharged, i.e. once the potentials are equalized, the current (charge) stops flowing. And it is not that you cannot feel this! If the capacitance was initially charged to a very high potential above the ground, a spark could occur between your hand and the battery terminal, which does hurt a little. See related discussion on sparks: Reason of sudden electric shocks?
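To put rough numbers on "not much charge is needed" (the figures below are illustrative assumptions, not values from the answer):

```python
# Assumed figures: ~1 pF terminal-to-ground capacitance, ~1 MOhm body
# resistance (skin, shoes, etc.), terminal sitting 12 V above ground.
C = 1e-12        # farads
R = 1e6          # ohms
V = 12.0         # volts

Q = C * V        # charge that flows to equalize the potentials (coulombs)
tau = R * C      # RC time constant of the discharge (seconds)
electrons = Q / 1.602e-19   # elementary charges transferred

print(Q)         # on the order of 1e-11 C
print(tau)       # on the order of a microsecond: over almost instantly
```

A hundredth of a nanocoulomb flowing for a microsecond is far below anything you could feel, which is the answer's point.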
{ "domain": "physics.stackexchange", "id": 86858, "tags": "electrostatics, electric-circuits, electric-current, voltage" }
Parts Inventory System
Question: I have been working on some code for a basic inventory management system and I have all the features that I believe are need like adding parts displaying parts list and the such as you can see. I believe I have worked out all the bugs I can find. I'm looking to see if I got all the bugs out and maybe some optimizations that I am not seeing. The code is fully working and I'm just making sure I am not overlooking anything before I start using this code. dict_member = {} parts = dict_member class Parts(): def __init__(self, part, part_number, quantity): self.part = part self.part_number = part_number self.quantity = quantity def display (self): for part_number, parts in dict_member.items(): print('Part Name:', parts.part + ', ''Part Number:' + parts.part_number + ', ''Quantity:' + parts.quantity) if parts == 0: print('No Parts Found in List') def add(self): part = input("Enter Name of Part:\n ") part_number = input("Enter Part Number:\n ") quantity = input("Enter Quantity:\n ") dict_member[part] = Parts(part, part_number, quantity) def remove(self): part = str(input("Enter Part Name to Remove\n")) if part in parts: del parts[part] else: print("Part Not Found.") def edit(self, part): if part in parts: par = input("Enter New Part\n") num = input("Enter New Part Number\n ") quan = input("Enter New Quantity\n ") del dict_member[part] dict_member[part] = Parts(par, num, quan) else: print("No such Part exists") def saveData(self): filename = input("Filename to save: ") out_file = open(filename, "wt") for part_number, parts in dict_member.items(): out_file.write(parts.part + "," + parts.part_number + "," + parts.quantity + "\n") out_file.close() print("File Saved") def loadData(self): try: filename = input("Filename to load: ") in_file = open(filename, "rt") while True: in_line = in_file.readline() if not in_line: break in_line = in_line[:-1] part, part_number, quantity = in_line.split(",") dict_member[part] = Parts(part, part_number, quantity) in_file.close() 
except FileNotFoundError: print("File Not Found.") return parts def display_menu(): try: print("") print("1. Parts ") print("2. Add Part") print("3. Remove Part ") print("4. Edit Part") print("5. Save Part List") print("6. Load Part List") print("9. Exit ") print("") return int(input("Selection> ")) except ValueError: print("Selection Not Valid.") return parts print("Welcome Sticks&Stones Inventory System") part_instance = Parts(None, None, None) menu_item = display_menu() while menu_item != 9: if menu_item == 1: part_instance.display() elif menu_item == 2: part_instance.add() elif menu_item == 3: part_instance.remove() elif menu_item == 4: m = input("Enter Part Name to Edit\n") part_instance.edit(m) elif menu_item == 5: part_instance.saveData() elif menu_item == 6: part_instance.loadData() menu_item = display_menu() print("Exiting Program...") Answer: You are mixing two very different objects and tasks in your Parts class. On the one side there is the object of a Part and on the other hand there is a dictionary of parts which is your inventory system. 
You should separate the two: import sys from collections import namedtuple class Part(namedtuple("Part", "name part_number quantity")): def __str__(self): return ", ".join(self) class Parts(dict): def display (self): if not self: print('No Parts Found in List') return print() print('Name, Part Number, Quantity') for part in self.values(): print(part) print() def add(self, *args): try: name, part_number, quantity = args except ValueError: name = input("Enter Name of Part:\n ") part_number = input("Enter Part Number:\n ") quantity = input("Enter Quantity:\n ") self[name] = Part(name, part_number, quantity) def remove(self, part=""): if not part: part = input("Enter Part Name to Remove\n") try: del self[part] except KeyError: print("Part {} not found.".format(part)) def edit(self, part=""): if not part: part = input("Enter Part Name to Edit\n") try: new_name = input("Enter new part name\n") number = input("Enter new part number\n ") quantity = input("Enter new quantity\n ") self[part] = Part(new_name, number, quantity) except KeyError: print("No such Part exists: {}".format(part)) def save(self, filename=""): if not filename: filename = input("Filename to save: ") with open(filename, "wt") as out_file: for part in self.values(): out_file.write("{}\n".format(part)) print("File saved") def load(self, filename=""): if not filename: filename = input("Filename to load: ") try: with open(filename, "rt") as in_file: for line in in_file: if not line: break part, part_number, quantity = line.strip().split(",") self.add(part, part_number, quantity) except FileNotFoundError: print("File Not Found.") def menu(inventory): menu_list = [("Parts", inventory.display), ("Add Part", inventory.add), ("Remove Part", inventory.remove), ("Edit Part", inventory.edit), ("Save Part List", inventory.save), ("Load Part List", inventory.load), ("Exit", sys.exit)] while True: for i, (name, _) in enumerate(menu_list, 1): print("{}. 
{}".format(i, name)) try: user = int(input("Selection> ")) menu_list[user-1][1]() except (ValueError, IndexError): print("Selection Not Valid.") def main(): print("Welcome to Sticks&Stones Inventory System") inventory = Parts() while True: menu(inventory) if __name__ == '__main__': try: main() except (KeyboardInterrupt, SystemExit): print("Exiting Program...") Here I made two classes. I also made the inventory a sub-class of dict. This way you don't have to keep an external dictionary (and could even have two different inventories at the same time, maybe for different things?). All methods of the inventory take parameters if you want to, and ask for them if they are not supplied. A dict has dict.values which directly gives you a list (or iterable) of all values. The Part is just a data container, so I used collections.namedtuple here, and just added nicer printing by overriding the magic __str__ method, which is invoked when calling e.g. print(part).
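The namedtuple-plus-__str__ idea from the review, shown in isolation (the sample part values are made up):

```python
from collections import namedtuple

class Part(namedtuple("Part", "name part_number quantity")):
    def __str__(self):
        # A namedtuple is iterable, so join() sees its fields in order.
        return ", ".join(self)

bolt = Part("bolt", "B-100", "42")
print(bolt)              # bolt, B-100, 42  (print invokes __str__)
print(bolt.part_number)  # fields are still accessible by name: B-100
```

Because str(part) already produces the comma-separated form, the save method can just write "{}\n".format(part) per line, and load can reverse it with split(",").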
{ "domain": "codereview.stackexchange", "id": 23694, "tags": "python, python-3.x" }
Find the subset of k elements among n that maximizes the total distance
Question: Given a set $Q\subset \mathbb{N}^m $ of $n$ points, we want to find the subset $S_{max}\subset Q$ of $k$ elements that maximize the total distance between them, according to the $\ell^1$ norm. $$S_{max} = \arg \max_S\sum_{i,j \in S, i \ne j} d(x_i,x_j)$$ In my specific case, $Q\subset \{ 0, 1 \} ^m $, thus $d(\cdot,\cdot)$ is equal to the Hamming distance. Is there any efficient way to solve this problem? Is it possible to rewrite it in another simpler way? Answer: This problem smells like it might be NP-hard, though I have no proof. If you are looking for a proof of NP-hardness, my advice would be to go through Garey & Johnson (or some other list of NP-complete problems) and look for all problems that seem plausibly related: e.g., ones that involve the hypercube, bitvectors and the Hamming distance, clustering in general metric spaces, or total/average distance in some general metric space. Then, edit your question to list all of those known-NP-complete and plausibly-related problems in your question... and try to see if you can find a reduction from any of them to your problem. If you are less concerned about theoretical results and instead are looking to solve this in practice as efficiently as possible, some of the techniques in Maximize distance between k nodes in a graph can be adapted to your problem. In particular, I'll suggest two candidate approaches: if you want an exact solution, try using ILP; if you want an approximate solution, try FPF. You can formulate your problem as an instance of integer linear programming (ILP). Introduce variables $y_i$, where $y_i$ indicates whether the $i$th point $x_i$ is included in $S_{max}$, and $z_{i,j}$, with the intended meaning that $z_{i,j} = y_i \land y_j$. Constrain each of these to be zero-or-one integer variables. Add the constraints $\sum_i y_i \le k$ and $z_{i,j} \le y_i$ and $z_{i,j} \le y_j$ and $z_{i,j}=z_{j,i}$. 
Your goal is to maximize the objective function $$\sum_{i,j} d(x_i,x_j) z_{i,j}.$$ You can optionally add the extra constraints $y_i+y_j-1 \le z_{i,j}$ and/or $z_{i,j}+z_{j,k}-1 \le z_{i,k}$ and/or $\sum z_{i,j} \le k(k-1)$; this will increase the number of constraints, but might help the solver find a solution more quickly. Now give this ILP instance to an off-the-shelf ILP solver, and hope that it can find a solution efficiently. If you want a fast heuristic to find an approximate solution, you could try using FPF, as described here. You could also try a greedy algorithm: at the $i$th iteration, select the vertex $x_j$ that increases the total distance of all the vertices selected so far by as much as possible. Neither of these is guaranteed to find an optimal solution, and the solution they output probably won't be optimal, but if you're lucky, they might be not-too-much-worse than optimal.
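The greedy heuristic at the end of the answer can be sketched for the Hamming-distance case as follows (a heuristic only; as the answer notes, its output need not be optimal):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bitvectors (tuples of 0/1)."""
    return sum(x != y for x, y in zip(a, b))

def greedy_dispersion(points, k):
    """Start from the point with the largest total distance to all others,
    then repeatedly add the point that increases the total pairwise
    distance of the chosen set the most."""
    chosen = [max(points, key=lambda p: sum(hamming(p, q) for q in points))]
    remaining = [p for p in points if p != chosen[0]]
    while len(chosen) < k and remaining:
        best = max(remaining, key=lambda p: sum(hamming(p, q) for q in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Each iteration costs O(n·k) distance evaluations, so the whole run is polynomial, versus the exponential worst case of the exact ILP route.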
{ "domain": "cs.stackexchange", "id": 4895, "tags": "algorithms, optimization, combinatorics, np-hard" }
How to write the redox reaction of chlorate to chlorine and oxygen using half reactions?
Question: I know how to solve redox reactions, but this one is really confusing me since I can't really write half reactions for it. $$\ce{ClO3- -> Cl2 + O2}$$ The answer is apparently: $$\ce{2H+ + 2ClO3- -> 5/2 O2 + Cl2 + H2O}$$ This makes sense. I understand that 12 electrons would be lost from oxygen if it all went to $\ce{O2}$ but only 10 electrons are gained by the two chlorine(V)'s so you have to use a water to make the charge balance work out. I just don't see how you'd come to this conclusion on your own. Answer: This is tricky and one way to think about it that I can offer is to treat $\ce{ClO3-}$ as a combination of $\ce{Cl^5+}$ and $\ce{O^2-}$. Since the problem is that the starting material is undergoing disproportionation with different elements being reduced/oxidised, if you could split the starting material into the two different elements, then you could circumvent the issue. Obviously the bonding in the chlorate ion is not ionic but this procedure allows you to construct "half-equations" and then an overall equation. I would not suggest that you write this in an exam. The two half-reactions are then: $$\begin{align} \ce{Cl^5+ + 5e-} &\longrightarrow \ce{1/2Cl2} \tag{1} \\ \ce{O^2-} &\longrightarrow \ce{1/2 O2 + 2e-} \tag{2} \end{align}$$ $[(1) \times 2] + [(2) \times 5]$ gives $$\ce{2 Cl^5+ + 5O^2- -> Cl2 + 5/2 O2}$$ In order to "re-form" your chlorate, you need three oxide ions per Cl ion, so you need to add one more oxide to both sides: $$\ce{2 Cl^5+ + 6O^2- -> Cl2 + 5/2 O2 + O^2-}$$ Now you can get back your chlorate: $$\ce{2 ClO3- -> Cl2 + 5/2 O2 + O^2-}$$ Now it's highly unlikely that there are going to be free oxide ions hanging around, so we just protonate them by adding $\ce{2 H+}$ on both sides. 
The oxide ion simply becomes water: $$\ce{2 ClO3- + 2H+ -> Cl2 + 5/2 O2 + H2O}$$ I understand that 12 electrons would be lost from oxygen if it all went to $\ce{O2}$ but only 10 electrons are gained by the two chlorine (V)'s so you have to use a water to make the charge balance work out. Yeah, if all six oxygens became $\ce{O2}$ then they would lose 12 electrons. Therefore, one of the oxygens does not become $\ce{O2}$; it becomes $\ce{H2O}$ and remains as oxygen(-2). It is not about charge balance.
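The electron bookkeeping in this answer can be double-checked with a few lines of arithmetic (using the oxidation states assumed above: Cl goes +5 → 0, five of the six oxygens go −2 → 0, and one stays at −2 in water):

```python
# For 2 ClO3- -> Cl2 + 5/2 O2 + H2O:
cl_atoms = 2
o_atoms_to_o2 = 5          # the sixth oxygen ends up in H2O, still at -2

e_gained_by_cl = cl_atoms * (5 - 0)        # each Cl(+5) -> Cl(0) gains 5 e-
e_lost_by_o = o_atoms_to_o2 * (0 - (-2))   # each O(-2) -> O(0) loses 2 e-

print(e_gained_by_cl, e_lost_by_o)         # 10 10 -- the transfer balances

# Mass balance on oxygen: 2 ClO3- carry 6 O; 5 go to O2 and 1 to H2O.
assert 2 * 3 == o_atoms_to_o2 + 1
```

This confirms the point in the last paragraph: the sixth oxygen stays at −2 in water precisely so that electrons gained equal electrons lost.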
{ "domain": "chemistry.stackexchange", "id": 6767, "tags": "inorganic-chemistry, redox, halides" }
Evaluating infix expression with brackets
Question: My assignment was to write a function that computes the value of an expression written in infix notation. We assume that we are using the four classical arithmetic operations +, -, * and / and that each binary operation is enclosed in parentheses. My stack: class Stack: def __init__(self): self._arr = list() def push(self, item): self._arr.append(item) #return self def pop(self): assert not self.is_empty(), "pop() on empty stack" return self._arr.pop() def is_empty(self): return not bool(self._arr) def peek(self): assert not self.is_empty(), "peek() on empty stack" return self._arr[-1] def all_items(self): return self._arr def value_at_index(self, value): try: return self.items[value] except: return False def __len__(self): return len(self._arr) The code: def infix_brackets_eval(data): Stack = Stack() i = 0 ret = 0 while i < len(data): if data[i] != ' ': if data[i] not in '+-*/()': temp = 0 while data[i] != ' ' and data[i]!= ')': temp = temp * 10 + int(data[i]) i+=1 i-=1 elif data[i] == ')': B = Stack.peek() Stack.pop() x = Stack.peek() Stack.pop() A = Stack.peek() Stack.pop() if Stack.peek() == '(': Stack.pop() if x == '-': temp = A - B elif x == '+': temp = A + B elif x == '*': temp = A * B elif x == '/': temp = A / B else: temp = data[i] #print(temp) Stack.push(temp) i += 1 return Stack.peek() infix_brackets_eval("((12 * 3) + (2 + 3) * (1 * 3))") Output: 15 Apart from the code comments of course, could any of you point me to an example use of a tokenizer or parser? I am very curious about the application and mechanics of how to use it, in the case when the tokens are operands, tokenize also need multi-digit numbers. Unfortunately I do not know where to look. I tried to rely on various documentation but I can not translate it to this example. Answer: Python lists are stacks. I'd recommend removing the Stack class. You should split tokenization away from evaluation. Whilst the tokenization you need is quite simple, tokenization can get quite complicated. 
Not only will you shoot yourself in the foot, but the code is harder to read with tokenization and evaluation in the same function. Your variable names are not great. Lets use TDD to build the tokenizer: We'll handle only numbers. However only one character, and we'll convert to an int. def tokenize(text): for char in text: if char in "0123456789": yield int(char) >>> list(tokenize("123")) [1, 2, 3] We'll handle one number of any size. We'll build up a 'token' stack. We'll use a list to ensure \$O(1)\$ time when adding new characters. This is because Python's strings guaranteed a minimum \$O(n)\$ time when adding. def tokenize(text): token = [] for char in text: if char in "0123456789": token.append(char) if token: yield int("".join(token)) >>> list(tokenize("1 2 3")) [123] >>> list(tokenize("")) [] We'll handle spaces. We just need to copy the if token code. And reset token after yielding one. def tokenize(text): token = [] for char in text: if char in "0123456789": token.append(char) elif char in " ": if token: yield int("".join(token)) token = [] if token: yield int("".join(token)) >>> list(tokenize("1 2 3")) [1, 2, 3] We'll handle operators. Basically the same as spaces but we also yield the operator. def tokenize(text): token = [] for char in text: if char in "0123456789": token.append(char) elif char in " +-*/()": if token: yield int("".join(token)) token = [] if char != " ": yield char if token: yield int("".join(token)) >>> list(tokenize("1+ 2+3")) [1, "+", 2, "+", 3] Error on unknown values. You should error on malformed input. The hard part of programming is handling user input. 
def tokenize(text): token = [] for char in text: if char in "0123456789": token.append(char) elif char in " +-*/()": if token: yield int("".join(token)) token = [] if char != " ": yield char else: raise ValueError(f"unknown character {char!r}") if token: yield int("".join(token)) >>> list(tokenize("~")) ValueError: unknown character '~' Since we've handled the tokenization, all we need to do is use your if data[i] == ')' code. def evaluate_infix(tokens): stack = [] for token in tokens: if token == ")": lhs, op, rhs = stack[-3:] del stack[-3:] if stack[-1] == "(": del stack[-1] if op == '-': token = lhs - rhs elif op == '+': token = lhs + rhs elif op == '*': token = lhs * rhs elif op == '/': token = lhs / rhs stack.append(token) return stack[-1] >>> evaluate_infix(tokenize("((12 * 3) + (2 + 3) * (1 * 3))")) 15 The hard part of programming is handling user input. You have a bug: >>> ((12 * 3) + (2 + 3) * (1 * 3)) 51 The if stack[-1] == "(": check shouldn't be optional. A "(" should always be required. As such, your input is invalid but you've not noticed. def evaluate_infix(tokens): stack = [] for token in tokens: if token == ")": lb, lhs, op, rhs = stack[-4:] del stack[-4:] if lb != "(": raise ValueError("three tokens must be between parens") if op == '-': token = lhs - rhs elif op == '+': token = lhs + rhs elif op == '*': token = lhs * rhs elif op == '/': token = lhs / rhs stack.append(token) return stack[-1] >>> evaluate_infix(tokenize("((12 * 3) + ((2 + 3) * (1 * 3)))")) 51 >>> evaluate_infix(tokenize("((12 * 3) + (2 + 3) * (1 * 3))")) ValueError: three tokens must be between parens
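As an aside (my own alternative, not part of the answer above): the hand-rolled character loop can also be replaced by a regular expression, which keeps multi-digit numbers together in a single pass:

```python
import re

def tokenize(text):
    """Regex-based tokenizer: runs of digits become ints, the six
    punctuation tokens pass through, any other non-space character
    is rejected. Spaces are simply never matched, so they are skipped."""
    for tok in re.findall(r"\d+|[+\-*/()]|\S", text):
        if tok.isdigit():
            yield int(tok)
        elif tok in "+-*/()":
            yield tok
        else:
            raise ValueError(f"unknown character {tok!r}")
```

The final `\S` alternative exists only to catch bad input: anything not consumed by the first two alternatives reaches it and triggers the ValueError, mirroring the error handling built up step by step above.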
{ "domain": "codereview.stackexchange", "id": 42463, "tags": "python, python-3.x, homework, math-expression-eval" }
Is the double slit experiment valid in all inertial frames?
Question: In a double slit experiment done with particles having mass (say electrons), the results are inferred as being caused by probability waves. The wavelength of these waves depends on the velocity (momentum) of the particles (de Broglie wavelength = h/p). Now suppose the observer is in another frame such that the velocity of the observer's frame of reference is equal in magnitude and direction to the velocity of the particles; the velocity of the particles w.r.t. the observer will then be zero. Zero momentum means there should be no de Broglie waves. But the slits and the screen do have momenta. Does the observer still see the same pattern on the screen? If yes, then how? If not, then doesn't it violate the first postulate of relativity?
So you could try to rewrite everything like that, but it is still missing the essential fact of the Schrödinger equation for a universe with more than a single particle, which is: the wave function is not a physical field in 3d Euclidean space, it is a function from the 3n-dimensional configuration space of the entire system of n particles into the joint spin space of the entire system of n particles. The most studied example of how the Schrödinger equation actually works is the hydrogen atom treated as a system of two particles, a proton and an electron, ignoring all the spin. You can have a 6d configuration-space wave function $\psi(x_e,y_e,z_e,x_p,y_p,z_p,t)$ with the potential $V(x_e,y_e,z_e,x_p,y_p,z_p,t)=\frac{-ke^2}{\sqrt{(x_e-x_p)^2+(y_e-y_p)^2+(z_e-z_p)^2}}$, then switch coordinates to a relative position $(x_e-x_p,y_e-y_p,z_e-z_p)$ and a center of mass, and look for separable solutions. What you get is the usual single-particle Schrödinger equation for hydrogen (with a reduced mass) for the relative-position solution, and a free particle for the center-of-mass solution. That's the most commonly studied problem, and if you truly want to understand the Schrödinger equation you should study it and make sure you get it. The same ideas come up for every problem. For the screen plus electron plus barrier with slits there is a center of mass, and there is a relative coordinate of the electron relative to the center of mass. When you change frames, that only changes the free-particle motion of the center of mass; the relative coordinate is unaffected by your choice of frame. So the wave function is not a function in space, and when people talk about it as if it is, they are just talking about one particle's position relative to some other things, and since it is a relative coordinate, it isn't affected by changes in frame. So really, everything that happens when you change frames happens to a part you were ignoring.
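The two-body reduction described above has a small numerical payoff worth seeing: the reduced mass appearing in the relative-coordinate equation is barely different from the bare electron mass (the constants below are rounded textbook values):

```python
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg

# Separating the hydrogen two-body problem yields a one-body equation in
# the relative coordinate with the reduced mass:
mu = m_e * m_p / (m_e + m_p)

# mu is smaller than m_e by roughly 1 part in 1836 (the proton/electron
# mass ratio), which is why treating the proton as fixed is such a good
# first approximation.
print(mu / m_e)
```

The same separation applies in the slit experiment: the frame change only touches the free center-of-mass factor, never the relative-coordinate factor that produces the interference pattern.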
{ "domain": "physics.stackexchange", "id": 37164, "tags": "special-relativity" }