How does gradient descent update weights in a neural network?
Question: I'm currently trying to learn about backpropagation, and it's going well, but there's one thing that keeps me scratching my head and doesn't really seem to be answered in any of the videos or articles I'm looking at. I understand now that, based on my loss, the weights of my network are updated. But what I don't understand is how this happens. Let's say I have this exercise network with the following weights: W_1 = 1.2, W_2 = 0.4, W_3 = 1.0. Now I do some training, and let's say I have the loss 0.8. Now when I use my loss to update my weights, what happens specifically to the weights? Is something being added, subtracted, maybe multiplied? Thanks a lot. Answer: In short, it is added to the previous value of the weight. Here is the algorithm from Tom Mitchell's book; in your case it will be W_1 = W_1 + ΔW_1, W_2 = W_2 + ΔW_2, W_3 = W_3 + ΔW_3, where each ΔW_i comes from the gradient of the loss with respect to that weight.
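The update rule the answer is pointing at can be sketched in a few lines of Python. The learning rate and gradient values below are made-up illustrative numbers, not something computed from the question's loss of 0.8:

```python
# One plain gradient-descent step: each weight moves against its own
# gradient of the loss, scaled by a learning rate (all values illustrative).
learning_rate = 0.1
weights = [1.2, 0.4, 1.0]     # W_1, W_2, W_3 from the question
gradients = [0.5, -0.2, 0.3]  # dLoss/dW_i, as backpropagation would supply

# delta_i = -learning_rate * gradient_i is what gets "added" to each weight
weights = [w - learning_rate * g for w, g in zip(weights, gradients)]
print(weights)
```

Note that the delta is per-weight: a weight whose gradient is negative gets increased, one with a positive gradient gets decreased.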
{ "domain": "datascience.stackexchange", "id": 7516, "tags": "neural-network, gradient-descent, backpropagation" }
Water pressure vs temperature
Question: If I have a sealed enclosure full of water (constant volume) at 25 °C at atmospheric pressure, and I then heat the water to 50 °C, would the pressure in the sealed enclosure change? If the pressure has changed, how would I go about calculating the change? Answer: Yes, at constant density the pressure increases as the temperature does. For example, water sealed at atmospheric pressure at $4\,^{\circ}\mathrm{C}$ will have a density of approximately $1 \frac{\mathrm{g}}{\mathrm{cm}^3}$. If we increase the temperature to $30\,^{\circ}\mathrm{C}$ while maintaining the density (since the enclosure is sealed), the pressure will rise to roughly $100 \, \mathrm{bar}$. Find equations describing the rate of change here.
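A rough back-of-the-envelope check of the ~100 bar figure. The density and compressibility values below are assumptions taken from standard water-property tables, not from the answer itself:

```python
# At constant volume, the pressure rise is roughly the "frustrated" thermal
# expansion divided by the isothermal compressibility: dp ~ (delta_rho/rho) / kappa_T
rho_4C = 999.97    # kg/m^3, water density at 4 C (tabulated, approximate)
rho_30C = 995.65   # kg/m^3, water density at 30 C (tabulated, approximate)
kappa_T = 4.5e-10  # 1/Pa, isothermal compressibility of water (approximate)

strain = (rho_4C - rho_30C) / rho_30C  # fractional expansion the enclosure blocks
dp_bar = strain / kappa_T / 1e5        # pressure rise, converted from Pa to bar
print(dp_bar)
```

This lands within a few percent of the 100 bar quoted above; the exact value depends on how the expansion coefficient varies over the temperature interval.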
{ "domain": "physics.stackexchange", "id": 94692, "tags": "thermodynamics, pressure, temperature, water" }
Threads and logfiles
Question: The C# newbie is back :) I have another problem with my threads. Here is what I am trying to achieve: I am starting 5 threads which are performing the same task but on different URLs. So I am keeping a "masterLogFile.txt" to keep track of what URLs have already been visited. Each thread compares its own "thread1LogFile.txt" to the "masterLogFile.txt" before deciding whether to execute the task. My question is, is there any more efficient way to handle this? Currently each thread runs this piece of code before deciding if the URL is OK or not: using (FileStream fs = File.Open("masterLogFile.txt", FileMode.Open, FileAccess.Read, FileShare.Read)) { byte[] bff = new byte[1024]; UTF8Encoding temp = new UTF8Encoding(true); while (fs.Read(bff, 0, bff.Length) > 0) { if (temp.GetString(bff).Contains(variableWithUrlFrom_thread1LogFile.txt)) { found = true; } } fs.Close(); } Answer: I'd use a dictionary (or a HashSet if you don't need the ID information) for coordination. class UrlLog { private Dictionary<string, int> visited = new Dictionary<string, int>(); public bool HasUrlBeenVisited(string Url) { lock(visited) return visited.ContainsKey(Url); } public void SetUrlVisited(string Url, int threadId) { /* id not strictly necessary */ lock(visited) visited[Url] = threadId; } } Then pass one instance of this UrlLog to your threads, and they can use that. Also, I'm not sure you even need the locks for the checks, but it's better to be safe than sorry. :)
{ "domain": "codereview.stackexchange", "id": 1755, "tags": "c#, multithreading" }
Why equating forces give wrong answer?
Question: Question :- A block is attached to the free end of a spring of spring constant $50\ \mathrm{N/m}$. Initially the spring was at rest. A $3\ \mathrm N$ force was applied to the block until it came to rest again. Find the maximum displacement of the block, taking the initial displacement as $0$. My first try :- We know the spring force is $-kx$. So at rest the only horizontal forces acting on the block would be the spring force and the applied force of 3 N. $$\therefore -kx + 3N = 0$$ $$\implies x = \frac 3{50} = 0.06\ \mathrm m$$ Which is incorrect. My second try :- Let the work done by the spring force and the applied force be $W_s$ and $W_a$ respectively. $$ \begin{aligned} \Delta E_k &= W_s + W_a\\ &\implies0 = \displaystyle -\frac12 kx^2 + Fx\\ &\implies\displaystyle \frac12 kx = F\\ &\implies\displaystyle kx = 6\\ &\implies\displaystyle x = \frac{6}{50} = 0.12\ \mathrm m \end{aligned} $$ Which is correct. I am still not getting why my first try failed. What was my error? Answer: Your first try is considering a static system. So the result is true if you very, very slowly increase the force until you reach 3 N (with friction damping all oscillatory motion of the mass). So the real question is what is different in the problem you posted. Try thinking it through from the beginning: you switch on the force, and initially it is much greater than the counterforce of the spring. So the mass accelerates, building up kinetic energy. The fraction of the work which is transformed into kinetic energy decreases until you reach 0.06 m, as you calculated in your first try. From then on the force is decelerating the mass until it comes to rest. As you calculated, this is 0.12 m from its original position. This scenario is dynamic; the mass will now oscillate around the centre point.
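The two calculations above can be summarised numerically; this sketch only restates the question's algebra (static balance versus the work-energy theorem):

```python
k = 50.0  # N/m, spring constant
F = 3.0   # N, applied force

# First try (static balance): F = k x, the point where net force is zero.
# This is where the block moves *fastest*, not where it comes to rest.
x_static = F / k      # 0.06 m

# Second try (work-energy theorem): F x - (1/2) k x^2 = 0 at the turning point,
# where all the work done by F has gone into the spring.
x_max = 2 * F / k     # 0.12 m
print(x_static, x_max)
```

The maximum displacement is exactly twice the static-equilibrium displacement, which is generic for a constant force suddenly applied to a spring.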
{ "domain": "physics.stackexchange", "id": 36592, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics" }
How to interpret these different Fourier analysis of this audio signal?
Question: This is my first dive into DSP. I would like to familiarize myself with frequency analysis. I have two audio tracks which should be digitized at 16bit-44.1kHz and 24bit-192kHz (music, presented as a 24bit-192kHz sample) respectively. I wanted to identify the effect of the low-pass filter around the Nyquist frequency (22.05kHz and 96kHz respectively). Edit: I completely reworked the question. Software used: I basically estimated the power spectral density using Welch's method as implemented by scipy.signal.welch in the Scipy library of the Python programming language. Basically, I used a script equivalent to: import numpy as np from matplotlib import pyplot as plt from scipy import signal from waveio import readwav # Load data from one channel (#0) for each sample file wav192 = readwav("24b-192khz.wav")[:,0] wav44 = readwav("16b-44khz.wav")[:,0] # DoE: two chunk sizes and two window types chunks = [256, 4096] windows = ["hanning", "boxcar"] # boxcar is rectangular # Prepare a figure plt.figure() # Calculate density spectra and plot for N in chunks: for w in windows: f, Pxx44 = signal.welch(wav44, fs=44100, window=w, nperseg=N, nfft=2*N, scaling="density") plt.semilogy(f, Pxx44) plt.legend(["chunk=%d; window=%s"%(c, w) for c in chunks for w in windows]) plt.xlabel("Frequency (Hz)") plt.ylabel("Density (I$^2$/Hz)") The power spectral density of the 44.1kHz audio sample: Which is basically just as expected: The chunk size, i.e. the number of samples per FFT segment in the time domain, does not change the density a lot if a Hann window is used. The chunk size effect is clearly visible with the boxcar (rectangular) window. From what I understand, this is because of spectral leakage, which diminishes as the chunk size increases. Is that correct? The low-pass filter effect at the Nyquist frequency (22.05kHz). So far, so good. The power spectral density of the 192kHz audio sample: Good point: same behaviour in regard to the chunk size and the window. 
Is spectral leakage really that strong? That's pretty impressive. Oddities: what the heck is happening? Where is the low-pass filter near the Nyquist frequency? Why are the very high frequencies even increasing? Could that be related to the choice of the windowing function? My interpretation is that there is no low-pass filter visible because basically no audio system would go above 192 kHz, and generally the software/hardware creators are smart enough to apply a low-pass filter designed with regard to the actual output bandwidth of the audio system. As for the increasing signal above 57 kHz, I really can't explain it: the original audio sample is some classical music, and I wouldn't expect any instrument to generate louder sounds in that frequency range. Any idea? Could this be an example of upsampling? Answer: If you look at the rectangular window, the best its rejection gets is about 40 dB. So that behaviour, especially obvious in the bottom plot, is to be expected for the rectangular window. I don't know for sure if this explains everything, but look at the level between your peak signal and the high-frequency components: there is almost 60 dB of rejection there. I've always heard that a good rule of thumb is to get 60 dB of rejection from your filters. I know that's not the full 96 dB offered by 16-bit music, but I bet you'd be hard-pressed to actually hear the difference. Of course, there's the silliness of having music at that sample rate. Humans just can't hear anything above around 20 kHz, give or take. This article gives a good summary of the issues. It also brings up a good point: that sample rate has the potential to pick up harmonics and other high-frequency distortion caused by equipment and electronics, despite the fact that we can't hear it. Perhaps something like that is going on?
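The leakage interpretation can be checked on a synthetic signal. This sketch (sample rate and tone frequency are made up for illustration) compares how much energy the boxcar and Hann windows spread far away from a tone that does not sit exactly on an FFT bin:

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 123.4 * t)  # tone deliberately between FFT bins

leak = {}
for w in ["boxcar", "hann"]:
    f, Pxx = signal.welch(x, fs=fs, window=w, nperseg=256)
    # average leaked density far from the tone, relative to the peak
    leak[w] = Pxx[f > 400].mean() / Pxx.max()

# the rectangular window's -13 dB sidelobes leave a far higher leakage floor
print(leak["boxcar"] / leak["hann"])
```

The ratio comes out many orders of magnitude above 1, which is consistent with the answer's point: the rectangular window's limited rejection, not the audio content, dominates the apparent high-frequency floor.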
{ "domain": "dsp.stackexchange", "id": 2924, "tags": "audio, fourier-transform, frequency-spectrum" }
Local Search vs Classical Search
Question: I have some questions regarding local search / optimization as explained in chapter 4 of the book: http://aima.cs.berkeley.edu/ In classical search (Chap 3), the search starts from an initial node, and then continues based on strategies such as BFS, DFS, etc. What I am unsure of is the process of local search (Chap 4). Does the local search algorithm start from one node in the state space, check if the constraints are satisfied, and if yes, this is the goal? If not, move to neighbours? Is the entire state space considered goal states, even nodes that don't satisfy the constraints? In optimization, the search is conducted in a part of the search space where all constraints are met, but tries to find a better solution. In that case, what happens if the search algorithm moves to nodes that don't satisfy the constraints? Answer: Classical local search works as follows. We're trying to optimize some function under some constraints. We start with some feasible point (a point satisfying all constraints). At each step, we consider small changes to the current point which (1) keep it feasible, and (2) improve the objective function. If we find such a small change, we modify the point accordingly. Eventually, we reach a local optimum, and we hope that it's not too bad relative to the global optimum. A classical example is the simplex algorithm for linear programming. The algorithm starts with some feasible point (it's not immediately obvious how to find one; a trick is required). At each step, we try to modify the point by switching one tight constraint with another in a way that improves the objective function while keeping the point feasible. Eventually we reach a local optimum, which turns out to be a global optimum (in this particular case). The interior-point algorithms for linear programming work differently. They start at some point, and move in a direction that (1) makes the point more feasible, and (2) improves the objective function. 
In the end, you get close to a feasible local optimum, which turns out to be a global optimum. This is not local search, but it's an example of an algorithm which does not maintain a feasible solution. Non-oblivious local search is a variant on the theme of local search, in which instead of trying to optimize the actual objective function, you direct the local search using an auxiliary objective function. Sometimes this improves the quality of the local optimum. You can read all about it in a recent PhD thesis by Justin Ward.
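The feasible-point loop described at the start of the answer can be sketched generically; the toy objective and neighbour function below are invented purely for illustration:

```python
def local_search(start, neighbours, score):
    """Greedy hill climbing: repeatedly move to the best improving
    neighbour; stop at a local optimum (no neighbour scores higher)."""
    current = start
    while True:
        better = [n for n in neighbours(current) if score(n) > score(current)]
        if not better:
            return current  # local optimum reached
        current = max(better, key=score)

# toy example: maximise -(x - 3)^2 over the integers, stepping by +/- 1
best = local_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
print(best)
```

In a constrained problem, the `neighbours` function is where feasibility lives: it only generates moves that keep every constraint satisfied, which is exactly how local search avoids ever visiting infeasible nodes.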
{ "domain": "cs.stackexchange", "id": 1168, "tags": "artificial-intelligence, search-algorithms" }
Kinetic energy always time independent?! Where is my mistake?
Question: I have some problems understanding the Lagrangian and the Hamiltonian formalism. Those can be condensed in the following "derivation" of $\frac{\partial T}{\partial t} = 0$ from the equation $\frac{\partial H}{\partial t} = - \frac{\partial L}{\partial t}$. Since the kinetic energy might be time dependent (for example when our frame of reference accelerates), it seems that I missed something really important. Question: Where is my mistake, or where did I misunderstand the Lagrangian / Hamiltonian formalism? Derivation: One of the Hamiltonian equations is $\frac{\partial H}{\partial t} = - \frac{\partial L}{\partial t}$ (see section "Deriving Hamilton's equations" of the Wikipedia article "Hamiltonian mechanics"). With $H=T+V$ and $L=T-V$ we get ($T$ stands for kinetic energy and $V$ for potential energy): $$\begin{array}{rrl} & \frac{\partial H}{\partial t} & = - \frac{\partial L}{\partial t} \\ \iff & \frac{\partial (T+V)}{\partial t} & = - \frac{\partial (T-V)}{\partial t} \\ \iff & \frac{\partial T}{\partial t} + \frac{\partial V}{\partial t} & = - \frac{\partial T}{\partial t} + \frac{\partial V}{\partial t} \\ \iff & 2\frac{\partial T}{\partial t} & = 0 \\ \iff & \frac{\partial T}{\partial t} & = 0 \end{array}$$ I want to elaborate on the given answer by Qmechanic, since it took me some time to understand it. Extended Answer: We have to keep in mind that $H = H(p,q,t)$ and $L(\dot q, q, t)$ have different signatures. While the Hamiltonian depends on the generalized momentum $p$, the Lagrangian depends on the velocity $\dot q$. Therefore $\frac{\partial H}{\partial t}$ and $\frac{\partial L}{\partial t}$ are different partial derivatives. In $\frac{\partial H}{\partial t}$ the variables $p$ and $q$ are held constant, while in $\frac{\partial L}{\partial t}$ the variables $\dot q$ and $q$ are held constant. 
When we denote the first derivative by $\partial_t^{p,q}$ and the second by $\partial_t^{\dot q,q}$, we see where I made a mistake in the above derivation: $$\begin{array}{rrl} & \partial_t^{p,q} H & = - \partial_t^{\dot q,q} L \\ \iff & \partial_t^{p,q} (T+V) & = - \partial_t^{\dot q,q} (T-V) \\ \iff & \partial_t^{p,q} T + \partial_t^{p,q} V & = - \partial_t^{\dot q,q} T + \partial_t^{\dot q,q} V \\ \iff & \partial_t^{p,q} T + \partial_t^{\dot q,q} T & = \partial_t^{\dot q,q} V - \partial_t^{p,q} V\\ \end{array}$$ Since $V$ depends on neither $p$ nor $\dot q$, we have $\partial_t^{\dot q,q} V - \partial_t^{p,q} V = 0$: $$\partial_t^{p,q} T + \partial_t^{\dot q,q} T = 0$$ However, $\partial_t^{p,q} T$ does not have to be the same as $\partial_t^{\dot q,q} T$; they differ when the momentum-velocity relation is time dependent, i.e. $p = p(\dot q, t)$. An example is a launching rocket whose mass decreases with time. Here we have $T=\frac 12 m(t)\dot q^2$ and thus $p=m(t)\dot q$ (see the example in the answer by Qmechanic). Both partial derivatives are the same only when the following property is fulfilled: $$p \text{ is constant over time} \iff \dot q \text{ is constant over time}$$ Answer: In a nutshell, even if we assume the non-generic relations $$L(q,v,t)~=~T(v,t)~-~V(q,t)\quad\text{and}\quad H(q,p,t)~=~T(p,t)~+~V(q,t),$$ then OP's mistake is to be cavalier about the functional dependence of $T$, and in particular, its explicit time dependence. Perhaps a simple example is in order, cf. the above comment by jacob1729: $$\begin{align} L(q,v,t)~=~\frac{m(t)}{2}v^2 \quad &\Rightarrow\quad \frac{\partial L(q,v,t)}{\partial t} ~=~ \color{red}{+}\frac{m^{\prime}(t)}{m(t)}L(q,v,t)\cr\cr \updownarrow\text{identify}\qquad & \qquad\qquad\text{sum up to zero }\updownarrow\cr\cr H(q,p,t)~=~\frac{p^2}{2m(t)}\quad &\Rightarrow\quad \frac{\partial H(q,p,t)}{\partial t} ~=~ \color{red}{-}\frac{m^{\prime}(t)}{m(t)}H(q,p,t). \end{align}$$
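The time-dependent-mass example can be verified symbolically. This is only a check of the two displayed identities, done with sympy:

```python
import sympy as sp

t, v, p = sp.symbols("t v p")
m = sp.Function("m", positive=True)(t)  # time-dependent mass m(t)

L = m * v**2 / 2    # Lagrangian: t-derivative taken at fixed v
H = p**2 / (2 * m)  # Hamiltonian: t-derivative taken at fixed p

# dL/dt|_v = +(m'/m) L   and   dH/dt|_p = -(m'/m) H
check_L = sp.simplify(sp.diff(L, t) - sp.diff(m, t) / m * L)
check_H = sp.simplify(sp.diff(H, t) + sp.diff(m, t) / m * H)
print(check_L, check_H)
```

Both residuals simplify to zero: the two explicit time derivatives are equal in magnitude and opposite in sign, which is exactly the cancellation the question's derivation overlooked by treating them as the same derivative.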
{ "domain": "physics.stackexchange", "id": 59027, "tags": "newtonian-mechanics, energy, lagrangian-formalism, hamiltonian-formalism, hamiltonian" }
Send Data from DynamoDB to Lambda (C#) and to Azure Queue
Question: I am a beginner trying to send data from AWS DynamoDB to Azure Queues. Note that this code will be invoked 10,000 times and a lot more. Can you guys review it once? using System; using System.IO; using System.Text; using Newtonsoft.Json; using Amazon.Lambda.Core; using Amazon.Lambda.DynamoDBEvents; using Amazon.DynamoDBv2.Model; using Microsoft.Azure.ServiceBus; using System.Threading.Tasks; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))] namespace Lambda { public class Function { private static readonly JsonSerializer _jsonSerializer = new JsonSerializer(); public async Task FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context) { context.Logger.LogLine($"Beginning to process {dynamoEvent.Records.Count} records..."); foreach (var record in dynamoEvent.Records) { context.Logger.LogLine($"Event ID: {record.EventID}"); context.Logger.LogLine($"Event Name: {record.EventName}"); string streamRecordJson = SerializeStreamRecord(record.Dynamodb); await Send(streamRecordJson); context.Logger.LogLine($"DynamoDB Record:"); context.Logger.LogLine(streamRecordJson); } context.Logger.LogLine("Stream processing complete."); } private static async Task Send(string stream) { const string connectionString = "QUEUE END POINT"; string queueName = "QUEUE NAME"; ServiceBusConnectionStringBuilder svc = new ServiceBusConnectionStringBuilder(connectionString); ServiceBusConnection svc1 = new ServiceBusConnection(svc); var client = new QueueClient(svc1, queueName, ReceiveMode.PeekLock, RetryPolicy.Default); var message = new Message(Encoding.UTF8.GetBytes(stream)); await client.SendAsync(message); } private string SerializeStreamRecord(StreamRecord streamRecord) { using (var writer = new StringWriter()) { _jsonSerializer.Serialize(writer, streamRecord); return writer.ToString(); } } } } Answer: If the connection and queue name are not 
changing per record then there is no reason to be creating a new client for each record in the loop. Especially for the amount of times stated in the original post. Move that to the constructor of the class. public class Function { private static readonly JsonSerializer _jsonSerializer = new JsonSerializer(); private readonly IQueueClient client; private const string connectionString = "QUEUE END POINT"; private const string queueName = "QUEUE NAME"; public Function() { ServiceBusConnectionStringBuilder builder = new ServiceBusConnectionStringBuilder(connectionString); ServiceBusConnection connection = new ServiceBusConnection(builder); client = new QueueClient(connection, queueName, ReceiveMode.PeekLock, RetryPolicy.Default); } public async Task FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context) { var logger = context.Logger; logger.LogLine($"Beginning to process {dynamoEvent.Records.Count} records..."); foreach (var record in dynamoEvent.Records) { logger.LogLine($"Event ID: {record.EventID}"); logger.LogLine($"Event Name: {record.EventName}"); string streamRecordJson = SerializeStreamRecord(record.Dynamodb); await SendAsync(streamRecordJson); logger.LogLine($"DynamoDB Record:"); logger.LogLine(streamRecordJson); } context.Logger.LogLine("Stream processing complete."); } private Task SendAsync(string body) { var message = new Message(Encoding.UTF8.GetBytes(body)); return client.SendAsync(message); } private string SerializeStreamRecord(StreamRecord streamRecord) { using (var writer = new StringWriter()) { _jsonSerializer.Serialize(writer, streamRecord); return writer.ToString(); } } }
{ "domain": "codereview.stackexchange", "id": 37801, "tags": "c#, .net, amazon-web-services" }
Why is the gauge potential $A_{\mu}$ in the Lie algebra of the gauge group $G$?
Question: If we have a general gauge group whose action is $$ \Phi(x) \rightarrow g(x)\Phi(x), $$ with $g\in G$. Then introducing the gauge covariant derivative $$ D_{\mu}\Phi(x) = (\partial_{\mu}+A_{\mu})\Phi(x).$$ My notes state the gauge potential $A_{\mu} \in L(G)$, $L(G)$ being the Lie algebra of the group $G$. What's the connection between the Lie algebra of the group and the gauge potential? Answer: The gauge potential is an object that, when introduced in the covariant derivative, is intended to cancel the terms that spoil the linear transformation of the field under the gauge group. Every gauge transformation $g:\Sigma\to G$ (on a spacetime $\Sigma$) connected to the identity may be written as $\mathrm{e}^{\mathrm{i}\chi(x)}$ for some Lie algebra valued $\chi: \Sigma\to\mathfrak{g}$. The derivative of a transformed field is $$ \partial_\mu(g\phi) = \partial_\mu(g)\phi + g\partial_\mu\phi = g(g^{-1}(\partial_\mu g) + \partial_\mu)\phi$$ and it is the $g^{-1}(\partial_\mu g) = \mathrm{i}\,\partial_\mu\chi$ (when $\chi$ commutes with its derivative) that we want to cancel here by adding the gauge field so that $D_\mu(g\phi) = gD_\mu\phi$. Since $g^{-1}\partial_\mu g$ is Lie algebra valued, so must be the gauge field $A$ we add, and it has to transform as $$ A\overset{g(x)}{\mapsto} gAg^{-1} - (\mathrm{d} g)g^{-1}$$ to cancel the terms we want to cancel.
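In the abelian case the cancellation can be checked explicitly. The following sympy sketch (one spacetime dimension, $g = \mathrm{e}^{\mathrm{i}\chi}$, so the transformed potential is $A - \mathrm{i}\,\partial\chi$) is an illustration added here, not part of the original answer:

```python
import sympy as sp

x = sp.symbols("x", real=True)
chi = sp.Function("chi", real=True)(x)  # gauge parameter
phi = sp.Function("phi")(x)             # matter field
A = sp.Function("A")(x)                 # gauge potential

g = sp.exp(sp.I * chi)
A_new = A - sp.I * sp.diff(chi, x)  # abelian form of A -> g A g^-1 - (dg) g^-1

# covariant derivative transforms covariantly: D'(g phi) = g (D phi)
lhs = sp.diff(g * phi, x) + A_new * g * phi
rhs = g * (sp.diff(phi, x) + A * phi)
residual = sp.simplify(lhs - rhs)
print(residual)
```

The residual simplifies to zero: the $-\mathrm{i}\,\partial\chi$ shift of $A$ exactly eats the $(\partial g)\phi$ term, which is the whole point of the gauge field living in the Lie algebra.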
{ "domain": "physics.stackexchange", "id": 21743, "tags": "gauge-theory, group-theory, lie-algebra, differentiation, gauge-invariance" }
Lambda Calculus in Rewriting systems
Question: How do I implement the lambda calculus in a rewriting system? Rewriting systems are Turing complete, but I can't figure out how to do lambda calculus or functions with them. Answer: See also this question: "How is Lambda Calculus a specific type of Term Writing system?". Term rewriting, as introduced in (1), and described in e.g. (2), is a first-order formalism that cannot handle binding. Consider the $map$ function. $$ \begin{array}{lcl} map(f, []) &\rightarrow& [] \\ map(f, cons(x, l)) &\rightarrow& cons(f\ x, map(f, l)) \end{array} $$ The problem is that $f$ is used both as a variable and as an applied function, which is not permitted in a first-order term-rewriting system. This led to higher-order rewriting; see e.g. (3) for an overview. Another approach to unifying term rewriting with the $\lambda$-calculus is the rewriting calculus (4). Yet another approach to rewriting with binders -- arguably the most modern -- is nominal rewriting (5). D. E. Knuth, P. Bendix, Simple Word Problems in Universal Algebras. F. Baader, T. Nipkow, Term Rewriting and All That. T. Nipkow, C. Prehofer, Higher-Order Rewriting and Equational Reasoning. H. Cirstea, C. Kirchner, Introduction to the rewriting calculus. M. Fernandez, M. J. Gabbay, Nominal Rewriting.
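To see what the higher-order systems must provide, here is a minimal sketch of the beta rule as a rewrite step, with terms as nested tuples. Capture-avoiding renaming is deliberately omitted to keep it short:

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)
def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`
    (naive: assumes bound variables never clash with `name`)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def beta(term):
    """The beta rule (lambda x. b) a -> b[x := a]: a rewrite step whose
    right-hand side requires substitution under binders, which plain
    first-order term rewriting cannot express."""
    if term[0] == "app" and term[1][0] == "lam":
        return subst(term[1][2], term[1][1], term[2])
    return term

# (lambda x. x) y  ->  y
print(beta(("app", ("lam", "x", ("var", "x")), ("var", "y"))))
```

The substitution function is exactly the machinery that higher-order and nominal rewriting build into the formalism itself.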
{ "domain": "cs.stackexchange", "id": 8799, "tags": "turing-machines, lambda-calculus, turing-completeness, term-rewriting" }
Formation of tetrafluoroborate using VBT and hybridisation
Question: How can the formation of $\ce{BF_4^-}$ (boron is $\ce{sp^3}$ hybridized) be explained using VBT and hybridization? So far, I understood that one electron from the s orbital gets excited and jumps into a p orbital, and then the orbitals undergo $\ce{sp^3}$ hybridization, leaving us with four $\ce{sp^3}$ hybrid orbitals (three $\ce{sp^3}$ orbitals with one electron each and one $\ce{sp^3}$ hybridized orbital with no electron). What happens next? Answer: Perhaps my answer deals mostly with hybridisation and less with VBT. Your question seems to arise when you find that only 3 of them contain a lone electron each and the fourth is still empty, despite the hybridisation being $\ce{sp^3}$, which means four hybrid orbitals in total. Well, note that you asked about tetrafluoroborate ($\ce{BF_4^-}$), in which there is an excess charge on boron. What happens first is, as you pointed out, that an electron from the s orbital jumps to a p orbital. There is still one empty p orbital, which acquires an electron and thus puts a negative charge on B. Now what you have is four singly occupied $\ce{sp^3}$ orbitals of B that can each overlap with a lone electron in the p orbital of one of the four F atoms to give $\ce{BF_4^-}$. Note that the hybridisation to $\ce{sp^3}$ doesn't happen unless the remaining p orbital of boron gets at least one electron.
{ "domain": "chemistry.stackexchange", "id": 17485, "tags": "bond, hybridization, covalent-compounds, valence-bond-theory" }
Length of a curve in D dimensional euclidean space
Question: In a book I am reading on special relativity, the infinitesimal line element is defined as $dl^2=\delta_{ij}dx^idx^j$ (Einstein summation convention), where $\delta_{ij}$ is the Euclidean metric. Next, if we have some curve C between two points $P_1$ and $P_2$ in this space, then the length of the curve is given as $\Delta L = \int_{P_1}^{P_2}dl$ I am having trouble deriving the next statement, which I quote: A curve in D-dimensional Euclidean space can be described as a subspace of the D-dimensional space where the D co-ordinates $x^i$ are given by single-valued functions of some parameter $t$, in which case the length of the curve from $P_1=x(t_1)$ to $P_2=x(t_2)$ can be written $$\Delta L = \int_{t_1}^{t_2}\sqrt{\delta_{ij} \dot{x}^i \dot{x}^j} dt \qquad \mbox{where}\; \dot{x}^i\equiv \frac{dx^i}{dt}$$ Answer: You can derive this result by using the key property of differentials $$dx^i=\dot{x}^i dt,$$ so that $dl = \sqrt{\delta_{ij}\,dx^i dx^j} = \sqrt{\delta_{ij}\,\dot{x}^i \dot{x}^j}\;dt$. Note that $\Delta L$ is invariant under reparameterization $t'=f(t)$, as you can easily check (this is in fact the reason why you can write it as $\int dl$ without any reference to a parameterization). However, to calculate the length $\Delta L$ it is advisable to introduce some (arbitrary) parameterization. If you are interested in unique parameterizations: there also exists a unique parameterization with respect to arc length, which has some nice features.
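As a sanity check of the parametrised formula, one can compute the circumference of the unit circle symbolically (this example is an illustration added here, not from the original answer):

```python
import sympy as sp

t = sp.symbols("t")
x = (sp.cos(t), sp.sin(t))  # unit circle, parametrised by t in [0, 2*pi]

# Delta L = integral of sqrt(delta_ij xdot^i xdot^j) dt
integrand = sp.sqrt(sum(sp.diff(xi, t) ** 2 for xi in x))
length = sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi))
print(length)  # 2*pi
```

The integrand simplifies to 1 (since $\sin^2 t + \cos^2 t = 1$), so the length is $2\pi$ as expected; any other parametrisation of the same circle would give the same result, reflecting the reparameterization invariance mentioned in the answer.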
{ "domain": "physics.stackexchange", "id": 1188, "tags": "mathematical-physics, differential-geometry" }
Why can't we set the lattice spacing 'a' in lattice QCD?
Question: My question is to do with lattice QCD. In the lattice action there is a parameter, 'a', the lattice spacing in physical units. However, if we want to generate a configuration with a certain lattice spacing, we don't just set a = some number. We take the roundabout route of setting some dimensionless parameters (with some dependence on 'a') equal to some value and then extracting what 'a' must have been afterwards. My question is why do we have to do this? Why can't I just go into the code and set a = something and simulate? I know that for other computational problems it makes sense to only simulate dimensionless quantities, but the reasons for this are to do with stability and reducing the computational time. Answer: The reason is that changes in the scale a and changes in the coupling g can compensate for each other. Two simulations, one with a small lattice spacing a and gauge coupling g and another with an even smaller lattice spacing a' and coupling g', give the same results at long distances when g' is adjusted properly. This is the statement that the theory is renormalizable, so that you can take the limit as a goes to zero, with g changing correspondingly, and extract a good limit. Further, as the lattice spacing a' gets smaller, to keep the physics the same, g' gets weaker. This is the statement that QCD is asymptotically free (free at short distances). But the dependence of g on a for not-so-small lattices is annoying to calculate, because it is only simple in covariant regulators, and the log-running means it is never that small at any reasonable scale. So instead of fixing a and calculating what g should be, you use the known existence of the scaling continuum limit to fix your physics. So you just set your length scale to make a=1, and you adjust g to be approximately .5 (this makes ${g^2\over 2\pi} = .04$, and this is the perturbative parameter), and then you look to see if the gauge field randomizes over your box with this choice. 
If you make g too small, the gauge field will be nearly constant in the box; if you make g too big, the typical gauge field configuration will be random from point to point, with large SU(3) matrices for the plaquettes. You want to make sure that your g is in the sweet spot, so that the box is not too small to see the long-distance randomness, and the lattice not too coarse to see the interior structure of a hadron bag. Because the choices of g and a are intertwined in a nontrivial way, it is best to fix the simulation parameters by using the output masses. The dependence of g on a cannot be extracted from traditional dimensional analysis, because it is logarithmic in a. Classically, g is independent of a.
{ "domain": "physics.stackexchange", "id": 1605, "tags": "computational-physics, lattice-model" }
Why can't ROS find this file?
Question: #Error error loading <rosparam> tag: file does not exist [costmap_common_params.yaml] XML is <rosparam command="load" file="costmap_common_params.yaml" ns="global_costmap"/> The traceback for the exception was written to the log file I tried putting the files in the same folder as the launch file and that didn't work. Scanning around my hard drive I found the file located at ~/catkin_ws/src/rtabmap_ros/launch/azimut3/config/costmap_common_params.yaml What's going on here? Originally posted by jacksonkr_ on ROS Answers with karma: 396 on 2016-08-12 Post score: 0 Answer: The file parameter to rosparam needs an absolute file name (technically it's relative to the working directory for roslaunch, but you shouldn't count on that). You can give the file parameter an absolute path by using the $(find pkg) substitution to start with the absolute path to a package, and then use the path of the file within that package. Since you're looking for the config/costmap_common_params.yaml file within the azimut3 package, you can refer to that file relative to the azimut3 package: <rosparam command="load" file="$(find azimut3)/config/costmap_common_params.yaml" ns="global_costmap"/> for more substitutions and examples, have a look at the roslaunch XML syntax documentation Originally posted by ahendrix with karma: 47576 on 2016-08-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jacksonkr_ on 2016-08-13: ResourceNotFound: azimut3 so I found another copy of the file at ~/catkin_ws/install/share/rtabmap_ros/launch/config/costmap_common_params.yaml and changed the file reference to $(find rtabmap_ros) but ROS is looking in /home/ubuntu/catkin_ws/src/rtabmap_ros/launch/config/ instead. ?? Comment by Michael Johnson on 2016-08-15: If you want to use the install location ~/catkin_ws/install/ you should source that setup.bash file. i.e. 
source ~/catkin_ws/install/setup.bash Comment by ahendrix on 2016-08-17: Can you find the azimut3 package with rospack find azimut3? If you can't, you should figure out why and fix that. Perhaps there isn't a package.xml in your azimut3 package, or it doesn't have the right package name in it?
{ "domain": "robotics.stackexchange", "id": 25512, "tags": "navigation, rosparam, costmap" }
In an ant (or bee) colony, what is the very approximate ratio of new breeders to workers?
Question: For example, out of every 1000 eggs laid, X mature into drones and/or virgin queens. That seems impossibly precise, but it illustrates the kind of number I want well. I'll accept answers for any species and any number of species, even one, with any amount of precision or lack thereof, because right now I can't even feel confident saying that there are more workers or more breeders, though I obviously suspect more workers. I would also be ecstatic to have any live count just before the nuptial flight, i.e. for this colony in this study there were X workers, Y drones, and Z virgin queens just before the nuptial flight, or X workers and (Y+Z) breeders, or X% of the colony was breeders, or for this species on average X% are breeders just before the nuptial flight. Anything. Any one thing and I can accept it as an answer. I can find any number of studies that talk about the sex ratio between drones and virgin queens, so I know someone is counting. Maybe I'm not reading closely enough, but they always seem to slip away from giving all the numbers I need to figure this out for myself. Answer: So. This answer is specific to the western honeybee, Apis mellifera, as there are massive amounts of data on them; more, possibly, than on any other insect species. There has certainly been more data collected about them than about any other hymenopteran. At around the time of the nuptial flight, there may be as many as 60,000 workers in the hive, though the likely number is more like 15,000-20,000. There will be either one (virgin) or two (one mated, one virgin) queens (the old queen will stay with the hive, if she is alive). There may be as many as 400 drones from the original colony (though usually the number is less; around 150 is typical, and 10-50 of them will actually mate with the queen), and an equal number may join in the flight drawn from other colonies, especially in commercial beekeeping operations. 
Somewhere between 1000 - 6000 workers will take part in the nuptial flight with the virgin queen and the drones. This means that in honeybees, the nuptial flight contains 1 queen:~5000 workers:~150 drones. Data is drawn from many years working with bees and beekeepers. Opinions will vary somewhat depending on the beekeeper and the beekeeping methods. These are based, generally, on bees kept for clover and fruit pollination in the western USA.
{ "domain": "biology.stackexchange", "id": 8545, "tags": "entomology, ant" }
Constructing a circuit which performs the transformation $|x,y\rangle \to |x, x + y \bmod 4\rangle$
Question: When faced with exercises like these, I find it hard to know how to construct the circuits due to the amount of input one needs to account for. I have seen the solution provided here; however, I don't think I would have been able to solve this exercise myself. Does anyone have any tips on a systematic way to construct a circuit like this? I start out with a circuit which solves one specific input (for example for $x = |00\rangle, y = |01\rangle$), but after this I get stuck. I appreciate any help! Answer: Here are some strategies. Start from a classical circuit, and do a naive transformation into a quantum circuit (e.g. AND gate becomes Toffoli onto a clean ancilla). Find ways to uncompute intermediate values and other garbage. Then optimize optimize optimize until it's compact. It helps to know a lot of trivial circuit identities when optimizing. Decompose into simpler problems. For example, separate the two-bit addition into two controlled increments, then figure out how to do a controlled increment. Get inspiration from algebraic boolean expressions for the output values. For example, if you are incrementing a register $x_0, x_1, x_2, ..., x_k$ then qubit $i$ transitions from $x_i$ to $x_i \oplus \prod_{j<i} x_j$, and this form suggests a simple circuit that you can create. Brute force one output bit at a time. This only works for smaller circuits. Figure out every situation where each bit should be ON, and for each of those cases output a multi-controlled NOT targeting a clean ancilla to represent the output. Then do the same strategy in reverse to uncompute the input. Then optimize optimize optimize. Diagonalize the problem. Is there a simple unitary $U$ that maps your problem's eigenstates to the computational basis states? Apply it, then phase the appropriate states using many-controlled Z rotations, then un-apply $U$. In the case of addition, the operation that does this is the QFT and you won't even need the Z rotations to be controlled. 
Find a loop invariant that breaks the problem into constant-sized pieces. In the case of adders, the invariant is packaged up into the carry bit that propagates through ripple-carry adders. Can you find an equivalent of a carry bit for your problem? Pay attention to data dependencies. Start by getting the value of qubits with no dependents correct. Then remove those qubits from the dependency graph and iterate. Two-bit addition has a very simple dependency graph, and so this approach works well on it. If the dependency graph has cycles, the problem will be much harder. Apply generic problem solving (i.e. read How to Solve It). For example, get a foothold by dropping constraints. What if you were allowed to get the phases wrong, or to create junk? What if you only have to get some of the qubits correct, and are allowed to trash the others? What if you had more workspace? What if one of the input qubits was promised to be in the 0 state? The 1 state? The + state? What if you only had to do 1 bit addition, or 1-bit-register-into-2-bit-register addition?
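A quick sanity check before constructing any circuit: the target map must be a permutation of the computational basis states (i.e. reversible), or no ancilla-free unitary can implement it. The sketch below (plain Python, no quantum framework assumed) also verifies the "two controlled increments" decomposition mentioned in the answer:

```python
# Target: |x, y> -> |x, x + y mod 4>, with x and y two-bit registers
def target(x, y):
    return (x, (x + y) % 4)

mapping = {(x, y): target(x, y) for x in range(4) for y in range(4)}

# A valid ancilla-free quantum operation must permute the 16 basis states
assert len(set(mapping.values())) == 16

# Decomposition from the answer: adding x to y is a controlled
# increment-by-2 (controlled on the high bit of x) followed by a
# controlled increment-by-1 (controlled on the low bit of x)
def via_increments(x, y):
    x1, x0 = (x >> 1) & 1, x & 1
    y = (y + 2 * x1) % 4
    y = (y + x0) % 4
    return (x, y)

assert all(via_increments(x, y) == target(x, y)
           for x in range(4) for y in range(4))
```

Each modular increment is then itself a small reversible circuit (roughly a controlled flip of the high target bit conditioned on the low one, plus a controlled flip of the low bit), which is where the optimization step begins.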
{ "domain": "quantumcomputing.stackexchange", "id": 1238, "tags": "quantum-algorithms, circuit-construction" }
D-Threose: reduction and optical activity
Question: I've bumped into this exercise: Judging by the mechanism of reaction between D-threose and $\ce{NaBH4}$, do you think the final product will be optically active? To me it looks like a reduction, since $\ce{NaBH4}$ is a mild reducing agent, but then why is the answer "No, the product is not optically active"? Is it because of the presence of anomers, or are there other reasons? Answer: When you reduce the aldehyde (from the open-chain threose, since the cycle cannot be reduced as it has only acetal-like functionality) you introduce a symmetry element (both ends of the molecule become $\ce{CH2OH}$). The presence of a symmetry element means the reduced molecule is not chiral, but meso.
{ "domain": "chemistry.stackexchange", "id": 8484, "tags": "organic-chemistry, carbohydrates, optical-properties" }
How to work with different Encoding for Foreign Languages
Question: I've got a Word Embedding File called model.txt. This contains 100 Dimensional vectors for over a million French words. These words contain accented characters such as é, â, î or ô. Let me explain my problem with the following example: Consider these two words and their respective vectors, both of which are taken from model.txt: etait -0.100460 -0.127720 ... était 0.094601 -0.266495 ... Both words signify the same meaning but the former is without the accents while the later has accents. Now I'm trying to load this word embedding using the gensim.models.KeyedVectors in the following way: model = KeyedVectors.load_word2vec_format(open(model_location, 'r', encoding='utf8'), binary=False) word_vectors = model.wv To which I get the following error: --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-82-e17c33c552da> in <module> 10 model = KeyedVectors.load_word2vec_format(open(model_location, 'r', 11 encoding='utf8'), ---> 12 binary=False) 13 14 word_vectors = model.wv D:\Anaconda\lib\site-packages\gensim\models\keyedvectors.py in load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype) 1547 return _load_word2vec_format( 1548 cls, fname, fvocab=fvocab, binary=binary, encoding=encoding, unicode_errors=unicode_errors, -> 1549 limit=limit, datatype=datatype) 1550 1551 @classmethod D:\Anaconda\lib\site-packages\gensim\models\utils_any2vec.py in _load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype, binary_chunk_size) 286 vocab_size, vector_size, datatype, unicode_errors, binary_chunk_size) 287 else: --> 288 _word2vec_read_text(fin, result, counts, vocab_size, vector_size, datatype, unicode_errors, encoding) 289 if result.vectors.shape[0] != len(result.vocab): 290 logger.info( D:\Anaconda\lib\site-packages\gensim\models\utils_any2vec.py in _word2vec_read_text(fin, result, counts, vocab_size, 
vector_size, datatype, unicode_errors, encoding) 213 def _word2vec_read_text(fin, result, counts, vocab_size, vector_size, datatype, unicode_errors, encoding): 214 for line_no in range(vocab_size): --> 215 line = fin.readline() 216 if line == b'': 217 raise EOFError("unexpected end of input; is count incorrect or file otherwise damaged?") D:\Anaconda\lib\codecs.py in decode(self, input, final) 320 # decode input (taking the buffer into account) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call 324 self.buffer = data[consumed:] UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 7110-7111: invalid continuation byte which I thought made sense if my file was encoded in a different format. However, using git I tried checking the encoding of the file using file * and got: model.txt: UTF-8 Unicode text, with very long lines Now, if I try to write the above code and have the encoding set to latin1, there isn't any problem to load this document but at the cost of not being able to access any of the words which contains an accent. Essentially throwing an out-of-vocab error upon executing: word_vectors.word_vec('était') How am I supposed to approach the problem? I've also got the .bin file of the model, should I try to use that to load my words and their corresponding vectors? Answer: Nevermind, the solution was trivial. Since I had the .bin file I could just open it in binary form. If somebody doesn't really have the .bin file, they could consider converting the .txt file to .bin and solve further.
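The failure mode with encoding='latin1' can be reproduced in isolation. Latin-1 maps every byte to some character, so the load never raises, but each accented character in a UTF-8 file decodes to two mojibake characters, and the vocabulary key no longer matches the literal you type. A minimal sketch (standard library only):

```python
word = 'était'
utf8_bytes = word.encode('utf-8')      # the bytes actually stored in the file

mangled = utf8_bytes.decode('latin1')  # never raises, silently wrong
assert mangled == 'Ã©tait'             # this is the key the vocab would hold
assert mangled != word                 # hence the out-of-vocab error

# Round-tripping recovers the original, one way to repair already-mangled keys
assert mangled.encode('latin1').decode('utf-8') == word
```

This is also why loading from the .bin file in binary mode sidesteps the problem entirely: no text decoding of the vectors takes place.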
{ "domain": "datascience.stackexchange", "id": 7833, "tags": "nlp, word-embeddings, encoding, gensim" }
Random Forest Model Train, Save and Predict Later vs Train and Predict Right Away - Different Results
Question: I tested two pieces of code and they delivered different results, which was quite unexpected. First piece of code is supposed to train models in a k-fold manner, preserve each one of these fitted models and then validate them later on same or different dataset: models = dict() # train on Dataset 1 for component in components: print(component) # fetch X # fetch y kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=1) model = RandomForestClassifier(random_state=11) f1_scores = [[], []] models[component] = [] # enumerate the splits and summarize the distributions for train_idx, test_idx in kfold.split(X, y): # select rows X_full_train, X_full_test = X.iloc[train_idx], X.iloc[test_idx] y_train, y_test = y.iloc[train_idx], y.iloc[test_idx] # summarize train and test composition model.fit(X_full_train, y_train) models[component].append(model) print("Dataset 1") # evaluate on Dataset 1 samples print() for component in components: print(component) # fetch X # fetch y kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=1) # enumerate the splits and summarize the distributions predictions = [] y_tests = [] for train_idx, test_idx in kfold.split(X, y): model = models[component].pop(0) # select rows X_full_train, X_full_test = X.iloc[train_idx], X.iloc[test_idx] y_train, y_test = y.iloc[train_idx], y.iloc[test_idx] # summarize train and test composition prediction = model.predict(X_full_test) predictions.extend(prediction) y_tests.extend(y_test) fig, (ax1,ax2) = plt.subplots(1,2, figsize=(9,2)) clf_report = classification_report(y_tests, predictions, output_dict=True) sns.heatmap(pd.DataFrame(clf_report).iloc[:-1, :-3].T, annot=True, ax=ax1) ConfusionMatrixDisplay.from_predictions(y_tests, predictions, xticks_rotation=45, ax=ax2) plt.show() Second piece of code is doing basically the same thing as the one above (in case the validation dataset is the same one as training dataset). 
So, I perform k-fold training and testing in one of the identically split data (because of random_state): print("Dataset 1") # train and evaluate on Dataset 1 samples print() for component in components: print(component) # fetch X # fetch Y kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=1) model = RandomForestClassifier(random_state=11) # enumerate the splits and summarize the distributions predictions = [] y_tests = [] for train_idx, test_idx in kfold.split(X, y): # select rows X_full_train, X_full_test = X.iloc[train_idx], X.iloc[test_idx] y_train, y_test = y.iloc[train_idx], y.iloc[test_idx] # summarize train and test composition model.fit(X_full_train, y_train) prediction = model.predict(X_full_test) predictions.extend(prediction) y_tests.extend(y_test) fig, (ax1,ax2) = plt.subplots(1,2, figsize=(9,2)) clf_report = classification_report(y_tests, predictions, output_dict=True) sns.heatmap(pd.DataFrame(clf_report).iloc[:-1, :-3].T, annot=True, ax=ax1) ConfusionMatrixDisplay.from_predictions(y_tests, predictions, xticks_rotation=45, ax=ax2) plt.show() As you can see, these results look less optimistic as opposed to the first ones. What wonders me, is that they look different even though I fed them with same random_state integer and I do not quite understand why is that so? I would be glad if someone could explain this to me. Thanks in forward! Answer: I expected scikit to allocate completely new memory space for corresponding model during fit() call, which does not happen to be the case. So in the first case by calling models[component].append(model) I tend to save the address of model rather than the deep copy of the model itself. Later on, this model gets overwritten by the next one and so on. Eventually, I end up with a list of same address pointing to the last fitted model. 
An easy solution is to move the model creation inside the loop, or to create a deep copy manually using copy utilities: for train_idx, test_idx in kfold.split(X, y): model = RandomForestClassifier(random_state=11) ...
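The aliasing can be demonstrated without scikit-learn at all; any object whose fit() mutates state in place shows the same behaviour. A toy stand-in for the estimator:

```python
class ToyModel:
    def fit(self, data):      # mimics the in-place semantics of sklearn's fit()
        self.data = data
        return self

models = []
model = ToyModel()            # created once, outside the loop (the bug)
for fold in ([1], [2], [3]):
    model.fit(fold)
    models.append(model)      # stores a reference, not a snapshot

assert all(m is models[0] for m in models)           # one object, three aliases
assert [m.data for m in models] == [[3], [3], [3]]   # all reflect the last fit

# Fix: construct (or copy.deepcopy) a fresh model inside the loop
fixed = [ToyModel().fit(fold) for fold in ([1], [2], [3])]
assert [m.data for m in fixed] == [[1], [2], [3]]
```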
{ "domain": "datascience.stackexchange", "id": 10459, "tags": "classification, random-forest, training, cross-validation, prediction" }
Does it give sense to speak about field distribution of a single photon?
Question: A point source of light has radial symmetry. If the source gets attenuated so that only a single photon is leaving each hour, can I still argue that the field of the single photon is radial but the photon is detected at an arbitrary (random) point on the sphere, as is the case for the wave function of a material particle? In this case there could be interference caused by spatially very distant interfering objects (e.g. gravitation of stars, ...) Or does the emitted photon have a narrow radiation lobe from the beginning? Answer: A single photon can be emitted with an outgoing "spherical" wavefront. I put "spherical" in quotes because most emission processes will produce a photon with a wavefunction that may have less symmetry but that is still significant in practically all directions, in close analogy with classical EM radiation. (This close analogy is not a coincidence, as illustrated quantitatively in another post.) A photon is something that can be counted, not necessarily something that is localized or traveling in a narrow direction. However, if a source located in the middle of a large spherical cavity emits a single "spherically"-symmetric photon, and if the spherical wall of the cavity is lined with localized photon-detectors, then only one of those detectors will register the photon. This is a property of the measurement, and it does not imply that the photon had any narrow direction prior to the measurement.
{ "domain": "physics.stackexchange", "id": 55058, "tags": "electromagnetism, visible-light, photons, quantum-optics" }
Why it is not $O(m)$ but $O(\log m)$?
Question: I am reading the lecture notes and have a question. I am trying to understand the beginning of Section 3 on page 2. Problem: Given an input stream $\sigma$, compute (or approximate) its length $m$. Naive solution: $O(\log m)$ bits, exact solution. I don't understand why it is not $O(m)$ bits but $O(\log m)$ bits. Any help would be greatly appreciated. Answer: Here $O(\log m)$ denotes the space complexity of the algorithm. We can represent a natural number $m$ in $O(\log m)$ space trivially. For example, when we express numbers in the base-2 system (binary), $m$ is expressed in $\lfloor \log_2 m \rfloor + 1$ bits. (e.g., 5 is 101 in binary notation: $\lfloor\log_2 5\rfloor + 1 = 3$ digits.) As a side note, $m$ is expressed in $O(m)$ space if we use the base-1 system. (e.g., 5 is 11111 in unary notation: 5 digits.)
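The size gap is easy to verify with Python integers, whose bit_length() returns exactly the number of binary digits:

```python
import math

m = 1_000_000
binary_digits = m.bit_length()   # equals floor(log2 m) + 1 for m >= 1
unary_digits = m                 # one symbol per unit in base-1

assert binary_digits == math.floor(math.log2(m)) + 1 == 20
assert len(bin(m)) - 2 == binary_digits          # bin() adds a '0b' prefix
assert unary_digits // binary_digits == 50_000   # the exponential gap
```

A counter holding any value up to $m$ therefore needs only $O(\log m)$ bits of state.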
{ "domain": "cs.stackexchange", "id": 11724, "tags": "complexity-theory, space-complexity, counting" }
PCL SACSegmentation - setAxis and setModelType has no effect in Output
Question: I want to estimate the ground plane present in the pointcloud and remove it. I used the plane model segmentation tutorial for that. Instead of removing the ground plane it removes the vertical planes(wall) in the pointcloud. I changed the axis perpendicular to which the plane is estimated but get the same result.. This is the code :- Eigen::Vector3f axis = Eigen::Vector3f(0.0,1.0,0.0); pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients); pcl::PointIndices::Ptr inliers (new pcl::PointIndices); // Create the segmentation object pcl::SACSegmentation<pcl::PointXYZI> seg; seg.setAxis(axis); seg.setOptimizeCoefficients (true); seg.setModelType (pcl:: SACMODEL_PLANE); seg.setMethodType (pcl::SAC_RANSAC); seg.setDistanceThreshold (0.5); seg.setInputCloud (cloud); seg.segment (*inliers, *coefficients); When I input a stream of pointclouds through ROS, I get Model coefficients as :- Model coefficients: 0.980676 0.00643671 0.195532 0.127706 Model coefficients: 0.0974007 -0.00814183 0.995212 2.64382 Model coefficients: 0.0235566 -0.997725 -0.0631705 4.7102 Model coefficients: 0.0404465 -0.999113 -0.0117141 4.65797 ie sometimes it removes ground plane and at some other instant it removes walls... the coordinate system in the pointcloud is x axis -> forward y axis -> right z axis -> up Now since ground plane is along x-z plane perpendicular to y axis I set the Axis as (0.0,1.0,0.0) so that according to PCL api ground_plane perpendicular to y axis be removed... but as i said only at some instants it is removed and at other instants vetical planes (wall) gets removed... I tried all possibilities of (1,0,0) ,(0,1,0),(0,0,1) but it has NO EFFECT in the output result...The same result I mentioned above is obtained... Also changed ModelType to SACMODEL_PERPENDICULAR_PLANE , SACMODEL_PARALLEL_PLANE ,still no desired result Please pour in your suggestions where I am going wrong.. 
Originally posted by KarthikMurugan on ROS Answers with karma: 15 on 2013-04-29 Post score: 1 Answer: I think you need to also set the eps_angle parameter. If you check the source code for sac_model_parallel_plane.hpp, you'll see that the axis is only used in the isModelValid() function. And, even then, only when eps_angle_ > 0. Looking in sac_model_parallel_plane.h, we see that eps_angle_ is initialized to 0. So, unless you explicitly set this parameter, the model will effectively ignore the parallel-to-axis constraint. Try adding this: seg.setEpsAngle( 30.0f * (M_PI/180.0f) ); Originally posted by Jeremy Zoss with karma: 4976 on 2013-04-30 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by KarthikMurugan on 2013-04-30: Thank you Jeremy.. So is the angle parameter signify angle tolerance of plane with the axis or its normal with the axis??? (as it is in this : http://docs.pointclouds.org/trunk/classpcl_1_1_s_a_c_segmentation.html ) Comment by Jeremy Zoss on 2013-04-30: Read the PCL documentation for setEpsAngle() at the link you provided. It is: "maximum allowed difference between the model NORMAL and the given axis in radians". So, the "axis" parameter is the expected plane normal vector, and "epsAngle" is the allowable variation for a plane to count as "valid" Comment by KarthikMurugan on 2013-05-01: Thank u Jeremy.... tat worked!!! now only the ground plane gets removed... but Still at some instants nothing gets removed... do you know wat could be the reason behind it... I tried values for epsAngle from 20 to 45 ....any rectification to be made ?? Comment by KarthikMurugan on 2013-05-01: Also what is the difference between SACMODEL_PERPENDICULAR_PLANE and SACMODEL_PARALLEL_PLANE... I get the same result for both!!! Comment by Jeremy Zoss on 2013-05-01: Your questions are fairly specific to PCL, and are not really ROS-related. You should try addressing these questions to the PCL user's list: pcl-users@pointclouds.org. 
Comment by dmngu9 on 2015-04-22: hi i have the same problem. In my pointcloud, the wall is a dominant plane, everytime i tried to extract the ground (set axis method) it still gives me the wall. How did you solve your problem Comment by daviddoria on 2015-12-02: I also have the same problem. It should return a plane that is within the 'epsAngle' of the specified 'axis' even if one doesn't fit well, right? It seems like what is happening is that there are 0 inliers in the model that is found (which should be an error condition?).
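The check that setEpsAngle turns on can be sketched outside PCL (shown here in Python rather than C++): a candidate plane is accepted only when the angle between its normal and the requested axis is within the tolerance. Using the model coefficients reported in the question, whose first three components are the plane normal:

```python
import math

def normal_within_eps(normal, axis, eps_angle_rad):
    # angle between the plane normal and the target axis (sign-insensitive)
    dot = abs(sum(n * a for n, a in zip(normal, axis)))
    norm = (math.sqrt(sum(v * v for v in normal))
            * math.sqrt(sum(v * v for v in axis)))
    return math.acos(min(1.0, dot / norm)) <= eps_angle_rad

axis = (0.0, 1.0, 0.0)          # ground normal: y points up/down in this frame
eps = math.radians(30.0)

# ground-plane fits from the question pass the test ...
assert normal_within_eps((0.0235566, -0.997725, -0.0631705), axis, eps)
# ... while wall fits (normal roughly along x or z) are rejected
assert not normal_within_eps((0.980676, 0.00643671, 0.195532), axis, eps)
assert not normal_within_eps((0.0974007, -0.00814183, 0.995212), axis, eps)
```

With eps_angle left at its default of 0, this test is skipped entirely, which is why setAxis alone appeared to have no effect.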
{ "domain": "robotics.stackexchange", "id": 14003, "tags": "ros, pcl, ros-fuerte" }
What frequency is the scratching of finger nails on a blackboard?
Question: This is the frequency/intensity that sets my teeth on edge. Does anybody know what frequency (roughly) it is? I am guessing it is near the top of normal human hearing, 20kHZ, but I'm not sure if that's why it affects me. I am sure the same frequency is played on some of the music I listen to, but somehow, it does not make me wince. There is a related question here, with no answer Scratching on a Blackboard, but I just want a frequency value. Answer: From http://www.livescience.com/16967-fingernails-chalkboard-painful.html: Interestingly, the most painful frequencies were not the highest or lowest, but instead were those that were between 2,000 and 4,000 Hz. The human ear is most sensitive to sounds that fall in this frequency range, said Michael Oehler, professor of media and music management at the University of Cologne in Germany, who was one of the researchers in the study. No one knows all of the reasons why that sound is so painful to listen to, but some theorize that we evolved ear canals to amplify human speech as much as possible, and that sounds like this happen to have large portions of their energy in that frequency band.
{ "domain": "physics.stackexchange", "id": 32608, "tags": "energy, acoustics, friction, resonance, perception" }
Generalization of independent set
Question: I know the definition of the independent set problem in graph theory. An independent set cannot contain any two adjacent vertices. How about if you allow no more than $k$ pairs of adjacent vertices? Does this more general problem have a name? Are there techniques for solving it? In particular, are there any techniques for solving it with linear programming? Answer: Not exactly what you're looking for, but Dinur and Safra, in their celebrated paper on the hardness of vertex cover, prove that the following promise problem is NP-hard for every fixed $r,\epsilon > 0$ (using the PCP theorem and Raz's parallel repetition theorem). Instance: A graph $G$ whose vertex set is composed of $m$ sets $V_1,\ldots,V_m$ of size $r$, each of them forming an $r$-clique. Problem: Distinguish between the following two cases: YES case: $G$ has an independent set of size $m$. NO case: Every set $A \subseteq V$ containing more than $\epsilon m$ vertices contains a clique of size $h = \lfloor \epsilon r^{1/c} \rfloor$ (where $c$ is some universal constant). More explicitly, for any NP language $L$ there is a polytime $f$ mapping instances of $L$ to instances of this promise problem, in such a way that if $x \in L$ then $f(x)$ is a YES instance, and if $x \notin L$ then $f(x)$ is a NO instance.
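For small instances, the relaxed problem as stated (a largest vertex set inducing at most $k$ edges) can be brute-forced, which is useful for validating any LP formulation against ground truth. A sketch:

```python
from itertools import combinations

def induced_edges(subset, edges):
    # number of adjacent pairs inside the chosen vertex set
    s = set(subset)
    return sum(1 for u, v in edges if u in s and v in s)

def max_k_relaxed_independent_set(n, edges, k):
    # largest subset of {0..n-1} containing at most k adjacent pairs
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if induced_edges(subset, edges) <= k:
                return subset
    return ()

triangle = [(0, 1), (1, 2), (0, 2)]
assert len(max_k_relaxed_independent_set(3, triangle, 0)) == 1  # ordinary IS
assert len(max_k_relaxed_independent_set(3, triangle, 1)) == 2  # one pair allowed
assert len(max_k_relaxed_independent_set(3, triangle, 3)) == 3  # everything fits
```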
{ "domain": "cstheory.stackexchange", "id": 2028, "tags": "ds.algorithms, graph-theory, co.combinatorics, linear-programming, independence" }
Black hole magnification factor
Question: Given a black hole of mass $M$ which has a distance $D_1$ to Earth and $D_2$ to a galaxy being magnified, what is the magnification factor? And a follow-up question: Does it magnify across the electromagnetic spectrum or only the visible light? Answer: I'd suggest you take a look at Narayan and Bartelmann (1996). They run through all the math to answer exactly this question. I'll add here the punchline from their paper for posterity. Shown below is Figure 7 from their paper. It shows the scenario of looking straight on at the gravitational lens of mass $M$ (a black hole in your scenario). The source (a background galaxy in your scenario) is offset from the line of sight of the black hole by some angle. This produces two images of the source, namely $I_+$ and $I_-$ that we would observe here on Earth. The other useful concept here is the Einstein Radius denoted by the angle $\theta_E$. The image that appears outside the Einstein Radius involves a positive magnification (that is, that image is brighter than the source). The image inside the Einstein Radius involves a negative magnification (that is, that image is dimmer than the source). The Einstein Radius itself is a function of the geometry of the system and the mass of the lens. $$\theta_E = \left(\frac{4GM}{c^2}\frac{D_{ds}}{D_sD_d}\right)^{1/2}$$ In this equation $D_{d}$ is the distance between the observer and the lens (aka the "deflector"), $D_s$ is the distance between the observer and the source, and $D_{ds}$ is the distance between the lens and the source. To jump to the punchline, the total magnification of the source is given by the area of the image over the area of the source. Since there are two images, we have to add the flux from both to get the total magnification. The end result is that the magnification, $\mu$, is given by $$\mu = \frac{u^2+2}{u\sqrt{u^2+4}}$$ where $u$ is the angular separation of the source from the lens, in units of the Einstein Radius. 
As a specific example, when the source lies exactly on the Einstein Radius, that is, its angle of separation from the lens equals $\theta_E$, then $u = 1$ and the magnification of the source is $\mu = 1.34$. In other words, the source image (or rather, images) is 1.34 times brighter. Does it magnify across the electromagnetic spectrum or only the visible light? Yes, the magnification is independent of wavelength. Note that the above scenario assumes certain simplifications are true. The primary one is the "thin screen approximation". This basically assumes that the lensing happens all at once with a single deflection. In reality there would be a continuous deflection as the light travels from the source to the detector, but that's generally barely distinguishable from the thin screen approximation, so it's useful to use. In addition, once the lens becomes extended, things get more complicated. You restricted your case to a "point lens" by using a black hole, so the math was much simpler.
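The closing formula is easy to check numerically. Here $u$ is the source-lens angular separation in units of the Einstein radius:

```python
import math

def total_magnification(u):
    # combined flux of both images of a point-mass lens, relative to the source
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# the worked example in the text: source on the Einstein ring
assert abs(total_magnification(1.0) - 1.34) < 0.005
# far from the lens the magnification falls off to 1 (no lensing)
assert abs(total_magnification(100.0) - 1.0) < 1e-3
# approaching perfect alignment the magnification diverges
assert total_magnification(0.01) > 99
```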
{ "domain": "astronomy.stackexchange", "id": 6346, "tags": "black-hole, gravitational-lensing" }
Output two columns
Question: Output formatting: Input Format Every line of input will contain a String followed by an integer. Each String will have a maximum of 10 alphabetic characters, and each integer will be in the inclusive range from 0 to 999. Output Format In each line of output there should be two columns: The first column contains the String and is left justified using exactly 15 characters. The second column contains the integer, expressed in exactly 3 digits; if the original input has less than three digits, you must pad your output's leading digits with zeroes. Sample Input java 100 cpp 65 python 50 Sample Output ================================ java 100 cpp 065 python 050 ================================ printf is pretty straightforward. Two formats per row, newline at the end. The lines that don't need to be formatted (first and last) can be put on the screen using println instead. To the best of my knowledge it isn't possible to put both printf statements in one printing statement, so I made it a function instead. Keeps things neat. My naming is probably horrible. As usual. import java.util.Scanner; public class Solution { private static void printRowOutlined(String left, int right) { System.out.printf("%-15s", left); System.out.printf("%03d\n", right); } public static void main(String[] args) { Scanner sc=new Scanner(System.in); System.out.println("================================"); for(int i = 0; i < 3; i++){ String text = sc.next(); int number = sc.nextInt(); printRowOutlined(text, number); } System.out.println("================================"); } } Answer: To the best of my knowledge it isn't possible to put both printf statements in one printing statement, so I made it a function instead. It is possible: the printf method takes a varargs as second argument, so you can give it multiple arguments to format. 
When you have multiple arguments, each parameter can be referred to as %[argument_index$] in the format String (as per the Formatter Javadoc), where argument_index is the index of the argument in the varargs. In this case, we can have: private static void printRowOutlined(String left, int right) { System.out.printf("%-15s%03d%n", left, right); } Since the parameters in the format String are referred to in the order in which they are given in the arguments, we don't even need to specify the index (but we could have "%1$-15s%2$03d%n" as the format String). Regardless of that, I think having a separate method for the printing operation is still cleaner. Also, don't use \n. You can use the %n specifier, which will use the line separator of the current system. Furthermore, I notice you're not doing any validation on your input; you could verify that the integer is correct and in the right range, and that the String to format has the right number of characters.
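The expected column widths are quick to sanity-check; the sketch below uses Python only because its format mini-language shares the same <15 and 03d conventions as Java's Formatter:

```python
def row(left, right):
    # left-justified in exactly 15 chars, integer zero-padded to 3 digits
    return f"{left:<15}{right:03d}"

assert row("java", 100) == "java" + " " * 11 + "100"
assert row("cpp", 65) == "cpp" + " " * 12 + "065"
assert row("python", 50) == "python" + " " * 9 + "050"
assert all(len(row(w, n)) == 18 for w, n in [("java", 100), ("cpp", 65)])
```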
{ "domain": "codereview.stackexchange", "id": 45178, "tags": "java, strings, programming-challenge, formatting" }
Light and electromagnetic waves
Question: Light has no charge and electromagnetic waves are created by charged particles, then how does light create an electromagnetic wave? Answer: Light is an electromagnetic wave. So these two statements are equivalent: "Light has no charge" "Electromagnetic waves have no charge" As are these two: "Electromagnetic waves are created by charged particles" "Light is created by charged particles" Aside: A more precise statement is "light is (or electromagnetic waves are) created by accelerating charged particles. Therefore, it doesn't make sense to say light "creates" an electromagnetic wave (or vice versa).
{ "domain": "physics.stackexchange", "id": 81991, "tags": "electromagnetism, visible-light" }
Calculating the result of a recurrence function
Question: The recurrence function is defined as follows: \$f(0) = 1\$ \$f(1) = 1\$ \$f(2n) = f(n)\$ \$f(2n+1) = f(n) + f(n-1)\$ I was tasked to calculate the recurrence of a very large number, \$n = 66666666666666\$, using C++. As you can clearly see, the 3rd line applies for even inputs, and the 4th line for odd inputs. If this isn't clear consider the following derivation for \$f(10)\$: \$f(10) = f(5) = f(2) + f(1) = f(1) + f(1) = 1 + 1 = 2\$ long long int function(long long int x) { if (x == 0 || x == 1) return 1; long long int result = 0; if (x % 2 != 0) { result += function((x-1)/2) + function((x/2)-1); } else { result += function(x/2); } return result; } The runtime of this implementation is very large (almost 30 seconds to finish). Clearly this is because recursion is very expensive. My runtime limit is 1.0s, at most. Therefore I think an iterative approach would work wonders. This is the result of running time ./my_program: real 0m29.893s user 0m29.809s sys 0m0.075s I am wondering if there are other approaches, besides translating it to an iterative version, which would be suitable for this problem in particular. I would also be grateful if anyone has any pointers about my code/approach. Answer: Avoid the result variable The result variable is only ever incremented once and then returned. You can simplify by returning directly: if (x % 2 != 0) { return function((x-1)/2) + function((x/2)-1); } else { return function(x/2); } Eliminating the variable should also speed-up the program if the compiler did not optimize it out already. Memoization Memoization essentially stores already computed values for later re-use trading space for run-time. Memoization has already been implemented in C++ so you can just make use of it.
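A memoized version makes the run time proportional to the number of distinct subproblems, which is only on the order of $\log_2 n$ here since each call roughly halves its argument. A sketch in Python (the C++ analogue would cache results in an unordered_map):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(x):
    if x <= 1:
        return 1
    if x % 2 == 0:
        return f(x // 2)
    n = (x - 1) // 2          # x = 2n + 1
    return f(n) + f(n - 1)

assert f(10) == 2             # matches the derivation in the question
assert [f(i) for i in range(8)] == [1, 1, 1, 2, 1, 2, 2, 3]
f(66666666666666)             # finishes instantly; recursion depth ~ 2 log2 n
```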
{ "domain": "codereview.stackexchange", "id": 21413, "tags": "c++, time-limit-exceeded, mathematics" }
Constant flow rate from two closable pipes with single source
Question: I have a question regarding a system containing a water source, with two pipes connected using a T. There are two different states of the system; one being where both pipes are open and water is flowing, the other being that one pipe is closed. Is it possible to have a constant flow out of each pipe, in both situations? If so, what device is needed to keep the constant flow? Example: Pipes 1 and 2 are open. 15 L/min is flowing out of both pipes. Pipe 1 is closed, but 15 L/min is still flowing from pipe 2. Image related. The two things I have come up with are: flow control valves at the start of each pipe, ensuring no flow rate higher than what is set; and a pressure reducing valve, ensuring that the pressure is constant on the valve-side. This has led me to two things; firstly that adding a flow control valve would be costly when expanding the system, and secondly that I do not know enough about fluid dynamics to understand how a pressure reducing valve could work in relation to mass flow rate. Thanks for the help. Answer: First of all, having the same flow in both pipes without flow control valves on both is set by the characteristics of the downstream piping in each. Unless they are mirrors of each other, with all valves open you will likely not have the same flow in both. You could throttle one if desired to make them the same if that is a goal. Second, you could control flow through one or the other when one is shut off via a throttle with specific settings (your "device to control flow"). You could determine the settings via experiment; the settings might be different with Pipe 1 closed versus 2. This assumes that the valves for Pipes 1 & 2 are either open or shut. If you want a specific flow rate, then flow controllers on each pipe with control valves are really the only way to go. Also, I'm not sure what the "pressure reservoir" is for; water doesn't really need this.
{ "domain": "engineering.stackexchange", "id": 4530, "tags": "fluid-mechanics" }
Tsallis $q$-Gaussian and applications
Question: Why is the $q$-Gaussian distribution not merely the substitution of the $q$-exponential into the Gaussian function, i.e. the substitution of equ. 2 into equ. 1? Why would there then be three cases, as below? When should one use each, the direct substitution or the defined $q$-Gaussian function? Different applications? Answer: There are a couple of things going on here. First, the $q$-Gaussian is defined as closely as possible to a direct swap of exponential for $q$-exponential as you could hope. To the extent it looks like that isn't true, it is only because in the $q$-Gaussian case, the shape factor ("covariance") and the normalization have been written differently. For the Gaussian, the normalization $A$ was written in the numerator and for the $q$-Gaussian, the normalization $C_q$ was written in the denominator. Likewise, for the Gaussian, you have factors of $w$ (in the notation of the question) in the denominators for the Gaussian and corresponding factors of $\sqrt{\beta}$ in the numerators. Now this notation does hide some things, some of which are related to the normalization choices:
1. The $q$-exponential is only defined over a bounded subset of the real line for $q<1$, and so the distribution there is fundamentally different than in the unbounded cases. This is enforced by the innocuous-looking $+$ subscript in the definition of the $q$-exponential.
2. In the "fat-tail" cases of $1 < q < 3$, the distribution goes to 0 at infinity with power-law tails rather than the exponential decay in the normal Gaussian.
3. Points #1 & #2 give rise to the completely different forms of the normalization factors in the different cases.
4. A point never well-emphasized with these distributions is that the transition between the cases is not defined by a properly smooth limit in $q$. This is especially true in the limit $q \rightarrow 1^-$ where $q$ approaches 1 from below because the truncation to the finite domain is not a smooth operation.
It's also true, though more subtly so, in the limit $q \rightarrow 1^+$ because there's infinite Fisher distance between the fat-tails and the exponential tails.
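To make the case distinction concrete, here is a minimal scalar sketch of the $q$-exponential in Python (the function name and test values are illustrative, not from the original discussion): for $q<1$ the support is truncated, at $q=1$ it reduces to the ordinary exponential, and for $q>1$ it produces power-law tails.

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp(x) at q = 1, else [1 + (1-q)x]_+^(1/(1-q))."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # the "+" subscript: truncated support, relevant for q < 1
    return base ** (1.0 / (1.0 - q))

# Unnormalized q-Gaussian shape: q_exp(-beta * x**2, q)
# q = 2 gives a Cauchy-like power-law tail, (1 + x^2)^(-1)
# q = 0.5 vanishes identically outside a bounded interval
```

Plugging $q\to1$ recovers $e^{-\beta x^2}$, while the $q<1$ branch shows why the normalization constant $C_q$ takes different forms in the bounded and unbounded cases.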
{ "domain": "physics.stackexchange", "id": 80799, "tags": "thermodynamics, statistical-mechanics" }
What's the difference between Row Polymorphism and Structural Typing?
Question: The definitions I've stumbled across seem to indicate they express the same idea: that the relationship between record types is determined by their fields (or properties) rather than their names. Their Wikipedia pages also seem to indicate the same idea: A structural type system (or property-based type system) is a major class of type system in which type compatibility and equivalence are determined by the type's actual structure or definition and not by other characteristics such as its name or place of declaration. In programming language type theory, row polymorphism is a kind of polymorphism that allows one to write programs that are polymorphic on record field types (also known as rows, hence row polymorphism). Are there any differences between them? Answer: Structural type systems don't necessarily have anything to do with records. For instance, you could have a system where:
data Bool = False | True
data Two = Zero | One
are actually the same type, because they are both types with two nullary constructors. It also doesn't necessarily tell you much about records, because even though types are determined by their structure, the two records:
{s : S ; t : T}
{s : S ; t : T ; u : U}
are not the same structure, so you could have structural typing without there being anything convenient about these two types. Similarly, row polymorphism in isolation doesn't tell you much---just that you can quantify over rows, and probably use them with e.g. a record type parameterized by a row. But there are all sorts of variations on what you can do with rows that really specify the capabilities of the system. Usually with structural records people at least want subtyping. That allows you to say that my second record type above is a subtype of the first, so that you can pass the latter to anything expecting the former.
A typical way to do this with row polymorphism is to instead quantify over the extra fields that may be present, and use some kind of row concatenation. So perhaps a more targeted question is what is the difference between subtyping and quantifiers. The answer to that is generally that subtyping cannot express quantified types unless the quantified variable only occurs exclusively covariantly or contravariantly. So we could say: (forall a. a -> T) ~= Top -> T (forall a. T -> a) ~= T -> Bot But for a type like forall a. a -> a, there is no one type to pick for a without losing information. This extends to systems with just record subtyping vs. (appropriate) row polymorphism. However, if you have a system with subtyping and quantifiers, and subtyping can apply to quantified types, then the differences might be a lot more subtle. Having both quantifiers and subtyping can get quite tricky, though (not that row polymorphism is easy to get right, either).
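For a runnable illustration of structural (as opposed to nominal) compatibility on records, Python's typing.Protocol checks by shape, so a value with extra fields still satisfies the protocol, mirroring the width-subtyping discussion above (the class and field names here are invented for the example):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasST(Protocol):
    """Structurally: anything carrying fields s : str and t : int."""
    s: str
    t: int

class Exact:
    def __init__(self):
        self.s, self.t = "a", 1          # exactly {s ; t}

class Wider:
    def __init__(self):
        self.s, self.t, self.u = "a", 1, 2.0  # extra field u, like {s ; t ; u}

def get_t(r: HasST) -> int:
    return r.t                           # both Exact and Wider are accepted
```

Note this only demonstrates structural checking with subtyping; it is not row polymorphism — there is no way here to express "returns a record with the same extra fields it was given".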
{ "domain": "cs.stackexchange", "id": 16940, "tags": "type-theory, polymorphisms" }
References for Glashow-Weinberg-Salam model
Question: I am looking for reference recommendations on the Glashow-Weinberg-Salam theory of electroweak symmetry breaking. In particular, I am looking for a discussion of the $SU(2)\times U(1)$ gauge symmetry "breaking" and its connection to the Higgs phenomenon. I am familiar with the discussions in Ryder's Quantum Field Theory and Halzen & Martin's Quark and Leptons, but I am looking for more detailed references (at the advanced grad student level) on this matter that go into detail concerning the calculations. Any books, online reviews, or articles would be greatly appreciated. Answer: Your question is actually unclear, as it instantly multiplexes into four quite different questions, which I will not spend time to parse. Before any discursive bloviations I'll send you to a classic, ISBN-13: 978-0521476522, Dynamics of the Standard Model by J Donoghue, E Golowich, B Holstein, a bare minimum for particle theorists. Now for the bloviation: A better alternative to your "pragmatic" references for pro-forma introductions include
M Schwartz, ISBN-13: 978-1107034730, Quantum Field Theory and the Standard Model
T P Cheng & L L Li, ISBN-13: 978-0198519614, Gauge Theory of elementary particle physics
C Itzykson & J B Zuber, ISBN-13: 978-0486445687, Quantum Field Theory
T D Lee, ISBN-13: 978-3718600335, Particle Physics and Introduction to Field Theory
E Abers & B Lee 1973, "Gauge theories", Physics Reports 9(1), pp. 1-141
and, literally, dozens of equivalents. They've done yeoman's duty in educating thousands. On the off chance you are really asking about radiative corrections, a starting point might be W Hollik, "Radiative Corrections in the Standard Model and Their Rôle for Precision Tests of the Electroweak Theory," Fortschritte der Physik/Progress of Physics 38, no. 3 (1990): 165-260
{ "domain": "physics.stackexchange", "id": 60282, "tags": "particle-physics, resource-recommendations, standard-model, higgs, electroweak" }
How frequency response related to a transfer function
Question: Can anyone explain how frequency response related to a transfer function? Answer: A discrete-time linear time-invariant (LTI) system is defined by its impulse response, which can be expressed as a list of non-zero coefficients $c_n$ occurring at integer time indices $t_n$. To form its output $y[k]$ all the system can do is sum time-shifted and constant-multiplied copies of its input $x[k]$. An important class of input functions are complex exponentials. If the input is a complex exponential: $$x[k] = az^k,$$ where $a$ and $z$ are complex constants and $k$ is the integer time index, then summation results in an output that is the same as the input multiplied by a constant $H(z)$: $$y[k] = \sum_n c_n x[k-t_n] = \sum_n c_n a z^{k-t_n} = \left(\sum_n c_n z^{-t_n} \right)a z^k = H(z)\,x[k].$$ In other words, complex exponentials are eigenfunctions of LTI systems. The constant $H(z)$ is called the transfer function. It is a function of the base $z$ of the complex exponential. If $|z|$ = 1, then the base is of form: $$z = e^{i\omega} = \cos(\omega) + i \sin(\omega),$$ where $i$ is the imaginary unit, and the system's input is a complex sinusoid of real frequency $\omega$ (with magnitude and phase embedded in the constant $a$): $$x[k] = a (e^{i\omega})^k = a e^{i\omega k} = a\left(\cos(\omega k) + i \sin(\omega k)\right)$$ Compared to other complex exponentials, complex sinusoids don't decay or increase in magnitude by time. Frequency response at frequency $\omega$ is simply the constant $H(e^{i\omega})$ by which the system multiplies a complex sinusoid input of frequency $\omega$. By the inverse Fourier transform, one can go from the frequency response to the impulse response, and from the impulse response one can obtain the transfer function as shown above. So the transfer function of a discrete-time LTI system is fully defined by its frequency response.
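A small self-contained check of the eigenfunction property in Python (the filter coefficients and test frequency are arbitrary choices for illustration): evaluate $H(e^{i\omega})$ directly from the coefficients, then verify that filtering a complex sinusoid just scales it by that constant.

```python
import cmath
import math

def freq_response(b, a, w):
    """Evaluate H(e^{jw}) from numerator b and denominator a coefficients."""
    num = sum(bk * cmath.exp(-1j * k * w) for k, bk in enumerate(b))
    den = sum(ak * cmath.exp(-1j * k * w) for k, ak in enumerate(a))
    return num / den

b = [0.5, 0.3, 0.2]   # an arbitrary FIR impulse response (denominator a = [1])
w = 0.4 * math.pi     # test frequency in rad/sample
H = freq_response(b, [1.0], w)

# Feed a complex sinusoid through the filter by direct convolution:
x = [cmath.exp(1j * w * k) for k in range(50)]
y = [sum(bn * x[k - n] for n, bn in enumerate(b)) for k in range(len(b) - 1, 50)]
# Past the start-up transient, y[k] equals H(e^{jw}) * x[k]
```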
{ "domain": "dsp.stackexchange", "id": 3278, "tags": "frequency-response, transfer-function" }
Is it possible to plot different topics on the same plot in rxbag?
Question: I have several distinct topics saved in a bag and I would like to plot them on the same plot for comparison purposes in rxbag. This works great for values in the same topic, but I don't know if I can do this for different topics. Originally posted by cmansley on ROS Answers with karma: 198 on 2011-07-08 Post score: 0 Answer: For plotting you can use rxplot. You can plot any numeric value from any topic. You will be able to plot them in the same plot, or in separate plots on the same screen. Have a look at the documentation since it is very easy to use. I hope this helps. Originally posted by gazkune with karma: 219 on 2011-07-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gazkune on 2011-07-21: Playing bag files and using rxplot could be a good alternative. If not, have a look at http://www.ros.org/wiki/rxbag_plugins I haven't used it but it seems it can help you. Comment by cmansley on 2011-07-21: I know about rxplot, but I have bag files. Are you suggesting that I have to play the bag files back to get the functionality of rxplot, when there is a tool rxbag designed for viewing and plotting data in bag files? I just want to plot two topics to the same plot (not even the same graph).
{ "domain": "robotics.stackexchange", "id": 6076, "tags": "ros, rxbag, rxtools" }
Confusion in understanding the behavior of inductor in RL circuit with DC source
Question: When we have a DC voltage source with a switch in series with $RL$ and the switch is closed at $t=0$ then it is said that current is zero initially, but the voltage across inductor is same as that of applied voltage (according to Kirchhoff voltage law) so there should be current (according to $V=L(di/dt)$) but it contradicts the initial statement so how do I understand this? If we have only inductor I understand that current increases linearly with time but addition of resistor makes the current increase exponential, how to understand this intuitively (I understand from the equations but not theoretically (intuitively) how it is happening)? I understand that changing current causes the induced EMF which opposes the changing current, but what I don't understand is - won't it cause the current to be constant but here it seems to contradict that changing current should be there for EMF to exist, so how do we explain that voltage is reducing to zero and current is increasing with respect to the confusion I mentioned above in inductor of $RL$ circuit (so basically I am not understanding the behavior of induced EMF in inductor)? Please provide an intuitive explanation. I have gone through lot of questions on this site but couldn't find any answers regarding my confusion, I am stuck with this. Please help me with this. Answer: When we have a DC voltage source with a switch in series with RL and the switch is closed at t=0 then it is said that current is zero initially, but the voltage across inductor is same as that of applied voltage( according to kirchhoff voltage law) so there should be current( according to v=L(di/dt) )but it contradicts the initial statement so how do I understand this? You are right that right when we close the switch the voltage across the inductor is equal to the applied voltage. However, you are misinterpreting what a potential difference of magnitude $v=L\cdot\text di/\text dt$ means. 
This equation doesn't say if there is a potential difference across the inductor then there is current through the inductor. What it says is that a potential difference across the inductor is associated with a change in current through the inductor. Therefore, since the voltage across the inductor is non-zero at $t=0$, we know the current is changing at $t=0$. ...but addition of resistor makes the current increase exponential , how to understand this intuitively (I understand from the equations but not theoretically how it is happening)? The current increases like $$i=i_0\left(1-e^{-t/\tau}\right)$$ So it is increasing, and there is an exponential function, but usually "increasing exponentially" means it keeps growing and growing more rapidly without bound. This is not what is happening here. As the current in the circuit increases the voltage across the resistor increases. Therefore, the voltage across the inductor decreases. Based on our previous discussion, this means that the change in current must be decreasing. Hence this "voltage trade-off" happens at a slower and slower rate. This causes the current to approach a steady value where the increase over time decays exponentially. I understand that changing current causes the induced EMF which opposes the changing current, but what I don't understand is - won't it cause the current to be constant... Keep in mind that "oppose" does not mean "block". Everything else... It seems like your confusion stems from what we discussed initially. You are mixing up the current and its derivative. The voltage across the inductor tells you nothing about the current in general. It tells you how the current is changing. Also, you say that you understand things from the equations, but I would argue that if you don't understand how the equations model reality then you haven't truly understood the equations. It would help for you to look at how the equations are derived. 
Make sure you understand the physical significance and motivation for each step, each equation, etc. This is an important step in the learning process, so I will leave that job to you. I hope this answer is a good scaffold to hold up the deeper understanding you will develop here.
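A quick numerical sanity check of this reading in Python (component values are made up for illustration): at $t=0$ the current is zero while its derivative $V/L$ is not, and integrating $\mathrm{d}i/\mathrm{d}t=(V-Ri)/L$ reproduces the exponential approach to the steady value $V/R$.

```python
import math

V, R, L = 10.0, 5.0, 0.1   # illustrative source voltage, resistance, inductance
tau = L / R

dt, i, t = 1e-6, 0.0, 0.0
didt0 = (V - R * i) / L    # at t = 0: i = 0, yet di/dt = V/L is nonzero,
                           # so the full source voltage sits across the inductor
for _ in range(200_000):   # forward-Euler integration out to t = 10*tau
    i += dt * (V - R * i) / L
    t += dt

analytic = (V / R) * (1.0 - math.exp(-t / tau))
```

The numeric solution matches the closed form $i=i_0\left(1-e^{-t/\tau}\right)$ with $i_0=V/R$, and after ten time constants the current has essentially saturated.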
{ "domain": "physics.stackexchange", "id": 59566, "tags": "electromagnetism, electric-circuits, electromagnetic-induction, inductance" }
Calculation of gain at certain frequency in case of nth order IIR filter
Question: There is this method to set the 0 dB gain at a wanted frequency (fc) (Octave/Matlab example for biquad LPF):
% needed for Octave -------------------------
pkg load signal
% -------------------------------------------
clf;
% calculate coefficients --------------------
fs = 44100; % sample rate
fc = 700; %Hz
fpi = pi*fc;
wc = 2*fpi;
wc2 = wc*wc;
wc22 = 2*wc2;
k = wc/tan(fpi/fs);
k2 = k*k;
k22 = 2*k2;
wck2 = 2*wc*k;
tmpk = (k2+wc2+wck2);
a0 = 1;
a1 = (-k22+wc22)/tmpk;
a2 = (-wck2+k2+wc2)/tmpk;
b0 = (wc2)/tmpk;
b1 = (wc22)/tmpk;
b2 = (wc2)/tmpk;
b = [b0 b1 b2];
a = [a0 a1 a2];
FLT1 = tf(b, a, 1/fs);
% adjust 0dB @ 1kHz -----------------------------
fc = 1000; % Hz
w = 2.0*pi*(fc/fs);
num = b0*b0+b1*b1+b2*b2+2.0*(b0*b1+b1*b2)*cos(w)+2.0*b0*b2*cos(2.0*w);
den = 1.0+a1*a1+a2*a2+2.0*(a1+a1*a2)*cos(w)+2.0*a2*cos(2.0*w);
G = sqrt(num/den);
b0 = b0/G;
b1 = b1/G;
b2 = b2/G;
b = [b0 b1 b2]
% ------------------------------------------------
FLT2 = tf(b, a, 1/fs);
% plot
nf = logspace(0, 5, fs/2);
figure(1);
[mag0, pha0] = bode(FLT1, 2*pi*nf);
semilogx(nf, 20*log10(abs(mag0)), 'color', 'g', 'linewidth', 2, 'linestyle', '-');
hold on;
[mag, pha] = bode(FLT2, 2*pi*nf);
semilogx(nf, 20*log10(abs(mag)), 'color', 'm', 'linewidth', 2, 'linestyle', '-');
legend('LPF', 'LPF 0dB@1kHz', 'location', 'southwest');
xlabel('Hz'); ylabel('dB');
axis([1 fs/2 -30 15]);
grid on;
How are the formulas for num and den derived, so that the calculation of G can be done for an nth-order filter? As an example, for a 4th-order filter:
a = [1.00000 -0.61847 -1.09281 0.43519 0.30006];
b = [6.9411e-03 1.1097e-02 5.2508e-03 6.9077e-04 -3.2936e-06];
fc = 1000; % Hz
w = 2.0*pi*(fc/fs);
num = ...; % ????
den = ...; % ????
G = sqrt(num/den);
b(1) = b(1)/G;
b(2) = b(2)/G;
b(3) = b(3)/G;
b(4) = b(4)/G;
b(5) = b(5)/G;
Answer: You just need to evaluate the transfer function on the unit circle at the frequency of interest: $$H(e^{j\omega_0})=\frac{\displaystyle\sum_{k=0}^Nb_ke^{-jk\omega_0}}{\displaystyle\sum_{k=0}^Na_ke^{-jk\omega_0}}\tag{1}$$ and take the magnitude. For the special values $\omega_0=0$ and $\omega_0=\pi$, Eq. $(1)$ simplifies to $$H(e^{j0})=\frac{\displaystyle\sum_{k=0}^Nb_k}{\displaystyle\sum_{k=0}^Na_k}\tag{2}$$ and $$H(e^{j\pi})=\frac{\displaystyle\sum_{k=0}^N(-1)^kb_k}{\displaystyle\sum_{k=0}^N(-1)^ka_k}\tag{3}$$ respectively. EDIT: If you want a formula that directly expresses the squared magnitude of $H(e^{j\omega})$ then use this: $$\big|H(e^{j\omega})\big|^2=\frac{\displaystyle r_b[0]+2\sum_{k=1}^Nr_b[k]\cos(k\omega)}{\displaystyle r_a[0]+2\sum_{k=1}^Nr_a[k]\cos(k\omega)}\tag{4}$$ where $r_a[k]$ and $r_b[k]$ are the autocorrelations of the denominator and numerator coefficients, respectively: $$r_a[k]=a[k]\star a[-k]\\r_b[k]=b[k]\star b[-k]$$ where $\star$ denotes convolution.
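One way to fill in the num/den placeholders for the nth-order case, sketched in Python with the question's 4th-order coefficients (fs taken from the biquad example): Eq. (4)'s autocorrelation/cosine form and direct evaluation of Eq. (1) on the unit circle give the same squared magnitude, and dividing b by G sets 0 dB at fc.

```python
import cmath
import math

def mag2_direct(b, a, w):
    """|H(e^{jw})|^2 by direct evaluation on the unit circle, as in Eq. (1)."""
    num = sum(bk * cmath.exp(-1j * k * w) for k, bk in enumerate(b))
    den = sum(ak * cmath.exp(-1j * k * w) for k, ak in enumerate(a))
    return abs(num / den) ** 2

def autocorr(c):
    """Coefficient autocorrelation r[k] = sum_i c[i] * c[i+k]."""
    return [sum(c[i] * c[i + k] for i in range(len(c) - k)) for k in range(len(c))]

def mag2_formula(b, a, w):
    """|H(e^{jw})|^2 via the autocorrelation/cosine form, as in Eq. (4)."""
    rb, ra = autocorr(b), autocorr(a)
    num = rb[0] + 2.0 * sum(rb[k] * math.cos(k * w) for k in range(1, len(rb)))
    den = ra[0] + 2.0 * sum(ra[k] * math.cos(k * w) for k in range(1, len(ra)))
    return num / den

# 4th-order example from the question; normalize to 0 dB at 1 kHz
a = [1.00000, -0.61847, -1.09281, 0.43519, 0.30006]
b = [6.9411e-3, 1.1097e-2, 5.2508e-3, 6.9077e-4, -3.2936e-6]
fs, fc = 44100.0, 1000.0
w = 2.0 * math.pi * fc / fs

G = math.sqrt(mag2_formula(b, a, w))
b = [bk / G for bk in b]   # normalized: unity (0 dB) gain at fc
```

For the biquad (N = 2) this reduces exactly to the num/den expressions in the question's Octave code.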
{ "domain": "dsp.stackexchange", "id": 9146, "tags": "filters, filter-design, infinite-impulse-response, frequency-response" }
Fluid mechanics problem
Question: I have a $2D$ fluid parcel with coordinates $(0.5,-0.5), (-0.5,-0.5), (0.5,0.5)$ and $(-0.5,0.5)$ and this parcel is deformed by a steady flow field of $u=ay$ and $v=0$, defined on the basis ${(1,0), (0,1)}$. I tried to calculate the velocity gradient tensor $\tau_{ij}$ given by the matrix $ \left( \begin{array}{ccc} 0 & a \\ 0 & 0 \end{array} \right) $. I now need to decompose this into the symmetric strain rate tensor and the antisymmetric rotation tensor. However, this matrix isn't diagonalizable. Am I missing something here? I need to use this part to "solve" the transformation equation of a point that is given by $$x_i(t+dt)=x_i+(u_i+du_i)dt$$where $du_i = \tau_{ij}dx_j$. Then, I must remove the rotation rate tensor from the velocity gradient tensor (which basically means I have the strain rate tensor left, if I am not mistaken) and use that to show that the transformation equation then becomes $$x_i(t+dt)=x_i+(u_i+\frac{1}{2}(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i})\partial x_j)dt$$ and finally determine the transformation equations for $x(t+dt), y(t+dt)$. Answer: Any tensor $A_{ij}$ can be decomposed into symmetric and antisymmetric parts, regardless of whether or not it is diagonalizable. $$ \begin{align} S_{ij} & = \frac{1}{2}\left(A_{ij} + A_{ji}\right) \\ \Omega_{ij} & = \frac{1}{2}\left(A_{ij} - A_{ji}\right) \end{align} $$ so that $$ A_{ij} = S_{ij} + \Omega_{ij} $$
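Numerically, for the flow in the question ($u = ay$, $v = 0$) the split looks like this in Python (taking a = 2 as an arbitrary value for illustration):

```python
a = 2.0
A = [[0.0, a],        # velocity gradient tensor tau_ij for u = a*y, v = 0
     [0.0, 0.0]]

# Symmetric (strain rate) and antisymmetric (rotation) parts:
S = [[(A[i][j] + A[j][i]) / 2.0 for j in range(2)] for i in range(2)]
W = [[(A[i][j] - A[j][i]) / 2.0 for j in range(2)] for i in range(2)]
```

The decomposition needs no diagonalization: S comes out as the pure shear [[0, a/2], [a/2, 0]], W as the rigid rotation [[0, a/2], [-a/2, 0]], and they sum back to A.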
{ "domain": "physics.stackexchange", "id": 15994, "tags": "fluid-dynamics" }
How to include headers from other packages
Question: I'm using ROS Groovy with catkin. I created a package1 with a file1.cpp in the src directory and a header file1.h in its include/package directory. I need to #include this header in another file2.cpp of another package2 in the same workspace and then call the declared functions. file1.cpp doesn't have any main() function. It's only a list of functions and their body definitions. I'm not confident with this tool because everything I tried from the tutorials is not working. Please give me full information about what to write in every CMakeLists.txt or package.xml and how to include that header in both packages. Thanks in advance. Originally posted by eds on ROS Answers with karma: 101 on 2013-12-12 Post score: 0 Answer: You have to say in the manifest of package2 that it needs package1 to run; if I remember right, it is a dependency. After you have rebuilt it, you should be able to include it with something like "packageName/HeaderName" Originally posted by pkohout with karma: 336 on 2013-12-15 This answer was ACCEPTED on the original site Post score: 1
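For Groovy-era catkin the usual pattern looks roughly like the following (package names and paths are placeholders from the question; this is a sketch, not a verified build file). In package2's package.xml declare the dependency, and in its CMakeLists.txt pull in package1 so its headers are on the include path:

```
<!-- package2/package.xml -->
<build_depend>package1</build_depend>
<run_depend>package1</run_depend>

# package2/CMakeLists.txt
find_package(catkin REQUIRED COMPONENTS package1)
include_directories(${catkin_INCLUDE_DIRS})
```

Then in file2.cpp use #include <package1/file1.h>. For this to work, package1 itself must export its headers, typically with catkin_package(INCLUDE_DIRS include) in its own CMakeLists.txt.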
{ "domain": "robotics.stackexchange", "id": 16434, "tags": "ros, include, catkin-package, header, cmake" }
PHP session_set_save_handler with session timeout
Question: I have developed a class that utilises the session_set_save_handler function so I can store sessions within my DB. The class works just as I would like. However, my only concern is the way I have approached the session timeout. Currently the _read() function code looks like:
/**
 * Read session function
 * @access public
 * @return the 'data' record provided the PDO statement executed correctly. Otherwise, return false.
 */
public function _read($id)
{
    $timeout = time() - $this->accessTime;
    $locked = false;
    $this->database->query('SELECT updatedTime, data FROM sessions WHERE session = :id AND locked = :locked');
    $this->database->bind(':id', $id);
    $this->database->bind(':locked', $locked);
    if($this->database->execute())
    {
        if($this->database->rowCount() > 0)
        {
            $row = $this->database->singleResult();
            if($row['updatedTime'] < $timeout)
            {
                //Set the location of the user.
                $url = "http://" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'];
                if($url != $this->redirectUrl)
                {
                    header('Location: ' . $this->redirectUrl);
                    return;
                }
                return '';
            }
            return $row['data'];
        }
    }
    return '';
}
When I originally created the script I hard coded the redirect URL. The problem was logout.php (the file the user is redirected to) contains the session class, meaning I had a constant loop. So I approached it by implementing the following:
if($row['updatedTime'] < $timeout)
{
    //Set the location of the user.
    $url = "http://" . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'];
    if($url != $this->redirectUrl)
    {
        header('Location: ' . $this->redirectUrl);
        return;
    }
    return '';
}
This seems to me more of a 'hack' than intelligent code. Did I approach this correctly (probably not)? If not, what would be a more intelligent workaround? Answer: Since the code for logout.php is not provided, I'll just make assumptions. Your code is pretty clean but I think it could be improved a bit.
Miscellaneous
Both locked and timeout are defined but only used once.
Since they are local variables, I consider that they could be removed and just replaced by their values. Still concerning the timeout, I prefer working with proper date and time types provided by MySQL rather than working with timestamps stored as integers. The main reason is that there are a lot of functions at your disposal, like for example NOW() and TIMESTAMPDIFF(), which could be used to rewrite your query easily:
SELECT TIMESTAMPDIFF(MINUTE, updatedTime, NOW()) AS minutesSinceLastActivity, data
FROM sessions
WHERE session = :id AND locked = :locked
Once you have that, your condition is pretty simple: if($row['minutesSinceLastActivity'] > $this->accessTime). Which leads me to another point: the naming of accessTime is pretty bad, I would define it as something like minutesInactivityBeforeLogout. One last thing that I'm a bit unsure about (I know it works for C, don't remember for PHP): you could regroup the two conditions if($this->database->execute()) and if($this->database->rowCount() > 0) into one with the and operator. Since you'll use an and, if the execution of the query fails, executing the rest of the code in the if condition is unnecessary since it cannot change the result of the condition. Once again, this is something I know works in C, not quite sure about PHP.
How it works
You said you had a constant redirection loop with a previous version of your code. The main reason I could find for that is that if you detect a timeout, you don't do anything on your MySQL table, which means you'll always end up in the same condition and always doing the same thing. That's why you should keep your structure and query as it is (you could put the timeout condition in the query, but doing so, you would be unable to differentiate the case where the timeout is reached from the case where there is no record for the current id). The only thing that you have to do is for example to delete the record if the timeout condition is reached.
{ "domain": "codereview.stackexchange", "id": 14566, "tags": "php, object-oriented, session" }
How does electrostatic repulsion between electrons in "many electron atom" lead to coupling of individual orbital angular momentum vectors?
Question: I just started studying the $LS$ coupling scheme; my book describes $LS$ coupling in the following order: 1. Firstly it mentions that due to "spin-spin" correlation the individual spin angular momentum vectors couple to form a resultant spin angular momentum vector, i.e. $\vec{S}$, and the quantum number $S$ takes values from $|\vec{s_1}+\vec{s_2}+\vec{s_3}.....|_{min}$ to $(\vec{s_1}+\vec{s_2}+\vec{s_3}......)$. 2. Then it says that as a result of the residual electrostatic interaction the individual orbital angular momentum vectors of the "optical" electrons are strongly coupled with one another to form a resultant orbital angular momentum vector $\vec{L}$ of magnitude $\sqrt{L(L+1)} \hbar$, which is a constant of motion. My question is: how does the residual electrostatic interaction, which is a repulsive electric potential between electrons in an atom, lead to the coupling of the individual orbital angular momentum vectors? Reference: Page 144 of the PDF or 140 of the book. Answer: Thanks to @lineage's comment pointing to Section 10.3 of the book Quantum Physics of Atoms, Molecules, Solids for insights. The Coulomb interaction doesn't result in the coupling of $\vec{l_1},\vec{l_2}....$ to form $\vec{L}$. Instead it makes the coupling happen in such a way that $\vec{L}$ remains constant. This happens simply because in most quantum states the charge distributions of the electrons are not spherically symmetrical, and so they exert torques on each other. Since the space orientation of the charge distribution of an electron is related to the space orientation of its orbital angular momentum vector, there are torques acting between the angular momentum vectors. The torques do not tend to change the magnitude of the individual orbital angular momentum vectors, but only tend to make them precess about the total orbital angular momentum vector in such a way that its magnitude L' remains constant. The question then arises: Which of the possible values of L' corresponds to the state of lowest energy?
There are opposing tendencies, but the basis of the one which usually dominates can be understood even from classical physics by considering two electrons in a Bohr atom. Two optically active electrons moving in the same Bohr orbit tend to remain at opposite ends of a diameter so as to minimize their Coulomb repulsion. As a result, their orbital angular momenta tend to couple in such a way as to yield a maximum total orbital angular momentum. Because of the repulsion between the electrons, the most stable arrangement is obtained when the electrons stay at the opposite ends of a diameter. In this state of lowest energy, the electrons rotate together with individual orbital angular momentum vectors parallel, and therefore with the magnitude L' of the total angular momentum vector a maximum. This conclusion is confirmed by an analysis of the spectra produced by atoms with several optically active electrons. That is, for such atoms the residual Coulomb interaction produces a tendency for the orbital angular momenta of the optically active electrons to couple in such a way that the magnitude of the total orbital angular momentum L' is constant, and the energy is usually lowest for the state in which L' is largest.
{ "domain": "physics.stackexchange", "id": 67016, "tags": "quantum-mechanics, angular-momentum, quantum-spin" }
Creating an image of the Mandelbrot set in Rust
Question: I'm in the process of familiarizing myself with Rust. In order to get some practice, I decided to make a program that generates images of the Mandelbrot set. The function I use to generate the image is included below. I'm specifically interested in feedback regarding my use (or lack thereof) of Rust idioms, as well as my interaction with the borrowing system. Notice that I had to clone the PathBuf as I have two uses of it, and the call to save borrows the value. Of course, however, any and all feedback is appreciated as well.
use image::{ImageBuffer, Rgb};
use num::Complex;
use std::path::PathBuf;

pub fn generate_image(width: u32, height: u32, iterations: u32, zoom: f32, out: PathBuf) {
    let to_imaginary_domain = |x: u32, y: u32| -> (f32, f32) {
        let re: f32 = x as f32 - width as f32 / 2.0;
        let im: f32 = y as f32 - height as f32 / 2.0;
        (re / zoom, im / zoom)
    };
    println!("Generating {} x {} image of the Mandelbrot set...", width, height);
    let img = ImageBuffer::from_fn(width, height, |px, py| {
        let (x, y) = to_imaginary_domain(px, py);
        let c = Complex::<f32> { re: x, im: y };
        let mut z = Complex::<f32> { re: 0.0, im: 0.0 };
        for _i in 0..iterations {
            z = z * z + c;
            if z.norm() >= 2.0 {
                return Rgb::<u8>([0x00, 0x00, 0x00]);
            }
        }
        Rgb::<u8>([0xFF, 0xFF, 0xFF])
    });
    match img.save(out.clone()) {
        Ok(_) => {
            println!("Successfully saved image to {:#?}.", out.as_os_str());
        },
        Err(error) => {
            panic!("Failed to save the image: {:#?}", error);
        }
    };
}
Answer: Disclaimer: I'm not an experienced Rust developer.
Note for other reviewers/people who want to test the code
Here are the dependencies you can use:
[dependencies]
image = "0.24.4"
num = "0.4.0"
and here are parameters that worked fine for me:
width: 1000
height: 1000
iterations: 100
zoom: 300
Clippy
clippy is a really nice tool to catch mistakes and improve your code. In your case, there is not much to say:
In your case, there is not much to say: a few things about integer types (but you are a bit stuck with the ImageBuffer::from_fn types) a few things about Path and PathBuffer which may answer your initial question. I ended up with: /// # Panics /// /// Will panic if image is not saved pub fn generate_image(width: u32, height: u32, iterations: u32, zoom: f32, out: &Path) { ... match img.save(out) { Ok(_) => { println!("Successfully saved image to {:#?}.", out.as_os_str()); } Err(error) => { panic!("Failed to save the image: {:#?}", error); } }; Splitting the logic in functions Having a to_imaginary_domain function is nice but it is a bit surprising to me that it does not return a complex number. let to_imaginary_domain = |x: u32, y: u32| -> Complex<f32> { let re: f32 = x as f32 - width as f32 / 2.0; let im: f32 = y as f32 - height as f32 / 2.0; Complex::<f32> { re: re / zoom, im: im / zoom, } }; The mathematical operation probably deserves to be in a dedicated function as well. #[must_use] pub fn mandelbrot_func_diverges(c: Complex<f32>, iterations: u32) -> bool { let mut z = Complex::<f32> { re: 0.0, im: 0.0 }; for _i in 0..iterations { z = z * z + c; if z.norm() >= 2.0 { return true; } } false } Then, in the from_fn call, we just have: let img = ImageBuffer::from_fn(width, height, |px, py| { if mandelbrot_func_diverges(to_imaginary_domain(px, py), iterations) { Rgb::<u8>([0x00, 0x00, 0x00]) } else { Rgb::<u8>([0xFF, 0xFF, 0xFF]) } }); If it was for me, I'd also rewrite the generate_image so that it returns the ImageBuffer instead of dealing with saving it, but it makes things slightly more verbose. #[must_use] pub fn generate_image( width: u32, height: u32, iterations: u32, zoom: f32, ) -> ImageBuffer<Rgb<u8>, Vec<u8>> { More ideas In order to make the output somehow better: you could center the image around a different point you could get the number of iterations before divergence and use this number to get a gradient of color
{ "domain": "codereview.stackexchange", "id": 43988, "tags": "beginner, rust, image, fractals" }
Wavelet Coefficients Algorithm for Haar System
Question: In my DSP class a few lectures ago my professor shared a basic algorithm with us for computing wavelet coefficients when the wavelet basis belong to the Haar system. He also shared a circuit with us which looks something like this: (I made it in tikz rather than having to post a picture of my scribbled notes) Here is also an example of how the algo works: Here is a sample input sequence: $$x[n] = 8,2,3,-1,0,4,-2,3$$ Step 1: $$ \underbrace{8 \quad 2}_{\text{Add & divide by 2}} \quad \quad \text{for all four pairs}$$ New Sequence: $$ 5, 1, 2, \frac 12, \color{green} {3,2,-2,-\frac 52}$$ Here the values in green come from subtracting $5$ from $8$, $1$ from $3$ and so on. Now we repeat step 1 to get: $$ 3, \frac 54, \color{green} {2, \frac 34}$$ Again values in green come from subtracting $3$ from $5$ and so on. Now repeat step 1 again: $$\frac{17}{8}, \color{green} {\frac 78}$$ And now we have our coefficients in order as follows: $$\textbf{Coefficients} = \{\frac{17}{8}, \frac 78, 2, \frac 34, 3, 2, -2, -\frac 52 \} $$ Now the thing is, I cannot find this circuit or algorithm anywhere. Our course textbook is FSP 2014 by Vetterli but I also use O&S to study. And these topics are not discussed in any of these books. So I was just wondering if anyone had any resource which I could use to get a basic level understanding of wavelets as well as such algorithms and circuits. Also I was wondering if something similar exists for when our input is continuous-time: $$x(t) = \sum_{k=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} c_{k,n} \phi_{k,n}$$ where $c_{k,n}$'s are the coefficients and $\phi_{k,n}$'s are the wavelet basis functions (Haar system). As such I would appreciate if someone could either point to a source or themselves give a basic understanding of why the specific algorithm and circuit work. Answer: I'd suggest Burrus, C., Gopinath, R. & Guo, H. 
(1998) Introduction to Wavelets and Wavelet Transforms - A Primer, Prentice Hall International, Inc., Houston, Texas that seems to be available as a PDF in the link. Chapter 4 has the following picture which is a slight evolution from the diagram in the OP.
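The averaging-and-differencing recursion from the worked example is short enough to write down directly; here is a hedged Python sketch (using exact fractions so the coefficients come out exactly as in the example):

```python
from fractions import Fraction

def haar_coefficients(x):
    """Repeated averaging and differencing, as in the worked example:
    each pass replaces pairs (a, b) by their average and by a - average."""
    seq = [Fraction(v) for v in x]
    details = []  # detail coefficients, finest scale at the end
    while len(seq) > 1:
        avgs = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
        dets = [seq[i] - avgs[i // 2] for i in range(0, len(seq), 2)]
        details = dets + details          # coarser details go in front
        seq = avgs
    return seq + details                  # overall average first
```

Applied to the sample sequence 8, 2, 3, -1, 0, 4, -2, 3, this reproduces the coefficients {17/8, 7/8, 2, 3/4, 3, 2, -2, -5/2} from the question.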
{ "domain": "dsp.stackexchange", "id": 12055, "tags": "wavelet, reference-request" }
How is the Heisenberg Uncertainty calculated value added to a measurement error bar?
Question: It is not clear to me if the calculated HU for a specific experiment is added to the measured value's Gaussian curve together with the statistical errors (i.e. added to the total error bar) of this measurement, and in which way? I know that statistical errors, like systematic errors, are deterministic in nature, whereas the HU is probabilistic. Can you give me the method (using maybe an example) by which the HU is integrated into the total calculated error bar for a measurement in the field of quantum mechanics? Answer: No one adds uncertainties from the uncertainty principle as errors to measurements. For one, that wouldn't make a lot of sense because experimenters don't always know what actual quantum state they're dealing with prior to measurement and hence couldn't even calculate that uncertainty if they wanted to, but more importantly: because that's not what "uncertainty" means. Uncertainty is not an "error" on any specific measurement. Uncertainty of a random variable $x$ is its standard deviation $\sigma_x = \sqrt{\langle x^2\rangle - \langle x\rangle^2}$. Quantum mechanics rests on the idea that the result of measuring an observable $A$ on a certain state $\lvert \psi\rangle$ is effectively a random variable, with $\langle A\rangle = \langle \psi\vert A\vert \psi \rangle$ the expectation value of that variable. Now, statistics tells us by the law of large numbers that the average of our results after $N$ measurements will tend towards the expectation value as $N$ becomes larger, just as the sample standard deviation of our results will tend towards the theoretical standard deviation $\sigma_A$. In more "physics-like" language, the statistical error of our measurements converges towards the uncertainty from the uncertainty principle as we increase the number of measurements. Other than that, the theoretical value does not have anything to do with the actual errors we should use for our experimental sample.
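The answer's central claim - that the sample standard deviation of repeated measurements converges to the quantum-mechanical uncertainty - can be illustrated with a toy simulation (a sketch; the Gaussian outcome distribution with $\sigma_x = 1$ is an illustrative stand-in for some $|\psi(x)|^2$, not taken from the question):

```python
import math
import random

def sample_std(samples):
    """Sample standard deviation (population form, matching sigma's definition)."""
    n = len(samples)
    mean = sum(samples) / n
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / n)

# Toy "measurements": position outcomes drawn from |psi(x)|^2, modeled here as
# a Gaussian whose theoretical uncertainty is sigma_x = 1.0 (illustrative).
random.seed(0)
sigma_x = 1.0
outcomes = [random.gauss(0.0, sigma_x) for _ in range(100_000)]
# As N grows, the sample standard deviation approaches the quantum uncertainty.
```

Nothing here is added to any individual error bar; the uncertainty simply emerges as the spread of the outcome statistics.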
{ "domain": "physics.stackexchange", "id": 87693, "tags": "quantum-mechanics, statistical-mechanics, experimental-physics, heisenberg-uncertainty-principle, metrology" }
How to reproduce free energy in canonical ensemble in stat mech
Question: Given a quantum mechanical density matrix $\rho$, the internal energy and entropy of the system are given by: \begin{align} E &= \text{Tr}[H \rho] \\ S &= -k_B \, \text{Tr}[\rho \ln \rho] \end{align} where $H$ is the quantum mechanical Hamiltonian. Then, I can minimize the Free Energy $F = E - TS$ to deduce the form of $\rho$: \begin{align} \delta F &= \text{Tr}(\delta \rho (H+ k_B T \ln\rho+k_BT)) = 0\\ &\Rightarrow \rho = \mathcal{N}e^{-H/k_B T} \end{align} But then, when I insert this back into $F$, I don't get the usual formula $$ F = -k_B T \ln(\text{Tr}[e^{-H/k_B T}]) $$ Instead I get zero. Can someone help me out with what went wrong? Thanks Answer: You've probably forgotten about $\cal{N}$ in $\ln \rho$. $$ \rho = {\cal{N}} e^{- H /k_B T}\ \longrightarrow\ \ln\rho = -H/k_BT + \ln{\cal{N}}\ \longrightarrow\ TS = \mbox{Tr}[\rho H] -k_BT\ln{\cal N}\ \longrightarrow $$ $$ F = k_BT\ln{\cal N}. $$ Here ${\cal N}$ is the normalization constant $$ {\cal N} = \left(\mbox{Tr} e^{-H/k_BT} \right)^{-1}. $$ The usual formula follows from the last two equations.
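The identity $F = E - TS = -k_B T \ln \mathrm{Tr}\, e^{-H/k_B T}$ can be verified numerically for a small system (a sketch with $k_B = 1$ and an illustrative diagonal two-level Hamiltonian, where all traces reduce to sums over eigenvalues):

```python
import math

# Units with k_B = 1; a diagonal two-level Hamiltonian (an illustrative choice).
energies = [0.0, 1.0]
T = 0.7

Z = sum(math.exp(-e / T) for e in energies)        # Tr e^{-H/T}
p = [math.exp(-e / T) / Z for e in energies]       # eigenvalues of rho
E = sum(pi * e for pi, e in zip(p, energies))      # Tr[H rho]
S = -sum(pi * math.log(pi) for pi in p)            # -Tr[rho ln rho]

F_direct = E - T * S
F_formula = -T * math.log(Z)                       # -k_B T ln Tr e^{-H/k_B T}
```

The two expressions agree to rounding error, which is exactly the point of keeping the $\ln \mathcal{N}$ term.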
{ "domain": "physics.stackexchange", "id": 60316, "tags": "energy, statistical-mechanics" }
Do epoxide rings react with bases?
Question: We've learned in class that epoxide ring openings can be catalyzed with acids, and I seem to recall either the professor or a classmate mentioning that they also react with bases, but some quick internet searching didn't turn up anything useful. Do epoxide rings react with bases, and if so, what is the mechanism? (Dis)claimer: this is just for my own curiosity/learning, and not being asked as a homework/test question (yet). Answer: Do epoxide rings react with bases? Yes, they can be opened, e.g. with alkoxides. What is the mechanism? In principle, this is an $S_N2$ reaction, with the typical approach of the nucleophile from the back side. The difference from the non-epoxide case is that removal of the leaving group (bond breaking, ring opening) is facilitated by the ring strain. But the general rules for the reactivity of alkyl-substituted centres in $S_N2$ nevertheless apply: the higher the substitution, the lower the rate. This is attributed to the steric hindrance of the respective centre. Note that the regioselectivity is different under acidic conditions!
{ "domain": "chemistry.stackexchange", "id": 2968, "tags": "organic-chemistry, reactivity" }
Orbital velocity of a star
Question: I am supposed to show that the orbital speed of a star is proportional to $\sqrt{r}$, where $r$ is the distance from the center of a galaxy. Suppose the galaxy's mass is uniformly distributed over a disc. I am confused, since I thought orbital velocity would be defined as: $$v=\sqrt{ \frac{G \cdot M}{r}}$$ In this case, the velocity is inversely proportional to the square root of the distance. Is there something I am missing here? Edit: I also hope that the answer does not lie simply in this algebraic manipulation: $$v=\frac{\sqrt{r} \cdot \sqrt{GM}}{r}$$ I. The ratio between the area that the star orbiting the galaxy center covers and its respective mass, compared with the area of the disk-like galaxy itself and its mass: $\frac{\pi r_s^2}{x}=\frac {\pi r_g^2}{m_g}$ Plugging in the formula for the enclosed mass, $m= \frac{v^2 r}{G}$, which was derived from $G\frac{mM}{r^2}=m \frac{v^2}{r}$, we get $\frac{r_g}{v_g^2}=\frac{r_s}{v_s^2}$, and isolating for the orbital speed of the star... $v_s=v_g \cdot \sqrt{\frac{r_s}{r_g}}$ This was my desperate attempt, and I cannot figure this out, it seems. Answer: The distribution of the mass is important! The law $F=GMm/r^2$ is only true for point masses (or, through Newton's shell theorem, spherically symmetric objects). Your derivation is close to the way I've seen it justified. You do have $v_s=\sqrt{G M/r_s}$, but $M$ is a function of $r_s$. (This is loosely justified through Newton's shell theorem.) For $R$ the radius of the galaxy and $r_s$ the star's position in the galaxy, we have $M(r_s)=M_{g} r_s^2/R^2$, so $v=\sqrt{G M_{g} r_s/R^2} \propto \sqrt{r_s}$. I'm not a huge fan of this justification because Newton's shell theorem doesn't apply. A better analysis would be to look at the full vector integral for $\vec{F}$, for a disk of radius $R$ with surface density $\sigma$ and a test mass placed at $(-r_s,0)$.
\begin{align*} \frac{1}{m}\vec{F}&=\int_0^{2\pi} d\theta \int_0^{R} r dr G\sigma \frac{(r_s+r\cos(\theta),0)}{\|(r_s+r\cos(\theta),r\sin(\theta))\|^3}\\ &=G\sigma\int_0^{2\pi}d\theta \int_0^1 \ell d\ell \frac{(r_s/R+\ell\cos(\theta),0)}{\|(r_s/R+\ell\cos(\theta),\ell \sin(\theta))\|^3}\\ &=G\sigma f(r_s/R) \end{align*} Where I made the change of variables $r=\ell R$. If we're doing a back-of-the-envelope calculation, one idea is to toss out the unitless function $f$. Then $F/m=v^2/r_s=\text{const.}$ and we have the desired result $v_s\propto \sqrt{r_s}$. If we want to be precise, the full function $f$ is very difficult and annoying to evaluate, so you can see why no one bothers with it! For example, for $y\gg 1$ we expect $f(y)\approx \pi/y^2$ because the force should behave like $GM/r_s^2$ with $M=\pi\sigma R^2$. For $y$ just greater than one, the force diverges (look at $\theta=\pi$, $\ell=1$) and we would have to add a finite thickness to the disk to get a finite result. For $0<y<1$, you have to use the Cauchy principal value to get a finite result, because you'd have to evaluate things like $\text{p.v.}\int_{-1}^1 x/|x|^3 dx=0$. But all this only matters if we're being precise. The overall point is that the force behaves like $G\sigma$ times some unitless function of $r_s/r_g$ which, for estimation purposes, can be treated as constant. I can't help but make plots of the velocity $v=\sqrt{r f(r)}$ for $\sigma=G=R=1$. I regularize the function by defining: $$f_\varepsilon(r)=\int_0^{2\pi}d\theta \int_0^1 \ell d\ell \frac{r+\ell\cos(\theta)}{\left(r^2+\ell^2+2r\ell\cos(\theta)+\varepsilon^2\right)^{3/2}}$$ $\varepsilon\to 0$ for an infinitely thin disk, and $\sqrt{ r f_{0.1}(r)}$ is the velocity for a circular orbit for a galaxy disk with thickness $0.1 R$. The green dashed line is the expected result for a point-mass galaxy $v_s\propto r_s^{-1/2}$, the red line is $v_s\propto r_s^{1/2}$, and the other lines are the full messy truth for varying galaxy thickness.
Mathematica source code on pastebin. Stuff like this has been pretty useless in the astrophysics courses I've taken because real galaxies have dark matter halos, central bulges, and all sorts of other things, so the issue of an infinitely thin disk of constant surface density never comes up.
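The rough enclosed-mass estimate from the first part of the answer is easy to check numerically (a sketch with $G = M_g = R = 1$ as illustrative units; it deliberately ignores the unitless factor $f$ discussed above):

```python
import math

G = M_g = R = 1.0  # illustrative units

def enclosed_mass(r):
    """Uniform-surface-density disk: mass inside radius r grows like r^2."""
    return M_g * r * r / (R * R)

def orbital_speed(r):
    """The loose shell-theorem-style estimate v = sqrt(G M(r) / r)."""
    return math.sqrt(G * enclosed_mass(r) / r)

# v scales like sqrt(r): quadrupling the radius doubles the speed.
ratio = orbital_speed(0.8) / orbital_speed(0.2)
```

This is exactly the $v_s \propto \sqrt{r_s}$ behavior the exercise asks for, valid only inside the disk.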
{ "domain": "physics.stackexchange", "id": 89788, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, orbital-motion" }
Free particle propagator, wavefunction not moving problem
Question: I'm learning the QM propagator and the first example is of course the free particle: $\hat{H}=\frac{p^2}{2m}$, then the new wavefunction is found by: $$\psi(x,t)=\int dx_0\;K(x,t;x_0,t_0)\;\psi(x_0,t_0)$$ where $K=\langle x|U(t,t_0)|x_0\rangle$. Thus, evaluating the time-evolution operator, we find: $$\psi(x,t)=\int dx_0\;\frac{A}{\sqrt{t}}\exp\bigg({\frac{im(x-x_0)^2}{2t}}\bigg)\psi(x_0,t_0)$$ with $A$ being a constant. So the problem for me is this: given a Gaussian $\psi(x_0,t_0)$ centered at $x_0$, I supposed that the later-time wavefunction $\psi(x,t)$ would have a maximum at $\bar x = x_0 + (p/m)(t-t_0)$, but it seems that the wavefunction stays centered at $x_0$ and just spreads. Am I in the proper frame of the particle? How can I see the particle moving at all? Answer: If you start with a particle having a definite position, e.g., $\psi(x_0,t_0)=\delta(x_0-y)$, then its momentum is completely uncertain - it can go either right or left with equal probability. Thus we do not expect directed motion. If we start with a particle having a definite momentum, $\psi(x_0,t_0)\propto e^{ikx_0}$, it will remain in this state, since it is an eigenstate of the Hamiltonian - nothing changes. Now, you can experiment with Gaussian wave packets - having finite position and momentum uncertainty - these will indeed behave as moving with a speed equal to the mean speed of the initial packet. But this information is encoded in $\psi(x_0,t_0)$, not in the propagator.
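The last point can be checked numerically by propagating a Gaussian packet with the free-particle evolution in Fourier space (a hedged sketch with $\hbar = m = 1$ and illustrative grid parameters; this is not taken from the question):

```python
import numpy as np

# Free-particle evolution in Fourier space (hbar = m = 1, illustrative grid).
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def evolve_mean_x(k0, t):
    """Evolve a Gaussian packet with mean momentum k0 and return <x> at time t."""
    psi0 = np.exp(-x**2 / 4.0) * np.exp(1j * k0 * x)      # centered at x = 0
    psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2.0) * np.fft.fft(psi0))
    prob = np.abs(psi_t)**2
    return np.sum(x * prob) / np.sum(prob)
```

With k0 = 0 the packet only spreads and its center stays put, exactly as the questioner observed; giving the packet a nonzero mean momentum k0 makes the center move at k0/m.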
{ "domain": "physics.stackexchange", "id": 95023, "tags": "quantum-mechanics, wavefunction, time-evolution, propagator" }
RNN: why Wx + Uh instead of W[x,h]
Question: Traditionally, a state for RNN is computed as $$h_t = \sigma(W\cdot \vec x + U\cdot \vec h_{t-1} + \vec b)$$ For a RNN, why to add-up the terms $(Wx + Uh_{t-1})$ instead of just having a single matrix times a concatenated vector: $$W_m[x, h_{t-1}]$$ where $[...]$ is concatenation. In other words, we would end up with a long vector like $\{x_1, x_2, x_3, h_{1,t-1}, h_{2,t-1}, h_{3,t-1} \}$ multiplied by $W_m$. It seems like the second approach would have a significantly larger matrix, which has more elements than $W$ and $U$ combined. Does that mean $W$ and $U$ are a simplification, what do we lose by using them, and adding up the results? Answer: Theoretically, the formula with two matrices is more clear and self-evident, I think that's the reason why it's used more often. In practice, both approaches are actually used in production and hence are equivalent. It's just a matter of preference. Tensorflow For example, Tensorflow is often optimized for performance. Here's how basic RNN cell is implemented there (tensorflow/python/ops/rnn_cell_impl.py): def call(self, inputs, state): """Most basic RNN: output = new_state = act(W * input + U * state + B).""" gate_inputs = math_ops.matmul( array_ops.concat([inputs, state], 1), self._kernel) gate_inputs = nn_ops.bias_add(gate_inputs, self._bias) output = self._activation(gate_inputs) return output, output A single matrix multiplication is more efficient, so it's applied, even though the comment describes the expanded formula with two matrices. Keras On the other hand, Keras often chooses simplicity and clarity over performance. Here's its implementation (keras/layers/recurrent.py): def call(self, inputs, states, training=None): prev_output = states[0] ... if dp_mask is not None: h = K.dot(inputs * dp_mask, self.kernel) else: h = K.dot(inputs, self.kernel) if self.bias is not None: h = K.bias_add(h, self.bias) ... 
output = h + K.dot(prev_output, self.recurrent_kernel) if self.activation is not None: output = self.activation(output) The class thus makes it easy to access the two matrices separately (self.kernel and self.recurrent_kernel). Pytorch Pytorch's approach is closer to Keras's. Moreover, not only do they use separate kernel matrices, they also have two bias vectors, and these four arrays are accessible to the client code. Despite these differences, all three libraries are functionally equivalent.
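The equivalence of the two parameterizations is a plain block-matrix identity, $Wx + Uh = [W\,|\,U]\,[x;h]$, which is easy to verify numerically (an illustrative sketch, not code from any of the libraries above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h = 3, 4
W = rng.normal(size=(n_h, n_in))   # input kernel
U = rng.normal(size=(n_h, n_h))    # recurrent kernel
x = rng.normal(size=n_in)
h = rng.normal(size=n_h)

two_terms = W @ x + U @ h                       # W x + U h
W_m = np.hstack([W, U])                         # block matrix [W | U]
one_term = W_m @ np.concatenate([x, h])         # W_m [x, h]
```

So nothing is lost by writing $W$ and $U$ separately; the concatenated matrix has exactly the same number of parameters, just stored in one array.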
{ "domain": "datascience.stackexchange", "id": 2923, "tags": "machine-learning, linear-algebra, rnn" }
How do I check what is wrong in my full-adder code?
Question: I am trying to solve the first question on the qiskit test which is writing a code for a full adder. So based on my research if I have $A$ q[0], $B$ q[1] and $C$ in q[2] as input and Sum and Cout as output, I should be able to produce the correct outputs by the following gates: q[0] XOR1 q[1] ---> q[4] q[0] AND1 q[1] ---> q[3] q[2] XOR2 q[4] ---> q[5] (SUM) q[2] AND2 q[4] ---> q[6] q[3] OR q[6] ---> q[7] (COUT) Writing the following program I get that my answer is producing wrong results : from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit from qiskit import IBMQ, Aer, execute ##### build your quantum circuit here #Define registers and a quantum circuit q = QuantumRegister(8) c = ClassicalRegister(2) qc = QuantumCircuit(q,c) # Preparing inputs qc.x(q[0]) # Comment this line to make Qbit0 = |0> qc.x(q[1]) # Comment this line to make Qbit1 = |0> qc.x(q[2]) # Comment this line to make Qbit2 = |0> ( carry-in bit ) qc.barrier() # AND gate1 implementation qc.ccx(q[0],q[1],q[3]) qc.barrier() # XOR gate1 implementation qc.cx(q[0],q[4]) qc.cx(q[1],q[4]) qc.barrier() # XOR gate2 implementation qc.cx(q[2],q[5]) qc.cx(q[4],q[5]) qc.barrier() # AND gate2 implementation qc.ccx(q[2],q[4],q[6]) qc.barrier() #OR gate implementation qc.cx(q[3],q[7]) qc.cx(q[6],q[7]) qc.ccx(q[3],q[6],q[7]) qc.barrier() # Measuring and put result to classical bit # ( sum ) qc.measure(q[5],c[0]) # ( carry-out ) qc.measure(q[7],c[1]) # execute the circuit by qasm_simulator backend = Aer.get_backend('qasm_simulator') job = execute(qc, backend, shots=1000) result = job.result() count = result.get_counts() print(count) qc.draw(output='mpl') Grading tells me that my results are not matching, but I cannot figure out what is wrong with my code. Thank you so much for help. Answer: If I am correct, I suppose you are talking about the Qiskit Challenge 2020. 
A possible reason why your circuit is being graded as wrong is that the question asks you to construct the circuit for a full adder and give it the input $A=1$, $B=0$ and $X=1$. However, I think that, as per your code, you are preparing the qubits to be $|ABX\rangle = |111\rangle$ instead of $|101\rangle$. Barring that, your circuit works perfectly fine from what I could analyze.
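As a sanity check on the expected measurement outcomes, the Boolean logic the circuit encodes can be evaluated classically (a hedged sketch in plain Python, not Qiskit code):

```python
def full_adder(a, b, x):
    """Classical full adder: the Boolean logic the quantum circuit encodes."""
    s = a ^ b ^ x                     # sum bit (two cascaded XORs)
    cout = (a & b) | (x & (a ^ b))    # carry-out (two ANDs joined by an OR)
    return s, cout

# The challenge input A=1, B=0, X=1 should measure sum=0, carry-out=1.
```

Comparing these classical outputs against the simulator's counts makes it easy to spot an input-preparation mistake like the one above.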
{ "domain": "quantumcomputing.stackexchange", "id": 2134, "tags": "quantum-gate, programming, circuit-construction" }
ardrone_autonomy is not receiving correct navdata from AR.Drone2.0
Question: I have been running ardrone_autonomy with an AR.Drone1.0 and it has been working great. Today I bought an AR.Drone2.0 and assumed that my code would work out of the box. Unfortunately ardrone_autonomy's ardrone_driver throws this error over and over again: One option (0) is not a valid option because its size is zero [Navdata] Checksum failed : 1006 (distant) / 34055 (local) Tag 12504 is an unknown navdata option tag Is there something I need to change in the code / launch file to make it compatible with the AR.Drone2.0? I should also mention (though I don't think it's relevant) that I modified ardrone_autonomy to make it catkin compatible. Originally posted by ounsworth on ROS Answers with karma: 37 on 2013-10-15 Post score: 0 Answer: You have to check the drone firmware: it must be 2.3.3. Look here: https://github.com/AutonomyLab/ardrone_autonomy/issues/69. We are all waiting for the new SDK from Parrot. Originally posted by mmyself with karma: 138 on 2013-10-16 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ounsworth on 2013-10-29: Worked like a charm; the firmware was very easy to downgrade. Thanks!
{ "domain": "robotics.stackexchange", "id": 15875, "tags": "ros, ardrone-autonomy" }
Are photolysis reactions considered exothermic or endothermic?
Question: I know that photolysis is when a chemical reaction (usually decomposition) is forced by photons. I also note that there is a general warming in the stratosphere, due to the photolysis of ozone $$\ce{O_3 + $h\nu$ -> O + O_2}$$ and the subsequent restorative mechanism $$\ce{O + O_2 + M ->O_3 + M}$$ where $\ce{M}$ is a third body used to carry off excess energy. To me, that says that the photolysis of ozone is an endothermic reaction, and the reformation of ozone is an exothermic reaction, as the free body takes away excess energy (assumably thermal energy). Am I correct in generalizing that photolysis is generally an endothermic reaction? If so, could the enthalpy released by the second equation be quantified by the frequency of the photon? Answer: In general it could be either. In your example you will have to work out the heat of formation of ozone and of O atoms (presumably it is an endothermic reaction). In general what the photon is doing is providing a way to overcome an activation barrier in the ground state by opening up a pathway from an excited state to the products. This last step is generally exothermic. The enthalpy released in the second reaction is less than the photon energy by the difference in energy of the photon and that of products, an oxygen atom in your case as the heat of formation of O$_2$ is zero. The sketch gives the general idea, I hope, for an endothermic reaction :) Notes: The barriers may also be called transition states. There may be no barrier in the excited state. The crossing from the excited state potential to the product may be controlled by Landau-Zener behaviour or be a conical intersection. The far right upper line (not labelled) is the potential of the excited state of the product
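On the last question - quantifying what the photon contributes - the photon energy follows from $E = h\nu = hc/\lambda$, and it is that energy, minus whatever ends up in the products, that bounds the enthalpy balance. A small sketch (the 254 nm UV wavelength is an illustrative choice, not taken from the question):

```python
# Photon energy per mole from wavelength, E = h*c/lambda (SI constants).
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m/s
N_A = 6.022e23      # Avogadro constant, 1/mol

def photon_energy_kj_per_mol(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) * N_A / 1000.0

# A 254 nm UV photon carries roughly 470 kJ/mol, which caps the energy a
# single-photon photolysis step at that wavelength can supply.
```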
{ "domain": "chemistry.stackexchange", "id": 7194, "tags": "thermodynamics, photochemistry" }
Maximal work from cooling a portion of water
Question: Let's consider a portion of water of mass $m$. We want to find out how much work (electric energy) we can acquire by cooling it from temperature $T_1$ to temperature $T_0$. The most efficient way is to use the Carnot engine. We'll get the work $$W = \left(1 - \frac {T_0}T\right) Q_w ~~~~~(1)$$ if we take the heat $Q_w$ from the water. On the other hand, the water will cool down by $\Delta T$ then, $$\Delta T = \frac {Q_w}{mc} ~~~~~(2)$$ where $c$ is the water's specific heat. So $$W = \left(1 - \frac {T_0}T\right) mc\Delta T ~~~~~(3)$$ Then I'd say that the work acquired from cooling down the water will be $$W = \left(1 - \frac {T_0}{T_1}\right) mc\left(T_1 - T_0\right) ~~~~~(4)$$ But in the solution of a problem it is claimed that $$ W = \int_{T_0}^{T_1} \left(1 - \frac {T_0}T\right) mc dT~~~~~(5)$$ Why is it so and not as proposed by me in formula (4)? Why am I wrong? Answer: You must take into account the fact that the efficiency of your Carnot engine is continuously changing while you are cooling the water, because $T$ changes. For this reason your equation (3) is correct if applied to a very small (infinitesimal) cycle, and should read $$dW = \left( 1-\frac{T_0}{T} \right) mc dT$$ By integrating you obtain the correct answer.
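The integral in (5) has the closed form $W = mc\left[(T_1-T_0) - T_0\ln(T_1/T_0)\right]$, and a quick numerical check shows why (4) is too optimistic: it applies the initial (best) Carnot efficiency to all of the heat (a sketch with illustrative numbers, 1 kg of water cooled from 373 K to 273 K):

```python
import math

m, c = 1.0, 4186.0        # kg, J/(kg K) -- illustrative values
T1, T0 = 373.0, 273.0     # K

# Closed form of (5): W = m c [ (T1 - T0) - T0 ln(T1/T0) ]
W_correct = m * c * ((T1 - T0) - T0 * math.log(T1 / T0))

# Midpoint-rule check of the integral in (5)
n = 100_000
dT = (T1 - T0) / n
W_numeric = sum((1 - T0 / (T0 + (i + 0.5) * dT)) * m * c * dT for i in range(n))

# Formula (4) applies the initial efficiency (1 - T0/T1) to all the heat,
# so it overestimates the extractable work.
W_eq4 = (1 - T0 / T1) * m * c * (T1 - T0)
```

As the water cools, the Carnot efficiency shrinks toward zero, which is exactly what the integral accounts for and formula (4) misses.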
{ "domain": "physics.stackexchange", "id": 26527, "tags": "thermodynamics" }
Why does relative humidity appear limited for temperatures above 80°F?
Question: I recently compiled ten years of NOAA local climatological data. I noticed that the maximum relative humidity dropped linearly from 100% at about 80°F to 20% at about 110°F. Nothing obvious comes to mind that explained this observation. Here is my scatterplot: Here is graph showing the edge of interest: I would have expected to see relative humidity values at or near 100%, even for temperatures near 100°F. Obviously, that's not what I see. Maybe this has something to do with a limit of absolute humidity or density? Answer: Nothing obvious comes to mind that explained this observation. Consider a simple model with the following aspects/assumptions: The coastal regions of large bodies of water on Earth are at most as hot as about 80°F. (Some exceptions exist, such as the Persian Gulf. If the data are taken from weather stations near areas of human occupancy, such as airports, note also that regions with a wet-bulb temperature much greater than about 80°F are generally hazardous to humans.) Water enters the atmosphere predominantly through evaporation from these large bodies of water, up to the maximum relative humidity at that maximum temperature. That fixes one point of the line you observed: 100% relative humidity at 80°F (300 K). The vapor pressure $P_\text{vapor}$ of water increases with increasing temperature $T$; a simple model of this exponential relation is the August equation $P_\text{vapor}\approx\exp\left(20- \frac{5100}{T}\right)$, with $T$ measured in kelvins. The relative humidity corresponds to the actual partial pressure of water vapor relative to the saturation vapor pressure at that temperature. We characterize this behavior in part through psychrometric charts. Therefore, we should expect a downward-sloping maximum relative humidity with increasing temperature as the saturated vapor is transported inland to regions over land that may be hotter. 
The maximum mass of water vapor remains the same, as does the maximum absolute humidity, but the maximum relative humidity drops with increasing temperature. What's the slope of that relationship? The maximum relative humidity according to this model is $$\text{RH}=\frac{\exp\left(20- \frac{5100}{300}\right)}{\exp\left(20- \frac{5100}{T}\right)},$$ or $\text{RH}=\exp\left[5100\left(\frac{1}{T}-\frac{1}{300}\right)\right],$ which for small changes around 300 K is approximately $\text{RH}= 1-\frac{17(T-300)}{300}$. What maximum relative humidity do we therefore expect at 100°F, 311 K, for example? We expect 38%, pretty much exactly what your data tell us.
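The linearization step in the answer can be checked numerically (a sketch of the answer's own algebra, nothing more):

```python
import math

def max_rh(T):
    """The answer's model: RH = exp(5100 * (1/T - 1/300)), T in kelvins."""
    return math.exp(5100.0 * (1.0 / T - 1.0 / 300.0))

# The slope at 300 K should match the linearized coefficient -17/300:
eps = 1e-6
slope = (max_rh(300 + eps) - max_rh(300 - eps)) / (2 * eps)

# The linearized estimate at 100 F (311 K), as quoted in the answer:
rh_linear = 1 - 17.0 * (311 - 300) / 300.0
```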
{ "domain": "physics.stackexchange", "id": 94421, "tags": "temperature, weather, humidity" }
Implementing plugins in my Ruby social aggregator app
Question: Some time ago I started with a small Ruby project. I call it social_aggregator. This software aggregates information from different networks to one xml-stream, which you can reuse. For instance on your personal website to show some of your activity to your audience or as your personal news feed. I've written something like a plugin interface and a manager, which loads the plugin during the boot up process. A plugins aggregates data from a social network or another data source. One plugin per data source. There is a PluginFrame.rb, which provides the interface and generalize some plugin functionality: require 'celluloid' require 'digest/md5' require 'app/models/Plugin' require 'app/models/Action' require 'app/models/Log' require 'app/models/Message' require 'app/models/MessageCategory' require 'app/models/Follower' require 'app/utils/Logging' require 'app/utils/Setting' class PluginFrame include Setting include Logging include Celluloid finalizer :unload def initialize(plugin_model) @plugin = plugin_model settings_path @plugin.conf_path logger.info "The plugin #{@plugin.name} has been initialized." end def run logger.warn "The plugin #{@plugin.name} is not implemented!" terminate end def unload logger.warn "The plugin #{@plugin.name} is terminating!" end protected # Returns a persisted action by given name def get_action(name) Action.find_or_create_by!(name: name, plugin: @plugin) end # Return whether last action occurance was before given time def action_ready?(action, timer) time_since_last_occurance = action.last_occurance unless time_since_last_occurance.nil? || time_since_last_occurance > timer logger.info "Possible aggregation in #{timer - time_since_last_occurance} seconds." 
return false end return true end end The PluginManager.rb checks if a plugin is valid and loads them: require 'celluloid' require 'app/models/Plugin' require 'app/utils/Logging' require 'app/utils/Setting' require 'app/plugins/PluginValidator' require 'app/plugins/PluginWorker' class PluginManager include Logging include Setting include Celluloid # Stores all valid plugin models @plugin_definitions # Stores all plugin instances (threads) @plugin_instances def initialize logger.info 'Initializing plugin manager' @plugin_definitions = [] @plugin_instances = [] initialize_plugins end def defined_plugins @plugin_definitions end def loaded_plugins @plugin_instances end def run logger.debug 'Aggregating data from plugins.' if loaded_plugins.count <= 0 logger.info 'No plugins loaded to aggregate data from.' Aggregator::shutdown return elsif loaded_plugins.count > 2 pool_size = loaded_plugins.count else pool_size = 2 end plugin_worker = PluginWorker.pool(size: pool_size) loaded_plugins.map { |p| plugin_worker.future.run(p) }.map(&:value) end private def initialize_plugins plugins = [] logger.info 'Initializing plugins' search.each do |p| plugin = PluginValidator::validate p unless plugin.nil? plugins << plugin end end logger.info "Found #{plugins.count} valid plugins." plugins.each do |p| plugin = Plugin.find_or_initialize_by(name: p.name) plugin.update_attributes( class_name: p.class_name, conf_path: p.conf_path, class_path: p.class_path ) @plugin_definitions << plugin end logger.info 'Persisted plugin information.' if plugins.count > 0 @plugin_definitions.each do |p| begin require p.class_path rescue => e logger.warn "Couldn't parse file #{p.class_path}. Aggregator is not able to use the #{p.name} plugin." logger.debug e next end begin if Object::const_get(p.class_name).ancestors.include? 
PluginFrame instance = Object::const_get(p.class_name).spawn(p) @plugin_instances << instance logger.info "Plugin #{p.name} initialized" else raise end rescue => e logger.warn "Couldn't instantiate class #{p.class_name} or class is not a plugin. Aggregator is not able to use the #{p.name} plugin." logger.debug e end end logger.warn 'Found no useable plugin!' if @plugin_instances.empty? end # Search for plugins def search(directory = setting.plugin_folder) plugins = Dir.glob("#{directory}/**") if Aggregator::environment == :development plugins.each do |p| logger.debug "Found plugin folder #{p}. Validating plugin now." end end plugins end end My questions are: How would you modularize the functionality to pull the data from different sources? What about putting the plugins into gems? If you're interested in the project, you can find it here: https://github.com/openscript/social_aggregator Answer: Most of your code is not here, but some general points: File naming conventions: in Ruby, the file-name convention is snake case - your files should be named after your classes, but in snake case - meaning plugin_frame.rb, plugin_manager.rb, message_category.rb. Remember to change your requires also. Finalizers are rarely if ever used in Ruby, and I see no compelling reason to use them in your code. Ruby is a duck-typed language - you don't have to declare methods just to say that they are not implemented - simply don't implement them. If you don't want to allow the initialization of the class PluginFrame you can refactor it to be a module. A module cannot be initialized by itself, but classes which include it inherit its methods. You should abstract the use of members in your code - don't use @plugin, instead add an attr_reader :plugin and use plugin.
If you are using Rails you don't need to explicitly require the models; they should be automatically loaded when the server starts. Refrain from using unless when there are actions both for the positive and the negative results - if is much more readable. Ruby is intended to be more compact, so if the return value should be true or false due to some condition, simply return the result of the condition (no need for return true). Even the return keyword is not needed. Refactored plugin_frame.rb: require 'celluloid' require 'digest/md5' require 'app/utils/logging' require 'app/utils/setting' class PluginFrame include Setting include Logging include Celluloid protected attr_accessor :plugin # Returns a persisted action by given name def get_action(name) Action.find_or_create_by!(name: name, plugin: plugin) end # Return whether last action occurrence was before given time def action_ready?(action, timer) time_since_last_occurrence = action.last_occurrence time_since_last_occurrence && time_since_last_occurrence <= timer end end This class is much more readable, succinct, and more easily shows its usage and responsibility. Unfortunately, this class seems to contain some partial responsibility - reading an action, checking if it's ready - but it misses other actions which are assumed to be handled elsewhere - setting an action's last occurrence and saving its state, for example. I assume this is written in some other class, but it breaks the encapsulation guideline - put the logic for loading and saving an object in the same place, evaluating and maintaining state in the same place, etc... As for the plugin_manager.rb, some more points: You don't need to declare members in your class (@plugin_definitions, etc.). When they are initially assigned they 'magically' appear. You define getters, which are fine, although you can use attr_accessors and attr_readers instead. Anyway, you give your getters a different name than the members. This is not advisable - give them the same name.
Instead of initializing an array, then using each on another array, and pushing the results to the first array, simply use map and select. Method naming - if you feel that a method needs a comment (namely search) to explain what it does, it might be better to rename it to be self-explanatory. Refactored: class PluginManager include Logging include Setting include Celluloid def initialize initialize_plugins end attr_reader :plugin_definitions, :plugin_instances def run logger.debug 'Aggregating data from plugins.' if plugin_instances.count <= 0 logger.info 'No plugins loaded to aggregate data from.' Aggregator::shutdown return end pool_size = [plugin_instances.count, 2].max plugin_worker = PluginWorker.pool(size: pool_size) plugin_instances.map { |p| plugin_worker.future.run(p) }.map(&:value) end private def initialize_plugins logger.info 'Initializing plugins' plugins = search_for_plugins.map { |p| PluginValidator::validate p }.compact logger.info "Found #{plugins.count} valid plugins." @plugin_definitions = plugins.map do |p| Plugin.find_or_initialize_by(name: p.name).tap do |plugin| plugin.update_attributes( class_name: p.class_name, conf_path: p.conf_path, class_path: p.class_path ) end end logger.info 'Persisted plugin information.' if plugins.count > 0 @plugin_instances = plugin_definitions.map do |p| begin require p.class_path begin if Object::const_get(p.class_name).ancestors.include? PluginFrame Object::const_get(p.class_name).spawn(p) else raise end rescue => e logger.warn "Couldn't instantiate class #{p.class_name} or class is not a plugin. Aggregator is not able to use the #{p.name} plugin." logger.debug e end rescue => e logger.warn "Couldn't parse file #{p.class_path}. Aggregator is not able to use the #{p.name} plugin." logger.debug e end end logger.warn 'Found no useable plugin!' if plugin_instances.empty? 
end def search_for_plugins(directory = setting.plugin_folder) Dir.glob("#{directory}/**").tap do |plugins| if Aggregator::environment == :development plugins.each do |p| logger.debug "Found plugin folder #{p}. Validating plugin now." end end end end end To package your code as a gem you can start by reading the guide As for separating the plugins into separate gems - that is mostly a matter of opinion, I would advise against that - implement your plugins in the main gem, which then can be used as reference implementations for your gem users, which will be able to develop their own plugins in their applications, or within their own gems.
{ "domain": "codereview.stackexchange", "id": 6161, "tags": "ruby, plugin" }
What is the difference between sparse conditional constant propagation and constant propagation?
Question: Reading the Wikipedia pages on these topics, it seems that sparse conditional constant propagation (SCCP) is a more powerful form of constant propagation (CP)? E.g. all optimizations available in CP are subsumed in SCCP. Answer: In general, SCCP will remove unreachable blocks based on the preceding constant propagation; that's why it can propagate more constants (some phi operands can be removed). And it's sparse, which means it runs on SSA-form IR, which makes the analysis and procedure faster. I read from a blog that there are four kinds of constant propagation: (simple) constant propagation, conditional constant propagation, sparse constant propagation, and sparse conditional constant propagation (the most powerful). Based on the attributes of each algorithm, I guess we can understand what they do.
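To see concretely what "conditional" buys you, here is a toy sketch of my own (plain Python standing in for a compiler pass; the tiny example program, the `NAC` marker, and the function names are all invented for illustration):

```python
# Toy sketch (not a real SSA pass): contrast plain constant propagation,
# which merges both branch arms, with conditional constant propagation,
# which first evaluates the branch condition on the known constants.
# "NAC" stands for "not a constant".
NAC = "NAC"

def meet(a, b):
    """Merge two values flowing into the same variable."""
    return a if a == b else NAC

def propagate(conditional):
    """Propagate constants through the tiny program
        x = 1
        if x > 0: y = 2
        else:     y = 3
        z = y + x
    and return the value inferred for z."""
    x = 1
    if conditional:
        # Conditional CP: x is known to be 1, so the branch condition is
        # statically true and the else-arm is unreachable -> y is 2.
        y = 2 if x > 0 else 3
    else:
        # Plain CP: both arms are assumed reachable, so y = meet(2, 3).
        y = meet(2, 3)
    return y + x if y != NAC else NAC
```

With the conditional evaluation, `z` is proven to be the constant 3; without it, `y` (and hence `z`) is "not a constant", even though the else-arm can never execute.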
{ "domain": "cs.stackexchange", "id": 21046, "tags": "program-optimization" }
Why is the Earth not an inertial frame of reference?
Question: From many sources I have found the explanation that the Earth is not an inertial frame of reference because it rotates around its axis. However, nobody mentions the rotation about the Sun. What I thought was, since the Earth rotates around the Sun, there is a centripetal force acting on the Earth, hence an object that is considered to have zero acceleration is actually being accelerated by the Sun. People look at this from the perspective of the rotating frame. And actually, I don't even get why rotation about its axis can be a reason for non inertial frame of reference. Answer: Let's look at the definition: An inertial frame of reference in classical physics and special relativity possesses the property that in this frame of reference a body with zero net force acting upon it does not accelerate; that is, such a body is at rest or moving at a constant speed in a straight line. An inertial frame of reference can be defined in analytical terms as a frame of reference that describes time and space homogeneously, isotropically, and in a time-independent manner. Conceptually, the physics of a system in an inertial frame have no causes external to the system Italics mine. The crucial word is conceptually. It carries after it the whole concept of measurement, and physics is about experiments and measurements, and the theories and definitions are tools to describe and then mathematically model the observations, so that one gets a predictive theory. Measurements come with experimental errors, and thus how complicated the theoretical model one is using depends on these errors. One takes the simplest assumptions, it makes no sense to use the galactic reference frame ( we are also rotating around the galactic center) when measuring a force on bodies on earth, and also the measurement will depend on our measuring instruments. 
For example, for usual engineering uses we accept that the earth is flat; the errors of the curvature of the earth to the details of a building are so small that they are within measurement errors. We accept that the earth is rotating, a non inertial frame, when calculating the Coriolis force and the distances planes travel etc., because there the force from the rotation effect is larger than the instrument errors. So it depends on what you are measuring, whether you can use/assume that the earth is in an inertial frame within measurement errors or not. It depends on the problem at hand.
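To put rough numbers on "within measurement errors", here is a quick order-of-magnitude check of my own (not from the answer; the constants are standard textbook values), comparing the centripetal accelerations due to Earth's spin and Earth's orbit around the Sun:

```python
import math

# Assumed standard values
SIDEREAL_DAY = 86164.0   # s
YEAR = 3.156e7           # s
R_EARTH = 6.371e6        # m, equatorial radius
R_ORBIT = 1.496e11       # m, Earth-Sun distance

def centripetal(period, radius):
    """a = omega^2 * r for uniform circular motion."""
    omega = 2 * math.pi / period
    return omega ** 2 * radius

a_spin = centripetal(SIDEREAL_DAY, R_EARTH)   # spin about Earth's axis
a_orbit = centripetal(YEAR, R_ORBIT)          # orbit around the Sun
```

Both come out at a few times $10^{-2}$ and $10^{-3}$ m/s² respectively, tiny compared with $g \approx 9.8$ m/s², which is why treating the Earth as inertial is fine for most lab-scale measurements.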
{ "domain": "physics.stackexchange", "id": 63631, "tags": "newtonian-mechanics, reference-frames, inertial-frames, earth, approximations" }
Compact, simple calculator made with python
Question: num1 = int(input("What is your first number you would like me to calculate?(not a decimal)")) num2 = int(input("What is your second number you would like me to calculate?(not a decimal)")) calculation = input("How would you like me to calculate this?") def add(): if calculation in ["+", "add", "addition"]: answer = num1 + num2 print(answer) def subtract(): if calculation in ["-", "subtract", "subtraction"]: answer = num1 - num2 print(answer) def divide(): if calculation in ["/", "divide", "division"]: answer = num1 / num2 print(answer) def multiply(): if calculation in ["x", "X", "*", "multiply", "multiplication"]: answer = num1 * num2 print(answer) add() subtract() divide() multiply() Is there a way I could make an error if a number and/or an operator was not pressed? (a letter) Answer: Making errors appear in your program as an output is known as 'Error Handling'. For this you need to know all the types of errors there are in python. Since you seem like a beginner it would be okay for you to memorize some of the most used ones at least. Ideally you would want to know the ZeroDivisionError, ValueError and SyntaxError. When implementing the error handling in your code you need to use try, except and else statements. This works in three steps: Try : This part tries to execute a certain piece of code. Except : This part is run only when an error occurs in the try statement. Else : This part runs if no error occurs in the try statement.
Here is a sample of code that improves the input section of your program: # Simple Calculator # Operator set operator_set = set(["+", "add", "addition", "-", "subtract", "subtraction", "/", "divide", "division", "x", "*", "multiply", "multiplication"]) def User_input(): """Prompt user input and handle errors""" while True: try: num1 = int(input("Enter first number: ")) num2 = int(input("Enter second number: ")) calculation = input("How would you like to calculate this?\n").lower() except Exception as e: print("Error : {}".format(str(e))) else: if calculation in operator_set: break else: print("Please re-enter a valid operator!") return num1, num2, calculation Improvements made: 1.Comments and Docstrings - They make your program easier to understand and make it more readable to other programmers as well. A Docstring is: def User_input(): """Prompt user input and handle errors""" The """Data within these triple inverted commas define a function's process (what it does)""" A comment is: # Operator set This gives a brief 'comment' on what that line of code does. Added a new function for validating user input. Within this function is where the user's input is validated and returned. Validation is the process of ensuring that a program operates on clean, correct and useful data. It is useful to remember that in future programs as well. Here I have used try, except and else statements for validation. except Exception as e: This line means that if any error occurs, assign that error to e. This is used to catch almost all errors without specifically mentioning them. You can individually write an exception statement for each error but this is the easy way out. Used a set to check if the user's operator was valid. Again another form of validation used here to make sure the user inputs only valid operators. calculation = input("How would you like to calculate this?\n").lower() The .lower() means that the user's input is automatically converted to lower case.
DO keep in mind that this is only for letters, not symbols and numbers. This means even if the user inputs "ADD" the returned value will be "add", which is what we need in the operator set. To check whether this was in the operator set we used a simple if condition. Finally a while loop is used to keep asking for the user's input until valid input has been entered. This is pretty straightforward and can be used in several cases. There are still more improvements, but these are the error handling exceptions that you requested.
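A more table-driven variant of the same validation idea (a sketch of mine, not part of the answer's code; `OPS` and `calculate` are made-up names) maps every accepted spelling to one function from the standard `operator` module, so the dispatch is a single dictionary lookup instead of four if-chains:

```python
import operator

# Map every accepted spelling of an operation to the function implementing it.
OPS = {}
for func, names in [
    (operator.add, ["+", "add", "addition"]),
    (operator.sub, ["-", "subtract", "subtraction"]),
    (operator.truediv, ["/", "divide", "division"]),
    (operator.mul, ["x", "*", "multiply", "multiplication"]),
]:
    for name in names:
        OPS[name] = func

def calculate(num1, num2, operation):
    """Return num1 <op> num2, raising ValueError for unknown operators."""
    func = OPS.get(operation.lower())
    if func is None:
        raise ValueError("Unknown operator: {!r}".format(operation))
    return func(num1, num2)
```

Adding a new operation is then one entry in the table, and the unknown-operator case is handled in one place.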
{ "domain": "codereview.stackexchange", "id": 25553, "tags": "python, python-3.x, calculator" }
What is the physical meaning of expectation value of the Hamiltonian operator?
Question: I've been studying David Griffiths' Introduction to Quantum Mechanics and in it, it was explained that the expectation value of position $x$ is the average of the positions of $N$ identically prepared particles. This makes sense but later on, they tried finding the expectation value of the Hamiltonian operator. What is the meaning of this? Average of an operator doesn't make sense. Answer: In the same way that the expectation value of the position operator is the average position you'd get if you measured a bunch of identically-prepared states, the expectation value of the Hamiltonian operator is the average value of the Hamiltonian that you'd get if you measured a bunch of identically-prepared states. In most of the elementary situations you'll be looking at,* the value of the Hamiltonian is equivalent to the total energy, so the expectation value of the Hamiltonian is the average value of the energy that you'd get if you measured a bunch of identically-prepared states. *The question of when the Hamiltonian is equivalent to the total energy is a complicated one, and depends, in part, on what you define "the total energy" to be in the first place, but until you get into Hamiltonians involving the electromagnetic field, you can usually take the two to be equivalent.
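For a state expanded in the energy eigenbasis, the "average of measured energies" reading can be written out explicitly (a standard textbook identity, assuming a discrete spectrum for simplicity):

```latex
\hat{H}\,|n\rangle = E_n\,|n\rangle, \qquad
|\psi\rangle = \sum_n c_n\,|n\rangle
\;\Longrightarrow\;
\langle \hat{H} \rangle
= \langle \psi |\, \hat{H} \,| \psi \rangle
= \sum_n |c_n|^2\, E_n .
```

Since $|c_n|^2$ is the probability of obtaining the outcome $E_n$ in a measurement, $\langle \hat{H} \rangle$ is literally a probability-weighted average of the possible measured energies.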
{ "domain": "physics.stackexchange", "id": 50425, "tags": "quantum-mechanics, operators, hamiltonian, measurement-problem, observables" }
Bra-ket notation and linear operators
Question: Let $H$ be a Hilbert space and let $\hat{A}$ be a linear operator on $H$. My textbook states that $|\hat{A} \psi\rangle = \hat{A} |\psi\rangle$. My understanding of bra-kets is that $|\psi\rangle$ is a member of $H$ and that $\psi$ alone isn't defined to be anything, so $|\hat{A}\psi\rangle$ isn't defined. Is $|\hat{A} \psi\rangle = \hat{A} |\psi\rangle$ just a notation or is there something deeper that I am missing? Answer: This should be understood as a mere definition, i.e. a new label for the state you get when you apply the operator $\hat{A}$ to the ket $|\psi\rangle$.
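One place this shorthand earns its keep is in writing inner products involving the adjoint (a standard convention, not stated in the answer itself): with the definition $|\hat{A}\psi\rangle := \hat{A}\,|\psi\rangle$, the corresponding bra picks up a dagger,

```latex
\langle \hat{A}\psi | \phi \rangle
\;=\; \langle \psi |\, \hat{A}^{\dagger} \,| \phi \rangle ,
```

so expressions like $\langle \hat{A}\psi | \hat{B}\phi \rangle$ are unambiguous once the definition is adopted.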
{ "domain": "physics.stackexchange", "id": 2334, "tags": "notation, hilbert-space" }
Modeling the problem of finding all stable sets of an argumentation framework as SAT
Question: As a continuation of my previous question I will try to explain my problem and how I am trying to convert my algorithm to a problem that can be expressed in CNF form. Problem: Find all stable sets of an argumentation framework according to Dung's proposed framework. Brief theory: Having an argumentation framework AF, with A the set of all arguments and R the set of the relations, a stable set is a set which attacks all arguments not in the set and there is no attack relation between arguments in the stable set. Example: Let's say we have an argumentation framework AF, A={1,2,3,4} (arguments of AF) and attack relations R{1,3} and R{2,4}. It's obvious that the set {1,2} is a stable extension of the framework because: a) it attacks all arguments not in the set (3 and 4) b) it's conflict free (no attacks between arguments in the set) because argument 1 does not attack argument 2 and vice versa My exhaustive abstract algorithm: argnum=number of arguments; Ai[argnum-1]=relation "attacks" ,where 1<=i<=argnum P[2^argnum-1]=all possible relations that can be generated from all the arguments S[2^argnum-1]=empty; where S are all the stable sets j=0; //counter for while k=1; //counter for counting stable sets while j<2^argnum-1 if P[j] attacks all arguments not in P[j](check using Ai[]) if all arguments in P[j] are conflict-free S[k++]=P[j]; end if end if j++; end while I want to solve the above problem either by transforming the above algorithm to CNF or by using a different algorithm, and finally use a SAT solver (or anything similar, if one exists), give CNF as input and get stable sets as output. I wonder if someone can give me any feedback on how I can transform any algorithm like the above to CNF in order to be used in a SAT solver. I decided to use precosat.
Answer: Finding a stable argument set is equivalent to finding an independent set in the directed graph of argument attacks, with the added restriction that some member of the set must be adjacent to each vertex in the graph not in the independent set. The problem is at least as hard as the independent set decision problem and is thus NP-hard. The decision version of the stable argument set problem is reducible to Boolean SAT as follows: Input: Given a set of $n$ arguments $ARG_{1}$ to $ARG_{n}$, let the SAT propositional variable $ATTACK_{i,j}$ be true if $ARG_{i}$ attacks $ARG_{j}$. Output: Let $INDSET_{1} ... INDSET_{n}$ be a new set of propositional variables. $INDSET_{i}$ will be true in the SAT solution iff $ARG_{i}$ is part of the stable set found. Generating the clauses: For every pair of variables $INDSET_{i}$, $INDSET_{j}$, add clauses that require $\overline{(INDSET_{i} \land INDSET_{j})} \lor (\overline{ATTACK_{i,j}} \land \overline{ATTACK_{j,i}})$. These clauses prohibit any stable set argument from attacking another. Let $NEEDATTACK_{1} ... NEEDATTACK_{n}$ be a new set of propositional variables. For each $INDSET_{i}$ variable, add clauses that require $INDSET_{i} \oplus NEEDATTACK_{i}$ These clauses record which arguments must be attacked by the stable set arguments. Let $GOTATTACK_{1} ... GOTATTACK_{n}$ be a new set of propositional variables. For each $GOTATTACK_{j}$ variable, add clauses that require $GOTATTACK_{j} = (INDSET_{1} \land ATTACK_{1,j})$ $\lor$ ... $\lor$ $(INDSET_{n} \land ATTACK_{n,j})$ These clauses record which arguments have been attacked by the stable set arguments. For each $GOTATTACK_{i}$ variable, add clauses that require $NEEDATTACK_{i} \oplus \overline{GOTATTACK_{i}}$ These clauses require that every argument that needed to be attacked by some stable set argument was in fact attacked. The Boolean expressions can be converted to circuits and from there to CNF using Tseitin transformations.
To obtain all the stable sets, when the SAT solver returns a set of $INDSET$ variables, you must construct a CNF clauses that forbids that solution and append it to the CNF formula. Rerun the solver and it will either find a new solution or report that the formula is now unsatisfiable. If "unsatisfiable" is reported, then you know you've found all the stable sets. If a new solution is found, construct another CNF clause to forbid that solution, append it to the formula and run the solver again.
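As a cross-check for any CNF encoding on small instances, here is a brute-force enumerator of stable sets (my own sketch, not part of the answer; it is exponential in the number of arguments, so it is only suitable for toy inputs like the question's example):

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Brute-force all stable sets of an abstract argumentation framework.

    A subset S is stable iff it is conflict-free (no attacks within S)
    and every argument outside S is attacked by some member of S.
    """
    attacks = set(attacks)
    result = []
    for r in range(len(args) + 1):
        for subset in combinations(sorted(args), r):
            s = set(subset)
            conflict_free = not any(
                (a, b) in attacks for a in s for b in s
            )
            attacks_rest = all(
                any((a, b) in attacks for a in s) for b in set(args) - s
            )
            if conflict_free and attacks_rest:
                result.append(s)
    return result
```

On the question's framework A={1,2,3,4}, R={(1,3),(2,4)} this returns exactly [{1, 2}], matching the worked example, so it can serve as a reference when debugging the clause generation.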
{ "domain": "cs.stackexchange", "id": 1431, "tags": "algorithms, complexity-theory, time-complexity, np-complete, satisfiability" }
How to simplify exponential of a matrix into Cosine and Sine Hyperbolic terms?
Question: In Peskin and Schroeder book "An Introduction to QFT" there is an equation (eq. 3.48): I know that exp(x)=cosh(x)+sinh(x). But, I couldn't understand how the above equation is obtained. Any help is welcome. Also why is the above matrix chosen for boost from four momentum (m,0) to (E,p^3)? Answer: Consider Taylor expansion of an exponent and even and odd powers of the matrix under the exponent.
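Spelling the hint out: split the exponential series into even and odd powers. For any matrix $K$ that squares to the identity (which one can check holds for the $\sigma^3$-type generator entering eq. (3.48)), the even powers collapse to $\mathbb{1}$ and the odd powers to $K$:

```latex
e^{\eta K}
= \sum_{n=0}^{\infty} \frac{\eta^{n}}{n!}\, K^{n}
= \Big(\sum_{n\ \mathrm{even}} \frac{\eta^{n}}{n!}\Big)\,\mathbb{1}
 + \Big(\sum_{n\ \mathrm{odd}} \frac{\eta^{n}}{n!}\Big)\, K
= \cosh(\eta)\,\mathbb{1} + \sinh(\eta)\, K ,
\qquad K^{2} = \mathbb{1}.
```

This particular matrix is chosen because it is the generator of boosts along the 3-direction: exponentiating it with rapidity $\eta$, where $\cosh\eta = E/m$, takes the rest-frame momentum $(m,0)$ to $(E, p^3)$.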
{ "domain": "physics.stackexchange", "id": 37185, "tags": "homework-and-exercises" }
Sort-of Alexa clone using Python on a Raspberry Pi
Question: It's still a working process. There are still things I would like to add, and if you have any critiques or ideas they would be greatly appreciated! I would like to know if my code is readable. GitHub link: import os import webbrowser import subprocess import speech_recognition as sr import json from time import sleep from difflib import get_close_matches # defining functions that are used def reload_files(): """ Order is: files, directories,then paths.""" found_files = [] found_dirs = [] found_paths = [] for paths, dirs, files in os.walk(os.getcwd()): found_files += files found_dirs += dirs found_paths += paths return found_files, found_dirs, found_paths def find_path(file): for path, dirs, files in os.walk(os.getcwd()): for f in files: if f == file: return os.path.join(path, file) for d in dirs: if d == file: return os.path.join(path, file) def file_search(file): return get_close_matches(file, current_files, cutoff=0.4)[0] def dir_search(directory): try: requested_dir = get_close_matches(directory, current_dirs, cutoff=0.4)[0] dir_list = [] # Iterating through the directory that was declared before. for dirs in os.listdir(find_path(requested_dir)): if os.path.isdir(find_path(dirs)): dir_list.append("{}/{}".format(requested_dir, dirs)) if len(get_close_matches(directory, dir_list, cutoff=0.4)) == 0: return find_path(requested_dir) else: return get_close_matches(directory, dir_list, cutoff=0.4)[0] except: print('Could not find directory "{}"!'.format(directory)) def exe(command): duckurl = "https://www.duckduckgo.com/?q=" command = command.lower() order = command.split()[0] if order == "search": website = command.split()[1] # if command.split()[1] in websites then it will search the website # instead of googling it. 
if website in websites: webbrowser.open("{}{} {}".format(duckurl, websites[website], command[command.index(command.split()[2]):])) return True else: webbrowser.open(duckurl + command[command.index(command.split()[1]):]) return True elif "play directory" in command: combined_dir = dir_search(command[command.index(command.split()[2]):]) for paths, dirs, files in os.walk(os.getcwd()): if combined_dir in paths: os.system("vlc {}".format(paths)) return True elif order == "play": #this will play movies file = file_search(command[command.index(command.split()[1]):]) subprocess.call(["xdg-open",find_path(file)]) return True elif order == "run": os.system(command[command.index(command.split()[1]):]) return True elif order == "add": # The second letter will be the webbsite third will be the bang website_bang = {command.split()[1]: "!" + command.split()[2]} websites.update(website_bang) with open("websites.json", "w") as websites_json: json.dump(websites, websites_json) print(websites) return True elif order == "refresh" or order == "reload": print("{}ing files...".format(order.capitalize())) global current_files, current_dirs, current_subs current_files, current_dirs, current_subs = reload_files() print("Complete!") return True else: print('Could not recognize command "{}"!'.format(command)) return False # speech recognition def listening(): with sr.Microphone() as sauce: print("Listening...") try: audio = r.listen(sauce, timeout=5) except sr.WaitTimeoutError: print("You ran out of time!") return False print("Recognizing...") try: command_worked = exe(r.recognize_google(audio)) except sr.RequestError: print("Something went wrong with the conection. 
Trying sphinx...") command_worked = exe(r.recognize_sphinx(audio)) except sr.UnknownValueError: print("Could not hear what you were saying!") return False if not command_worked: print("Something went wrong with the exe function!") return False else: return True # Constants current_files, current_dirs, current_subs = reload_files() r = sr.Recognizer() with open("websites.json") as websites_json: websites = json.load(websites_json) while True: listening() Answer: This is very interesting. I love it. Good work. Criticism: with sr.Microphone() as sauce: From https://pixabay.com/photos/tomato-soup-tomato-soup-sauce-482403/ Sauce is what you add to pasta. Microphone is a source. def exe(command): duckurl = "https://www.duckduckgo.com/?q=" This function is too long and should be shortened. We can use a dictionary of commands and callables. (Callables are functions, lambdas, classes, objects with __call__ implementation, ...) If play directory is different you can either iterate the dictionary (In that case you might as well use a list) or handle it as a special case. # speech recognition def listening(): Why not just rename it to recognize_speech? from time import sleep Delete this. Is this used? # defining functions that are used This comment is not needed. r = sr.Recognizer() Can we rename this to RECOGNIZER or something meaningful? def find_path(file): for path, dirs, files in os.walk(os.getcwd()): for f in files: # ^^^^^^ # over indent Nested for is over-indented. (This might also be an error of copying to code-review, so check this too) current_files, current_dirs, current_subs = reload_files() r = sr.Recognizer() with open("websites.json") as websites_json: websites = json.load(websites_json) I think it is better to move the constants to the top of the file, so we know where they come from. I would also recommend making them SCREAMING_SNAKE_CASE to highlight they are constants.
Small caveat: you might need to add reload_files to the top of the file before calling it, or import it. while True: listening() It would be better to use the if __name__ == "__main__" idiom. This way you can use this code as a module as well. You can also catch keyboard interrupt and exit gracefully. Ideas/Feature Requests: These are improvements that I think you can add to your code. print('Could not find directory "{}"!'.format(directory)) - replace print with speech. Create a help command.
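The "dictionary of commands and callables" suggestion could look like this minimal sketch (the handler names and return strings are invented for illustration, not taken from the project):

```python
# Each command word maps to one callable; exe() becomes a single lookup.
def run_search(args):
    return "searching {}".format(args)

def run_play(args):
    return "playing {}".format(args)

COMMANDS = {"search": run_search, "play": run_play}

def dispatch(command):
    """Route a voice command to its handler via a dictionary lookup."""
    order, _, rest = command.partition(" ")
    handler = COMMANDS.get(order.lower())
    if handler is None:
        return 'Could not recognize command "{}"!'.format(command)
    return handler(rest)
```

Adding a new command is then one dictionary entry plus one small function, instead of another branch in a long if/elif chain.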
{ "domain": "codereview.stackexchange", "id": 36904, "tags": "python, raspberry-pi" }
Nearly Illegible Prime Number Calculator
Question: This is my first reasonably large C# program. Considering my code looks like it was written in an esolang to me, I'm guessing I did something wrong at some point. My main priority is readability, although speed is also a plus. using System; public class PrimeSearcher { public static string Stringilate (int[] iterable) { string returnable=""; foreach(int i in iterable){ returnable+=i.ToString()+" "; }; return returnable; } public static bool isPrime (int[] iterable,float target) { foreach(int i in iterable){ if(target%i==0){ return false; } if(i==0){ return true; } //reached the "end" (last discovered prime) of the array, hurray! } return true; //will only do anything for the last element } static public void Main () { int[] primes=new int[1000]; float j=2; //float because I think it's the smallest datatype that can return a non-zero number when divided, correct me if I'm wrong while(primes[primes.Length-1]==0){ if(isPrime(primes,j)){ primes[Array.IndexOf(primes,0)]=(int)j; } j++; } Console.WriteLine(Stringilate(primes)); //I don't *think* ToString worked when I tried it. } } Answer: By starting with j = 3 and incrementing j by 2 you could skip a lot of unneeded computations because it will skip even values. Instead of using a float I would go with int. Both are 32-Bit and you won't need the floating point values. You should let your variables have some space to breathe. Consider while(primes[primes.Length-1]==0){ if(isPrime(primes,j)){ primes[Array.IndexOf(primes,0)]=(int)j; } j++; } versus while (primes[primes.Length - 1] == 0) { if (isPrime(primes, j)) { primes[Array.IndexOf(primes, 0)] = (int)j; } j += 2; } where the latter is much more readable. It uses the bracing style most C# developers use as well.
Stringilate() could be improved as well by using string.Join(), which would look like so: public static string Stringilate(int[] iterable) { return string.Join(" ", iterable); } and as a side note, if you need to concatenate strings in a loop it's much better to use a StringBuilder instead. You should be consistent in your coding style (public static vs. static public). You are using braces although they might be optional. Good choice! Instead of a concrete type you could use var.
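For reference, the "skip even candidates" suggestion transliterated into Python as a sketch of mine (not the reviewer's code; note that the prime 2 has to be seeded by hand once candidates start at 3 and step by 2):

```python
def first_primes(n):
    """First n primes: seed 2 manually, then trial-divide only odd
    candidates against the primes found so far."""
    primes = [2]
    candidate = 3
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 2  # even numbers > 2 are never prime
    return primes[:n]
```

This halves the number of candidates tested compared with incrementing by 1, which is the whole point of the reviewer's first suggestion.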
{ "domain": "codereview.stackexchange", "id": 29996, "tags": "c#, beginner, primes" }
Displaying picture collections from a user's picture library
Question: Currently, I have been digging up my old codes. One of the applications I worked on was an Image editing phone app that displays a collection of random pictures from a user's picture, which I limited to less than 80 pictures because of performance. These random pictures are chosen from different folders in the pictures Library and read as a bitmapImage to give the user a general flip view of their pictures. Although, I have initialised the pictures to display in the constructor of my ViewModel which is bad because it takes a bit of time( almost 1 min) to load up the data. I implemented INotifyPropertyChanged because I allow the user to create an album of pictures from the display or subset and save it as a gif image but I will not be reviewing that at the moment public class PictureVM : INotifyPropertyChanged { public PictureVM() { Task.Run(() => loadData()); } ObservableCollection<BitmapImage> pictures = new ObservableCollection<BitmapImage>(); private ObservableCollection<BitmapImage> _pictureGallery; public ObservableCollection<BitmapImage> PictureGallery { get { return _pictureGallery; } set { if (_pictureGallery != value) { _pictureGallery = value; onPropertyChanged("PictureGallery"); } } } public async void loadData() { StorageFolder recentPictures = KnownFolders.PicturesLibrary; IReadOnlyList<StorageFile> recentImagesInLibrary = await recentPictures.GetFilesAsync(); IReadOnlyList<StorageFolder> subfolders = await recentPictures.GetFoldersAsync(); IReadOnlyList<StorageFile> subfolderImages; await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () => { // tranversing the pictures library current folder foreach (var item in recentImagesInLibrary) { BitmapImage BitImage = new BitmapImage(); var stream = await item.OpenAsync(Windows.Storage.FileAccessMode.Read); BitImage.SetSource(stream); pictures.Add(BitImage); } foreach (var subfolderItem in subfolders) { subfolderImages = await 
subfolderItem.GetFilesAsync(); foreach (var subfolderItemImages in subfolderImages) { BitmapImage BitImage2 = new BitmapImage(); var stream2 = await subfolderItemImages.OpenAsync(Windows.Storage.FileAccessMode.Read); BitImage2.SetSource(stream2); if (pictures.Count < 80) { pictures.Add(BitImage2); } else { break; } } } PictureGallery = pictures; } ); }} The flipview in xaml looks like this at the moment <FlipView x:Name="flipView" Height="500" ItemsSource="{Binding PictureGallery}" Visibility="Visible" Margin="0,0"> <FlipView.ItemTemplate> <DataTemplate> <Image Height="680" Width="680" Source="{Binding}" /> </DataTemplate> </FlipView.ItemTemplate> </FlipView> How can I improve the way the data is loaded (performance wise)? Is there a way to load more images without performance trade off? How do I improve my randomised image selection from the user's library? Any suggestions will be gladly appreciated. Answer: which I limited to less than 80 pictures because of performance This is not correct. If in the recentImagesInLibrary are e.g 200 pictures you are loading all of them. The limiting is only happening in the second loop (the inner one). A thing which bothers me as well is that you are requesting the subfolders although you might not need them. By extracting the picture loading to a separate method you can get rid of some duplicated code. you aren't consistent with using var. E.g I would expect using var here: BitmapImage BitImage2 = new BitmapImage();. local variables should be named using camelCase casing, so e.g BitImage2 => bitImage2 methods should be named PascalCase casing. Please take a look at the C# naming guideline async methods should be named xxxAsync see: https://stackoverflow.com/a/14652922/2655508 by having the pictures.Count < 80 condition only in the inner loop you are doing more work than is needed. For each subFolderItem you are loading at least one image although the condition might be reached already. 
the stream you are opening for each image should be properly disposed/closed after it isn't needed anymore. This can be done by enclosing it inside of a using statement. 80 is a magic number. By using a meaningfully named variable/const you would always know what it is about. You should consider passing it as a constructor argument so you could control it from the outside of the class. if you are using C# 6 (VS 2015) you could use the nameof operator. Instead of onPropertyChanged("PictureGallery"); you would then write onPropertyChanged(nameof(PictureGallery)); which is a little bit longer, but if you later on decide to rename the property you wouldn't need to worry about changing the string "PictureGallery" as well. Disclaimer: I usually don't use async, so if anything is wrong... Implementing the mentioned points leads to private const int MaximumPicturesToLoad = 80; private int maximumPicturesToLoad; public PictureVM() : this(MaximumPicturesToLoad) {} public PictureVM(int maximumPicturesToLoad) { this.maximumPicturesToLoad = maximumPicturesToLoad; Task.Run(() => loadData()); } private async void LoadImagesAsync(StorageFolder folder) { if (AreMaximumPicturesLoaded()) { return; } IReadOnlyList<StorageFile> imageFiles = await folder.GetFilesAsync(); foreach (var file in imageFiles) { pictures.Add(await LoadImageAsync(file)); if (AreMaximumPicturesLoaded()) { return; } } IReadOnlyList<StorageFolder> subFolders = await folder.GetFoldersAsync(); foreach (var subFolder in subFolders) { LoadImagesAsync(subFolder); } } private async Task<BitmapImage> LoadImageAsync(StorageFile file) { var image = new BitmapImage(); using (var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read)) { image.SetSource(stream); } return image; } private bool AreMaximumPicturesLoaded() { return pictures.Count >= maximumPicturesToLoad; }
{ "domain": "codereview.stackexchange", "id": 23476, "tags": "c#, performance, xaml, uwp" }
Could we move a spaceship by moving a Black Hole?
Question: What if a spacecraft were dragged by a black hole, and the black hole were pushed by continuously firing lasers into it? Would this allow the ship to be accelerated faster than laser propulsion, which could only safely accelerate the ship at 1 g or slightly higher, while the black hole could (conceptually) be accelerated at a much higher rate, and the trailing spaceship would accelerate towards it in constant free fall, as opposed to suffering high G forces? I'm aware this would all take literally astronomical amounts of energy, but conceptually? Answer: I interpret the question as: "Can you orbit an accelerating black hole?" The answer to this question is: That depends on the acceleration. There is a well known analytic solution to the vacuum Einstein equation, known as the "C-metric". In this metric the acceleration is powered not by a laser (as in the OP's question), but by a conic singularity (i.e. cosmic string) "pulling" or "pushing" the black hole. (However, the exact source of the acceleration should make little difference.) The existence of stable orbits around the black hole in the C-metric was investigated in this paper: [1405.2611]. They conclude that stable circular orbits exist as long as: $$ a < 0.0045396037095\frac{c^4}{2GM}$$ For a black hole with a Schwarzschild radius of about 1 meter, this works out to about $4\times 10^{14}$ m/s^2 or $4\times 10^{13} g$. As an object orbiting the black hole is in free fall, it would not suffer from the acceleration. (Although I have not looked at the tidal forces involved.) Now, there remains only the trivial matter of accelerating the black hole at this rate...
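Plugging numbers into the quoted bound as a sanity check (my own arithmetic; SI values assumed, using $2GM = c^2 r_s$ so the Schwarzschild radius of 1 m fixes the mass):

```python
# Evaluate a_max = 0.0045396... * c^4 / (2GM) with 2GM = c^2 * r_s.
C = 2.998e8          # speed of light, m/s
G_STANDARD = 9.81    # standard gravity, m/s^2
r_s = 1.0            # Schwarzschild radius, m (as in the answer)

a_max = 0.0045396037095 * C**2 / r_s   # c^4 / (c^2 * r_s) = c^2 / r_s
a_max_in_g = a_max / G_STANDARD
```

This reproduces the answer's quoted figures of roughly $4\times10^{14}$ m/s² and $4\times10^{13}\,g$.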
{ "domain": "physics.stackexchange", "id": 84025, "tags": "gravity, black-holes, rocket-science" }
Solution of dynamics of density matrix
Question: Given the dynamics of the density matrix: $ \frac{d}{d t}\begin{pmatrix} \rho_{00} & \rho_{01} \\ \rho_{10} & \rho_{11} \end{pmatrix} = \begin{pmatrix} \lambda i(\rho_{10}-\rho_{01})+\lambda^2\rho_{11} & \lambda i(\rho_{11} -\rho_{00})+\lambda^2 \rho_{01} \\ \lambda i(\rho_{00}-\rho_{11}) +\lambda^2 \rho_{10} & \lambda i(\rho_{01}-\rho_{10}) +\lambda^2 \rho_{11} \end{pmatrix} $ How can this system of differential equations be solved, since the equations refer to each other? With initial condition $\rho_{ij}\in \mathbb{R}$. Answer: As Lelesquiz points out, that looks like a somewhat standard matrix differential equation to me. The Wikipedia link does give a solution method for the matrix, but I think that it might be easier to do some remapping: \begin{align} \rho_{00}&\to v_1\\ \rho_{01}&\to v_2\\ \rho_{10}&\to v_3\\ \rho_{11}&\to v_4 \end{align} and write it as (assuming what you wrote is correct and that I did the right mapping, you should double check this) $$ \frac{d}{dt}\left(\begin{array}{c}v_1\\v_2\\v_3\\v_4\end{array}\right)=\left(\begin{array}{c}\lambda i(v_3-v_2)+\lambda^2v_4 \\ \lambda i(v_4-v_1)+\lambda^2v_2 \\ \lambda i(v_1-v_4)+\lambda^2 v_3 \\ \lambda i(v_2-v_3)+\lambda^2v_4\end{array}\right) $$ which makes it clearer that this can be solved numerically with Runge-Kutta methods because it is a simple vector with coupled components. You may want to note that, given the complex term, stability is going to be an issue.
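A minimal sketch of the suggested Runge-Kutta approach, taking the right-hand side directly from the question's matrix (with the $v$-numbering above); $\lambda$ and the initial state are assumed values, not given in the question:

```python
import numpy as np

lam = 0.1  # assumed coupling constant (not specified in the question)

def rhs(v):
    """Right-hand side dv/dt, transcribed from the question's matrix."""
    v1, v2, v3, v4 = v
    return np.array([
        lam*1j*(v3 - v2) + lam**2*v4,   # d rho_00 / dt
        lam*1j*(v4 - v1) + lam**2*v2,   # d rho_01 / dt
        lam*1j*(v1 - v4) + lam**2*v3,   # d rho_10 / dt
        lam*1j*(v2 - v3) + lam**2*v4,   # d rho_11 / dt
    ])

def rk4(f, v, dt, steps):
    """Classical 4th-order Runge-Kutta for a complex-valued vector ODE."""
    for _ in range(steps):
        k1 = f(v)
        k2 = f(v + 0.5*dt*k1)
        k3 = f(v + 0.5*dt*k2)
        k4 = f(v + dt*k3)
        v = v + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return v

rho0 = np.array([1, 0, 0, 0], dtype=complex)  # assumed: rho_00 = 1, rest 0
rho_t = rk4(rhs, rho0, dt=0.01, steps=1000)   # state at t = 10
print(rho_t.reshape(2, 2))
```

As the answer warns, the $\lambda^2$ terms can drive growth, so the step size must be kept small relative to $1/\lambda$.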
{ "domain": "physics.stackexchange", "id": 17087, "tags": "quantum-mechanics, homework-and-exercises, quantum-information, density-operator" }
Casimir Effect and parallel $D$-Branes
Question: In the well-known setup for the calculation of the Casimir effect, we take 2 perfectly reflecting plates, impose the appropriate boundary conditions on the relevant fields (scalar, vector, etc.) and calculate the energy of this configuration. So the most natural analog of plates (that is objects that impose boundary conditions on fields) in String Theory are $D$-Branes. This got me wondering whether a similar scenario can be realized in String Theory where we consider the energy of an open or closed string field theory interacting with 2 parallel $D$-Branes. The scenario I had in mind is loosely related to figure 1 in Scattering of Strings from $D$-branes. I know that $D$-branes have open strings stretched between them, but that scenario would not be analogous to the Casimir effect setting because there is no propagation. Answer: No, there is no analogue of the Casimir effect between two parallel D-branes. The reason is supersymmetry. Two D-branes interact with each other by means of open strings stretched between them. The one-loop amplitude for open strings stretched between two D-branes is exactly computable (see equation (48) in TASI Lectures on D-Branes) and shown to be exactly zero. This shouldn't be so surprising, since parallel D-branes break only half of supersymmetry, therefore the "no-force condition" between two BPS states is satisfied.
{ "domain": "physics.stackexchange", "id": 77294, "tags": "string-theory, branes, casimir-effect" }
Global warming kinetics
Question: A rule of thumb in chemistry is that a reaction doubles in rate for every 10 °C increase in temperature. Considering the impact of a 10 °C increase in global temperature, would the same rule apply to kinetic weather processes – for example, would the average wind speed around the world double? Answer: That is a good rule of thumb. It is based on the Arrhenius equation $$k = A {\operatorname{e}^{\frac{-{E_{\text{act}}}}{RT}}}$$ You'll find that the "rate doubles every 10 degrees" rule only applies if the following assumptions are valid: (1) the reaction must obey the Arrhenius equation; (2) it must have an entropic pre-term based on collision probability; (3) the reaction must be an activated process (i.e. it must have an activation energy). The Arrhenius equation works well for simple unimolecular reactions like the ring opening of cyclobutene to buta-1,3-diene. Now you'll need to ask a weather physicist or a meteorologist if weather can be treated as an activated process. My guess is that the answer is "no", but that's just a guess. It seems to me that there is no minimum energy required (activation energy) for the rain to fall or for the wind to blow.
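To see where the rule of thumb comes from, here is a quick sketch: for an assumed activation energy of about 53 kJ/mol near room temperature, the Arrhenius ratio $k(T+10)/k(T)$ comes out close to 2 (the pre-factor $A$ cancels in the ratio):

```python
import math

R = 8.314        # J/(mol K), gas constant
Ea = 53_000      # J/mol -- an assumed, fairly typical activation energy
T1, T2 = 298.0, 308.0   # a 10 K increase near room temperature

# k(T2)/k(T1) = exp(-Ea/(R*T2)) / exp(-Ea/(R*T1)); the pre-factor A cancels
ratio = math.exp(Ea / R * (1/T1 - 1/T2))
print(f"k(T2)/k(T1) = {ratio:.2f}")   # close to 2 for this Ea near room T
```

For much larger or smaller activation energies, or at very different temperatures, the factor drifts away from 2, which is why this remains only a rule of thumb.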
{ "domain": "chemistry.stackexchange", "id": 1269, "tags": "kinetics, environmental-chemistry" }
Multiplying 2 numbers without using * operator in Java
Question: I saw this interview question and decided to solve using recursion in Java. Write a multiply function that multiples 2 integers without using * public class Main { public static void main(String[] args) { Scanner in = new Scanner(System.in); System.out.println("Enter first num: "); double num = in.nextDouble(); System.out.println("Enter second num: "); double numTwo = in.nextDouble(); System.out.println(multiply(num, numTwo)); } private static double multiply(double x, double y) { if (x == 0 || y == 0) { return 0; } else if (y > 0) { return x + multiply(x, y - 1); } else if (y < 0) { return -multiply(x, -y); } else { return -1; } } } What should I return instead of -1 to make this clear? Answer: What should I return instead of -1 to make this clear? Don't return -1, but recognise that you have exhausted the possible states of y, so simply do a return for the last possibility. private static double multiply(double x, double y) { if (x == 0 || y == 0) { return 0; } else if (y > 0) { return x + multiply(x, y - 1); } return -multiply(x, -y); }
{ "domain": "codereview.stackexchange", "id": 25407, "tags": "java, recursion, interview-questions" }
What topic and equations will characterize the pulling and breaking of a piece of wood
Question: Let's say hypothetically I pull on a wooden rod from both opposite ends. The rod then breaks in half at the center. What equations govern this? Is it an engineering problem to do with stress and strain, and how do I determine at what force the rod breaks? Answer: Ultimate Tensile Strength is the stress you describe. For something as complicated as wood, it is experimentally measured, and highly variable from sample to sample owing to grain properties and other variables. If, however, you search on terms like and("tensile strength","quantum mechanics") or "computational nanomechanics" you will find that the numerical quantum mechanical prediction of tensile strength of materials with highly regular structure is becoming quite successful now.
{ "domain": "physics.stackexchange", "id": 25051, "tags": "newtonian-mechanics" }
Is the 2-sphere always useful for intuition and visualized learning in cosmology?
Question: The unit 2-sphere is often used, pedagogically, to help provide some visual intuition about topology, differential geometry and geometric objects and properties like curvature, geodesic, smoothness, Lie algebra, tangent vector spaces, singularity, homotopy group, Killing form, embedding, etc. In the context of general relativity and cosmology, what are the limitations of the usefulness of the 2-sphere in aiding visual intuition about higher dimensions such as three or four, and about more general curved spaces and complicated geometries? In particular, when imagining the 2-sphere as a simple model of spacetime (not space), in what ways will it be misleading, say, in differential geometry? For example, given the Lorentzian metric with signature (-1, 1) on the 2-sphere, geodesics will behave differently from what we typically associate with the Riemannian geometry of the 2-sphere and great circles. The question is how they behave, or what they look like intuitively. Answer: Regarding "the Lorentzian metric with signature (-1, 1) on the 2-sphere": the assumption is wrong! We can't get a Lorentzian metric on the 2-sphere: a Lorentzian metric requires a nowhere-vanishing direction field (the field of timelike directions), and by the hairy ball theorem no such field exists on $S^2$; more generally, a compact surface admits a Lorentzian metric only if its Euler characteristic vanishes. So, the 2-sphere is useful for the visual intuition about the curvature of space only. But the 2-sphere is useless and misleading for the visual intuition about the curvature of spacetime.
{ "domain": "physics.stackexchange", "id": 97166, "tags": "general-relativity, cosmology, education" }
Is the vanishing of the covariant derivative of the metric necessary?
Question: Does the covariant derivative of the metric always vanish? I.e. $$\nabla_a g_{bc}=0$$ Are there situations where this can be assumed to not hold? For instance in case of an asymmetric metric? Answer: Asymmetric metrics don't particularly make sense, because a metric exists for distance calculation as $ds^2=g_{\mu\nu}dx^\mu dx^\nu$, which is a symmetric expression, thus if we had instead of $g$ a tensor $h_{\mu\nu}=g_{\mu\nu}+a_{\mu\nu}$ with $g$ being symmetric and $a$ being antisymmetric, then the antisymmetric part would just cancel. Are there situations where this can be assumed to not hold? Yes. A linear connection ($\nabla$) is technically a distinct, unrelated object to a metric tensor $g$. If a linear connection satisfies $\nabla_\sigma g_{\mu\nu}=0$, then we say that $\nabla$ is metric compatible. The fundamental theorem of Riemannian geometry says that a metric compatible symmetric connection is unique, and is given by $$ \Gamma^\sigma_{\mu\nu}=\frac{1}{2}g^{\sigma\lambda}(\partial_\mu g_{\nu\lambda}+\partial_\nu g_{\mu\lambda}-\partial_\lambda g_{\mu\nu}). $$ That we use this unique connection is a choice on our part. Why do we make this choice? Flat space/spacetime: In flat spacetime, we have a natural connection, one that in a Cartesian frame is given by the partial derivatives $\partial_\mu$. Since the metric in these coordinates is given by $\eta_{\mu\nu}$ (or $\delta_{\mu\nu}$), and these objects have constant coefficients, we have $\partial_\sigma\eta_{\mu\nu}=0$, and the connection is manifestly symmetric. These are coordinate-independent properties, so they also hold in whacky curvy coordinates, where the $\Gamma$ coefficients are not identically zero. So our "God given" connection is precisely the unique compatible, symmetric connection.
General relativity: We would be excused if we used this connection from the get-go, because it matches the properties of the flat-space connection, and because $\nabla g=0$ is a very natural assumption (it means parallel transport does not change the length of vectors). We assume symmetry on top of that, because it can be shown that the equivalence principle is valid only then. However, if we keep the Einstein-Hilbert action (from which the equations of GR come) as it is, but assume that the connection is symmetric, but not metric compatible, we get the same equations of motion, along with the compatibility condition, so it seems mathematics enforces the compatibility on us. This is called the Palatini formalism. If we also allow for torsion (a nonsymmetric connection), we get further equations, but as it turns out, this will only affect fermionic fields (Einstein-Cartan theory). So it turns out, letting the connection be more general doesn't change much.
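As an illustrative check of the quoted formula (a sketch, not part of the original answer): for the unit 2-sphere metric $g = \mathrm{diag}(1, \sin^2\theta)$, building $\Gamma$ from the displayed expression and forming $\nabla_\sigma g_{\mu\nu} = \partial_\sigma g_{\mu\nu} - \Gamma^\lambda_{\sigma\mu} g_{\lambda\nu} - \Gamma^\lambda_{\sigma\nu} g_{\mu\lambda}$ gives zero identically, e.g. with SymPy:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])  # unit 2-sphere metric
ginv = g.inv()
n = 2

def Gamma(s, m, nu):
    """Levi-Civita connection coefficients from the quoted formula."""
    return sp.Rational(1, 2) * sum(
        ginv[s, l] * (sp.diff(g[nu, l], x[m]) + sp.diff(g[m, l], x[nu])
                      - sp.diff(g[m, nu], x[l]))
        for l in range(n))

def nabla_g(s, m, nu):
    """Covariant derivative of g: partial term minus two connection terms."""
    expr = sp.diff(g[m, nu], x[s])
    expr -= sum(Gamma(l, s, m) * g[l, nu] for l in range(n))
    expr -= sum(Gamma(l, s, nu) * g[m, l] for l in range(n))
    return sp.simplify(expr)

assert all(nabla_g(s, m, nu) == 0
           for s in range(n) for m in range(n) for nu in range(n))
print("nabla g vanishes identically for the Levi-Civita connection")
```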
{ "domain": "physics.stackexchange", "id": 41246, "tags": "general-relativity, differential-geometry, metric-tensor, differentiation" }
Angular Acceleration of Spinning Body About a Rotating Axis
Question: How would you find the angular acceleration of a body spinning about an axis that is itself rotating? Specifically, how would you find the angular acceleration in question 1.58 of Irodov's physics book. A solid body rotates with a constant angular velocity $\omega_0 = 0.50$ rad/s about a horizontal axis $AB$. At the moment $t = 0$, the axis $AB$ starts turning about the vertical with a constant angular acceleration $\alpha = 0.10 \ rad \ s^{-2}$. Find the angular velocity and angular acceleration of the body after $t = 3.5 \ s$. The answer key gives: $$\frac{\mathrm{d}\vec\omega_0}{\mathrm{d}t} = \vec\omega' \times \vec\omega_0$$ where $\vec\omega' = \alpha t$ is the angular velocity of the axis $AB$, but I have no idea why this is true. Any help would be appreciated. Answer: Let's first start with a picture. The object (blue ball) is rotating around the AB axis with angular velocity $\vec\omega_0$. Next, the $AB$ axis is rotating around the vertical with an angular velocity $\vec \omega '$. We are told that $\omega'$ is constantly being accelerated by an angular acceleration $\alpha$, and from our kinematic equations (recall $\displaystyle \alpha = \dfrac{d\omega'}{dt} \implies \omega' = \int\alpha \, \mathrm dt = \alpha t $) we find that $\vec \omega ' = \alpha t \hat z$. We need to also find out what $\vec \omega_0$ is -- and let's work in cylindrical coordinates here -- since the $AB$ axis is always rotating in the $xy$ plane, we can write $\vec\omega_0 = \omega_0 \hat r$ where $\hat r$ is the radial unit vector pointing along $\vec\omega_0$.
The angular acceleration of the body is \begin{align} \dfrac{d\vec\omega_0}{dt} &= \dfrac d{dt} \left( \omega_0\hat r \right)\\ &= \omega_0\dfrac{d\hat r}{dt} \end{align} Using the cartesian basis, one can show that $\dfrac{d\hat r}{dt}=\omega' \hat \varphi$ to get $$\dfrac{d\vec\omega_0}{dt} = \omega_0 \omega'\hat\varphi.$$ Next, recall in cylindrical coordinates that $\hat\varphi = \hat z \times \hat r$ so we can substitute that in and get $$\dfrac{d\vec\omega_0}{dt} = \omega' \hat z\times \omega_0\hat r$$ which is exactly $\vec \omega' \times \vec \omega_0$.
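Plugging in Irodov's numbers (my own arithmetic): $\vec\omega_0$ is horizontal and $\vec\omega' = \alpha t\,\hat z$ is vertical, so the total angular velocity adds in quadrature, and the total angular acceleration $d\vec\omega/dt = \omega_0\omega'\hat\varphi + \alpha\hat z$ likewise has perpendicular components:

```python
import math

omega0, alpha, t = 0.50, 0.10, 3.5
omega_prime = alpha * t                        # 0.35 rad/s, vertical

omega = math.hypot(omega0, omega_prime)        # |omega_0 r_hat + omega' z_hat|
beta = math.hypot(omega0 * omega_prime, alpha) # |omega_0*omega' phi_hat + alpha z_hat|

print(f"omega = {omega:.2f} rad/s, beta = {beta:.2f} rad/s^2")
# -> omega = 0.61 rad/s, beta = 0.20 rad/s^2
```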
{ "domain": "physics.stackexchange", "id": 88914, "tags": "homework-and-exercises, reference-frames, rotational-kinematics, angular-velocity" }
How to read the labeled enron dataset categories?
Question: I am trying to use the labeled Enron dataset (link) but I am really confused about the labeling system they use. I understand the Cat_[1-12]_level_weight is some form of confidence level. This dataset was labeled by multiple students. Cat_[1-12]_level_weight increases with the number of the same label assigned by multiple students to a certain row (sample). But, what is for instance the Cat_1_level_2? From the overview (in the link), I guess Cat_1 means "Coarse genre" and level_2 means "Purely Personal (49 cnt.)"? If so, why does Cat_1_level_2 have values like 1, 2, 3, 4, etc.? I am really confused. Answer: OK, I guess this is how it should be read. A sample could be in multiple categories. First, you look into a sample. Below is an example (at row 12) and the mail content: Content Cat_1_level_1 Cat_1_level_2 Cat_1_weight Cat_2_level_1 Cat_2_level_2 Cat_2_weight Jennifer, Thank-you for stepping in on this and guiding the process! ---------------------- Forwarded by Sarah-Joy Hunter/NA/Enron on 12/12/2000 05:03 PM --------------------------- From: Patrick Tucker@ENRON COMMUNICATIONS on 12/12/2000 02:52 PM PST To: Sarah-Joy Hunter/NA/Enron@ENRON cc: Subject: Re: HP -- confidential internal document Sarah-Joy, thanks for your excellent recap of progress to date. I really appreciate the organization and order you have brought to this process. It's great to work with you again after all of this time! Patrick 1 2 2 4 6 2 Here, you need to combine the Cat_1_level_* to get a single category that this mail belongs to. You can do the same combination for other categories as well (like Cat_2_level_*). If you combine the first category's levels, Cat_1_level_1 (which is 1) and Cat_1_level_2 (which is 2), you get an actual category label, which is Category 1.2 (Purely personal). As I said before, a sample could be in multiple categories.
If you combine the second category labels, Cat_2_level_1 (which is 4) and Cat_2_level_2 (which is 6), you get another category label, which is Category 4.6 (gratitude). If you check the content, you can see they are talking about something personal and they try to express how grateful they are to each other. About the weights (Cat_1_weight, Cat_2_weight, Cat_*_weight...). Since this dataset's samples were labeled by multiple people, in some cases there were different labels assigned to the same sample (or in some cases a sample was assigned the same label by multiple people). The weights show how many people assigned the same label in a category for a certain sample. Here, the Cat_1_weight is 2, which means 2 people assigned these Cat_1_level_* labels to this sample. (I guess...)
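A sketch of how one might build the combined "x.y" labels with pandas; the row values mirror the example above, but the exact column names in the real file may differ, so treat them as assumptions:

```python
import pandas as pd

# One made-up row matching the example email's annotations
df = pd.DataFrame({
    "Cat_1_level_1": [1], "Cat_1_level_2": [2], "Cat_1_weight": [2],
    "Cat_2_level_1": [4], "Cat_2_level_2": [6], "Cat_2_weight": [2],
})

# Join level-1 and level-2 into a single "x.y" category label per category
for k in (1, 2):
    df[f"Cat_{k}_label"] = (df[f"Cat_{k}_level_1"].astype(str) + "."
                            + df[f"Cat_{k}_level_2"].astype(str))

print(df[["Cat_1_label", "Cat_2_label"]])
# Cat_1_label -> "1.2" (purely personal), Cat_2_label -> "4.6" (gratitude)
```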
{ "domain": "datascience.stackexchange", "id": 9355, "tags": "classification, nlp, dataset, feature-engineering" }
How can change in entropy be the same for all processes if the entropy production $\sigma$ is present for irreversible processes?
Question: From the definition of entropy change, $$S_2-S_1=\left ( \int_{1}^{2} \frac{\delta Q}{T}\right )_{int.rev}$$ From the closed system entropy balance, we have $$S_2-S_1=\left ( \int_{1}^{2} \frac{\delta Q}{T}\right )_{b}+\sigma $$ where $\sigma$ is the entropy produced within the system, vanishing to zero in the absence of irreversibilities. I don't quite understand how the entropy change between two states is the same for all processes. Is it the case that the entropy transfer in the case of internal irreversibilities present is lower than that of the entropy transfer in an internally reversible process between these same two states, and the entropy production makes up for this difference? Answer: The way that you describe it is exactly correct. This is the essence of the Clausius Inequality.
{ "domain": "chemistry.stackexchange", "id": 15124, "tags": "thermodynamics, entropy" }
Why don't objects bounce infinitely?
Question: I'm a computer programmer, and I was making a little test of my “possibilities”, so to speak, so I made a small game with a sort of gravity, a bouncing ball and walls around. Recently I've noticed that when bouncing off a wall, the ball will halve its speed, and does so endlessly, even if the speed is less than 1 (digital pixel per second); for example, if the speed equals 0.05 pixels per second it will be divided and become 0.025 pixels per second, etc. To prevent this, I limited the bounce values. How can this be explained in real life physics? Are there any limits on the bounce value? Answer: Suppose the ball loses a fraction $b$ of its velocity on every bounce. Let's also assume the ideal case, where the ball takes an infinite number of bounces to lose all of its momentum. (In reality, the model is no longer valid at sufficiently small bounces, because, for example, thermal fluctuations of the atoms in the materials are larger.) The key here is that, while there are an infinite number of bounces, they happen in a finite amount of time, and we can actually calculate that time. Let's start with the ball in the air at height $h$, at rest (i.e. right when it's dropped). The amount of time $t_0$ it takes to fall a distance $h$ is given by the usual kinematic equations: $$t_0=\sqrt{\frac{2h}{g}}$$ When it reaches the ground, it has a velocity $v_0=gt_0=\sqrt{2gh}$. After the bounce, it has a velocity $v_1=(1-b)\sqrt{2gh}$, since it lost a fraction $b$ of its velocity. The time it takes to go to the next bounce $t_1$ is equal to the time it takes for the velocity to change from $v_1$ to $-v_1$; in other words, $$gt_1=2v_1\implies t_1=2(1-b)\sqrt{\frac{2h}{g}}$$ On the second bounce, it has a velocity $v_2=(1-b)v_1=(1-b)^2\sqrt{2gh}$, and we can repeat the same procedure forever.
So, in general, for bounce $n$, we have that $v_n=(1-b)^n\sqrt{2gh}$, and therefore that $$t_n=2(1-b)^n\sqrt{\frac{2h}{g}}$$ The time $t_{rest}$ it takes for the ball to come to rest is equal to the time it takes to complete all of the bounces (since this is a geometric series, we can actually evaluate it exactly): \begin{align} t_{rest}&=t_0+\sum_{n=1}^\infty t_n\\ &=\sqrt{\frac{2h}{g}}+\sum_{n=1}^\infty\sqrt{\frac{8h}{g}}(1-b)^n\\ &=\sqrt{\frac{2h}{g}}+\sqrt{\frac{8h}{g}}\frac{1}{b}-\sqrt{\frac{8h}{g}}\\ &=\frac{1}{b}\sqrt{\frac{8h}{g}}-\sqrt{\frac{2h}{g}} \end{align} For example, a ball with $b=0.1$ dropped from a height of 1 meter will execute an infinite number of bounces and come to rest after roughly 8.6 seconds according to this model. If instead that ball had $b=0.4$, it would come to rest in roughly 1.8 seconds. So if you're not worried about the number of bounces, then this is a perfectly workable solution for how much time it takes for a bouncing ball to come to rest. That said, if you're doing a simulation, you're likely worried about an infinite number of bounces as well. In that case, one thing you can do is take advantage of the fact that your simulation (probably) has a finite time step. You can calculate $t_n$ with the above, and whenever $t_n$ is shorter than your time step, the object stops bouncing.
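The closed form is easy to check numerically (a sketch; $g$ and the drop height are the answer's example values, and the truncated sum reproduces the geometric-series result):

```python
import math

def t_rest(h, b, g=9.8):
    """Closed-form rest time: (1/b)*sqrt(8h/g) - sqrt(2h/g)."""
    return math.sqrt(8*h/g)/b - math.sqrt(2*h/g)

def t_rest_by_summing(h, b, g=9.8, n_bounces=10_000):
    """Direct (truncated) sum t_0 + sum_n 2*(1-b)^n * sqrt(2h/g)."""
    t = math.sqrt(2*h/g)
    t += sum(2*(1-b)**n * math.sqrt(2*h/g) for n in range(1, n_bounces))
    return t

print(f"{t_rest(1.0, 0.1):.1f} s")   # ~8.6 s, as in the answer
print(f"{t_rest(1.0, 0.4):.1f} s")   # ~1.8 s
```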
{ "domain": "physics.stackexchange", "id": 58602, "tags": "newtonian-mechanics, collision, dissipation" }
Orange's Results are not reproducible
Question: I've been watching a few training videos from Orange here and attempted to reproduce the process. They used the iris dataset for a classification task. When I compared my confusion matrix to theirs, I didn't get the same results. Is this a problem with Orange software or with sklearn (I know they somehow leverage sklearn)? When you run code again 6 years later, you get different results, even though the dataset is the same... Answer: In general, algorithms have a lot of maths behind them. Maybe the difference between the Orange software and sklearn is due to small differences in this maths. Of course, only small differences should appear. Moreover, many algorithms (like random forest) are built with some type of randomness; for example, random forest randomly selects samples and features. So here is another possible factor that creates differences.
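One concrete source of non-reproducibility is unseeded randomness. In scikit-learn you can pin it with `random_state`; a sketch on iris (note this only makes runs repeatable within one library version, so Orange vs. sklearn, or two different versions, can still legitimately differ):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Two independent fits with the same seed give identical predictions
preds = [RandomForestClassifier(random_state=0).fit(X, y).predict(X)
         for _ in range(2)]
print((preds[0] == preds[1]).all())
```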
{ "domain": "datascience.stackexchange", "id": 10758, "tags": "orange" }
Maximum energy of photoelectrons
Question: Why is the maximum energy of a photoelectron ${KE}_{max}=hf-W$ with $W$ being the work function? I understand the Einstein-Planck relation but not how it fits into this equation. Also, can the maximum energy of a beam of photoelectrons be calculated simply by multiplying the maximum energy of 1 photoelectron with the number of electrons, i.e.: $${KE}^n_{max}=n{KE}_{max}$$ Answer: Since the work function is the energy required to remove the electron from the metal, any additional energy the photon had will be converted into kinetic energy for the electron. The reason this is the max KE is because it is likely that the electron will lose KE through collisions etc. As for your second question, yes, that seems right to me.
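A small worked example of ${KE}_{max}=hf-W$ (the wavelength and work function below are assumed for illustration; they are not from the question):

```python
h = 6.626e-34   # J s, Planck constant
e = 1.602e-19   # J per eV
c = 2.998e8     # m/s

lam = 200e-9                   # assumed: 200 nm UV light, so f = c/lam
W_eV = 4.5                     # assumed work function in eV
photon_eV = h * c / lam / e    # photon energy h*f in eV, ~6.2 eV

KE_max_eV = photon_eV - W_eV   # KE_max = h*f - W
print(f"KE_max = {KE_max_eV:.2f} eV")   # ~1.70 eV
```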
{ "domain": "physics.stackexchange", "id": 45618, "tags": "quantum-mechanics, photoelectric-effect" }
How hard is it to decide if there exists a strict improvement of a given solution of an NP-complete problem?
Question: Take the Set Cover problem as an example. When we ask if there is a set of size k that covers all the elements, the problem is NP-complete. Now if we ask, for a given set $S$ of size $k$, if there exists another set that covers strictly more elements than $S$ does. Is this problem still NP-complete? To be clearer, let's think about the Set Cover problem as an optimization problem: what is the maximum number of elements that can be covered by $k$ sets. The decision version is: are there $k$ sets that cover at least $m$ elements? (where $m$ is part of the input, and the version you were saying is simply the special case when $m=n$). Now the problem is, for $k$ given sets (as part of the input), does there exist $k$ other sets that cover strictly more elements than the $k$ given sets do. Answer: Here is a reduction from Set Cover to your problem. Let $(\{S_1,\ldots,S_m\},k)$ be an instance of Set Cover, and let $U = S_1 \cup \cdots \cup S_m$. Let $x_1,\ldots,x_k$ be $k$ new elements, and consider the following instance of your problem: The sets are $S_i \cup \{x_j\}$ for $i \in [m]$ and $j \in [k]$, together with the set $U$. The given cover consists of $S_1 \cup \{x_j\}$ for $j \in [k-1]$ together with $U$. The given cover covers all elements but $x_k$. There is a cover which covers more elements – that is, all elements – iff $U$ can be covered by at most $k$ sets from $S_1,\ldots,S_m$.
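A brute-force check of this reduction on a toy instance (my own example, not from the answer): with $S_1=\{1,2\}$, $S_2=\{2,3\}$, $S_3=\{1,3\}$ and $k=2$, the universe $U$ has a 2-set cover, so the constructed given cover (which misses $x_k$) can be strictly improved:

```python
from itertools import combinations

def best_coverage(sets, k):
    """Max number of elements coverable by any k of the given sets."""
    return max(len(set().union(*c)) for c in combinations(sets, k))

S = [{1, 2}, {2, 3}, {1, 3}]
U = set().union(*S)
k = 2
xs = [f"x{j}" for j in range(1, k + 1)]          # k new elements

# Sets of the constructed instance: S_i + one new element, plus U itself
instance = [s | {x} for s in S for x in xs] + [U]
# The given cover from the answer: S_1 + x_j for j < k, together with U
given = [S[0] | {xs[j]} for j in range(k - 1)] + [U]

given_cov = len(set().union(*given))   # covers everything except x_k
best = best_coverage(instance, k)      # e.g. S_1+{x1} and S_2+{x2} cover all
print(given_cov, best)                 # 4 5
```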
{ "domain": "cs.stackexchange", "id": 12193, "tags": "optimization, np-complete, decision-problem, set-cover" }
To the right, to the left, now rotate
Question: I was working on HackerRank: Circular Array Rotation which is about coming up with an efficient algorithm for array rotations in right-ward manner. John Watson performs an operation called a right circular rotation on an array of integers. After performing one right circular rotation operation, the array is transformed from [a(0), a(1), ..., a(n-1)] to [a(n-1), a(0), ..., a(n-2)]. Watson performs this operation \$k\$ times. To test Sherlock's ability to identify the current element at a particular position in the rotated array, Watson asks \$q\$ queries, where each query consists of a single integer, \$m\$, for which you must print the element at index \$m\$ in the rotated array. The first line contains 3 space-separated integers, \$n\$, \$k\$, and \$q\$, respectively. The second line contains \$n\$ space-separated integers, where each integer \$i\$ describes array element \$a(i)\$. Each of the \$q\$ subsequent lines contains a single integer denoting \$m\$. After vainly attempting to brute-force it (and the inevitable "time limit expired"), I came up with an algorithm that seems quite fast, as well as offering the flexibility of rotating by an arbitrary number of positions (which is, incidentally, why it is a lot faster than brute force). I then decided to implement a complementary left-rotation function which works similarly (though I didn't manage to use list comprehensions for that one). As you can see from the code, they are coupled in that if a negative number is input as the number of positions to rotate, they call the other with its positive version. I did this for the sake of code reuse, but could it bite me back? Here is a working demo on repl.it that demonstrates the results. Note I have tested it with small and large numbers, as well as negative numbers, and all appears to be working well. 
def rotate_right(array:list, rotate_by:int = 1) -> list: ''' Default behavior: Given input [1,2,3] return [3,1,2] Supplying a rotate_by value other than 1 will increase the number of positions the values are moved towards the right by. ''' if rotate_by < 0: return rotate_left(array, - rotate_by) array_length = len(array) while rotate_by >= array_length: rotate_by -= array_length return [array[i - rotate_by] for i in range(array_length)] def rotate_left(array:list, rotate_by:int = 1) -> list: ''' Default behavior: Given input [1,2,3] return [2,3,1] Supplying a rotate_by value other than 1 will increase the number of positions the values are moved towards the left by. ''' if rotate_by < 0: return rotate_right(array, - rotate_by) array_length = len(array) while rotate_by >= array_length: rotate_by -= array_length rotated = [] for i in range(array_length): val_index = i + rotate_by if val_index >= array_length: val_index -= array_length rotated.append(array[val_index]) return rotated def main() -> None: # Testing code arr = [1,2,3,4,5,6,7,8] rotate_by = 1 print('Input array:', arr) print('Rotate by:', rotate_by) arr_L = arr_R = arr print('Rotate left:') for _ in range(len(arr)): arr_L = rotate_left(arr_L, rotate_by) print(arr_L) print('Rotate right:') for _ in range(len(arr)): arr_R = rotate_right(arr_R, rotate_by) print(arr_R) Which will print the following to the output console: Input array: [1, 2, 3, 4, 5, 6, 7, 8] Rotate by: 1 Rotate left: [2, 3, 4, 5, 6, 7, 8, 1] [3, 4, 5, 6, 7, 8, 1, 2] [4, 5, 6, 7, 8, 1, 2, 3] [5, 6, 7, 8, 1, 2, 3, 4] [6, 7, 8, 1, 2, 3, 4, 5] [7, 8, 1, 2, 3, 4, 5, 6] [8, 1, 2, 3, 4, 5, 6, 7] [1, 2, 3, 4, 5, 6, 7, 8] Rotate right: [8, 1, 2, 3, 4, 5, 6, 7] [7, 8, 1, 2, 3, 4, 5, 6] [6, 7, 8, 1, 2, 3, 4, 5] [5, 6, 7, 8, 1, 2, 3, 4] [4, 5, 6, 7, 8, 1, 2, 3] [3, 4, 5, 6, 7, 8, 1, 2] [2, 3, 4, 5, 6, 7, 8, 1] [1, 2, 3, 4, 5, 6, 7, 8] With the above rotate_right function, the solution to the HackerRank challenge is very straightforward. 
It passes all test cases with no time-outs. def main() -> None: values, rotations, queries = input().strip().split(' ') values, rotations, queries = [int(values), int(rotations), int(queries)] array = [int(n) for n in input().strip().split(' ')] array = rotate_right(array, rotations) # Final step, query the resulting array's indexes for _ in range(queries): index = int(input().strip()) print(array[index]) Answer: Since you do not modify the list after it is created, one approach to efficient rotations is a simple wrapper class that holds the actual list and an offset denoting the position of the element that is considered to be the first element of a rotation. Now, "rotating" simply increases/decreases that offset value, which is done, of course, in constant time. Conversely, in order to get an \$i\$th element from the rotation, only a constant time overhead is present. All in all, I had this in mind: #!/bin/python3 import sys class ListRotationWrapper: def __init__(self, the_list): self.the_list = the_list self.offset = 0 def rotate(self, num): self.offset = ((self.offset - num) % len(self.the_list)) def __getitem__(self, index): return self.the_list[(index + self.offset) % len(self.the_list)] def main() -> None: values, rotations, queries = input().strip().split(' ') values, rotations, queries = [int(values), int(rotations), int(queries)] tmp_list = [int(n) for n in input().strip().split(' ')] rotable_list = ListRotationWrapper(tmp_list) rotable_list.rotate(rotations) # Final step, query the resulting array's indexes for _ in range(queries): index = int(input().strip()) print(rotable_list[index]) if __name__ == "__main__": main() Hope that helps.
{ "domain": "codereview.stackexchange", "id": 23105, "tags": "python, algorithm, python-3.x, array" }
What should be the dimensions of a Fourier Transformed image?
Question: I have applied the Fourier Transform to the following image. I have downloaded this image from the Internet. I re-sized it (without maintaining the aspect ratio) using the MS Paint application of Win7 to make it 512x256. Then I have used two applications to observe its Fourier Transformed appearance. IPLab gives the following output: ImageJ2-20160205 gives the following output: As you can see, the first output is a 512x256 image. The second one is a 512x512 image. Why are those outputs different? How would they affect the processing of an image? Answer: Looking at the source code for ImageJ I see: public void run(ImageProcessor ip) { boolean inverse; if (!powerOf2Size(ip)) { IJ.error("A square, power of two size image or selection\n(128x128, 256x256, etc.) is required."); return; } ImageProcessor fht = (ImageProcessor)imp.getProperty("FHT"); if (fht!=null) { ip = fht; inverse = true; } else inverse = false; ImageProcessor ip2 = ip.crop(); if (!(ip2 instanceof FloatProcessor)) { ImagePlus imp2 = new ImagePlus("", ip2); new ImageConverter(imp2).convertToGray32(); ip2 = imp2.getProcessor(); } fft(ip2, inverse); } which suggests that some parts of the code expect a) the image size to be a power of 2 and b) that the image is square. Perhaps some part of the code you are using is enforcing this?
{ "domain": "dsp.stackexchange", "id": 4056, "tags": "image-processing, fft, fourier-transform" }
Trouble with basic Buoyancy : a treasure chest on a raft
Question: I'm having trouble sorting through the following problem: You are shipwrecked and floating in the middle of the ocean on a raft. Your cargo on the raft includes a treasure chest full of gold that you found before your ship sank, and the raft is just barely afloat. To keep you floating as high as possible in the water, should you (a) leave the treasure chest on top of the raft, (b) secure the treasure chest to the underside of the raft, or (c) hang the treasure chest in the water with a rope attached to the raft? (Assume throwing the treasure chest overboard is not an option you wish to consider.) It's easy enough for me to see why (c) should be a good solution given the following identity: $$\frac{V_{displaced}}{V_{object}} = \frac{\rho_{object}}{\rho_{fluid}}$$ The density of the raft and chest together is greater than the density of each object alone. So, the numerator of the right-hand side of the identity would go down, and so necessarily would the left. The chest doesn't contribute much to the volume of the raft, so the volume displaced would have to go down. However, the back of the book says (b) is also a viable solution. Why? Answer: Securing the treasure chest to the underside of the raft will mean that less volume of the raft is required to stay afloat. Assume your raft is a cuboid with bottom of area $A$ (for sake of clarity of the maths). $M_{tot} = M_{raft} + M_{chest} + M_{person}$ Initially, this is held up by a volume of the raft (and none of the chest, as the chest sits on top) $M_{tot} = A h_0\rho_{water} $ where $h_0$ is how much of the height of the boat is in water. Placing the chest on the underside of the raft, you're increasing the effective volume of the raft. $M_{tot} = (V_{chest} + A h_1)\cdot \rho_{water}$. As $V_{chest} > 0$, $h_1 < h_0$, and so you're less in the water than before.
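A quick numerical illustration of option (b) with assumed toy values (none of these numbers come from the problem):

```python
rho_water = 1000.0   # kg/m^3
A = 4.0              # m^2, raft bottom area (assumed)
M_tot = 500.0        # kg, raft + chest + person (assumed)
V_chest = 0.05       # m^3, chest volume (assumed)

# Chest on top: only the raft displaces water -> M_tot = A * h0 * rho_water
h0 = M_tot / (A * rho_water)
# Chest strapped underneath: its volume displaces water too
h1 = (M_tot / rho_water - V_chest) / A

print(f"h0 = {h0:.4f} m, h1 = {h1:.4f} m")   # h1 < h0: the raft rides higher
```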
{ "domain": "physics.stackexchange", "id": 29910, "tags": "homework-and-exercises, buoyancy" }
To find work of force $F$ while the physical object of mass $m$ is moving
Question: I need to solve the problem in this image: The exercise asks to find the work of force $F$ while the physical object of mass $m$ is moving from point $A$ to point $B$. I know from theory that work is the line integral of $F \cdot ds$, where "$\cdot$" is the dot product, so I need to compute it as follows: $$F \, ds \cos \theta$$ I know that $F$ is a constant force, and the displacement is $3d$ (i.e. $3$ times $d$). So, the partial solution is $F3d$, but I have to put a minus sign because the force is in the opposite direction to the object's displacement vector. Therefore, the final solution is $-F3d$. But I think that's not right. In the second part (the arc), I don't have a displacement along the $y$-axis, because the object goes up to $y$ and comes back down to $y$, so: $y-y = 0$. Therefore, I have displacement only along the $x$-axis, and it's exactly equal to $d$. Answer: If the question is really asking for the work of force $F$, your calculation looks right; if it is the work against $F$, or in the presence of $F$, you have to change the sign.
{ "domain": "physics.stackexchange", "id": 88899, "tags": "homework-and-exercises, forces, work" }
RC circuits - what can we interpret when the voltage across the capacitor is equal to zero?
Question: When applying the law of addition of voltages (Kirchhoff's voltage law) in an RC series circuit with an alternating sinusoidal current, we notice that the voltage across the generator is equal to the sum of the voltages across all the other components. At the instant the generator voltage is equal to that across the resistor, the voltage across the capacitor must be zero. But the voltage across the capacitor is equal to the antiderivative of the current multiplied by $1/C$ (the inverse of the capacitance). So when the voltage across the capacitor is zero, doesn't that mean the current is zero, too? There might be some idea or rule that I've missed, but I'm confused, because that would mean the current flowing through the circuit is zero as well (the current is the same everywhere in a series circuit). Answer: So when the voltage across the capacitor is zero, doesn't that mean the current is zero, too? No, a capacitor is governed by the rule $$I = C\frac{dV}{dt}.$$ So its current being zero doesn't tell you anything about what its voltage is; it only tells you the voltage is not changing. Similarly, if you know the value of the voltage, it doesn't tell you anything about the current. To know the current you need to know how quickly the voltage is changing.
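The rule $I = C\,dV/dt$ can be illustrated numerically for a sinusoidal voltage; the component values below are arbitrary illustrative choices.

```python
import math

# For a capacitor, i(t) = C * dv/dt. With v(t) = V0*sin(w*t), the current is
# i(t) = C*V0*w*cos(w*t): the current peaks exactly where the voltage is zero.
C = 1e-6              # capacitance in farads (illustrative)
V0 = 5.0              # voltage amplitude in volts
w = 2 * math.pi * 50  # angular frequency in rad/s

t = 0.0                            # at t = 0 the voltage crosses zero
v = V0 * math.sin(w * t)           # capacitor voltage
i = C * V0 * w * math.cos(w * t)   # capacitor current

print(v, i)  # v is 0 while i equals C*V0*w, its maximum
```

So zero capacitor voltage coexists with maximal current, as the answer states.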
{ "domain": "physics.stackexchange", "id": 55347, "tags": "electric-circuits, electrical-resistance, capacitance, voltage" }
Anti particles: What exactly is inverted?
Question: http://en.wikipedia.org/wiki/Antiparticle says "Corresponding to most kinds of particles, there is an associated antiparticle with the same mass and opposite electric charge." and "What is anti-matter?" goes along with that. But reading about the Standard Model I see several things sometimes called XXX-charge, like baryon-charge, lepton-charge - or baryon-number, lepton-number. And as far as I can see in the examples I found (but so far never in any definition I found), the antiparticles have a negative XXX-charge/-number too. So can you point me to a list of what exactly is inverted in antiparticles? (And why this list? Surely it has to do with the Standard Model, hasn't it? But why? :) Answer: Charge is only the most familiar of the properties that are inverted between a particle and its antiparticle, but it's not the only one. So you should not consider "same mass and opposite electric charge" to be a definition of what an antiparticle is; it's merely a plain-English explanation. A list of properties in which particles and antiparticles differ can be found on the Wikipedia page for flavor. In particular, they include:
- Each of the six quark flavor quantum numbers (upness, downness, strangeness, charmness, bottomness, topness)
- Isospin, which is like a combination of upness and downness
- Baryon number, which is like a combination of all six quark flavor numbers
- Each of the three lepton flavor quantum numbers (electron number, muon number, tau number)
- Lepton number, which is like a combination of all three lepton flavors
- Weak isospin
- Electric charge
- Hypercharge, which is like a combination of weak isospin and electric charge
- Parity
- Chirality... sort of (let's just say that one's complicated)

As I've mentioned, some of these are just combinations of others, so you couldn't make a complete list of all the quantum numbers in which particles and their antiparticles differ, but you could list all the "fundamental" ones (the basis of the vector space of quantum number operators). I'm not sure I got them all here, but I can't think of any others off the top of my head.
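The pattern behind the list is that the additive quantum numbers flip sign while the mass stays the same. A toy sketch of that rule (the dictionary layout and helper function are my own illustration; the electron's values are the standard ones):

```python
# Sketch: an antiparticle carries the same mass but negated additive quantum
# numbers (electric charge, lepton number, flavor numbers, ...).
def antiparticle(p):
    flipped = {k: -v for k, v in p.items() if k != "mass_MeV"}
    flipped["mass_MeV"] = p["mass_MeV"]  # mass is NOT inverted
    return flipped

electron = {"mass_MeV": 0.511, "charge": -1, "lepton_number": 1, "electron_number": 1}
positron = antiparticle(electron)
print(positron)  # same mass, all additive numbers negated
```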
{ "domain": "physics.stackexchange", "id": 4372, "tags": "standard-model, antimatter" }
saving pcd map for ccny_rgbd package
Question: Hi, I have tried the instructions on the wiki page of the ccny_rgbd package. After running the following sequence of commands, roslaunch ccny_openni_launch openni.launch publish_cloud:=true roslaunch ccny_rgbd vo+mapping.launch the map can be seen in rviz (sometimes it doesn't work but after restarting all nodes, it works) but after running rosservice call /save_full_map "mymap.pcd" it gives the following error: ERROR: Service [/save_full_map] is not available. However with rosservice list, a different service "/save_pcd_map" can be seen, and after calling this service no error occurs but no map is saved either. rosservice call /save_pcd_map "mymap.pcd" Did someone try this or have some suggestion how to save a map for this package? I tried it on Ubuntu 12.04 using ros fuerte. Thank you in advance, Zahid Originally posted by zahid on ROS Answers with karma: 81 on 2013-03-28 Post score: 5 Answer: I have tried it earlier and it works fine on Fuerte. You are right, the name of the service has changed [Wiki-help needs to be updated]. Here is how to save the Pcd file: rosservice call /save_pcd_map /home/your_user_name/my_map.pcd Originally posted by usman with karma: 81 on 2013-05-15 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Ivan Dryanovski on 2013-05-16: Yes, the service name changed, but I haven't updated the wiki yet. The change was due to the fact that there are now 2 formats for the full map: pcd and octomap, hence the /save_pcd_map and /save_octomap services Comment by zahid on 2013-05-21: true, it works with /save_pcd_map. I have also tried to save the octomap and it works fine as well. Thank you Usman and Ivan
{ "domain": "robotics.stackexchange", "id": 13580, "tags": "ros, ccny-rgbd" }
What is the biochemical reason for mental fatigue?
Question: Is it known exactly why the brain needs sleep? What's dropping low / going high when we experience mental fatigue? I can see why low glucose could result in mental fatigue, are other reasons known? Answer: This is not the biochemistry, but the brain regions involved are described in this article about an fMRI study: http://www.sciencedaily.com/releases/2012/12/121210101630.htm EDIT: From what I can tell, mental fatigue is attributed to low oxygenation levels. Here's a study that examines the effect of creatine in preventing mental fatigue: http://jtoomim.org/brain-training/watanabe2001-creatine-reduces-mentalfatigue.pdf
{ "domain": "biology.stackexchange", "id": 830, "tags": "biochemistry, neuroscience, cell-biology" }
Compare between JPEG and JPEG2000
Question: JPEG image compression is based on the Fourier-related DCT, while modern image compression techniques like JPEG2000 are based on multi-scale transforms like wavelets. I want to know how Fourier and wavelet transforms are useful in image compression. So can anybody explain the advantages and disadvantages of JPEG and JPEG2000 with the help of (the characteristics of) the transforms they use? Answer: JPEG is far simpler. It divides the image into 8x8 pixel blocks, and processes each using a Discrete Cosine Transform. The results are quantised and then encoded. The quality is fixed by the encoder. JPEG2000 uses a 2D wavelet function, the output of which is four "images", each a quarter the size of the original. One of those is actually an image, while the others are high-frequency components that can be added to it to re-construct the full-resolution image. The wavelet process may be repeated multiple times. The result is a tiny image, and a series of high frequency components that may be combined with it. Each resulting component is quantised and encoded. JPEG is fine for high quality and modest compression, which is why it is still very widely used. JPEG2000 offers several advantages:
- To achieve very high compression, it is possible to throw away or heavily quantise the high frequency components. This gives a poor quality - but usable - image where JPEG would fail completely.
- Images can be re-constructed progressively at ever improving quality. This can be either in terms of increasing resolution or bit depth as required.
- It supports the JPIP protocol for progressively transmitting images to a client. The client may retrieve low-resolution thumbnails and then just the parts of the image they want at better resolutions.
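The wavelet step described above — one pass producing a quarter-size image plus three detail bands — can be sketched with a plain Haar transform. (JPEG2000 actually uses the 5/3 and 9/7 wavelets, so this is only an illustration of the structure, not of the real codec.)

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2D Haar wavelet on an even-dimensioned array:
    returns (LL, LH, HL, HH), each a quarter the size of the input."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0   # quarter-size "image" (local averages)
    LH = (a + b - c - d) / 4.0   # detail band
    HL = (a - b + c - d) / 4.0   # detail band
    HH = (a - b - c + d) / 4.0   # detail band
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d_level(img)
print(LL.shape)  # (2, 2): each band is a quarter of the original
```

Repeating `haar2d_level` on `LL` gives the multi-resolution pyramid; discarding or coarsely quantising the detail bands is what enables the very high compression described above.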
{ "domain": "dsp.stackexchange", "id": 2763, "tags": "wavelet, fourier, compression, image-compression" }
When is the Choi matrix of a channel pure?
Question: For a quantum channel $\mathcal{E}$, the Choi state is defined by the action of the channel on one half of an unnormalized maximally entangled state as below: $$J(\mathcal{E}) = (\mathcal{E}\otimes I)\sum_{ij}\vert i\rangle\langle j\vert\otimes \vert i\rangle\langle j\vert$$ For isometric channels, the Choi state is also a pure state. What about the converse statement? Does the Choi state being pure give us any information about the properties of the channel? Answer: It works the other way around too. A pure state is rank $1$, and any channel with more than one Kraus operator will give a higher-rank Choi matrix, which can be easily seen from the definition. You can also work it out from a different condition for pure states: For any pure state $\rho$ we have $\rho^{2} = \rho.$ $$ (J(\mathcal{E}))^{2} = \sum_{ij}\sum_{kl} \mathcal{E}(|i\rangle \langle j|)\mathcal{E}(|k\rangle \langle l|) \otimes |i\rangle \langle j|k\rangle\langle l| = \sum_{ijl}\mathcal{E}(|i\rangle \langle j|)\mathcal{E}(|j\rangle \langle l|) \otimes |i\rangle \langle l| $$ so if $J(\mathcal{E})$ is pure then: $$ \sum_{ij}\mathcal{E}(|i\rangle \langle j|) \otimes |i\rangle \langle j| = \sum_{ijl}\mathcal{E}(|i\rangle \langle j|)\mathcal{E}(|j\rangle \langle l|) \otimes |i\rangle \langle l| $$ which, when relabeling $j \leftrightarrow l$ on the right hand side, leads to: $$ \sum_{ij}\mathcal{E}(|i\rangle \langle j|) \otimes |i\rangle \langle j| = \sum_{l}\sum_{ij}\mathcal{E}(|i\rangle \langle l|)\mathcal{E}(|l\rangle \langle j|) \otimes |i\rangle \langle j| $$ Since all different $|i\rangle\langle j |$ are orthogonal, this needs to hold term-by-term: $$ \mathcal{E}(|i\rangle \langle j|) = \sum_{l}\mathcal{E}(|i\rangle \langle l|)\mathcal{E}(|l\rangle \langle j|). 
$$ Writing $\mathcal{E}$ in its Kraus decomposition $\{A_{k}\}$ sheds some extra light: $$ \sum_{k} A_{k}|i\rangle \langle j | A_{k}^{\dagger} = \sum_{l} \sum_{k'}\sum_{k''} A_{k'}|i\rangle \langle l | A_{k'}^{\dagger} A_{k''}|l\rangle \langle j | A_{k''}^{\dagger} $$ noting that $\sum_{l} \langle l| A^{\dagger}_{k'} A_{k''}|l\rangle = \mathrm{tr}[A^{\dagger}_{k'} A_{k''}]$, we get: $$ \sum_{k} A_{k}|i\rangle \langle j | A_{k}^{\dagger} = \sum_{k'}\sum_{k''}\mathrm{tr}[A^{\dagger}_{k'} A_{k''}] A_{k'}|i\rangle \langle j | A_{k''}^{\dagger} $$ and taking the trace and using its cyclic property on either side we get: $$ \sum_{k'k''}\delta_{k'k''} \langle j | A_{k''}^{\dagger}A_{k'}|i\rangle = \sum_{k'}\sum_{k''}\mathrm{tr}[A^{\dagger}_{k'} A_{k''}] \langle j | A_{k''}^{\dagger}A_{k'}|i\rangle $$ Importantly, this works for every $|i\rangle, | j \rangle$, so the above equation can only hold if $\delta_{k'k''} = \mathrm{tr}[A^{\dagger}_{k'} A_{k''}]$, which is evidently only true if the Kraus operators are orthogonal and of unit length. But then they are unitary, which means there is only a single Kraus operator, necessarily unitary.
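The rank criterion can be checked numerically: the (unnormalized) Choi matrix built from a single unitary Kraus operator has rank 1, while a genuinely mixing channel does not. A small numpy sketch for qubit channels (the helper below is my own, not from the question):

```python
import numpy as np

def choi(kraus_ops, d=2):
    """Unnormalized Choi matrix J(E) = sum_ij E(|i><j|) (x) |i><j|."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0                      # the matrix unit |i><j|
            out = sum(K @ Eij @ K.conj().T for K in kraus_ops)  # E(|i><j|)
            ketbra = np.zeros((d, d), dtype=complex)
            ketbra[i, j] = 1.0
            J += np.kron(out, ketbra)
    return J

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

J_unitary = choi([X])  # unitary channel: a single Kraus operator
p = 0.3
J_flip = choi([np.sqrt(1 - p) * I2, np.sqrt(p) * X])  # bit-flip channel

print(np.linalg.matrix_rank(J_unitary), np.linalg.matrix_rank(J_flip))  # 1 2
```

The rank of the Choi matrix equals the minimal number of Kraus operators, so rank 1 is exactly the single-(unitary-)Kraus case derived above.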
{ "domain": "quantumcomputing.stackexchange", "id": 2258, "tags": "quantum-operation, information-theory" }
Does $\#W$[1]-hardness imply approximation hardness?
Question: Let $\Pi$ be a parametrized counting problem, where the parameter is the solution cost, e.g. counting the number of $k$-sized vertex cover in a graph, parametrized by $k$. Assume that $\Pi$ is $\#W$[1]-complete (a known problem for example would be counting the number of simple paths of length $k$ in a graph). Does it imply that $\Pi$ is $APX$-hard (i.e. no PTAS for the problem exists unless $P=NP$)? Note that when discussing a parameter which is the cost of solution it makes sense to discuss the approximation hardness (e.g. see this question), as opposed to other popular parametrizations. Answer: $\mathrm{W}[1]$-hardness implies that a problem has no eptas unless (at least) $\mathrm{W}[1] = \mathrm{FPT}$ (having an eptas implies parameterized tractability for the standard solution size parameterization), but there are problems with a ptas that are $\mathrm{W}[1]$-hard (i.e. not $\mathrm{APX}$-hard unless $\mathrm{APX} = \mathrm{PTAS}$). Transferring this to $\#\mathrm{W}[1]$-hardness, you can at least say that a $\#\mathrm{W}[1]$-hard problem still has no eptas, but I don't think a stronger statement can be made a priori. Turning to speculation, I suspect it's also not true that $\mathrm{APX}$-hardness immediately follows from $\#\mathrm{W}[1]$-hardness in general, though perhaps something may be said for higher classes, such as the counting version of $\mathrm{para}\text{-}\mathrm{NP}$.
{ "domain": "cs.stackexchange", "id": 4413, "tags": "complexity-theory, approximation, counting, parameterized-complexity" }
Has the Big Crunch been ruled out?
Question: If the dark energy equation of state $\omega$ evolves with time, and becomes larger, this would result in decelerated expansion and eventually a Big Crunch. If $\omega$ decreases with time, becoming more negative, then a Big Rip would occur. I understand that the Big Rip has not yet been ruled out by current observations, but I often hear that the Big Crunch has been ruled out since we know the universe is expanding (and accelerating). If the Big Rip can't currently be ruled out, then how can the Big Crunch be ruled out? Shouldn't they both still be on the table, unless we can somehow infer which direction $\omega$ evolves in under a quintessence model? Answer: The Big Rip happens if the equation of state for the dark energy has $p/\rho = w<-1$, and all empirical data give us $w\approx -1$. A Big Crunch requires a pretty high value of $w$ (it must go above -1/3 to just stop the acceleration), but of course if dark energy is changing over time it might do that. So tentatively ruling out Big Crunches is the reasonable thing to do given observational evidence and the assumption that the future will not be wildly different. If one thinks we should be more open to dark energy changing (which is going to be rather unconstrained by observation for the foreseeable future) then we should also be more open to $w$ moving across the Big Rip boundary.
{ "domain": "astronomy.stackexchange", "id": 3771, "tags": "cosmology, expansion, dark-energy, fate-of-universe" }
How to mathematically determine row, column, and sub-square of cell in nxn array where n is a perfect square?
Question: Given a one-dimensional array of size n×n, where n is a perfect square, how can one mathematically determine the row, column, and/or sub-square a cell resides in? Additionally, is there a mathematical way to traverse a subsquare? Answer: Let the one-dimensional cells be $c_1, c_2, \cdots, c_{n^2}$. Assume the top-left cell is at $(1,1)$, i.e., the first row and the first column, and the top-right cell is at $(1,n)$, i.e., the first row and the $n$-th column. Then the $i$-th cell, $c_i$, is at $((i-1)/n + 1, (i-1)\%n + 1)$. Here $/$ is integer division and $\%$ is the modulo operation, as in any popular programming language. (Subtracting $1$ before dividing is what keeps cells at the end of a row correct: $c_n$ lands at $(1,n)$, not $(2,1)$.) For example, let $n=9$. Then the $42$nd cell, $c_{42}$, is at $((42-1)/9+1, (42-1)\%9+1)=(5, 6)$. Suppose the subsquares are lined up in the same row-major order as the cells, so that we have subsquares $S1, S2, \cdots, Sn$. Consider each subsquare as a kind of "large cell", giving a $\sqrt n\times\sqrt n$ grid of large cells with its own coordinates. A cell at $(row, col)$ lies in the large cell at $((row-1)/\sqrt n + 1, (col-1)/\sqrt n + 1)$, which is subsquare number $((row-1)/\sqrt n)\cdot\sqrt n + (col-1)/\sqrt n + 1$. For example, $c_{37}$ is at $(5,1)$ when $n=9$, so it lies in the large cell at $(2,1)$, i.e., subsquare $S4$. Suppose we want to traverse the subsquare at $(j,k)$ (where $(j,k)$ is in the coordinates for subsquares). The first cell (the top-left cell) of that subsquare is $c_{(j-1)\sqrt n\cdot n + (k-1)\sqrt n +1}$ The first cell of the second row of that subsquare is $c_{(j-1)\sqrt n\cdot n + (k-1)\sqrt n +1 + n}$ $\cdots$ The first cell of the last row of that subsquare is $c_{(j-1)\sqrt n\cdot n + (k-1)\sqrt n + 1 + (\sqrt n -1 )n}$ So we can traverse all cells in that subsquare by the following pseudocode. 
$\quad$ for $row$ in $1, 2, \cdots, \sqrt n$ $\quad\quad$ for $column$ in $1, 2, \cdots, \sqrt n$ $\quad\quad\quad$ visit the cell at $row$-th row and $column$-th column of the subsquare at $(j,k)$, which is $c_{(j-1)\sqrt n\cdot n + (k-1)\sqrt n + (row -1 )n + column}$ Note "$row$-th row and $column$-th column" are referring to cells in that subsquare. For example, we will traverse the cells at subsquare (2,3) in the following order. cells in its first row, $c_{34}$, $c_{35}$, $c_{36}$, cells in its second row, $c_{43}$, $c_{44}$, $c_{45}$, cells in its third row, $c_{52}$, $c_{53}$, $c_{54}$.
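Putting the pieces together in code (a sketch; note the $i-1$ shifts, which keep 1-indexed cells correct at the end of a row — $c_n$ must land at row 1, column $n$):

```python
import math

def cell_position(i, n):
    """Row and column (both 1-indexed) of 1-indexed cell c_i in an n x n grid."""
    return (i - 1) // n + 1, (i - 1) % n + 1

def subsquare_of(i, n):
    """Row-major index (1-indexed) of the sqrt(n) x sqrt(n) subsquare holding c_i."""
    s = math.isqrt(n)
    row, col = cell_position(i, n)
    return ((row - 1) // s) * s + (col - 1) // s + 1

def traverse_subsquare(j, k, n):
    """Cell indices of the subsquare at subsquare-coordinates (j, k), row by row."""
    s = math.isqrt(n)
    start = (j - 1) * s * n + (k - 1) * s + 1  # top-left cell of the subsquare
    return [start + r * n + c for r in range(s) for c in range(s)]

print(cell_position(42, 9))        # (5, 6)
print(traverse_subsquare(2, 3, 9)) # [34, 35, 36, 43, 44, 45, 52, 53, 54]
```

The last call reproduces the traversal order worked out above for the subsquare at $(2,3)$ with $n=9$.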
{ "domain": "cs.stackexchange", "id": 16199, "tags": "algorithms, arrays, square-grid" }
rosmake of sick_tim3xx package failed : says permission denied on SickTim3xx.cfg
Question: I am using ros-groovy on ubuntu 12.04 after running $ rosmake sick_tim3xx i get the output [ rosmake ] rosmake starting... [ rosmake ] Packages requested are: ['sick_tim3xx'] [ rosmake ] Logging to directory /home/piyush/.ros/rosmake/rosmake_output-20140504-222452 [ rosmake ] Expanded args ['sick_tim3xx'] to: ['sick_tim3xx'] [rosmake-0] Starting >>> catkin [ make ] [rosmake-0] Finished <<< catkin ROS_NOBUILD in package catkin No Makefile in package catkin [rosmake-0] Starting >>> genmsg [ make ] [rosmake-3] Starting >>> cpp_common [ make ] [rosmake-0] Finished <<< genmsg ROS_NOBUILD in package genmsg No Makefile in package genmsg [rosmake-2] Starting >>> rospack [ make ] [rosmake-0] Starting >>> genlisp [ make ] [rosmake-1] Starting >>> genpy [ make ] [rosmake-3] Finished <<< cpp_common ROS_NOBUILD in package cpp_common No Makefile in package cpp_common [rosmake-3] Starting >>> gencpp [ make ] [rosmake-0] Finished <<< genlisp ROS_NOBUILD in package genlisp No Makefile in package genlisp [rosmake-2] Finished <<< rospack ROS_NOBUILD in package rospack No Makefile in package rospack [rosmake-0] Starting >>> rostime [ make ] [rosmake-2] Starting >>> roslib [ make ] [rosmake-1] Finished <<< genpy ROS_NOBUILD in package genpy No Makefile in package genpy [rosmake-1] Starting >>> roslang [ make ] [rosmake-0] Finished <<< rostime ROS_NOBUILD in package rostime No Makefile in package rostime [rosmake-0] Starting >>> roscpp_traits [ make ] [rosmake-3] Finished <<< gencpp ROS_NOBUILD in package gencpp No Makefile in package gencpp [rosmake-2] Finished <<< roslib ROS_NOBUILD in package roslib No Makefile in package roslib [rosmake-2] Starting >>> rosunit [ make ] [rosmake-3] Starting >>> message_generation [ make ] [rosmake-1] Finished <<< roslang ROS_NOBUILD in package roslang No Makefile in package roslang [rosmake-0] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits No Makefile in package roscpp_traits [rosmake-0] Starting >>> roscpp_serialization [ 
make ] [rosmake-1] Starting >>> xmlrpcpp [ make ] [rosmake-2] Finished <<< rosunit ROS_NOBUILD in package rosunit No Makefile in package rosunit [rosmake-2] Starting >>> rosconsole [ make ] [rosmake-3] Finished <<< message_generation ROS_NOBUILD in package message_generation No Makefile in package message_generation [rosmake-3] Starting >>> rosgraph [ make ] [rosmake-0] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization No Makefile in package roscpp_serialization [rosmake-0] Starting >>> message_runtime [ make ] [rosmake-1] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp No Makefile in package xmlrpcpp [rosmake-1] Starting >>> rosclean [ make ] [rosmake-2] Finished <<< rosconsole ROS_NOBUILD in package rosconsole No Makefile in package rosconsole [rosmake-0] Finished <<< message_runtime ROS_NOBUILD in package message_runtime No Makefile in package message_runtime [rosmake-1] Finished <<< rosclean ROS_NOBUILD in package rosclean No Makefile in package rosclean [rosmake-0] Starting >>> std_msgs [ make ] [rosmake-3] Finished <<< rosgraph ROS_NOBUILD in package rosgraph No Makefile in package rosgraph [rosmake-3] Starting >>> rosparam [ make ] [rosmake-1] Starting >>> rosmaster [ make ] [rosmake-0] Finished <<< std_msgs ROS_NOBUILD in package std_msgs No Makefile in package std_msgs [rosmake-0] Starting >>> rosgraph_msgs [ make ] [rosmake-3] Finished <<< rosparam ROS_NOBUILD in package rosparam No Makefile in package rosparam [rosmake-2] Starting >>> geometry_msgs [ make ] [rosmake-3] Starting >>> diagnostic_msgs [ make ] [rosmake-0] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs No Makefile in package rosgraph_msgs [rosmake-0] Starting >>> roscpp [ make ] [rosmake-1] Finished <<< rosmaster ROS_NOBUILD in package rosmaster No Makefile in package rosmaster [rosmake-3] Finished <<< diagnostic_msgs ROS_NOBUILD in package diagnostic_msgs No Makefile in package diagnostic_msgs [rosmake-1] Starting >>> rospy [ make ] 
[rosmake-0] Finished <<< roscpp ROS_NOBUILD in package roscpp No Makefile in package roscpp [rosmake-0] Starting >>> rosout [ make ] [rosmake-2] Finished <<< geometry_msgs ROS_NOBUILD in package geometry_msgs No Makefile in package geometry_msgs [rosmake-2] Starting >>> sensor_msgs [ make ] [rosmake-1] Finished <<< rospy ROS_NOBUILD in package rospy No Makefile in package rospy [rosmake-0] Finished <<< rosout ROS_NOBUILD in package rosout No Makefile in package rosout [rosmake-0] Starting >>> roslaunch [ make ] [rosmake-2] Finished <<< sensor_msgs ROS_NOBUILD in package sensor_msgs No Makefile in package sensor_msgs [rosmake-0] Finished <<< roslaunch ROS_NOBUILD in package roslaunch No Makefile in package roslaunch [rosmake-0] Starting >>> rostest [ make ] [rosmake-0] Finished <<< rostest ROS_NOBUILD in package rostest 30/40 Complete ] No Makefile in package rostest [rosmake-0] Starting >>> topic_tools [ make ] [rosmake-1] Starting >>> diagnostic_updater [ make ] [rosmake-0] Finished <<< topic_tools ROS_NOBUILD in package topic_tools No Makefile in package topic_tools [rosmake-0] Starting >>> rosbag [ make ] [rosmake-1] Finished <<< diagnostic_updater ROS_NOBUILD in package diagnostic_updater No Makefile in package diagnostic_updater [rosmake-1] Starting >>> self_test [ make ] [rosmake-0] Finished <<< rosbag ROS_NOBUILD in package rosbag No Makefile in package rosbag [rosmake-0] Starting >>> rosmsg [ make ] [rosmake-1] Finished <<< self_test ROS_NOBUILD in package self_test No Makefile in package self_test [rosmake-0] Finished <<< rosmsg ROS_NOBUILD in package rosmsg No Makefile in package rosmsg [rosmake-0] Starting >>> rosservice [ make ] [rosmake-0] Finished <<< rosservice ROS_NOBUILD in package rosservice No Makefile in package rosservice [rosmake-0] Starting >>> dynamic_reconfigure [ make ] [rosmake-0] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure No Makefile in package dynamic_reconfigure [rosmake-0] Starting >>> driver_base [ 
make ] [rosmake-0] Finished <<< driver_base ROS_NOBUILD in package driver_base No Makefile in package driver_base [rosmake-0] Starting >>> sick_tim3xx [ make ] [ rosmake ] Last 40 linesck_tim3xx: 3.4 sec ] [ 1 Active 39/40 Complete ] {------------------------------------------------------------------------------- -- This workspace overlays: /opt/ros/groovy -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Python version: 2.7 -- Using Debian Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Skip enable_testing() for dry packages -- Using CATKIN_TEST_RESULTS_DIR: /home/piyush/ros_packages/sick_tim3xx/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- catkin 0.5.86 -- Using these message generators: gencpp;genlisp;genpy [rosbuild] Including /opt/ros/groovy/share/roslisp/rosbuild/roslisp.cmake [rosbuild] Including /opt/ros/groovy/share/roscpp/rosbuild/roscpp.cmake [rosbuild] Including /opt/ros/groovy/share/rospy/rosbuild/rospy.cmake MSG: gencfg_cpp on:SickTim3xx.cfg [gendeps] Finding dependencies for /home/piyush/ros_packages/sick_tim3xx/cfg/SickTim3xx.cfg -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: CMAKE_TOOLCHAIN_FILE -- Build files have been written to: /home/piyush/ros_packages/sick_tim3xx/build cd build && make -j4 -l4 make[1]: Entering directory `/home/piyush/ros_packages/sick_tim3xx/build' make[2]: Entering directory `/home/piyush/ros_packages/sick_tim3xx/build' make[3]: Entering directory `/home/piyush/ros_packages/sick_tim3xx/build' make[3]: Leaving directory `/home/piyush/ros_packages/sick_tim3xx/build' make[3]: Entering directory `/home/piyush/ros_packages/sick_tim3xx/build' [ 50%] Generating ../cfg/cpp/sick_tim3xx/SickTim3xxConfig.h, ../docs/SickTim3xxConfig.dox, ../docs/SickTim3xxConfig-usage.dox, ../src/sick_tim3xx/cfg/SickTim3xxConfig.py, ../docs/SickTim3xxConfig.wikidoc **make[3]: execvp: ../cfg/SickTim3xx.cfg: Permission denied make[3]: 
*** [../cfg/cpp/sick_tim3xx/SickTim3xxConfig.h] Error 127 make[3]: Leaving directory `/home/piyush/ros_packages/sick_tim3xx/build' make[2]: *** [CMakeFiles/ROSBUILD_gencfg_cpp.dir/all] Error 2 make[2]: Leaving directory `/home/piyush/ros_packages/sick_tim3xx/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/piyush/ros_packages/sick_tim3xx/build'** -------------------------------------------------------------------------------} [ rosmake ] Output from build of package sick_tim3xx written to: [ rosmake ] /home/piyush/.ros/rosmake/rosmake_output-20140504-222452/sick_tim3xx/build_output.log [rosmake-0] Finished <<< sick_tim3xx [FAIL] [ 3.39 seconds ] [ rosmake ] Halting due to failure in package sick_tim3xx. [ rosmake ] Waiting for other threads to complete. [ rosmake ] Results: [ rosmake ] Built 40 packages with 1 failures. [ rosmake ] Summary output to directory [ rosmake ] /home/piyush/.ros/rosmake/rosmake_output-20140504-222452 I HAVE HIGHLIGHTED THE PART THAT SEEMS TO BE THE PROBLEM permissions of the piyush@TheCoolStuffStation:~/ros_packages/sick_tim3xx/cfg$ roscd sick_tim3xx/ piyush@TheCoolStuffStation:~/ros_packages/sick_tim3xx$ ls -lah cfg total 24K drwxrwxr-x 3 piyush piyush 4.0K May 5 19:59 . drwxrwxr-x 12 piyush piyush 4.0K May 5 20:00 .. drwxrwxr-x 3 piyush piyush 4.0K May 5 19:59 cpp -rw-rw-r-- 1 piyush piyush 3.2K Dec 14 02:30 SickTim3xx.cfg -rw-rw-r-- 1 piyush piyush 3.7K Dec 14 02:30 SickTim3xx.cfg~ -rw-rw-r-- 1 piyush piyush 1.3K Dec 14 02:46 SickTim3xx.cfgc piyush@TheCoolStuffStation:~/ros_packages/sick_tim3xx$ git status fatal: Not a git repository (or any of the parent directories): .git piyush@TheCoolStuffStation:~/ros_packages/sick_tim3xx$ cd .. 
piyush@TheCoolStuffStation:~/ros_packages$ whoami piyush piyush@TheCoolStuffStation:~/ros_packages$ Originally posted by hgtc-dp on ROS Answers with karma: 15 on 2014-05-04 Post score: 0 Answer: You've probably run make using sudo before (you shouldn't do that), so you'll have to check the permissions on those folders. The easiest way to fix it would be to remove the build folder (using sudo) and try again. Edit 1: Can you edit your question and paste the output of the following commands there? roscd sick_tim3xx ls -lah cfg git status whoami Edit 2: Thanks for adding that info. Something weird is going on: you seem to have deleted the sick_tim3xx/.git directory, for example. Please delete the whole sick_tim3xx directory and run: git clone -b groovy https://github.com/uos/sick_tim3xx.git Then start over. Originally posted by Martin Günther with karma: 11816 on 2014-05-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by hgtc-dp on 2014-05-05: I tried it but i am getting the same error. I know for sure that rosmake is working fine as I have built other packages. Permissions attached with cfg folder are drwxrwxr-x(same for all folders in that directory). And with SickTim3xx.cfg -rw-rw-r-- do i need to change something in these. Thanks alot Comment by Martin Günther on 2014-05-05: It's not just the permissions, but also which user the files belong to. I've edited my answer. Comment by Martin Günther on 2014-05-05: Also, maybe the file isn't there at all. Try a git reset --hard. Comment by hgtc-dp on 2014-05-06: Thanks a lot it worked. Earlier I was downloading the .zip file and then extracting the package. Could you tell me why it could have been failing Comment by Martin Günther on 2014-05-06: No idea. Somehow you didn't get all necessary files in the right places. Glad it works now for you. :-)
{ "domain": "robotics.stackexchange", "id": 17843, "tags": "ros" }
A chessboard model in JavaScript
Question: I've created a chessboard model in JavaScript that will eventually iterate through some moves of a championship game. I've created a board object and a piece constructor. I've also created all the pieces with their correct position on a typical chessboard by rank and file. I don't think I've created my move method correctly in the piece constructor though, which would just move the pieces on the board. Is this the best way to simulate moving pieces you think? I want the move method to push the moves onto my empty game array and eventually I want to be able to move back and forth on the array. This is going to be hooked up to the chessboard I've already made with the HTML and CSS code below if you care to look at it. (Some of the boilerplate CSS has been omitted.) (function(window) { var board = { file: ["a", "b", "c", "d", "e", "f", "g", "h"], rank: [1, 2, 3, 4, 5, 6, 7, 8] }; var piece = function (filePlace, rankPlace) { this.file = board.file[filePlace - 1]; this.rank = board.rank[rankPlace - 1]; this.move = function (a, b) { this.file = board.file[a - 1]; this.rank = board.rank[b - 1]; }; }; var whiteRook1 = new piece(1, 1); var whiteRook2 = new piece(8, 1); var whiteKnight1 = new piece(2, 1); var whiteKnight2 = new piece(7, 1); var whiteBishop1 = new piece(3, 1); var whiteBishop2 = new piece(6, 1); var whiteQueen = new piece(4, 1); var whiteKing = new piece(5, 1); var whitePawn1 = new piece(1, 2); var whitePawn2 = new piece(2, 2); var whitePawn3 = new piece(3, 2); var whitePawn4 = new piece(4, 2); var whitePawn5 = new piece(5, 2); var whitePawn6 = new piece(6, 2); var whitePawn7 = new piece(7, 2); var whitePawn8 = new piece(8, 2); var blackRook1 = new piece(1, 8); var blackRook2 = new piece(8, 8); var blackKnight1 = new piece(2, 8); var blackKnight2 = new piece(7, 8); var blackBishop1 = new piece(3, 8); var blackBishop2 = new piece(6, 8); var blackQueen = new piece(4, 8); var blackKing = new piece(5, 8); var blackPawn1 = new piece(1, 7); var 
blackPawn2 = new piece(2, 7); var blackPawn3 = new piece(3, 7); var blackPawn4 = new piece(4, 7); var blackPawn5 = new piece(5, 7); var blackPawn6 = new piece(6, 7); var blackPawn7 = new piece(7, 7); var blackPawn8 = new piece(8, 7); var game = []; window.chess = { }; })(window); body { background-color: darkgrey; } .container { width: 80%; margin: 3em auto 3em auto; min-width: 1in; max-width: 6in; } .chessboard .row { margin: 0; padding: 0; position: relative; clear: both; } .chessboard .row::before, .chessboard .row::after { font-size: 300%; position: absolute; } .chessboard .row::after { left: 103%; } .chessboard .row::before { right: 103%; } .chessboard .rank-8::before, .chessboard .rank-8::after { content: '8'; } .chessboard .rank-7::before, .chessboard .rank-7::after { content: '7'; } .chessboard .rank-6::before, .chessboard .rank-6::after { content: '6'; } .chessboard .rank-5::before, .chessboard .rank-5::after { content: '5'; } .chessboard .rank-4::before, .chessboard .rank-4::after { content: '4'; } .chessboard .rank-3::before, .chessboard .rank-3::after { content: '3'; } .chessboard .rank-2::before, .chessboard .rank-2::after { content: '2'; } .chessboard .rank-1::before, .chessboard .rank-1::after { content: '1'; } .chessboard .square { background-color: red; width: 12.5%; padding-bottom: 12.5%; display: inline-block; float: left; } .chessboard .row:nth-child(even) .square:nth-child(even) { background-color: red; } .chessboard .row:nth-child(even) .square:nth-child(odd) { background-color: lightgray; } .chessboard .square:nth-child(even) { background-color: lightgrey; } .chessboard .legend { display: inline-block; width: 12.5%; text-align: center; float: left; margin: 0; padding: 0; font-size: 300%; } nav { text-align: center; } nav button { font-size: 5ex; background-color: red; padding: 0 0.5em 0 0.5em; border-radius: 20%; } .chessboard .row .white::before, .chessboard .row .black::before { font-size: 300%; text-align: center; position: absolute; 
width: 12.5%; line-height: 1.2; } .chessboard .row .black.pawn::before, .white.pawn::before { content: '\265f'; } .chessboard .row .black.knight::before, .white.knight::before { content: '\265e'; } .chessboard .row .black.rook::before, .white.rook::before { content: '\265c'; } .chessboard .row .black.queen::before, .white.queen::before { content: '\265b'; } .chessboard .row .black.king::before, .white.king::before { content: '\265a'; } .chessboard .row .black.bishop::before, .white.bishop::before { content: '\265d'; } .chessboard .row .white { color: white; } <!doctype html> <html class="no-js" lang=""> <head> <meta charset="utf-8"> <meta http-equiv="x-ua-compatible" content="ie=edge"> <title>TIY Chessboard: Kasparov v Karpov (1984)</title> <meta name="description" content=""> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- Place favicon.ico in the root directory --> <link rel="stylesheet" href="css/normalize.css"> <link rel="stylesheet" href="css/main.css"> <script src="js/vendor/modernizr-2.8.3.min.js"></script> </head> <body> <!--[if lt IE 8]> <p class="browserupgrade">You are using an <strong>outdated</strong> browser. 
Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p> <![endif]--> <style> </style> <main class="container"> <div class="chessboard"> <section class="rowFileLegend"> <p class="legend">A</p> <p class="legend">B</p> <p class="legend">C</p> <p class="legend">D</p> <p class="legend">E</p> <p class="legend">F</p> <p class="legend">G</p> <p class="legend">H</p> </section> <section class="row rank-8"> <div class="square file-a black rook"></div> <div class="square file-b black knight"></div> <div class="square file-c black bishop"></div> <div class="square file-d black queen"></div> <div class="square file-e black king"></div> <div class="square file-f black bishop"></div> <div class="square file-g black knight"></div> <div class="square file-h black rook"></div> </section> <section class="row rank-7"> <div class="square file-a black pawn"></div> <div class="square file-b black pawn"></div> <div class="square file-c black pawn"></div> <div class="square file-d black pawn"></div> <div class="square file-e black pawn"></div> <div class="square file-f black pawn"></div> <div class="square file-g black pawn"></div> <div class="square file-h black pawn"></div> </section> <section class="row rank-6"> <div class="square file-a"></div> <div class="square file-b"></div> <div class="square file-c"></div> <div class="square file-d"></div> <div class="square file-e"></div> <div class="square file-f"></div> <div class="square file-g"></div> <div class="square file-h"></div> </section> <section class="row rank-5"> <div class="square file-a"></div> <div class="square file-b"></div> <div class="square file-c"></div> <div class="square file-d"></div> <div class="square file-e"></div> <div class="square file-f"></div> <div class="square file-g"></div> <div class="square file-h"></div> </section> <section class="row rank-4"> <div class="square file-a"></div> <div class="square file-b"></div> <div class="square file-c"></div> <div class="square 
file-d"></div> <div class="square file-e"></div> <div class="square file-f"></div> <div class="square file-g"></div> <div class="square file-h"></div> </section> <section class="row rank-3"> <div class="square file-a"></div> <div class="square file-b"></div> <div class="square file-c"></div> <div class="square file-d"></div> <div class="square file-e"></div> <div class="square file-f"></div> <div class="square file-g"></div> <div class="square file-h"></div> </section> <section class="row rank-2"> <div class="square file-a white pawn"></div> <div class="square file-b white pawn"></div> <div class="square file-c white pawn"></div> <div class="square file-d white pawn"></div> <div class="square file-e white pawn"></div> <div class="square file-f white pawn"></div> <div class="square file-g white pawn"></div> <div class="square file-h white pawn"></div> </section> <section class="row rank-1"> <div class="square file-a white rook"></div> <div class="square file-b white knight"></div> <div class="square file-c white bishop"></div> <div class="square file-d white queen"></div> <div class="square file-e white king"></div> <div class="square file-f white bishop"></div> <div class="square file-g white knight"></div> <div class="square file-h white rook"></div> </section> <section class="rowFileLegend"> <p class="legend">A</p> <p class="legend">B</p> <p class="legend">C</p> <p class="legend">D</p> <p class="legend">E</p> <p class="legend">F</p> <p class="legend">G</p> <p class="legend">H</p> </section> <nav> <button class="buttons">&#9654;</button> <button class="buttons">| <</button> <button class="buttons"><</button> <button class="buttons">></button> <button class="buttons">> |</button> </nav> </div> </main> </body> </html> Answer: Semantic markup This is a clear misuse of the p and section elements. Sections are for outlining purposes, and single letters don't make for a paragraph. Game boards like this are tabular data, making tables the best element to use. 
The "labels" A-H and 1-8 should be marked up using the th element. Classitis There's nothing efficient about adding a class to every single element on the page: <div class="square file-a"></div> This sort of repetition is precisely why we have the descendant selector, element selector, and the wildcard selector. Element selector will fit best when the markup is switched to a table. td, th { width: 12.5%; padding-bottom: 12.5%; } Class names for storing metadata Using the class attribute to store information about your tiles is a poor choice and it makes moving your pieces more annoying (because now you have to modify your list of classes to remove/change what piece resides there). In some instances, using the custom data-* attributes will make things easier to work with. Your chess pieces are content. The information for them should be in the markup, not your CSS. <tr> <th>1</th> <td>&#9814;</td><!-- white rook --> <td></td><!-- the white knight used to be here --> <td>&#9815;</td><!-- white bishop --> <!-- etc --> </tr> This way, moving the pieces is just a matter of removing/appending the text nodes to the cell element. Using an extra element such as a span to contain each chess piece might make it easier to write your click event handler(s) if you want to make this a playable chess game. You might also prefer it for readability purposes (<td>&#9815;</td> vs <td><span data-color="white" data-piece="bishop">&#9815;</span></td>). Upgrade your browser notifications This sort of warning is pretty tacky. On top of that, the number of features you're using that would prevent an IE8 user from enjoying your content is pretty much nil (nth-child is the only thing I see that I'm certain won't work, and that's purely decorative) or requires an extremely tiny shim (section and main, and we've already established that you shouldn't be using section for this).
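The answer's point that "moving the pieces is just a matter of removing/appending the text nodes" needs a DOM to demonstrate, so here is a DOM-free sketch of the same idea: one object keyed by square name, where a move is a single reassignment and a capture is just an overwrite. The names (`board`, `movePiece`) are invented for this example and do not come from the question's code.

```javascript
// Hypothetical sketch of the suggested model: the board is an object keyed
// by square name, so "moving" a piece is delete-from-one-key, assign-to-another.
const board = {
  e2: { color: "white", piece: "pawn" },
  d7: { color: "black", piece: "pawn" }
};

function movePiece(from, to) {
  if (!board[from]) throw new Error("no piece on " + from);
  board[to] = board[from];   // overwrites (captures) anything already on `to`
  delete board[from];
}

movePiece("e2", "e4"); // 1. e4
```

With the table markup the answer proposes, `movePiece` would do the same thing in the DOM: move the piece's text node (or `span`) from one `td` to the other.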
{ "domain": "codereview.stackexchange", "id": 13887, "tags": "javascript, html, css, chess" }
Oxidation of conjugated dienes into carboxylic acids by potassium permanganate
Question: In the book - MS Chouhan; Advanced Problems In Organic Chemistry; 11th ed, Chapter Alkenes Q32 - the following reaction is given: The given solution says that: Conjugated dienes on oxidation by $\ce{KMnO4/\Delta}$ give oxalic acid. However, the mechanism for this reaction is not given. Also, the formation of a dicarboxylic acid on the parent ring is not explained. I am also unsure what the products would be in a simpler case like buta-1,3-diene. What products will it give? I checked the internet but could not find a mechanism for the same. The related question of oxidation of arene substituents by $\ce{KMnO4}$ is not a duplicate. What is the mechanism for this oxidation? Answer: Here is a relevant extract from a PDF (Dr. P.G. Hultin, 28 July 2016) depicting oxidative cleavage. For the compound in hand (your question) the process will be done twice. $\ce{H2O2}$ will serve as the source of $\ce{H3O+}$. When alkenes are treated with $\ce{KMnO4}$ in acidic solutions, the diol is not formed. Instead, the alkene is cleaved. The reaction proceeds by the same mechanism at the start, forming a cyclic manganate ester (although since the reaction is under acidic conditions the structure is protonated). In the schemes below, the alkene carbons are highlighted throughout, so you can see where they end up in the product. Now, under these conditions the manganate ester is not very stable and it undergoes a cyclic fragmentation process, which results in breaking the C-C bond between the two oxygens. Notice that in this case, since there was a hydrogen atom attached to each of the alkene carbons in the starting material, there is a hydrogen attached to the carbonyl carbon in the product and therefore the product that is initially formed is an aldehyde. Aldehydes are very easily oxidized to carboxylic acids, and thus the aldehydes formed in the cleavage reaction do not survive.
They are rapidly transformed into carboxylic acid groups, by a complex reaction whose mechanism you need not worry about. Now, if the alkene had not had any hydrogens attached, the product in that case would have been a ketone rather than an aldehyde. Ketones are not easily oxidized further, and the reaction would have stopped at that stage. If one of the alkene carbons had a hydrogen substituent, while the other did not, then we would get both acid and ketone groups in our product, as shown below.
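On the buta-1,3-diene sub-question, the cleavage logic above can be pushed through explicitly. This is a sketch inferred from the answer's mechanism, not something the source states: cleaving both C=C bonds frees each terminal =CH2 as a one-carbon fragment (over-oxidized to CO2 under hot acidic permanganate, via formaldehyde/formic acid), while the central C2-C3 unit survives as glyoxal, which is then oxidized on to oxalic acid.

```latex
% Net outcome for buta-1,3-diene under hot acidic KMnO4 (unbalanced):
% terminal carbons are lost as CO2, the central C2-C3 pair ends up as
% oxalic acid (glyoxal is a transient intermediate).
\ce{CH2=CH-CH=CH2 ->[\ce{KMnO4}, \ce{H+}, \Delta] HOOC-COOH + 2 CO2}
```

This reading is consistent with the quoted solution's claim that conjugated dienes give oxalic acid on oxidation by $\ce{KMnO4/\Delta}$.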
{ "domain": "chemistry.stackexchange", "id": 9842, "tags": "organic-chemistry, reaction-mechanism, redox" }
Advantage of using a polygonal mirror with larger number of faces in Michelson method of measuring the speed of light and its value
Question: The following image is from Concepts of Physics by Dr. H.C.Verma, from the chapter "Speed of Light", page 447, topic "Michelson Method": For higher image resolution click here. The following text is from the "Science Hero" article - Michelson’s Method for Determining Velocity of Light, under the topic "Disadvantages of Michelson’s method": At high speeds [angular speed of the rotating mirror], the rotating mirror may break. But speed can be reduced by increasing the number of faces of the mirror. I can understand that when the number of faces in the rotating mirror is increased, the clear image of the source could be seen at lower angular speeds since the angle by which the mirror needs to rotate for the next face to take the position of the adjacent face is decreased. The speed of light as measured by this method is given by $$c=\frac{D\omega N}{2\pi}$$ where $D$ is the distance travelled by the light between reflections from the polygonal mirror, $\omega$ is the minimum possible angular speed of rotation of the mirror when the image becomes steady and $N$ is the number of faces in the polygonal mirror. As $c$ is a constant, the product $\omega N$ is also constant. So, it can be seen that when we increase the number of faces the rotating mirror, the clear image could be obtained at lesser angular speeds. Now as $N$ gradually approaches infinity, i.e., the polygonal mirror becomes a cylindrical mirror, the angular speed $\omega$ tends to zero. So I think there must be a highest possible value for $N$ which gives the most benefit. What is its value, and what is the reason for this choice? Are there any other advantages of using a larger number of faces in the rotating mirror besides the one discussed in the question? 
Related question asked by me: Number of reflecting surfaces in the rotating mirror in the Michelson method of determination of speed of light I think Michelson method of determining the speed of light is different from the Michelson Morley experiment. So, I had to use the query michelson speed of light -morley as my initial results were populated with the second experiment which has a similar name. This method of determination of speed of light is briefly discussed here and here. Answer: The purpose of having a relatively large number N of mirrors on the polygon is to increase the switching rate for a given rotational speed. This allows the distance to the retroreflecting mirror to be short enough to be practical. An important factor is that the beam size needs to be large enough to ensure that a significant fraction of the beam will reach the distant retroreflector. As you probably know, a light beam spreads faster when its waist diameter is small. This means the mirrors need to be relatively large, depending on how far away the retroreflector is. In turn, this means that N must be relatively small for a fixed-size wheel.
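The trade-off in $c = D\omega N/2\pi$ can be seen numerically. The sketch below uses illustrative values only, chosen to be roughly in the range of Michelson's 1926 Mt. Wilson setup; they are assumptions for the example, not figures from the question.

```javascript
// From c = D * omega * N / (2*pi), the minimum rotation rate for a steady
// image is omega = 2*pi*c / (D*N). All numbers here are illustrative.
const c = 2.998e8;       // speed of light, m/s
const D = 2 * 35.4e3;    // round-trip light path, m (assumed baseline)
const N = 8;             // faces on the polygonal mirror

const omega = (2 * Math.PI * c) / (D * N); // rad/s
const revPerSec = omega / (2 * Math.PI);   // roughly 530 rev/s for these values

// Since omega * N is fixed by c and D, doubling N halves the required
// rotation rate -- the mechanical advantage discussed in the answer.
const omega16 = (2 * Math.PI * c) / (D * 16);
```

Note how a modest N already keeps the rotation rate in the hundreds of revolutions per second for a path tens of kilometres long; with a single mirror (N = 1) the same distance would demand an eightfold faster, more fragile wheel.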
{ "domain": "physics.stackexchange", "id": 63828, "tags": "optics, speed-of-light, reflection, geometric-optics" }
Will electricity follow a path of least resistance on a rotating conductor?
Question: I would like to know if electricity will follow a path of least resistance on a rotating conductor. Please reference the drawing below. This is showing a rotating non-conductive disc with a strip of copper around its perimeter and there are two brushes (blue squares) which the DC electrical current is flowing through. Say that this disc is 4 inches in diameter and the copper strip is moving at a rate of 130 fps. Would electricity take the path indicated in the drawing since the free electrons would be moving with the atoms in the copper strip in contrast to the other available path, in which the free electrons would have to flow against the oncoming atoms in the copper strip? Also, as a secondary question concerning this setup, would centrifugal force acting on the copper atoms and on the free electrons have any effect on the flow of electricity? Answer: Let's not talk about the direction of conventional current. The negative terminal has a habit of repelling away electrons due to the electric field there. You can imagine that just after a while there will be slightly higher electron density at the negative terminal and a slightly lower electron density at the positive terminal due to the combined effect of the electric field at the terminals and rotation. However that isn't something that keeps happening otherwise you'd end up with loads of charges at the terminals. Also charges hate very much being stuffed up together. I think that pile-up process will continue for a period of time (however small that interval may be) until the repulsive forces are strong enough to drive the electrons against the rotation. Moreover the electrons moving in the same direction might speed up (for a while) but that isn't going to increase the current because the external wires offer finite resistance to the flow of current.
{ "domain": "physics.stackexchange", "id": 57570, "tags": "forces, electricity, electric-circuits, electric-current" }
How can the universe die if all energy is conserved?
Question: So according to physics, because of the expansion of the universe, there will be a time in the future, where all that exists (maybe...quantum fields) will be so far away from each other that it will be impossible for them to communicate with each other, interact and exchange energies. But those "individual pockets" of energy will still be there, because energy is conserved. Question #1 So how can we talk about a "dead" universe in the future when the same amount of energy that exists now will also exist and be "floating around" this so-called "dead" universe in the future ? I guess we could say that the same amount of energy will still be there, but because the carriers of that energy won't be able to communicate with each other it will be some sort of "passive" energy that never interacts with/changes into anything else, so for lack of a better term, we could call that "passive" state of energy a "dead" state. But because of the law of conservation of energy, the universe cannot continue to expand indefinitely. At some point it must run out of "expansion" energy because there is only a certain amount of energy in the universe and we know that part of it is already used for stuff other than "expansion" so the universe cannot create new "expansion" energy, ex nihilo, ad infinitum. Question #2 And if the universe were to stop expanding wouldn't those "pockets of energy" floating around eventually start finding each other once again and start interacting with each other once again, in which case we wouldn't be talking about a "dead" universe anymore ? Answer: First, the “dead” terminology is just a label, so don’t take it too seriously. It is fine to call a universe where nothing interesting ever happens “dead”, even if it still contains energy and particles. There is no implication that “dead” means 0 energy. Second, the energy of the universe is not even well defined, let alone conserved. 
While energy conservation applies locally everywhere in the universe, globally there is no unique way to add up the energy here and there to get a total energy for the whole universe. This means that the argument that “the universe cannot create new expansion energy, ex nihilo, ad infinitum” is not justified. That total energy cannot even be computed, let alone compared to past or future values. However, you can discuss the energy density of the universe. The cosmological constant means that there is a fixed energy density due to spacetime itself. A constant energy density over an expanding volume could be interpreted as an increasing total amount of energy (although doing so runs into the same definitional problems mentioned above).
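The last point, a constant density over a growing volume, can be made concrete with a toy calculation. The density figure and fiducial volume below are rough, assumed numbers for illustration, not values from the answer.

```javascript
// Toy sketch: vacuum (cosmological-constant) energy density stays fixed
// while volume scales as the cube of the scale factor, so a naive "total"
// grows as a^3. rhoLambda ~ 5e-10 J/m^3 is an order-of-magnitude guess.
const rhoLambda = 5e-10; // J/m^3, treated as constant in time

function naiveTotalEnergy(volumeNow, scaleFactor) {
  return rhoLambda * volumeNow * Math.pow(scaleFactor, 3);
}

const E1 = naiveTotalEnergy(1e78, 1); // some fiducial volume at a = 1
const E2 = naiveTotalEnergy(1e78, 2); // after the scale factor doubles
// E2 is 8x E1 with no identifiable source -- illustrating why a global
// conservation law cannot simply be assumed for the expanding universe.
```

The point of the sketch is not the numbers but the ratio: the "total" bookkeeping fails exactly as the answer says, which is why only the density is a well-behaved quantity.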
{ "domain": "physics.stackexchange", "id": 92089, "tags": "energy-conservation, universe" }
Alternative type energy storage to batteries?
Question: When designing portable devices it is usually the battery that stores its energy, because it has a high energy density that allows even small batteries to run up to a year, given appropriate circuit design. In mechanical engineering, I am now making a project that is somewhat like a portable mechanical FSM, and it requires a power source that can last for a reasonable amount of time. We are using mechanical mechanisms because it can survive many events such as heavy rain and underwater usage unlike electronics. We considered using a conventional battery for this case, but it would defeat the purpose to this design, and it would be best not to have to build a waterproofing/somethingproofing shield. Is there an alternative type of energy storage, suitable for these cases, to batteries? (e.g. Springs, but it may break way before getting to the one-year rating.) Answer: You can use a flywheel. When spinning in a vacuum, with magnetic bearings, flywheels lose very little energy over time. While they are probably best for large-scale applications, e.g. as backup for other power sources like solar energy, they have even been proposed as a power source for cars - although I assume there could be serious problems caused by the gyroscopic effect of the flywheel. There was a vehicle called the Gyrobus that used a 3 ton flywheel for energy storage. I do not know of any commercially available flywheel energy-storage systems, so you may have to build your own.
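For a sense of scale, the energy stored in a disc flywheel is $E = \frac{1}{2}I\omega^2$ with $I = \frac{1}{2}mr^2$ for a uniform solid disc. The sketch below plugs in rough, Gyrobus-like numbers; they are assumptions for illustration, not the vehicle's actual specifications.

```javascript
// E = 0.5 * I * omega^2, with I = 0.5 * m * r^2 for a uniform solid disc.
// All values below are assumed, order-of-magnitude inputs.
const m = 1500;                          // flywheel mass, kg
const r = 0.8;                           // flywheel radius, m
const rpm = 3000;                        // rotation rate
const omega = (rpm * 2 * Math.PI) / 60;  // rad/s

const I = 0.5 * m * r * r;               // moment of inertia, kg m^2
const E = 0.5 * I * omega * omega;       // stored energy, joules
const kWh = E / 3.6e6;                   // a few kWh for these inputs
```

A few kWh is enough to move a bus between charging stops, which is why the Gyrobus concept worked at all; the catch, as the answer notes, is keeping losses low (vacuum, magnetic bearings) and managing the gyroscopic reaction.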
{ "domain": "physics.stackexchange", "id": 18008, "tags": "energy, potential-energy, design" }
Counting the number of instructions in an instruction set
Question: An imaginary processor has the following hardware specification: 8-bit data bus 12-bit address bus 32 × 8-bit general purpose registers e.g. S0 – S1F Briefly describe what bit fields are required within an instruction to encode the following functionality: 56 different instructions Register addressing e.g. ADD S0, S1, add the contents of register S1 to register S0, store the result in register S0. Immediate addressing e.g. ADDI S0, 10, add the constant 10 (base 16) to register S0, store the result in register S0. Absolute addressing e.g. ADDA S0, 100, add the data stored in external memory address 100 (base 16) to register S0, store the result in register S0. If the processor uses a fixed-length instruction format, briefly describe how many bits are required to represent an instruction and the bit fields used. For (1), I know it's $\log_2 56$, rounded up to 6 bits, but for (2), I know the answer is 6 bits + 5 bits + 5 bits, but I can't figure out why. Answer: An "add" instruction (e.g., ADD S0 S1B) must have 3 parts of information: which instruction to do ("ADD"), which is the input register (S0), and which is the output register (S1B). How many bits does each part take? Well, you correctly answered that the first part is 6 bits. Can you see why the second and third parts take 5 bits each? For the other parts of the question, try to split the information the instruction must have, and analyze the number of bits each part takes, under the definitions of the specific machine in use.
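The bit-field arithmetic in the answer can be checked mechanically. This sketch derives each field width from the specification in the question (56 opcodes, 32 registers, 8-bit data bus, 12-bit address bus); the helper name `bitsFor` is invented for the example.

```javascript
// Bits needed to distinguish k alternatives: ceil(log2(k)).
const bitsFor = (k) => Math.ceil(Math.log2(k));

const opcodeBits = bitsFor(56); // 6, since 2^5 = 32 < 56 <= 64 = 2^6
const regBits = bitsFor(32);    // 5, for registers S0..S1F
const immBits = 8;              // immediate operand = one 8-bit data-bus word
const addrBits = 12;            // absolute operand = 12-bit address bus

// Widths of the three example instruction formats:
const addWidth = opcodeBits + regBits + regBits;   // ADD  S0, S1  -> 16 bits
const addiWidth = opcodeBits + regBits + immBits;  // ADDI S0, 10  -> 19 bits
const addaWidth = opcodeBits + regBits + addrBits; // ADDA S0, 100 -> 23 bits

// A fixed-length format must accommodate the widest case, so at least
// 23 bits (in practice padded to a convenient size such as 24).
```

This is where the "6 bits + 5 bits + 5 bits" in the question comes from: ADD S0, S1 names one of 56 opcodes (6 bits) and two of 32 registers (5 bits each).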
{ "domain": "cs.stackexchange", "id": 2945, "tags": "computer-architecture" }