Why does a rubber ball bounce back while an iron ball doesn't?
Question: Suppose there are two balls, one of rubber and the other metallic. They are of the same mass and are thrown at a wall with the same velocity. Why does the rubber ball bounce back while the metallic ball simply falls down after striking the wall? I know it has something to do with the change in linear momentum and its elasticity, but what? Answer: If you use a plate glass window instead of a wall you'll find that the rubber and iron balls bounce by a similar amount (though be careful throwing iron balls at windows :-). It's a basic principle in physics that energy cannot be lost. The rubber ball starts off with kinetic energy, hits the wall, and rebounds moving with about the same kinetic energy, so no energy is lost. If the iron ball doesn't bounce, it must mean that the energy it originally had has been transferred to the wall. Rubber balls are soft, so they decelerate relatively slowly and they deform and spread out as they hit the wall. This means that the pressure they exert on the wall while they are bouncing is relatively low. By contrast, an iron ball is very hard, so it stops very suddenly and all the force it exerts on the wall is concentrated on a small area. That means the pressure is high enough to damage the wall. It might cause a visible dent, or it might just cause cracks within the wall that you can't see. In both cases energy is used in damaging the wall, and this energy comes from the motion of the ball. That means little energy is left for the iron ball to bounce back. I started by saying the iron ball would bounce off plate glass. This is because plate glass is very rigid, and provided you don't shatter it, the glass is not damaged by the iron ball. Since no energy is absorbed by the glass, the iron ball bounces back just as the rubber ball does.
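The energy bookkeeping above can be made quantitative: if the wall absorbs a fraction f of the ball's kinetic energy, the rebound speed follows from ½mv'² = (1 − f)·½mv². A small Python sketch, where the absorbed fractions are illustrative guesses, not measured values:

```python
import math

def rebound_speed(v_in, absorbed_fraction):
    """Rebound speed when a fraction of the kinetic energy
    is spent deforming or damaging the wall."""
    return v_in * math.sqrt(1.0 - absorbed_fraction)

# Rubber ball: little energy absorbed, so it rebounds nearly as fast.
print(rebound_speed(10.0, 0.05))  # ~9.75 m/s
# Iron ball on a brick wall: most energy goes into damaging the wall.
print(rebound_speed(10.0, 0.95))  # ~2.24 m/s
```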
{ "domain": "physics.stackexchange", "id": 43793, "tags": "forces, momentum, elasticity" }
Verifying that tasks are really async with AsyncDetector
Question: Running tasks asynchronously can sometimes be tricky and no matter how careful I am, I sometimes forget some crucial part and my tasks run synchronously. I don't usually notice that until it's too late and performance problems arise because of large amounts of data not being processed in parallel/async. It's also difficult to write tests for it. I thought maybe there is a way to detect if tasks are running asynchronously? My idea was to create the AsyncDetector. It works by running an internal Stopwatch and tracking two timestamps per action: start & stop. The BeginScope method returns an IDisposable scope that, when disposed, adds both timestamps to an internal ConcurrentBag. Later, when I want to check if tasks were really running async, I group all async-scopes by their time intervals and check if any of them overlap. If they do, then I assume they were running at the same time (at least for a moment). class AsyncDetector { private static readonly IEqualityComparer<(TimeSpan Start, TimeSpan End)> AsyncScopeComparer; private readonly ConcurrentBag<(TimeSpan Start, TimeSpan End)> _runtimes = new ConcurrentBag<(TimeSpan Start, TimeSpan End)>(); private readonly Stopwatch _stopwatch = Stopwatch.StartNew(); static AsyncDetector() { AsyncScopeComparer = AdHocEqualityComparer<(TimeSpan Start, TimeSpan End)>.CreateWithoutHashCode((left, right) => { var a = left.Start.Ticks; var b = left.End.Ticks; var c = right.Start.Ticks; var d = right.End.Ticks; return (a <= c && c <= b) || (a <= d && d <= b); }); } public int MaxAsyncDegree { get { return _runtimes .GroupBy(t => t, AsyncScopeComparer) .Select(t => t.Count()) .Max(); } } public IEnumerable<int> AllAsyncDegrees { get { return _runtimes .GroupBy(t => t, AsyncScopeComparer) .Select(t => t.Count()); } } public int AsyncGroupCount { get { return _runtimes .GroupBy(t => t, AsyncScopeComparer).Count(); } } public IDisposable BeignScope() { return new AsyncScope(this); } private object ToDump() => new { 
MaxAsyncDegree, AsyncGroupCount }; private class AsyncScope : IDisposable { private readonly TimeSpan _start; private readonly AsyncDetector _asyncDetector; public AsyncScope(AsyncDetector asyncDetector) { _asyncDetector = asyncDetector; _start = _asyncDetector._stopwatch.Elapsed; } public void Dispose() { _asyncDetector._runtimes.Add((_start, _asyncDetector._stopwatch.Elapsed)); } } } It calculates the overlap by checking the endpoints of each time interval: a-----b - task1 c-------d - task2 In case someone wanted to run it, I add the AdHocEqualityComparer that the above class is using: public class AdHocEqualityComparer<T> : IEqualityComparer<T> { private readonly Func<T, T, bool> _equals; private readonly Func<T, int> _getHashCode; private AdHocEqualityComparer(Func<T, T, bool> equals, Func<T, int> getHashCode) { _equals = equals; _getHashCode = getHashCode; } public static IEqualityComparer<T> CreateWithoutHashCode([NotNull] Func<T, T, bool> equals) { if (equals == null) throw new ArgumentNullException(nameof(equals)); return Create(equals, _ => 0); } public static IEqualityComparer<T> Create([NotNull] Func<T, T, bool> equals, [NotNull] Func<T, int> getHashCode) { if (equals == null) throw new ArgumentNullException(nameof(equals)); if (getHashCode == null) throw new ArgumentNullException(nameof(getHashCode)); return new AdHocEqualityComparer<T>(equals, getHashCode); } public bool Equals(T x, T y) { if (ReferenceEquals(null, x)) return false; if (ReferenceEquals(null, y)) return false; if (ReferenceEquals(x, y)) return true; return _equals(x, y); } public int GetHashCode(T obj) => _getHashCode(obj); } I had another implementation before the AsyncDetector that worked with thread ids, but it wasn't reliable when working with async-only code. 
I post it for reference: class ParallelityDetector { private readonly object _syncLock; private readonly ObservableCollection<int> _threads; private int _maxThreads = 1; public ParallelityDetector() { _syncLock = new object(); _threads = new ObservableCollection<int>(); _threads.CollectionChanged += (sender, e) => { switch (e.Action) { case NotifyCollectionChangedAction.Add: _maxThreads = Math.Max(_maxThreads, _threads.Distinct().Count()); break; } }; } public int MaxThreadCount => _maxThreads; public void Beign() { lock (_syncLock) _threads.Add(Thread.CurrentThread.ManagedThreadId); } public void End() { lock (_syncLock) _threads.RemoveAt(0); } } Example The test code runs four different loops: Parallel.ForEach and three different styles of Task.WaitAll: one without async (this one runs sequentially); one with async and without limitations; one with async but using a SemaphoreSlim to limit the degree of parallelism. The complete test code: void Main() { var count = 10; var delay = 500; // in milliseconds TestParallelForeach(count, delay); TestWaitAllWithoutAsync(count, delay); TestWaitAllWithAsync(count, delay); TestWaitAllWithAsyncAndSemaphoreSlim(count, delay); } private static void TestParallelForeach(int count, int delay) { var asyncDetector = new AsyncDetector(); Parallel.ForEach(Enumerable.Range(0, count), i => { using (asyncDetector.BeignScope()) { Thread.Sleep(delay); PrintThreadId(i); } }); asyncDetector.Dump(nameof(TestParallelForeach)); } private static void TestWaitAllWithoutAsync(int count, int delay) { var asyncDetector = new AsyncDetector(); var tasks = Enumerable.Range(0, count).Select(i => { using (asyncDetector.BeignScope()) { Thread.Sleep(delay); PrintThreadId(i); } return Task.CompletedTask; }); Task.WaitAll(tasks.ToArray()); asyncDetector.Dump(nameof(TestWaitAllWithoutAsync)); } private static void TestWaitAllWithAsync(int count, int delay) { var asyncDetector = new AsyncDetector(); var tasks = Enumerable.Range(0, count).Select(i => Task.Run(async 
() => { using (asyncDetector.BeignScope()) { await Task.Delay(delay); PrintThreadId(i); } return Task.CompletedTask; })); Task.WaitAll(tasks.ToArray()); asyncDetector.Dump(nameof(TestWaitAllWithAsync)); } private static void TestWaitAllWithAsyncAndSemaphoreSlim(int count, int delay) { var asyncDetector = new AsyncDetector(); var semaphore = new SemaphoreSlim(Environment.ProcessorCount); var tasks = Enumerable.Range(0, count).Select(i => Task.Run(async () => { await semaphore.WaitAsync(); using (asyncDetector.BeignScope()) { await Task.Delay(delay); PrintThreadId(i); } semaphore.Release(); })); Task.WaitAll(tasks.ToArray()); asyncDetector.Dump(nameof(TestWaitAllWithAsyncAndSemaphoreSlim)); } private static void PrintThreadId(int item) { Console.WriteLine($"{item} [{Thread.CurrentThread.ManagedThreadId}]"); } Results As the output shows, all cases have been correctly recognized by the AsyncDetector: 0 [10] 2 [13] 3 [12] 1 [11] 4 [7] 8 [13] 5 [11] 6 [10] 7 [12] 9 [7] TestParallelForeach MaxAsyncDegree 5 AsyncGroupCount 3 --- 0 [12] 1 [12] 2 [12] 3 [12] 4 [12] 5 [12] 6 [12] 7 [12] 8 [12] 9 [12] TestWaitAllWithoutAsync MaxAsyncDegree 1 AsyncGroupCount 10 --- 9 [10] 6 [10] 5 [10] 4 [10] 3 [10] 2 [10] 1 [10] 0 [10] 8 [7] 7 [11] TestWaitAllWithAsync MaxAsyncDegree 10 AsyncGroupCount 1 --- 2 [11] 0 [7] 1 [13] 3 [10] 6 [13] 4 [11] 5 [10] 7 [7] 8 [13] 9 [7] TestWaitAllWithAsyncAndSemaphoreSlim MaxAsyncDegree 4 AsyncGroupCount 4 I'm not a thread/async expert, so this implementation might not be the best one, but can you think of anything better that would have the smallest performance/synchronization hit? I'd like this test to be as invisible as possible. Answer: What about the name RelayComparer? 
It sounds more dotnetish :) It could be like this - 17 lines instead of 36 - and could be way shorter if C# were reasonable :) class RelayComparer<T> : IEqualityComparer<T> { public RelayComparer(Func<T, T, bool> equals) : this(equals, _ => 0) { } public RelayComparer(Func<T, T, bool> equals, Func<T, int> getHashCode) { _equals = equals ?? throw new ArgumentNullException(nameof(equals)); _getHashCode = getHashCode ?? throw new ArgumentNullException(nameof(getHashCode)); } readonly Func<T, T, bool> _equals; readonly Func<T, int> _getHashCode; public bool Equals(T x, T y) => _equals(x, y); public int GetHashCode(T obj) => _getHashCode(obj); } Where AsyncScope might have a disposing delegate injected to get rid of the bidirectional dependency on AsyncDetector: class AsyncScope : IDisposable { public static readonly IEqualityComparer<AsyncScope> OverlappingComparer = new RelayComparer<AsyncScope>((AsyncScope left, AsyncScope right) => { var a = left.Start.Ticks; var b = left.End.Ticks; var c = right.Start.Ticks; var d = right.End.Ticks; return (a <= c && c <= b) || (a <= d && d <= b); }); public AsyncScope(IStopwatch stopwatch, Action<AsyncScope> dispose) { _stopwatch = stopwatch; _dispose = dispose; Start = _stopwatch.Elapsed; } public void Dispose() { End = _stopwatch.Elapsed; _dispose(this); } readonly IStopwatch _stopwatch; readonly Action<AsyncScope> _dispose; public TimeSpan Start { get; } public TimeSpan End { get; private set; } } And the AsyncDetector could be half as long – note the testability: public class AsyncDetector { public AsyncDetector() : this(new SystemStopwatch()) { } public AsyncDetector(IStopwatch stopwatch) { Runtimes = new ConcurrentBag<AsyncScope>(); Stopwatch = stopwatch ?? 
throw new ArgumentNullException(nameof(stopwatch)); Stopwatch.Start(); } ConcurrentBag<AsyncScope> Runtimes { get; } IStopwatch Stopwatch { get; } public IDisposable BeignScope() => new AsyncScope(Stopwatch, s => Runtimes.Add(s)); public int MaxAsyncDegree => Runtimes .GroupBy(t => t, AsyncScope.OverlappingComparer) .Select(t => t.Count()) .Max(); public IEnumerable<int> AllAsyncDegrees => Runtimes .GroupBy(t => t, AsyncScope.OverlappingComparer) .Select(t => t.Count()); public int AsyncGroupCount => Runtimes .GroupBy(t => t, AsyncScope.OverlappingComparer) .Count(); } UPDATE I wish we could write the following: class RelayComparer<T> : IEqualityComparer<T> { public RelayComparer(Func<T, T, bool> equals, Func<T, int> getHashCode) { Equals = equals ?? throw new ArgumentNullException(nameof(equals)); GetHashCode = getHashCode ?? throw new ArgumentNullException(nameof(getHashCode)); } public bool Equals(T x, T y) { get; } public int GetHashCode(T obj) { get; } }
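One caveat with both versions: "overlaps" is not a transitive relation, so GroupBy with the overlapping comparer can split or merge groups depending on enumeration order. A sweep line over the recorded (start, end) pairs computes the maximum async degree directly without that problem; a minimal Python sketch of the algorithm (Python used only as a language-neutral illustration — note that, unlike the `<=` comparer above, touching endpoints count as non-overlapping here):

```python
def max_overlap_degree(intervals):
    """Sweep-line: maximum number of (start, end) intervals
    that are open at the same instant."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # interval opens
        events.append((end, -1))    # interval closes
    # close events sort before open events at equal timestamps
    events.sort(key=lambda e: (e[0], e[1]))
    degree = best = 0
    for _, delta in events:
        degree += delta
        best = max(best, degree)
    return best

# three scopes, two of which overlap in time
print(max_overlap_degree([(0, 5), (3, 8), (9, 12)]))  # 2
```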
{ "domain": "codereview.stackexchange", "id": 28217, "tags": "c#, performance, async-await, task-parallel-library" }
Splitting a FASTA file into smaller files based on header pattern
Question: I have to split this FASTA file into smaller files and write them out as individual files. My file: >lcl|CP000522.1_prot_ABO13860.1_1 [locus_tag=A1S_3471] [protein=hypothetical protein] [protein_id=ABO13860.1] [location=1..957] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13850.1_2 [locus_tag=A1S_3461] [protein=DNA replication protein] [protein_id=ABO13850.1] [location=950..1504] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13851.1_3 [locus_tag=A1S_3462] [protein=hypothetical protein] [protein_id=ABO13851.1] [location=complement(2523..3437)] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13852.1_4 [locus_tag=A1S_3463] [protein=YPPCP.09C-like protein] [protein_id=ABO13852.1] [location=3538..4788] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13853.1_5 [locus_tag=A1S_3464] [protein=Cro-like protein] [protein_id=ABO13853.1] [location=5039..5629] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13854.1_6 [locus_tag=A1S_3465] [protein=hypothetical protein] [protein_id=ABO13854.1] [location=complement(6340..6906)] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13855.1_7 [locus_tag=A1S_3466] [protein=Resolvase] [protein_id=ABO13855.1] [location=complement(7074..7685)] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13856.1_8 [locus_tag=A1S_3467] [protein=hypothetical protein] [protein_id=ABO13856.1] [location=complement(8602..9732)] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13857.1_9 [locus_tag=A1S_3468] [protein=putative lipoprotein] [protein_id=ABO13857.1] [location=complement(10072..10374)] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13858.1_10 [locus_tag=A1S_3469] [protein=Diaminopimelate decarboxylase] [protein_id=ABO13858.1] [location=complement(10367..10723)] [gbkey=CDS] >lcl|CP000522.1_prot_ABO13859.1_11 [locus_tag=A1S_3470] [protein=regulatory protein LysR] [protein_id=ABO13859.1] [location=complement(12076..12444)] [gbkey=CDS] The other pattern is >lcl|CP000523.1_prot_ABO13861.1_1 [locus_tag=A1S_3472] [protein=DNA replication protein] [protein_id=ABO13861.1] [location=1..951] [gbkey=CDS] >lcl|CP000523.1_prot_ABO13862.1_2 [locus_tag=A1S_3473] 
[protein=hypothetical protein] [protein_id=ABO13862.1] [location=3048..4262] [gbkey=CDS] >lcl|CP000523.1_prot_ABO13863.1_3 [locus_tag=A1S_3474] [protein=hypothetical protein] [protein_id=ABO13863.1] [location=4357..5133] [gbkey=CDS] >lcl|CP000523.1_prot_ABO13864.1_4 [locus_tag=A1S_3475] [protein=hypothetical protein] [protein_id=ABO13864.1] [location=6197..8608] [gbkey=CDS] >lcl|CP000523.1_prot_ABO13865.1_5 [locus_tag=A1S_3476] [protein=secretory lipase] [protein_id=ABO13865.1] [location=8705..9403] [gbkey=CDS] So now my question is: how do I parse these and write them into individual files, such that the CP000522 output is written to one file, CP000523 to another, and so forth? So far, what I understand is that I have to match the pattern after >lcl, but there are other patterns like "LN997847" in the file. I'm not sure how to proceed. I tried it in R but failed; it can be done with awk and sed, which I tried, but I can't define something that parses all the headers, taking into account CP as well as LN. Any suggestion or help would be highly appreciated. 
here is my file Answer: Here's a simple awk approach: awk '{if(/^>/){split($1,a,"[|.]")}print >> a[2]".fa"}' Protein_FASTA.txt Or, more concisely, just: awk '/^>/{split($1,a,"[|.]")}{print >> a[2]".fa"}' Protein_FASTA.txt When run on the file linked to in your question, that results in the following files: $ ls AP014650.fa CP003848.fa CP007713.fa CP012005.fa CP015122.fa CP017645.fa CP018422.fa CP020594.fa CP023021.fa CP024577.fa CP026712.fa CP027245.fa CP030108.fa CP000522.fa CP003850.fa CP007714.fa CP012007.fa CP015365.fa CP017647.fa CP018678.fa CP020596.fa CP023023.fa CP024578.fa CP026748.fa CP027529.fa CP030109.fa CP000523.fa CP003887.fa CP008707.fa CP012008.fa CP015366.fa CP017649.fa CP018679.fa CP021322.fa CP023024.fa CP025267.fa CP026749.fa CP027531.fa CU459137.fa CP000864.fa CP003888.fa CP008708.fa CP012953.fa CP015484.fa CP017651.fa CP019218.fa CP021327.fa CP023025.fa CP026126.fa CP027121.fa CP027532.fa CU459138.fa CP000865.fa CP003907.fa CP008709.fa CP012954.fa CP015485.fa CP017653.fa CP020573.fa CP021348.fa CP023027.fa CP026127.fa CP027122.fa CP027608.fa CU459139.fa CP001183.fa CP003908.fa CP008850.fa CP012955.fa CP015486.fa CP017655.fa CP020575.fa CP021783.fa CP023028.fa CP026128.fa CP027124.fa CP027609.fa CU459140.fa CP001922.fa CP003968.fa CP008851.fa CP012956.fa CP016296.fa CP017657.fa CP020576.fa CP021784.fa CP023030.fa CP026129.fa CP027179.fa CP027610.fa JN377410.fa CP001923.fa CP004359.fa CP010398.fa CP013925.fa CP016297.fa CP018144.fa CP020577.fa CP021785.fa CP023032.fa CP026339.fa CP027180.fa CP029570.fa LN865144.fa CP001938.fa CP006769.fa CP010399.fa CP014216.fa CP016299.fa CP018255.fa CP020580.fa CP021786.fa CP023033.fa CP026340.fa CP027181.fa CP029571.fa LN997847.fa CP002523.fa CP007578.fa CP010400.fa CP014217.fa CP016301.fa CP018257.fa CP020585.fa CP021787.fa CP023035.fa CP026705.fa CP027182.fa CP029572.fa LT594096.fa CP002524.fa CP007579.fa CP010780.fa CP014292.fa CP016302.fa CP018333.fa CP020589.fa CP022284.fa CP024125.fa CP026706.fa 
CP027243.fa CP029573.fa Protein_FASTA.txt CP003501.fa CP007580.fa CP010782.fa CP014293.fa CP017643.fa CP018334.fa CP020593.fa CP022285.fa CP024419.fa CP026708.fa CP027244.fa CP030107.fa Explanation if(/^>/){split($1,a,"[|.]")} : if this line starts with a >, split the first field on any occurrence of either | or . and save the results in the array a. Since your header lines all start with >lcl|, followed by the string you are looking for and then a ., the second value in the a array will be your target string. print >> a[2]".fa" : print (append, >>) the current line to a file named after the sequence (a[2]) plus .fa. This is run for every line in your input file. Note that if you run the same command again, you will need to first delete the files created the first time. If you don't, because I am using >>, you will just append to the existing files.
{ "domain": "bioinformatics.stackexchange", "id": 916, "tags": "fasta, awk, shell" }
Does a charge accelerated by a magnetic field emit radiation?
Question: Let us assume a charged particle moving in a circle due to a constant magnetic field. Does this particle emit radiation? If so, where this energy comes from? Answer: Yes, it radiates. The energy of the radiation comes from the kinetic energy of the particle, which would decrease unless some kind of electric field causes the particle not to slow down. The force tending to slow down the particle is called “radiation reaction”. If the particle slows down, the trajectory is a spiral rather than a circle. For non-relativistic particles, this kind of radiation from circular motion is called cyclotron radiation. For relativistic particles, it is called synchrotron radiation. Calculating the characteristics of this radiation (total power, angular dependence, polarization, frequency dependence, etc.) is a common topic in a graduate-level electromagnetism course. The radiation is one of the reasons that a particle accelerator like the Large Hadron Collider uses a lot of electricity. The electric power is used to keep the particles from slowing down as they radiate.
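For the non-relativistic (cyclotron) case, the total radiated power is given by the Larmor formula, P = q²a²/(6πε₀c³), with a = qvB/m for circular motion in a field B. A quick numerical sketch for an electron — the field and speed are illustrative values, not from the question:

```python
import math

# SI constants
q = 1.602176634e-19      # electron charge, C
m = 9.1093837015e-31     # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

B = 1.0    # magnetic field, T (illustrative)
v = 1.0e6  # speed, m/s (non-relativistic)

a = q * v * B / m                               # centripetal acceleration
P = q**2 * a**2 / (6 * math.pi * eps0 * c**3)   # Larmor radiated power
print(P)  # ~1.8e-19 W per electron
```

Per particle the power is tiny, which is why the slow spiral is usually negligible outside accelerators, where enormous numbers of relativistic particles make the (much stronger) synchrotron losses significant.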
{ "domain": "physics.stackexchange", "id": 72582, "tags": "electromagnetism, electromagnetic-radiation" }
Where is the calmest place on Earth?
Question: I have done some research online, and I've found out that Antarctica has the calmest winds (lowest maximum wind speed) recorded on Earth. However, it is uninhabitable for human life. Other very calm areas are the doldrums, but they are over water. Therefore, I would like to know: What is the calmest place on Earth that is on or over land other than Antarctica? PS: I'm also pondering this question about balloon chains, so I'm also interested in the places with the calmest overall wind column all the way from the troposphere to the mesosphere. Answer: The main resistance that winds have to their movements comes from the topography and surface obstacles. Therefore, as a general rule, the closer to the surface, the less wind you will find. But I guess you are interested in the winds in areas clear of surface obstacles, otherwise the answer would be a cave or a dense forest somewhere. To figure out the calmest place on Earth wind-wise, we can use one of the global datasets that NASA put together (initially to help with solar and wind energy planning). One of them is a 10-year average (July 1983 - June 1993) of wind speed at 50 m above the surface, ignoring the effect of trees and other non-topographic obstacles, at a resolution of one degree (roughly 111 km or less). I just loaded that dataset and searched for the minimum (and maximum) values, this is how it looks: The minimum is not actually in Antarctica as your research suggested, but in the Amazon at longitude 68° west and latitude 0° (that's Brazil near the border with Colombia), with a mean wind speed of 1.55 m/s (5.6 km/h). This is an enlarged version of the most relevant area overlaid with a world map for easier interpretation: In line with what was stated at the beginning, the maximum happens in an area where the lack of landmasses allows the wind to blow unimpeded around the world. And it happens at 60° W, 52° S, reaching a mean wind speed of 12.62 m/s (45 km/h). 
Note that this analysis is valid for broad areas, and of course individual weather stations at specific locations can give different values. As for Antarctica, the winds there are mainly katabatic; therefore, at the top of the Antarctic domes the average wind speeds are very low. For example, the average at Dome C is 10.1 km/h, and 9.4 km/h at Dome F. However, those values are still bigger than what NASA's models suggest for the Amazon. I think the model I used here is reliable. As an exercise, you can look at all the images that show up in a Google search for "global mean wind speed": most of them show a similar pattern, where Antarctica doesn't seem to contain the calmest place. The model used to produce these datasets is a reanalysis, meaning it takes all the weather data of the past (weather stations and satellite data) and attempts to reconstruct exactly what the weather was over the whole world. A similar kind of model takes only past weather data and attempts to predict what the weather will be over the whole world. That's a much more difficult problem, prone to bigger errors, but it's what all the weather forecasts we see on TV are based upon. One of the best global forecast models is called GFS (the Global Forecast System), produced by NOAA's Environmental Modeling Center. To get a sense of how winds vary across locations and altitudes you can explore the GFS data using Cameron Beccario's amazing visualization tool. There you can see the predicted global wind patterns all the way from the surface to more than 26 km of altitude (here you can find a table with referential equivalences between pressure levels in hPa and altitudes). Although those are not multi-year averages, they are still very informative. 
Muze, the author of the question, pieced together a beautiful animated gif that shows what the above-mentioned visualization looks like for the elevation range of the jet streams, at about 10 km of altitude (250 hPa): Finally, I have to note that I've interpreted "calmest" as the minimum mean wind speed. However, it would also be sensible to consider it as the place with the lowest maximum wind speed, or some other metric that would perhaps change the picture described above. And maybe using such a metric one of the Antarctic domes could be the "calmest" place. But I won't extend the answer further with every possible interpretation of "calmest".
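The "load the dataset and search for the minimum" step is a one-liner once the climatology is in memory as a grid. A NumPy sketch of that step, using random placeholder data in place of the actual NASA wind dataset:

```python
import numpy as np

# Hypothetical 1-degree grid of mean wind speed (m/s), standing in
# for the NASA climatology: rows = latitude 90..-89, cols = lon -180..179.
rng = np.random.default_rng(0)
wind = rng.uniform(1.0, 13.0, size=(180, 360))

# Flat index of the smallest value, converted back to grid coordinates.
i, j = np.unravel_index(np.argmin(wind), wind.shape)
lat = 90 - i    # approximate cell latitude
lon = j - 180   # approximate cell longitude
print(wind[i, j], lat, lon)
```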
{ "domain": "earthscience.stackexchange", "id": 1370, "tags": "meteorology, atmosphere, geophysics, climate, geography" }
Prepare state $|00\rangle+|1+\rangle$ using Clifford gates and the T-gate
Question: I am looking for a quantum circuit which maps state $|00\rangle$ to $|\psi\rangle=\frac{1}{\sqrt{2}} |00\rangle+\frac{1}{\sqrt{2}}|1+\rangle$. The circuit should only apply quantum gates from the Clifford group (specifically, $CNOT$, $H$, $P$) and the $T$ gate: $$ CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \quad P = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix}, \quad T = \begin{bmatrix} 1 & 0 \\ 0 & e^{i \pi / 4} \end{bmatrix} $$ My Thoughts Because these gates are universal for quantum computation (as stated here), I know that a circuit which approximates $|\psi\rangle$ must exist. I am hoping that I can produce $|\psi\rangle$ exactly, but I was not able to find the corresponding circuit. I already figured out that the circuit needs to apply $T$ at least once, as $|\psi\rangle$ has no stabilizers from the Pauli group (determined by brute force), and any state produced by a Clifford circuit would have stabilizers from the Pauli group. Answer: The circuit is a Hadamard followed by a controlled-Hadamard gate. Note that the $S$ gate is the $P$ gate in your notation.
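The answer's circuit can be checked numerically: apply $H$ to the first qubit, then a controlled-$H$ with the first qubit as control. (The controlled-$H$ itself still has to be compiled into the allowed gate set, which is where the $T$ gate enters.) A small NumPy verification:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
P0 = np.diag([1.0, 0.0])  # |0><0|
P1 = np.diag([0.0, 1.0])  # |1><1|

# Controlled-H: apply H to qubit 2 only when qubit 1 is |1>
CH = np.kron(P0, I2) + np.kron(P1, H)

# Start in |00>, apply H on qubit 1, then the controlled-H
psi = CH @ np.kron(H, I2) @ np.array([1.0, 0.0, 0.0, 0.0])

# Expect (|00> + |1+>)/sqrt(2) = [1/sqrt(2), 0, 1/2, 1/2]
print(psi)
```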
{ "domain": "quantumcomputing.stackexchange", "id": 4783, "tags": "circuit-construction, clifford-group" }
Is there any way to use a syringe pump for pumping flow?
Question: I want to use a syringe pump to pump water at 0.82 ml/min for 30 min. Is that possible? (I read some papers that do this but don't know how.) Answer: This should be no problem when you use a syringe pump model with an adjustable flow rate and sufficient volume: 0.82 ml/min × 30 min ≈ 24.6 ml, so a syringe of at least 30 ml leaves some margin.
{ "domain": "chemistry.stackexchange", "id": 1070, "tags": "physical-chemistry, equipment" }
Alternative way of proving a language is in $NP$
Question: I would like to show a language $A$ is in $NP$. I can find a TM for $A$, but I wanted to try a different approach and see if it is correct. Let $B$ be a known problem in $NP$. If I show a polynomial-time reduction $A \leq B$, will this imply $A\in NP$? I believe it will, but I am not 100% sure. Answer: It depends on exactly what kind of reduction you're using. Assuming that you mean a polynomial-time many-one reduction (also called a "mapping reduction") then, yes, if $A$ reduces to $B$ and $B\in\mathrm{NP}$, then $A\in\mathrm{NP}$ as well. This was probably proven to you when you started to study reductions between $\mathrm{NP}$ problems. If you mean a polynomial-time Turing reduction, it's an open problem whether $A\in\mathrm{NP}$ follows. This corresponds to the question of whether $\mathrm{NP} = \mathrm{P^{NP}}$. If you don't know what kind of reduction you mean, you should check with your teacher, but it's probably many-one reductions. Those are the ones that are most commonly used with respect to problems in $\mathrm{NP}$.
{ "domain": "cs.stackexchange", "id": 9222, "tags": "reductions, np" }
My 'almostIncreasingSequence(sequence)' code is too slow for lists that have 10000+ elements. [CodeSignal]
Question: I asked this question on Stack Overflow as well, but I think it's best suited here because my code needs optimization instead of error checking (as I previously thought). I've made changes to my code as well, but the logic is pretty much the same: My code first checks the length of the provided sequence; if it is 2 or less it automatically returns True. Next, it creates a newlist with the first element removed and checks if the rest of the list is in ascending order. If the sequence is not in order, the iteration breaks and a newlist is generated again, this time with the next element removed. This continues until there are no more elements to remove (i.e. i == len(sequence) - 1), ultimately returning False. If in any of the iterations the list is found to be in ascending order (i.e. in_order remains True), the function returns True. def almostIncreasingSequence(sequence): # return True for lists with 2 or less elements. if len(sequence) <= 2: return True # Outerloop, removes i-th element from sequence at each iteration for i in range(len(sequence)): newlist = sequence[:i] + sequence[i+1:] # Innerloop, checks if the sequence is in ascending order j = 0 in_order = True while j < len(newlist) - 1: if newlist[j+1] <= newlist[j]: in_order = False break j += 1 if in_order == True: return True elif i == len(sequence)-1: return False I received a suggestion that I should only use one loop, but I cannot think of a way to implement that. Nested loops seem necessary because of the following assumptions: I have to remove every next element from the original sequence (outer loop). I need to check if all the elements are in order (inner loop). This is a brief on almostIncreasingSequence(). My code follows the logic provided in the answer here; it passes almost all of the tests as well, but it is too slow for larger lists (approx. 10000+ elements). 
Answer: Checking if a list is strictly increasing The inner loop checks if a list (in this case newList) is strictly increasing: j = 0 in_order = True while j < len(newlist) - 1: if newlist[j+1] <= newlist[j]: in_order = False break j += 1 if in_order == True: return True In general variable names should use underscores, so newlist becomes new_list. PEP 8. It can be simplified with a for-else: for j in range(len(new_list) - 1): if new_list[j+1] <= new_list[j]: break else: return True Or using all: if all(new_list[i] > new_list[i - 1] for i in range(1, len(new_list))): return True Optimization As you said, the solution is too slow for large input due to the inner loop that runs every time, making the overall complexity \$O(n^2)\$. The current solution follows this approach: For each element of the input list Build a new list without such element and check if strictly increasing Consider simplifying the approach like the following: Find the first pair that is not strictly increasing Build a new list without the first element of the pair and check if it is increasing Build a new list without the second element of the pair and check if it is increasing For example: def almostIncreasingSequence(sequence): def is_increasing(l): return all(l[i] > l[i - 1] for i in range(1, len(l))) if is_increasing(sequence): return True # Find non-increasing pair left, right = 0, 0 for i in range(len(sequence) - 1): if sequence[i] >= sequence[i + 1]: left, right = i, i + 1 break # Remove left element and check if it is strictly increasing if is_increasing(sequence[:left] + sequence[right:]): return True # Remove right element and check if it is strictly increasing if is_increasing(sequence[:right] + sequence[right + 1:]): return True return False I believe there should be an approach that doesn't build the two additional lists to reduce the space complexity, but I'll leave that to you.
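For reference, the optimized version from the answer can be exercised on the usual edge cases; this is the same function, restated in runnable form with a few checks:

```python
def almostIncreasingSequence(sequence):
    def is_increasing(l):
        return all(l[i] > l[i - 1] for i in range(1, len(l)))

    if is_increasing(sequence):
        return True

    # Find the first non-increasing pair.
    left, right = 0, 0
    for i in range(len(sequence) - 1):
        if sequence[i] >= sequence[i + 1]:
            left, right = i, i + 1
            break

    # Try dropping either element of the offending pair.
    return (is_increasing(sequence[:left] + sequence[right:]) or
            is_increasing(sequence[:right] + sequence[right + 1:]))

print(almostIncreasingSequence([1, 3, 2, 1]))   # False: needs two removals
print(almostIncreasingSequence([1, 3, 2]))      # True: drop the 3 (or the 2)
print(almostIncreasingSequence([10, 1, 2, 3]))  # True: drop the 10
print(almostIncreasingSequence([1, 1]))         # True: one removal suffices
```

Only the first offending pair ever needs to be patched, so the whole thing is a constant number of linear passes instead of one pass per removed element.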
{ "domain": "codereview.stackexchange", "id": 40974, "tags": "performance, python-3.x" }
Quantum field theory, interpretation of commutation relation
Question: Let $\phi$ be the quantum field $$ \phi(x) = \int \frac{d^3\mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2E_\mathbf{p}}} \Big[ b_\mathbf{p}e^{-ip\cdot x} + c_\mathbf{p}^\dagger e^{ip\cdot x} \Big] $$ with commutation relations $$ [b_\mathbf{p}, b_\mathbf{q}^\dagger] = (2\pi)^3\delta^{(3)}(\mathbf{p}-\mathbf{q}), $$ $$ [c_\mathbf{p}, c_\mathbf{q}^\dagger] = (2\pi)^3\delta^{(3)}(\mathbf{p}-\mathbf{q}), $$ all other commutators zero. Let $Q$ be the charge operator $$ Q = \int \frac{d^3\mathbf{p}}{(2\pi)^3} \Big[c_{\mathbf{p}}^\dagger c_{\mathbf{p}} - b_{\mathbf{p}}^\dagger b_{\mathbf{p}} \Big]. $$ We calculate the commutator $[Q,\phi] = \phi$. The question is what is an interpretation of this commutation relation? We know that $Q$ is the number of antiparticles minus the number of particles. Answer: One interpretation is as follows: $[Q,\phi(\vec{x})] = \phi(\vec{x})$ means that $Q\phi(\vec{x}) = \phi(\vec{x})(Q + 1)$. Thus, if $\vert q \rangle$ is a charge eigenstate with eigenvalue $q$ (i.e. $Q\vert q \rangle = q\vert q \rangle$) then $$ Q \phi(\vec{x}) \vert q \rangle = \phi(\vec{x}) (Q + 1) \vert q \rangle = (q + 1) \phi(\vec{x}) \vert q \rangle, $$ which means that $\phi(\vec{x}) \vert q \rangle$ is a charge eigenstate with eigenvalue $q + 1$. Thus, acting with $\phi(\vec{x})$ increases the charge by $1$. In fact, a common interpretation of the operator $\phi(\vec{x})$ is that it creates a particle at position $\vec{x}$ (being a kind of Fourier transform of the creation operator $c^\dagger_\vec{p}$, which creates a particle with momentum $\vec{p}$). So the commutation relation says that if you add a particle, the total charge will increase by $1$.
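The relation can also be verified numerically in a toy model: keep a single momentum mode, truncate each Fock space to $N$ levels, and take $\phi \to b + c^\dagger$ (dropping the mode integral and phase factors). The identities $[b^\dagger b, b] = -b$ and $[c^\dagger c, c^\dagger] = c^\dagger$ hold exactly even for truncated matrices, so the check comes out exact:

```python
import numpy as np

N = 6  # Fock-space truncation per mode
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # single-mode lowering operator
I = np.eye(N)

b = np.kron(a, I)      # particle annihilation (mode 1)
c = np.kron(I, a)      # antiparticle annihilation (mode 2)
Q = c.conj().T @ c - b.conj().T @ b   # antiparticles minus particles
phi = b + c.conj().T   # single-mode stand-in for the field

comm = Q @ phi - phi @ Q
print(np.allclose(comm, phi))  # [Q, phi] = phi
```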
{ "domain": "physics.stackexchange", "id": 63441, "tags": "homework-and-exercises, quantum-field-theory, charge, commutator, antimatter" }
How to calculate the weight between neurons in ANN?
Question: I am currently learning Supervised ANN training using Backpropagation and I am stuck in this exercise. I calculated the δA using the equation at the bottom of the screenshot; however, I am unable to calculate δB because the weight of BD is not given. How do I find the weight of BD? Answer: As far as I can tell, the information provided does not appear to be enough to answer your question, though I confess I am not familiar with what a "unipolar continuous threshold function" is, and it is not clear to me what $f(\text{net}_A)$ represents nor what the loss function for this task is. You might need to talk to your instructor or wherever you got that image from. You might ask them what a "-" over an edge means.
{ "domain": "cs.stackexchange", "id": 11952, "tags": "machine-learning, neural-networks" }
Determine the URL to use for a REST API based on configuration
Question: There is a bit of if-else-if going in within the code. Is there a construct which could come handy. Also the section names are very similar too public string GetRestApiUrlFromHost() { var restApiUrl = string.Empty; var HostUrl = Request.Host.Value; var DevURL = _configuration.GetSection("DEV_API_URL").Value; var QAURL = _configuration.GetSection("QA_API_URL").Value; var ProdURL = _configuration.GetSection("PROD_API_URL").Value; if (HostUrl.Contains(DevURL, StringComparison.CurrentCultureIgnoreCase)) { restApiUrl = _configuration.GetSection("DEV_REST_API_URL").Value; } else if (HostUrl.Contains(QAURL, StringComparison.CurrentCultureIgnoreCase)) { restApiUrl = _configuration.GetSection("QA_REST_API_URL").Value; } else if (HostUrl.Contains(ProdURL, StringComparison.CurrentCultureIgnoreCase)) { restApiUrl = _configuration.GetSection("PROD_REST_API_URL").Value; } else { restApiUrl = _configuration.GetSection("Local_REST_API_URL").Value; } return restApiUrl; } Looks redundant to me. But cannot place it. Which would be the most ideal way to refactor this code? Answer: You could create an array of tuples with url keys and section keys and loop them like this: (string urlKey, string sectionKey)[] keys = new [] { ("DEV_API_URL", "DEV_REST_API_URL"), ("QA_API_URL", "QA_REST_API_URL"), ("PROD_API_URL", "PROD_REST_API_URL") }; string sectionKey = "Local_REST_API_URL"; foreach (var key in keys) { if (HostUrl.Contains(key.urlKey, StringComparison.CurrentCultureIgnoreCase)) { sectionKey = key.sectionKey; break; } } return _configuration.GetSection(sectionKey).Value; Or with LINQ (with same tuple array). 
.NET 6: var key = keys.FirstOrDefault( k => HostUrl.Contains(k.urlKey, StringComparison.CurrentCultureIgnoreCase), (null, "Local_REST_API_URL") ); return _configuration.GetSection(key.sectionKey).Value; Uses the FirstOrDefault(IEnumerable, Func<TSource,Boolean>, TSource) extension method overload available since .NET 6. Framework versions prior to .NET 6: var key = keys .Where( k => HostUrl.Contains(k.urlKey, StringComparison.CurrentCultureIgnoreCase)) .DefaultIfEmpty((null, "Local_REST_API_URL")) .First(); return _configuration.GetSection(key.sectionKey).Value;
{ "domain": "codereview.stackexchange", "id": 42888, "tags": "c#, configuration" }
Scaling arguments for the Contact mechanics between two elastic spheres
Question: I am studying a bit of granular dynamics and I have seen that two spheres of radius $R$ in contact with a contact area of radius $a$ would need an applied force $F$ on these two spheres that is nonlinear in the depth of deformation $\delta$, as it goes as: $F \sim \delta^{3/2}$ To be honest, I am not really interested in the full calculation as I am pretty sure I will forget it within two days, and the full calculation probably would not give me a lot of insight on what is happening. One way "a la de Gennes" that I have seen consists in relating the stress $\sigma$ to the vertical deformation $\epsilon$ via $\sigma = E \epsilon$. Then people say that for spheres and if $a \ll R$ then 1) $\epsilon \sim \delta /a$ 2) $a \sim \sqrt{\delta R}$ $\Rightarrow \:\sigma \sim E\sqrt{\delta/R}$ 3) $F = \pi a^2 \sigma \sim \pi E R\delta \sqrt{\delta /R} \sim \pi E\sqrt{R}\delta^{3/2}$ This final result is pretty close to the actual one. My point is that I don't understand the physics of the first equation as I am used to $\epsilon = \delta L/L$, which would give me $\epsilon \sim \delta/R$. Question: Is there any way to understand physically the fact that $\epsilon \sim \delta/a$? Thanks very much for any answer. Answer: On an intuitive level, if $a\ll R$ then nearly all of the deformation will occur close to the surface. Imagine for a bit that $R$ is the radius of the Earth and you're pushing on some dirt with a baseball, so there's a circular contact patch with a radius of about 1/2 an inch. Now if the Earth were half as big, would the forces/stresses/strain/contact area be any different? No. All of the stress and deformation is local to the contact patch, so the size of the overall body makes no difference. Specifically, the deformation only goes to a depth that's proportional to the radius of the contact patch. So the strain, which is inversely proportional to deformation depth, must also be inversely proportional to contact radius.
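The scaling chain can be checked numerically; the sketch below keeps only the order-of-magnitude relations from the question (prefactors of order one are dropped, and the material/geometry numbers are illustrative assumptions):

```python
import math

def contact_radius(R, delta):
    return math.sqrt(delta * R)              # a ~ sqrt(delta * R)

def strain(R, delta):
    return delta / contact_radius(R, delta)  # eps ~ delta/a = sqrt(delta/R)

def force(E, R, delta):
    a = contact_radius(R, delta)
    sigma = E * strain(R, delta)             # sigma ~ E * sqrt(delta/R)
    return math.pi * a**2 * sigma            # F ~ pi * E * sqrt(R) * delta**1.5

# Hallmark of Hertzian contact: doubling the indentation depth multiplies
# the force by 2**1.5 ~ 2.83 rather than 2 (the response is stiffening).
E, R, d = 1.0e9, 0.01, 1.0e-5                # assumed Pa, m, m
ratio = force(E, R, 2 * d) / force(E, R, d)
```

The ratio test is independent of the assumed numbers, which is exactly the point of a scaling argument.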
{ "domain": "physics.stackexchange", "id": 24503, "tags": "newtonian-mechanics, classical-mechanics" }
Can dogs see infrared radiation, i.e., heat?
Question: I, like most of my fellows, prefer my meals served piping hot. As all know, humans sometimes feed their dogs table scraps. If I put down a small piece of chicken from within a piping hot chicken pot pie and alert my small pet dog that it's for her, she'll stand back 2 or 3 feet away from it for a while before eventually consuming it. I am starting to believe that she only does this behavior when the food I put down is too hot for her. So it begs the question I here pose: are dogs able to see heat radiation from afar? Answer: A dog's nose can even sense weak thermal radiation, as found out recently by researchers at Lund University and Eötvös Loránd University. So strong thermal radiation can be detected from a couple of meters away, which would explain your observations. Citation from this article: To test the idea, researchers trained three pet dogs to choose between a warm (31°C) and an ambient-temperature object, each placed 1.6 meters away. The dogs weren't able to see or smell the difference between these objects. (Scientists could only detect the difference by touching the surfaces.) After training, the dogs were tested on their skill in double-blind experiments; all three successfully detected the objects emitting weak thermal radiation, the scientists reveal today in Scientific Reports. Next, the researchers scanned the brains of 13 pet dogs of various breeds in a functional magnetic resonance imaging scanner while presenting the pooches with objects emitting neutral or weak thermal radiation. The left somatosensory cortex in dogs' brains, which delivers inputs from the nose, was more responsive to the warm thermal stimulus than to the neutral one. The scientists identified a cluster of 14 voxels (3D pixels) in this region of the dogs' left hemispheres, but didn't find any such clusters in the right, and none in any part of the dogs' brains in response to the neutral stimulus. 
Together, the two experiments show that dogs, like vampire bats, can sense weak hot spots and that a specific region of their brains is activated by this infrared radiation, the scientists say. They suspect dogs inherited the ability from their ancestor, the gray wolf, who may use it to sniff out warm bodies during a hunt. Here's the scientific article if you want to dig your nose in a bit further (pun intended).
{ "domain": "biology.stackexchange", "id": 12150, "tags": "zoology, vision, senses, heat, dogs" }
Can we define Kinetic Energy from non-inertial frame?
Question: I have read somewhere that the Work-Energy Theorem can be applied even from a non-inertial frame by adding the work done by the pseudo-force. I further think that we need to take Kinetic Energy with reference to this frame for the theorem. My question: Is my thinking correct? I find it a little absurd that "energy depends on frame". Please provide some arguments to justify/counter it. Answer: I have read somewhere that the Work-Energy Theorem can be applied even from a non-inertial frame by adding the work done by the pseudo-force. Yes, this is the case. The work-energy theorem results from Newton's second law, and pseudo-forces are what allow for Newton's second law to work in non-inertial reference frames. So the work-energy theorem works fine in non-inertial reference frames. Pseudo-forces essentially allow you to treat non-inertial frames like inertial ones, although pseudo-forces themselves do not obey Newton's third law. I find it a little absurd that "energy depends on frame". Please provide some arguments to justify/counter it. Kinetic energy of a point particle is $\frac12mv^2$. Since $v$ is reference frame dependent, so is the kinetic energy. As a simple example, if I see an object moving by me, I say it has kinetic energy. If you are moving with that object so that you see it as being at rest, then you would say it has no kinetic energy.
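A numeric sketch of the first point (the numbers are illustrative assumptions; a constant real force and a uniformly accelerating frame are assumed, both starting from rest): the work done by the real force plus the work done by the pseudo-force equals the change in kinetic energy measured in the accelerating frame.

```python
m, F, A, t = 2.0, 4.0, 0.5, 3.0    # mass, real force, frame acceleration, elapsed time
a = F / m                          # acceleration in the ground frame

# kinematics as seen in the frame accelerating at A
x_frame = 0.5 * (a - A) * t**2     # displacement in the accelerating frame
v_frame = (a - A) * t              # velocity in the accelerating frame

w_real   = F * x_frame             # work done by the real force, in this frame
w_pseudo = (-m * A) * x_frame      # work done by the pseudo-force -m*A
delta_ke = 0.5 * m * v_frame**2    # change in kinetic energy, in this frame
# w_real + w_pseudo == delta_ke: the theorem holds once the pseudo-force is included
```

Note that `delta_ke` here (20.25 J with these numbers) differs from the ground-frame change $\frac12 m (at)^2 = 36$ J, which is exactly the frame dependence of kinetic energy the answer describes.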
{ "domain": "physics.stackexchange", "id": 71724, "tags": "newtonian-mechanics, energy, reference-frames, work" }
How does myopia develop, exactly?
Question: Recently I was reading about myopia and I understood a few basic facts about it: Its initial cause is a constant spasm in the ciliary muscle. To do less work, the eyeball elongates a tiny bit. During adolescence, because of the growth of the body, the eyeball elongates accordingly even more (i.e. the myopia is progressing). I don't really understand the exact mechanism responsible for the transition from 1 to 2. Is the ciliary muscle capable of elongating the eyeball on its own, or is there some other relationship to another part of the eye? It's weird how myopia is so common these days and I wasn't able to find the precise mechanism behind it. Answer: What is myopia? Myopia, a.k.a. near-sightedness, is a refractive error. Refraction is the process by which the optical system focuses images of objects on the retina, which are then transmitted to the visual cortex of the brain to be interpreted as vision. The major components of refraction are the cornea, lens, and axial eye length (distance from the front to the back of the eye). The problem in myopia is that images are focused anterior to the retina, like this: Causes of myopia You ask I don't really understand the exact mechanism responsible for the transition from 1 [ciliary muscle spasm] to 2 [elongation of the eye]. Swelling or spasm of the ciliary body (the circumferential tissue inside the eye composed of the ciliary muscle and ciliary processes) can be seen after trauma or in inflammatory conditions. Contraction of the ciliary muscles causes decreased tension in the suspensory ligaments that connect the ciliary body to the lens. This results in the lens becoming more rounded and/or moving anteriorly. Both processes also bring the focal point more anterior, i.e. further in front of the retina, resulting in a configuration as in the picture above. You could call this "elongation of the eye" if you wanted. This 25 second video explains the process succinctly with pretty pictures. 
Although interesting to think on, this is not the most common cause of myopia. Myopia is an extremely common condition, occurring in ~25% of the US population. Myopia can be classified as axial or refractive. Axial myopia: The refracting power of the eye is normal but the length of the eye is too long. Refractive myopia: The length of the eye is normal, but the lens refracts too much for a given axial length. Both axial and refractive myopia are usually caused by failure of emmetropization, the process by which a normal eye has coordinated growth of its refractive components. This process takes place mostly during the childhood and adolescent years, which is why myopia most commonly develops during that time. There are certain disease conditions that increase lens power (e.g. osmotic effects in diabetes) or axial length (e.g. posterior staphyloma) that may also cause myopia at other points in life, but these are less common. References: 1. Neil J. Friedman and Peter K. Kaiser. Essentials of Ophthalmology. © 2007, Elsevier Inc. 2. Albert, Daniel M., MD MS.Albert & Jakobiec's Principles & Practice of Ophthalmology, Third Edition. © 2000, 1994 by W.B Saunders Company, © 2008, Elsevier Inc. Image from: http://en.wikipedia.org/wiki/Myopia
{ "domain": "biology.stackexchange", "id": 2850, "tags": "vision, pathophysiology, human-eye" }
Is there such a thing as an interaction radius for molecules?
Question: My question is about estimating the radius of influence between two molecules; picture some mixture, comprised of water, oxygen gas (in small concentrations) and a molecule we denote $G$. In the mixture, oxygen gas can react with the excited state of the molecule $G$ but not with the ground state of $G$. After an excitation event, we can work out the proportion of $G$ moles which have become excited, but it's a little trickier to ascertain the next part about the chances of an oxygen molecule being sufficiently close to the excited $G$ to interact. One approach is to calculate the fractional volume of oxygen per volume of mixture; this is relatively straightforward to calculate and I've established a formula to do so for different oxygen concentrations; however, when I take the results from this and feed them into my model, they're too low by a factor of more than $10^5$; this could of course be due to the model being a poor choice, but my intuition tells me that what I really should be looking at is not the volume of oxygen but the "interaction volume" surrounding the O2 molecules; for example, if there is a field of radius $R$ inside which electrical interaction is likely, then my volume calculation should not aim to find the total volume of O2 gas but rather the total volume over which interaction is likely, $V_{L}$; in a naive form assuming constant electrical attraction, something like: $V_{L} = N_{o}\left(\frac{4}{3}\pi R^3 \right)$ where $N_{o}$ is the total number of oxygen molecules. This might be overly simplistic, as an interaction volume is likely to be a function of distance by Coulomb's law / electrodynamic effects, but before I go and re-invent the wheel, does anyone know if there is a branch of molecular dynamics or atomic theory which deals with these kinds of problems? I've messed around with Van der Waals radii but they present the same problem in so much as they approximate the volume but not the sphere of interaction. 
Answer: In molecular dynamics, you want to think about "cross sections" and "mean free path" - two closely related concepts. You also have to consider the relative velocity of the molecules - is G of the same mass as the oxygen molecules? As you know, lighter molecules travel faster (same kinetic energy by equipartition theorem). You might find this article helpful. It gives the mean free path as $$\ell = \frac{k_BT}{\sqrt{2}\pi d^2p}$$ When you divide that by the velocity you get the mean time between collisions of similar molecules. When the masses and concentrations of the molecules are different, some scaling is required. You can find more information and diagrams at http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/menfre.html
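Plugging numbers into the quoted formula (the molecular diameter below is an assumed hard-sphere value for an air-like gas, not taken from the question):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, p, d):
    """Hard-sphere mean free path: l = k_B * T / (sqrt(2) * pi * d**2 * p)."""
    return K_B * T / (math.sqrt(2) * math.pi * d**2 * p)

# roughly air-like conditions: T = 300 K, atmospheric pressure,
# d ~ 3.7e-10 m (an assumed effective molecular diameter)
ell = mean_free_path(T=300.0, p=101325.0, d=3.7e-10)  # on the order of 1e-7 m
```

Dividing `ell` by the mean molecular speed then gives the mean time between collisions, as described in the answer; for dissimilar species (O2 versus $G$) the diameters and masses would need the scaling the answer mentions.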
{ "domain": "physics.stackexchange", "id": 19077, "tags": "atomic-physics, interactions, molecular-dynamics" }
Why is it common to plot $xG(x,Q^2)$ and not simply $G(x,Q^2)$?
Question: I'm trying to understand the modern description of high-energy scattering processes involving hadrons in the initial states. The phenomenological parton distributions functions play a central role, and as I understand it at the moment, if we are e.g. talking about gluons, the function $G(x, Q^2)$ is the probability of finding a gluon with momentum fraction $x$ inside the hadron if the transmitted four-momentum is $Q^2$. When these functions are plotted, I often encounter plots showing $x G(x, Q^2)$ instead of simply $G(x, Q^2)$. Why is this so? Is this just because the plots look a lot nicer if plotted this way? Or is there some deeper reason behind this that I haven't figured out yet? As an example, take a look at the plot used on Wikipedia. (Picture from: http://en.wikipedia.org/wiki/File:CTEQ6_parton_distribution_functions.png) Answer: As you say, "$G(x,Q^2)$ is the probability of finding a gluon with momentum fraction $x$ inside the hadron if the transmitted four-momentum is $Q^2$." In other words, $G(x,Q^2)$ is a probability density function. As you can see from the article, in this case the expectation value of the variable is $E[X] = \int_{0} ^{1} x\cdot G(x,Q^2) dx$ The plot of $x\cdot G(x,Q^2)$ then gives an intuitive sense of the "contribution" to the expectation value of the probability density function.
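The point can be illustrated with any density; the sketch below uses a toy density $f(x) = 6x(1-x)$ standing in for $G(x,Q^2)$ (a real parton distribution is not a normalized probability density, so this is only an illustration of why one plots $x\,f(x)$):

```python
def expectation_of_x(f, n=100_000):
    """Midpoint-rule estimate of E[x] = integral_0^1 x * f(x) dx.

    The integrand x*f(x) is exactly the curve one plots, so the area
    under the plotted curve is the expected momentum fraction.
    """
    h = 1.0 / n
    return sum((i + 0.5) * h * f((i + 0.5) * h) for i in range(n)) * h

toy_density = lambda x: 6.0 * x * (1.0 - x)  # normalized, symmetric about x = 1/2
mean_x = expectation_of_x(toy_density)       # analytically 1/2
```

Regions where $x\,f(x)$ is large are precisely the regions contributing most to the carried momentum, which is what the $xG(x,Q^2)$ plots emphasize.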
{ "domain": "physics.stackexchange", "id": 1596, "tags": "particle-physics, quantum-chromodynamics, visualization" }
Is Conservation of Energy maintained when the orbit of a rotating mass increases in diameter?
Question: I read an example where someone was explaining how the law of conservation of energy does not have to be maintained within a rotating mass even though angular momentum is maintained. The given example is an ice skater who spins faster as she brings her arms inward. The energy used bringing her arms inward gets transferred to her total rotational energy. Therefore, her total rotational energy increases while angular momentum was maintained. My question is about the opposite movement. Assume you have a weight positioned on a rotating wheel and the weight is held in place by a lock on a radial slide. Now assume the wheel is rotating in motion. If you unlock the weight (via electronic control for example), the weight will move outwardly toward the perimeter of the wheel due to its own inertia (centrifuge effect). When this occurs, does the kinetic energy of the wheel decrease? I actually performed an experiment of this very thing, and my crude setup seemed to confirm that the available energy in the wheel DOES indeed decrease. If this complies with the mathematical laws, can someone please confirm and explain where the energy goes? It's certainly not being lost to vibration or heat. How is the energy in the wheel decreasing simply because the weight travels outward to the perimeter? Answer: Yes, the rotational kinetic energy decreases. The extra energy is converted to thermal energy in the wheel and environment. If you imagine letting the weight go, it will slide across the surface of the wheel as it moves towards the edge. This sliding is motion against friction, so energy is lost there. Then the weight might bang into whatever holds it at the edge of the wheel, dissipating more energy. So the energy is simply thermalized just like it is when you drop a lump of clay to the ground. It would be possible to construct a device so that the energy goes somewhere else. 
For example, you could put a hole in the center of the wheel and tie a rope from the weight, through the hole, to another weight. Then as the weight on the wheel moved towards the edge it would lift the weight on the rope, and at least some of the energy would be stored as gravitational potential energy. Similarly, you could make the weight on the wheel compress a spring as it moved towards the edge, and then some of the energy would be stored as potential energy in the spring, etc.
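The bookkeeping can be checked with conserved angular momentum: writing the rotational kinetic energy as $KE = L^2/2I$, letting the weight slide outward raises the moment of inertia $I$ at fixed $L$, so $KE$ must drop (the numbers below are assumed toy values for illustration):

```python
def rotational_ke(L, I):
    """Kinetic energy of a rotor with angular momentum L and moment of inertia I."""
    return L**2 / (2.0 * I)

I_wheel = 1.0                    # kg m^2, inertia of the bare wheel (assumed)
m, r_in, r_out = 0.5, 0.2, 1.0   # weight mass and its initial/final radii (assumed)
L = 2.0                          # kg m^2/s, conserved after the lock is released

ke_before = rotational_ke(L, I_wheel + m * r_in**2)
ke_after  = rotational_ke(L, I_wheel + m * r_out**2)
ke_lost   = ke_before - ke_after  # positive: the amount friction/impact must absorb
```

The positive `ke_lost` is exactly the energy the answer says is thermalized (or, with a rope or spring, stored as potential energy instead).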
{ "domain": "physics.stackexchange", "id": 26109, "tags": "angular-momentum, rotational-dynamics, centripetal-force, centrifugal-force, moment-of-inertia" }
Does acceleration warp space?
Question: I know that mass warps spacetime and gravity and acceleration are equivalent, so does acceleration warp spacetime too? Answer: Sort of. You are correct in saying (with some caveats) that gravity and acceleration are equivalent. According to general relativity, gravity is manifested as curvature of spacetime. As we know from special relativity and Einstein's famous equation $E = mc^2$, energy and mass are equivalent. As a result, any type of energy contributes to gravity (i.e. to the curvature of spacetime). This relationship can be seen directly from Einstein's Field Equations of General Relativity: \begin{equation} G_{\mu\nu} = 8\pi T_{\mu\nu}, \end{equation} where the left hand side of the equation (called the Einstein tensor) contains information about the curvature of spacetime and the right hand side (called the stress-energy tensor) contains information about the mass and energy contained in that spacetime. Recall that Minkowski spacetime is the spacetime of special relativity. That is, it has no curvature (no gravity) and is the shape of spacetime when you are in an inertial (non-accelerating) reference frame. So, let's ask the question: what happens when you accelerate in Minkowski space? The answer is that spacetime no longer looks flat to accelerated observers. This is precisely the equivalence principle; locally we cannot tell if we are in a gravitational field or accelerating. Thus, when we are in fact accelerating in a flat spacetime, everything will locally appear as though we are in a spacetime that is curved due to gravity. There are other interesting similarities between accelerated observers in flat spacetime and observers in gravitational fields. For example, accelerated motion leads to horizons similar to the event horizon of a black hole because if you accelerate at a constant rate for long enough then there will be portions of the spacetime to which you can never send or receive light signals. 
There is also an analog of Hawking radiation that occurs for accelerated observers in Minkowski space, called the Unruh effect.
{ "domain": "physics.stackexchange", "id": 77193, "tags": "general-relativity, spacetime, acceleration, curvature" }
Can a body under weightlessness detect if it's under free fall or under a zero gravity field?
Question: My first thought was to use an accelerometer. However, quoting from Wikipedia, an accelerometer at rest on the surface of the Earth will measure an acceleration g = 9.81 m/s2 straight upwards. By contrast, accelerometers in free fall (falling toward the center of the Earth at a rate of about 9.81 m/s2) will measure zero. I'm assuming that the same accelerometer would measure zero in the absence of any gravitational fields whatsoever too (say in outer space, away from all and any massive objects). So an accelerometer can't distinguish between the two situations, then what can? I think that the two situations aren't equivalent because in free fall, the kinetic energy of the object increases with time (i.e. there is a change in velocity due to acceleration), so there must be a way to distinguish between the two. Can this be answered with Newtonian mechanics? If not, then is it a limitation of Newtonian mechanics? Answer: No, it can't: free fall and zero gravity are the same thing. On that equivalence hangs the theory of General Relativity, in fact.
{ "domain": "physics.stackexchange", "id": 29803, "tags": "newtonian-mechanics, newtonian-gravity, equivalence-principle" }
Split Larger String Into Console Window based on window size
Question: Issue: Our application logs and traces are not standard; helping our aggregation tool with multiline content required some intervention on our part. Below I wrote some code to partition those longer strings into lines based on window size. I feel like the code could be improved. private static IEnumerable<string> Partition(this string content, int size) { var index = 1; var line = String.Empty; var lines = new List<string>(); var words = content.Split(' ', StringSplitOptions.RemoveEmptyEntries); foreach(var word in words) { if((line.Length + word.Length) <= size) line += $"{word} "; if((line.Length + word.Length + words.ElementAt(index).Length) > 96) { lines.Add(line); line = String.Empty; } index++; } return lines; } The complaint or area that stands out: The excessive instantiation of string The collection and loop combination Answer: I suggest a simpler implementation. Instead of looking ahead in a second if-statement, we output the current line if adding the new word would exceed the maximum line size and clear the StringBuilder used to build the lines (see below). We ensure the last line is returned after the loop. We use a StringBuilder Class to minimize the number of string allocations. Also, we return the lines immediately by using an iterator method. This makes the List<string> collection superfluous, which saves allocations again. public static IEnumerable<string> Partition(this string content, int maxLineLength) { string[] words = content.Split(' ', StringSplitOptions.RemoveEmptyEntries); var sb = new StringBuilder(maxLineLength + 1); // + 1 for the trailing white space. foreach (string word in words) { if (sb.Length > 0 && sb.Length + word.Length > maxLineLength) { sb.Length--; // Remove the trailing white space. yield return sb.ToString(); sb.Clear(); } sb.Append(word).Append(' '); } if (sb.Length > 1) { sb.Length--; // Remove the trailing white space. 
yield return sb.ToString(); } } I also tested for sb.Length > 0 in the if-condition, just in case a single word exceeds the maximum line length. Removing the trailing white spaces is an addition to your implementation. Test string input = "Issue: Our application logs and traces are not standard, to help our aggregation tool for multiline content required some intervention on our part. Below I wrote some code to partition those longer strings into lines based on window size. I feel like the code could be improved."; foreach (string line in input.Partition(40)) { Console.WriteLine(line); } outputs: Issue: Our application logs and traces are not standard, to help our aggregation tool for multiline content required some intervention on our part. Below I wrote some code to partition those longer strings into lines based on window size. I feel like the code could be improved. A more advanced implementation could use a ReadOnlySpan<char> and the MemoryExtensions.Split Method (.NET 8.0) to save even more memory allocations.
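For comparison, the same greedy strategy written as a Python generator (a sketch, not part of the C# answer): build each line as a list of words and flush it when the next word would overflow.

```python
def partition(content, max_line_length):
    """Yield lines of at most max_line_length characters, wrapping on words.

    A single word longer than the limit still gets its own line,
    mirroring the sb.Length > 0 guard in the C# version.
    """
    line, length = [], 0
    for word in content.split():
        extra = len(word) + (1 if line else 0)  # +1 for the joining space
        if line and length + extra > max_line_length:
            yield " ".join(line)
            line, length = [], 0
            extra = len(word)
        line.append(word)
        length += extra
    if line:
        yield " ".join(line)
```

For example, `list(partition("Issue: Our application logs and traces are not standard", 20))` yields lines that each fit in 20 characters while preserving every word.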
{ "domain": "codereview.stackexchange", "id": 44885, "tags": "c#" }
Why are three way bridges rare?
Question: It's true that most bridges are "two directional." But three-way bridges are pretty rare, globally. I can understand why there wouldn't be many for rivers, but if bridges are designed based on the lie of the surrounding ground, why wouldn't there be a large number of non-river sites that would support such bridges? On the other hand, three of the world's three-way bridges exist in Michigan (and only ten or so elsewhere in the United States). What is it about the land, topography, or other features of Michigan that causes it to have a disproportionate number of the country's and world's three-way bridges? Answer: Most bridges (and overpasses) are built to cross over something. With a few notable exceptions, most of these "somethings" are relatively long perpendicular to the desired crossing direction and fairly narrow parallel to it. Therefore a simple "two directional" bridge best meets the needs of the engineering problem. Engineers always try to solve a problem in the simplest possible way to prevent introducing more problems than they have solved. If a simple two-way bridge solves the problem, then there is no reason to complicate the solution. Along those lines, vehicular traffic flow on a three-way bridge can be fairly complicated and may not easily facilitate high traffic volume. I believe that is why you will find more pedestrian bridges created in this style. As for Michigan, I don't know. It's entirely possible that a structural engineer or engineering company in the area had a fondness for that type of structure and bid on those projects with three-way designs.
{ "domain": "engineering.stackexchange", "id": 141, "tags": "structures, design, civil-engineering" }
Word based perplexity from char-rnn model
Question: I'm training a character based RNN model for text prediction and want to compare it to similar models. Since most literature uses word based perplexity as a performance metric, what would be the "proper" way to calculate word based perplexity from a character based model? Answer: Actually, there is a formula which can easily convert between character based and word based perplexity: $PPL = 2^{(BPC*Nc/Nw)}$ where $BPC$ is the bits per character of the character-level model, and $Nc$ and $Nw$ are the number of characters and words in a test set, respectively. The formula is not completely fair, but it at least offers a way of comparing them. The following are some references. [1] Hwang K, Sung W. Character-Level Language Modeling with Hierarchical Recurrent Neural Networks[J]. 2016. [2] Graves A. Generating Sequences With Recurrent Neural Networks[J]. Computer Science, 2013. [3] T. Mikolov, I. Sutskever, A. Deoras, H. Le, S. Kombrink, and J. Cernocky. Subword language modeling with neural networks. Technical report, Unpublished Manuscript, 2012.
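The conversion is one line of code; the counts below are illustrative assumptions, not values from the question:

```python
def word_ppl_from_bpc(bpc, n_chars, n_words):
    """Word-level perplexity from character-level bits per character:
    PPL = 2 ** (BPC * Nc / Nw)."""
    return 2.0 ** (bpc * n_chars / n_words)

# e.g. a model at 2.0 bits/char on a test set averaging 5 characters per word
# corresponds to a word-level perplexity of 2**10 = 1024
ppl = word_ppl_from_bpc(bpc=2.0, n_chars=50, n_words=10)
```

Note that `Nc` should be counted the same way the character model was scored (including or excluding spaces consistently), which is part of why the comparison "is not completely fair."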
{ "domain": "datascience.stackexchange", "id": 1743, "tags": "nlp, rnn" }
Standard Cutoff for Moderated T-statistics
Question: I'm looking at some microarray data. For the first time I've calculated a moderated T statistic from limma. Is there any standard practice for where to cut off that value? For log2 fold change I usually cut off at +/- 1.5 and adjust accordingly; for adjusted p value I cut off at p<0.01 and adjust accordingly. These moderated T statistics seem a little higher than log2 fold change and I'm not sure what is generally accepted as significant. Thanks! Answer: You're misinterpreting the moderated T-statistic: it's basically the fold-change divided by its standard error. The p-value comes directly from that, so if you filter by the moderated T-statistic you're just setting an unknown (unless you go through the trouble of figuring it out) p-value threshold. Instead, do exactly as you've been doing and set appropriate (adjusted) p-value and fold-change thresholds.
{ "domain": "bioinformatics.stackexchange", "id": 263, "tags": "bioconductor, statistics, differential-expression, microarray" }
Is the non-simply connected version of AdS space a maximally symmetric spacetime?
Question: A common construction of anti-de Sitter space is the following: Start with the flat five-dimensional manifold with metric $ds_5^2 = -du^2 - dv^2 + dx^2 + dy^2 + dz^2$. Consider the hyperboloid submanifold given by $-u^2 - v^2 + x^2 + y^2 + z^2 = -\alpha^2$. Define the hyperbolic coordinates $(t, \rho, \theta, \phi)$ on the hyperboloid. The coordinate $t$ is periodic with period $2 \pi$. The submanifold geometry inherited from the ambient space yields the metric $ds^2 = \alpha^2 \left( -\cosh^2(\rho)\, dt^2 + d\rho^2 + \sinh^2(\rho)\, d\Omega_2^2 \right)$. Consider the universal cover of this hyperboloid, which has the same metric as above but the new coordinate $t$ ranges over the whole real line. This is AdS space. My questions are about the original hyperboloid constructed in steps 1-3, before taking the universal cover. Q1. What is the global topology of this spacetime? Is it $S^1 \times \mathbb{R}^3$? Q2. Is this spacetime maximally symmetric? It seems like it should be, since (I think) it has the same Killing vector fields as AdS. But I've sometimes seen the claim that "AdS is the unique maximally symmetric spacetime with constant negative scalar curvature." Is that true, or is this hyperboloid a counterexample? Answer: Related answers: this one by me and this one by Slereah, which has a classification of maximally symmetric spaces with one timelike dimension (citing Spaces of Constant Curvature: Sixth Edition by Joseph A. Wolf). Your hyperboloid (which Wolf calls $\mathbb H^3_1$) is topologically $S^1\times\mathbb R^3$. The maximally symmetric spacetimes of that signature and curvature are the covers of $\mathbb H^3_1 / \mathbb Z_2$, so $\mathbb H^3_1$ and $\text{AdS}_4$ are maximally symmetric. $\text{AdS}_4$ is unique if you also require simple connectedness.
{ "domain": "physics.stackexchange", "id": 90690, "tags": "general-relativity, spacetime, symmetry, curvature, anti-de-sitter-spacetime" }
How come the Tunguska fireball reached the ground before the shockwave?
Question: In this post on his Bad Astronomy blog, Phil Plait describes the Tunguska event as having had a fireball which was followed by a shock wave: A chunk of rock (or possibly ice) about 30 meters across—the size of a house—barreled in at a speed probably 50 times that of a rifle bullet. Ramming through the Earth's atmosphere, incredible forces compressed it, crumbled it, and when it reached a height of just a few kilometers above the ground, those forces won. In a matter of just a few seconds the energy of its immense speed was converted into heat, and it exploded. [...] The fireball created a huge forest fire over hundreds of square kilometers of the Podkamennaya Tunguska River region of the Siberian forest ... but then the immense shock wave from the blast touched down. It blew the fire out and swept down those trees like a rolling pin, knocking down untold millions of them. I guess the forest-fires-then-shock-wave makes sense as an explanation for the large number of scorched and partially scorched trees found at the site. However, I'm confused by the explanation - wouldn't the shockwave be the first thing to hit the ground? Can anyone in the know comment on the anatomy of this explosion? Answer: As illustrated HERE on Astronomy SE, since the velocity of the meteor or comet nucleus is going to be about 40 km/s (essentially the escape velocity from the sun at 1 AU) and the velocity of Earth is about 30 km/s, the relative velocity of the two can be anywhere from 10 km/s to 70 km/s (varying by a factor of 7) depending on the direction the object is coming from. The kinetic energy of the object is proportional to the square of the velocity, so it varies by a factor of about 50! All of the kinetic energy is converted, in milliseconds, to heat in the air (which quickly becomes visible and UV light) - or to forming a crater if it reaches the ground.
Examining the damage, something of an estimate of the energy can be made, but this could be a smaller object coming in relatively fast, or a larger object coming in relatively slowly. This means the mass of the object is very poorly constrained. The Tunguska aerial explosion is thought to be from an object coming in at a shallow angle. This matters for two reasons: objects coming in at such an angle are more likely to explode before impact, and when they do, they leave a butterfly-shaped pattern of damage under them owing to the explosion happening along a path segment rather than at a point. Light from the explosion, not the fireball itself, reaches the ground essentially instantaneously. The explosion itself takes milliseconds. The light from the plasma created by the explosion fades quickly; most of the light happens within a small fraction of a second. This light pyrolyzes, rather than burns, the vegetation, carbonizing some of it. Only in the case of a fireball barely above the surface would the fireball itself interact with the surface. The shock wave, which knocks over the trees and cools them to the point that they will not burn, reaches the ground directly under the blast travelling at the speed of sound, in 1 to 3 seconds, depending on height. Ground some distance away from the center is hit a little later, also depending on distance. Trees directly below lose branches but stay standing, similar to the Genbaku Dome under the atomic bomb at Hiroshima.
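Putting rough numbers on two of the claims above (a back-of-the-envelope sketch; the burst height is my assumption, not a figure from the answer):

```python
v_min, v_max = 10e3, 70e3          # relative impact speed range, m/s (from above)
ke_factor = (v_max / v_min) ** 2   # kinetic energy scales as v**2
print(ke_factor)                   # 49.0 -- the "factor of about 50"

h = 8e3                            # assumed burst height, m
c = 3e8                            # speed of light, m/s
print(h / c)                       # ~2.7e-05 s: the light flash is effectively instantaneous
```

The light arrives tens of microseconds after the burst, while the shock wave trails it by seconds, which is the ordering the answer describes.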
{ "domain": "physics.stackexchange", "id": 82320, "tags": "classical-mechanics, atmospheric-science, meteors, comets, shock-waves" }
Script scraper for outputting variables and functions to a text file
Question: I've been programming with Python 2.7 for about six months now and if possible, I'd like some critique to see what I might have done better or what I might be doing wrong. My objective was to make a script scraper that finds variables and functions, outputting them to a text file along with their line number. It also organizes the output based on type and string length for readability. It works as intended, but I'd like to know if/how I could have written it better, cleaner.

import re

def change_line():
    parse_next['line_count'] += 1

def make_readable(list):
    fix_order = {}
    for item in list:
        fix_order[len(item)] = item
    return sort_this(fix_order)

def sort_this(dict):
    fixed = dict.keys()
    fixed.sort()
    newly_ordered = []
    for number in fixed:
        newly_ordered.append(dict[number])
    return newly_ordered

def parse_match(match):
    if '\n' in match:
        parse_next[match]()
    elif 'def' in match:
        parse_next[match] = parse_next['line_count']
        funcs.append(match)
    elif match:
        parse_next[match] = parse_next['line_count']
        varis.append(match)
    else:
        print 'error'

funcs = []
varis = []

txt_file = open('c:\\code\\test.txt', 'r')
output = open('c:\\code\\output.txt', 'w+')

found = re.findall(r'\n|def\s.+|.+ = .+', txt_file.read())

parse_next = {
    '\n': change_line,
    'line_count': 1,
}

for match in found:
    parse_match(match)

funcs = make_readable(funcs)
varis = make_readable(varis)

output.write('\t:Functions:\n\n')
for item in funcs:
    s_fix = item.replace('def ', '',)
    to_write = [s_fix, ' Line:', str(parse_next.get(item)), '\n']
    output.writelines(to_write)

output.write('\n'*2)
output.write('\t:Variables:\n\n')
for item in varis:
    to_write = [item.strip(), ' Line:', str(parse_next.get(item)), '\n']
    output.writelines(to_write)

output.close()
txt_file.close()

To run it, you'll need to edit this filepath: txt_file = open('c:\\code\\test.txt', 'r') with code from a script of your choice. Be harsh if necessary, I'd really like to get better.
Answer: A couple of suggestions: make_readable has a very serious problem - what if two inputs have the same length? A dictionary can only have one value per key; I suggest making it a dictionary of lists of items (or a collections.defaultdict(list)) instead. Also, don't call the argument list - that's a built-in. sort_this could be shortened (and the argument shouldn't be called dict):

def sort_this(d):
    return [d[key] for key in sorted(d)]

Iterating over a dictionary (e.g. in sorted) automatically iterates over the keys. Also, this will work without modification in Python 3.x (where fixed.sort() will fail). The global variables (e.g. funcs) should either be explicitly global (i.e. put global funcs at the start of functions that use them) or, much better, passed as arguments to the functions:

def parse_match(match, funcs, varis, parse_next):

One bug you may want to look into - if a variable is defined over multiple lines:

my_list = [1, 2,
           3, 4,
           5, 6]

Your code only gets the start of the definition. Finally, it would be neat to not have to change the hard-coded strings to change the files to analyse and output to! Look into taking command line arguments (sys.argv) or using e.g. tkinter to put a GUI in. At the very least, have a higher-level function analyse(input_file, output_file) then call it at the end of your script:

if __name__ == "__main__":
    analyse("C:\\code\\test.txt", "C:\\code\\output.txt")

It is not best practice to put the variables you have to change somewhere in the middle of the file - top or bottom is easier to find!
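A sketch of the reviewer's first suggestion, grouping items by length with a defaultdict so equal-length entries are no longer lost (names and sample data are mine):

```python
from collections import defaultdict

def make_readable(items):
    # Group items by length instead of overwriting one dict slot per length.
    by_length = defaultdict(list)
    for item in items:
        by_length[len(item)].append(item)
    # Flatten in order of increasing length.
    return [item for length in sorted(by_length) for item in by_length[length]]

print(make_readable(["def foo", "x = 1", "y = 22"]))
# ['x = 1', 'y = 22', 'def foo']
```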
{ "domain": "codereview.stackexchange", "id": 6566, "tags": "python, optimization, python-2.x" }
More volume with more mass?
Question: Regarding the Schwarzschild solution, is there more volume with more mass? Let's take for example a space shuttle (spherical for simplicity) in a stationary orbit at the position $r_1$. The black hole around which it is orbiting has a mass $M$. Now, the mass is increased (never mind how); $M_2=2\cdot M_1$, for example. The position $r_1$ is kept the same. Doesn't the volume of the space shuttle increase due to the mass increase and curvature increase? According to a comment, the question might be similar to the question whether the volume of the space shuttle increases while moving nearer to the center of the black hole. The curvature there is stronger than outside. As the metric is $ds^2 = -Bdt^2 + Adr^2 + \text{angular terms}$, and $A$ increases as $r$ decreases, the volume should increase as well. However, this seems to depend on the sign convention: if (+---) is used instead of (-+++), then the volume is decreasing, isn't it? I read somewhere that Einstein used (+---) and we now mostly use (-+++), and that it is pure convention - but isn't the volume change important to classify? Addendum: To make the question clearer I want to change it to the following thought experiment: Mankind finally managed to develop a material which completely isolates from gravity (please, don't ask me how). Real hoverboards (see "Back to the future" 2) are used all around the planet and, at last, the following experiment is done regularly in the physics undergrads course: They get an empty cube of this gravity-isolating material. In the middle of the cube, there is a (non-isolating) sphere which can be filled with a material of high gravitational mass. The volume of the cube is measured with and without the mass in the middle (probably by simply pressing water into it). Of course, the undergrads need to be very precise with the volume measurement, carefully keeping temperature the same and everything.
Do they measure more volume inside the cube if the middle sphere is filled with mass? For clarification: volume is positive here, mass is also positive here. The material of the cube is magical. It just stays where it is and isolates gravity. (Hm, OK, I see: "it stays where it is" regarding the inside of the cube is different from "it stays where it is" regarding the room with the students. So, to define even this: it stays where it is with respect to the room with the students.) Answer: @BarrierRemoval, you must be aware of what kind of curvature you speak of. The Schwarzschild vacuum spacetime is indeed curved: its Riemann curvature $R^{\rho}_{\sigma \mu \nu}$ is non-zero. On the other hand, its Ricci curvature $R_{\mu \nu}$, and correspondingly the scalar curvature $R$, are zero. Therefore, for a co-moving observer the spaceship does not change its volume. For reference see "Physical and Geometric Interpretations of the Riemann Tensor, Ricci Tensor, and Scalar Curvature", by Lee C. Loveridge https://arxiv.org/abs/gr-qc/0401099v1
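The volume statement can be made quantitative; the Loveridge paper cited above derives (in my notation) that for a small ball of initially comoving test particles with four-velocity $u^\mu$,

$$\left.\frac{\ddot V}{V}\right|_{\tau=0} = -R_{\mu\nu}u^\mu u^\nu,$$

so in the Ricci-flat Schwarzschild vacuum the leading-order volume change vanishes: the ball is stretched radially and squeezed tangentially by the Weyl (tidal) part of the Riemann tensor, but at constant volume. Since a consistent flip of signature from $(-+++)$ to $(+---)$ flips signs on both sides of such relations, the physical conclusion does not depend on the convention the questioner worries about.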
{ "domain": "physics.stackexchange", "id": 84536, "tags": "general-relativity, black-holes, curvature, event-horizon, tidal-effect" }
Velocity as a result of instantaneous force?
Question: A particle with mass $m$ has a force of magnitude $F$ applied to it in the positive $x$ direction at time $t = 0$, and for all $t > 0$ the magnitude of the force equals $0$. What velocity $v$ will the particle travel at, assuming no frictional force or loss of energy due to internal friction etc.? Answer: The impulse is zero: a finite force acting for an interval of zero duration transfers $J = \int F\,dt = 0$. Hence the change in momentum is zero. The particle will remain at rest.
{ "domain": "physics.stackexchange", "id": 33371, "tags": "homework-and-exercises, classical-mechanics" }
Lambda Calculus - free vs. bound variables
Question: While I was reducing the following term, I encountered a little problem: $$(\lambda w x. w x)(\lambda w x. w x) \rightarrow_\beta (\lambda x.(\lambda w x. w x) x)$$ Now my first question: Why is it possible to reduce further after doing alpha conversion? Isn't the last $x$ bound by the first lambda? Second question: In my lecture notes it is mentioned that $(\lambda y . a )b \rightarrow a$. Why can we do this? Shouldn't the result be $b$? Because we apply $b$ to all free occurrences of $y$ in $a$, but there are none, so our expression reduces to $b$. Answer: Iterated abstraction uses abstraction to the right: \begin{align} LHS &= (\quad \lambda wx.wx \space\space\space)\space(\lambda wx.wx) \\ &= (\lambda w.(\lambda x.wx))\space(\lambda wx.wx) \end{align} The beta reduction axiom is $(\lambda x. M[x])N = M[x := N]$. To satisfy your original equation, replace all occurrences of $x$ in the beta reduction axiom with $w$. Then, substitute the following values into the beta reduction axiom: $$M[w] = (\lambda x.wx)$$ $$N = (\lambda wx.wx)$$ The beta reduction axiom with substituted values is $$(\lambda w.(\lambda x .wx))(\lambda wx.wx) = M[w:=(\lambda wx.wx)]$$ where $M[w:=(\lambda wx.wx)]$ means "substitute all occurrences of $w$ in $M$ with $(\lambda wx.wx)$". So we get: $$(\lambda w.(\lambda x .wx))(\lambda wx.wx) = (\lambda x.(\lambda wx.wx)x)$$ as expected. For the $(\lambda y.a)b \rightarrow a$ equation, $y$ is a bound variable, while $a$ is a free variable. We apply $b$ to all occurrences of $y$, but there are none, so our expression doesn't change (no $y$ is converted to $b$). The result is still $a$, unchanged. Source, really well written and accessible if you give it enough time.
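Since untyped λ-terms can be mimicked with one-argument Python lambdas, both reductions are easy to sanity-check (a sketch; the encoding is mine):

```python
# λw.λx.w x encoded as nested one-argument functions
f = lambda w: lambda x: w(x)

# One beta step of (λwx.wx)(λwx.wx): the result behaves like λx.(λwx.wx) x
g = f(f)
inc = lambda n: n + 1
print(g(inc)(41))          # 42: applying w, then x, just as the reduced term does

# (λy.a) b -> a: y does not occur free in the body, so b is discarded
a, b = "a", "b"
print((lambda y: a)(b))    # 'a'
```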
{ "domain": "cs.stackexchange", "id": 9002, "tags": "lambda-calculus" }
COUNT on External Table in HIVE
Question: I have been trying out the EXTERNAL table concepts in HIVE:

CREATE EXTERNAL TABLE IF NOT EXISTS MovieData (
    id INT, title STRING, releasedate date, videodate date, URL STRING,
    unknown TINYINT, Action TINYINT, Adventure TINYINT, Animation TINYINT,
    Children TINYINT, Comedy TINYINT, Crime TINYINT, Documentary TINYINT,
    Drama TINYINT, Fantasy TINYINT, Film-Noir TINYINT, Horror TINYINT,
    Musical TINYINT, Mystery TINYINT, Romance TINYINT, Sci-Fi TINYINT,
    Thriller TINYINT, War TINYINT, Western TINYINT)
COMMENT 'This is a list of movies and its genre'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

I created a table using the above statement and then used the LOAD statement to get the data populated:

LOAD DATA LOCAL INPATH '/home/ubuntu/MovieLens.txt' INTO TABLE MovieData;

Next I DROP the table in HIVE, recreate it again and LOAD the data... But when I do a COUNT operation on the table I get double the number of rows present in the file that I loaded. I read through a few articles saying that an EXTERNAL table drops only the schema from the HIVE metastore, not the data... External Table Can you please advise why HIVE behaves this way?
Since EXTERNAL table doesn't delete the data and you are loading file again you are getting the count difference. if you are on your own to do all operation like load, analysis, drop etc, Hive support the INTERNAL table as well. If you want to delete the data when you drop table you can use Hive INTERNAL table. To create internal table you just need to remove the EXTERNAL keyword from your query and when you will drop this table it will delete the data as well.
{ "domain": "datascience.stackexchange", "id": 3155, "tags": "hive" }
Work done on an object by the internal forces
Question: How is the work done by the internal forces acting in a rigid body zero? Actually I read in a book an example for the same. Let me present that example here. Consider a rigid body having two particles $A$ and $B$. Suppose the particles move in such a way that the line $AB$ translates parallel to itself. The displacement $d\textbf{r}_A$ of the particle $A$ is equal to the displacement $d\textbf{r}_B$ of the particle $B$ in any short interval of time. The net work done by the internal forces, i.e., the force that $A$ exerts on $B$ and the force that $B$ exerts on $A$, is zero. How can it be analysed mathematically that the work done is zero? This example is very unclear to me. Answer: I disagree with both other answers: it's not enough that the sum of forces is zero, and this is not a vocabulary problem. The work-energy theorem applies also to systems of interacting bodies, where the total work of all forces (internal and external) equals the variation of the sum of the kinetic energies of all bodies. Now to answer the question. Newton's third law gives you $\mathbf F_{A→B}=-\mathbf F_{B→A}$ (without condition, in any frame of reference). In your example, $\mathrm d\mathbf r_A=\mathrm d\mathbf r_B$, so you get $δW_{A→B}=\mathbf F_{A→B}·\mathrm d\mathbf r_B=-\mathbf F_{B→A}·\mathrm d\mathbf r_A=-δW_{B→A}$, hence the net work $δW_{A→B}+δW_{B→A}=0$.
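The cancellation is easy to verify numerically; a minimal sketch with arbitrary made-up numbers:

```python
# Force A exerts on B, and its Newton's-third-law pair
F_ab = (3.0, -2.0)
F_ba = (-F_ab[0], -F_ab[1])

# Common displacement of A and B (the line AB translates parallel to itself)
dr = (0.5, 1.5)

dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
W_net = dot(F_ab, dr) + dot(F_ba, dr)   # δW_{A→B} + δW_{B→A}
print(W_net)   # 0.0
```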
{ "domain": "physics.stackexchange", "id": 30462, "tags": "forces" }
How is the partition decided in ROS2?
Question: In ROS2 Ardent, the default partition used is rt/. I recently got a ROS2 publisher code snippet where nothing was mentioned about the partition, so I assumed it to be rt/, but when I ran the RTI Admin Console I got to know that the partition used was rt/ros. Where do I alter it? Originally posted by aks on ROS Answers with karma: 667 on 2018-07-31 Post score: 1 Answer: Until ROS Bouncy, the full topic names were constructed as follows: <ROS_PREFIX>/<NAMESPACE>/<TOPIC_NAME> where: ROS_PREFIX is one of the prefixes defined here Some examples can be found here The partition value is everything before the last forward slash. In your case I assume that the namespace is ros, resulting in a fully qualified topic name rt/ros/my_topic. This leads to the partition rt/ros and the topic name my_topic. Note that as of ROS Bouncy the partitions are not used anymore, so for the topic rt/ros/my_topic the partition will be an empty string and the DDS topic name will be rt/ros/my_topic. Originally posted by marguedas with karma: 3606 on 2018-07-31 This answer was ACCEPTED on the original site Post score: 2
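The pre-Bouncy splitting rule ("the partition value is everything before the last forward slash") is easy to express; a small illustrative helper, not actual rmw code:

```python
def split_dds_topic(full_name):
    """Split a fully qualified pre-Bouncy DDS topic name into (partition, topic)."""
    partition, _, topic = full_name.rpartition('/')
    return partition, topic

print(split_dds_topic('rt/ros/my_topic'))   # ('rt/ros', 'my_topic')
```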
{ "domain": "robotics.stackexchange", "id": 31421, "tags": "ros, ros2, dds" }
What is the influence of the size of a conductor on a repulsive force produced by a changing magnetic flux, according to Lenz's law?
Question: On Lenz's law, Wikipedia says: Faraday's law states that the EMF is also given by the rate of change of the magnetic flux: $$\varepsilon = - \frac{d \Phi_B}{dt}$$ where $\varepsilon$ is the electromotive force (EMF) and $\Phi_B$ is the magnetic flux. But is there also an influence of the size/shape of the receiving circuit on the EMF produced? I'm trying to recreate this at home using aluminium powder suspended in water but am not seeing any movement, and many of the things I read on the internet seem to suggest that a larger conductor will work better, which would explain why I can't get this to work. Answer: (a) "But is there also an influence of the size/shape of the receiving circuit on the EMF produced?" $\Phi_B$ takes account of the size of the loop. For example, if the magnetic flux density has magnitude $B$ all over the loop, and is directed at angle $\theta$ to the loop, then $$\Phi_B=BA\cos \theta$$ in which $A$ is the area of the loop. Clearly a given change in $B$ will produce a larger change in $\Phi_B$ in a loop of larger area. (b) You were expecting to see movement of the aluminium powder due to induced currents in the individual grains. I believe that the grains were too small for this to happen. Here is an attempt to show this by the method of dimensions. Assume that the current is proportional to $\frac{dB}{dt} =\dot B$, and to unknown powers ($\alpha$ and $\beta$) of the resistivity, $\rho$, and radius, $r$.
Thus $$I=\dot B\rho^\alpha r^\beta$$ Equating SI units: $$\text A=\text {T s}^{-1} (\Omega\ \text m)^\alpha\ \text m^\beta$$ Working towards expressing in SI base units: $$\text A=\text {N A}^{-1}\text m^{-1} \text s^{-1}(\text{V A}^{-1} \text m)^\alpha\ (\text m)^\beta $$ So $$\text A=\text {N A}^{-1}\text m^{-1} \text s^{-1}(\text{N m s}^{-1} \text A^{-2} \text m)^\alpha\ (\text m)^\beta $$ So $$\text A=\text {kg m s}^{-2} \text{A}^{-1}\text m^{-1} \text s^{-1}(\text{kg m s}^{-2} \text{m s}^{-1} \text A^{-2} \text m)^\alpha\ (\text m)^\beta $$ Equating powers of $\text{kg}$: 0 = 1 +$\alpha$, so $\alpha$ = –1 Equating powers of $\text m$: 0 = 3$\alpha$ + $\beta$ (the metre from $\text N$ cancels against the explicit $\text m^{-1}$), so $\beta$ = 3 Equating powers of $\text s$: 0 = –3 –3$\alpha$, so $\alpha$ = –1 Equating powers of $\text A$: 1 = –1 –2$\alpha$, so $\alpha$ = –1 We see that the current is proportional to the cube of the radius for a given rate of change of flux density. Therefore the currents in very small spheres will be very small indeed, and no doubt too small for the spheres to experience significant magnetic forces – which, incidentally, they would only do if the field were non-uniform.
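Exponent bookkeeping like this can be cross-checked by brute force over small integer powers (a sketch; the tuples hold base-unit exponents of kg, m, s, A, and are my encoding):

```python
# Base-unit exponents (kg, m, s, A) for each quantity
dBdt = (1, 0, -3, -1)    # tesla per second
rho  = (1, 3, -3, -2)    # ohm-metre
r    = (0, 1, 0, 0)      # metre
amp  = (0, 0, 0, 1)      # target: ampere

def combine(a, b):
    # units of dBdt * rho**a * r**b, as an exponent tuple
    return tuple(d + a * p + b * q for d, p, q in zip(dBdt, rho, r))

matches = [(a, b) for a in range(-4, 5) for b in range(-4, 5)
           if combine(a, b) == amp]
print(matches)   # the unique (alpha, beta) pair in this range
```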
{ "domain": "physics.stackexchange", "id": 83060, "tags": "electromagnetism, electromagnetic-induction, lenz-law" }
What would be the result of the collision of two down quarks?
Question: Even if we can't have single quarks in nature because of colour charge, what would be the result of the collision of two down quarks at high velocity (0.99c) and high energy, like those of the LHC's proton collisions? Answer: 99% of the speed of light generates a Lorentz factor of only $$ \gamma = \left[ 1 - (.99)^2 \right]^{-1/2} \approx 7 $$ which means that you have only about 14 times the mass of a down quark to make additional particles. The PDG puts the bare mass of the down quark in the neighborhood of 5 MeV, so $14 \times 5\,\mathrm{MeV} = 70\,\mathrm{MeV}$ isn't enough energy to create any pair except an electron-positron pair. So, here are some possible outcomes (taking into account that you must be doing baryon collisions, and just looking at cases where you have collisions between constituent down quarks):

- Elastic collision. Your baryons go in and come out with different momenta but otherwise unchanged.
- Electron-positron pair creation, but the baryons still come out unchanged.
- (If you have at least one nucleus rather than a bare nucleon in the input state) nuclear excitation.

In other words: nothing much. That is a result of your specifying a very low energy regime. Some things you don't have enough energy to do:

- Muon-antimuon creation.
- Meson creation (not even a $\pi^0$).
- Nucleon excitation (assuming a nucleon or nuclear beam).
- Most meson excitations (assuming a meson beam for at least one of the quarks).

BTW--Particle physicists rarely talk about speed in this kind of context, because most speeds approach $c$ (often very closely). Instead we generally talk about energy and/or momentum. The exception, when we do talk about speeds, is when we are using Cerenkov detectors.
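Plugging in the numbers from the answer (a quick check; masses in MeV):

```python
v = 0.99                        # speed as a fraction of c
gamma = (1 - v ** 2) ** -0.5
print(gamma)                    # ~7.09, the "about 7" above

m_down = 5.0                    # bare down-quark mass, MeV (PDG ballpark)
available = 2 * gamma * m_down  # ~14 down-quark masses of collision energy
print(available)                # ~70 MeV: enough for an e+e- pair (~1 MeV),
                                # not for mu+mu- (~211 MeV) or a pi0 (~135 MeV)
```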
{ "domain": "physics.stackexchange", "id": 8640, "tags": "particle-physics, collision, quarks" }
Why does the capacitance of a parallel plate capacitor increase on filling it with an insulating dielectric if the voltage is fixed?
Question: In the case that the electric field $\textbf{E}_0$ is confined to the space between the plates of an isolated and charged parallel-plate capacitor, an inserted linear dielectric that fills the space would reduce $\textbf{E}_0$ and as a consequence the potential difference $\Delta \text{V}_0$ between them, by a factor of $\frac{1}{\kappa}$, where $\kappa$ is the dielectric constant, because $|\textbf{E}| = \text{E}$ and $\Delta \text{V}$ are related linearly as $\Delta \text{V} = \text{E} \times \text{d}$. The charge on either plate is unaffected, simply because it has nowhere to go. Hence, because $\Delta \text{V}_0 \rightarrow \frac{1}{\kappa} \times \Delta \text{V}_0$, we have $\text{C}_0 = \frac{\text{Q}_0}{\Delta \text{V}_0} \rightarrow \kappa \times \text{C}_0$ Now if the voltage between the plates is held fixed with the use of a source like a battery, regardless of the dielectric constant, $\Delta \text{V}_0$ (and hence $\text{E}_0 = \frac{\Delta \text{V}_0}{\text {d}}$) are unchanged. The capacitance still increases by a factor of $\kappa$, and the explanation is that in this case, $\text{Q}_0 \rightarrow \kappa \times \text{Q}_0$, with the additional charge coming from the wires attached to either plate. What is the physical explanation behind this? Why does the charge have to increase on either plate? Answer: First consider an extreme case, the insertion of a conductor of thickness $e$ between the plates of a parallel capacitor with separation $d$ and a fixed potential difference of $V$ between them. Before the insertion the work done in moving unit charge between the plates is $E_{\rm free}\,d = \frac Vd \,d = V$, ie that is the definition of potential difference and also $E_{\rm free} = \dfrac {\sigma_{\rm free}}{\epsilon_0}$. What happens when the conductor is inserted? Surface charge $\sigma_{\rm induced} = \sigma' _{\rm free}$ is induced on the conductor such that the electric field inside the conductor is zero.
Now what is the work done moving unit charge between the two plates of the capacitor? The work done is $E'_{\rm free}\,(d-e) + 0\, e= E'_{\rm free}\,(d-e)$, but this must equal $V$, and since $d-e<d$ then the electric field due to the free charge density on the capacitor plates, $\sigma' _{\rm free}$, must increase, which in turn means that the charge on the capacitor plates must increase. As a check if the charges on the plates before and after the insertion are $Q_{\rm free}$ and $Q'_{\rm free}$ respectively then, $V=\dfrac{Q_{\rm free}}{A\epsilon_0}\cdot d = \dfrac{Q'_{\rm free}}{A\epsilon_0}\cdot (d-e) = \dfrac{Q_{\rm free}}{\left(\frac{A\epsilon_0}{d}\right)} = \dfrac{Q'_{\rm free}}{\left(\frac{A\epsilon_0}{(d-e)}\right)}$. You will then see the familiar equation $V= \dfrac QC$ applied twice, once with no conductor being inserted and the other with the conductor being inserted. With a dielectric present due to the induced charge (called polarisation) the electric field within the dielectric is reduced but unlike a conductor does not become zero but a similar analysis will show that the charge on the plates of the capacitor must increase to keep the potential difference between the plates constant. In this case for a well-behaved dielectric, $\sigma_{\rm induced} = \chi\epsilon_0 E_{\rm external}$ and $\kappa = 1+\chi$, where $\chi$ is the electric susceptibility, $\kappa$ the relative permittivity or dielectric constant, and $E_{\rm external}$ is the electric field outside the dielectric. Feynman Chapter 10 Dielectrics is a useful source for more information.
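A numeric illustration of the conductor-slab case above (the values are arbitrary; the parallel-plate formulas are the ones used in the answer):

```python
eps0 = 8.854e-12   # vacuum permittivity, F/m
A = 1e-2           # plate area, m^2
d = 1e-3           # plate separation, m
e = 4e-4           # thickness of the inserted conducting slab, m
V = 10.0           # fixed potential difference, V

Q_before = (eps0 * A / d) * V
Q_after  = (eps0 * A / (d - e)) * V   # same V dropped over a shorter field region

print(Q_after > Q_before)             # True: the battery must supply extra charge
print(round(Q_after / Q_before, 3))   # 1.667, i.e. the ratio d/(d - e)
```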
{ "domain": "physics.stackexchange", "id": 98259, "tags": "electrostatics, charge, voltage, capacitance, dielectric" }
What's Optimal About Six Legs According to Physical Laws?
Question: In many respects the insects can be regarded as the most successful class of animals in evolutionary terms. And one of the most common features of insects is that they (mostly) all have six legs. Not discounting other traits, is there something about six legs that has helped insects achieve this success? Can we use physical laws to analyze and determine an optimality of having six legs - perhaps such as stability? Answer: I can think of two possible reasons: first, you can have half your legs up in the air at one time (as in walking - two on one side and one on the other, then change) and still be perfectly stable (3 legs = most stable, like a tripod); and second, if a predator chews off a leg on either side, you still have two legs (so you can still walk). I think those arguments are borderline biomechanical, rather than physical... The first argument has some solid scientific backing - see for example http://web.neurobio.arizona.edu/gronenberg/nrsc581/powerpoint%20pdfs/cpg.pdf . It doesn't take a lot of brains to walk with six legs... In fact it can be done almost entirely with "local" neurons. That's a good thing when you don't have a lot of brains. Quoting from https://answers.yahoo.com/question/index?qid=20090418111020AA75mgR : Generalizing, insects walk with a metachronal gait and, with speed, a tripod gait - which involves a tripod stance - 2 legs on one side of the body and one on the other remain stationary while the other legs move forward, then the stationary legs walk as the others take a stance. In this way, walking involves maximum stability with a minimum of neural coordination. In fact, ganglia and other nerves and sensors located on each leg may contribute as much to the actual walking movement as the brain does. It's a very easy, stable and adaptable locomotory system which evolved from the basic arthropod body plan with 2 pairs of limbs on each body segment.
{ "domain": "physics.stackexchange", "id": 17660, "tags": "everyday-life, biophysics" }
Find longest path by number of edges, excluding cycles
Question: I need to analyse a directed graph (not a DAG) but I don't know the name of the algorithm I would need to use. The graph has many cycles. My desired behaviour is: given a graph source and graph sink, find the longest path by number of edges, excluding cycles. By graph source, I mean a vertex with one or more edges to other vertices and no incoming edges, and the opposite for sink. If there's better terminology, then please let me know about this. It's important that I'm able to determine what the path is, so I would need an algorithm that can produce a list of edges. By excluding cycles, this might entail not traversing an edge the process traversed previously. Do you recognise this algorithm and could you tell me the name, please? Thanks in advance Answer: The problem you are defining is called Longest Path (and occasionally Longest $s$-$t$-Path) and is NP-complete. That is, there is an algorithm for solving it, but you shouldn't keep your hopes up when it comes to the running time of the algorithm: it's unlikely to run in polynomial time. The trivial algorithm is to check every permutation of the vertices, in time $O(n! \cdot \text{poly}(n))$, but this becomes intractable extremely quickly. It is possible to bring this down to $O(2^n \cdot \text{poly}(n))$. You have three options as far as I can see it:

- limit the length of the path you need to some reasonable integer $k$ and solve with FPT techniques
- use an approximation algorithm
- limit the type of input graph to certain smaller and easier graph classes.

Note 1: As Yuval Filmus points out, the problem Longest Path is usually referring to the undirected version. However, the problem remains NP-complete also in its directed version by a reduction from Directed Hamiltonian Path (Between Two Vertices) (Garey & Johnson, 1979). Note 2: Yuval Filmus also pointed out that it is solvable in single-exponential time.
Note 3: The problem is solvable in linear time on DAGs, and if you allow repeating edges, the answer is always infinite when your graph has a cycle.
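For contrast with the hard general case, the DAG special case mentioned in Note 3 is a simple linear-time DP over a topological order; a sketch (names and structure are mine):

```python
from collections import defaultdict

def longest_path_dag(n, edges, s, t):
    """Longest s->t path by edge count in a DAG, as a vertex list, else None."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    order, stack = [], [u for u in range(n) if indeg[u] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # DP: best[u] = most edges on any s->u path; parents for path recovery
    NEG = float('-inf')
    best = [NEG] * n
    best[s] = 0
    parent = [None] * n
    for u in order:
        if best[u] == NEG:
            continue            # u unreachable from s
        for v in adj[u]:
            if best[u] + 1 > best[v]:
                best[v] = best[u] + 1
                parent[v] = u
    if best[t] == NEG:
        return None
    path, u = [], t
    while u is not None:
        path.append(u)
        u = parent[u]
    return path[::-1]

print(longest_path_dag(5, [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3), (3, 4)], 0, 4))
# [0, 1, 2, 3, 4]
```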
{ "domain": "cs.stackexchange", "id": 15709, "tags": "graphs" }
French Republican date conversion
Question: This program converts a date given from the French Republican Calendar to the corresponding date in our standard Gregorian Calendar. I'm looking for some improvements that I should start making in my code as well as some tips on how to write less lengthy code. For example, the RepubtoGregDate function is very long and I believe there is probably a more effective way of implementing it. I'm very new to Python and have no prior knowledge in programming, so it would be nice to have some suggestions on how to improve my code.

#Python Assignment - French Republican to Gregorian Date Convertor
#Coded by - -

import sys

def Main():
    Startup()
    while True:
        dayN, monthN, yearN, date = Date()
        StopProgram(date)
        ValidDate = DateVal(yearN, monthN, dayN)
        ValLoop(ValidDate)
        monthW = MonthNtoW(monthN)
        yearRN = YearNtoRN(yearN)
        RepubDate = FullRepubDate(dayN, monthW, yearRN)
        Leap = LeapCheck(yearN)
        GrDate = RepubtoGregDate(yearN, monthN, dayN, Leap)

def Startup():
    print("Republican to Gregorian Date Converter\n")
    print("This program will convert a date from the French Republican Calendar to the equivalent date in the Gregorian Calendar.")
    print("To stop calculating dates and close the program, simply input: 0 0 0.\n")

# Asks for a Republican Date and returns day, month and year.
def Date():
    print("Input a Republican Date in the format: dd mm yy.")
    date = input("Please enter a Republican Date: ")
    print("")
    date = date.split()
    dayN = int(date[0])
    monthN = int(date[1])
    yearN = int(date[2])
    return(dayN, monthN, yearN, date)

# Ends the program if the sentinal is matched.
def StopProgram(date):
    if date == ['0', '0', '0']:
        sys.exit(print("Program Closed."))

# Validates the date, making sure it fits the Republican format and that
# the date is between the calendars real world use: 1793 - 1806.
def DateVal(yearN, monthN, dayN): if yearN >= 1 and yearN <= 14: ValidYear = True else: ValidYear = False if monthN >= 1 and monthN <= 12: ValidMonth = True else: ValidMonth = False if dayN >= 1 and dayN <= 30: ValidDay = True else: ValidDay = False if yearN < 2 and monthN < 1 and dayN < 14: InvalidDate = True elif yearN > 14 and monthN > 4 and dayN > 11: InvalidDate = True else: InvalidDate = False if ValidYear == True and ValidMonth == True and ValidDay == True and InvalidDate == False: ValidDate = True else: ValidDate = False return(ValidDate) # Repeats the Date() Function if the Date is invalid. def ValLoop(ValidDate): while ValidDate == False: print("The entered date was not a valid gregorian date, please try again.") dayN, monthN, yearN = Date() # Finds the month number's corresponding month word. def MonthNtoW(monthN): MonthWDict = {1 : "Vendémiaire", 2 : "Brumaire", 3 : "Frimaire", 4 : "Nivôse", 5 : "Pluviôse", 6 : "Ventôse", 7 : "Germinal", 8 : "Floréal", 9 : "Prairial", 10 : "Messidor", 11 : "Thermidor", 12 : "Fructidor"} monthW = MonthWDict[monthN] return monthW # Finds the year number's corresponding year in roman numerals. def YearNtoRN(yearN): YearRNDict = {1 : "I", 2 : "II", 3 : "III", 4 : "IV", 5 : "V", 6 : "VI", 7 : "VII", 8 : "VIII", 9 : "IX", 10 : "X", 11 : "XI", 12 : "XII", 13 : "XIII", 14 : "XIV"} yearRN = YearRNDict[yearN] return(yearRN) # Outputs the Republican date in its written format. def FullRepubDate(dayN, monthW, yearRN): dayN = str(dayN) RepubDate = dayN + " " + monthW + " an " + yearRN print("The French Republican Date is: ", RepubDate) dayN = int(dayN) return(RepubDate) # Checks if the year is a leap year. def LeapCheck(yearN): if yearN == 3 or yearN == 7 or yearN == 11: Leap = True else: Leap = False return(Leap) # Converts the Republican date to a Gregorian date. # It does this by converting the date to days in the year and then adds # that to the day the Republican Calendar started in The Gregorian calendar. 
def RepubtoGregDate(yearN, monthN, dayN, Leap): monthL = 30 StartDate = 266 StartYear = 1792 YDays = 365 LYDays = 366 Year = StartYear + (yearN - 1) Day = ((monthN - 1) * monthL) + dayN if Leap == True and Day >= 60: Day += 1 PH = StartDate + Day if Leap == True: if PH > LYDays: Year += 1 Day = PH - LYDays else: Day = PH if Leap == False: if PH > YDays: Year += 1 Day = PH - YDays else: Day = PH if Leap == False: if Day >= 1 and Day <= 31: GregMonth = "January" if Day >= 32 and Day <= 59: Day -= 31 GregMonth = "February" if Day >= 60 and Day <= 90: Day -= 59 GregMonth = "March" if Day >= 91 and Day <= 120: Day -= 90 GregMonth = "April" if Day >= 121 and Day <= 151: Day -= 120 GregMonth = "May" if Day >= 152 and Day <= 181: Day -= 151 GregMonth = "June" if Day >= 182 and Day <= 212: Day -= 181 GregMonth = "July" if Day >= 213 and Day <= 243: Day -= 212 GregMonth = "August" if Day >= 244 and Day <= 273: Day -= 243 GregMonth = "September" if Day >= 274 and Day <= 304: Day -= 273 GregMonth = "October" if Day >= 305 and Day <= 334: Day -= 304 GregMonth = "November" if Day >= 335 and Day <= 365: Day -= 334 GregMonth = "December" if Leap == True: if Day >= 1 and Day <= 31: GregMonth = "January" if Day >= 32 and Day <= 60: Day -= 32 GregMonth = "February" if Day >= 61 and Day <= 91: Day -= 60 GregMonth = "March" if Day >= 92 and Day <= 121: Day -= 91 GregMonth = "April" if Day >= 122 and Day <= 152: Day -= 121 GregMonth = "May" if Day >= 153 and Day <= 182: Day -= 152 GregMonth = "June" if Day >= 183 and Day <= 213: Day -= 182 GregMonth = "July" if Day >= 214 and Day <= 244: Day -= 213 GregMonth = "August" if Day >= 245 and Day <= 274: Day -= 244 GregMonth = "September" if Day >= 275 and Day <= 305: Day -= 274 GregMonth = "October" if Day >= 306 and Day <= 335: Day -= 305 GregMonth = "November" if Day >= 336 and Day <= 366: Day -= 335 GregMonth = "December" Day = str(Day) Year = str (Year) GregDate = Day + " " + GregMonth + " " + Year Day = int(Day) Year = int(Year) 
print("The Gregorian Date is: ", GregDate, "\n") return(GregDate) #Main Main() Answer: Running your code through PEP8 will produce alot of errors, you can see it for yourself. Just to name a few PEP8 improvements: Use indentation of 4 spaces, it looks cleaner Functions should be named with lowerscore letters. Example: stop_program You use a lot of if var == False: which could be rewritten to if not var: Same goes for if var == True: which is just if var: You have some calculations like if val > 0 and val < 12: which can be rewritten to if 0 < val < 12: When using comments don't indent them! - #comment should be # comment Other improvements: When possible use libraries, be sure to check out datetime library
{ "domain": "codereview.stackexchange", "id": 27435, "tags": "python, beginner, python-3.x, datetime, unit-conversion" }
Progressive and regressive wave combination to get the most general form of one dimensional standing wave
Question: The most general form of a (one-dimensional) standing wave is $$(A \mathrm{cos}(\omega t)+B \mathrm{sin}(\omega t))(C \mathrm{cos}(k x)+D \mathrm{sin}(k x))=G\mathrm{cos}(\omega t+\phi_1)\mathrm{cos}(kx+\phi_2)\tag{1}$$ Which can be written with exponential notation as $$G\mathrm{cos}(kx+\phi_2)e^{i(\omega t +\phi_1)}\tag{2}$$ As any standing wave, $(2)$ should be the superposition of a progressive and a regressive wave. Nevertheless, in textbooks it is usually shown how to get a standing wave like $$F\mathrm{sin}(\omega t-k x)+F \mathrm{sin}(\omega t+k x)=2F\mathrm{sin}(\omega t)\mathrm{cos}(kx)\tag{3}$$ Or in exponential notation $$F e^{i(\omega t-k x)}+F e^{i(\omega t+k x)}=2F\mathrm{cos}(kx)e^{i\omega t }\tag{4}$$ Which is quite particular, as here $\phi_1=\phi_2=0$. So what are the progressive and regressive waves (in exponential form) to combine as in $(4)$ but to get an expression like the one in $(2)$ (i.e. where $\phi_1$ and $\phi_2$ are not necessarily zero)? Answer: First note that all we have to do to get $\text{cos} (\omega t + \phi_1)\ \text{cos} (kx+\phi_2)$ from $\text{cos}\ \omega t\ \text{cos}\ kx$ is to measure x from a different origin and t from a different zero of time. But if we superimpose progressive waves with different phase constants we get $$\text{cos} (\omega t -kx+ \phi_1)\ + \text{cos} (\omega t+kx+\phi_2)= 2\text{cos}\ (\omega t+\frac{\phi_{2}+\phi_{1}}{2})\ \text{cos}\ (kx+\frac{\phi_{2}-\phi_{1}}{2}).$$ So there are your phase constants: $(\frac{\phi_{2}+\phi_{1}}{2})$ and $(\frac{\phi_{2}-\phi_{1}}{2})$.
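For completeness, the last equality in the answer is just the sum-to-product identity written out. With $$\cos A+\cos B=2\cos\Big(\frac{A+B}{2}\Big)\cos\Big(\frac{A-B}{2}\Big),\qquad A=\omega t-kx+\phi_1,\quad B=\omega t+kx+\phi_2,$$ one finds $$\frac{A+B}{2}=\omega t+\frac{\phi_1+\phi_2}{2},\qquad \frac{A-B}{2}=-\Big(kx+\frac{\phi_2-\phi_1}{2}\Big),$$ and since the cosine is even, the second factor equals $\cos\big(kx+\frac{\phi_2-\phi_1}{2}\big)$, reproducing the answer's result.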
{ "domain": "physics.stackexchange", "id": 43157, "tags": "waves, wavefunction, interference, resonance, superposition" }
Mirror Telescope
Question: When I'm looking at the inside of a mirror telescope: I'm wondering why the secondary mirror does not block half of the incoming light? Is it "transparent" in this direction? Answer: In a Newtonian reflector, as pictured, the secondary mirror does block some of the light, but maybe less than you think. Even if the secondary were half the diameter of the primary, it would only block 1/4 of the light ($\ (1/2)^2$). In a more typical case the secondary would be somewhat smaller - perhaps a quarter of the size of the primary (or less). Hence a $1/16th$ or less is blocked, which is not too bad.
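The area argument is easy to make concrete (a quick sketch; the diameters are illustrative, not from any real telescope):

```python
def blocked_fraction(primary_diameter, secondary_diameter):
    """Fraction of incoming light blocked by the secondary mirror.

    Both mirrors are treated as circles facing the incoming light, so the
    blocked fraction is the ratio of their areas, which reduces to the
    ratio of the diameters squared.
    """
    return (secondary_diameter / primary_diameter) ** 2

# A secondary half the primary's diameter blocks a quarter of the light...
print(blocked_fraction(200, 100))  # 0.25
# ...while a more typical quarter-size secondary blocks only 1/16.
print(blocked_fraction(200, 50))   # 0.0625
```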
{ "domain": "astronomy.stackexchange", "id": 2000, "tags": "telescope" }
Flipping a coin with same initial conditions
Question: Today, in my physics class my teacher was talking about how we can never predict the outcome of a coin flip. So I thought: Will the outcome of a coin flip be the same if we do not change the initial conditions (such as launch angle, force, position where the force is applied, etc.)? Intuitively, I feel that the answer would be yes. But is there something related to quantum mechanics that may produce a different answer?
{ "domain": "physics.stackexchange", "id": 92784, "tags": "quantum-mechanics, probability, determinism, randomness" }
Do dependent type checkers need to store the lambda parameter type in their core language?
Question: "Core language" refers to the exported well-typed terms that can be evaluated (or reduced). In the core language of MiniAgda, a dependently-typed language, the parameter type of a lambda is not stored anywhere. The same holds in Mini-TT and Agda. However, Idris does store the lambda parameter type in its core language. I wonder whether we need to store the parameter type (or, under what conditions we do or don't need to store it)? According to the surface syntax of all these languages, they don't have lambdas with their parameters explicitly annotated. For Idris, here's a link showing that Idris does not have lambdas with type annotations. Answer: In general, type inference for dependent types is undecidable. This means that when checking a function, we need some way to know what type its argument has. In the case of Idris, they simply annotate lambdas with parameter types. This is very common in type theory, since it makes your type system syntax directed in a very simple way. When you do this, you can usually view the typing judgment as having the type as an output parameter (i.e. given the input term, you can determine its type). I don't know what MiniAgda does in particular, but the other main way is to treat the type as an input to the checking judgment. So in this case, you don't need to store the type of the function in the AST, but when typechecking, you will use the context in some way to get the type of the function (i.e. from an annotation). Generally this is done using bidirectional typechecking. This paper and this post both give excellent overviews of how this can be done in practice.
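As a toy illustration of the bidirectional approach (a sketch for the simply typed lambda calculus, not the actual rules of MiniAgda or Idris): lambdas carry no annotation and are only *checked* against a known arrow type, while variables, applications, and annotated terms *infer*:

```python
# Types: "b" (a base type) or ("->", dom, cod). Terms:
# ("var", name), ("lam", name, body), ("app", f, x), ("ann", term, type).

def infer(ctx, term):
    tag = term[0]
    if tag == "var":
        return ctx[term[1]]
    if tag == "ann":                 # an annotation switches to checking mode
        check(ctx, term[1], term[2])
        return term[2]
    if tag == "app":
        ft = infer(ctx, term[1])     # the function's type must be inferable
        assert ft[0] == "->", "applying a non-function"
        check(ctx, term[2], ft[1])   # the argument is checked against the domain
        return ft[2]
    raise TypeError("cannot infer an unannotated lambda")

def check(ctx, term, ty):
    if term[0] == "lam":             # un-annotated lambda: checked, never inferred
        assert ty[0] == "->", "lambda checked against a non-arrow type"
        check({**ctx, term[1]: ty[1]}, term[2], ty[2])
    else:                            # fall back to inference and compare
        assert infer(ctx, term) == ty, "type mismatch"

# The identity function needs no parameter annotation once its overall
# type is supplied from the outside:
ident = ("lam", "x", ("var", "x"))
check({}, ident, ("->", "b", "b"))                   # succeeds
print(infer({}, ("ann", ident, ("->", "b", "b"))))   # ('->', 'b', 'b')
```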
{ "domain": "cs.stackexchange", "id": 14081, "tags": "dependent-types" }
Time-ordered exponential operator generated by two commuting Hamiltonians
Question: Define a time-dependent Hamiltonian $$H(t) = H_1(t) + H_2(t),\tag{1}$$ where $$[H_1(t), H_2(t)] = 0 ~ \forall t \in [0,T].\tag{2}$$ Is it true that the unitary operator generated by $H(t)$ is a product of two unitaries generated by $H_1(t)$ and $H_2(t)$, i.e. \begin{equation} U(T) = \mathcal{T}\exp\Big(-i \int_0^T dt H(t) \Big) = \mathcal{T}\exp\Big(-i \int_0^T dt H_1(t) \Big)\mathcal{T}\exp\Big(-i \int_0^T dt H_2(t) \Big) = U_1(T) U_2(T)~?\tag{3} \end{equation} (Essentially, I'm curious whether the BCH formula works in time-ordered exponentials.) Answer: Surely, your instructor must have drilled you to illustrate such expressions for a finite number of time points, first; here, take them to be just two. Incorporate the -i into the Hamiltonian pieces, and use lower case for the former time point and upper case for the latter point. So, define $$ -iH_1(t_1)\equiv a, \qquad -iH_2(t_1)\equiv b,\qquad -iH_1(t_2)\equiv A, \qquad -iH_2(t_2)\equiv B,\\ [a,b]=[A,B]=0, $$ but all other commutators need not vanish, in general, so, here, they are taken to be non-vanishing and uncorrelated. Then, $$ -i \int_0^T dt H_1(t) = a+A, \leadsto \qquad U_1(T)=e^A e^a,\\ -i \int_0^T dt H_2(t) =b+B, \leadsto \qquad U_2(T)=e^B e^b ,\\ -i \int_0^T dt H(t) = a+b+A+B, \leadsto \qquad U(T)=e^{A+B} e^{a+b}=e^{A} e^{B} e^{a} e^b, $$ which manifestly violates your wrong conjecture, in general.
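The answer's point can be checked numerically: for matrices, exp(A+B) = exp(A)exp(B) when [A,B] = 0 and generally fails otherwise. A self-contained sketch with 2x2 matrices and a plain Taylor-series exponential (all values illustrative):

```python
def mul(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):  # 2x2 matrix sum
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def expm(X, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small entries)."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mul(power, X)
        fact *= n
        result = add(result, [[power[i][j] / fact for j in range(2)]
                              for i in range(2)])
    return result

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

A = [[0.0, 0.3], [0.0, 0.0]]   # nilpotent "raising" matrix
B = [[0.0, 0.0], [0.4, 0.0]]   # does NOT commute with A
D = [[0.2, 0.0], [0.0, -0.1]]  # diagonal matrices commute with each other
E = [[0.1, 0.0], [0.0, 0.3]]
print(close(expm(add(A, B)), mul(expm(A), expm(B))))  # False: [A,B] != 0
print(close(expm(add(D, E)), mul(expm(D), expm(E))))  # True: [D,E] = 0
```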
{ "domain": "physics.stackexchange", "id": 96864, "tags": "quantum-mechanics, operators, hamiltonian, commutator, time-evolution" }
Can Temperature Data be Predicted Using Adaptive Filter (Such As LMS) Algorithm?
Question: I am working on a project which requires me to implement an adaptive filter as a predictor. I have just started on adaptive filters and I intend to use the least mean square algorithm for weight adjustment. How can I predict future values from this system? Any help would be beneficial for me. Thanks. Answer: Yes, you can predict future temperatures, based on past temperatures, using adaptive filtering as well. The optimal linear estimation of a WSS random process from its past values, which is known as linear prediction, is given by a Wiener filter structure where the desired response to be estimated is the current sample of the input (the current temperature in your case) and the filter input is the $N$ past samples of the input (assuming one-step forward prediction of order $N$). The LMS adaptive filtering algorithm simply approaches these optimal Wiener predictor coefficients for WSS signals, and for non-WSS signals it tries to remain optimal by tracking them. This prediction mechanism does not depend on the physical origin of the signals but on their statistical characterisation. As long as your temperature data possesses a reasonable degree of correlation within it, the filter will do its best to predict it.
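As a hedged sketch of the idea (not code from the question): a normalized-LMS one-step predictor in plain Python. The normalized variant is used here only so the step size stays stable regardless of signal scale; the order and step size are arbitrary choices:

```python
import math

def nlms_predict(samples, order=4, mu=0.5, eps=1e-8):
    """One-step-ahead prediction with normalized LMS.

    At each step the last `order` samples predict the next one, and the
    weights are nudged along the (power-normalized) negative gradient of
    the squared prediction error.
    """
    weights = [0.0] * order
    preds = []
    for n in range(order, len(samples)):
        window = samples[n - order:n]               # past N samples
        y = sum(w * x for w, x in zip(weights, window))
        preds.append(y)
        e = samples[n] - y                          # prediction error
        norm = sum(x * x for x in window) + eps     # tap-input power
        weights = [w + mu * e * x / norm for w, x in zip(weights, window)]
    return preds

# On a strongly correlated "temperature-like" signal the error shrinks:
data = [20 + 5 * math.sin(2 * math.pi * n / 50) for n in range(2000)]
preds = nlms_predict(data)
late_errors = [abs(d - p) for d, p in zip(data[1504:], preds[1500:])]
print(sum(late_errors) / len(late_errors))  # small compared to the 5-degree swing
```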
{ "domain": "dsp.stackexchange", "id": 7170, "tags": "discrete-signals, adaptive-filters, linear-prediction, lms" }
Wrapping postback functions to call custom code at execution
Question: Here is my solution for this stackoverflow question. It is designed to return an onbeforeunload message when the user leaves the page (excluding postbacks). It does this by logging a timestamp each time a .NET control causes a postback (either full or partial using update panels). When this happens, if allowedWaitTime has elapsed, amessage will be returned. Are there any best or better practices missing from this code and does anyone have an opinion on my style (albeit a small sample)? (function ($) { if (typeof $ !== 'function') { throw new Error('jQuery required'); } /* The time in milliseconds to allow between a function call and showing the onbeforeunload message. */ var allowedWaitTime = 100, timeStamp = new Date().getTime(), // Each function to override baseFuncs = { __doPostBack: this.__doPostBack, WebForm_DoPostBackWithOptions: this.WebForm_DoPostBackWithOptions }; // Set timeStamp when each baseFunc is called for (var baseFunc in baseFuncs) { (function (func) { this[func] = function () { var baseFunc = baseFuncs[func]; timeStamp = new Date().getTime(); if (typeof baseFunc === 'function') { baseFunc.apply(arguments.callee, arguments); } } })(baseFunc); } /* Form submit buttons don't call __doPostBack so we'll set timeStamp manually on click. */ $('input[type="submit"]').click(function () { timeStamp = new Date().getTime(); }); $(this).on('beforeunload', function (e) { // Only return string if allowedWaitTime has elapsed if (e.timeStamp - timeStamp > allowedWaitTime) { return 'message'; } }); }).call(window, jQuery); Answer: Interesting question, I think the biggest thing is that callee is going the way of the Dodo bird. JsHint does not like your code because you are using for( .. in .. ) without filtering properties, which is fine since you create the object with Object Notation, and I assume you did not modify the Object prototype. You are also creating functions/closures in a loop. Are you sure you need those closures there? 
Otherwise it's well commented and easy to follow; something that I would not mind using in one of my projects.
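The pattern being reviewed — wrap each function so a timestamp is recorded before delegating to the original — is language-agnostic. A hedged Python transliteration of that wrap-and-delegate idea (names are illustrative, not part of the reviewed ASP.NET code):

```python
import time

last_call = {"ts": 0.0}   # shared state, like the closure's timeStamp variable

def record_timestamp(func):
    """Wrap `func` so every call logs a timestamp, then delegates."""
    def wrapper(*args, **kwargs):
        last_call["ts"] = time.time()   # side effect: remember when it ran
        return func(*args, **kwargs)    # then call the original unchanged
    return wrapper

@record_timestamp
def do_postback(target):
    return f"posting back to {target}"

print(do_postback("form1"))
print(last_call["ts"] > 0)
```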
{ "domain": "codereview.stackexchange", "id": 9129, "tags": "javascript, jquery, asp.net" }
Why does splitting hot tea from one glass into two glasses make it cool faster?
Question: For a while now I've noticed that if I take a cup of hot tea and pour it into two cups and leave it, then both cups will cool faster than the single cup. How is it that nature cools two cups of tea in parallel more quickly? It's still the same amount of tea but it gets cooled quicker if I have two cups. Answer: The tea cools mostly by evaporation - when you pour it into two cups you will have twice the surface area. During evaporation, the fastest (hottest) water molecules escape the liquid, leaving on average a cooler liquid behind (when the richest man leaves the room, the average wealth in the room drops). The "evaporation cools down tea" concept is well known to India's chai wallahs - see for example this video. There are more spectacular examples but I could not locate one right now. There is a secondary effect of heat capacity: when you pour tea into a cold cup, some of the heat in the tea is used to warm up the cup. Two cups to warm up = more heat extracted from the tea. But that is a one-time effect. The evaporation keeps going. One other reason why the chai wallah trick is so effective (and why blowing on your tea cools it more quickly): as water evaporates, it increases the partial vapor pressure right next to the liquid. If that vapor is not removed, the result is that evaporation (and cooling) slows down. The pouring trick ensures the vapor can escape easily - the liquid is always surrounded by fresh (somewhat dry) air. Note that even if you do this in highly humid air (relative humidity 95%), since the tea is hotter than the air it heats the local air, which allows more vapor to go into it. But the rate of cooling will be greatest when the air is driest. Within limits, that is more important than how cold the air is.
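A toy model of the surface-area effect (Newton-style cooling with the rate proportional to exposed area; all numbers are illustrative and evaporation is simply lumped into the constant):

```python
def cool(temp, ambient, area, minutes, k=0.02):
    """Step a Newton-style cooling law: loss per minute scales with area."""
    for _ in range(minutes):
        temp -= k * area * (temp - ambient)
    return temp

# Same tea, same ambient; splitting into two cups roughly doubles the area:
one_cup  = cool(temp=90.0, ambient=20.0, area=1.0, minutes=30)
two_cups = cool(temp=90.0, ambient=20.0, area=2.0, minutes=30)
print(one_cup, two_cups)   # the split tea ends up noticeably cooler
```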
{ "domain": "physics.stackexchange", "id": 23681, "tags": "cooling" }
PHP function to create a Hex dump
Question: I was needing to provide the hex-dump of a code, but I needed to create my own. And, for fun, I decided to do it. function hex_dump( $value ) { $start_time = microtime(true); switch( gettype( $value ) ) { case 'string': $lines = array_map( function( $line ){ return array_map( function( $char ){ return str_pad( dechex( ord( $char ) ), 2, 0, STR_PAD_LEFT ); }, str_split( $line ) ); }, str_split( $value, 16 ) ); break; case 'double': case 'integer': $lines = array( array_map( function( $digits ){ return str_pad( $digits, 2, 0, STR_PAD_LEFT ); }, str_split( dechex( $value ), 2 ) ) ); break; case 'array': $lines = array_map( function( $chunk ){ return array_map( function( $item ){ switch( gettype( $item ) ) { case 'double': case 'integer': return str_pad( dechex( $item & 255 ), 2, 0, STR_PAD_LEFT ); case 'string': return str_pad( dechex( ord( $item ) ), 2, 0, STR_PAD_LEFT ); default: return '--'; } }, $chunk ); }, array_chunk( $value, 16, false ) ); break; default: trigger_error( 'Invalid value type passed', E_USER_WARNING ); return false; } $num_length = strlen( dechex( $line_count = count( $lines ) ) ) + 1; $header = str_repeat( ' ', $num_length = $num_length + ( $num_length % 2 ) ). ' |'. implode( '|', array_map( function( $number ){ return str_pad( strtoupper( dechex( $number ) ), 2, 0, STR_PAD_LEFT ); }, range( 0, 15 ) ) ). '| TEXT '; echo $header, PHP_EOL; $separator = str_repeat( '-', strlen( $header) ); foreach( $lines as $current_line => &$line ) { $line_lenth = count( $line ); echo $separator, PHP_EOL, str_pad( strtoupper( dechex( $current_line ) ), $num_length - 1, 0, STR_PAD_LEFT ), '0 |', strtoupper( implode( '|', $line_lenth < 16 ?array_pad( array_merge( $line, array_fill(0, 16 - $line_lenth, ' ') ), 16, null ) :$line ) ), '|', implode( '', array_map( function( $value ){ if( $value == '--' ) { return "\xBF"; } else { $value = hexdec( $value ); return $value < 32 || $value > 126 ? '.' 
: chr( $value ); } }, $line ) ), PHP_EOL; } $stats = array( 'lines' => $line_count, 'bytes' => $line_count ? ( $line_count * 16 ) - ( 16 - count( $lines[ $line_count - 1 ] ) ) : 0, 'time' => microtime(true) - $start_time ); echo str_repeat( '=', strlen( $header) ), PHP_EOL, str_pad( 'Lines: ' . $stats['lines'], 15, ' '), '| ', str_pad( 'Bytes: ' . $stats['bytes'], 16, ' '), '| Time: ', $stats['time'], 'ms', PHP_EOL, PHP_EOL; return $stats; } As you can see, it is a total and complete mess, even though it is logically splitted. Since I use a lot of chained functions, I've tried to avoid to create long lines (100+ characters). But in the process, I made this mess! It is really easy to use. Just pass a string or a number or an array of bytes, then it will make an ascii table with the dump. For example, to dump a number: hex_dump(12345); And it outputs something like this: |00|01|02|03|04|05|06|07|08|09|0A|0B|0C|0D|0E|0F| TEXT -------------------------------------------------------------------- 00 |30|39| | | | | | | | | | | | | | |09 ==================================================================== Lines: 1 | Bytes: 2 | Time: 5.2928924560547E-5ms To dump a string: hex_dump('A very cool string that spans across multiple lines!!!'); Which outputs: |00|01|02|03|04|05|06|07|08|09|0A|0B|0C|0D|0E|0F| TEXT -------------------------------------------------------------------- 00 |41|20|76|65|72|79|20|63|6F|6F|6C|20|73|74|72|69|A very cool stri -------------------------------------------------------------------- 10 |6E|67|0D|0A|74|68|61|74|20|73|70|61|6E|73|0D|0A|ng..that spans.. -------------------------------------------------------------------- 20 |61|63|72|6F|73|73|0D|0A|6D|75|6C|74|69|70|6C|65|across..multiple -------------------------------------------------------------------- 30 |20|6C|69|6E|65|73|21|21|21| | | | | | | | lines!!! 
==================================================================== Lines: 4 | Bytes: 57 | Time: 0.00011205673217773ms * The newlineas are represented as \x0D\x0A, which is \r\n (Windows newlines). And you can even mix both, in an array of bytes: hex_dump( array( 123, 's', 'v <-- only that will be dumped', 0 ) ); Which will produce: |00|01|02|03|04|05|06|07|08|09|0A|0B|0C|0D|0E|0F| TEXT -------------------------------------------------------------------- 00 |7B|73|76|00| | | | | | | | | | | | |{sv. ==================================================================== Lines: 1 | Bytes: 4 | Time: 5.9843063354492E-5ms Considering the huge mess, how can I improve the readability of the code? Also, is this code DRY enough? Answer: Converting to hex and outputting the hex dump are two separate concerns. You should have another function (named to_hex or something similar) that handles the actual converting. You might even want a third one for formatting if you want to truly separate concerns. Once you have this to_hex function, the array case should call to_hex recursively with each element rather than repeating logic (you asked about DRY -- your current approach is very unDRY with regards to this). As a bonus, it will then automatically support nested arrays. If you care (a lot) about performance, your string case is very non-optimal. str_split creates an array each time it's called which means that you're copying data for no real reason. If you're interested in performance, you should loop over the string in place (your array case has a similar flaw). Really I doubt performance matters this much, but it seemed worth mentioning given the timing code you have :). If unsupported types cause a false return, I would expect the array case to act similarly if it comes across an element of unhandled type. Likewise, I would expect the array case to actually handle integers, not just the first byte of integers. Don't be afraid to use intermediate variables. 
Some of your lines are just straight up beastly, especially the output parts. I also find it clearer to have multiple echo calls rather than stringing together super long echos: echo ...; echo ...; echo ...; vs echo ..., ..., ...; Functions should very rarely actually output. Instead, functions should return their values and the caller should output the return if desired. Imagine if you wanted to write one of these hex dumps to a file. Currently you'd have to do it with some nasty output buffering or stdout redirection (which then gets complicated if you want to have certain things actually echo out). It's a bit weird to think about it like this at first, but generating data and outputting data are two very different concerns and should thus not typically be handled by the same function. There are of course times when it's necessary or desirable for performance or usability reasons to directly output, but I don't think this is one of them.
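The reviewer's "separate converting from formatting" suggestion, sketched in Python rather than PHP (function names are hypothetical; non-negative integers only in this sketch): a pure to_hex() that only converts, recursing into lists instead of duplicating logic, with presentation kept in its own function.

```python
def to_hex(value):
    """Convert a value to a flat list of two-digit hex byte strings."""
    if isinstance(value, int):   # non-negative ints only in this sketch
        nbytes = (value.bit_length() + 7) // 8 or 1
        return [f"{b:02x}" for b in value.to_bytes(nbytes, "big")]
    if isinstance(value, str):
        return [f"{ord(c):02x}" for c in value]
    if isinstance(value, list):  # recurse instead of repeating per-type logic
        return [h for item in value for h in to_hex(item)]
    raise TypeError(f"unsupported type: {type(value).__name__}")

def format_dump(hex_bytes, width=16):
    """Separate concern: lay the byte strings out in rows."""
    return "\n".join(" ".join(hex_bytes[i:i + width])
                     for i in range(0, len(hex_bytes), width))

print(format_dump(to_hex([123, "sv", 0])))  # 7b 73 76 00
```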
{ "domain": "codereview.stackexchange", "id": 13861, "tags": "php" }
Why is the signal from a small diaphragm condenser microphone not a symmetrical shape?
Question: I've just purchased a pair of small diaphragm condenser microphones (never used one of these before) and was surprised by the signal shape of a sample recording (the recording is of me speaking some 20 cm from the microphone, with neither a windshield nor a pop filter): I can't figure why the shape is not only vertically asymmetrical, but in some points actually shows very pronounced shifts from the "zero" position, both positive and negative. I've used dynamic and large diaphragm condenser microphones before and never noticed this kind of behaviour. As I remember, wave shapes always seemed rather vertically symmetrical, and usually zero "centered". If there was some DC offset, it could easily be removed with a DC offset removal tool in an audio application (which does not happen with the recording I've made with these small condenser microphones). The audio seems ok to my ear, but I'm puzzled by the signal shape and I wonder if the mics may be defective (they both produce similar results) or if I'm doing something wrong (though I could not find any settings either in the audio interface or in the DAW application I'm using that appears to be related with this). Also, I'm worried that this kind of signal may have problems along the processing chain when applying filters, compression, etc. In summary, is this type of waveshape normal for a small diaphragm condenser microphone, and what is the explanation for the fluctuation of the oscillating "center"? Answer: That looks perfectly normal to me. These are just local variations of the air pressure: breathing, draft, HVAC, someone opening/closing the door or window. It's in essence very low frequency sound. Sound is inherently a highpass signal as it's defined as "variation around the steady state air pressure". In practice the air pressure is never really steady-state, it always varies a bit. 
So you have to pick a frequency above which you consider the pressure variations "sound" rather than "fluctuations in DC pressure". All microphones have a built-in acoustic high-pass filter. Condensers and electrets have a "barometric vent", so the inside pressure can equalize to the outside pressure. In addition the microphones also have a polarization voltage supply and/or pre-amp. There is typically a coupling capacitor in there that also forms an electrical high pass. That's tuned higher, maybe in the single-Hz range. Most data acquisition systems have a DC blocking filter, which is also a highpass. So the amount of "baseline fluctuation" you see in the recording will be a function of the highpass filters the signal went through before you see it. If you see a difference between microphone types, then it's probably due to the fact that they are optimized for different purposes. Large diaphragm condensers are mostly recording microphones. Their internal highpass is tuned high to keep rumble and wind noise out of the recording. Small condensers are often measurement microphones that are tuned lower so the engineer can capture as much "physical reality" as possible and apply post-processing as needed.
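The "DC blocking filter" mentioned above can be as small as a one-pole, one-zero recursion, y[n] = x[n] - x[n-1] + R*y[n-1]; R close to 1 puts the highpass corner very low. A sketch with illustrative values:

```python
import math

def dc_block(x, R=0.995):
    """One-pole/one-zero DC blocker: removes offset, keeps audio-band content."""
    y, prev_x, prev_y = [], 0.0, 0.0
    for sample in x:
        out = sample - prev_x + R * prev_y
        y.append(out)
        prev_x, prev_y = sample, out
    return y

# A sine riding on a big DC offset keeps its wiggle but loses the offset:
signal = [5.0 + math.sin(2 * math.pi * 0.05 * n) for n in range(4000)]
filtered = dc_block(signal)
tail = filtered[3000:]                 # skip the settling transient
mean_tail = sum(tail) / len(tail)
print(mean_tail)                       # near 0, not near the original 5.0
```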
{ "domain": "dsp.stackexchange", "id": 11550, "tags": "audio, audio-processing" }
Python code to find greatest common divisor of multiple numbers
Question: Hmm, I know this has been implemented countless times, and the Python 3.9.5 math standard library has a built-in gcd() method that does exactly this, but this is how I do it. I think completing simple programming challenges in new ways will let me gain experience that will help me find ingenious ways to overcome unprecedented practical programming challenges, so bear with me. This implementation uses the prime factorization method; it has three functions: factors(), gcd() and main(). The first function returns a dictionary object: the keys of the dictionary are the base prime factors of the inputted number, the values are the powers (how many times the factor should multiply by itself) of the keys, and the dictionaries are created with one key {'1': 1}. The second function accepts two numbers, uses each number as input to the first function, then compares the resultant dictionaries, removes keys of the first dictionary not contained in the second dictionary, and reduces the values of keys of the first dictionary to their respective values in the second dictionary if their values in the first dictionary are greater than in the second dictionary. Then the second function gets the product of all keys ^ values of the first dictionary. The third function applies gcd() recursively to the list of numbers if there are more than two numbers.
This is the code, it is fully functional, and works properly if the inputs are valid: import math import sys def factors(n): factors = {'1': 1} f = 2 while f <= int(math.sqrt(n)): while n % f == 0: if f'{f}' in factors.keys(): factors.update({f'{f}': factors[f'{f}'] + 1}) else: factors[f'{f}'] = 1 n = int(n / f) f += 1 if n > 1: factors[f'{n}'] = 1 return factors def gcd(x, y): f1 = factors(x) f2 = factors(y) for f in f1.copy().keys(): if f not in f2.keys(): f1.pop(f) elif f1[f] > f2[f]: f1[f'{f}'] = f2[f] cd = 1 for f in f1.keys(): cd *= int(f) ** f1[f] return cd def main(args): args = list(map(int, args)) cd = gcd(args[0], args[1]) if len(args) > 2: for i in args[2:]: cd = gcd(cd, i) print(cd) args = sys.argv[1:] main(args) Currently I think two areas need to be improved: 1, I need a better way than dictionaries to keep track of the divisors, so that I can easily find divisors not contained in the other number and find the lower power of the same divisor. 2, make the gcd() function do what main() function does internally so that the main() function isn't needed; currently I can't figure out a way to do this. How can this script be improved? Answer: It's more common to use Euclid's algorithm to find the GCD, rather than factorising both numbers. However, I'll continue reviewing with the existing algorithm, as there are useful insights to be found. First, let's look at factors(). Our factors variable is being used as a counter or multiset, and Python provides us with a collections.Counter class for that. With from collections import Counter, we can replace if f'{f}' in factors.keys(): factors.update({f'{f}': factors[f'{f}'] + 1}) else: factors[f'{f}'] = 1 with factors[f] += 1 (I've also changed to using the numbers themselves as keys, instead of converting to string). The division int(n / f) can be rewritten using the integer divide operator n // f (and we know the result will be exact, as we tested n % f == 0).
Also, we should use math.isqrt() rather than int(math.sqrt()) when we want an integer, and a leftover prime factor larger than the square root must still be collected after the loop: from collections import Counter def factors(n): factors = Counter({1: 1}) for f in range(2, math.isqrt(n) + 1): while n % f == 0: factors[f] += 1 n = n // f if n > 1: factors[n] += 1 return factors Now let's look at gcd(). We are computing the intersection of the two multisets. The operator & does exactly that for us: common_factors = f1 & f2 cd = 1 for f in common_factors.keys(): cd *= f ** common_factors[f] When we iterate over a dictionary, we don't need to get the keys and then index again. We can iterate over its items() instead, like this: for f,count in common_factors.items(): cd *= f ** count Or we could expand the multiset using elements(): for f in common_factors.elements(): cd *= f This allows us to then use math.prod() instead of our own loop: return math.prod(common_factors.elements()) The whole lot then becomes a one-liner: def gcd(x, y): return math.prod((factors(x) & factors(y)).elements()) Next, main(). The heart of this function is what a functional programmer would call reduce, and - you guessed it - Python provides a reduce() function, in functools: def main(args): args = map(int, args) print(reduce(gcd, args)) Finally, it's good practice to use a main guard, so the program can become a module: if __name__ == "__main__": args = sys.argv[1:] main(args) Simplified code import math import sys from collections import Counter from functools import reduce def factors(n): factors = Counter({1: 1}) for f in range(2, math.isqrt(n) + 1): while n % f == 0: factors[f] += 1 n = n // f if n > 1: factors[n] += 1 return factors def gcd(x, y): return math.prod((factors(x) & factors(y)).elements()) if __name__ == "__main__": print(reduce(gcd, map(int, sys.argv[1:])))
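For contrast with the factorisation approach, here is Euclid's algorithm, which needs no factoring at all, combined with the same reduce() idea for handling many numbers (a sketch; note that Python's own math.gcd also accepts multiple arguments from 3.9 onward):

```python
from functools import reduce

def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def gcd_many(numbers):
    """Fold gcd over a whole list, just like reduce(gcd, args) above."""
    return reduce(gcd, numbers)

print(gcd_many([12, 18, 30]))  # 6
```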
{ "domain": "codereview.stackexchange", "id": 41435, "tags": "python, beginner, python-3.x, programming-challenge, factors" }
Hexadecimal to RGB conversion
Question: I am trying to convert hex to rgb, and rgb to hex. My current code doesn't seem optimal, and there is an issue with 0's in the hexadecimal. What do you think about the overall quality of this code? Can you think of a more efficient way of converting?

$scope.$watch('hex', function() {
    var rgb = parseInt($scope.hex, 16);
    $scope.r = (rgb >> 16) & 0xFF;
    $scope.g = (rgb >> 8) & 0xFF;
    $scope.b = rgb & 0xFF;
    $scope.rgb = 'rgb(' + $scope.r + ',' + $scope.g + ',' + $scope.b + ');';
});

$scope.$watch('r+g+b', function() {
    $scope.rgb = 'rgb(' + $scope.r + ',' + $scope.g + ',' + $scope.b + ');';
    $scope.hex = parseInt($scope.r << 16 | $scope.g << 8 | $scope.b).toString(16);
});

Here is a sample plunker: Answer: A few notes:

I am confused as to why you need AngularJS for this.
I prefer to use the bitwise & with 255 instead of 0xFF in this case.
Your function can't handle non-hex characters such as #. I would remove them before parsing.
You can simplify your creation of the RGB string by using the join() method.

Final function:

function hexToRgb(hex) {
    hex = hex.replace(/[^0-9A-F]/gi, '');
    var bigint = parseInt(hex, 16);
    var r = (bigint >> 16) & 255;
    var g = (bigint >> 8) & 255;
    var b = bigint & 255;
    return [r, g, b].join();
}
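For anyone who wants to poke at the bit-twiddling outside the browser, here is the same shift-and-mask idea sketched in Python (my sketch, not from the answer above). The zero-padding in rgb_to_hex is one way to handle the leading-zero issue the question mentions: without it, a value like (0, 255, 127) would come back as "ff7f".

```python
# Shift-and-mask hex/RGB conversion, mirroring the JavaScript answer.
def hex_to_rgb(hex_str):
    value = int(hex_str.lstrip('#'), 16)
    return ((value >> 16) & 255, (value >> 8) & 255, value & 255)

def rgb_to_hex(r, g, b):
    # zero-pad to six digits so leading zeros survive the round trip
    return format(r << 16 | g << 8 | b, '06x')

print(hex_to_rgb('#00ff7f'))    # → (0, 255, 127)
print(rgb_to_hex(0, 255, 127))  # → 00ff7f
```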
{ "domain": "codereview.stackexchange", "id": 9154, "tags": "javascript, converting, angular.js, bitwise" }
Can an egg be forced into a bottle by lowering the pressure inside the bottle using cold air?
Question: I just asked a question about why ice-cold water inside a thermos results in what feels like suction on the cap. The answer stated that the cold water cools the air inside the thermos, thus slowing it down, thereby decreasing the pressure on the inside of the cap relative to the high atmospheric pressure pushing on it from the outside. But what about the egg and bottle demonstration in which a hard-boiled egg is forced into a bottle with atmospheric pressure by lowering the pressure inside the bottle? In that demonstration, the air pressure is lowered by heating the air inside the bottle. Could the same effect be achieved by somehow rapidly cooling the air inside the bottle? Or if not a bottle, then a thermos? Answer: I like BowlofRed's answer, but I think there's more going on here than that. I also don't believe that straightforward cooling would suck in the egg that fast. The egg gets sucked in very quickly. Certainly as the burning starts, air is escaping. It's impossible to see precisely when the egg stops vibrating, as some vibration might be quite small and hard to see, but what visibly happens around 1:18 - 1:19 is the egg begins to get sucked down and as this happens, smoke begins to fill the bottle. Newsprint is primarily Cellulose: Source and Cellulose has a chemical structure of $C_6H_{10}O_5$. So, the simplified chemistry of burning newspaper is $C_6H_{10}O_5 + 6 O_2$ gives us $6CO_2 + 5H_2O$, so you're replacing 6 Oxygen molecules with $6 CO_2$ and $5 H_2O$ (in gaseous form), so it's not just the temperature but 11 molecules of gas are replacing 6 and that also causes air to rush out and vibrate the egg. The flame itself is also quite hot (Source) and this not only heats up the inside of the bottle, but there is a fair bit of localized displacement by the flame and the fast moving molecules from the chemical reaction.
As the flame begins to shrink, in part due to lack of Oxygen, the smaller flame creates less displacement and this creates the suction, even though the flame is still burning some as the egg begins to get sucked in and the fire is still adding gas molecules and generating some heat, but the reduced displacement when the flame shrinks begins to suck the egg in. Now it might be semantics to call it "displacement" vs heat. I see that, but the important point is that as the flame shrinks the egg only gets sucked in a little, but in a reasonably closed system like that even a small flame should continue to add some heat, so there's more at play than just temperature. The 2nd thing, which is visibly obvious, is that smoke quickly fills the inside. This is because the gaseous $H_2O$ is much too concentrated for the average temperature inside the bottle and it quickly forms into tiny suspended droplets of water, and this change from gas into tiny droplets causes a pretty significant reduction in the air pressure, enough to suck the egg in in 2 seconds. Condensation is an exothermic process, so this conversion of water vapor to visible droplets of smoke/steam should warm the bottle further - a little bit, but the effect of the reduced air pressure due to condensation is a greater factor than rising temperature or anything else at that point. The abundance of the water vapor is basically supersaturated and as it becomes visible, the bottle is losing air pressure. Cooling wouldn't happen that fast. For example, if you had a warm bottle like that one with an egg snug on top and you put it in the fridge, it would (I think) take quite a while longer than 2 seconds to suck the egg in. Now if you had a different fuel, say pure aluminum (which if you got it burning would melt the glass bottle), but anyway, Aluminum sucks Oxygen out of the air without giving anything back. That kind of reaction might suck the egg straight in without any vibration from escaped gas at all.
And if you had Gunpowder, which generates a huge amount of gas when it burns, the egg would be shot high into the air and the bottle, well, let's just say, do this one behind a protective screen. The chemical changes in that experiment can't be ignored, so it really can't be compared to the thermos.
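The mole-counting argument above can be put into rough numbers with the ideal gas law (this sketch and its figures are an illustration of the argument, not a model of the actual demonstration): at fixed volume and temperature, pressure is proportional to the number of gas molecules, so condensing the water vapour removes 5 of the 11 product molecules per cellulose unit.

```python
# Ideal gas at fixed V and T: p is proportional to the mole count (pV = nRT).
# Per cellulose unit: C6H10O5 + 6 O2 -> 6 CO2 + 5 H2O.
o2_consumed = 6
co2_produced = 6
h2o_vapour = 5

gas_during_burning = co2_produced + h2o_vapour   # 11 gas molecules
gas_after_condensation = co2_produced            # H2O condenses into droplets

# fraction of the product-gas pressure lost when the vapour condenses
pressure_drop = 1 - gas_after_condensation / gas_during_burning
print(f"{pressure_drop:.0%}")   # → 45%
```

Since gas escapes past the egg while the mole count (and temperature) is high, losing roughly 45% of the remaining product gas to condensation afterwards leaves the inside well below atmospheric pressure.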
{ "domain": "physics.stackexchange", "id": 24577, "tags": "pressure" }
What frame(s) of reference are used to measure the rotation of the Sun around the galaxy ?
Question: I can find various speeds and estimated durations listed at numerous places but none specifically describe the frame of reference. Possible options, as an example of the kind of answer I expect:

Local Galactic cluster
Distant quasars
The cosmic background radiation?

--------- UPDATE ---------

Thanks AIB and voithos. Lot of reading for me. Though technically, I still don't have an answer that meets the following criteria:

rotational velocity (average preferably) of Sol around the best estimate of the center of the Milky Way galaxy
publicly available reference (I don't have immediate access to some of the books given)
frame of reference external to the Milky Way galaxy

As I note below, the only reference (wmap5basic_reprint.pdf) I can read that uses an external frame of reference doesn't specifically state the vector is rotational (despite the wikipedia article assuming such). The topic is barely touched on in that paper. I realised the speeds are variable. What I had not realised is that the whole idea of a (relatively) clearly defined x orbiting y system doesn't really scale up well from the local solar system to the galactic scale. The galaxy is more like a whirlpool or tornado compared to the "clockwork" appearance of the solar system. Although both the solar system and galaxy are constantly (very slowly) changing "fluid" rotational systems, the galaxy is obviously far more fluid than the solar system. Also we have not yet been able to observe anything about its center. Or in other words, we are not "orbiting" the galaxy, we are part of the galaxy. I suspect the topic is more in the realm of "fluid dynamics" than "orbital mechanics". I've accepted AIB's answer as the most enlightening to me personally. Also, it would appear I have wiki-sidebar blindness. Apologies for that.
FYI The paper referencing the speed relative to the CMB, as mentioned in the wikipedia article, can be found here http://cmbdata.gsfc.nasa.gov/product/map/dr3/pub_papers/fiveyear/basic_results/wmap5basic_reprint.pdf The relevant section appears to be 7.3.1. "... implies a Solar System peculiar velocity of 369.0 ± 0.9 km s$^{-1}$ with respect to the CMB rest frame." Although it's not obvious to me what vector that velocity is along. Though Dipole Anisotropy in the COBE DMR First-Year Sky Maps gives a specific velocity (including vector) for the local galactic group in relation to the CMB rest frame: "implied velocity of the Local Group with respect to the CMB rest frame is 627 +/- 22 km/s toward (l,b) = (276 +/- 3 deg, 30 +/- 3 deg)." FYI Other reference frames that are external to the local galaxy are "The Supergalactic coordinate system" Answer: The Wikipedia page on Sun gives these three velocities:

~220 km/s (orbit around the center of the Galaxy)
~20 km/s (relative to average velocity of other stars in stellar neighbourhood)
~370 km/s (relative to the cosmic microwave background)

So my inference is that 220 km/s is the estimated orbital velocity. It is not a constant velocity because the orbital motion around the galactic center is not circular. The velocity of the Sun around the Milky Way is in fact the same as the spin motion of the Milky Way around itself. All stars in the galaxy rotate around a galactic center but not with the same period. Stars at the center have a shorter period than those farther out. The Sun's orbital motion is calculated with the galactic north pole as the frame of reference. It is called the galactic coordinate system. See this. It's a complicated calculation, because stars have arbitrary motion in local regions, which needs to be subtracted out.
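A back-of-envelope check of the ~220 km/s figure (my arithmetic, with an assumed round-number galactocentric radius of 8 kpc, which is close to commonly quoted values): a circular orbit at that speed and radius takes roughly 220 million years, the often-quoted "galactic year".

```python
# Orbital period T = 2*pi*R / v for an assumed circular orbit.
import math

kpc_in_m = 3.086e19      # metres per kiloparsec
year_in_s = 3.156e7      # seconds per year

R = 8.0 * kpc_in_m       # assumed Sun-galactic-centre distance
v = 220e3                # m/s, the ~220 km/s from the answer

T_years = 2 * math.pi * R / v / year_in_s
print(f"{T_years:.2e}")  # ≈ 2.2e8 years
```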
{ "domain": "physics.stackexchange", "id": 2986, "tags": "sun, rotation, galaxies" }
Information extraction with reinforcement learning, feasible?
Question: I was wondering if one could use Reinforcement Learning (as it is going to be more and more trendy with the Google DeepMind & AlphaGo's stuff) to parse and extract information from text. For example, could it be a competitive approach to structured prediction such as:

Named Entity Recognition (NER), i.e. the task of labelling New York by "city", and New York Times by "organization"
Part-of-speech tagging (POS), i.e. classifying words as determiner, noun, etc.
information extraction, i.e. finding and labelling some target information in texts, for instance 12/03 is a date given the context meaning 3 December and has the label "expiry date"

What would be a relevant modelling to do these tasks? Rather naively I would think of a pointer that reads the text from start to end and annotates each 'letter' with a label. Maybe it would learn that neighbouring letters in a 'word' share the same label, etc. Would it be able to learn long-term dependencies with this approach? I am interested in any ideas or references related to this subject. Answer: You ideally want to use Reinforcement Learning in situations where there is delayed feedback and there are stochastic transitions in the environment. Although you could potentially apply RL in your case, you might be better off with a Sequence to Sequence learning framework (https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf) since you have access to the entire sentence and there is no stochasticity involved. On the topic of RL with Information Extraction, this might be of interest: Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning (http://arxiv.org/abs/1603.07954)
{ "domain": "datascience.stackexchange", "id": 4771, "tags": "text-mining, reinforcement-learning, parsing, named-entity-recognition" }
How to get pure end-effector translation through Jacobian?
Question: I have a 7 DOF arm that I am controlling with joint velocities computed from the Jacobian in the standard way. For example: $$ {\Large J} = \begin{bmatrix} J_P \\J_O \end{bmatrix} $$ $$ J^{\dagger} = J^T(JJ^T)^{-1} $$ $$ \dot{q}_{trans} = J^{\dagger}_P v_{e_{trans}} $$ $$ \dot{q}_{rot} = J^{\dagger}_O v_{e_{rot}} $$ $$ \dot{q} = \dot{q}_{trans} + \dot{q}_{rot} $$ However, when specifying only translational velocities, the end-effector also rotates. I realized that I might be able to compute how much the end-effector would rotate from the instantaneous $\dot{q}$, then put this through the Jacobian and subtract out its joint velocities. So I would do this instead of using the passed in $v_{e_{rot}}$: $$ v_{e_{rot}} = R(q) - R(q+\dot{q}_{trans}) $$ Where $R(q)$ computes the end-effector rotation for those joint angles. Is this OK to do, or am I way off base? Is there a simpler way? I am aware that I could also just compute the IK for a point a small distance from the end-effector with no rotation, then pull the joint velocities from the delta joint angles. And that this will be more exact. However, I wanted to go the Jacobian route for now because I think it will fail more gracefully. A side question, how do I compute $R(q) - R(q+\dot{q}_{trans})$ to get global end-effector angular velocity? My attempts at converting a delta rotation matrix to Euler angles yield wrong results. I did some quick tests and implemented the above procedure to achieve pure end-effector rotation while maintaining global position. (This is easier because $T(q) - T(q+\dot{q}_{rot})$ is vector subtraction.) And it did kind of work. Answer: You don't need the positional IK to solve this problem. All that is required is that your Jacobian is invertible, i.e. keep away from singular joint configurations.
Consider how the end effector's velocity is formed: $$ \begin{bmatrix} v_{trans} \\ v_{rot} \end{bmatrix} = J(\boldsymbol{q})\cdot \boldsymbol{\dot{q}} $$ So, if I understand correctly, you want to solve the above equation with $v_{rot} = 0$ for $\boldsymbol{\dot{q}}$: $$ \boldsymbol{\dot{q}} = J(\boldsymbol{q})^{-1}\begin{bmatrix} v_{trans} \\ 0 \end{bmatrix} $$ Keep in mind that -- in general -- you cannot nicely separate the joints into those which command translation and those which command rotation of the end effector. After all, how joints are mapped to the end effector's position and angle is determined by the mechanism's structure, which dictates the Jacobian's structure (and its singularities). For example, an arm with only rotational joints (like a typical industrial robot) needs to move all its motors in some kind of "compensating" way to produce pure end effector translation.
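For a 7-DOF arm the Jacobian is 6×7 rather than square, so in practice the "inverse" is the Moore-Penrose pseudoinverse $J^T(JJ^T)^{-1}$ the question already uses; away from singularities $J$ has full row rank and the commanded twist is met exactly. Here is a small NumPy sketch of the idea (mine, not from the answer), with a random full-rank matrix standing in for the real $J(q)$:

```python
# Commanding pure translation through the pseudoinverse of a 6x7 Jacobian.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))          # placeholder for J(q)

v_trans = np.array([0.1, 0.0, 0.05])     # desired translational velocity
twist = np.concatenate([v_trans, np.zeros(3)])   # zero angular velocity

q_dot = np.linalg.pinv(J) @ twist        # joint velocities
achieved = J @ q_dot                     # resulting end-effector twist

print(np.round(achieved, 6))  # translational part matches, rotational part ~0
```

Because a full-row-rank $J$ satisfies $J\,J^{\dagger} = I$, the rotational components of the achieved twist vanish to machine precision, which is exactly the "pure translation" the question asks for.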
{ "domain": "robotics.stackexchange", "id": 449, "tags": "kinematics, robotic-arm, jacobian" }
neglect of lattice potential for conduction electrons
Question: Why is it true that in nearly free electron compounds, complete neglect of the lattice potential is usually a good approximation as long as one considers crystal momenta remote from the boundaries of the Brillouin zone? Or more precisely, what's the essential difference between the electronic states with crystal momenta close to or far away from the Brillouin zone boundary? Answer: For a simple crystal with more or less cubic symmetry and with low free electron density, for example sodium, the Fermi surface is more or less a sphere. This is because it is small and deep inside the Brillouin zone. A spherical Fermi surface resembles that of free electrons with parabolic dispersion...in a crystal we do not have this parabolic dispersion of electrons, because the crystal potential modifies it and this is more prominent for the values of crystal momentum of electrons near the values at the BZ boundary...because the Bragg law gives that exactly at these values electrons interact very strongly with the lattice and here the crystal potential deforms the dispersion relation. So if you add more and more electrons in a crystal they fill more and more states and are getting near the value at the edge of the BZ. That is why I said with low electron density, meaning of course, conduction electrons. The electrons at the bottom just experience conditions like in a parabolic dispersion, like free ones, and as you fill the band up, right around the middle, they experience the crystal potential and act accordingly. Now when you calculate conductivity, you realize that only electrons at the top of the Fermi surface are the ones being affected, so if a metal's Fermi surface is near the edges of the zone this surface will be deformed because the dispersion relation is deformed, and because electrons scatter just in this narrow area around the surface, their behavior depends strongly on the shape of this surface. Why don't all the other electrons deep inside the Fermi surface scatter? Because there is not enough energy available.
Only electrons in a narrow thermal layer participate in this, and it is narrow compared to the Fermi energy. Another reason is, of course, the Pauli exclusion principle. This is actually, now I see, a very broad question, and I can only say, look it up in Ziman or Kittel, Solid State Theory, for more elaboration.
{ "domain": "physics.stackexchange", "id": 18674, "tags": "solid-state-physics, electrons, crystals" }
Can a single classical particle have any entropy?
Question: Recently I have had some exchanges with @Marek regarding the entropy of a single classical particle. I always believed that to define entropy one must have some distribution. In quantum theory, a single particle can have entropy and I can easily understand that. But I never knew that the entropy of a single rigid classical particle is a well-defined concept, as Marek claimed. I still fail to understand that. One can say that in the classical limit, the entropy of a particle in QT can be defined and that corresponds to the entropy of a single classical particle. But I have difficulty accepting that that gives the entropy of a single Newtonian particle. In my understanding, if a system has entropy then it also should have some temperature. I don't understand how one would assign any temperature to a single classical particle. I came across a paper where there is a notion of "microscopic entropy". By no means, in my limited understanding, did it correspond to the normal concept of entropy. I am curious to know what is the right answer. So, my question is, is it possible to define the entropy of a single classical particle? Answer: Entropy is a concept in thermodynamics and statistical physics but its value only becomes indisputable if one can talk in terms of thermodynamics, too. To do so in statistical physics, one needs to be in the thermodynamic limit, i.e. the number of degrees of freedom must be much greater than one. In fact, we can say that the thermodynamic limit requires the entropy to be much greater than one (times $k_B$, if you insist on SI units). In the thermodynamic limit, the concept of entropy becomes independent of the chosen ensembles - microcanonical vs canonical etc. - up to corrections that are negligible relative to the overall entropy (either of them). A single particle, much like any system, may be assigned the entropy of $\ln(N)$ where $N$ is the number of physically distinct but de facto indistinguishable states in which the particle may be.
So if the particle is located in a box and its wave function may be written as a combination of $N$ small wave packets occupying appropriately large volumes, the entropy will be $\ln(N)$. However, the concept of entropy is simply not a high-precision concept for systems away from the thermodynamic limit. Entropy is not a strict function of the "pure state" of the system; if you want to be precise about the value, it also depends on the exact ensemble of the other microstates that you consider indistinguishable. If you consider larger systems with $N$ particles, the entropy usually scales like $N$, so each particle contributes something comparable to 1 bit to the entropy - if you equally divide the entropy. However, to calculate the actual coefficients, all the conceivable interactions between the particles etc. matter.
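The $S = \ln(N)$ statement can be checked numerically with the Gibbs/Shannon formula (a quick sketch of mine, in units where $k_B = 1$): a uniform distribution over $N$ indistinguishable states gives exactly $\ln(N)$, and any non-uniform distribution over the same states gives less.

```python
# Gibbs/Shannon entropy S = -sum(p ln p), in units of k_B.
import math

def gibbs_entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

N = 16
print(gibbs_entropy([1 / N] * N))        # → ln(16) ≈ 2.7726
print(gibbs_entropy([0.5, 0.25, 0.25]))  # less than ln(3) ≈ 1.0986
```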
{ "domain": "physics.stackexchange", "id": 99310, "tags": "thermodynamics, statistical-mechanics, entropy" }
Positivity of Liouville von Neumann equation
Question: I've been reading about the equation, and all of the sources I found state that the equation preserves the trace, self-adjointness, and positivity of the density matrix. The first two properties are easily verified, but I can't seem to figure out how to prove that the positivity ($\langle\alpha|\rho|\alpha\rangle\geq0$) is maintained. So far I wrote down: $$ \langle\alpha|\frac{d\rho}{dt}|\alpha\rangle= \frac{d}{dt}\left(\langle\alpha|\rho|\alpha\rangle\right)= \frac{d\rho_{\alpha\alpha}}{dt}= -\frac{i}{\hbar}\langle\alpha|H\rho-\rho H|\alpha\rangle=-\frac{i}{\hbar} \sum_\mu \left(H_{\alpha\mu}\rho_{\mu\alpha} - \rho_{\alpha\mu}H_{\mu\alpha} \right) $$ Since both $H$ and $\rho$ are self-adjoint, the last expression may be reduced to: $$ \frac{d\rho_{\alpha\alpha}}{dt}= -\frac{i}{\hbar} \sum_\mu \left(H_{\alpha\mu}\rho_{\mu\alpha} - \left(\rho_{\mu\alpha}H_{\alpha\mu}\right)^* \right) =\frac{2}{\hbar}\text{Im}\left[ \sum_\mu H_{\alpha\mu}\rho_{\mu\alpha} \right] $$ I'm kind of stuck from here, any help? Edit: This part of the question was edited for clarification. I noticed another peculiar thing when writing this down. My expressions don't relate to any particular basis. However, if I assume that the basis of choice is the eigenbasis of $\rho$ (or $H$), I trivially get $\frac{d\rho_{\alpha\alpha}}{dt}=0$ as: $$ \frac{d\rho_{\alpha\alpha}}{dt}= -\frac{i}{\hbar}\langle\alpha|H\rho-\rho H|\alpha\rangle= -\frac{i}{\hbar}\left( \rho_{\alpha\alpha}\langle\alpha|H|\alpha\rangle - \rho_{\alpha\alpha}^*\langle\alpha|H|\alpha\rangle \right)= -\frac{i}{\hbar}\rho_{\alpha\alpha} \left( H_{\alpha\alpha} - H_{\alpha\alpha} \right)=0 $$ What is the meaning of this? As I didn't make any assumptions regarding the eigenbasis of $H$, it means that the diagonal terms of the commutator are zero, and therefore the populations are constants of motion.
This would make sense, as for a statistical ensemble in equilibrium we claim $\rho\propto e^{-\beta H}$, which means $\left[ H,\rho\right]=0$, but if so what is the whole point of writing the equation to begin with? This reasoning is of course flawed as it implies that for every system the diagonal of the density matrix is a constant of motion, which means we cannot alter populations of various states. Where does my reasoning fail? Is the source of the problem the fact that I wasn't careful enough regarding the commutation of the Hamiltonian with itself at different times? Edit 2: I leave the question as it is for the sake of others who might repeat my mistake. I found my error in the first step of the derivation. I wrote that: $$ \langle\alpha|\frac{d\rho}{dt}|\alpha\rangle= \frac{d}{dt}\left(\langle\alpha|\rho|\alpha\rangle\right)= \frac{d\rho_{\alpha\alpha}}{dt} $$ This is of course wrong! Since the Liouville equation is written in the Schrödinger picture, the states are time dependent and cannot be incorporated into the time derivative. What I implicitly did by assuming that it could be incorporated into the time derivative is that I assumed the state $|\alpha\rangle$ is a stationary state, i.e. one that simultaneously diagonalizes $H$ and $\rho$, thus implicitly assuming that they commute, in which case indeed the populations are constants of motion. In short, the calculation of the RHS in the first equation I wrote is correct. However, it has nothing to do with the rate of change of the diagonal elements of $\rho$, and isn't related to population in any way.
However, instead I could write: $$ \frac{d\rho_{\alpha\alpha}}{dt}= \frac{d}{dt}\left(\langle\alpha|\rho|\alpha\rangle\right)= \left(\frac{d\langle\alpha|}{dt}\right)\rho|\alpha\rangle + \langle\alpha| \left(\frac{d\rho}{dt}\right)|\alpha\rangle + \langle\alpha|\rho \left(\frac{d|\alpha\rangle}{dt}\right)= % \\ % =\frac{i}{\hbar}\left( \langle\alpha|H\rho|\alpha\rangle - \langle\alpha|\rho H|\alpha\rangle \right)+ \langle\alpha| \left(\frac{d\rho}{dt}\right)|\alpha\rangle= % \\ % =\frac{i}{\hbar} \langle\alpha|\left[H,\rho\right]|\alpha\rangle - \frac{i}{\hbar} \langle\alpha|\left[H,\rho\right]|\alpha\rangle=0 $$ Which essentially gives the same information as in the answer below. The last expression, however, shouldn't be interpreted as the time evolution of any specific population. The time propagation of occupation probability is given by: $$ P_{\alpha\alpha}(t) = |c_\alpha(t)|^2 = \langle\alpha(0)|\rho(t)|\alpha(0)\rangle= \left(G(t)\rho(0)G^\dagger(t)\right)_{\alpha\alpha} $$ where $c_\alpha(t)$ are the time dependent coefficients of some spectral decomposition: $|\psi(t)\rangle = \sum_\alpha c_\alpha(t)|\alpha\rangle$, and $G(t)$ is the time propagator between the states with time difference $t$. Answer: A bounded operator $\rho$ over a Hilbert space is positive iff for all $x$ from the Hilbert space $$ \langle \rho x, x\rangle \geq 0 $$ Now $\rho$ obeys the von Neumann equation, whose formal solution is$^1$ $$\rho(t) = e^{-itH}\rho(0)e^{+itH} =: U(t)\rho(0)U(-t)$$ Observe that $ U(t) $ is unitary since the Hamiltonian $H$ is self-adjoint. Unitarity also implies $U(t)$ is one-to-one. Therefore with $U(t)x=y$ it holds for all $y$ $$ \langle \rho(t) y, y\rangle = \langle U(t)\rho(0)U(-t) y, y\rangle = \langle \rho(0)U(-t)y, {U}(-t)y\rangle = \langle \rho(0) x, x\rangle \geq 0 $$ if $\rho(0)$ was positive. OP has answered the second part of the question himself. $^1$ For an explicitly time-dependent Hamiltonian replace $U(t)$ by its time-ordered generalisation.
It makes no difference for the argument.
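The argument is easy to illustrate numerically (my sketch; NumPy, $\hbar = 1$, random 4×4 matrices): build a positive unit-trace $\rho(0)$, evolve it with $U = e^{-iHt}$ obtained from the eigendecomposition of a random Hermitian $H$, and check that trace, Hermiticity, and positivity all survive.

```python
# Unitary evolution of a density matrix preserves trace, Hermiticity, positivity.
import numpy as np

rng = np.random.default_rng(1)
d, t = 4, 0.7

# random Hermitian Hamiltonian
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (A + A.conj().T) / 2

# random density matrix: B B^dagger is positive semidefinite; normalise the trace
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho0 = B @ B.conj().T
rho0 /= np.trace(rho0).real

# U = exp(-iHt) from the spectral decomposition of H
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

rho_t = U @ rho0 @ U.conj().T

print(np.trace(rho_t).real)             # stays 1 (up to rounding)
print(np.linalg.eigvalsh(rho_t).min())  # stays >= 0 (up to rounding)
```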
{ "domain": "physics.stackexchange", "id": 36947, "tags": "quantum-mechanics, density-operator" }
Model class representing a financial calculator
Question: Here is my Model class's new header file:

@interface MMCalculator : NSObject

@property (readonly) CGFloat calculatedPay;
@property (readonly) CGFloat calculatedSavingsForStuff;
@property (readonly) CGFloat calculatedSavingsForProfitFormula;
@property (readonly) CGFloat calculatedSavingsForTaxes;

- (void)calculateValuesWithMonthlyRevenue:(CGFloat)monthlyRevenue;

@end

Here is my Model class's new implementation file:

#import "MMCalculator.h"

#pragma mark - Const Variables

static CGFloat const kPercentageToPayYourself = 0.50;
static CGFloat const kPercentageToSaveForStuff = 0.20;
static CGFloat const kPercentageToSaveForProfitFormula = 0.20;
static CGFloat const kPercentageToSaveForTaxes = 0.10;

@interface MMCalculator ()

@property (readwrite) CGFloat calculatedPay;
@property (readwrite) CGFloat calculatedSavingsForStuff;
@property (readwrite) CGFloat calculatedSavingsForProfitFormula;
@property (readwrite) CGFloat calculatedSavingsForTaxes;

@end

@implementation MMCalculator

#pragma mark - Calculator Methods

- (void)calculateValuesWithMonthlyRevenue:(float)monthlyRevenue
{
    //Call all 4 calculation methods and set calculated properties
    self.calculatedPay = [MMCalculator calculateYourPay:monthlyRevenue];
    self.calculatedSavingsForStuff = [MMCalculator calculateSavingsForStuff:monthlyRevenue];
    self.calculatedSavingsForProfitFormula = [MMCalculator calculateSavingsForProfitFormula:monthlyRevenue];
    self.calculatedSavingsForTaxes = [MMCalculator calculateSavingsForTaxes:monthlyRevenue];
}

+ (CGFloat)calculateYourPay:(CGFloat)monthlyRevenue
{
    return kPercentageToPayYourself * monthlyRevenue;
}

+ (CGFloat)calculateSavingsForStuff:(CGFloat)monthlyRevenue
{
    return kPercentageToSaveForStuff * monthlyRevenue;
}

+ (CGFloat)calculateSavingsForProfitFormula:(CGFloat)monthlyRevenue
{
    return kPercentageToSaveForProfitFormula * monthlyRevenue;
}

+ (CGFloat)calculateSavingsForTaxes:(CGFloat)monthlyRevenue
{
    return kPercentageToSaveForTaxes * monthlyRevenue;
}

@end

Key things I am looking for advice on:

Are my constants defined in the correct location in my Model class's implementation file?
Even though the compiler does not force me to, would it be better to declare my private methods in the class extension? I want to make sure I'm following best practices.
Now that the Model's instance methods have been changed to class methods, should I change the public calculateValuesWithMonthlyRevenue: method to a class method as well?
Any other general comments are welcome.

Answer: Are my constants defined in the correct location in my Model class's implementation file? It is impossible to tell. Where a variable needs to be declared is entirely dependent upon its intended scope. If a variable is used only within a single block (within an if, else, or loop, or anything else of this nature), it should be declared within that block. If a variable is used only in a single function, it should be declared in that function. If a variable is used in multiple functions in a file, it should be declared in the .m file so it can be seen throughout the file, but not within other files in the project. If a variable is used in multiple files, it should be declared in a .h file so that it may be imported across the multiple files. Even though the compiler does not force me to, would it be better to declare my private methods in the class extension? I want to make sure I'm following best practices. There's no particular need to declare methods in class extensions in Objective-C. I don't know of any real benefit to doing so. Now that the Model's instance methods have been changed to class methods, should I change the public calculateValuesWithMonthlyRevenue: method to a class method as well? This one can't be changed, at least not quite as easily. You can't refer to a class's instance variables within a class method.
Perhaps what you should do is instead change it to more of a factory method that might look a bit like this:

+ (instancetype)calculatedValuesForMonthlyRevenue:(CGFloat)monthlyRevenue
{
    return [[self alloc] initWithValuesForMonthlyRevenue:monthlyRevenue];
}

- (instancetype)initWithValuesForMonthlyRevenue:(CGFloat)monthlyRevenue
{
    self = [super init];
    if (self) {
        _calculatedPay = [MMCalculator calculateYourPay:monthlyRevenue];
        _calculatedSavingsForStuff = [MMCalculator calculateSavingsForStuff:monthlyRevenue];
        _calculatedSavingsForProfitFormula = [MMCalculator calculateSavingsForProfitFormula:monthlyRevenue];
        _calculatedSavingsForTaxes = [MMCalculator calculateSavingsForTaxes:monthlyRevenue];
    }
    return self;
}

Any other general comments are welcome. The class name, MMCalculator, needs some work. This does a more specific job than a regular calculator. This is more like a personal finance calculator if anything. Any other general comments are welcome. You really shouldn't represent money with floating point numbers. Instead, you should probably create a class for holding money.

Money.h

@interface Money

@property (readonly) int dollars;
@property (readonly) int cents;

+ (instancetype)moneyWithDollars:(int)dollars cents:(int)cents;
- (instancetype)initWithDollars:(int)dollars cents:(int)cents;

@end

Money.m

@implementation Money {
    long long _totalCents;
}

- (int)dollars
{
    return _totalCents/100;
}

- (int)cents
{
    return _totalCents%100;
}

+ (instancetype)moneyWithDollars:(int)dollars cents:(int)cents
{
    return [[self alloc] initWithDollars:dollars cents:cents];
}

- (instancetype)initWithDollars:(int)dollars cents:(int)cents
{
    self = [super init];
    if (self) {
        _totalCents = cents + (100 * dollars);
    }
    return self;
}

@end

The class could be rounded out with various math methods as well.
{ "domain": "codereview.stackexchange", "id": 9179, "tags": "mvc, objective-c, calculator, finance" }
How is gravitational potential energy $mgh$?
Question: I know the derivation that $W=Fd$, hence $F=mg$ and $d=h$, so the energy gained by the body is $mgh$, considering the body on the ground to have $0$ gravitational potential energy. But the definition of work is (as given in my book): Work done is the product of force and displacement caused by it in the same direction. That means work done on a body to lift it against gravity to a certain height should be equal to the potential energy gained by it, right? My book also states that: $mg$ is the minimum force required to lift a body against earth's gravity (without acceleration). But how does that make sense? Suppose a body is kept on the ground, and we apply a force $mg$ on it, won't the force of gravity and this external force cancel out and ultimately result in no movement of the body? How is the derivation of $U=mgh$ thus obtained? Answer: Part of the problem is to distinguish between the work done by a particular force and the net work done by all the forces. The second is to notice that the work done on an object depends on the process undergone. The third is to understand that the relationship between work and potential energy is that the work done by a conservative force is proportional to the change in the potential energy. Let's walk through the scenario. A block of mass $m$ sits on the ground at position $y=0$. There are two forces acting: the gravitational force downward and the normal force upward. Newton's 2nd Law tells us that $$ m\vec{a}=\vec{F}_{\textrm{net on object}} = \vec{F}_{\textrm{G, on object by Earth}} +\vec{N}_{\textrm{on object by ground}}\,. $$ We'll abbreviate these as $\vec{F}_{\textrm{net}}$, $\vec{F}_{\textrm{G}}$, and $\vec{N}$. In the case where the object is just sitting on the ground, the acceleration is clearly zero, and the normal and gravitational force cancel each other out.
The block doesn't move, and so the work done by each force must be zero: $$ W_{\textrm{by G}}=\int_i^f\vec{F}_{\textrm{G}}\cdot d\vec{r}=\vec{F}_{\textrm{G}}\cdot\Delta\vec{r} = 0\, $$ where the second equality holds because the gravitational force is constant near the surface of the Earth, and the third holds because the net displacement is zero. Now, someone grabs the block, accelerates it upwards, and then starts lifting the block upwards at constant speed. Ignoring the acceleration part, as the block moves up at constant speed, the net force on it must be zero, and so the gravitational force and normal force acting must cancel, as they did above, although now $\vec{N} = \vec{N}_{\textrm{by person}}$, which we'll just call $\vec{N}$. The work done by gravity and the work done by the person lifting the block can be computed as follows: $$ W_{\textrm{by G}}=-mg(y_f-y_i)\,, $$ where $y_i$ and $y_f$ are the initial and final heights of the object, and $$ W_{\textrm{by N}}=N_{\textrm{by hand}}(y_f-y_i)\,. $$ Note that these two works are equal and opposite, and so the net work done is zero, as it must be, because the kinetic energy isn't changing! However, the works done by the individual forces are non-zero. Looking at $W_{\textrm{by G}}$, we can see that we can alternatively define it as $$ W_{\textrm{by G}} = -(U_f-U_i)\,, $$ where we define $U = mgy$ to be the potential energy when the object is at height $y$. Then, $U_f-U_i = mgy_f - mgy_i$ is just the change in potential energy as the object is lifted from height $y_i$ to height $y_f$. We could write this as $mgy_f - mgy_i = mgh$, where $h$ is the change in height, but this isn't a great way to do things, because $h$ could be negative (if the block moves downward), and it's easy to confuse a position with a change in position if it's not notated correctly. I would write this as $mgy_f - mgy_i = mg\Delta y$. 
To tie this in with the OP's specific questions, then, note that while the block is sitting on the ground, the potential energy is constant because its position doesn't change. The value of the potential energy itself is a meaningless quantity; it's only changes in potential energy that matter, via $W = -\Delta U$. We derive $U= mgy$ by considering the work done during a process in which the position of the object changes. Last important note: the third bullet point requires a change in perspective, and without this change in perspective, things can go wrong (mixed up understandings and incorrect calculations). In our analysis above, we chose the system to be the ball, and we computed the change in kinetic energy of the ball by computing the works done by all forces acting on the ball. If these works cancel, then the net change in kinetic energy is zero. If instead we move to a potential energy language, we have to reconsider what we call our system. Instead of thinking about the work done by the Earth via gravity on the ball, we consider a new system composed of both the Earth and the ball. In that case, we replace the work done by the Earth on the ball by the change in potential energy of the Earth-ball system, i.e., \begin{align} \Delta KE_{\textrm{ball}} &= W_{\textrm{N}}+W_{\textrm{G}} = W_{\textrm{N}}-\Delta PE_{\textrm{G}} \Longrightarrow \\ W_{\textrm{N}} &= \Delta KE_{\textrm{ball}} + \Delta PE_{\textrm{G}} \end{align} Since the kinetic energy of the Earth doesn't change, $$ \Delta KE_{\textrm{system}} = \Delta KE_{\textrm{ball}} + \Delta KE_{\textrm{Earth}} = \Delta KE_{\textrm{ball}}\,, $$ and so we can write $$ W_{\textrm{ext}} = \Delta KE_{\textrm{system}} + \Delta PE_{\textrm{system}}\,, $$ where $W_{\textrm{ext}}$ is the work done by objects outside the system on objects inside the system, or work done by external forces. In this case, that is the work done by the person in lifting the ball.
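As a numeric sanity check of the relations above (a sketch with made-up numbers, not part of the original answer):

```python
# Lifting a block at constant speed: work done by the hand vs. gravity.
m = 2.0               # kg (hypothetical block mass)
g = 9.8               # m/s^2
y_i, y_f = 0.0, 3.0   # initial and final heights, in metres

W_by_G = -m * g * (y_f - y_i)         # work done by gravity
W_by_N = m * g * (y_f - y_i)          # work done by the hand (N = mg at constant speed)
delta_U = m * g * y_f - m * g * y_i   # change in potential energy U = mgy

# Net work is zero (the kinetic energy doesn't change), and the hand's work
# equals the gain in potential energy: W_ext = dKE + dPE with dKE = 0.
print(W_by_G + W_by_N)     # 0.0
print(W_by_N - delta_U)    # 0.0
```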
{ "domain": "physics.stackexchange", "id": 84259, "tags": "forces, energy, newtonian-gravity, work, potential-energy" }
Why is this matrix invertible in the Kalman gain?
Question: In the wikipedia article about Kalman filters, the well-known expression of the matrix of Kalman gains is given: $$ \mathbf {K} _{k}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}\mathbf {S} _{k}^{-1} $$ with $$\mathbf{S}_k=\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}+\mathbf {R} _{k}.$$ I understand that $\mathbf{R}_k$, as a covariance matrix, can be asked to be non-singular: it is reasonable to believe that no variance is zero. But this does not answer my question: why is $\mathbf{S}_k$ invertible? Answer: Note that $\mathbf{P} _{k\mid k-1}$, just like $\mathbf{R}_k$, is also a covariance matrix, and for this reason it is (at least) positive semi-definite, i.e., $\mathbf{y}^T\mathbf{P}_{k\mid k-1}\mathbf{y}\ge 0$ for $\mathbf{y}\neq\mathbf{0}$. Now set $\mathbf{y}=\mathbf{H}_k^T\mathbf{x}$ to see that also $\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}$ is at least positive semi-definite (positive definite if $\mathbf{P}_{k\mid k-1}$ is positive definite and $\mathbf{H}_k$ has full rank). Finally, note that the sum of two positive semi-definite matrices is positive semi-definite. For invertibility, we require that the sum of the two matrices is positive definite. This is the case if at least one of the two matrices is positive definite. In practice we can rather safely assume that both $\mathbf{R}_k$ and $\mathbf{P} _{k\mid k-1}$ are positive definite. However, $\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\text{T}}$ is usually only positive semi-definite due to rank-deficiency of $\mathbf{H}_k$, so for invertibility of the sum of the two matrices we have to rely on the positive definiteness of $\mathbf{R}_k$.
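A small numerical illustration of this (a numpy sketch with toy, made-up matrices, not from the original answer): even when $\mathbf{H}_k$ is rank-deficient, $\mathbf{S}_k$ stays invertible as long as $\mathbf{R}_k$ is positive definite.

```python
import numpy as np

# Toy matrices for illustration only.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])    # state covariance: positive definite
H = np.array([[1.0, 0.0],
              [1.0, 0.0]])    # rank-deficient measurement matrix
R = 0.1 * np.eye(2)           # measurement noise: positive definite

HPHt = H @ P @ H.T
S = HPHt + R

# H P H^T alone is singular (rank 1), but S is positive definite thanks to R.
print(np.linalg.matrix_rank(HPHt))      # 1
print(np.linalg.eigvalsh(S).min() > 0)  # True
```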
{ "domain": "dsp.stackexchange", "id": 4179, "tags": "kalman-filters, matrix" }
proving a step in the proof of regular intersection
Question: Let $L_1$ be a context-free language and $L_2$ be a regular language. Then $L_1 \cap L_2$ is context-free. Part of a proof given in the book "Formal languages and automata": Let $M_{1}=\left(Q, \Sigma, \Gamma, \delta_{1}, q_{0}, z, F_{1}\right)$ be an npda that accepts $L_1$, $M_{2}=\left(P, \Sigma, \delta_{2}, p_{0}, F_{2}\right)$ be a dfa that accepts $L_{2}$. We construct a push-down automaton $\widehat{M}=\left(\widehat{Q}, \Sigma, \Gamma, \widehat{\delta}, \widehat{q_{0}}, z, \widehat{F}\right)$ that simulates the parallel action of $M_{1}$ and $M_{2}$ : Whenever a symbol is read from the input string, $\widehat{M}$ simultaneously executes the moves of $M_{1}$ and $M_{2}$. To this end we let $$ \begin{aligned} \widehat{Q} &=Q \times P \\ \widehat{q_{0}} &=\left(q_{0}, p_{0}\right), \\ \widehat{F} &=F_{1} \times F_{2}, \end{aligned} $$ and define $\widehat{\delta}$ such that $$ \left(\left(q_{k}, p_{l}\right), x\right) \in \widehat{\delta}\left(\left(q_{i}, p_{j}\right), a, b\right) $$ if and only if $$ \left(q_{k}, x\right) \in \delta_{1}\left(q_{i}, a, b\right) $$ and $$ \delta_{2}\left(p_{j}, a\right)=p_{l} . $$ In this, we also require that if $a=\lambda$, then $p_{j}=p_{l}$. In other words, the states of $\widehat{M}$ are labeled with pairs $\left(q_{i}, p_{j}\right)$, representing the respective states in which $M_{1}$ and $M_{2}$ can be after reading a certain input string. It is a straightforward induction argument to show that $$ \left(\left(q_{0}, p_{0}\right), w, z\right) \vdash_{\widehat{M}}^{*}\left(\left(q_{r}, p_{s}\right), \lambda, x\right), $$ with $q_{r} \in F_{1}$ and $p_{s} \in F_{2}$ if and only if $$ \left(q_{0}, w, z\right) \vdash_{M_{1}}^{*}\left(q_{r}, \lambda, x\right), $$ and $$ \delta^{*}\left(p_{0}, w\right)=p_{s} $$ Now, my question: I am not really sure how to do that "straightforward induction". I thought to do an induction on the length of the string, with the base case being 0. 
The base case means that we have lambda as our word. For the induction step I thought of assuming that it holds for strings of length n, and proving it for strings of length n + 1. Do I have to prove it in both directions separately, or can I do both directions at once? Could someone help me formally write down why this statement is true, even though it seems quite straightforward? SOLUTION: SOLUTION IN WORDS For the base case, we have 0 steps. The proof just follows from the definition of what 0 steps actually is (doing nothing). For the inductive case: We say that the statement holds for steps of length n, and prove it for steps of length n + 1. We should not say anything about the length of the strings, so they could be of length 0 or longer. From the induction hypothesis we have everything done and have 1 step left. This step is proven by the definition of $\widehat{\delta}$. For the DFA, I made the distinction between lambda-transition and actually consuming a letter. Answer: In order to prove the statement by induction it has to be generalized somewhat. Observe that the current statement refers to final states of the automata involved. To be able to extend arbitrary computations we will have to consider arbitrary states as last state of the computations, so we consider also non-accepting computations. Also note that the computation ends with no remaining symbols on the input tape. Also that has to be generalized, as we must be able to read a next symbol. Luckily $(q_{0}, w, z) \vdash_{M_{1}}^{*} (q_{r}, \lambda, x)$ if and only if $(q_{0}, w{\cdot} u, z) \vdash_{M_{1}}^{*} (q_{r}, u, x)$: the computation does not change if we add more symbols on the input tape. For the induction it is probably best to use the number of steps used by both pushdown automata. Using the length of the input word would disregard long computations on the empty string that are possible for a pda. The inductive step probably strings this together as follows, but I did not try all details. 
$$ \left(\left(q_{0}, p_{0}\right), w{\cdot}a, z\right) \vdash_{\widehat{M}}^{\ell}\left(\left(q_{r}, p_{s}\right), a, x\right) \vdash_{\widehat{M}}\left(\left(q'_{r}, p'_{s}\right), \lambda, x'\right) $$ if and only if $$ \left(q_{0}, w{\cdot}a, z\right) \vdash_{M_{1}}^{\ell}\left(q_{r}, a, x\right) \vdash_{M_{1}}\left(q'_{r}, \lambda, x'\right), $$ $$ \delta^{*}\left(p_{0}, w\right)=p_{s},\text{ and } \delta^{*}\left(p_{s}, a\right)=p'_{s} $$ Probably it is best to have "if and only if" in the inductive hypothesis, but in the proof for the additional step to check both directions separately, unless, upon writing the proof, the arguments turn out to be symmetric.
{ "domain": "cs.stackexchange", "id": 20367, "tags": "formal-languages, regular-languages, automata, context-free" }
High Current (Speed) Transfer Buffer Recipe
Question: Does anyone know an effective buffer mix to use for high current Western transfers? We are successfully using the vendor's premixed buffer to transfer a wide range of protein sizes to PVDF membranes at 1A/25V for 10 min. We get great results with the vendor's expensive buffer. I haven't been able to find a non-proprietary recipe that works. The best ones we have hold the 25V, but then have a decrease in conductivity (increase in resistance) where they will start at 1A, but drop to 0.4A by the end of the 10 min. The vendor's buffer seems to hold the high current and results in better transfers. I don't know if it will be helpful to list all the variations I've tried in detail, but they have revolved around modifying a traditional Towbin buffer plus SDS, more glycine/Tris, or MgCl2. These were all various suggestions from people around the department, but I haven't found much published evidence for a buffer under these conditions. I realize 1A (and no I don't mean 1mA) is a lot of current, but the vendor's system works really well. Any pointers on what I might try/add would be appreciated even if you don't have a worked out protocol. Edit: Info from the MSDS indicates it has 3 reagents: Listing of dangerous and non-hazardous components: Proprietary Reagent K 10-20% Proprietary Reagent EB II 5-10% Proprietary Reagent S 1.0-2.5% 7732-18-5 water 50-100% · ..... Solvent content: Organic solvents: 0.0 % Water: 74.8 % Solids content: 25.2 % [I'm not sure if this MSDS info should be put here, just because I'm looking for someone who has already used a high current buffer they know the formulation for, not a guess to what I have.] Answer: So after a lot of work in optimization, I thought I would post what worked best for me. This buffer recipe was able to successfully transfer EGFR and insulin from the same lysate, and a clear band for both (large and small protein respectively). 10% SDS-PAGE gels were transferred at 1A for 10 min. 
High Current Transfer Buffer 48 mM Tris 15 mM HEPPS 1.0 mM EDTA 1.3 mM NaHSO3 1.3 mM N,N-dimethylformamide 25 mM Gly-Gly 20% Methanol (v/v) I do hope this can help someone else, and want to cite Garic et al as a wonderful starting point. Their publication was made after I asked the question, as @user4148 pointed out.
{ "domain": "biology.stackexchange", "id": 1674, "tags": "lab-techniques, protocol, western-blot" }
Does Electric flux depend upon medium?
Question: Wikipedia or any other site that matters expresses electric flux as dependent on the permittivity of vacuum and not on the permittivity of the medium. This is confusing: to me the electric flux should depend upon the material medium, as the electric field intensity depends upon the material medium, but Gauss' law seems to say otherwise. I tried to look up on this site and the closest explanation I could get was: We will apply Gauss' law in a unique way: neglect effects of the dielectric. Using this way, we will consider the effect of polarization of the dielectric which would have otherwise been neglected if Gauss' Law was applied with the correct constant (ϵ instead of ϵ0) I am not sure if I understand it entirely; why would using ϵ neglect the effect of polarization? If we used the incorrect constant for the sake of derivation, why did we not write the correct one in the original equation? Also, none of the textbooks I have seen so far refer to medium at all when talking about Gauss' law. The concluding remarks usually express electric flux as dependent on the charge enclosed and independent of the shape and the location of the charge. Does Gauss' law have really nothing to do with it? Answer: The $\epsilon$ is defined with polarization and Maxwell's equations in mind such that the equations look as similar as possible to the vacuum case: that is, $\epsilon$ incorporates polarization. Take the first equation, or Gauss' law, like you mentioned. The vacuum-case equation is $$\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0},$$ where $\rho$ is the (free) charge density. In the case of a polarizable medium, there will be bound charges as well as free charges, so we can write $\rho = \rho_f + \rho_b$ (you can infer the subscripts easily). 
Gauss' law then becomes $$\nabla \cdot \mathbf{E} = \frac{\rho_b + \rho_f}{\epsilon_0}.$$ It can be shown that the polarization $\mathbf{P}$ is related to the bound charge density as $$\nabla \cdot \mathbf{P} = -\rho_b,$$ so, replacing that in Gauss' law and rearranging, we get $$\nabla \cdot (\epsilon_0\mathbf{E} + \mathbf{P}) = \rho_f.$$ To simplify a bit, consider the case of a 'linear' medium, one where the polarization is directly proportional to the external electric field: $$\mathbf{P} = \epsilon_0 \chi \mathbf{E},$$ where $\chi$ is a constant called the 'electric susceptibility' of the medium. Inserting that into the previous equation, we get $$\nabla \cdot \epsilon_0 (1+ \chi) \mathbf{E}= \rho_f.$$ We then define the 'electric displacement' or $$\mathbf{D} = \epsilon_0 (1+ \chi )\mathbf{E},$$ which is the analog of the electric field in vacuum, with polarization taken into account. Finally, we define the general electric permittivity as $$\epsilon = \epsilon_0 (1+ \chi ),$$ with which, Gauss' law in media looks similar to that in vacuum: $$\nabla \cdot \mathbf{D} = \rho_f \implies \nabla \cdot \mathbf{E} = \frac{\rho_f}{\epsilon},$$ since $\mathbf{D} = \epsilon \mathbf{E}.$
{ "domain": "physics.stackexchange", "id": 74636, "tags": "electromagnetism, gauss-law" }
SubscriberStatusCallback not called in talker node
Question: Hi, I am trying to implement the simple listener and talker as explained in the ROS tutorial link http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 but I have made a small modification: the publishing to the topic will happen whenever the listener node connects to the talker; otherwise the message will not be published. Please check the talker node implementation below; somehow the callback is not triggered when the listener subscribes to the topic. void Publisher::CreateTopic(const std::string &topicname, const std::size_t &queue) { const ros::SubscriberStatusCallback callback_conn = std::bind(&Publisher::SubscriberStatusCallback_connect, this, std::placeholders::_1); const ros::SubscriberStatusCallback callback_discon = std::bind(&Publisher::SubscriberStatusCallback_Disconnect, this, std::placeholders::_1); chatter_pb = handle.advertise<std_msgs::String>(topicname, queue, callback_conn, callback_discon); } Originally posted by Ugesh on ROS Answers with karma: 16 on 2021-05-13 Post score: 0 Answer: I have found the reason: I have used ros::spinOnce() instead of ros::spin(), so no callbacks are processed in the case where ros::spinOnce() is used. Originally posted by Ugesh with karma: 16 on 2021-05-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36427, "tags": "ros" }
Generalized forces for virtual work - Why did they drop the summation?
Question: I am going through a PDF by Subhankar Ray & J. Shamanna on virtual work here and according to the PDF, equation 29, they write the generalized force as: $$Q_j = -\nabla_k\tilde{V}\cdot\left(\frac{\partial\textbf{r}_k}{\partial q_j}\right) = -\frac{\partial V}{\partial q_j}\tag{29}$$ But shouldn't it be: $$Q_j = -\sum^N_{k=1}\nabla_k\tilde{V}\cdot\left(\frac{\partial\textbf{r}_k}{\partial q_j}\right) = -\frac{\partial V}{\partial q_j}~?$$ And according to Goldstein, page 22, just above eq(1.54), they also don't drop the summation. Why did they drop the summation in (1)? Answer: It is either a typo or that eq. (29) uses Einstein summation convention.
{ "domain": "physics.stackexchange", "id": 92602, "tags": "classical-mechanics, lagrangian-formalism, notation, textbook-erratum" }
Inconsistency of numbers of $d p$ and $d q$ in path integrals over phase space
Question: I am new to QFT. In books like Fradkin's QFT an integrated approach, and Stefan's Gauge field theories 2nd Ed., they derive the path integral from first writing down the integral over the phase space, $$ \lim_{N\to +\infty} \int \left\{\prod_{n=1}^{N-1} \mathrm{d}q_n\right\} \left\{\prod_{n=1}^{N} \frac{\mathrm{d}p_n}{2\pi\hslash}\right\} \exp\left[\frac{i}{\hslash} \varepsilon \sum_{n=1}^N \left(p_n \frac{q_n - q_{n-1}}{\varepsilon} - H(\overline{q}_n, p_n)\right)\right] $$ And then proceed with $$ \int \frac{\mathcal D q \mathcal D p}{2\pi \hbar} \exp\left[\frac{i}{\hslash} \varepsilon \sum_{n=1}^N \left(p_n \frac{q_n - q_{n-1}}{\varepsilon} - H(\overline{q}_n, p_n)\right)\right] $$ with the notation $ \mathcal D q \mathcal D p $ meaning the product of equal numbers of $dq$ and $dp$ factors, with the number going to infinity, $$\mathcal D q \mathcal D p = \prod_{i=1}^{\infty} dp_i dq_i/2\pi \hbar \tag{1}$$ I am wondering if that missing $dq$ matters. Dimension-wise, at least the Stefan book made amendments to the dimension, while the Fradkin book didn't. I am not sure if that's how we write the functional integral. Edit: I mean I do agree that there should be one less $dq$. And I see how that comes about. But why are we ignoring this missing $dq$ (or discarding the extra $dp$) when we go to equation 1? Answer: This is because there is one less position integration due to the Dirichlet boundary conditions $$q(t_0)~=~q_0\quad\text{and}\quad q(t_N)~=~q_N,$$ and the fact that the insertion of complete sets of position resp. momentum eigenstates in the phase space path integral alternates temporally $$ p(t_{1/2}),\quad q(t_1),\quad p(t_{3/2}),\quad q(t_2),\quad \ldots,\quad p(t_{N-3/2}), \quad q(t_{N-1}),\quad p(t_{N-1/2}),$$ along the time discretization, $$t_n~=~t_0+n\epsilon, \quad \epsilon ~=~ \frac{t_N-t_0}{N}, \quad N~\in~\mathbb{N},\quad n~\in~\frac{1}{2}\mathbb{N}. $$
{ "domain": "physics.stackexchange", "id": 83583, "tags": "quantum-field-theory, path-integral, boundary-conditions, phase-space" }
Statistics updating SQL Query
Question: I have the following SQL statement that I think could be improved in areas (I believe there may be a way to use where over having, but I'm not sure how and I'm sure there's a way to reference the last updated column in the having clause but I'm unsure how to do it). Any suggestions would be greatly appreciated: /* Show indexes that haven't had statistics updated in two days or more */ select t.name as [Table Name], min(convert(CHAR(11),stats_date(t.object_id,s.[stats_id]),106)) as [Last Updated] from sys.[stats] as s inner join sys.[tables] as t on [s].[object_id] = [t].[object_id] group by t.name having min(convert(CHAR(11),stats_date(t.object_id,s.[stats_id]),106)) < dateadd(day, -1, getdate()) order by 2 desc Answer: You can just use the condition in a where, that will filter the single records instead of filtering groups, then you can group the result to avoid duplicates in the result. You shouldn't convert the date to a string when you are comparing it. When you convert it to a string, you should do that after getting the lowest value, otherwise min will compare the dates as strings and give you the wrong result. select t.name as [Table Name], convert(CHAR(11),min(stats_date(t.object_id,s.[stats_id])),106) as [Last Updated] from sys.[stats] as s inner join sys.[tables] as t on s.[object_id] = t.[object_id] where stats_date(t.object_id,s.[stats_id]) < dateadd(day, -1, getdate()) group by t.name order by 2 desc
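The pitfall the answer points out (taking MIN over formatted date strings instead of dates) can be demonstrated with a quick sketch outside SQL (Python, with made-up dates):

```python
from datetime import datetime

# Two hypothetical stats dates.
d1 = datetime(2024, 12, 1)   # 01 Dec 2024
d2 = datetime(2020, 1, 15)   # 15 Jan 2020

# CONVERT(CHAR(11), ..., 106) yields 'dd mon yyyy' strings; MIN over those
# compares character by character, so the day-of-month dominates.
s1, s2 = d1.strftime("%d %b %Y"), d2.strftime("%d %b %Y")

print(min(s1, s2))   # '01 Dec 2024' -- lexicographic minimum, the WRONG date
print(min(d1, d2))   # 2020-01-15 00:00:00 -- the real minimum
```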
{ "domain": "codereview.stackexchange", "id": 1339, "tags": "sql, sql-server, t-sql" }
Object to query string
Question: An object (POJO) holds query arguments and has two-way binding to a form using some MVC framework. Later, the object needs to be converted to a query string that will be appended to an HTTP request. function objectToQueryString(obj) { return Object.keys(obj) .filter(key => obj[key] !== '' && obj[key] !== null) .map((key, index) => { var startWith = index === 0 ? '?' : '&'; return startWith + key + '=' + obj[key] }).join(''); } Any pitfalls? IE9+ is what we support. Example input: {isExpensive:true, maxDistance:1000,ownerName:'Cindy',comment:''} And its expected output: ?isExpensive=true&maxDistance=1000&ownerName=Cindy Answer: You have neglected to escape your keys and values. If any of the data contains a special character such as &, the generated URL will be wrong. To perform the escaping, I recommend calling encodeURIComponent().
{ "domain": "codereview.stackexchange", "id": 13192, "tags": "javascript, url" }
What is the mechanism of ring contraction of 6-bromo-7-methoxy-2,3,4,7-tetrahydrooxepine from seven to five?
Question: I have recently come across a paper from here. I have tried to work out the mechanism for the following reaction presented in the paper. I was able to work out the mechanism till acetal formation. It is a ring expansion from six to seven. However, the next step is a ring contraction from seven to five upon heating. Can someone please explain what is actually happening during the ring contraction, and why it happens? Answer: It's in scheme 3 of the paper you cited! The 7-membered acetal undergoes ring-opening to the α,β-unsaturated oxocarbenium ion, then conjugate addition to give the 5-membered tetrahydrofuran. A bit of arrow pushing and addition of solvent gives the other THF product. Not sure about the legality of posting screenshots of papers, so I redrew the scheme:
{ "domain": "chemistry.stackexchange", "id": 12148, "tags": "organic-chemistry" }
Does a glass of water at room temperature emit (infrared?) radiation
Question: While reading the introduction to Feynman's lectures, it's mentioned how a glass of water cools down through evaporation, when some molecules get a bit extra energy and break free. If it's not a closed system, energy will be gradually taken away from the cup, hence blowing at the soup helps move those molecules away so that they don't reenter the surface. But I thought that all bodies also radiate heat? Does a cup of water also emit low frequency radiation, or is my understanding incorrect? Answer: Yes, all matter above absolute zero emits radiation. To quote wiki: When the temperature of a body is greater than absolute zero, inter-atomic collisions cause the kinetic energy of the atoms or molecules to change. This results in charge-acceleration and/or dipole oscillation which produces electromagnetic radiation, and the wide spectrum of radiation reflects the wide spectrum of energies and accelerations that occur even at a single temperature. This continuous release of energy would eventually cool the source to a lower and lower temperature except your glass of water is in contact with a heat reservoir (the room) which compensates for the energy loss.
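To put a rough number on this (a back-of-the-envelope Python sketch, not from the original answer): Wien's displacement law $\lambda_{max} = b/T$ locates the peak of that thermal emission for room-temperature water.

```python
# Wien's displacement law: peak wavelength of blackbody-like emission.
b = 2.898e-3    # m*K, Wien's displacement constant
T = 293.0       # K, roughly room temperature

lam_max = b / T        # peak emission wavelength, in metres
print(lam_max * 1e6)   # ~9.9 micrometres: deep in the infrared, invisible to the eye
```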
{ "domain": "physics.stackexchange", "id": 42657, "tags": "water, thermal-radiation, evaporation" }
ROS Answers SE migration: built robot
Question: hai friends iam trying to built a robot like eddie. im have following things kinect, arduino , ultrasonic sensors 4 n.o , stepper motor(im not having encoder for dc motor or i should buy it) i want to implement touch screen & joystick control, obstacle avoidance and "follow me" application. if any body done similar work, give the links to download the code ,hardware assembly instruction , kinect caliberation and its code thank you in advance Originally posted by vikirobot on ROS Answers with karma: 13 on 2013-02-27 Post score: 0 Answer: I would look closely at TurtleBot, much of the code for eddie is a port from TurtleBot. http://www.ros.org/wiki/TurtleBot Originally posted by mmwise with karma: 8372 on 2013-02-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Jon Stephan on 2013-03-02: This link is broken. Should it be http://ros.org/wiki/Robots/TurtleBot ?
{ "domain": "robotics.stackexchange", "id": 13092, "tags": "ros, arduino, kinect, sensor" }
What quantile is used for the initial DummyRegressor for Gradient Boosting Regressor in scikit-learn?
Question: According to the documentation of Scikit-Learn Gradient Boosting Regressor: init: estimator or ‘zero’, default=None: An estimator object that is used to compute the initial predictions. init has to provide fit and predict. If ‘zero’, the initial raw predictions are set to zero. By default a DummyEstimator is used, predicting either the average target value (for loss=’ls’), or a quantile for the other losses. So what quantile is used for the DummyRegressor if the loss function is 'huber'? Is it the 0.5 quantile, i.e. the median? I need this information because I am reconstructing the predictor for the Gradient Boosting Regressor for use in another software environment. Answer: Yes, a GBM with Huber loss initializes with the median. The relevant bit of code is the method init_estimator of the loss class, in the file _gb_losses.py. For HuberLossFunction: def init_estimator(self): return DummyRegressor(strategy='quantile', quantile=.5) (source)
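A quick empirical check of this (a sketch, not from the original answer; requires scikit-learn):

```python
import numpy as np
from sklearn.dummy import DummyRegressor

# Toy data; DummyRegressor ignores X entirely.
X = np.zeros((5, 1))
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# The initial estimator used for loss='huber' (per the answer above).
init_est = DummyRegressor(strategy="quantile", quantile=0.5)
init_est.fit(X, y)

pred = init_est.predict(np.zeros((1, 1)))[0]
print(pred, np.median(y))   # the 0.5 quantile is the median of y
```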
{ "domain": "datascience.stackexchange", "id": 8841, "tags": "scikit-learn, regression, loss-function, gbm" }
Rainfall challenge solution using Union-find
Question: I have tried to implement a solution to the Rainfall challenge based on the suggestions from 200_success♦ (Rainfall challenge). Problem Statement A group of farmers has some elevation data, and we're going to help them understand how rainfall flows over their farmland. We'll represent the land as a two-dimensional array of altitudes and use the following model, based on the idea that water flows downhill: If a cell’s four neighboring cells all have higher altitudes, we call this cell a sink; water collects in sinks. Otherwise, water will flow to the neighboring cell with the lowest altitude. If a cell is not a sink, you may assume it has a unique lowest neighbor and that this neighbor will be lower than the cell. Cells that drain into the same sink – directly or indirectly – are said to be part of the same basin. Your challenge is to partition the map into basins. In particular, given a map of elevations, your code should partition the map into basins and output the sizes of the basins, in descending order. Assume the elevation maps are square. Input will begin with a line with one integer, S, the height (and width) of the map. The next S lines will each contain a row of the map, each with S integers – the elevations of the S cells in the row. Some farmers have small land plots such as the examples below, while some have larger plots. However, in no case will a farmer have a plot of land larger than S = 5000. Your code should output a space-separated list of the basin sizes, in descending order. (Trailing spaces are ignored.) A few examples are below: ----------------------------------------- Input: Output: 3 7 2 1 5 2 2 4 7 3 6 9 The basins, labeled with A’s and B’s, are: A A B A A B A A A ----------------------------------------- Input: Output: 1 1 10 There is only one basin in this case. 
The basin, labeled with A’s is: A ----------------------------------------- Input: Output: 5 11 7 7 1 0 2 5 8 2 3 4 7 9 3 5 7 8 9 1 2 5 4 3 3 3 5 2 1 The basins, labeled with A’s, B’s, and C’s, are: A A A A A A A A A A B B A C C B B B C C B B C C C ----------------------------------------- Input: Output: 4 7 5 4 0 2 1 3 2 1 0 4 3 3 3 3 5 5 2 1 The basins, labeled with A’s, B’s, and C’s, are: A A B B A B B B A B B C A C C C ----------------------------------------- The code is in java. It would be great if anyone can review. class Topography { int[][] elevationData; Cell[][] map; List<Basin> basinList; Topography(int[][] elevationData) { this.elevationData = elevationData; this.basinList = new ArrayList<Basin>(); this.map = new Cell[elevationData.length][elevationData.length]; } private class Cell { int altitude; Basin basin; Cell(int altitude) { this.altitude = altitude; this.basin = new Basin(this); } } private class Basin { Cell sinkCell; HashSet<Cell> memberCells; Basin(Cell sinkCell) { this.sinkCell = sinkCell; this.memberCells = new HashSet<Cell>(); this.memberCells.add(sinkCell); } } private class BasinComparator implements Comparator<Basin> { public int compare(Basin basin1, Basin basin2) { if( basin1.memberCells.size() < basin2.memberCells.size() ) return 1; else if( basin1.memberCells.size() > basin2.memberCells.size() ) return -1; else return 0; } } private void createTopography() { Cell tmpCell; for(int i=0; i<elevationData.length; i++) { for(int j=0; j<elevationData.length; j++) { tmpCell = new Cell(elevationData[i][j]); map[i][j] = tmpCell; basinList.add(tmpCell.basin); } } } private Cell findSink(Cell cell) { if(cell == null) return null; if(cell != cell.basin.sinkCell) { cell.basin.sinkCell = findSink(cell.basin.sinkCell); } return cell; } private void union(Cell cellX, Cell cellY) { Cell sinkX = findSink(cellX); Cell sinkY = findSink(cellY); if(sinkX == null || sinkY == null || sinkX == sinkY) return; if(sinkX.altitude > sinkY.altitude) { 
sinkY.basin.memberCells.addAll(sinkX.basin.memberCells); basinList.remove(sinkX.basin); sinkX.basin = sinkY.basin; } else { sinkX.basin.memberCells.addAll(sinkY.basin.memberCells); basinList.remove(sinkY.basin); sinkY.basin = sinkX.basin; } } void printBasinLength() { createTopography(); for(int i=0; i<map.length; i++) { for(int j=0; j<map.length; j++) { Cell current_cell = map[i][j]; Cell minNeighbor = findMinimumNeighbor(i, j, current_cell); if(minNeighbor != current_cell) { union(minNeighbor, current_cell); } } } Collections.sort(basinList, new BasinComparator()); for(Basin basin: basinList) System.out.print(basin.memberCells.size()+ " "); } private Cell findMinimumNeighbor(int i, int j, Cell current_cell) { Cell min = current_cell; if(i>0){ if(map[i-1][j].altitude < min.altitude) min = map[i-1][j]; } if(i<map.length-1){ if(map[i+1][j].altitude < min.altitude) min = map[i+1][j]; } if(j>0) { if(map[i][j-1].altitude < min.altitude) min = map[i][j-1]; } if(j<map.length-1){ if(map[i][j+1].altitude < min.altitude) min = map[i][j+1]; } return min; } } Answer: Fun problem! This is looking quite good. That said, there is always stuff to improve... ;) Format 1. Whitespace You seem to have a lot of whitespace going on, many times without a particular meaning. I can understand whitespace cutting off a paragraph of code into sub-actions (though extracting to methods is better if possible). But why all the line jumps after ifs, fors, and method definitions? That is what indenting is for. So your line jumps seem redundant, they prevent having a good chunk of code on sight, as well as provoke lots of scrolling. 2. Missing brackets. You alternate between using brackets, and not using them. It is strongly advised to use brackets everywhere. IDEs will insert them for you, and even though their presence is not required to read the code, it helps when editing it, as adding an instruction in or out of an else clause is made completely obvious. 
One exception to this rule that I use is that I do not add the bracket in else if series to keep indents manageable: if(conditionA) { // Do something, either a one-liner or in brackets } else if(conditionB) { // Do something else } 3. Javadoc Where is it? It is always good to have it, and real easy. And there's this feeling of satisfaction when you come back to your code, find it Javadoc'ed, and think "attaboy" :D 4. Nested ifs if(i>0){ if(map[i-1][j].altitude < min.altitude) min = map[i-1][j]; } This is equivalent to: if(i > 0 && map[i-1][j].altitude < min.altitude) { min = map[i-1][j]; } But this last snippet has fewer indents and is shorter. Breaking if conditions is good if you want to leave room to make several internal ifs, but this is clearly not the case here. Also for polling neighbouring cells, you might want to screen an offset array like [[0, -1], [0, 1], [-1, 0], [1, 0]] and make a generic checkBound(int i, int j) method. This is easily expandable e.g. to 8-connected cells. Architecture 1. External Comparators Why externalize the Comparators? Do you think you may use different comparators at times? I doubt it, it's a highly specialized class. You should rather let Basin implement Comparable<Basin>. 2. Single Responsibility printBasinLength starts by calling createTopography. Then it goes on making union of basins. Wait, that's not part of the deal! According to its name (and in absence of Javadoc) it was only supposed to print something, not compute it. What if it was already computed? You should split the computing, the presenting, and the printing.
You've partly done it, you just need to explode printBasinLength into logical components: Remove that createTopography() call altogether from printBasinLength Move those for...for...union calls and put them in a new void mergeBasins() 'business' method Move the sorting into a sortBasinsBySurface() method Break the prints into a call to concatenate the basins in a String getRepresentation() method Never use System.out in a business method. Move it at last resort in main(). If you don't, anyone using the framework will get those prints on console without any control over it. Then make a: public String splitTopologyInBasins(){ createTopography(); sortBasinsBySurface(); mergeBasins(); return getRepresentation(); } Algorithm You chose to: Instantiate immediately one object per cell Then make one pass per cell where: You then find its lowest neighbour Make a union of their sinks by: Recursing over both to find their sink The head-on cost of instantiating all cells is unnecessary, they could be lazily generated. Many improvements in being lazy: Cells that flow into a known cell have very little work left to do: just copy the data over, the sink will be the same. Because cells which are flowing are not sinks, they do not need to have a Basin generated at the start for them, which would later be deleted through union. I'd rather start with no cells anywhere, and use the recursive function to make a bunch of cells on the way to their sink, and only create one Basin for them all when and if I get there (unless the sink cell already existed or a previously-existing cell was encountered, then everyone inherits that basin). In the double for-loop, you'd simply skip any Cells that are already generated. This greatly reduces the number of Basins (and lists) and merges.
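As an aside to the review: the find/union pair being discussed is the classic disjoint-set (union-find) structure. Here is a minimal, hypothetical Python sketch of the same idea (not a drop-in replacement for the Java class; the names are illustrative), keeping the lowest-altitude cell as each set's representative "sink" and using path compression in find:

```python
def make_dsu(altitudes):
    """Disjoint-set union where each set's representative is its lowest cell."""
    parent = list(range(len(altitudes)))

    def find(i):
        # Walk to the root, then point every visited node straight at it
        # (path compression).
        root = i
        while parent[root] != root:
            root = parent[root]
        while parent[i] != root:
            parent[i], i = root, parent[i]
        return root

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri == rj:
            return
        # Keep the lower-altitude cell as the representative ("sink").
        if altitudes[ri] < altitudes[rj]:
            parent[rj] = ri
        else:
            parent[ri] = rj

    return find, union
```

With such a structure, each cell unions with its lowest neighbour, and the basins fall out as the sets of cells that share a sink.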
{ "domain": "codereview.stackexchange", "id": 24160, "tags": "java, union-find" }
Generalize knowing when to accelerate towards or away from a target you are moving towards to arrive without overshooting it
Question: Imagine you are a spacecraft moving towards a position and you start out with zero relative velocity. You accelerate towards it, then halfway there you start accelerating in the opposite direction so that when you arrive your relative velocity is zero. How do you generalize this concept to account for when your relative velocity is not zero? Given a starting velocity, how do I determine what direction to accelerate towards in this instant in order to arrive at my target with zero velocity? Edit: this is my failed attempt at doing this with the displacement equation I tried using the determinant of the acceleration/velocity equation solved for t, as if the determinant is negative then there will be no real values for distance=0 since the spacecraft will never arrive. So if the determinant is positive even when we calculate it assuming a negative acceleration, that means we should accelerate in the negative direction because we will still arrive there (we may still overshoot, but accelerating in the negative direction is the best we can do). Here's what I'm describing in pseudocode displacement = velocity * time + 1/2 * acceleration * time^2 a = -1/2 * acceleration (the maximum amount we can accelerate in the opposite direction) b = velocity c = -displacement determinant = b^2 - 4ac if determinant is positive, accelerate in the negative direction else accelerate in the positive direction But this doesn't appear to be correct at all, I don't think I understand the implication of the determinant correctly. Answer: Assume you have constant acceleration of magnitude $g$ which is applied for a distance $x_C \geq \frac{\ell}{2} $ and then for the remaining distance $\ell- x_C$ the acceleration is reversed. $$ a(x) = g \;\mathrm{sign}( \frac{x_C - x}{\ell} ) $$ where $\rm sign()$ is a function that returns $+1$ for positive argument and $-1$ for negative argument. 
The velocity profile of the above acceleration is found from $ \tfrac{1}{2} v^2 = \int a(x)\,{\rm d}x $ and is $$ v(x) = \sqrt{ 2 g \left( x_C - | x-x_C| \right) } $$ where $|\,|$ is the absolute value. To hit the target with final velocity $v(\ell) = v_F$ the point of thrust reversal must be at $$ x_C = \frac{\ell}{2} + \frac{v_F^2}{4 g} $$ Finally the time needed to reach distance $x$ is found with $t = \int \frac{1}{v}\,{\rm d}{x}$ or in this case $$ t(x) = \begin{cases} \sqrt{\dfrac{2 x}{g}} & x \le x_C \\[1ex] \dfrac{2\sqrt{2 g x_C} - \sqrt{2 g \left( 2 x_C - x \right)}}{g} & x \ge x_C \end{cases} $$ Giving the final travel time of $t_F = \frac{2\sqrt{2 g x_C} - v_F}{g}$, since $v(\ell) = v_F$. In general, you go from a velocity profile $v(x)$ to acceleration with $a(x) = v(x) \tfrac{\rm d}{{\rm d}x} v(x)$. Here is a list of some of the direct integrals you can do with varying accelerations Acceleration as a function of speed $$ t = \int \frac{1}{a}\,{\rm d}v + C $$ $$ x = \int \frac{v}{a}\,{\rm d}v + C$$ Acceleration as a function of distance $$ \tfrac{1}{2}v^2 = \int a\,{\rm d}x + C$$ $$ t = \int \frac{1}{v}\,{\rm d}x + C$$
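As a numerical sanity check of the formulas above, here is a small Python sketch (illustrative values, simple Euler integration) that applies $+g$ until the reversal point and $-g$ afterwards, and confirms the arrival speed matches the requested $v_F$:

```python
def simulate(length, g, v_final, dt=1e-4):
    """Bang-bang profile: +g until x_C, then -g; return (x, v) on arrival."""
    x_c = length / 2 + v_final**2 / (4 * g)  # thrust-reversal point from above
    x, v = 0.0, 0.0                          # start at rest
    while x < length:
        a = g if x < x_c else -g
        v += a * dt
        x += v * dt
    return x, v

x, v = simulate(length=100.0, g=2.0, v_final=6.0)
# v should come out close to the requested final speed of 6
```

With length=100 and g=2 the reversal point is $x_C = 54.5$, and the simulated arrival speed agrees with the closed-form $v(\ell)=\sqrt{2g(2x_C-\ell)}=v_F$ to within the step error.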
{ "domain": "physics.stackexchange", "id": 89590, "tags": "kinematics, acceleration, computational-physics, simulations" }
Too low accuracy on MNIST dataset using a neural network
Question: I am beginning with deep learning. This is an implementation of a simple neural network with just 1 hidden layer on MNIST dataset. Why is it that the loss doesn't change at all after any epoch? It clearly means that it is not learning at all. The accuracy is approx. 11% that is like random guessing. But should it be so less? I have used Adam optimizer and cross_entropy loss. input_nodes = 784 hl1_nodes = 64 output_nodes = 1 from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train_reshaped = X_train.reshape(X_train.shape[0],784) model = Sequential() model.add(Dense(hl1_nodes, activation='relu', input_shape=(input_nodes,))) model.add(Dense(output_nodes, activation = 'sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) history = model.fit(x=X_train_reshaped, y=y_train, validation_split=0.33, verbose=1,epochs=10) #output Train on 40199 samples, validate on 19801 samples Epoch 1/10 40199/40199 [==============================] - 4s 87us/step - loss: -55.0254 - acc: 0.1142 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 2/10 40199/40199 [==============================] - 3s 76us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 3/10 40199/40199 [==============================] - 3s 74us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 4/10 40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 5/10 40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 6/10 40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 7/10 40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 8/10 40199/40199 
[==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 9/10 40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 Epoch 10/10 40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088 What am I missing? Edit: It is same upto the 4th digit even after 90th epoch. Answer: What am I missing? Incorrect architecture for the classification task. You have a single binary output, trained using binary_crossentropy, so the NN can only classify something as in a class (label 1) or not (label 0). Instead, you most likely want 10 outputs using softmax activation instead of sigmoid, and categorical_crossentropy as the loss, so that you can classify which digit is most likely given an input. Incomplete processing of MNIST raw data. The input pixel values in X_train range from 0 to 255, and this will cause numeric problems for a NN. The target labels in y_train are the digit value (0,1,2,3,4,5,6,7,8,9), whilst for classification you will need to turn that into binary classes - usually a one-hot coding e.g. the label 3 becomes a vector [0,0,0,1,0,0,0,0,0,0]. Scale the inputs - a quick fix might be X_train = X_train/ 255 and X_test = X_test/ 255 One-hot code the labels. A quick fix might be y_train = keras.utils.to_categorical(y_train) I made those changes to your code and got this after 10 epochs: val_loss: 0.1194 - val_acc: 0.9678
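To make the one-hot fix concrete, here is a pure-Python sketch of the transformation that keras.utils.to_categorical performs (the function name here is illustrative):

```python
def to_one_hot(labels, num_classes=10):
    """Turn digit labels like 3 into vectors like [0,0,0,1,0,0,0,0,0,0]."""
    encoded = []
    for y in labels:
        row = [0] * num_classes
        row[y] = 1
        encoded.append(row)
    return encoded

# e.g. to_one_hot([3])[0] == [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```

With 10 such outputs, a softmax final layer and categorical_crossentropy, each output can be read as the probability of one digit class.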
{ "domain": "datascience.stackexchange", "id": 6000, "tags": "neural-network, keras, mnist" }
Calculating read average length in a Fastq file with bioawk/awk
Question: I found here this awk script: BEGIN { headertype=""; } { if($0 ~ "^@") { countread++; headertype="@"; } else if($0 ~ "^+") { headertype="+"; } else if(headertype="@") { # This is a nuc sequence len=length($0); if (len>4) { readlength[len]++; } } } END { for (i in readlength){ countstored+=readlength[i]; lensum+=readlength[i]*i; print i, readlength[i]; } print "reads read = "countread > "/dev/stderr"; print "reads stored = "countstored > "/dev/stderr"; print "read average length = "lensum/countstored > "/dev/stderr"; } and I just wonder if it is possible to shorten it with bioawk? Answer: This can also be done with regular awk. awk '{if(NR%4==2) {count++; bases += length} } END{print bases/count}' <fastq_file> The NR%4==2 counts the second line out of every block of 4. length is a built-in that defaults to the length of the line, same as length($0). In this case you can inject your custom printing into the END{} block but countread and countstored will always be the same since the averaging is done on the fly.
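For readers who prefer Python over awk, the same average can be computed with the standard library alone; this sketch assumes a well-formed FASTQ with exactly four lines per record, just like the awk one-liner:

```python
def average_read_length(fastq_text):
    """Average length of the sequence lines (every 2nd line of each 4-line record)."""
    lines = fastq_text.splitlines()
    seqs = lines[1::4]  # the NR % 4 == 2 lines, in awk terms
    return sum(len(s) for s in seqs) / len(seqs)

example = "@r1\nACGT\n+\nIIII\n@r2\nACGTAC\n+\nIIIIII\n"
# average_read_length(example) == 5.0
```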
{ "domain": "bioinformatics.stackexchange", "id": 685, "tags": "fastq, awk, bioawk" }
Equinox sunrise sunset direction
Question: Looking at sunrise and sunset data around 2017 vernal equinox (because of GMT timezone I took London) https://www.timeanddate.com/sun/uk/london Date Sunrise Sunset Day Length Solar Noon 16 06:12 (92°) 18:07 (269°) 11:54:55 12:09 (37.0°) 17 06:09 (91°) 18:08 (269°) 11:58:53 12:08 (37.4°) 18 06:07 (90°) 18:10 (270°) 12:02:52 12:08 (37.8°) 19 06:05 (90°) 18:12 (271°) 12:06:50 12:08 (38.2°) 20 06:03 (89°) 18:13 (271°) 12:10:48 12:07 (38.6°) 21 06:00 (88°) 18:15 (272°) 12:14:47 12:07 (39.0°) 22 05:58 (88°) 18:17 (273°) 12:18:45 12:07 (39.3°) We see that 12-hours day happens around 17-18 of March, however the Vernal Equinox takes place on 20 March 2017, 10:29 GMT. I understand that due to refraction in the Earth's atmosphere and due to the fact that sunrises and sunsets are defined by the top of the Sun's disk, sunrises are "earlier" and sunsets are "later", so the 12-hours day indeed should happen before the equinox. But why the directions of sunrise and sunset are east--west on that same day? The above factors should not influence directions, should they? Answer: Due to the atmospheric refraction the Sun rises earlier and sets later as you correctly wrote. Refraction however is perpendicular to the horizon. So if the apparent sun rise is earlier, than the true (actual) position of the Sun will be more to the north east (smaller azimuth) than when the true sun rise occurs. As refraction is perpendicular the apparent sun rise will also be more to the north east and hence the azimuth will have a lower value (88-89° instead of 90°). NB. This is of course only true from the northern hemisphere. NB2. The Sun is not to scale in the figure below.
{ "domain": "astronomy.stackexchange", "id": 2149, "tags": "the-sun, observational-astronomy" }
rviz: OGRE EXCEPTION(3:RenderingAPIException): Unable to create a suitable GLXContext in GLXContext
Question: When I try to run "rosrun rviz rviz" I get this output: [ INFO] [1306327198.635135186]: Loading general config from [/home/rajat/.rviz/config] [ INFO] [1306327198.650256875]: Loading display config from [/home/rajat/.rviz/display_config] [ERROR] [1306327199.295138649]: Caught exception while loading: OGRE EXCEPTION(3:RenderingAPIException): Unable to create a suitable GLXContext in GLXContext::GLXContext at /tmp/buildd/ros-diamondback-visualization-common-1.4.0/debian/ros-diamondback-visualization-common/opt/ros/diamondback/stacks/visualization_common/ogre/build/ogre_src_v1-7-1/RenderSystems/GL/src/GLX/OgreGLXContext.cpp (line 61) After wasting more than 5 hours on this, I still can't figure out the solution. :( Please help. Originally posted by rajat on ROS Answers with karma: 175 on 2011-05-25 Post score: 7 Original comments Comment by Asomerville on 2011-06-03: Did you ever solve your problem? Comment by harderthan on 2018-12-20: Did you ever solve your problem? Answer: I got it working. I guess this is some issue with the video card driver. I was using Ubuntu in a virtual machine inside Windows, but now I installed it as a separate operating system alongside Windows, and installed everything again, and everything seems to work fine now :). I know this is not the solution, but if you have the same case give this method a try. Originally posted by rajat with karma: 175 on 2011-06-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dornhege on 2011-06-15: Actually it is the solution. 3D and virtual machines don't work well together and rviz/OGRE makes heavy use of 3D features.
{ "domain": "robotics.stackexchange", "id": 5659, "tags": "ros, rviz, opengl, ogre" }
Is the rainforest too wet to burn?
Question: I want to know if a living rainforest in the Amazon, completely untouched by humans, is too wet for the fire to spread, even in the drier months around July? I.e. if somebody dropped 5000 gallons of gasoline there, put it on fire, would the fire spread, or would it die out after the gas has burned, because it is still too wet there? Or is it that the only areas that burn are those where the trees have been cut down (deforestation), the dead trees and remaining vegetation then have to dry out during the (relatively, for a rainforest) drier months, and only then are they able to burn? Or does the fire spread into the actual living rainforest? Answer: My personal experience of rainforest is confined to Malaysia, where it is too damp to burn unless it is first cut down and allowed to dry out in the sun. The fires on these ladangs, as they are called, never spread to nearby rainforest, and abandoned ladangs eventually become rainforest again. I have some knowledge of other rainforests and have come to the conclusion that true rainforest is similar to Malaysia's. Throughout the wetter parts of the tropics, where there is not rainforest there is a kind of forest called monsoon forest which is only wet for a few months of the year. After that it dries out and eventually loses its leaves as though it were winter. Often it is fairly close to real rainforest, so there is a gradual transition from monsoon forest to rain forest. Both kinds are found in Amazonia and Central America. Dropping 5,000 gallons of gasoline into a rainforest to see if it would burn would be regarded as very odd nowadays, but it frequently happened in WW2. Allied and Japanese aircraft were frequently shot down and crashed into the jungle, seldom with 5,000 gallons of fuel aboard but certainly enough to cause a fire.
In Malaya and Borneo the jungle never caught fire, and although this happened all over S.E.Asia, surprisingly enough I have never heard of an incident where the jungle caught fire. There is some monsoon forest in India, but the fighting there was confined to Assam, where it is rainforest. In modern times a large jet occasionally crashes into jungle, but again I have never heard of one setting the jungle alight. Jet fuel, of course, is not as inflammable as gasoline. Some of the fires in Brazil currently shown on TV look as though they are in the sort of scrub which springs up on land which was once cultivated.
{ "domain": "earthscience.stackexchange", "id": 1835, "tags": "amazon, wildfire" }
Building a matrix by applying XOR rules
Question: There is a matrix of \$m\$ rows and \$n\$ columns where each row is filled gradually. Given the first row of the matrix we can generate the elements in the subsequent rows using the formula: $$\begin{align} a_{i,j} =&\ a_{i-1,j} \oplus a_{i-1,j+1}\quad\forall j:0\le j\le n-2 \\ a_{i,n-1} =&\ a_{i-1,n-1} \oplus a_{i-1,0} \end{align}$$ Each row is generated one by one, from the second row through the last row. Given the first row of the matrix, find and print the elements of the last row as a single line of space-separated integers. For example, input as \$4 \space 2\$ (4 is the number of columns and 2 is the row which we are supposed to find): \$6 \space 7 \space 1 \space 3\$ (1st row input) 6^7 = 1 7^1 = 6 1^3 = 2 3^6 = 5 \$1 \space 6 \space 2 \space 5\$ are the final row output. Now, how could I optimise my program if the value of \$n\$ is like pow(10,5) and \$m\$ is like pow(10,18)? import java.util.Scanner; class XorMatrixMain{ public static void Xor_Array(int[] xor, int n){ int num = xor[0]; boolean bool = false; int last = 0; for(int j=0;j<n-1;j++){ for(int i=0 ; i<xor.length ; i++){ if(i<xor.length-1){ xor[i] = xor[i]^xor[i+1]; } if(i==xor.length-1){ if(bool){ xor[i] = xor[i]^last; } else{ xor[i] = xor[i]^num; bool = true; } } } last = xor[0]; } for(int i=0;i<xor.length;i++){ System.out.print(xor[i]+" "); } } public static void main(String[] args){ Scanner scan = new Scanner(System.in); int m = scan.nextInt(); int n = scan.nextInt(); int[] xor = new int[m]; for(int i=0;i<m;i++){ xor[i] = scan.nextInt(); } Xor_Array(xor,n); } } Answer: Time complexity is \$O(nm)\$. Obviously it is bound to TLE. There is an \$O(n)\$ solution, based on the following observations: \$ x \oplus x = 0\$ \$ x \oplus y = y \oplus x\$ Let's say your initial row is a b c d e f. The next row will be a^f b^a c^b d^c e^d f^e. The second next is more interesting: the first element is a^f ^ f^e = a^e, the second is a^f ^ b^a = b^f, etc. 
In general the simple pattern only holds for powers of two: prove by induction that the \$2^t\$'th row consists of \$a_i \oplus a_{i-2^t}\$ (indices taken modulo n), because applying the same shift-and-XOR step to a row of the form \$a_i \oplus a_{i-k}\$ yields \$a_i \oplus a_{i-2k}\$, the middle terms cancelling. Writing \$m\$ in binary and applying one such pass over the row per set bit of \$m\$ therefore computes the \$m\$'th row in \$O(n \log m)\$ time (not quite \$O(n)\$, but easily fast enough for \$m \approx 10^{18}\$). Notice also that when \$n\$ is a power of two the \$n\$'th row is all zeroes, and so are all the subsequent rows. The code is extremely hard to follow. num is only used once - maybe not have it at all? last = xor[0]; for(int i=0 ; i<xor.length ; i++){ if(i<xor.length-1){ xor[i] = xor[i]^xor[i+1]; } if(i==xor.length-1){ xor[i] = xor[i]^last; } } Notice that i == xor.length - 1 is true exactly once, at the predictable i, yet you test it at each iteration. Lift it out of the loop: last = xor[0]; for(int i=0 ; i<xor.length - 1; i++){ xor[i] = xor[i]^xor[i+1]; } xor[i] = xor[i]^last;
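The doubling idea can be sketched in a few lines of Python (illustrative code, separate from the Java in the question, using the question's forward-shift convention), and cross-checked against the naive row-by-row iteration:

```python
def next_row(row, k=1):
    """Apply a_i <- a_i ^ a_{i+k}, indices wrapping modulo n."""
    n = len(row)
    return [row[i] ^ row[(i + k) % n] for i in range(n)]

def row_after(row, steps):
    """Row after `steps` applications of next_row, in O(n log steps)."""
    n = len(row)
    shift = 1
    while steps:
        if steps & 1:
            # One XOR pass with shift 2^t handles this bit of `steps`.
            row = next_row(row, shift % n)
        steps >>= 1
        shift = (shift * 2) % n
    return row

# Example from the question: one step from [6, 7, 1, 3] gives [1, 6, 2, 5]
```

Each call to row_after does one O(n) pass per set bit of steps, so m around 10^18 needs at most about 60 passes.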
{ "domain": "codereview.stackexchange", "id": 22467, "tags": "java, performance, algorithm, matrix" }
Why is the entirety of string tension $T$ included in the potential energy of a string wave?
Question: When calculating the elastic potential energy of a string segment inside a transversal string-wave, it is usually reasoned that the string tension $T$ causes the string segment to stretch as it oscillates, and therefore produces work equal to: $$\Delta U~=~T(d\ell-dx).$$ My question is, how come the entirety of the tension $T$ is contributing work here? We seem to be implying that the tension is always parallel to the string segment, but how is that possible if the tension also generates transversal movement in the string? Indeed, don't we assume that the string is curved to derive the wave equation in the first place? Shouldn't we take some geometric factor into account when calculating the work done to stretch the segment? Answer: Yes, OP is right. There is in general both longitudinal displacement $\xi$ in the $x$-direction and transversal displacement $\eta$ in the $y$-direction of the string. Then Pythagoras yields $$(d\ell)^2~=~(dx+d\xi)^2+(d\eta)^2. $$ If the string of length $\ell$ has undeformed length $\ell_0$ and spring constant $k$, then the total potential energy is $$V~=~\frac{k}{2}(\ell-\ell_0)^2,$$ cf. e.g. my Phys.SE answer here. The string tension is $$ T~=~k|\ell-\ell_0|. $$
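To connect this with the usual small-amplitude result: for a purely transverse wave (set $\xi = 0$) and small slope, expanding $d\ell$ to leading order recovers the standard quadratic energy density. This is a standard step, sketched here for completeness, treating the tension $T$ as approximately constant:

```latex
d\ell ~=~ \sqrt{(dx)^2 + (d\eta)^2}
~=~ dx\sqrt{1+\left(\frac{\partial \eta}{\partial x}\right)^{2}}
~\approx~ dx\left[1+\frac{1}{2}\left(\frac{\partial \eta}{\partial x}\right)^{2}\right],
\qquad
\Delta U ~=~ T\,(d\ell - dx) ~\approx~ \frac{T}{2}\left(\frac{\partial \eta}{\partial x}\right)^{2} dx .
```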
{ "domain": "physics.stackexchange", "id": 52978, "tags": "newtonian-mechanics, waves, potential-energy, spring, string" }
Where can I learn electronics, from intro to advanced digital?
Question: I'd like some well-put-together video series of around 30 videos. Or anything, but it needs to be thorough and in easy English... less mundane. So far all the resources I have found either go up to resistor color codes or are projects that tell you do this and this and this and tada, you've got this. Is there really no online resource for people to learn electronics? I want to further master analog and then move on to digital, because it's better to spend 40 cents than spend $95 on components and get the whole thing on a tiny chip. Please bear with me; for about six months I have been searching for a legit source, material that is meant to teach you. I like pictures and colors. Answer: MOOCs are so popular nowadays that I'm not sure you haven't checked them out. Anyway, let's give it a shot. Please have a look at these courses: edx.org https://www.edx.org/course-list/allschools/electronics/allcourses www.coursera.org You can also check out www.coursera.org; it's similar to the one mentioned above. MIT OpenCourseware And last but not least, you can also try to find something here: MIT OpenCourseware electronics UPDATE: Also, there's a site with all MOOCs in one place: class-central.com
{ "domain": "robotics.stackexchange", "id": 269, "tags": "electronics" }
I want a fast rank only shuffling algorithm for a full 52 card deck
Question: My goal is to do some large iteration Monte Carlo simulation of different card games that require on the order of 1 billion random decks to get a reliably accurate answer. The bulk of the computer time is actually shuffling/preparing the decks, not searching for the "hit" criteria I am looking for. I am using an string function rich interpreted language. I have bit functions available that work up to 32 bit at a time. I am only interested in the ranks of the 52 card deck, not the suits. So I am looking for a fast algorithm that will give me 4 of each of the 13 ranks for a full 52 card deck. It doesn't matter which order the suits of a particular rank come in since we are ignoring the suits (other than we are enforcing that there only be 4 of each suit for each rank). My first attempt was to just generate a normal deck with the ranks and suits "encoded" using cards numbered 1 thru 52 with an array of flags telling which cards were already seen. Each random number generated, (from 1 to 52), was for determining the next card (if there was no collision). This algorithm worked but was kinda slow, only producing about 6000 full decks per second on a 3.06 Ghz dual core computer (using only about 50% of the CPU). My 2nd attempt was just to tweak the first attempt by only picking the first 48 cards (since the "seen" ratio is very high for the last 4 cards and likely many collisions will occur, slowing it down), then choosing a random number from 1 to 24 to handle the permutations of the last 4 "missing" cards which are known from simply scanning the flag array and finding the 4 that are marked "false" meaning they haven't been seen yet. This was roughly 50% faster as I was able to generate about 9000 full decks per second vs. about 6000 previously. For my 3rd attempt, I am using a 32 bit random number to place 4 or 8 cards at a time from the unshuffled deck into a built up shuffled deck. Any help/ideas would be greatly appreciated. 
Answer: The questioner, David James, wrote on the occasion of the bounty: Is there a way to speed this shuffle up being that there are repeated ranks? For example, suppose we only had 2 ranks but still 52 cards, could that be sped up over 52 distinct cards? if so, then how can we speed up 13 different ranks with 4 of each rank as in this problem? It seems like the rank only deck of 52 has many fewer possible distinct decks so there should be a way to shuffle them quicker. For example, suppose the deck starts out in rank order such as AAAA22223333... It won't matter if our random # generator (RNG) picks any of the first 4 cards, the result will be the same (a rank A card). I am hoping to use this to our advantage rather than just using Fisher-Yates shuffle as if we are dealing with 52 distinct cards. Driven by David James's insight and insistence that there should be some way to speed this shuffle up given the repeated ranks, this new answer will focus on whether, and how, it is possible to take advantage of the repeated ranks. Other methods to improve the performance, which might be much more effective, powerful and randomness-correct, are not considered. Yes, there is a way to take advantage of the repeated ranks. Please go to my Java playground provided by repl.it and hit the button "run". You will see output like the following. [6, 10, 3, 8, 7, 10, 3, 3, 5, 13, 2, 2, 4, 5, 4, 4, 1, 9, 11, 11, 6, 9, 10, 10, 5, 12, 13, 3, 1, 7, 2, 4, 8, 2, 3, 13, 8, 12, 6, 9, 4, 13, 9, 12, 5, 7, 1, 7, 11, 1, 3, 6] warm-up one: 532197/second first 4 skipped: 552181/second warm-up two: 528262/second David-Apass: 656598/second Naive: 528541/second Improvement: 24.2 Your actual output might vary greatly because of many factors. What is important is the last three lines of output. "David-Apass: 656598/second" means 656598 decks per second by the David-Apass shuffle, a variation of the Fisher-Yates shuffle that takes advantage of the equal ranks.
"Naive: 528541/second" means 528541 decks per second by a common implementation of the Fisher–Yates shuffle. The line of comparison "Improvement: 24.2%" shows the speed-up of the David-Apass shuffle over the common Fisher-Yates shuffle. The mechanism of the David-Apass shuffle might have been discovered many years ago. However, I have not found any reference to it yet. All I can claim is that I have discovered it independently, thanks to David James' insight and insistence. ** The following is the accepted answer to the version of the question at https://cs.stackexchange.com/revisions/98273/5, which was before the bounty.** It is not a trivial task to create 1 billion random decks, although it is not a daunting task any more because of today's fast computers. Firstly, let us talk about the question proper. What would be a fast algorithm? It looks like the standard algorithm to shuffle cards, the Fisher–Yates shuffle, is a pretty good choice in terms of speed. It is certainly one of the easiest to implement as well. Here is the complete pseudocode. for t from 1 to 1000000000 do let arr=[1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10,10,11,11,11,11,12,12,12,12,13,13,13,13] for i from 51 down to 1 do let j be a random integer such that 0 ≤ j ≤ i exchange arr[j] and arr[i] output arr How can we make this algorithm run faster? The very first rule about performance improvement in real-life tasks is measure, measure and measure. A very rough measurement in my Python code reveals that most of the time is spent in the generation of random integers. This is not surprising once you take a close look at the algorithm above, assuming outputting arr is fast enough. In fact, that observation is probably true in any reasonably good algorithm to create a random deck of cards. So the next attack should be how to produce the random numbers faster or how to reduce the number of random numbers that need to be generated.
If a user has recorded billions of random numbers of a similar kind, then fetching them from the records might be a faster way. Or use a faster random number generator. On the other hand, I cannot see how to reduce the number of needed random numbers without affecting the randomness of the generated deck. Note that manipulation of strings is much slower than manipulation of numbers in almost all programming languages, so it is preferable to operate on the cards in the form of numbers. Since there are a billion decks, the storage becomes an issue. One of the most efficient ways to encode a deck is to encode it as a 52-digit number in base 13. However, it is not very convenient to encode to and decode from that big number. Another way is, as the questioner mentioned to me in the chatroom, to encode each card in a nibble, i.e., four bits, similar to BCD, which is compact enough while much easier to manipulate. Also note that compression probably cannot do much to reduce the storage of this bunch of random numbers. As Yuval mentioned, an obvious suggestion is to take advantage of a fast programming language such as C. Let us not consider assembly languages yet. Let me conclude this answer with the strategy taken by the questioner, showing that the art of applying computer science to real-life situations may lead to very different directions. I decided to just store the 1 billion shuffled decks in a disk file (about 26GB for 1 billion decks), then I can just reread it when I want to do other analysis. Generating it once will isolate the speed reduction and then the analysis part should "rip" (relatively speaking).
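The Fisher–Yates pseudocode above translates almost line for line into Python; this sketch is purely illustrative (a compiled language would be much faster for a billion decks, as the answer notes), generating a rank-only deck with four copies of each of the 13 ranks:

```python
import random

# Four of each rank 1..13, 52 cards total
RANKS_PER_DECK = [r for r in range(1, 14) for _ in range(4)]

def random_rank_deck(rng=random):
    """Fisher-Yates shuffle of the 52 rank-only cards."""
    deck = RANKS_PER_DECK[:]
    for i in range(51, 0, -1):
        j = rng.randrange(i + 1)  # random j with 0 <= j <= i
        deck[i], deck[j] = deck[j], deck[i]
    return deck
```

Passing a seeded random.Random makes runs reproducible, which is handy when comparing Monte Carlo results across code changes.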
{ "domain": "cs.stackexchange", "id": 12402, "tags": "algorithms" }
Booking system for shows
Question: Kindly accept my apologies as Java is something my mind cannot digest no matter what I do. At the moment I have an assignment and have written a piece of code and have used Constructor (I think so) but not sure if I have used it correctly and if Yes then will need some guidance in order to refactor the code. Anyhow I am writing code for a Booking System where user is given option to input value and create a new Show along with date. I have achieved this in the below code but note sure that I have used the constructor properly or not. import java.io.*; import java.util.*; public class LocalPlay extends Play { private static String playName; private static String playDate; private LocalPlay(String playName, String playDate) { this.playName = playName; this.playDate = playDate; } public String getShowName() { return playName; } public String getShowDate() { return playDate; } public void setShowName(String value) { this.playName = value; } public void setShowDate(String date) { this.playDate = date; } public static void lPlayDetails() { try { int selection = 0; Scanner option = new Scanner(System.in); System.out.println(); System.out.println("Press 1 to Add a New Play/ Show: "); System.out.println("Press 2 to View existing Play: "); System.out.println("Press 3 to delete existing Play: "); System.out.println("Press 4 for Previous Menu: "); System.out.println(); System.out.print("Kindly make the selection: "); selection = option.nextInt(); if (selection == 1) { addPlay(); } else if (selection == 2) { viewPlay(); } else if (selection == 3) { deletePlay(); } else if (selection == 4) { managePlay(); } else while (selection != 1 || selection != 2 || selection != 3) { System.out.println(); System.out.println(selection + " is not a valid option. 
"); managePlay(); } } catch (InputMismatchException ime) { System.out.println("Invalid option selected."); System.out.println("Try again"); lPlayDetails(); } } private static void addPlay() { String name; String date; Scanner option = new Scanner(System.in); System.out.print("Enter name of Play: "); name = option.nextLine(); System.out.print("Enter date of Play: "); date = option.nextLine(); LocalPlay lp = new LocalPlay(name, date); System.out.println("The name of play is " + playDate + " date played is " + playDate); lPlayDetails(); BufferedWriter writer = null; try { writer = new BufferedWriter(new FileWriter("play.txt")); writer.write(name + " " + date); } catch (IOException e) { } finally { try { if (writer != null) writer.close(); } catch (IOException e) { } } return; } private static void deletePlay() { System.out.println("Method not implemented"); lPlayDetails(); } private static void viewPlay() { System.out.println("Method not implemented"); lPlayDetails(); } } Answer: Structure Generally, a method should do one thing. For example addPlay should add a play (although it's an odd method for a play class, as a play generally cannot own another play). But your method doesn't just add something, it actually retrieves user input, prints stuff, writes stuff to a file, and then calls a seemingly unrelated method, which asks for more user input and processes it in some way. It's also very unclear what your LocalPlay class is actually supposed to do. Looking at the fields, constructor, and getter/setter, it looks as if it holds the data for one play, but then it is also possible to add a play, which doesn't make sense. You should definitely try to separate the holding of the data from the parsing of user input. You should also try to better define the tasks and differences of lPlayDetails and addPlay, and you should try to remove the confusing call structure (the functions call each other, which makes it difficult to follow the control flow of your program). 
Misc there is no reason to make the fields (or the methods) static. your naming is a bit confusing. There doesn't seem to be a difference between a play and a show, you seem to be using both interchangeably. In that case, use just one of the terms for clarity. if a method is not implemented, it shouldn't do anything. Instead of doing something totally different, throw something like an UnsupportedOperationException. lPlayDetails: It's very unclear what this method does. The name is vague and it is missing a JavaDoc comment. It is also very unclear what the exact difference to addPlay is. it is very unclear to me why you create a new LocalPlay in addPlay. There doesn't seem to be a reason for it, and it seems to be unused.
{ "domain": "codereview.stackexchange", "id": 19539, "tags": "java, object-oriented, homework" }
Avoiding code duplication in multiple except blocks in Logger class
Question: Context: A logger records events which contain an area, a level, a message and an option indicating that the source replaces another one. The logger attempts to send the message through HTTP, and on failure, saves it locally. If HTTP service times out, the logger stops attempting using HTTP and stores the message locally directly. Concern: I'm concerned with code duplication at the end of the method. The same call is repeated three times. In my opinion: Creating a function within the method would be an overkill. Creating a separate class to encompass the four elements and pass an instance of this class would be an overkill as well. Moving the call to the end and using return would complicate the logic. What are my options? class Logger(): ... def _log(self, area, level, message, replacing=None): ... if self._timedOut: self._logToFile(area, level, message, replacing) # ← This line... return try: self._post(json) except urllib.error.HTTPError: self._logToFile(area, level, message, replacing) # ← ... is duplicated here... except socket.timeout: self._logToFile(area, level, message, replacing) # ← ... and here. self._timedOut = True Answer: Method decomposition looks like the way to go here: def _try_post(self, json): """Try posting the message (unless this timed out previously). Return True if successful, False otherwise. """ if not self._timedOut: try: self._post(json) return True except urllib.error.HTTPError: pass except socket.timeout: self._timedOut = True return False
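With such a helper, the original method collapses to a single fallback call. A self-contained sketch follows — `_post`, `_logToFile`, and the payload dict are stubs standing in for the question's unshown implementations:

```python
import socket
import urllib.error

class Logger:
    def __init__(self):
        self._timedOut = False
        self.saved = None

    # Stubs standing in for the question's real implementations.
    def _post(self, json):
        raise socket.timeout()

    def _logToFile(self, area, level, message, replacing):
        self.saved = (area, level, message, replacing)

    def _try_post(self, json):
        """Try posting (unless a previous attempt timed out); report success."""
        if not self._timedOut:
            try:
                self._post(json)
                return True
            except urllib.error.HTTPError:
                pass
            except socket.timeout:
                self._timedOut = True
        return False

    def _log(self, area, level, message, replacing=None):
        json = {"area": area, "level": level, "message": message}
        if not self._try_post(json):
            self._logToFile(area, level, message, replacing)
```

The duplicated call now appears exactly once, and the timed-out shortcut is folded into the helper's `if not self._timedOut` guard.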
{ "domain": "codereview.stackexchange", "id": 9028, "tags": "python, http, logging" }
Dichromate in acidic medium
Question: We were taught that $\ce{Cr_2O_7^{2-}}$ (dichromate) in $\ce{H_2SO_4}$ gives $\ce{CrO_3}$ $$\ce{Cr_2O_7^{2-} +H_2SO_4 -> CrO_3 + ...}$$ Link: wikipedia But we know that dichromate in acidic medium converts to $\rm Cr(III)$. $$\ce{ Cr_2O^{2−}_7 + 14 H3O+ + 6 e− → 2 Cr^{3+} + 7 H2O}$$ Link: wikipedia How is this possible? Answer: Both statements on the behaviour of $\ce{Cr2O7^{2-}}$ are true, but you are confusing two different reactions. The formation of chromium trioxide from (sodium) dichromate in sulfuric acid is not a redox reaction. There is no partner that could be oxidized by $\ce{Cr2O7^{2-}}$. Concentrated sulfuric acid is an oxidant itself and metals like copper are dissolved (oxidized to $\ce{Cu^{2+}}$) while no hydrogen is generated. Remember also that sulfuric acid is a strong, oxidizing and dehydrating acid. This effect is used in the production of $\ce{CrO3}$, which can be considered the anhydride of chromic acid ($\ce{H2CrO4}$). Note that chromium in $\ce{Cr2O7^{2-}}$, $\ce{H2CrO4}$, and $\ce{CrO3}$ always has the same oxidation state. The second reaction that you mentioned does happen too, and this is indeed part of a redox reaction in which $\ce{Cr(VI)}$ is reduced to $\ce{Cr(III)}$. However, this can only happen when a suitable reaction partner is present, which in turn is oxidized by $\ce{Cr2O7^{2-}}$.
{ "domain": "chemistry.stackexchange", "id": 6722, "tags": "inorganic-chemistry, redox, transition-metals" }
Why loudness is dependent on number of molecules?
Question: Loudness is dependent on amplitude, which in turn is dependent on the number of molecules. The more the number, the more the amplitude. But isn't amplitude just a measure of how displaced the particle can be from its mean position? Why does the number of particles affect that? Answer: Loudness doesn't depend on the number of molecules, at least not directly. The loudness depends on the amplitude of the acoustic pressure, not the total pressure. The total pressure depends on the number of molecules (and temperature, etc.), but the acoustic pressure is the difference between the total pressure and the ambient pressure. The acoustic pressure amplitude has everything to do with the source of the disturbance and relatively little to do with the propagation medium. You can have arbitrarily small acoustic pressure regardless of the medium. The main way that the number of molecules (and other properties of the propagation medium) affect sound is through the sound speed and the mass density. Changes in these properties can lead to reflection-transmission problems in which not all of the energy passes through the interface. From this perspective, the number of molecules in air versus the number and type of molecules in your ear organs actually affect how much of the sound energy gets in to your ear to be detected. So, from this perspective the number of air molecules does affect the loudness.
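The gap between acoustic and total pressure can be made concrete with a rough calculation using the standard sound-pressure-level formula, SPL = 20·log10(p/p₀) with reference pressure p₀ = 20 µPa (the numbers below are illustrative):

```python
import math

def spl_db(p_pa, p_ref=20e-6):
    """Sound pressure level in dB for an RMS acoustic pressure given in pascals."""
    return 20 * math.log10(p_pa / p_ref)

# A loud 1 Pa acoustic pressure is about 94 dB SPL...
loud = spl_db(1.0)
# ...yet 1 Pa is only about a hundred-thousandth of the ~101325 Pa ambient pressure,
# illustrating that the acoustic pressure is a tiny ripple on the total pressure.
fraction = 1.0 / 101325
```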
{ "domain": "physics.stackexchange", "id": 76563, "tags": "waves, acoustics, vibrations" }
Product of all but one number in a sequence
Question: I was given the following prompt in a coding interview: Given an array of integers, return a new array such that each element at index i of the new array is the product of all the numbers in the original array except the one at i. For example, if our input was [1, 2, 3, 4, 5], the expected output would be [120, 60, 40, 30, 24] I solved this in two ways: fun multiplies all elements together in the first iteration, and then loops again and divides by the number at that position fun2 does not use division, and instead iteratively builds up the sum in each index #include <stdio.h> #include <stdlib.h> int fun(int* nums, int arr_size) { int sum; int i; for(i=0, sum=1; i<arr_size; i++) sum*=nums[i]; for(i=0; i<arr_size; i++) nums[i]=sum/nums[i]; return 0; } int fun2(int* nums, int arr_size) { int i,j; int sum=1; int new_arr[arr_size]; for(i=0; i<arr_size; i++) { for(j=0; j<arr_size; j++) { if(i!=j) sum*=nums[j]; //skip member same index in the loop } new_arr[i]=sum; sum=1; } memcpy(nums, new_arr, arr_size*sizeof(int)); return 0; } int main(void) { /*Given an array of integers, return a new array such that each element at index i of the new array is the product of all the numbers in the original array except the one at i. For example, if our input was [1, 2, 3, 4, 5], the expected output would be [120, 60, 40, 30, 24] */ int nums[] = {1, 2, 2, 4, 6}; int size = sizeof(nums)/sizeof(nums[0]); int i; fun(nums, size); for (i = 0; i < size; i++) printf("%d ", nums[i]); //what if you can't use division? printf("\n"); int nums2[] = {1, 2, 2, 4, 6}; fun2(nums2, size); for (i = 0; i < size; i++) printf("%d ", nums2[i]); return 0; } Answer: Here are some things that may help you improve your code. Use all required #includes The code uses memcpy, so it should #include <string.h>. It might still compile on your machine, with your compiler, but it's not portable.
Think about potential errors As one of the comments correctly notes, if one of the entries has the value of zero, this line will have a problem: nums[i]=sum/nums[i]; Also, what happens if the passed arr_size is zero or negative? What should the function return if there is exactly one item in the array? What if the passed pointer is NULL? Follow directions exactly The problem says to "return a new array" but that is not really what this code is doing. This code is overwriting the input array. One of the problems with that is that it's not possible to call this with a const pointer as mentioned in the next suggestion. It also means that rather than returning a meaningless constant value in all cases, the function should probably return a pointer. Use const where practical As mentioned above, the code should return a new array rather than overwriting the passed one. I would suggest that the function should be something like this: int* exclusive_product(const int* nums, size_t nums_size) Note that first, we use const and second, we use size_t rather than int for the second argument to more clearly indicate the type of variable we are expecting. Use better variable names I would say that nums, size and i are good variable names, but that fun and fun2 and definitely sum are not. The problem is that fun doesn't tell the reader anything about what the code is supposed to do and sum is actually misleading (it's a product, not a sum). Think about an efficient way to solve this The \$O(n^2)\$ code you have in fun2 is not a terrible way to solve the problem and has the advantage of being obviously correct. When I interview people, I typically like such answers because it's much easier to make slow correct code fast than it is to make fast incorrect code correct. However, in a good interview, I like to ask the candidate to make comments on his or her own code, including any limitations, assumptions or potential improvements that might be made. 
In this case, it helps if we think mathematically about the final values in the resulting array \$B\$ from input array \$A\$. For example, we know that every value \$B_j\$ can be expressed as the product $$\displaystyle B_j = \prod_{i=0}^{j-1} A_i \prod_{i=j+1}^{n-1} A_i$$ if \$n\$ is the length of the array. This suggests a more efficient approach I'll leave for you to figure out.
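For what it's worth, the two-pass approach the formula suggests can be sketched as follows (in Python for brevity, and spelling out the exercise the answer deliberately leaves open — consult it only after trying yourself):

```python
def exclusive_product(nums):
    """Return a new list where entry j is the product of all nums except nums[j].

    Runs in O(n) time with no division, so it also handles zeros correctly.
    """
    n = len(nums)
    result = [1] * n
    # First pass: result[j] accumulates the product of everything left of j.
    left = 1
    for j in range(n):
        result[j] = left
        left *= nums[j]
    # Second pass: multiply in the product of everything right of j.
    right = 1
    for j in range(n - 1, -1, -1):
        result[j] *= right
        right *= nums[j]
    return result
```

Each pass realizes one of the two partial products in the displayed formula, so their pointwise product is exactly \$B_j\$.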
{ "domain": "codereview.stackexchange", "id": 39907, "tags": "c, array, interview-questions" }
Why is circuit inverse not working for EfficientSU2?
Question: For some reason I get the following error when attempting to find the inverse of the EfficientSU2 VQE variational form: TypeError: 'NoneType' object is not reversible My code is as follows: var_form = EfficientSU2(6, entanglement="linear") var_form_inv = var_form.inverse() # error thrown on this line Is there a bug in the implementation of the inverse method? And if so, how can I implement a working inverse function? Note qiskit.__qiskit_version__: {'qiskit-terra': '0.15.2', 'qiskit-aer': '0.6.1', 'qiskit-ignis': '0.4.0', 'qiskit-ibmq-provider': '0.9.0', 'qiskit-aqua': '0.7.5', 'qiskit': '0.21.0'} Answer: EfficientSU2 is a BlueprintCircuit and does not populate its internal data field until you try to access it. In this case, when you are calling the inverse function the data are still empty (None) and this is the error you are getting. It is probably a bug and should be fixed. Nevertheless, as a workaround for now you can try: var_form = EfficientSU2(6, entanglement="linear") # build the circuit var_form._build() # or just print it print(var_form) var_form_inv = var_form.inverse()
{ "domain": "quantumcomputing.stackexchange", "id": 1964, "tags": "qiskit, programming, ibm-q-experience, vqe" }
Reclassifying movies by theme
Question: Any efficient way to solve the following problem assuming data is large. I solved the problem but how can I improve the code, which will make it efficient. any suggestions? Data: movie_sub_themes = { 'Epic': ['Ben Hur', 'Gone With the Wind', 'Lawrence of Arabia'], 'Spy': ['James Bond', 'Salt', 'Mission: Impossible'], 'Superhero': ['The Dark Knight Trilogy', 'Hancock, Superman'], 'Gangster': ['Gangs of New York', 'City of God', 'Reservoir Dogs'], 'Fairy Tale': ['Maleficent', 'Into the Woods', 'Jack the Giant Killer'], 'Romantic':['Casablanca', 'The English Patient', 'A Walk to Remember'], 'Epic Fantasy': ['Lord of the Rings', 'Chronicles of Narnia', 'Beowulf']} movie_themes = { 'Action': ['Epic', 'Spy', 'Superhero'], 'Crime' : ['Gangster'], 'Fantasy' : ['Fairy Tale', 'Epic Fantasy'], 'Romance' : ['Romantic']} themes_keys = movie_themes.keys() theme_movies_keys = movie_sub_themes.keys() #Iterate in movie_themes #Check movie_themes keys in movie_sub_keys #if yes append the movie_sub_keys into the newdict newdict = {} for i in range(len(themes_keys)): a = [] for j in range(len(movie_themes[themes_keys[i]])): try: if movie_themes[themes_keys[i]][j] in theme_movies_keys: a.append(movie_sub_themes[movie_themes[themes_keys[i]][j]]) except: pass newdict[themes_keys[i]] = a # newdict contains nested lists # Program to unpack the nested list into single list # Storing the value into theme_movies_data theme_movies_data = {} for k, v in newdict.iteritems(): mylist_n = [j for i in v for j in i] theme_movies_data[k] = dict.fromkeys(mylist_n).keys() print (theme_movies_data) Output: {'Action': ['Gone With the Wind', 'Ben Hur','Hancock, Superman','Mission: Impossible','James Bond','Lawrence of Arabia','Salt','The Dark Knight Trilogy'], 'Crime': ['City of God', 'Reservoir Dogs', 'Gangs of New York'], 'Fantasy': ['Jack the Giant Killer','Beowulf','Into the Woods','Maleficent','Lord of the Rings','Chronicles of Narnia'], 'Romance': ['The English Patient', 'A Walk to 
Remember', 'Casablanca']} Apologies for not properly commenting the code. I am more concerned about the running time. Answer: theme_movies_data and newdict are bad variable names, change them to ones easier to read. This will reduce the amount of comments you need in your code. You can simplify your code if you stop using range and use dict.iteritems more. You shouldn't need your try. You would know this if you use range less. You don't need dict.fromkeys(mylist_n).keys() it's just useless. new_dict = {} for key, themes in movie_themes.iteritems(): a = [] for theme in themes: if theme in movie_sub_themes: a.append(movie_sub_themes[theme]) new_dict[key] = a theme_movies = {} for key, movie_groups in new_dict.iteritems(): theme_movies[key] = [ movie for movies in movie_groups for movie in movies ] print(theme_movies) You can remove the need for the second loop if you use a.extend. You can change the creation of a to a comprehension. You can change the creation of theme_movies to a dictionary comprehension. theme_movies = { key: sum( (movie_sub_themes.get(theme, []) for theme in themes), [] ) for key, themes in movie_themes.iteritems() } print(theme_movies) Alternatively, if you don't like sum: theme_movies = { key: [ movie for theme in themes for movie in movie_sub_themes.get(theme, []) ] for key, themes in movie_themes.iteritems() } print(theme_movies)
{ "domain": "codereview.stackexchange", "id": 30719, "tags": "python, performance, python-2.x, hash-map" }
Momentum and infinitesimal translation
Question: My problem is all about this previous question. I'm trying to understand the reasoning behind the definition of the momentum operator in quantum mechanics. Sakurai tells me that for the infinitesimal translation of the previously cited question: $$X=x+dx$$ $$P=p$$ I have the following generating function for this transformation: $$F(x,P)=xP+pdx$$ OK, let's verify that. I know that this is a type-2 generating function, so the following must hold: $$p=\frac{\partial F(x,P)}{\partial x}$$ $$X=\frac{\partial F(x,P)}{\partial P}$$ I also know from the cited previous question that I can write $dx$ as $\varepsilon f(x)$. OK, so I get: $$p=\frac{\partial }{\partial x}[xP+p\varepsilon f(x)]=P+p\varepsilon f'(x)$$ $$X=\frac{\partial }{\partial P}[xP+p\varepsilon f(x)]=x$$ then: $$p=P+p\varepsilon f'(x) \ \ \Rightarrow \ \ P=p(1-\varepsilon f'(x))$$ and so I get the following transformation: $$X=x$$ $$P=p(1-\varepsilon f'(x))$$ This is not what I was expecting according to Sakurai (to be precise, Sakurai's book titled Modern Quantum Mechanics, at page 44). So Question one: Why do I get this result? But let's suppose that I don't have this problem and the calculation turns out fine; then I still have another couple of problems: we all know that in QM the operator for infinitesimal translation is: $$T(dx)=1-iKdx$$ where 1 represents the identity matrix. Sakurai states that this strongly resembles the above-mentioned generating function $F$, so he states that we can speculate that the operator $K$ and the momentum $p$ are correlated in some way. But in one case, the QM case, the operator $K$ appears in the formula for the infinitesimal translation, whereas in the classical case $p$ appears in the generating function for the translation and not in the translation formula itself. Furthermore, the resemblance is strong because Sakurai states that $xP$ is the generating function for the identity. This makes the resemblance even more convoluted to my eyes.
So Question two: Why does this reasoning about the correlation between $K$ and $p$ hold? One last thing: of course, knowing the formula for an infinitesimal translation we can find the formula for a finite translation (in QM): $$T(\Delta x)=\exp\left(-\frac{ip\Delta x}{\hbar}\right)$$ this is completely fine for me; however, sometimes the argument is made that the fact that we can write the finite translation operator in this way is proof/definition that $p$ is the generator of the infinitesimal translation. Question three: Is this good reasoning? I truly hope that I made myself clear. These problems are bugging me a lot. I know that part of my question has been partially covered in the other question I cited, but I hope that this still qualifies as a non-duplicate question. Answer: There are two issues here. The first is that if $dx$ is expressed as a function of $x$, then that means that the coordinate change corresponds to adding a position-dependent shift to the position coordinate. Adding a constant shift is what is prescribed here, so $dx$ has no $x$ dependence. If it would make you more comfortable, you could write it as $X = x + a$ instead. The second issue that you are running into is that during the first phase of the procedure, you write $$p = \frac{\partial F}{\partial x}$$ $$X = \frac{\partial F}{\partial P}$$ but these are, in general, implicit equations for the functions $p=p(X,P)$ and $x=x(X,P)$. In this case, $$p(X,P) = \frac{\partial}{\partial x}\big[x(X,P)\cdot P + p(X,P) dx\big] = P$$ $$X = \frac{\partial}{\partial P}\big[x(X,P)\cdot P + p(X,P) dx\big] = x(X,P) + \frac{\partial p}{\partial P}dx = x(X,P)+dx$$ where in the second line, we have used the fact that $p(X,P)=P$ from the first line. These relations can be (trivially) inverted to give $$P(x,p) = p$$ $$X(x,p) = x + dx$$ Why does this reasoning about the correlation between $K$ and $p$ hold?
Sakurai claims that $$F =xP + p dx $$ $$T = 1 + (-iK) dx$$ look similar in the sense that $$F= (\text{identity generating function}) + p dx$$ $$T= (\text{identity operator}) + (-iK) dx$$ which suggests that $-iK$ might be a good choice for the momentum operator. This is meant to be merely suggestive. If you learn about the differential geometry of Hamiltonian mechanics, you find that the momentum is indeed the generator of spatial translations in a very precise sense. For now, consider it a plausibility argument. However, sometimes the argument is made that the fact that we can write the finite translation operator in this way is proof/definition that $p$ is the generator of the infinitesimal translation. That is more or less the definition of a generator. To be more precise, if you have some Lie group $G$ which represents a set of transformations, then it will possess a corresponding Lie algebra $\frak{g}$ whose elements roughly correspond to infinitesimal translations. $\frak{g}$ is in particular a vector space which can be equipped with a basis $\{A_i\}$, and the elements of that basis are called the generators of the group because an arbitrary$^\dagger$ group element $g$ can be expressed as $$g=\exp\left[-i\sum_i c_i A_i\right]$$ for some constants $c_i$. In this case, the group $G$ is the group of translation operators. Since a finite translation can be expressed as $$g=\exp\left[-i \left(\frac{\Delta x}{\hbar}\right)p\right]$$ then that implies that $p$ is a generator of the group. $^\dagger$ At least, in a neighborhood of the identity element
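The answer's first point can be compressed into one display: writing the shift as a constant $a$ rather than $\varepsilon f(x)$, and using $p = P + O(a)$ so that to first order the shift term of $F$ may be written with $P$,

```latex
F(x,P) = xP + aP
\quad\Longrightarrow\quad
p = \frac{\partial F}{\partial x} = P,
\qquad
X = \frac{\partial F}{\partial P} = x + a,
```

which is exactly the translation $X = x + a$, $P = p$ that the implicit-function bookkeeping in the answer produces.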
{ "domain": "physics.stackexchange", "id": 68576, "tags": "quantum-mechanics, classical-mechanics, momentum, hamiltonian-formalism" }
Sort todo.txt items by due date in Python 3
Question: I am using the following format for my task management: https://github.com/todotxt/todo.txt Do some stuff +uni due:2022-12-31 Write some paper +uni due:2023-01-10 I am not using the syntax for priority. I know there is a command-line tool one can install to manage items but I just needed a sort function and decided to write it myself. I am not very experienced in Python so I would like to have your insights #!/usr/bin/env python3 import re from datetime import datetime, time import argparse DATE_PATTERN = re.compile(r"due:(\d{4}-\d{2}-\d{2})") def get_due_date(item): match = DATE_PATTERN.search(item) if not match: return time.max date_str = match.group(1) return datetime.strptime(date_str, "%Y-%m-%d").date() def sort_todos(filepath): with open(filepath, "r") as fh: lines = fh.readlines() lines.sort(key=get_due_date) with open(filepath, "w") as fh: fh.write("".join(lines)) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("filepath", help="Path to the file with todo items") args = parser.parse_args() sort_todos(args.filepath) Answer: It's a short script/utility, but it's very clear and well written. There's honestly not much to give feedback on. 
Some (very) minor improvements: use pathlib.Path and define this as the type for the filepath arg; that way you can just use Path.read_text() and Path.write_text and not have to use the context handlers sort your imports alphabetically, use type hints, and increase the spacing (PEP8 recommends 2 empty lines between global functions) turn get_due_date() into an internal/local function - it's only used by sort_todos() anyway store the date format in a variable (you could even have this as an argument to be more flexible with your todo-files) #!/usr/bin/env python3 import argparse from datetime import datetime, date from pathlib import Path import re DATE_PATTERN = re.compile(r"due:(\d{4}-\d{2}-\d{2})") DATE_FMT = "%Y-%m-%d" def sort_todos(filepath: Path) -> None: def get_due_date(item: str) -> date: match = DATE_PATTERN.search(item) if not match: return date.max date_str = match.group(1) return datetime.strptime(date_str, DATE_FMT).date() lines = filepath.read_text().split("\n") lines.sort(key=get_due_date) filepath.write_text("\n".join(lines)) if __name__ == "__main__": parser = argparse.ArgumentParser("Utility to sort todo-items in files") parser.add_argument("filepath", type=Path, help="File to sort") args = parser.parse_args() sort_todos(args.filepath)
{ "domain": "codereview.stackexchange", "id": 44276, "tags": "python-3.x, sorting, regex, io, to-do-list" }
Hangman Game made with Python
Question: I know that some features overlap but this is because I added different functions to the code at different stages when I thought of them. Note: The code works on both Python 3 and 2. import random,sys Hangman = [''' +---+ | | | | | =========''',''' +---+ | | | | | | =========''', ''' +---+ | | O | | | | =========''', ''' +---+ | | O | | | | | =========''', ''' +---+ | | O | /| | | | =========''', ''' +---+ | | O | /|\ | | | =========''', ''' +---+ | | O | /|\ | / | | =========''', ''' +---+ | | O | /|\ | / \ | | ========='''] words = 'ant baboon badger bat bear beaver camel cat clam cobra cougar coyote crow deer dog donkey duck eagle ferret fox frog goat goose hawk lion lizard llama mole monkey moose mouse mule newt otter owl panda parrot pigeon python rabbit ram rat raven rhino salmon seal shark sheep skunk sloth snake spider stork swan tiger toad trout turkey turtle weasel whale wolf wombat zebra'.split() def menu(): pass def cpuPlays(): pass def GetRandomWord(word): chosenWord = random.choice(word) return chosenWord def wordLength(word,wordList): userEntury = input("Would you like Tier 1 or Tier 2 words? 
(1 or 2):") if userEntury == '1': index = wordList.index(word) index-=1 if len(word) >= 5: #print(word,'1') return word else: for n in range(len(wordList)): index+=1 if len(wordList[index]) >= 5: word = wordList[index] #print(word,'2') return word elif len(wordList) == index: index = len(wordList) - index while len(wordList[index])<5: index +=1 if len(wordList[index]) >= 5: word = wordList[index] #print(word,'3') return word elif userEntury == '2': index = wordList.index(word) index-=1 if len(word) <= 4: #print(word,'1') return word else: for n in range(len(wordList)): index+=1 if len(wordList[index]) <= 4: word = wordList[index] #print(word,'2') return word elif len(wordList) == index: index = len(wordList) - index while len(wordList[index])>4: index +=1 if len(wordList[index]) <=4: word = wordList[index] #print(word,'3') return word def display(hangmanPic,secretWord,numWrongLetters,correctLetters): blanks = ['-']*len(secretWord) # makes list of strings instead of putting all into one string for i in range(len(secretWord)):#repleaces blank letters with correct letters if secretWord[i] in correctLetters: blanks[i] = secretWord[i] #looks through each string and changes it if needed print("Missing Letters:") for letter in blanks: print(letter,end='') print(hangmanPic[numWrongLetters]) def getGuess(alreadyGuessed): while True: print("Guess Letter:") guess = input() guess = guess.lower() if len(guess) != 1: print("Please enter only 1 letter.") elif guess in alreadyGuessed: print("Letter is already guessed.") elif guess.isdigit(): print("Please enter a letter not integer.") else: return guess def playAgain(): print("Do you want to play again?(yes or no)") return input().lower().startswith('y') print("H A N G M A N") correctLetters = '' guessedLetters = '' wrongLetters = 0 randomWord = GetRandomWord(words) #print(randomWord) gameDone = False GameIsRunning = True WordLength = wordLength(randomWord,words) while GameIsRunning: 
display(Hangman,WordLength,wrongLetters,correctLetters) guess = getGuess(correctLetters + guessedLetters) if guess in randomWord: correctLetters += guess #Checks if player has won foundAllLetters = True for i in range(len(randomWord)): if randomWord[i] not in correctLetters: foundAllLetters = False break if randomWord[i] in correctLetters: foundAllLetters = True print("Well Done You found what the missing word is!") gameDone = True else: wrongLetters +=1 guessedLetters += guess #Check if player has lost if wrongLetters == len(Hangman)-1: print(Hangman[7]) print("""You have ran out of guesses the word was %s. You had %d correct guess(es) out of %d in total. """ % (randomWord,len(correctLetters),len(Hangman))) gameDone = True #Ask player to play again if gameDone == True: if playAgain(): wrongLetters = 0 guessedLetters = '' correctLetters = '' randomWord = GetRandomWord(words) WordLength = wordLength(randomWord,words) gameDone = False else: GameIsRunning = False exit() Answer: As per features: It would be nice if I didn't have to press Enter after each letter. Use unbuffered input. Same goes with Tier 1/2 and Yes/No. This way, you don't even have to use the "Please enter only 1 letter" message. If you don't necessarily want this to be platform-independent, you should use a curses window and clear it after each letter so the gallows will stay on top of the screen instead of repeating again and again. You could then improve by updating only those parts of screen which need updating (eg. the gallows and letters). Explain what is the difference between Tier 1 and Tier 2. Perhaps let the user provide a file with words to choose from. Would be simple enough to implement. Other remarks: In function wordLength, you forgot to account for invalid input. This results in a TypeError. You should print an error message and ask again. Sometimes, at the end when I loose and the program prints the word, it's clearly incorrect. Eg. letters are m--- and it says the word was llama. 
The function GetRandomWord seems pointless, it's the same as random.choice. If the argument is called words, then GetRandomWord(words) is not much more readable than random.choice(words). Although, one might argue that if you wanted to switch to another implementation of choice, it would be easier. But my guess is that wasn't your intention. Create a main function and check if __name__ == '__main__'. If yes, call that main function, otherwise don't do anything. It might not be necessary in this program, but if someone else wanted to use functions from your module, they would have to import it. And when they do, your game is going to start playing, which is not what they want.
{ "domain": "codereview.stackexchange", "id": 24969, "tags": "python, hangman" }
Basics of osmosis. What about excluded volume?
Question: I may not understand osmosis very well. Let us suppose two compartments filled with water, separated by a semi-permeable membrane. At equilibrium, both levels are equals. Let us introduce now a given volume of solute in one of the compartments (say right). At first, the level of the right compartment will increase, to accommodate the extra volume of solute. Because of this, the concentration of water in the right compartment has decreased, and is no more at equilibrium with the left compartment. Thus a net flux of water from left to right occurs until equilibrium is reached. That's what I understand from what I read so far, but I have a problem with this. In this particular example, it is true to say that the water concentration on the right side decreased when the solute is introduced, but because of the excluded volume (from the solute), the water pressure should remain the same, and thus I would not expect a net flux. Unless there is other effect I don't consider? Thank you for your explanation. Answer: The partial pressure of the water in the solution does, indeed, decrease. The total pressure of water plus solute, i.e. the pressure of the solution as a whole, stays the same. Is this what you are asking? How can we see this? Pressure is force per area. Since we didn't change the area, and because forces from different atoms/molecules in the solution are additive, the partial pressures are additive. This is called Dalton's Law. Strictly speaking, this is only true, if there are no internal forces between the components of a mixture, so it holds reasonably well for many gas mixtures at low pressure and high enough temperature, but it does not hold for concentrated ionic solutions, for which we also have to calculate strong interactions between the solvent atoms/molecules and the solute ions.
{ "domain": "physics.stackexchange", "id": 15940, "tags": "statistical-mechanics, osmosis" }
Do Halzen and Martin use p and n to represent complex numbers?
Question: In Halzen and Martin's Quarks and Leptons, on page 42, the $SU(2)$ isospin transformation represented by $e^{i\boldsymbol{\theta}\cdot\boldsymbol{\tau}/2}$ is said to act on the column represented by $$|\psi\rangle=\begin{pmatrix}\mathrm{p} \\\mathrm{n}\end{pmatrix}\tag{1}$$ with $$\mathrm{p}=\begin{pmatrix}1\\0\end{pmatrix}\hspace{0.3cm} \text{and}\hspace{0.3cm}\mathrm{n}=\begin{pmatrix}0 \\1\end{pmatrix}\tag{2}$$ I think this notation, i.e., Eq. (1) used in conjunction with Eq. (2), is confusing. In the basis $|\mathrm{p}\rangle=\begin{pmatrix}1\\0\end{pmatrix}$, $|\mathrm{n}\rangle=\begin{pmatrix}0 \\1\end{pmatrix}$, the state $|\psi\rangle$ should be represented by a 2-component column (1) with its entries $\mathrm{p}$ and $\mathrm{n}$ being complex numbers. So are they using the notation that $$|\psi\rangle=\mathrm{p}|\mathrm{p}\rangle+\mathrm{n}|\mathrm{n}\rangle?\tag{3}$$ In this regard, Griffiths's Introduction to Elementary Particles, I think, uses a clearer notation. They represent a general nucleon state $|N\rangle$ as $$|N\rangle=\alpha|p\rangle+\beta|n\rangle\tag{4}$$ with $$|p\rangle=\begin{pmatrix}1\\0\end{pmatrix}\hspace{0.3cm} \text{and}\hspace{0.3cm}|n\rangle=\begin{pmatrix}0 \\1\end{pmatrix}\tag{5}$$ so that the state $|N\rangle$ is indeed represented by a 2-component column vector with two complex numbers $\alpha$ and $\beta$. My question Do Halzen and Martin use the notation I wrote in Eq. (3) with $\mathrm{p}$, $\mathrm{n}$ being complex numbers? I ask this because it is not clear from their notation. Answer: This is definitely bad (unclear) notation. There are a couple ways I can think of to interpret it: first, and most sensibly, they are doing exactly what you put forward in your question, namely reusing the same variables for both state labels and coefficients without distinguishing which are which. 
The other possibility I can think of is that they are using the letters in $\begin{pmatrix}\mathrm{p}\\ \mathrm{n}\end{pmatrix}$ to represent particles. In this sense, you shouldn't think of $\begin{pmatrix}\mathrm{p}\\ \mathrm{n}\end{pmatrix}$ as a precisely defined mathematical expression; it's simply abusing the notation to convey how the isospin transformation mixes the particles (or really, their corresponding fields). Note that in this case, there are two almost entirely unrelated meanings for $\mathrm{p}$ and $\mathrm{n}$: $\mathrm{p}$ can mean either a proton, i.e. when it appears in the doublet, or the isospin state of a proton, i.e. when it appears in the definition $\mathrm{p} = \begin{pmatrix}1 \\ 0\end{pmatrix}$; and similarly for $\mathrm{n}$.
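As a concrete illustration of the first reading, Eq. (3) (my own sketch, not from either book): for a rotation about the 2-axis, $\tau_2^2=\mathbb{1}$ gives $e^{i\theta\tau_2/2}=\cos(\theta/2)\,\mathbb{1}+i\sin(\theta/2)\,\tau_2$, which is a real rotation matrix, and acting on $|\mathrm{p}\rangle$ it produces exactly the two coefficients that Eq. (3) calls $\mathrm{p}$ and $\mathrm{n}$:

```python
import math

def su2_rotation_y(theta):
    """exp(i*theta*tau_2/2) with tau_2 = [[0,-i],[i,0]]: a real 2x2 rotation."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, s], [-s, c]]

def apply(U, ket):
    # Ordinary 2x2 matrix acting on a 2-component column.
    return [U[0][0] * ket[0] + U[0][1] * ket[1],
            U[1][0] * ket[0] + U[1][1] * ket[1]]

p, n = [1, 0], [0, 1]                       # |p>, |n> basis columns of Eq. (2)
psi = apply(su2_rotation_y(math.pi / 2), p)
print(psi)                                  # [cos(pi/4), -sin(pi/4)]
```

The output components are the complex numbers multiplying $|\mathrm{p}\rangle$ and $|\mathrm{n}\rangle$; the norm $|\mathrm{p}|^2+|\mathrm{n}|^2=1$ is preserved, as it must be for an $SU(2)$ transformation.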
{ "domain": "physics.stackexchange", "id": 43284, "tags": "particle-physics, symmetry, notation, quantum-chromodynamics, isospin-symmetry" }
Kiln optimization problem
Question: Say I have a kiln for making castings. There are 3 shapes. I need to produce the following castings: 102 of A, 364 of B, 70 of C. I can put 50 molds in the kiln at a time. I can have 75 molds made in any combination. First, what is the optimal combination of molds to make? Second, what is the schedule to make the castings in as few firings as possible? I am trying to wrap my head around this problem. I see that the second part resembles task scheduling problems I've read about, but I have no idea how to tackle the first problem. Any help/insight very much appreciated. Answer: Either there is an additional constraint, or this problem is far simpler than task scheduling. You first have to identify the limiting criterion of your problem, which is the 50-mold maximum in the kiln. Let's call S = A+B+C = 536, the total number of castings. So you cannot do better than 11 firings (11*50 = 550 >= 536). First compute the Euclidean division of the required counts to get the quotient and the remainder; those will be the constant molds and the variable molds to use. - x, a = A/11 = 9, 3 - y, b = B/11 = 33, 1 - z, c = C/11 = 6, 4 So the constant molds are (x, y, z) = (9, 33, 6), that is to say 48 molds per firing. You now have, across the 11 firings, 2 (50-48) variable mold slots each to complete the (a, b, c) = (3, 1, 4) remaining castings. Let's say we want to minimize the number of molds to manipulate. Then just take one extra mold of each of A, B, C to have a total of 51 molds.
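The division-and-remainder computation above can be sketched in a few lines of Python (the variable names are mine, not from the answer):

```python
from math import ceil

# Required castings per shape and the kiln capacity, from the question.
required = {"A": 102, "B": 364, "C": 70}
capacity = 50

# Lower bound on firings: total castings divided by capacity, rounded up.
total = sum(required.values())                 # 536
firings = ceil(total / capacity)               # 11

# Euclidean division: quotient = molds present in every firing,
# remainder = extra castings to spread over the spare slots.
constant = {s: n // firings for s, n in required.items()}
remainder = {s: n % firings for s, n in required.items()}
spare_per_firing = capacity - sum(constant.values())

print(firings, constant, remainder, spare_per_firing)
# 11 {'A': 9, 'B': 33, 'C': 6} {'A': 3, 'B': 1, 'C': 4} 2
```

The 8 leftover castings fit easily in the 2 x 11 = 22 spare slots, confirming that 11 firings suffice.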
{ "domain": "cs.stackexchange", "id": 12991, "tags": "algorithms, optimization" }
setting sensor port permissions
Question: I'm working with different sensors that all require me to run things like: $ sudo chmod a+rw /dev/ttyACM0 $ sudo chmod a+rw /dev/ttyUSB0 to set the port permissions before ROS can see the sensor data. I would like to set these permanently (or somehow in a config or launch file) so I don't need to repeat this step manually each time I run my app. Please advise. Originally posted by TJump on ROS Answers with karma: 160 on 2014-10-20 Post score: 0 Answer: Add yourself to the dialout group using sudo adduser MyUser dialout and log out. Originally posted by tonybaltovski with karma: 2549 on 2014-10-20 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by TJump on 2014-10-20: Thanks Tony!! Comment by jarvisschultz on 2014-10-20: The answer works because those devices you listed are owned by the dialout group. The more general way to do this is with udev rules. See this page: http://www.reactivated.net/writing_udev_rules.html#ownership
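For devices that are not owned by the dialout group, the udev-rule approach mentioned in the comments looks roughly like this (a sketch: the file name and the vendor/product IDs are placeholders; find your sensor's actual IDs with `lsusb`):

```
# /etc/udev/rules.d/99-my-sensors.rules  (hypothetical file name and IDs)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", MODE="0666"
```

After saving the rule, reload it with `sudo udevadm control --reload-rules` and replug the device; matching ports then come up world-readable/writable without any chmod.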
{ "domain": "robotics.stackexchange", "id": 19784, "tags": "ros" }
Deriving gravitational potential energy using vectors
Question: Here is my attempt at a derivation: First you must find a vector function for the gravitational force. By the inverse square law, the magnitude of the gravitational force between two bodies of mass $m$ and $M$ a distance $r$ apart is: $$G \frac{M m}{r^2}$$ The direction of this force points towards the other body. If you let $\vec{r}$ be the position vector from the other body towards you, then $\frac{-\vec{r}}{\|\vec{r}\|}$ gives you a unit radial vector pointing in the direction of the other body. This can be scaled by the magnitude to give the force vector as: $$F(\vec{r}) = G \frac{M m}{\|\vec{r}\|^2} * \frac{-\vec{r}}{\|\vec{r}\|} = -G \frac{M m}{\|\vec{r}\|^3}\vec{r}$$ Now, potential energy is defined as $$ U = -W = -\int_C F(\vec{r}) \cdot d\vec{r}$$ The path along which we take the line integral needs to be from an infinite distance away (which we set as our reference point with zero potential) to our current position. The radial displacement vector $\vec{r}$ can be broken up into a component-wise displacement vector: $$\vec{r} = x\hat{i} + y\hat{j} + z\hat{k}$$ The differential $d\vec{r}$ is then $$d\vec{r} = dx\hat{i} + dy\hat{j} + dz\hat{k}$$ Substituting into the expression for the dot product, we get that: $$ F(\vec{r}) \cdot d\vec{r} = -G \frac{M m}{\left(\sqrt{x^2 + y^2 + z^2}\right)^3}(x\hat{i} + y\hat{j} + z\hat{k}) \cdot (dx\hat{i} + dy\hat{j} + dz\hat{k})$$ This gives $$-\frac{G m M x}{\left(x^2+y^2+z^2\right)^{3/2}}dx -\frac{G m M y}{\left(x^2+y^2+z^2\right)^{3/2}}dy-\frac{G m M z}{\left(x^2+y^2+z^2\right)^{3/2}}dz$$ This can then be integrated term by term. The limits of integration of each should vary from $\infty$ to the current position ($x$, $y$, or $z$). Since gravity is a conservative force (which can be verified mathematically by checking that the y-partial of the x component and the x-partial of the y component are equal), the path you take when computing the work done does not matter -- it depends only on the initial and final positions. 
This gives $$\int_{\infty }^x -\frac{G m M x}{\left(x^2+y^2+z^2\right)^{3/2}} \, dx + \int_{\infty }^y -\frac{G m M y}{\left(x^2+y^2+z^2\right)^{3/2}} \, dy + \int_{\infty }^z -\frac{G m M z}{\left(x^2+y^2+z^2\right)^{3/2}} \, dz$$ This evaluates to $$\frac{G m M}{\sqrt{x^2+y^2+z^2}} + \frac{G m M}{\sqrt{x^2+y^2+z^2}} + \frac{G m M}{\sqrt{x^2+y^2+z^2}} = \frac{3 G m M}{\|\vec{r}\|}$$ Then taking potential as the negative of the work done we get $$U = -\frac{3 G m M}{\|\vec{r}\|}$$ However, this is clearly incorrect as there is a factor of three that should not be there. Did I integrate incorrectly or am I missing something else? Answer: I am not sure what path $C$ you are integrating over. In your definition you evaluate $U(C)$, which for the present force is independent of the explicit path you choose but still depends on the initial and final points, i.e. $U(p_1,p_2)$. In your final result it seems you are actually 'walking' the path from $p_1=(-\infty,y,z)$ to $p_2=(x,y,z)$ three times, only relabeling your coordinate system each time. Thus the factor of $3$ in your final formula. Best practice is to parametrize the path $C$ in terms of a function $\vec{s}(\lambda)=(x(\lambda),y(\lambda),z(\lambda))$ and to integrate over $\lambda$. In Detail In order to evaluate the potential you have to know the path you are integrating over: $$ U(C)=-\int_{\vec{r}\in C} d\vec{r}\cdot \vec{F}(\vec{r})$$ In the present case the function (the force) is conservative (vanishing curl) and thus the result only depends on the initial and final points $\vec{r}_{i}$ and $\vec{r}_f$ of $C$. Now, what are the initial and final points in your setting? You say $C$ should connect infinity with the point $\vec{r}_f=(x,y,z)$. Thus, what is $\vec{r}_i$ here? Basically you can choose any 'boundary' point of $\mathbb{R}^3$. A good choice would be $\vec{r}_i=(-\infty,y,z)$, though other choices, for instance $\vec{r}_i=(-\infty,-\infty,-\infty)$, are also suitable. 
Now you can choose basically any path connecting $\vec{r}_i$ and $\vec{r}_f$; it will give the same result. So take a very simple one, $C_s$: $\vec{s}(\lambda)=(\lambda ,y,z)$ with $\lambda\in (-\infty,x]$. The integral is then parametrized, and what you have to evaluate is $$ U(\vec{r}_i,\vec{r}_f)=-\int_{\vec{s}\in C_s} d\vec{r}\cdot \vec{F}(\vec{r})\equiv -\int_{-\infty}^{x}d \lambda \frac{d\vec{s}(\lambda)}{d\lambda}\cdot \vec{F}(\vec{s}(\lambda))=-\int_{-\infty}^{x}d \lambda \, \hat{e}_x\cdot \vec{F}(\vec{s}(\lambda))$$ Now insert the path $\vec{s}(\lambda)=(\lambda ,y,z)$ and you obtain $$U(\vec{r}_i,\vec{r}_f)= -\int_{-\infty}^{x}d \lambda \, \left(-G m M \frac{\lambda}{(\lambda^2+y^2+z^2)^{3/2}} \right) $$ Other paths will yield the same result, but in your case you follow a route starting from three different points $(-\infty,y,z)$, $(x,-\infty,z)$, and $(x,y,-\infty)$, terminating each time at $(x,y,z)$. Thus a factor of $3$ appears. The same holds true for the initial point $\vec{r}_i=(-\infty,-\infty,-\infty)$. Basically, you can think of any path connecting $\vec{r}_i$ with $\vec{r}_f$ and then try to find a function that describes this curve. A naive and very simple choice would be a straight line: $ \vec{s}(\lambda)=(\lambda x, \lambda y, \lambda z)$ with $\lambda\in (-\infty,1]$ and $\frac{d\vec{s}(\lambda)}{d\lambda}=x\hat{e}_x+y\hat{e}_y+z \hat{e}_z$. However, this path actually crosses the origin, where the force diverges. 
Nevertheless, just for simplicity, we follow this path, along which the force assumes the form $$\vec{F}(\vec{s}(\lambda))=-GmM \frac{ x\lambda \hat{e}_x+y\lambda \hat{e}_y+z\lambda \hat{e}_z}{( (x\lambda)^2+(y\lambda)^2+(z\lambda)^2 )^{3/2}}= -GmM \frac{ x\hat{e}_x+y\hat{e}_y+z \hat{e}_z}{( x^2+y^2+z^2 )^{3/2}} \frac{1}{\lambda^2}$$ Hence, the kernel of the integral looks like: $$\frac{d\vec{s}(\lambda)}{d\lambda}\cdot \vec{F}(\vec{s}(\lambda))= -GmM \frac{x^2+y^2+z^2}{(x^2+y^2+z^2)^{3/2}}\frac{1}{\lambda^2}\equiv -\frac{GmM}{r} \frac{1}{\lambda^2} $$ where $r=\sqrt{x^2+y^2+z^2}$ is the radial coordinate of the end point. Now, the integral can easily be evaluated: $$U(\vec{r}_i,\vec{r}_f)= -\int_{-\infty}^{1}d \lambda \, \left(- \frac{G mM}{r} \frac{1}{\lambda^2} \right) =-\frac{G mM}{r} \left[\frac{1}{\lambda} \right]^{1}_{-\infty}=-\frac{G mM}{r}$$ Of course, you can also convert the result into spherical coordinates and use a path that is radial.
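As a numerical sanity check of the single-path result (my own sketch, not part of the answer; units chosen so that $G=m=M=1$), the parametrized integral $U=-\int \hat{e}_x\cdot\vec{F}\,d\lambda$ along $\vec{s}(\lambda)=(\lambda,y,z)$ can be approximated with a simple midpoint rule:

```python
import math

G = m = M = 1.0
x, y, z = 1.0, 2.0, 2.0                     # end point; r = 3
r = math.sqrt(x * x + y * y + z * z)

def integrand(lam):
    # -F . ds/dlambda along s(lambda) = (lambda, y, z):
    # minus the x-component of F, i.e. +G m M lam / (lam^2 + y^2 + z^2)^(3/2)
    return G * m * M * lam / (lam * lam + y * y + z * z) ** 1.5

# Midpoint rule on a large finite interval; the tail beyond lambda = -1e5
# contributes only ~1e-5 since the integrand falls off like 1/lambda^2.
a, b, n = -1.0e5, x, 1_000_000
h = (b - a) / n
U = h * sum(integrand(a + (i + 0.5) * h) for i in range(n))

print(U, -G * m * M / r)                    # both approximately -0.3333
```

The single path gives $U\approx -GmM/r$, one third of the asker's value, consistent with the factor-of-3 diagnosis above.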
{ "domain": "physics.stackexchange", "id": 23183, "tags": "homework-and-exercises, newtonian-gravity, potential-energy, vector-fields" }