Consequences of non-comoving inertial frames in free fall
Question: Suppose we have a space ship on a highly eccentric orbit that passes by the International Space Station at its point of closest approach. Now we have an inertial frame on the ISS that is accelerating dramatically with respect to another inertial frame on the space ship, and vice versa. What are some of the consequences of this? If we charge up the space ship, does the ISS see it radiate? Do the ISS and the space ship disagree about each other's proper times? Any articles that approach related questions in a relatively accessible manner? Sorry the question is kind of vague...

Answer: "Now we have an inertial frame on the ISS that is accelerating dramatically with respect to another inertial frame on the space ship, and vice versa."

In GR it is important to distinguish between two types of acceleration: proper acceleration and coordinate acceleration. Proper acceleration is a relativistic invariant; all frames agree on its value. Proper acceleration is a physical quantity with physically measurable consequences. Coordinate acceleration is an artifact of the coordinates, and different coordinates will disagree. Importantly, coordinate acceleration has no measurable consequences at all.

"What are some of the consequences of this? If we charge up the space ship, does the ISS see it radiate? Do the ISS and the space ship disagree about each other's proper times? Any articles that approach related questions in a relatively accessible manner?"

There are no physical consequences of this whatsoever. If you mean proper acceleration, then both agree that the other is moving inertially and that they are undergoing no proper acceleration. They both agree about all relativistic invariants, such as proper times. If you mean coordinate acceleration, then there are again no physical consequences, simply because coordinate acceleration itself is non-physical: it has no experimentally measurable consequences.
Now, there are physical consequences of the curvature of spacetime and their different paths through it. But because the laws of physics are covariant, both will agree on the outcome of all physical experiments. Specifically, if one craft, being charged, predicts that some antenna will detect radiation, then so will the other.
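The distinction the answer draws can be made concrete with the standard definitions (a sketch added here, not part of the original answer). For a worldline $x^\mu(\tau)$, proper acceleration is the covariant derivative of the four-velocity, while coordinate acceleration is just the second coordinate derivative:

```latex
% Proper acceleration: covariant, chart-independent
a^\mu \;=\; \frac{D u^\mu}{d\tau}
      \;=\; \underbrace{\frac{d^2 x^\mu}{d\tau^2}}_{\text{coordinate acceleration}}
      \;+\; \Gamma^\mu{}_{\alpha\beta}\,
            \frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau}

% Free fall = geodesic motion: a^\mu = 0 in every chart,
% even though d^2 x^\mu / d\tau^2 is generally nonzero.
```

Both the ISS and the ship follow geodesics, so $a^\mu = 0$ for each of them in every chart; the large relative coordinate acceleration lives entirely in the $\Gamma$ term.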
{ "domain": "physics.stackexchange", "id": 80369, "tags": "general-relativity" }
Deadlock watchdog for shared structures
Question: In our application we have data which needs to be shared between multiple threads. Now and then we ran into a race condition and we could not figure out where that was. So I have implemented a deadlock watchdog. It is supposed to be used like this (example): public bool GetFromElementExistsCache(string id) { using (new Lock(_syncElementExistsCache)) { if (_elementExistsCache.ContainsKey(id)) { return _elementExistsCache[id]; } else { return false; } } } The implementation looks like this: public sealed class Lock : IDisposable { private static readonly object CACHE_LOCK = new object(); private static readonly LogService LOG = new LogService(typeof(Lock)); private readonly object _lockObject; private readonly bool _lockAcquired; private static readonly List<LockInformation> LOCK_OBJECTS = new List<LockInformation>(); public Lock(object lockObject) { _lockObject = lockObject; _lockAcquired = false; UpdateLockInfo(LockInformation.LockRequest.Reqested); Monitor.Enter(_lockObject, ref _lockAcquired); if (!_lockAcquired) { LOG.Warn($"Could not get lock, possible deadlock detected for lock object {_lockObject.GetHashCode()}!"); } UpdateLockInfo(LockInformation.LockRequest.Locked); } private void UpdateLockInfo(LockInformation.LockRequest lockStatus) { lock (CACHE_LOCK) { try { var lockExists = false; LockInformation lockInfo = null; foreach (var li in LOCK_OBJECTS) { if (li.LockObject == _lockObject && li.Thread == Thread.CurrentThread) { lockInfo = li; } if (li.LockStatus != LockInformation.LockRequest.Released) { lockExists = true; } } if (!lockExists) { LOCK_OBJECTS.Clear(); } // the current thread is allowed to update the status if it has created the lock entry if (lockInfo != null) { if (lockStatus == LockInformation.LockRequest.Released) { LOCK_OBJECTS.Remove(lockInfo); } else if (lockInfo.LockStatus == LockInformation.LockRequest.Released || lockInfo.Thread == Thread.CurrentThread) { lockInfo.LockStatus = lockStatus; lockInfo.Thread = 
Thread.CurrentThread; } } else { LOCK_OBJECTS.Add(new LockInformation { LockStatus = lockStatus, LockObject = _lockObject, Thread = Thread.CurrentThread }); } } catch (Exception e) { LOG.Warn("Error during updating lock information!", e); } } } public void Dispose() { if (_lockAcquired) { Monitor.Exit(_lockObject); UpdateLockInfo(LockInformation.LockRequest.Released); } } } The class which holds the lock information: internal class LockInformation { public enum LockRequest { Reqested, Locked, Released } public object LockObject { get; set; } public LockRequest LockStatus { get; set; } public Thread Thread { get; set; } public override bool Equals(object obj) { var lockInformation = obj as LockInformation; if (lockInformation != null) { return lockInformation.LockObject == LockObject; } return false; } protected bool Equals(LockInformation lockInfo) { if (lockInfo == null) { return false; } return lockInfo.LockObject == LockObject; } public override int GetHashCode() { unchecked { int hashCode = LockObject?.GetHashCode() ?? 0; hashCode = (hashCode * 397) ^ LockStatus.GetHashCode(); hashCode = (hashCode * 397) ^ (Thread?.GetHashCode() ?? 0); return hashCode; } } public override string ToString() { var threadName = Thread.Name; if (string.IsNullOrEmpty(threadName)) { threadName = "null"; } return $"{LockObject.GetHashCode()} : {LockStatus} - {threadName}({Thread.ManagedThreadId})"; } } There is an additional class which takes care of the deadlock detection. So far the implementation works just fine. But I was wondering if you guys see any problems in, for example, performance or thread identification etc., since I do not have much experience there. Also, general tips for coding style etc. are much appreciated as well! Answer: I think this class creates more problems than it solves. It locks on a static object, which is a pretty bad idea. 
I mean, the obvious problem is that you end up synchronizing code blocks which use different lockObjects and would otherwise be unrelated. But what also bugs me is that it looks like a pretty major modification to your original use case, which might lead to wrong conclusions about the nature of the original bug. The UpdateLockInfo method is pretty hard to follow. You do a lot of weird stuff. For example, why would you call LOCK_OBJECTS.Clear() if you already call LOCK_OBJECTS.Remove for released objects a few lines later? Or why would you set lockInfo.Thread = Thread.CurrentThread if you've already checked for equality? Those things make your code look fishy and bug-prone. Restructuring it might help. For example: var info = FindExistingLockInfo(); if (info == null) { if (lockStatus != LockInformation.LockRequest.Requested) throw ...; LOCK_OBJECTS.Add(new ...); } else if (lockStatus == LockInformation.LockRequest.Released) { LOCK_OBJECTS.Remove(info); } else { info.LockStatus = lockStatus; } Note that neither the version above nor your original implementation will work correctly for nested locks. To fix that you would have to implement some sort of reference counting. You have an inconsistent equality comparison. When you look for an existing lockInfo you check ThreadId; when you call LOCK_OBJECTS.Remove, you don't. This can lead to incorrect items being deleted. It looks like your code is prone to leaking locks. For example, if the Lock constructor throws after the Monitor.Enter call due to denied access to the log file, or due to the thread being aborted, or due to any other reason, the lockObject is never released. Personally, I don't quite see how !_lockAcquired is any indication of a possible deadlock. But maybe I am just missing something about Monitor.Enter semantics. I guess the bottom line is: if this class helps or has already helped you debug the problem you were having, then good for you, well done! 
:) However, in its current state I would not recommend using it in production code on a regular basis. Multi-threading is already hard enough to get right; your class makes things even harder. Instead I would recommend following mr.eurotrash's advice. Refactor your code so that you have only one access point to your shared resource, and then lock the entire thing from the inside. For us mortals, in most cases this is the only bulletproof way to synchronize access.
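To illustrate the reviewer's find-existing-else-add restructuring together with the reference counting needed for nested locks, here is a minimal, hypothetical sketch in Python (all names here are mine, not part of the poster's C# code): the registry keys on the (lock object, thread) pair, and nested acquisitions by the same thread bump a counter instead of adding duplicate entries.

```python
import threading

# Hypothetical lock registry: (id(lock_obj), thread_id) -> acquisition count.
# A guard lock protects the registry itself, as in the original CACHE_LOCK.
_registry_guard = threading.Lock()
_registry = {}

def note_acquired(lock_obj):
    key = (id(lock_obj), threading.get_ident())
    with _registry_guard:
        # Find-existing-else-add: nested acquisitions just increment.
        _registry[key] = _registry.get(key, 0) + 1

def note_released(lock_obj):
    key = (id(lock_obj), threading.get_ident())
    with _registry_guard:
        count = _registry.get(key, 0)
        if count <= 1:
            _registry.pop(key, None)  # last release removes the entry
        else:
            _registry[key] = count - 1  # nested release: just decrement

sync = object()
note_acquired(sync)
note_acquired(sync)                 # nested acquisition of the same lock
note_released(sync)
still_held = len(_registry) == 1    # one entry remains, count back to 1
note_released(sync)
fully_released = len(_registry) == 0
```

The same idea carries over to C# directly: replace the List scan in UpdateLockInfo with a Dictionary keyed on (lock object, thread) and a count, and the Clear()/Remove ambiguity disappears.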
{ "domain": "codereview.stackexchange", "id": 22633, "tags": "c#, multithreading" }
Solving the Euler-Lagrange equation with the Axion Lagrangian
Question: I am trying to show that for a constant axion field $\theta(\textbf{x},t)=const.$ the axion Lagrangian $\mathcal{L}_\theta=-\frac{\kappa\theta}{4\mu_0}F_{\mu\nu}\tilde{F}^{\mu\nu}$ does not lead to a change in the equations of motion - I'm trying to solve the Euler-Lagrange equation for this, but I have little background in field theory. So far, I have shown that $$\mathcal{L}_\theta=-2\partial_\mu(\varepsilon^{\mu\nu\rho\sigma}A_\nu \partial_\rho A_\sigma)$$ which I plug into the Euler-Lagrange equation $$\partial_\mu\left(\frac{\partial\mathcal{L}_\theta}{\partial(\partial_\mu A_\nu)}\right)-\frac{\partial\mathcal{L}_\theta}{\partial A_\mu}=0$$ Since the Lagrangian does not depend on $A_\mu$, the latter term equals zero. Hence I focus on the first term. Since my Lagrangian already contains $\mu,\nu,\rho,\sigma$, I use $\alpha,\beta$ to distinguish the indices of the derivatives from the indices of the Lagrangian - this is inspired by the procedure as shown here (https://quantummechanics.ucsd.edu/ph130a/130_notes/node452.html) for Maxwell's Lagrangian. I get $$ \partial_\alpha\left[-2\varepsilon^{\mu\nu\rho\sigma}\left(\frac{\partial(\partial_\mu A_\nu)}{\partial(\partial_\alpha A_\beta)}\partial_\rho A_\sigma + \partial_\mu A_\nu\frac{\partial(\partial_\rho A_\sigma)}{\partial(\partial_\alpha A_\beta)}\right)\right]$$ I have tried expanding this in cases for $\alpha=\mu,\nu,\rho,\sigma$ to solve them separately. For $\alpha=\mu$, I get $$ \partial_\mu\left[-2\varepsilon^{\mu\nu\rho\sigma}\left(\frac{\partial(\partial_\mu A_\nu)}{\partial(\partial_\mu A_\nu)}\partial_\rho A_\sigma + \partial_\mu A_\nu\frac{\partial(\partial_\rho A_\sigma)}{\partial(\partial_\mu A_\nu)}\right)\right]$$ The first term gives $\delta_\mu^\mu\delta_\nu^\nu=1$, and the latter $\delta_\mu^\rho\delta_\nu^\sigma$ would result in zero in combination with $\varepsilon^{\mu\nu\rho\sigma}$. Why is this the case? 
When solving for $\alpha=\rho$, I get (using the same reasoning) $$-2\partial_\rho\varepsilon^{\mu\nu\rho\sigma}\partial_\mu A_\nu$$ With the same reasoning, for $\alpha=\nu,\sigma$ the Kronecker deltas would contribute nothing. So finally I have $$\partial_\mu\left(\frac{\partial\mathcal{L}_\theta}{\partial(\partial_\mu A_\nu)}\right)=-2\partial_\mu\varepsilon^{\mu\nu\rho\sigma}\partial_\rho A_\sigma - 2\partial_\rho\varepsilon^{\mu\nu\rho\sigma}\partial_\mu A_\nu$$ How do I proceed from here? Answer: I figured this out a while ago with the help of a local professor. The answer (as said in the comments) is in a different LaTeX format, so I'll save myself some time and put up screenshots of the relevant part of the calculation.
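Since the accepted answer only points to screenshots, here is a hedged sketch of the remaining step, consistent with the expressions the poster already derived (my reconstruction, not the professor's calculation). The two terms combine by the symmetry $\varepsilon^{\mu\nu\alpha\beta}=\varepsilon^{\alpha\beta\mu\nu}$, and the resulting divergence vanishes identically:

```latex
% Expanding the total derivative, the A_\nu \partial_\mu\partial_\rho A_\sigma
% piece dies by antisymmetry of \varepsilon in \mu,\rho, leaving
\mathcal{L}_\theta = -2\,\varepsilon^{\mu\nu\rho\sigma}\,
                     \partial_\mu A_\nu\,\partial_\rho A_\sigma

% Differentiating w.r.t. \partial_\alpha A_\beta gives two equal terms
% (use \varepsilon^{\mu\nu\alpha\beta} = \varepsilon^{\alpha\beta\mu\nu}):
\frac{\partial \mathcal{L}_\theta}{\partial(\partial_\alpha A_\beta)}
  = -4\,\varepsilon^{\alpha\beta\rho\sigma}\,\partial_\rho A_\sigma

% Euler-Lagrange contribution: a symmetric second derivative contracted
% with the antisymmetric \varepsilon vanishes identically:
\partial_\alpha\!\left(
  \frac{\partial \mathcal{L}_\theta}{\partial(\partial_\alpha A_\beta)}
\right)
  = -4\,\varepsilon^{\alpha\beta\rho\sigma}\,
    \partial_\alpha\partial_\rho A_\sigma = 0
```

So for constant $\theta$ the axion term contributes nothing to the equations of motion; equivalently, being a total derivative, it only adds a boundary term to the action.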
{ "domain": "physics.stackexchange", "id": 96276, "tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, variational-calculus" }
Optimizing critical loop for consuming a byte-buffer
Question: I'm currently developing an Open Source project called sims3py using Python 2.7. One overarching goal of the project is to ensure a Pure Python implementation of functions currently available only through a .Net-based library. Because the project must be Pure Python, 'cheats' using C code are not allowed (except for the implicit C code invoked by standard Python libraries). I have performed profiling on my code, and discovered that a certain function, decompress(), was very slow. This function is called thousands of times by many tools I created based on the sims3py library. So naturally this is the right place to perform optimization. The decompression algorithm is documented here. (Please note that I've performed profiling; the times listed below are the time taken only for the decompression part. More specifically, the time taken to perform chunk = decompress(buff) from another place in the program. They are not the total time taken for the program to run. Just for the decompress() function.) The first iteration uses a custom class called SlidingUnpacker. It works, but it's darned slow. When this first iteration was used against a test file (containing about 6000 chunks to decompress), the decompression of elements in that file took 240+ seconds. The second iteration, which is the current public version, dispenses with SlidingUnpacker and accesses the blob to decompress directly (via memoryview()), and it's much faster; testing against the same test file shows that the second iteration took about 75 seconds. After reviewing how the code works, I've optimized decompress() further as follows: def decompress(byte_buffer, strict_size=True, ignore_extra=True): """ Performs a decompression on a memoryview(byte_buffer). 
:param byte_buffer: byte_buffer containing the compressed resource :type byte_buffer: memoryview :param strict_size: Whether to raise exception on decompressed size mismatch or not :type strict_size: bool :param ignore_extra: Whether to ignore compressed bytes beyond specified fullsize :type ignore_extra: bool :return: a bytes() containing the uncompressed resource :rtype: bytes """ assert isinstance(byte_buffer, memoryview) with io.BytesIO() as output: buf = byte_buffer comptype, magic = map(ord, buf[0:2]) if magic != 0xFB: # Not a valid compression format raise InvalidCompressionException(message='NoMagic, the Magic byte 0xFB not found!') s1 = 0 if comptype & 0x80: s1, s2, s3, s4 = map(ord, buf[2:6]) pos = 6 else: s2, s3, s4 = map(ord, buf[2:5]) pos = 5 fullsize = (s1 << 24) | (s2 << 16) | (s3 << 8) | s4 output_len = 0 control_0, control_1, control_2, control_3 = 0, 0, 0, 0 try: while True: if output_len >= fullsize and ignore_extra: break # The following is a sentinel. If buf[pos] results in IndexError, control_0 stays None and we can # detect end_of_buffer without 'hanging' control bytes control_0 = None control_0 = ord(buf[pos]) pos += 1 # 0x00 ~ 0x7F if (control_0 & 0x80) == 0: control_1 = ord(buf[pos]) pos += 1 num_plain = control_0 & 0x03 num_copy = ((control_0 & 0x1c) >> 2) + 3 copy_offset = ((control_0 & 0x60) << 3) + control_1 + 1 # 0x80 ~ 0xBF elif (control_0 & 0xc0) == 0x80: control_1 = ord(buf[pos]) control_2 = ord(buf[pos + 1]) pos += 2 num_plain = ((control_1 & 0xC0) >> 6) & 0x03 num_copy = (control_0 & 0x3F) + 4 copy_offset = ((control_1 & 0x3F) << 8) + control_2 + 1 # 0xC0 ~ 0xDF elif (control_0 & 0xE0) == 0xC0: control_1 = ord(buf[pos]) control_2 = ord(buf[pos + 1]) control_3 = ord(buf[pos + 2]) pos += 3 num_plain = control_0 & 0x03 num_copy = ((control_0 & 0x0C) << 6) + control_3 + 5 copy_offset = ((control_0 & 0x10) << 12) + (control_1 << 8) + control_2 + 1 # 0xE0 ~ 0xFB elif 0xE0 <= control_0 <= 0xFB: num_plain = ((control_0 & 0x1F) << 2) + 4 
npos = pos + num_plain output.write(buf[pos:npos]) pos = npos output_len += num_plain continue # 0xFC ~ 0xFF else: num_plain = control_0 & 0x03 output.write(buf[pos:pos + num_plain]) pos += num_plain output_len += num_plain break npos = pos + num_plain output.write(buf[pos:npos]) pos = npos output_len += num_plain copy_pos = output_len - copy_offset # Pre-add num_copy to output_len because we WILL get num_copy bytes anyways... or Error trying output_len += num_copy # We do not use for: loop, because for: loop handles bytes one-by-one. # This construct tries to read as many bytes as possible per iteration while num_copy: output.seek(copy_pos) to_copy = output.read(num_copy) le = len(to_copy) if le: output.seek(0, 2) # Seek to end of stream output.write(to_copy) copy_pos += le num_copy -= le else: raise IndexError('There should be at least 1 char but got none.') except IndexError: # This will be raised by buf[] if we try reading beyond its bounds if control_0 is not None: # Exception raised if buffer is exhausted while algorithm still requires a control byte, or # control bytes specified a number of plain data to consume but the buffer exhausted before the # required number of bytes are received raise InvalidCompressionException( message='Truncated or corrupt resource, buffer exhausted after reading {0} bytes'.format(pos) ) # If we reach this point, this means that the compression structure has been decompressed successfully # although without 'end of compression' control (0xFC~0xFF), AND before encoded fullsize is reached. # Because technically we don't find any errors in the compressed structure, we do not do anything, letting # an external sanity check to decide. 
# (This situation is situation (3) as described in the sanity check's comments) sys.exc_clear() pass finally: pass # We reach this point only if one these are true: # (1) len(output) >= fullsize (while ignore_extra == True) # (2) End of compression control detected (0xFC ~ 0xFF) # (3) byte_buffer has been exhausted before (1) or (2) reached # In any case, all compression controls have been decoded properly (i.e., no incomplete control codes and/or # truncated data needed by control codes). So, technically the compressed data was NOT corrupt. # What we do depends on whether strict flag is set or not. if strict_size and fullsize != output_len: raise InvalidCompressionException(message='Size mismatch, want {0} got {1}'.format(fullsize, output_len)) return output.getvalue() This final iteration indeed improves the decompression. Testing the same test file, decompression now takes only about 63 seconds. However, I still feel the code can be optimized further. Can you provide suggestions as to how I can further optimize the code? Note: I've made available test vectors for this function. After unzipping the .7z archive, there should be a pair of files: testblob_compressed.bin and testblob_uncompressed.xml. The following code should be enough to test: filename = 'testblob_compressed.bin' ba = bytearray(os.path.getsize(filename)) with open(filename, 'rb') as fin: fin.readinto(ba) mv = memoryview(ba) output = decompress(mv) (output should be byte-identical with testblob_uncompressed.xml) UPDATE Just in case you're wondering, here's the final version of the decompress() function: https://bitbucket.org/pepoluan/sims3py/src/6f97b77fd4b12a4d294cd4a904742072e09a2747/sims3py/init.py Thanks to everyone pitching in, especially @Veedrac ! Answer: The lazy thing to note is that pypy runs this in about 20% of the time, so PyPy should be preferred if possible. 
Your with io.BytesIO() as output: can safely cover a smaller fraction of the code, so I suggest moving as much as possible (within reason) out of its context. You have except IndexError: ... # stuff pass finally: pass This should just be except IndexError: ... # stuff I don't get your justification for running sys.exc_clear. I suggest you make sure this is really the right thing to do, because it looks wrong. You have while True: if output_len >= fullsize and ignore_extra: break This looks like it would better be written while not (output_len >= fullsize and ignore_extra): It seems to me that your while loop: while num_copy: output.seek(copy_pos) to_copy = output.read(num_copy) le = len(to_copy) if le: output.seek(0, 2) # Seek to end of stream output.write(to_copy) copy_pos += le num_copy -= le else: raise IndexError('There should be at least 1 char but got none.') is not needed: If the argument is positive, and the underlying raw stream is not interactive, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). A simple assert should be fine. Your output.seek(0, 2) should be output.seek(0, io.SEEK_END). You might find things easier if you use a bytearray over a memoryview as you can avoid all of the ord calls. output can also be a bytearray. I find this gives a significant speed improvement. You spend a lot of upkeep on output_len; there's no real harm in using len(output), so I suggest you do so. There seems to be no good reason for this line: control_0, control_1, control_2, control_3 = 0, 0, 0, 0 so remove it. It looks to me like the if control_0 is not None: in the except IndexError can be replaced with a pos < len(buf) in the while and the try can be moved in. This does come at a slight speed cost so I avoided the change. Some of your bit twiddling can be simplified. The ifs: # 0x00 ~ 0x7F if control_0 < 0x80: ... # 0x80 ~ 0xBF elif control_0 < 0xC0: ... # 0xC0 ~ 0xDF elif control_0 < 0xE0: ... 
# 0xE0 ~ 0xFB elif control_0 < 0xFC: ... # 0xFC ~ 0xFF else: ... Your num_plain = ((control_1 & 0b11000000) >> 6) & 0b11 can be just num_plain = (control_1 >> 6) & 0b11 I don't see how to speed it up further, but this should be a 2x improvement when staying on CPython and a 10x improvement if moving to PyPy (an extra 5x from the better interpreter). def decompress(byte_buffer, strict_size=True, ignore_extra=True): """ Performs a decompression on a bytearray(byte_buffer). :param byte_buffer: byte_buffer containing the compressed resource :type byte_buffer: bytearray :param strict_size: Whether to raise exception on decompressed size mismatch or not :type strict_size: bool :param ignore_extra: Whether to ignore compressed bytes beyond specified fullsize :type ignore_extra: bool :return: a bytes() containing the uncompressed resource :rtype: bytes """ assert isinstance(byte_buffer, bytearray) buf = byte_buffer comptype, magic = buf[0:2] if magic != 0xFB: # Not a valid compression format raise InvalidCompressionException(message='NoMagic, the Magic byte 0xFB not found!') s1 = 0 if comptype & 0x80: s1, s2, s3, s4 = buf[2:6] pos = 6 else: s2, s3, s4 = buf[2:5] pos = 5 fullsize = (s1 << 24) | (s2 << 16) | (s3 << 8) | s4 output = bytearray() # If the compression structure has been decompressed successfully although without 'end of compression' # control (0xFC~0xFF), AND before encoded fullsize is reached, the pos < len(buf) condition will # break the loop. # Because technically we don't find any errors in the compressed structure, we do not do anything, letting # an external sanity check to decide. 
# (This situation is situation (3) as described in the sanity check's comments) try: while not (len(output) >= fullsize and ignore_extra): # The while ensures that buf[pos] always valid control_0 = None control_0 = buf[pos] pos += 1 # 0x00 ~ 0x7F if control_0 < 0x80: control_1 = buf[pos] pos += 1 num_plain = control_0 & 0b11 num_copy = ((control_0 >> 2) & 0b111) + 3 copy_offset = ((control_0 & 0b1100000) << 3) + control_1 + 1 # 0x80 ~ 0xBF elif control_0 < 0xC0: control_1 = buf[pos] control_2 = buf[pos + 1] pos += 2 num_plain = (control_1 >> 6) & 0b11 num_copy = (control_0 & 0b111111) + 4 copy_offset = ((control_1 & 0b111111) << 8) + control_2 + 1 # 0xC0 ~ 0xDF elif control_0 < 0xE0: control_1 = buf[pos] control_2 = buf[pos + 1] control_3 = buf[pos + 2] pos += 3 num_plain = control_0 & 0b11 num_copy = ((control_0 & 0b1100) << 6) + control_3 + 5 copy_offset = ((control_0 & 0b10000) << 12) + (control_1 << 8) + control_2 + 1 # 0xE0 ~ 0xFB elif control_0 < 0xFC: num_plain = ((control_0 & 0b11111) << 2) + 4 output += buf[pos:pos + num_plain] pos += num_plain continue # 0xFC ~ 0xFF else: num_plain = control_0 & 0b11 output += buf[pos:pos + num_plain] pos += num_plain break output += buf[pos:pos + num_plain] pos += num_plain # Don't use negative indices lest the addition makes the end point 0 copy_pos = len(output) - copy_offset if copy_pos < 0: raise IndexError('There should be {} values in buffer, got {}.'.format(num_copy, len(output))) to_copy = output[copy_pos:copy_pos + num_copy] output += to_copy except IndexError: # This will be raised by buf[] if we try reading beyond its bounds if control_0 is not None: # Exception raised if buffer is exhausted while algorithm still requires a control byte, or # control bytes specified a number of plain data to consume but the buffer exhausted before the # required number of bytes are received raise InvalidCompressionException( message='Truncated or corrupt resource, buffer exhausted after reading {0} bytes'.format(pos) ) # If we 
reach this point, this means that the compression structure has been decompressed successfully # although without 'end of compression' control (0xFC~0xFF), AND before encoded fullsize is reached. # Because technically we don't find any errors in the compressed structure, we do not do anything, letting # an external sanity check to decide. # (This situation is situation (3) as described in the sanity check's comments) # We reach this point only if one these are true: # (1) len(output) >= fullsize (while ignore_extra == True) # (2) End of compression control detected (0xFC ~ 0xFF) # (3) byte_buffer has been exhausted before (1) or (2) reached # In any case, all compression controls have been decoded properly (i.e., no incomplete control codes and/or # truncated data needed by control codes). So, technically the compressed data was NOT corrupt. # What we do depends on whether strict flag is set or not. if strict_size and fullsize != len(output): raise InvalidCompressionException(message='Size mismatch, want {} got {}'.format(fullsize, len(output))) return output One last thing that seems to help is removing the control_1, control_2 and control_3 intermediates: # 0x00 ~ 0x7F if control_0 < 0x80: num_plain = control_0 & 0b11 num_copy = ((control_0 >> 2) & 0b111) + 3 copy_offset = ((control_0 & 0b1100000) << 3) + buf[pos] + 1 pos += 1 # 0x80 ~ 0xBF elif control_0 < 0xC0: num_plain = (buf[pos] >> 6) & 0b11 num_copy = (control_0 & 0b111111) + 4 copy_offset = ((buf[pos] & 0b111111) << 8) + buf[pos + 1] + 1 pos += 2 # 0xC0 ~ 0xDF elif control_0 < 0xE0: num_plain = control_0 & 0b11 num_copy = ((control_0 & 0b1100) << 6) + buf[pos + 2] + 5 copy_offset = ((control_0 & 0b10000) << 12) + (buf[pos] << 8) + buf[pos + 1] + 1 pos += 3 # 0xE0 ~ 0xFB elif control_0 < 0xFC: num_plain = ((control_0 & 0b11111) << 2) + 4 output += buf[pos:pos + num_plain] pos += num_plain continue # 0xFC ~ 0xFF else: num_plain = control_0 & 0b11 output += buf[pos:pos + num_plain] pos += num_plain break 
This actually helps PyPy a lot more than CPython, getting PyPy to a full 20x the speed of the original CPython here.
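One detail worth keeping in mind with the back-reference copy (my own note, not from the thread): a single slice `output[copy_pos:copy_pos + num_copy]` comes up short whenever the run overlaps the end of the output, i.e. when `num_copy > copy_offset`, which is exactly the RLE-style case LZ formats rely on. The original while loop handled this by re-reading; a byte-by-byte sketch of the same overlapping-copy semantics, with a hypothetical helper name:

```python
def copy_backref(output, copy_offset, num_copy):
    """Append num_copy bytes starting copy_offset back from the end of
    output, re-reading bytes written during this same copy (an LZ-style
    overlapping run). A single slice would be too short whenever
    num_copy > copy_offset, which is why a loop is needed."""
    copy_pos = len(output) - copy_offset
    if copy_pos < 0:
        raise IndexError('back-reference before start of output')
    for _ in range(num_copy):
        output.append(output[copy_pos])  # may read a byte just written
        copy_pos += 1
    return output

# An overlapping run: offset 1, length 5 repeats the last byte 5 times.
buf = bytearray(b'ab')
copy_backref(buf, 1, 5)
```

A chunked variant (copy `min(num_copy, copy_offset)` bytes per slice) gives the same result with fewer Python-level iterations for long non-overlapping runs.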
{ "domain": "codereview.stackexchange", "id": 12264, "tags": "python, performance, python-2.x" }
More simplified way of creating page links dynamically?
Question: A little birdie suggested I bring this question here, so here it goes. Well I have a working script (see below) but it seems quite clunky and redundant; in my defense I wrote this code many moons ago, but that's not the point. I was curious if anyone has an idea on a more efficient way of writing this code, with less loops and conditionals and, well, noise in the code. Code in question: private function pageLinks($num, $page = 1, $search = false, $ne = false) { $query = ($search) ? '&query='.$search : null; $by = (is_numeric($ne)) ? '&by='.$ne : null; $links = 'Page(s):<a href="search.php?page=1' . $query . $by . '" class="tableLink">1</a>'; $count = 1; $npp = $this->numPerPage; $buttons = 9; $half = 4; for($i = 1; $i <= $num; $i++) { if(($i%$npp) === 0) { $count++; } } if($count < $buttons) { for($i = 2; $i <= $count; $i++) { $links .= '<a href="search.php?page=' . $i . $query . $by . '" class="tableLink">' . $i . '</a>'; } } elseif($page <= ($half + 2)) { for($i = 2; $i <= $buttons; $i++) { $links .= '<a href="search.php?page=' . $i . $query . $by . '" class="tableLink">' . $i . '</a>'; } $links .= '...<a href="search.php?page=' . $count . $query . $by . '" class="tableLink">' . $count . '</a>'; } elseif($page <= ($count - ($half + 2))) { $links .= '...'; for($i = $half; $i > 0; $i--) { $links .= '<a href="search.php?page=' . ($page - $i) . $query . $by . '" class="tableLink">' . ($page - $i) . '</a>'; } $links .= '<a href="search.php?page=' . ($page - $i) . $query . $by . '" class="tableLink">' . ($page - $i) . '</a>'; for($i = 1; $i <= $half; $i++) { $links .= '<a href="search.php?page=' . ($page + $i) . $query . $by . '" class="tableLink">' . ($page + $i) . '</a>'; } $links .= '...<a href="search.php?page=' . $count . $query . $by . '" class="tableLink">' . $count . '</a>'; } else { $links .= '...'; for($i = $buttons - 1; $i >= 0; $i--) { $links .= '<a href="search.php?page=' . ($count - $i) . $query . $by . '" class="tableLink">' . ($count - $i) . 
'</a>'; } } return($links); } The method is called like so: $links = $this->pageLinks($count, $page, $url, $ne); And the variables are as such: $count = total number of clients in database (int) $page = current page to build from (int) $url = the name or email for the search (String) $ne = is for the search string either by name (1) or email (2) (int) And the output is something like (as links): Page(s):1 2 3 4 5 6 7 8 9...33 Or if you're in the middle (page 20): Page(s):1...16 17 18 19 20 21 22 23 24...33 Now this isn't always called through a search function, hence the default values for $url and $ne, but that's not very important. My question is: is there a cleaner way to handle the building of these links? Or am I stuck with this cluster of loops? Answer: This: $count = 1; for($i = 1; $i <= $num; $i++) { if(($i%$npp) === 0) { $count++; } } is equivalent to $count = floor($num / $npp) + 1; As for the paging, I'd do it like this: $from = $page - $half; if ($from <= 2) $from = 2; $to = $page + $half; if ($to >= $count - 1) $to = $count - 1; $extra = $query . $by; $links = $this->pageLink(1, $extra); if ($from > 2) $links .= "..."; for ($i = $from; $i <= $to; $i++) $links .= $this->pageLink($i, $extra); if ($i < $count) $links .= "..."; // I use $i instead of $to because $i == $to + 1, so I save one addition $links .= $this->pageLink($count, $extra); Here, you also need: private function pageLink($num, $extra) { return '<a href="search.php?page=' . $num . $extra . '" class="tableLink">' . $num . '</a>'; } This was written from memory, so be wary of possible bugs.
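The answer's clamped-window logic translates directly to other languages. Here is a sketch in Python with a hypothetical `page_window` helper that returns page numbers and '...' markers instead of HTML; note that, unlike the original's fixed 9 buttons near the edges, this simplified version always shows a symmetric window of `half` pages around the current page.

```python
def page_window(page, count, half=4):
    """Windowed pagination: always show page 1 and page `count`, a window
    of `half` pages either side of the current page, and '...' markers
    where pages are elided. Mirrors the PHP answer's $from/$to clamping."""
    start = max(2, page - half)           # window never swallows page 1
    end = min(count - 1, page + half)     # ...nor the last page
    items = [1]
    if start > 2:
        items.append('...')               # gap between 1 and the window
    items.extend(range(start, end + 1))
    if end < count - 1:
        items.append('...')               # gap between window and last page
    if count > 1:
        items.append(count)
    return items

middle = page_window(20, 33)   # expect 1 ... 16..24 ... 33
edge = page_window(1, 33)      # expect 1 2 3 4 5 ... 33
```

This reproduces the question's "middle" example exactly; rendering each number through something like the answer's pageLink() turns the list back into anchor tags.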
{ "domain": "codereview.stackexchange", "id": 4348, "tags": "php, object-oriented" }
The necessity of using tangent space as the vectors in general relativity
Question: I’ve recently learnt what manifolds are to prepare myself for a course in GR. My relevant mathematical background is linear algebra (abstract, proof-ish course) and multivariable/vector calculus (course mostly focussed on computations). I use Knuth and Renteln as my main references. The idea of an n-dimensional manifold is introduced as a combination of open sets whose union forms the manifold. Each such open set must have a continuous 1-to-1 map to an open set in n-dimensional Euclidean space; that is: each point within these open sets can be described as an n-tuple, just like vectors and points in ”regular” space can. Please correct me if I’m wrong here; I really want to get my definitions right. Subsequently the tangent space is introduced as a vector space for each point on the manifold, whose elements are differential operators. I recognise that this vector space is very much usable to describe vector fields on the manifold. But as I read the definition of manifolds, I intuitively expected the displacement between points to be a good definition for vectors on my manifold. If our manifold consists of multiple charts, this would not be possible, since the space would not be closed: if we added the displacement between points to itself long enough we would eventually “exit” the open set in which the tuples are defined, which is by definition not possible. I’m still interested in knowing why in general relativity we use tangent spaces instead of conventional classic coordinate tuples. So my first question is: if a manifold is describable by a single chart, can we define a vector space simply by taking displacements between points on that chart as n-tuple vectors? For instance, take the manifold $\mathbf{r} = (x,y,z)^T = (f_1(u,v), f_2(u,v), f_3(u,v))^T$: a 2D manifold embedded in 3D space. If the functions $f_i$ are well behaved so that the manifold is smooth and has no sharp edges/crossings/etc., wouldn’t $(u,v)$ tuples form a nice vector space? 
You could add them, scale them, and fulfil all the properties of a vector space. My second question is more to manage my own expectations for later: is this single-coordinate chart situation one that occurs at all in G.R.? If each/most manifold(s) in G.R. naturally requires two or more charts, then it would be senseless to take these displacement tuples as vectors. The reason I still ask is because I intuitively expected that a single chart should encompass all of space; I would be very surprised if I was working out a physics problem and I couldn’t describe my worldline in the same coordinate system everywhere. Answer: So my first question is: if a manifold is describable by a single chart, can we define a vector space simply by taking displacements between points on that chart as n-tuple vectors? Not in general. Consider the real line as a manifold $M$, equipped with the chart $(\mathbb R, x)$ where the chart map $x$ is defined by $$x:M \rightarrow \mathbb R$$ $$p \mapsto x(p) = \tan^{-1}(p)$$ I cannot define a vector space from the coordinates of points on the manifold, because the range of the chart map is only $x(\mathbb R)=(-\pi/2,\pi/2)$. You might say that this is simply a bad choice of chart (should that matter?), but note that if the Riemann curvature tensor of the manifold does not vanish everywhere, then it is not possible to construct a globally Euclidean chart. is this single-coordinate chart situation one that occurs at all in G.R.? [...]I would be very surprised if I was working out a physics problem and I couldn’t describe my worldline in the same coordinate system everywhere. Polar coordinates have singular behavior at the coordinate origin; spherical coordinates have singular behavior at the origin as well as the poles. Even manifolds as mundane as $\mathbb R^n$ generically require more than one chart if you use non-cartesian coordinates. 
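The failure of naive coordinate addition in the $\tan^{-1}$ chart above is easy to check numerically. A minimal sketch (my illustration, not part of the original answer):

```python
import math

# Chart map x(p) = arctan(p); its range is only (-pi/2, pi/2).
def chart(p):
    return math.atan(p)

a = chart(10.0)  # coordinate of one point, close to +pi/2
b = chart(10.0)  # coordinate of another point

# "Adding" the two coordinates as if they were vectors:
s = a + b

# s exceeds pi/2, so no point p on the manifold satisfies chart(p) == s:
# the sum has left the range of the chart map.
print(s > math.pi / 2)  # True
```

So even on a manifold as flat as the real line, the coordinate tuples of a perfectly valid chart need not close under addition.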
It can be shown that if a manifold possesses intrinsic curvature then it cannot be described by a single, globally Euclidean chart. In other words, while it is possible to find very special cases in which you could find a single chart of the kind you want, it is only possible when the space possesses no curvature. Essentially, you could (on rare occasions) artificially shoehorn a vector space structure into your spacetime if you want, but it would be generally pointless. Don't bother. As far as I’m aware vectors do not need to follow any transformation requirements (though it’s nice if they do) but only a set of eight definitions. (comment on other answer) This is the mathematical definition of a vector space. The vectors that puppetsock is referring to are tangent vectors to a manifold; those objects (or rather, their components in a given chart), which are the ones you'll be dealing with in GR, do possess certain transformation properties which follow from the requirement that tangent vectors be chart-independent geometrical objects.
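For concreteness, the transformation property mentioned in the last paragraph can be written out: if $v^j$ are the components of a tangent vector in a chart with coordinates $x^j$, then in another chart $x'^i$ they become

$$v'^{\,i} = \frac{\partial x'^{\,i}}{\partial x^{j}}\, v^{j}$$

(sum over $j$ implied). The components change from chart to chart, but the geometrical object $v = v^j \partial_j$ does not; that chart-independence is exactly the requirement the comment refers to.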
{ "domain": "physics.stackexchange", "id": 60230, "tags": "general-relativity, differential-geometry, vectors" }
Histogram of different classes of variables in one column
Question: I have a basic ggplot2 R question. I have a dataframe in which the data looks like this: AA 4 AA 6 BB 6 AB 5 BA 4 AA 3 NN 2 AN 6 NN 5 AN 4 NA 3 BB 6 BN 5 NB 1 BN 7 The file has many more lines. I want to make a histogram using ggplot2 that will plot the 9 classes AA,BB,AB,BA,AN,BN,NN,NB,NA on the x-axis and the sum of its corresponding values in the next column on the y-axis. So for example, here the y-values should be AA= 13 BB= 12 AB= 5 BA= 4 NN= 7 AN= 10 NA= 3 BN= 12 NB= 1 Answer: This isn't a bioinformatics question, but it's quicker to answer than to close. df1 <- read.table(text = " AA 4 AA 6 BB 6 AB 5 BA 4 AA 3 NN 2 AN 6 NN 5 AN 4 NA 3 BB 6 BN 5 NB 1 BN 7", na.strings = "") library(ggplot2) ggplot(df1, aes(x = V1, y = V2)) + geom_bar(stat = "identity")
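As a sanity check of the expected bar heights (my addition, not part of the original answer): geom_bar(stat = "identity") stacks bars that share an x value, so each bar's height is just the per-class sum, which can be reproduced in a few lines of plain Python. Note that "NA" is a genuine class label here, which is exactly why the R answer passes na.strings = "" to read.table.

```python
from collections import defaultdict

# Same data as in the question; one "class value" pair per line.
raw = """AA 4
AA 6
BB 6
AB 5
BA 4
AA 3
NN 2
AN 6
NN 5
AN 4
NA 3
BB 6
BN 5
NB 1
BN 7"""

# Sum the second column per class, mirroring what the stacked
# identity bars display.
sums = defaultdict(int)
for line in raw.splitlines():
    cls, val = line.split()
    sums[cls] += int(val)

print(dict(sums))
# {'AA': 13, 'BB': 12, 'AB': 5, 'BA': 4, 'NN': 7, 'AN': 10, 'NA': 3, 'BN': 12, 'NB': 1}
```

These match the y-values listed in the question.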
{ "domain": "bioinformatics.stackexchange", "id": 395, "tags": "r, visualization, ggplot2" }
Presentism: doesn't everything exist at the same moment?
Question: The Help Center recommends I 'fix' this question: (original question) "It seems self-evident that everything exists in the Now. Notwithstanding time-dilation and different rates of the passage of time and entropy, doesn't this all still happen in the same universal moment?" This question was asked with a rusty but apparently adequate awareness of special relativity. However, it may have been mistaken to assume that 'presentism' only refers to the present. There is a fuller definition according to this article: Is There an Alternative to the Block Universe View? If one can talk about a widely (explicitly or implicitly) accepted view on reality it is presentism – the view that it is only the present (the three-dimensional world at the moment `now') that exists. This common-sense view, which reflects the way we perceive the world, has two defining features: (i) the world exists only at the constantly changing present moment (past and future do not exist) and (ii) the world is three-dimensional. According to special relativity the universe is four-dimensional, so presentism in this form is ruled out. Nevertheless, isn't the present universal? For example, regardless of the observers' frames in the diagram below, they see event A at the same time, say, at t[0] = "the present". The exception to this post-relativistic presentism comes only from the realms of time-travel, which would permit the past and the present to coexist. Most speculative, of course. So is this question on presentism correct? (If not blindingly obviously so.) Addendum The following quote from "The Feynman Lectures on Physics, Vol. I, 17-3 Past, present and future" illustrates why the answer might be less than obvious. However, just because the present may be unobservable does not mean it does not exist. As far as I am aware, it does. It seems some physicists discount its existence because it is unobservable, which may be the reason for the confusion. 
Answer: According to classical physics and Newton, there was an absolute time in which things happen. But this intuitive picture of reality was abandoned once Einstein showed that there is no absolute time, only a relative one depending on the observer's frame. It is not our intuition that is valid, but what is measured. According to Newton, absolute time exists independently of any observer and progresses at a consistent pace throughout the universe. Unlike relative time, Newton believed absolute time was imperceptible and could only be understood mathematically. According to Newton, humans are only capable of perceiving relative time, which is a measurement of perceivable objects in motion. So even Newton would suggest that what you cannot measure is imperceptible. Thus if in special relativity one event happens at two different times for two observers, only those measured times are valid, because they are what is actually measured and experienced. And measuring the time at which those measurements themselves were made is not possible, because an absolute reference time is imperceptible. So perhaps the common event did happen at one moment, but we have no access to it. Along these lines see also Kant, who would perhaps go even further and hold that time exists only in our heads.
{ "domain": "physics.stackexchange", "id": 29718, "tags": "special-relativity, time, time-travel, wormholes" }
Pokemon battle simulator
Question: I learned Python and Pygame and have now coded a few games. For the last few months I have been working on this "luck free" pokemon battle simulator. I feel my program runs well in its current state, but it is also somewhat spaghetti-coded: as I try to improve the game, I find it more challenging than it should be. I know there are some things done wrong or improperly. Any constructive criticism is appreciated. There are multiple more files you can find here. Here's the game.py file where the majority of the battle code is: from __future__ import division import pygame, Functions, math, time, computer_move from yo_buttons import Button from pokemon import * from vision import * from pokemon_types import * class Game(): home_screen = Home_Screen() team_builder_screen = Team_Builder_Screen() play_screen = Play_Screen() options_screen = Options_Screen() gym_leaders_screen = Gym_Leaders_Screen() screens = [] screens.append(home_screen) screens.append(team_builder_screen) screens.append(play_screen) screens.append(options_screen) screens.append(gym_leaders_screen) current_screen_number = 0 Pokemon_Team = test_team Pokemon_List = Pokemon_Team.list opponent = test_opponent Opponent_Pokemon_List = opponent.list current_pokemon_number = 0 current_opponent_number = 0 current_turn_text = set([]) turn_text = set([]) battle_text = [] current_turn = 0 turn_index = -1 dead_text = "" current_turn_sprites = [Pokemon_List[0], Opponent_Pokemon_List[0]] turn_sprites = set([]) battle_sprites = [] current_sprites = [Pokemon_List[0], Opponent_Pokemon_List[0]] battle_sprites.append(current_sprites) pause = False previous_screen = None square_info = ["","","",""] show_party = False Pokemon_Party = [1, 2, 3, 4] Opponent_Party = [1, 2, 3, 4] Pokemon_Fainted = False switched_in = False switching = False opponent_switching = False second_move_info = None first_move_done = False second_move_done = False going_to_switch = False text_y = 75 can_switch = 
True opponent_can_switch = True next_move_text = pygame.image.load('images/next_move.png') next_pokemon_text = pygame.image.load('images/next_pokemon.png') exit_text = pygame.image.load('images/exit_text.png') continue_text = pygame.image.load('images/continue.png') game_over_text = pygame.image.load('images/game_over.png') text_bubbles = [next_move_text, next_pokemon_text, exit_text, continue_text, game_over_text] help_text_number = 0 turn_logged = False playing = True stop_that = False player_field = { "stealth rock": False, "spikes": 0, "toxic spikes": 0, } opponent_field = { "stealth rock": False, "spikes": 0, "toxic spikes": 0, } stealth_rock_sprite = pygame.image.load("images/stealth_rock.png") spike_sprite = pygame.image.load('images/triangle.png') @staticmethod def update(screen): Game.current_screen = Game.screens[Game.current_screen_number] Game.current_screen.Block_List.draw(screen) Button.update(screen, Game.current_screen.Button_List) Game.current_pokemon = Game.Pokemon_List[Game.current_pokemon_number] Game.opponent.pokemon = Game.Opponent_Pokemon_List[Game.current_opponent_number] Game.Opponent_Pokemon_List = Game.opponent.list Game.help_text = Game.text_bubbles[Game.help_text_number] if Game.current_screen_number == 2: Game.update_info(screen) elif Game.current_screen_number == 1: Game.show_stats(screen) elif Game.current_screen_number == 3: Game.options(screen) elif Game.current_screen_number == 4: Game.Gym_Leaders(screen) @staticmethod def update_info(screen): # sprites if Game.turn_index+1 == Game.current_turn: pokemon, opponent = Game.current_pokemon, Game.opponent.pokemon pokemon_health, opponent_health = Game.current_pokemon.current_health, Game.opponent.pokemon.current_health else: pokemon, opponent = Game.current_sprites[0], Game.current_sprites[1] if Game.turn_index == 0: pokemon_health, opponent_health = pokemon.max_health, opponent.max_health else: pokemon_health, opponent_health = Game.current_sprites[2], Game.current_sprites[3] sprites 
= [ [pokemon.back_image, (225, 250)], [opponent.front_image, (475, 50)], [pokemon.type.image, (495, 265)], [opponent.type.image, (75, 40)], ] if opponent.type2 != None: type2 = (opponent.type2.image, (107, 40)) sprites.append(type2) if pokemon.type2 != None: type2 = (pokemon.type2.image, (527, 265)) sprites.append(type2) help_text = [Game.help_text, (5, 300)] sprites.append(help_text) # text text = [ ["%s" % Game.square_info[0], 125, 385], ["%s" % Game.square_info[1], 375, 385], ["%s" % Game.square_info[2], 125, 442], ["%s" % Game.square_info[3], 375, 442], #["%s" % Game.current_pokemon.name, 525, 250], #["%s" % Game.opponent.pokemon.name, 100, 25], #["%s/%s" % (int(math.floor(Game.current_pokemon.current_health)), Game.current_pokemon.max_health), 525, 315], #["%s" % Game.get_percent(Game.opponent.pokemon.current_health, Game.opponent.pokemon.max_health), 105, 90], #["hp:", 30, 70], #["hp:", 450, 295], ["%s" % pokemon.name, 525, 250], ["%s" % opponent.name, 100, 25], ["%s/%s" % (int(math.floor(pokemon_health)), pokemon.max_health), 525, 315], ["%s" % Game.get_percent(opponent_health, opponent.max_health), 105, 90], ["hp:", 30, 70], ["hp:", 450, 295], ] for item in text: Functions.text_to_screen(screen, item[0], item[1], item[2], 20) pokemon_hp = math.floor((pokemon_health / pokemon.max_health) * 110) opponent_hp = math.floor((opponent_health / opponent.max_health) * 110) # Check if all pokemon in party are fainted or not # Also display Pokeballs for each pokemon x = 0 x_axis = 470 Lose, Win = True, True Game.can_switch = False for pokemon in Game.Pokemon_List: pokeball = [Pokemon.Pokeball, (x_axis, 334)] sprites.append(pokeball) if pokemon != Game.current_pokemon: Game.Pokemon_Party[x] = pokemon x+=1 if pokemon.current_health > 0: Game.can_switch = True if pokemon.current_health > 0: Lose = False else: icon = [Pokemon.icon_x, (x_axis, 334)] sprites.append(icon) x_axis += 24 x = 0 x_axis = 50 Game.opponent_can_switch = False for pokemon in 
Game.Opponent_Pokemon_List: pokeball = [Pokemon.Pokeball, (x_axis, 109)] sprites.append(pokeball) if pokemon != Game.opponent.pokemon: Game.Opponent_Party[x] = pokemon x+=1 if pokemon.current_health > 0: Game.opponent_can_switch = True if pokemon.current_health > 0: Win = False else: icon = [Pokemon.icon_x, (x_axis, 109)] sprites.append(icon) x_axis += 24 # Show moves or team in the squares if not Game.show_party: Game.square_info = [Game.current_pokemon.move1.name, Game.current_pokemon.move2.name, Game.current_pokemon.move3.name, Game.current_pokemon.move4.name,] secondary_info = [ ["power:", 185, 405], ["%s" % Game.current_pokemon.move1.power, 230, 405], ["power:", 435, 405], ["%s" % Game.current_pokemon.move2.power, 480, 405], ["power:", 185, 462], ["%s" % Game.current_pokemon.move3.power, 230, 462], ["power:", 435, 462], ["%s" % Game.current_pokemon.move4.power, 480, 462], ] info_sprites = [ [Game.current_pokemon.move1.type.image, (50, 400)], [Game.current_pokemon.move1.contact_image, (100, 400)], [Game.current_pokemon.move2.type.image, (300, 400)], [Game.current_pokemon.move2.contact_image, (350, 400)], [Game.current_pokemon.move3.type.image, (50, 457)], [Game.current_pokemon.move3.contact_image, (100, 457)], [Game.current_pokemon.move4.type.image, (300, 457)], [Game.current_pokemon.move4.contact_image, (350, 457)], ] elif Game.show_party: Game.square_info = [Game.Pokemon_Party[0].name, Game.Pokemon_Party[1].name, Game.Pokemon_Party[2].name, Game.Pokemon_Party[3].name] secondary_info = [ ["hp:", 160, 408], ["%s/%s" % (int(Game.Pokemon_Party[0].current_health), Game.Pokemon_Party[0].max_health), 210, 408], ["hp:", 410, 408], ["%s/%s" % (int(Game.Pokemon_Party[1].current_health), Game.Pokemon_Party[1].max_health), 460, 408], ["hp:", 160, 465], ["%s/%s" % (int(Game.Pokemon_Party[2].current_health), Game.Pokemon_Party[2].max_health), 210, 465], ["hp:", 410, 465], ["%s/%s" % (int(Game.Pokemon_Party[3].current_health), Game.Pokemon_Party[3].max_health), 460, 465], ] 
info_sprites = [ [Game.Pokemon_Party[0].type.image, (50, 400)], [Game.Pokemon_Party[1].type.image, (300, 400)], [Game.Pokemon_Party[2].type.image, (50, 457)], [Game.Pokemon_Party[3].type.image, (300, 457)], ] if Game.Pokemon_Party[0].type2 != None: sprite = (Game.Pokemon_Party[0].type2.image, (82, 400)) sprites.append(sprite) if Game.Pokemon_Party[1].type2 != None: sprite = (Game.Pokemon_Party[1].type2.image, (332, 400)) sprites.append(sprite) if Game.Pokemon_Party[2].type2 != None: sprite = (Game.Pokemon_Party[2].type2.image, (82, 457)) sprites.append(sprite) if Game.Pokemon_Party[3].type2 != None: sprite = (Game.Pokemon_Party[3].type2.image, (332, 457)) sprites.append(sprite) # Entry Hazards if Game.opponent_field["stealth rock"]: rock_icon = (Game.stealth_rock_sprite, (425, 100)) sprites.append(rock_icon) if Game.player_field["stealth rock"]: rock_icon = (Game.stealth_rock_sprite, (350, 300)) sprites.append(rock_icon) if Game.opponent_field["spikes"] > 0: x = 450 for spike in range(Game.opponent_field["spikes"]): spikes_icon = (Game.spike_sprite, (x, 150)) sprites.append(spikes_icon) x+=25 if Game.player_field["spikes"] > 0: x = 300 for spike in range(Game.player_field["spikes"]): spikes_icon = (Game.spike_sprite, (x, 340)) sprites.append(spikes_icon) x+=25 # Blitting the sprites for item in info_sprites: screen.blit(item[0], item[1]) for item in secondary_info: Functions.text_to_screen(screen, item[0], item[1], item[2], 18) for item in sprites: screen.blit(item[0], item[1]) #### Health Bars ### block_list = pygame.sprite.Group() health_bars = [ [50, 65, 110, 12, RED], [470, 290, 110, 12, RED], ] for block in health_bars: new_block = Block(block[0], block[1], block[2], block[3], block[4]) block_list.add(new_block) block_list.draw(screen) block_list = pygame.sprite.Group() # Health Bar Colors if 28 < pokemon_hp <= 55: pokemon_color = YELLOW elif 0 < pokemon_hp <= 28: pokemon_color = DARK_RED else: pokemon_color = BLUE if 28 < opponent_hp <= 55: opponent_color = 
YELLOW elif 0 < opponent_hp <= 28: opponent_color = DARK_RED else: opponent_color = BLUE health_bar_color = (25,25,25) blocks = [ #[50, 65, 110, 12, RED], #[470, 290, 110, 12, RED], [470, 290, pokemon_hp, 12, pokemon_color], [50, 65, opponent_hp, 12, opponent_color], [160, 63, 6, 16, health_bar_color], [580, 288, 6, 16, health_bar_color], [48, 63, 2, 16, health_bar_color], [468, 288, 2, 16, health_bar_color], [48, 77, 112, 2, health_bar_color], [468, 302, 112, 2, health_bar_color], [48, 63, 112, 2, health_bar_color], [468, 288, 112, 2, health_bar_color], ] for block in blocks: new_block = Block(block[0], block[1], block[2], block[3], block[4]) block_list.add(new_block) block_list.draw(screen) # Win or Lose if Win: Game.game_over(screen, Game.current_pokemon) if Lose: Game.game_over(screen, Game.opponent.pokemon) # Show the current turn info and pause the Game if Game.pause: if Game.playing: Game.help_text_number = 3 """for item in Game.current_turn_text: Functions.text_to_screen(screen, item[0], item[1], item[2], item[3], item[4])""" if not Game.pause: Game.log_turn(False) Game.current_turn_text = set([]) Game.first_move_done = False Game.second_move_done = False Game.text_y = 75 Game.help_text_number = 0 Game.turn_logged = False Game.stop_that = False # Check if a pokemon on the battlefield has fainted if math.floor(Game.current_pokemon.current_health) <= 0 or math.floor(Game.opponent.pokemon.current_health) <= 0: Game.Pokemon_Fainted = True Game.pause = True if Game.current_pokemon.current_health <= 0: Game.show_party = True Game.current_pokemon.fainted = True Game.current_pokemon.current_health = 0 Game.help_text_number = 1 if Game.opponent.pokemon.current_health <= 0: Game.opponent.pokemon.fainted = True Game.opponent.pokemon.current_health = 0 # Tell player a Pokemon has fainted if Game.Pokemon_Fainted: Game.pause = True if Game.current_pokemon.fainted and not Game.opponent.pokemon.fainted: if not Game.stop_that: dead_text = ("%s died!" 
% Game.current_pokemon.name, 150, Game.text_y, 25, RED) Game.battle_text[Game.current_turn - 1].add(dead_text) Game.stop_that = True elif Game.opponent.pokemon.fainted and not Game.current_pokemon.fainted: dead_text = ("%s died!" % Game.opponent.pokemon.name, 150, Game.text_y, 25, BLUE) Game.battle_text[Game.current_turn - 1].add(dead_text) if not Game.switching: Game.switch_opponent(True) else: #if not Game.going_to_switch: # Game.text_y += 35 Game.going_to_switch = True elif Game.current_pokemon.fainted and Game.opponent.pokemon.fainted: dead_text = ("%s died!" % Game.current_pokemon.name, 150, Game.text_y+35, 25, RED) Game.battle_text[Game.current_turn - 1].add(dead_text) dead_text = ("%s died!" % Game.opponent.pokemon.name, 150, Game.text_y, 25, BLUE) Game.battle_text[Game.current_turn - 1].add(dead_text) # Show specific turn info on the side if Game.current_turn > 0: for item in Game.battle_text[Game.turn_index]: try: Functions.text_to_screen(screen, item[0], 800, item[2], 20, item[4]) y += 35 except: print item Game.current_sprites = Game.battle_sprites[Game.turn_index+1] ### U-turn/Volt-Switch ### if Game.switching: Game.show_party = True Game.help_text_number = 1 # Opponent U-turn/Volt-switch if Game.opponent_switching: switch = computer_move.best_switch(Game.current_pokemon, Game.Opponent_Party) Game.switch_move(Game.opponent.pokemon, switch) # Switch after dying from U-turn/Volt-switch if not Game.switching and Game.going_to_switch: Game.switch_opponent() # Game Over help text if not Game.playing: Game.help_text_number = 4 # After Turn """if Game.second_move_done: if Game.current_pokemon.speed > Game.opponent.pokemon.speed: Game.after_turn_item(Game.current_pokemon) Game.after_turn_item(Game.opponent.pokemon) else: Game.after_turn_item(Game.opponent.pokemon) Game.after_turn_item(Game.current_pokemon) Game.second_move_done = False""" @staticmethod def show_stats(screen): text = [ ["%s" % Game.current_pokemon.max_health, 515, 150], ["%s" % 
Game.current_pokemon.attack, 515, 210], ["%s" % Game.current_pokemon.defense, 515, 270], ["%s" % Game.current_pokemon.special_attack, 515, 330], ["%s" % Game.current_pokemon.special_defense, 515, 390], ["%s" % Game.current_pokemon.speed, 515, 450], ] for item in text: Functions.text_to_screen(screen, item[0], item[1], item[2]) buttons = [ ["%s" % Game.current_pokemon.name, 75, 25, 250, 30, BLUE, BLUE, None], ["Type: %s" % Game.current_pokemon.type.name, 375, 5, 200, 30, BLUE, BLUE, None], #move1 & stats ["Move 1: %s" % Game.current_pokemon.move1.name, 25, 75, 275, 30, BLUE, BLUE, None], ["Type: %s" % Game.current_pokemon.move1.type.name, 25, 107, 250, 30, DODGER_BLUE, BLUE, None], ["Power:%s" % Game.current_pokemon.move1.power, 5, 142, 125, 30, DODGER_BLUE, BLUE, None], ["Contact: %s" % Game.current_pokemon.move1.contact, 135, 142, 200, 30, DODGER_BLUE, BLUE, None], # move 2 & stats ["Move 2: %s" % Game.current_pokemon.move2.name, 25, 175, 275, 30, BLUE, BLUE, None], ["Type: %s" % Game.current_pokemon.move2.type.name, 25, 207, 250, 30, DODGER_BLUE, BLUE, None], ["Power:%s" % Game.current_pokemon.move2.power, 5, 242, 125, 30, DODGER_BLUE, BLUE, None], ["Contact: %s" % Game.current_pokemon.move2.contact, 135, 242, 200, 30, DODGER_BLUE, BLUE, None], # move 3 & stats ["Move 3: %s" % Game.current_pokemon.move3.name, 25, 275, 275, 30, BLUE, BLUE, None], ["Type: %s" % Game.current_pokemon.move3.type.name, 25, 307, 250, 30, DODGER_BLUE, BLUE, None], ["Power:%s" % Game.current_pokemon.move3.power, 5, 342, 125, 30, DODGER_BLUE, BLUE, None], ["Contact: %s" % Game.current_pokemon.move3.contact, 135, 342, 200, 30, DODGER_BLUE, BLUE, None], # move 4 & stats ["Move 4: %s" % Game.current_pokemon.move4.name, 25, 375, 275, 30, BLUE, BLUE, None], ["Type: %s" % Game.current_pokemon.move4.type.name, 25, 407, 250, 30, DODGER_BLUE, BLUE, None], ["Power:%s" % Game.current_pokemon.move4.power, 5, 442, 125, 30, DODGER_BLUE, BLUE, None], ["Contact: %s" % Game.current_pokemon.move4.contact, 
135, 442, 200, 30, DODGER_BLUE, BLUE, None], # points ["Points: %s" % Game.current_pokemon.points, 400, 75, 150, 30, BRIGHT_BLUE, BRIGHT_BLUE, None], ["Health", 340, 135, 100, 30, BLUE, BLUE, None], ["Attack", 340, 195, 100, 30, BLUE, BLUE, None], ["Defense", 340, 255, 100, 30, BLUE, BLUE, None], ["Sp.Atk", 340, 315, 100, 30, BLUE, BLUE, None], ["Sp.Def", 340, 375, 100, 30, BLUE, BLUE, None], ["Speed", 340, 435, 100, 30, BLUE, BLUE, None], ] if Game.current_pokemon.type2 != None: type2 = ["%s" % Game.current_pokemon.type2.name, 375, 35, 200, 30, BLUE, BLUE, None] buttons.append(type2) button_list = [] for item in buttons: button = Button(item[0], item[1], item[2], item[3], item[4], item[5], item[6], item[7]) button_list.append(button) base_stats = [ ["%s" % Game.current_pokemon.base_health, 460, 150], ["%s" % Game.current_pokemon.base_attack, 460, 210], ["%s" % Game.current_pokemon.base_defense, 460, 270], ["%s" % Game.current_pokemon.base_special_attack, 460, 330], ["%s" % Game.current_pokemon.base_special_defense, 460, 390], ["%s" % Game.current_pokemon.base_speed, 460, 450], ] for item in base_stats: Functions.text_to_screen(screen, item[0], item[1], item[2], 20) Button.update(screen, button_list) Pokemon.update() Game.current_pokemon.move1 = Game.current_pokemon.move_list[Game.current_pokemon.move1_number] Game.current_pokemon.move2 = Game.current_pokemon.move_list[Game.current_pokemon.move2_number] Game.current_pokemon.move3 = Game.current_pokemon.move_list[Game.current_pokemon.move3_number] Game.current_pokemon.move4 = Game.current_pokemon.move_list[Game.current_pokemon.move4_number] Game.current_pokemon.move_set[0] = Game.current_pokemon.move1 Game.current_pokemon.move_set[1] = Game.current_pokemon.move2 Game.current_pokemon.move_set[2] = Game.current_pokemon.move3 Game.current_pokemon.move_set[3] = Game.current_pokemon.move4 @staticmethod def options(screen): button_list = [] y = 100 for pokemon in Game.opponent.list: button = Button("%s" % pokemon.name, 
335, y, 300, 30, RED, RED, None) button_list.append(button) y += 50 y = 100 for pokemon in Game.Pokemon_List: button = Button("%s" % pokemon.name, 5, y, 300, 30, BLUE, BLUE, None) button_list.append(button) y += 50 text = [ ["Your Team", 155, 50], ["Opponent Team", 485, 50], ["%s Pokemon" % len(Pokemon.All_Pokemon), 300, 25], ["%s" % len(Opponent.All_Pokemon), 500, 25], ] for item in text: Functions.text_to_screen(screen, item[0], item[1], item[2]) Button.update(screen, button_list) @staticmethod def Gym_Leaders(screen): text = [ ["Gym Leaders Room", 325, 25], ["Click to Challenge!", 325, 50] ] for item in text: Functions.text_to_screen(screen, item[0], item[1], item[2]) @staticmethod def reset(): Game.pause = False Game.current_pokemon_number = 0 Game.current_opponent_number = 0 for pokemon in Game.Pokemon_List: pokemon.current_health = pokemon.max_health pokemon.fainted = False pokemon.first_turn = True for pokemon in Game.Opponent_Pokemon_List: pokemon.current_health = pokemon.max_health pokemon.fainted = False pokemon.first_turn = True Game.current_turn_text = set([]) Game.show_party = False Game.Pokemon_Fainted = False Game.battle_text[:] = [] Game.turn_index = -1 Game.current_turn = 0 Game.switching = False Game.opponent_switching = False Game.first_move_done = False Game.second_move_done = False Game.going_to_switch = False Game.text_y = 75 Game.help_text_number = 0 Game.turn_logged = False Game.playing = True Game.stop_that = False Game.battle_sprites[:] = [] Game.current_sprites = [Game.Pokemon_List[0], Game.Opponent_Pokemon_List[0]] Game.battle_sprites.append(Game.current_sprites) Game.player_field["stealth rock"] = False Game.opponent_field["stealth rock"] = False Game.player_field["spikes"] = 0 Game.opponent_field["spikes"] = 0 @staticmethod def attack(attacker, defender, move, y): type_advantage = Game.type_advantage(defender, move) power = Game.damage_calc(attacker, defender, move, type_advantage) x = y damage = None if defender.current_health - power 
> 0 and type_advantage > 0: defender.current_health -= power damage = power elif defender.current_health - power <= 0: damage = defender.current_health defender.current_health = 0 x+=35 Game.text_y += 35 if type_advantage == 2 or type_advantage == 4: type_advantage_text = ["It's super effective!", 375, y+35] elif type_advantage == 0.5 or type_advantage == 0.25: type_advantage_text = ["It's not very effective...", 375, y+35] elif type_advantage == 0: type_advantage_text = ["It didn't do anything...", 375, y+35] else: type_advantage_text = "" x -= 35 Game.text_y -= 35 last_text = "" if attacker == Game.current_pokemon: if type_advantage > 0: last_text = "%s lost %s HP!" % (defender.name, Game.get_percent(damage, defender.max_health)) color = BLUE elif attacker == Game.opponent.pokemon: if type_advantage > 0: last_text = "%s lost %s HP!" % (defender.name, int(math.floor(damage))) color = DARK_RED text = [ ["%s used %s!" % (attacker.name, move.name), 375, y], ["%s" % last_text, 375, x+35] ] if type_advantage != 1: text.append(type_advantage_text) for item in text: Game.current_turn_text.add((item[0], item[1], item[2], 20, color)) attacker.first_turn = False Game.text_y += 70 return damage @staticmethod def do_moves(move): Game.pause = True opponent_move = Game.opponent_move() if not isinstance(move, int) and not isinstance(opponent_move, int): if move.priority == opponent_move.priority: if Game.current_pokemon.speed > Game.opponent.pokemon.speed: first = Game.current_pokemon second = Game.opponent.pokemon elif Game.opponent.pokemon.speed >= Game.current_pokemon.speed: first = Game.opponent.pokemon second = Game.current_pokemon elif move.priority > opponent_move.priority: first = Game.current_pokemon second = Game.opponent.pokemon elif move.priority < opponent_move.priority: first = Game.opponent.pokemon second = Game.current_pokemon if first == Game.current_pokemon: move1 = move move2 = opponent_move elif first == Game.opponent.pokemon: move1 = opponent_move move2 = 
move can_move = True if move1.effect != None: can_move = Game.move_with_effect(first, second, move1, move2, Game.text_y) else: Game.attack(first, second, move1, Game.text_y) #Game.text_y += 175 Game.first_move_done = True if (second.current_health > 0 and can_move): if not Game.switching and not Game.opponent_switching: if move2.effect != None: Game.move_with_effect(second, first, move2, 1, Game.text_y) else: Game.attack(second, first, move2, Game.text_y) Game.second_move_done = True elif Game.switching or Game.opponent_switching: Game.second_move_info = (second, first, move2, Game.text_y) elif isinstance(move, int) and isinstance(opponent_move, int): if Game.current_pokemon.speed > Game.opponent.pokemon.speed: Game.switch_pokemon(Game.current_pokemon, move, Game.text_y) Game.switch_pokemon(Game.opponent.pokemon, opponent_move, Game.text_y) elif Game.current_pokemon.speed <= Game.opponent.pokemon.speed: Game.switch_pokemon(Game.opponent.pokemon, opponent_move, Game.text_y) Game.switch_pokemon(Game.current_pokemon, move, Game.text_y) Game.second_move_done = True else: if isinstance(move, int): defender = Game.switch_pokemon(Game.current_pokemon, move, Game.text_y) Game.first_move_done = True if opponent_move.effect == None: Game.attack(Game.opponent.pokemon, defender, opponent_move, Game.text_y) elif opponent_move.effect != None: Game.move_with_effect(Game.opponent.pokemon, defender, opponent_move, move, Game.text_y) Game.second_move_done = True elif not isinstance(move, int): defender = Game.switch_pokemon(Game.opponent.pokemon, opponent_move, Game.text_y) Game.first_move_done = True if move.effect == None: Game.attack(Game.current_pokemon, defender, move, Game.text_y) elif move.effect != None: Game.move_with_effect(Game.current_pokemon, defender, move, opponent_move, Game.text_y) Game.second_move_done = True if not Game.turn_logged: Game.log_turn() #Game.log_sprites(Game.current_pokemon, Game.opponent.pokemon) #else: # Game.log_turn(False) @staticmethod def 
damage_calc(attacker, defender, move, type_advantage): # Damage = (0.44*(attack/defense)*move power)*modifier # Modifier = STAB * Type effectiveness * other(items, abilities) if move.contact == "physical": attack = attacker.attack defense = defender.defense elif move.contact == "special": attack = attacker.special_attack defense = defender.special_defense power = move.power #STAB STAB = 1 if attacker.type.name == move.type.name: STAB = 1.5 if attacker.type2 != None and STAB == 1: if attacker.type2.name == move.type.name: STAB = 1.5 # Damage calculation damage = math.floor((0.2 * (attack / defense) * power + 2) * (type_advantage * STAB)) return damage @staticmethod def type_advantage(defender, move): type_advantage = 1 for weakness in defender.type.weakness_list: if move.type.name == weakness: type_advantage = 2 break for resist in defender.type.resist_list: if move.type.name == resist: type_advantage = 0.5 break for immune in defender.type.immune_list: if move.type.name == immune: type_advantage = 0 break if defender.type2 != None and type_advantage > 0: for weakness in defender.type2.weakness_list: if move.type.name == weakness: if type_advantage == 1: type_advantage = 2 elif type_advantage == 2: type_advantage = 4 elif type_advantage == 0.5: type_advantage = 1 break for resist in defender.type2.resist_list: if move.type.name == resist: if type_advantage == 1: type_advantage = 0.5 elif type_advantage == 2: type_advantage = 1 elif type_advantage == 0.5: type_advantage = 0.25 break for immune in defender.type2.immune_list: if move.type.name == immune: type_advantage = 0 break return type_advantage @staticmethod def get_percent(numerator, denominator): number = int(round((numerator/denominator)*100)) percent = ("%s%%" % number) return percent @staticmethod def switch_pokemon(pokemon, pokemon_number, y): if pokemon == Game.current_pokemon: old = Game.current_pokemon new = Game.Pokemon_Party[pokemon_number] new_number = Game.Pokemon_List.index(new) 
Game.current_pokemon_number = new_number Game.show_party = False poke = Game.opponent.pokemon color = GREEN field = Game.player_field else: old = Game.opponent.pokemon new = Game.Opponent_Party[pokemon_number] new_number = Game.Opponent_Pokemon_List.index(new) Game.current_opponent_number = new_number poke = Game.current_pokemon color = YELLOW Game.switched_in = True field = Game.opponent_field text = [ ["%s come back!" % old.name, 375, y], ["Go %s!" % new.name, 375, y+35] ] for item in text: Game.current_turn_text.add((item[0], item[1], item[2], 20, color)) new.first_turn = True Game.switching = False Game.text_y += 35 Game.entry_hazard_damage(new, field) Game.text_y += 35 return new @staticmethod def send_in_pokemon(pokemon_number): if Game.Pokemon_Party[pokemon_number].current_health > 0: Game.text_y += 35 Game.pause = True old = Game.current_pokemon new = Game.Pokemon_Party[pokemon_number] new_number = Game.Pokemon_List.index(new) Game.current_pokemon_number = new_number text = ("Go %s" % new.name, 150, Game.text_y, 20, GREEN) Game.battle_text[Game.current_turn - 1].add(text) Game.current_turn_text.add(text) Game.entry_hazard_damage(new, Game.player_field) Game.Pokemon_Fainted = False Game.show_party = False new.first_turn = True Game.text_y += 35 @staticmethod def switch_opponent(dead=False): #if not Game.switching: next_switch = computer_move.best_switch(Game.current_pokemon, Game.Opponent_Party, dead) Game.send_in_opponent(next_switch) @staticmethod def game_over(screen, winner): Game.pause = True Game.show_party = False Game.playing = False Game.help_text_number = 4 if winner == Game.current_pokemon: text = ("You win!", 150, 425, 30, WHITE) elif winner == Game.opponent.pokemon: text = ("You are out of Pokemon!", 150, 425, 30, WHITE) Game.battle_text[Game.current_turn - 1].add(text) @staticmethod def send_in_opponent(pokemon_number): if isinstance(pokemon_number, int): if Game.Opponent_Party[pokemon_number].current_health > 0: # old = Game.opponent.pokemon 
new = Game.Opponent_Party[pokemon_number] new_number = Game.Opponent_Pokemon_List.index(new) Game.current_opponent_number = new_number Game.text_y += 35 text = ("Go %s" % new.name, 150, Game.text_y, 20, YELLOW) Game.current_turn_text.add(text) Game.battle_text[Game.current_turn - 1].add(text) Game.entry_hazard_damage(new, Game.opponent_field) Game.Pokemon_Fainted = False Game.show_party = False Game.switched_in = True new.first_turn = True Game.going_to_switch = False Game.text_y += 35 @staticmethod def log_turn(next_turn=True): #turn_text = Game.current_turn_text if next_turn: Game.turn_index = Game.current_turn Game.current_turn += 1 turn_number = ("Turn %s" % Game.current_turn, 800, 40, 30, (255,255,255)) Game.turn_text.add(turn_number) Game.battle_text.append(Game.turn_text) Game.turn_logged = True Game.log_sprites(Game.current_pokemon, Game.opponent.pokemon) else: for text in Game.turn_text: Game.battle_text[Game.current_turn-1].add(text) for text in Game.current_turn_text: Game.turn_text.add(text) Game.turn_text = set([]) @staticmethod def log_sprites(pokemon, opponent): Game.current_turn_sprites = (pokemon, opponent, pokemon.current_health, opponent.current_health) Game.battle_sprites.append(Game.current_turn_sprites) @staticmethod def move_with_effect(attacker, defender, move, defender_move, y): can_move = True if attacker == Game.current_pokemon: attacker_color = BLUE elif attacker == Game.opponent.pokemon: attacker_color = DARK_RED if move.power > 0: if move.effect == "flinch": if attacker.first_turn: Game.attack(attacker, defender, move, y) if defender.current_health > 0 and not isinstance(defender_move, int): flinch_text = ("%s flinched!" % defender.name, 375, y+110, 25, DARK_RED) Game.current_turn_text.add(flinch_text) can_move = False else: flinch_text = ("%s used %s..." 
% (attacker.name, move.name), 375, y, 25, attacker_color) flinch_text2 = ("But it failed!", 375, y+35, 25, attacker_color) Game.current_turn_text.add(flinch_text2) Game.current_turn_text.add(flinch_text) elif move.effect == "sucker_punch": if isinstance(defender_move, int): sucker_text = ("%s used %s..." % (attacker.name, move.name), 375, y, 25, attacker_color) sucker_text2 = ("But it failed!", 375, y+35, 25, attacker_color) Game.current_turn_text.add(sucker_text) Game.current_turn_text.add(sucker_text2) else: Game.attack(attacker, defender, move, y) Game.text_y -= 70 elif move.effect == "recoil": damage = Game.attack(attacker, defender, move, y) Game.text_y -= 35 if damage != None: recoil_damage = math.floor(damage/3) attacker.current_health -= recoil_damage recoil_text = ("%s was hurt by recoil!" % attacker.name, 375, y+105, 25, attacker_color) Game.current_turn_text.add(recoil_text) if attacker.current_health <= 0: attacker.current_health = 0 elif move.effect == "drain": damage = Game.attack(attacker, defender, move, y) Game.text_y -= 35 if damage != None: drain_damage = math.floor(damage/2) attacker.current_health += drain_damage drain_text = ("%s drained some health!" 
% attacker.name, 375, y+105, 25, GREEN) Game.current_turn_text.add(drain_text) if attacker.current_health > attacker.max_health: attacker.current_health = attacker.max_health elif move.effect == "switch": damage = Game.attack(attacker, defender, move, y) if damage: if Game.can_switch and attacker == Game.current_pokemon: Game.switching = True if defender.current_health <= 0: Game.going_to_switch = True elif Game.opponent_can_switch and attacker == Game.opponent.pokemon: Game.opponent_switching = True if Game.first_move_done: Game.second_move_done = True else: Game.first_move_done = True if isinstance(defender_move, int): Game.second_move_done = True Game.second_move_info = None Game.text_y += 70 else: can_move = Game.non_damaging_move(attacker, defender, move, defender_move, y, attacker_color) return can_move @staticmethod def non_damaging_move(pokemon, opponent, move, opponent_move, y, color): can_move = True if pokemon == Game.current_pokemon: field = Game.opponent_field elif pokemon == Game.opponent.pokemon: field = Game.player_field text = ("%s used %s!" 
% (pokemon.name, move.name), 375, y, 25, color) Game.current_turn_text.add(text) Game.text_y += 35 if move.effect == "stealth rock": if field["stealth rock"]: failed_text = ("But it failed!", 375, y+35, 25, YELLOW) Game.current_turn_text.add(failed_text) Game.text_y += 35 field["stealth rock"] = True elif move.effect == "spikes": if field["spikes"] < 3: field["spikes"] += 1 else: failed_text = ("But it failed!", 375, y+35, 25, YELLOW) Game.current_turn_text.add(failed_text) Game.text_y += 35 return can_move @staticmethod def switch_move(pokemon, switch_number): if Game.second_move_info != None: y = 175 else: y = 360 Game.text_y += 35 new = Game.switch_pokemon(pokemon, switch_number, Game.text_y) if (Game.second_move_info != None and not Game.second_move_done): if Game.second_move_info[2].effect == None: Game.attack(Game.second_move_info[0], new, Game.second_move_info[2], Game.text_y) elif Game.second_move_info[2].effect != None: Game.move_with_effect(Game.second_move_info[0], new, Game.second_move_info[2], 1, Game.text_y) Game.second_move_info = None Game.switching = False Game.opponent_switching = False #if Game.second_move_info == None: Game.log_turn(False) @staticmethod def entry_hazard_damage(switch, field): percent = math.floor(switch.max_health * 0.125) if field["stealth rock"]: Game.text_y += 35 TA = Game.type_advantage(switch, stealth_rock_damage) damage = math.floor(percent * TA) switch.current_health -= damage text = ("%s was hurt by Stealth Rock!" % switch.name, 150, Game.text_y, 20, YELLOW) Game.current_turn_text.add(text) Game.battle_text[Game.current_turn - 1].add(text) if field["spikes"] > 0 and (switch.type != Flying and switch.type2 != Flying): Game.text_y += 35 if field["spikes"] == 2: percent = math.floor(switch.max_health * 0.1667) elif field["spikes"] == 3: percent = math.floor(switch.max_health * 0.25) switch.current_health -= percent text = ("%s was hurt by Spikes!" 
% switch.name, 150, Game.text_y, 20, YELLOW) Game.current_turn_text.add(text) Game.battle_text[Game.current_turn - 1].add(text) @staticmethod def after_turn_item(pokemon): pokemon = pokemon if pokemon == Game.current_pokemon: color = GREEN else: color = YELLOW if pokemon.item != None and pokemon.current_health > 0: if pokemon.item.name == "Leftovers" and pokemon.current_health < pokemon.max_health: gained = math.floor(pokemon.max_health * 0.0625) pokemon.current_health += gained if pokemon.current_health > pokemon.max_health: pokemon.current_health = pokemon.max_health gained_text = ("%s ate some Leftovers" % pokemon.name, 375, Game.text_y, 25, color) Game.current_turn_text.add(gained_text) Game.text_y += 35 Game.log_turn(False) @staticmethod def opponent_move(): opponent = Game.opponent.pokemon pokemon = Game.current_pokemon strongest_move = computer_move.opponent_move(pokemon, opponent) type_advantage = computer_move.move_type_advantage(pokemon, strongest_move) damage = Game.damage_calc(opponent, pokemon, strongest_move, type_advantage) switch = False kill = False best_switch = Game.Opponent_Party[computer_move.best_switch(pokemon, Game.Opponent_Party)] test = computer_move.who_wins(pokemon, opponent) if computer_move.best_switch(pokemon, Game.Opponent_Party): if test == "loses":# and pokemon.base_speed > opponent.base_speed: switch = True """if best_switch.current_health > 0: offense_advantage, defense_advantage = computer_move.pokemon_type_advantage(pokemon, opponent) offense_advantage2, defense_advantage2 = computer_move.pokemon_type_advantage(opponent, pokemon) pokemon_type_advantage = (offense_advantage2 * defense_advantage2) / 2 opponent_type_advantage = (offense_advantage * defense_advantage) / 2 if (offense_advantage < offense_advantage2 and defense_advantage < defense_advantage2) and (pokemon.speed > opponent.speed): switch = True""" if pokemon.current_health - damage <= 0 and pokemon.speed < opponent.speed: switch = False kill = True minimum = 
Game.current_pokemon.max_health / 4 if (damage >= minimum and not switch) or Game.switched_in: opponent_move = strongest_move Game.switched_in = False else: opponent_move = computer_move.best_switch(pokemon, Game.Opponent_Party) test_move = computer_move.stay_alive(pokemon, opponent, Game.Opponent_Party) if test_move and not kill: opponent_move = test_move #return spikes return opponent_move Are there better ways I could handle the moves and maybe make a turn() method? I plan on working on this project for a long time so I want to make it better. There are other Pokemon type battle questions out there but I feel this game is more complex than most. Video of the program Answer: if you use variable/attribute names like move1 or type2 it is almost always a sign that it wants to be a list (moves, types) sprites = [ [pokemon.back_image, (225, 250)], [opponent.front_image, (475, 50)], [pokemon.type.image, (495, 265)], [opponent.type.image, (75, 40)], ] if opponent.type2 != None: type2 = (opponent.type2.image, (107, 40)) sprites.append(type2) if pokemon.type2 != None: type2 = (pokemon.type2.image, (527, 265)) sprites.append(type2) sprites = [ [pokemon.back_image, (225, 250)], [opponent.front_image, (475, 50)], ] for current_pokemon,locations in [ (pokemon,[(495, 265),(527, 265)]), (opponent,[(75, 40),(107, 40)]) ]: for type,location in zip(current_pokemon.types,locations): sprites.append([type.image, location]) use list literals, instead of: screens = [] screens.append(home_screen) screens.append(team_builder_screen) screens.append(play_screen) screens.append(options_screen) screens.append(gym_leaders_screen) use: screens = [ home_screen, team_builder_screen, play_screen, options_screen, gym_leaders_screen ] misleading variable name: next_move_text is actually an image. Avoid repetition at almost all costs; let the computer do the repetition. Repeated code makes it much harder to change/improve code. Whenever there are multiple lines after each other that are almost the same,
you found a place to improve next_move_text = pygame.image.load('images/next_move.png') next_pokemon_text = pygame.image.load('images/next_pokemon.png') exit_text = pygame.image.load('images/exit_text.png') continue_text = pygame.image.load('images/continue.png') game_over_text = pygame.image.load('images/game_over.png') text_bubbles = [next_move_text, next_pokemon_text, exit_text, continue_text, game_over_text] to image_filename_list = [ 'images/next_move.png', 'images/next_pokemon.png', 'images/exit_text.png', 'images/continue.png', 'images/game_over.png', ] text_bubbles = [pygame.image.load(filename) for filename in image_filename_list] player_field = { "stealth rock": False, "spikes": 0, "toxic spikes": 0, } opponent_field = { "stealth rock": False, "spikes": 0, "toxic spikes": 0, } to def get_empty_field(): return { "stealth rock": False, "spikes": 0, "toxic spikes": 0, } player_field = get_empty_field() opponent_field = get_empty_field() x_axis = 470 Lose, Win = True, True Game.can_switch = False for pokemon in Game.Pokemon_List: pokeball = [Pokemon.Pokeball, (x_axis, 334)] sprites.append(pokeball) if pokemon != Game.current_pokemon: Game.Pokemon_Party[x] = pokemon x+=1 if pokemon.current_health > 0: Game.can_switch = True if pokemon.current_health > 0: Lose = False else: icon = [Pokemon.icon_x, (x_axis, 334)] sprites.append(icon) x_axis += 24 x = 0 x_axis = 50 Game.opponent_can_switch = False for pokemon in Game.Opponent_Pokemon_List: pokeball = [Pokemon.Pokeball, (x_axis, 109)] sprites.append(pokeball) if pokemon != Game.opponent.pokemon: Game.Opponent_Party[x] = pokemon x+=1 if pokemon.current_health > 0: Game.opponent_can_switch = True if pokemon.current_health > 0: Win = False else: icon = [Pokemon.icon_x, (x_axis, 109)] sprites.append(icon) x_axis += 24 to def check_pokemon_party(x_axis,y_axis,pokemons,current_pokemon,pokemon_party): party_defeated = True can_switch = False x = 0 for pokemon in pokemons: pokeball = [Pokemon.Pokeball, (x_axis, y_axis)]
sprites.append(pokeball) if pokemon != current_pokemon: pokemon_party[x] = pokemon x+=1 if pokemon.current_health > 0: can_switch = True if pokemon.current_health > 0: party_defeated = False else: icon = [Pokemon.icon_x, (x_axis, y_axis)] sprites.append(icon) x_axis += 24 return party_defeated, can_switch Lose, Game.can_switch = check_pokemon_party( 470, 334, Game.Pokemon_List, Game.current_pokemon, Game.Pokemon_Party ) Win, Game.opponent_can_switch = check_pokemon_party( 50, 109, Game.Opponent_Pokemon_List, Game.opponent.pokemon, Game.Opponent_Party ) don't use an index to remember the current_item but use the item itself (unneeded complexity): Game.current_pokemon_number = new_number to Game.current_pokemon = Game.Pokemon_List[new_number] Game.help_text_number = 3 to Game.help_text = Game.text_bubbles[3] using a dictionary instead of a list can make things much more readable: image_filename_list = [ 'images/next_move.png', 'images/next_pokemon.png', 'images/exit_text.png', 'images/continue.png', 'images/game_over.png', ] text_bubbles = [pygame.image.load(filename) for filename in image_filename_list] ... Game.help_text = Game.text_bubbles[3] image_names = ['next_move', 'next_pokemon', 'exit_text', 'continue', 'game_over'] text_bubbles = { name: pygame.image.load('images/{}.png'.format(name)) for name in image_names } ... Game.help_text = Game.text_bubbles['exit_text'] use polymorphism instead of 'if' in some cases: if Game.current_screen_number == 2: Game.update_info(screen) elif Game.current_screen_number == 1: Game.show_stats(screen) elif Game.current_screen_number == 3: Game.options(screen) elif Game.current_screen_number == 4: Game.Gym_Leaders(screen) to class InfoScreen(): def draw(self, game): #draw screen class StatScreen(): def draw(self, game): #draw screen ...
Game.current_screen = InfoScreen() Game.current_screen.draw(Game) def do_moves(move): Game.pause = True opponent_move = Game.opponent_move() if not isinstance(move, int) and not isinstance(opponent_move, int): .... to class SwitchMove: ... @property def priority(self): return (0, 0) def do(self, game): # switch pokemon class PokemonMove: ... @property def priority(self): return (1, self.pokemon.speed) def do(self, game): # execute pokemon move ... def do_moves(move): moves = (move, Game.opponent_move()) moves = sorted(moves, key=lambda x: x.priority) for move in moves: move.do(game) converting comments into functions makes the block more explicit and less likely to become out of date: # Blitting the sprites for item in info_sprites: screen.blit(item[0], item[1]) for item in secondary_info: Functions.text_to_screen(screen, item[0], item[1], item[2], 18) for item in sprites: screen.blit(item[0], item[1]) def blitting_the_sprites(): for item in info_sprites: screen.blit(item[0], item[1]) for item in secondary_info: Functions.text_to_screen(screen, item[0], item[1], item[2], 18) for item in sprites: screen.blit(item[0], item[1]) blitting_the_sprites() extract repeating structure into subobjects (more consistently structured): Game.Pokemon_List Game.current_pokemon Game.Pokemon_Party Game.Opponent_Pokemon_List Game.opponent.pokemon Game.Opponent_Party class Player(): def __init__(self, pokemon_list, current_pokemon, pokemon_party): self.pokemon_list = pokemon_list self.current_pokemon = current_pokemon self.pokemon_party = pokemon_party Game.player = Player(...) Game.opponent = Player(...) Game.player.pokemon_list Game.player.current_pokemon Game.player.pokemon_party Game.opponent.pokemon_list Game.opponent.current_pokemon Game.opponent.pokemon_party remove commented out code: temporarily it is fine, but it should last only about a day and should not be checked in, because commented out code distracts from the other code
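The SwitchMove/PokemonMove suggestion above can be sketched as a self-contained toy. The class and move names here are illustrative, and the `(kind, -speed)` sort key is my own assumption that switches resolve before attacks and faster attackers act before slower ones:

```python
# Sketch of the priority-sorted move dispatch suggested above.
# All names and speed values are illustrative assumptions.

class SwitchMove:
    """Resolves before any attack; the second slot of the key is unused."""
    def __init__(self, name):
        self.name = name
        self.sort_key = (0, 0)

    def do(self, log):
        log.append("switch:" + self.name)


class AttackMove:
    """Resolves after switches; faster attackers act first."""
    def __init__(self, name, speed):
        self.name = name
        self.sort_key = (1, -speed)  # negate speed so higher speed sorts earlier

    def do(self, log):
        log.append("attack:" + self.name)


def do_moves(moves):
    """Execute all queued moves in priority order and return the event log."""
    log = []
    for move in sorted(moves, key=lambda m: m.sort_key):
        move.do(log)
    return log

print(do_moves([AttackMove("tackle", 50), SwitchMove("pikachu"), AttackMove("quick", 90)]))
```

A single sort like this replaces the combinatorial isinstance if/elif branches in the original do_moves.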
{ "domain": "codereview.stackexchange", "id": 23241, "tags": "python, pygame, battle-simulation, pokemon" }
Why does a cork float to the side of a glass?
Question: Why does a cork ball float to the side of a glass as illustrated in the following GIF? What is the physical phenomenon behind this observation and why does it happen? Answer: It's a combination of two effects: buoyancy and adhesion. Buoyancy lifts the cork up as much as possible, until it displaces its own weight of water (Archimedes' principle). For this reason, the cork will seek the highest point of the water level. Because of adhesion between the water molecules and the glass, the water level is highest at the edges (the water level is concave). As a result, the cork moves to the sides. If you'd fill up the glass to the brim, the water level becomes convex (due to surface tension), and the cork will stay in the middle. See also this site and this youtube video. Extra Info By coincidence, a very similar question came up yesterday on a Dutch science program, and I learned there's actually a name for this phenomenon: the Cheerios effect. The name is derived from the fact that small floating objects on a liquid, like bubbles on water or cheerios on milk, tend to clump together, or stick to the walls. The reason is the same as my answer above: there are two forces acting on a floating object: the buoyancy (which tries to push the object out of the liquid) and the surface tension (which tries to keep the object in the liquid). The result is a compromise, where the object is pushed partially out of the liquid, causing the surface to deform: it forms a small hill. Nearby floating objects are affected by this deformation: a floating object seeks the highest point in a liquid (the buoyancy causes it to rise and move upward along the surface), so it will move towards the 'hill' formed by the other object. Therefore, bubbles (or cheerios) will cluster together. A similar effect happens with objects that are denser than the liquid, but are not too heavy, so that they don't sink thanks to the surface tension. Paper clips are an example. 
These objects actually push down the liquid, creating a small 'valley' in the surface around them. But such object will also seek the lowest point on the surface, which means that nearby dense objects will again be attracted to each other. So paper clips also cluster together. What happens when an object less dense than the liquid (e.g. a cheerio) is next to an object denser than the liquid (e.g. a paper clip)? The first creates a hill and seeks the highest point, the second creates a valley and seeks the lowest point. So the result is that they will repel each other! There's a very nice paper that explains these effects in more detail: The 'Cheerios effect' (Vella & Mahadevan, 2004).
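Archimedes' principle from the first paragraph gives a one-line estimate: a floating object displaces its own weight of liquid, so its submerged volume fraction is just the ratio of densities. A minimal sketch, assuming a typical cork density of about 240 kg/m³ (surface-tension effects ignored):

```python
def submerged_fraction(rho_object, rho_fluid):
    """Fraction of a floating object's volume below the surface,
    from Archimedes' principle (ignores surface tension)."""
    if rho_object >= rho_fluid:
        return 1.0  # denser than the fluid: it sinks (or floats fully submerged)
    return rho_object / rho_fluid

# A typical cork (~240 kg/m^3) in water (~1000 kg/m^3):
print(submerged_fraction(240.0, 1000.0))  # → 0.24
```

So roughly a quarter of the cork sits below the water line, and buoyancy is free to push it up the curved meniscus toward the glass wall.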
{ "domain": "physics.stackexchange", "id": 8664, "tags": "everyday-life, home-experiment, surface-tension, buoyancy, fluid-statics" }
Does particle parity play any role in matter anti-matter annihilation?
Question: If a left handed electron and a right handed antimatter electron were to meet, would they still annihilate? In the same way, if a left handed electron and a left handed antimatter electron meet, will they still annihilate? Is matter antimatter annihilation a physical process that cares about particle parity? My question is for all fermions in the standard model including neutrinos. Answer: The dominant force in the Standard Model is electromagnetism, by far, a vectorlike interaction which preserves parity, with interaction vertex $$ eA_\mu (\overline {e_L } ~\gamma^\mu e_L + \overline {e_R } ~\gamma^\mu e_R), $$ so L-chiral electrons annihilate with R-chiral positrons, and R-chiral electrons annihilate with L-chiral positrons. The secondary interaction is the charged weak current, which does not involve R-chiral leptons, violating parity maximally, $$ gW^-_\mu \overline {e_L } ~\gamma^\mu \nu_L + \mathrm {h.c.}, $$ so a L neutrino annihilates against a R positron, and a L electron against a R antineutrino, only. The tertiary interaction is the P-violating, but not maximally, neutral current interaction, $$ \propto {-g\over 2\cos\theta_w}Z_\mu \overline {e_L } ~\gamma^\mu e_L +e\tan\theta_W \sin\theta_W Z_\mu \overline {e } ~\gamma^\mu e , $$ plus more obscure terms annihilating L-neutrinos with R-chiral antineutrinos. This term often confuses students, as less memorable; a mnemonic is taking the vanishing Weinberg angle limit. Electrons without subscript mean a sum of L and R-chiral components, which may now be involved in the weak annihilation into Zs, as in electromagnetism, albeit at a lower rate.
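The mnemonic in the last paragraph can be checked numerically: the parity-even piece of the neutral current carries the coefficient $e\tan\theta_W\sin\theta_W$, which vanishes as $\theta_W \to 0$, leaving only the L-chiral term. A quick sketch with the electric charge set to 1 for illustration:

```python
import math

def vector_coupling_coefficient(theta_w, e=1.0):
    """Coefficient e*tan(theta_W)*sin(theta_W) of the parity-even Z term
    quoted above (electric charge set to 1 for illustration)."""
    return e * math.tan(theta_w) * math.sin(theta_w)

# The coefficient falls off like theta_W**2 near zero:
for theta in (0.5, 0.05, 0.005):
    print(vector_coupling_coefficient(theta))
```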
{ "domain": "physics.stackexchange", "id": 94422, "tags": "standard-model, antimatter, neutrinos, parity, chirality" }
Large table updates using AJAX in Internet Explorer 11
Question: I have a website which only needs to support IE11. It is a single page application, which has about 200 table rows and each table row has 5 child rows. There is a pulsing function that updates the table as records come in. Table rows are skipped over if no update comes in. However, when receiving large updates (which should only occasionally happening), the application will hang as it slowly processes the javascript. I've tried to limit the JavaScript as much as possible, but still have a long running function. I am a backend developer by nature, and was wondering if anyone had any tips to help support large table Ajax updates for IE since IE so poorly handles JS. function writeTableLines(tempRows){ /* This Function takes care of updating the text and coloring of required dynamic fields. All other values are not dynamically written. */ for( i in tempRows){ //i is the computer name tempValues = tempRows[i]; // For Row selector = "[id='"+i+"']"; // Network Name network_selector = "[id='"+i+"_network']"; $(network_selector).text(tempValues['network']); if (tempValues['network_color']){ $(network_selector).addClass(tempValues['network_color']); $(selector).find('.name').addClass(tempValues['network_color']); }else{ $(network_selector).removeClass('warning'); $(selector).find('.name').removeClass('warning'); } // Boot Time boot_selector = "[id='"+i+"_boot']"; $(boot_selector).text(tempValues['boot']); if (tempValues['boot_color']){ $(boot_selector).addClass(tempValues['boot_color']); $(selector).find('.name').addClass(tempValues['boot_color']) }else{ $(boot_selector).removeClass('issue'); $(selector).find('.name').removeClass('issue'); } // Last Checked In Timestamp check_in_selector = "[id='"+i+"_checked_in']"; $(check_in_selector).text(tempValues['checked_in']); if (tempValues['service_unresponsive']){ $(check_in_selector).addClass('redline'); $(selector).find('.name').addClass('redline'); }else{ $(check_in_selector).removeClass('redline'); 
$(selector).find('.name').removeClass('redline'); } util_selector = $(selector).find('td.util').find('a'); $(util_selector).text(tempValues['util']) if (tempValues['util_class']){ $(util_selector).addClass(tempValues['util_class']); }else{ $(util_selector).removeClass('redline warning'); } workgroup_selector = $(selector).find('td.workgroup'); if (($.trim(tempValues['workgroup'])) != $.trim($(workgroup_selector).text())){ if ((tempValues['workgroup'] != selected) && (selected != 'All')){ $(workgroup_selector).addClass('warning'); }else{ $(workgroup_selector).removeClass('warning'); } } $(workgroup_selector).text(tempValues['workgroup']) toggle_links(i, tempRows[i]); $('#connectionGrid').trigger('updateAll', [false]); } } This function iterates over only received data. For each row item that was received, update the text of the cell, and add coloring as necessary. I'm thinking I might just be screwed since it's IE, but am open to all suggestions and ideas. Image of the rows - child rows only available when expanded, but still need updates. Answer: Searching a whole, large DOM for elements is a real performance killer. When possible, always try to search a fragment, or traverse the DOM relative to a known element. With a little rearrangement of the HTML, "network", "boot" and "check_in" elements can be found within the corresponding "selector" element, similar to the way "util" and "workgroup" elements are currently found. This alone should give a significant performance boost. HTML There's a missing </tr> somewhere. Move <tbody> and </tbody> inside the loop/if lines to give one tbody per computer block. (Hopefully tbodys will not mess up tablesorter). Move id="{{computer.name}}" into the <tbody> tag.
Give a class name to elements that need to be addressed: For example, change: <td class="info" colspan="1" id="{{computer.name}}_network">{{ computer.active_drive.name }}</td> to: <td class="info network" colspan="1" id="{{computer.name}}_network">{{ computer.active_drive.name }}</td> Then, if they are not required elsewhere, purge all IDs in the repeated block. Javascript The javascript can now be written to exploit the tbody wrappers. function writeTableLines(tempRows) { /* This Function takes care of updating the text and coloring of required dynamic fields. All other values are not dynamically written. */ var tempValues, i, $tbody, $name, $network, $boot, $check_in, $util, $workgroup, $connectionGrid = $('#connectionGrid'); // Avoid creating so many strings in the loop by defining class names and selectors here. // This is more a memory consideration than speed. var clss = { 'warning': 'warning', 'issue': 'issue', 'redline': 'redline', 'redlineWarning': 'redline warning', }; var selectors = { 'network': '.network', 'boot': '.boot', 'check_in': '.checked_in', 'name': '.name', 'util': 'td.util a', 'workgroup': 'td.workgroup' }; for(i in tempRows) { tempValues = tempRows[i]; // Find the container $tbody = $('#' + i); // This is the only element in each block that needs an ID.
if($tbody.length == 0) continue; // avoid unnecessary work if element is not found // Now find elements by class, within the container $network = $tbody.find(selectors.network); $boot = $tbody.find(selectors.boot); $check_in = $tbody.find(selectors.check_in); $name = $tbody.find(selectors.name); $util = $tbody.find(selectors.util); $workgroup = $tbody.find(selectors.workgroup); // In all the code below, address tempValues properties with dot.notation, not associative['notation'] $network.text(tempValues.network); if (tempValues.network_color) { $network.addClass(tempValues.network_color); $name.addClass(tempValues.network_color); } else { $network.removeClass(clss.warning); $name.removeClass(clss.warning); } $boot.text(tempValues.boot); if (tempValues.boot_color) { $boot.addClass(tempValues.boot_color); $name.addClass(tempValues.boot_color); } else { $boot.removeClass(clss.issue); $name.removeClass(clss.issue); } $check_in.text(tempValues.checked_in); if (tempValues.service_unresponsive) { $check_in.addClass(clss.redline); $name.addClass(clss.redline); } else { $check_in.removeClass(clss.redline); $name.removeClass(clss.redline); } $util.text(tempValues.util); if (tempValues.util_class) { $util.addClass(tempValues.util_class); } else { $util.removeClass(clss.redlineWarning); } if (($.trim(tempValues.workgroup)) != $.trim($workgroup.text())) { if (tempValues.workgroup != selected && selected != 'All') { $workgroup.addClass(clss.warning); } else { $workgroup.removeClass(clss.warning); } } $workgroup.text(tempValues.workgroup); toggle_links(i, tempValues); $connectionGrid.trigger('updateAll', [false]); } } Some of the code looks to be a little dodgy. For example, .addClass(tempValues.network_color) ... .removeClass('warning') means that any added class that is not warning will never be removed (unless by some other code). Contrast with .addClass('redline') ... .removeClass('redline'), which is guaranteed to add/remove the same class.
Aside: With the tbodys in place, you could consider styling them with e.g. a border that will expand/contract as the details are shown/hidden. If performance is still poor, you'll need to investigate deeper to discover what's taking time. Though I'm not an expert driver, Chrome debug tools are very good for diagnosis. Edit Back to a single <tbody> but with class="info network", class="info boot", class="info check_in", in place, try selecting as follows: // Find the parent row var $tr = $('#' + i); // A parent row if($tr.length == 0) continue; // avoid unnecessary work if element is not found var $childRows = $tr.nextUntil(".parent"); // the parent's child rows $network = $childRows.find(selectors.network); $boot = $childRows.find(selectors.boot); $check_in = $childRows.find(selectors.check_in); $name = $tr.find(selectors.name); $util = $tr.find(selectors.util); $workgroup = $tr.find(selectors.workgroup); This will be slightly less efficient than finding elements within tbody containers but still better than finding by ID - and Tablesorter will still work.
{ "domain": "codereview.stackexchange", "id": 18303, "tags": "javascript, jquery, ajax, internet-explorer" }
Bernoulli's principle on a curve ball
Question: I've seen a few excellent answers here on the Magnus force, which explains why balls with a spin will curve. However, my intuition is still telling me that the Bernoulli's principle would push it the opposite way and I need help understanding why my reasoning is flawed. Imagine that you kick a soccer ball on the left side so that it's spinning clockwise (viewing the ball from above) and the ball will curve to the right. Since the left side of the ball is spinning against the air, wouldn't this mean faster relative speeds and thus a lower pressure than the right side? And wouldn't this lower pressure on the left side cause it to curve left instead of right? Answer: What produces lift is circulation, which causes the airflow to be deflected in one direction, causing an equal reaction in the other direction. If you want to think in terms of Bernoulli on your soccer ball, the air on the left side is being slowed, while that on the right side is being accelerated by the spin of the ball.
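The deflection direction described here can be checked against the usual form of the Magnus force, $\vec F \propto \vec\omega \times \vec v$. For the question's setup (ball moving forward, spinning clockwise viewed from above), a small sketch with my own coordinate convention of x forward, y left, z up:

```python
def cross(a, b):
    """3D cross product, written out so no external libraries are needed."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Clockwise spin viewed from above = angular velocity along -z (right-hand rule).
omega = (0.0, 0.0, -1.0)
# Ball kicked forward along +x; +y points to the kicker's left.
velocity = (1.0, 0.0, 0.0)

# Magnus force direction ~ omega x v: it comes out along -y,
# i.e. to the right, matching the curve described in the question.
print(cross(omega, velocity))  # → (0.0, -1.0, 0.0)
```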
{ "domain": "physics.stackexchange", "id": 17991, "tags": "newtonian-mechanics, fluid-dynamics, projectile, drag, bernoulli-equation" }
Text mining match in Python
Question: I have one column called A and one column called B where column B is a more detailed description of column A, for example:

A                B                      match
want red apple   I love RED apple       100%
organic orange   I love citric fruit    0%
BANANA lover     I have 2 banana        50%

Basically the algorithm is: if we have all values of A in B then give a 100% match, if similar then give 80%, and if no value of A is found in B then 0%. Answer: As far as I understand from your question, you are trying to compare sentences on word level, but it seems like you are interested in finding the number of words in sentence A that are contained in sentence B (not the intersection itself). So you could use something very simple as a first approach. Try:

def similar(s1, s2):
    l1 = s1.split()
    l2 = s2.split()
    l1 = [s.lower() for s in l1]
    l2 = [s.lower() for s in l2]
    n = len(set(l1))
    return len(set(l1) & set(l2))/n

df.assign(result = df.apply(lambda x: similar(x["A"], x["B"]), axis = 1))

Result:
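Running the word-overlap logic on the question's example rows (restated self-contained here, so pandas is not required) gives fractional scores rather than the exact 100/80/0 buckets the question asks for, so a post-hoc binning step would still be needed:

```python
def word_overlap(s1, s2):
    """Fraction of distinct lower-cased words of s1 found in s2 --
    the same logic as the answer's function."""
    words1 = {w.lower() for w in s1.split()}
    words2 = {w.lower() for w in s2.split()}
    return len(words1 & words2) / len(words1)

rows = [
    ("want red apple", "I love RED apple"),
    ("organic orange", "I love citric fruit"),
    ("BANANA lover", "I have 2 banana"),
]
for a, b in rows:
    print(round(word_overlap(a, b), 2))  # prints 0.67, then 0.0, then 0.5
```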
{ "domain": "datascience.stackexchange", "id": 9994, "tags": "python, text-mining" }
Unitary Fermi Gas vs. Fermi Liquid
Question: The unitary limit of a Fermi gas is described here as when the scattering length is comparable or exceeds the interparticle distance. For $ak_F<0$, this is the BCS limit of a weakly interacting Fermi gas. When $0<ak_F<1$, the interaction is stronger and we are in the BEC limit. My question is how well can we describe the unitary limit of the Fermi gas with a Fermi liquid description? It is my understanding that Fermi liquid theory is just the phenomenological approach to understanding the physical model of a unitary Fermi gas, but if we remain far from the BCS and BEC limits and remain firmly in the world of unitarity, when will this Fermi liquid description fail? I have seen studies such as this that use a Fermi liquid theory to describe the unitary Fermi gas, but I have yet to see a reference which tells me when, exactly, this Fermi liquid description fails. Answer: 1) I would not call the Landau Fermi Liquid theory "just phenomenological". It is a rigorous description of a cold Fermi liquid that is continuously connected to a free Fermi gas. In particular, the excitations have the same quantum numbers (spin, charge, etc) as the excitations of a free Fermi gas. Of course, the theory can be used for phenomenology, and the parameters are often fitted to experiments. 2) The unitary Fermi gas is not a Fermi liquid, because it is a high $T_c$ superfluid, the fermionic excitations acquire large gaps, and the only low energy mode is a Goldstone phonon. 3) The weakly attractive Fermi gas (the BCS limit) is also a superfluid, but in this case the gap is exponentially small. This means that there is a regime $T_c\ll T\ll T_F$ in which the Landau Fermi-liquid description is valid. Indeed, this theory can be used to compute $T_c$. 4) This does not mean that one cannot try to use the Landau theory as an approximate phenomenological theory to understand thermodynamics and quasi-particle properties at $T\sim T_c \sim T_F$. 
This has indeed been done, see, for example, http://www.nature.com/nature/journal/v463/n7284/abs/nature08814.html .
{ "domain": "physics.stackexchange", "id": 55613, "tags": "condensed-matter, many-body, fermi-liquids" }
Work along a Circular Arc Path
Question: Consider a mass with weight W and force F acting in the horizontal direction going through an arc of a frictionless circular path of radius r and angle $\theta$. Does that mean that the work done by the horizontal force F, assuming a datum on the beginning of the arc curve, is: $$\int_0^{\frac{2\pi r\theta}{360}} F\cos\theta\,ds$$ Here is a drawing I made about it. I have been researching work done by variable forces for a while now, but I can't seem to find a definitive answer to this query. Is my understanding correct? And is there a better way to express this? Thank you. Answer: If your force is applied to a rigid body on a fixed axle, then the weight is supported by the axle, and your integral for the work done by a horizontal force is correct, with ds = r dθ (angles in radians), and a limit not expressed in terms of the variable. If not on a fixed axle, things get more complicated.
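A quick numerical check of the answer's point (my own sketch, not from the original post): with ds = r dφ and the integration variable renamed to φ so it stays distinct from the upper limit θ, the integral has the closed form W = F r sin θ.

```python
import math

def work_horizontal_force(F, r, theta, steps=100_000):
    # Midpoint-rule approximation of W = integral of F*cos(phi)*r dphi
    # from 0 to theta (angles in radians)
    h = theta / steps
    return sum(F * math.cos((i + 0.5) * h) * r * h for i in range(steps))

F, r, theta = 10.0, 2.0, math.pi / 3
print(work_horizontal_force(F, r, theta))  # close to F*r*sin(theta) ≈ 17.32
```

The agreement with F r sin θ confirms that the awkward degree-based arc-length limit can be replaced by a plain angular limit in radians.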
{ "domain": "physics.stackexchange", "id": 79255, "tags": "homework-and-exercises, newtonian-mechanics" }
Notification system using PHP+jQuery+Ajax
Question: I have this code to display a counter on the side of <i class="fas fa-bell mr-3"></i>. I want to know if this code is good in terms of security and performance. I just started using jQuery and Ajax; I had heard people saying that someone could disable the JavaScript and do bad things. What do you guys think about my code?

<div>
    <ul class="navbar-nav textoPerfilDesk dropMenuHoverColor">
        <li class="nav-item dropdown pr-2 dropleft navbarItem ">
            <a class="nav-link dropdown-toggle-fk" href="#" id="navbarDropdownMenuLink" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
                <i class="fas fa-bell mr-3"></i>
            </a>
            <div class="dropdown-menu dropdown-menu-fk py-3" aria-labelledby="navbarDropdownMenuLink">
                <a class="dropdown-item dropMNitemNT" href="um-link">
                    <span class="d-flex">
                        <img class="imgNT" src="img/1.jpg">
                        <span class="pl-2 pt-1"> titutlo </span>
                    </span>
                </a>
            </div>
        </li>
    </ul>
    <span class="text-white divCountNT" id="datacount"></span>
</div>

script:

<script>
$(document).ready(function(){
    var intervalo, carregaDiv;
    (carregaDiv = function(){
        $("#datacount").load('select.php', function(){
            intervalo = setTimeout(carregaDiv, 1000);
        });
    })();
    $('.fa-bell').on('click', function (){
        clearTimeout(intervalo);
        $.ajax({
            url: "update.php",
            complete: function(){
                setTimeout(carregaDiv, 1000);
            }
        });
    });
});
</script>

select.php

<?php
require_once 'db.php';
if(!isset($_SESSION)) session_start();
if(isset($_SESSION['userid'])) {
    $userid = $_SESSION['userid'];
}
$status = 'unread';
$sql = $conn->prepare("SELECT * FROM noti WHERE status = :status AND userid = :userid");
$sql->bindParam(':userid', $userid, PDO::PARAM_INT);
$sql->bindParam(':status', $status, PDO::PARAM_STR);
$sql->execute();
$countNT = $sql->rowCount();
echo $countNT;
$conn = null;
?>

update.php

<?php
require_once 'db.php';
if(!isset($_SESSION)) session_start();
if(isset($_SESSION['userid'])) {
    $userid = $_SESSION['userid'];
}
$status = 'read';
$sql = $conn->prepare("UPDATE noti SET status = 
:status WHERE userid = :userid");
$sql->bindParam(':userid', $userid, PDO::PARAM_INT);
$sql->bindParam(':status', $status, PDO::PARAM_STR);
$sql->execute();
$countNT = $sql->rowCount();
echo $countNT;
$conn = null;
?>

Answer: JAVASCRIPT SECURITY

JavaScript is running on the client, and is therefore under full control of the user. It can be disabled, inspected, manipulated, and everything else that can be done in a programming language. You knew this, didn't you? JavaScript is, almost by definition, insecure. Things that have to do with the security of your site, like validating passwords, should not be done in JavaScript. And in your code you don't do anything security related in JavaScript. All you do is set a timer running and call two PHP scripts. No risks there.

PHP SECURITY

The PHP scripts are another matter. Here is where things really happen, and you should implement your security measures here. Even though these scripts implement AJAX calls, they can be executed by anybody. You seem to have users that can log in. Their user ID is stored in $_SESSION['userid']. I notice that you don't do anything, in your PHP scripts, when this ID is absent. You still execute the database queries. That is a bad idea. When the two current PHP scripts are called without a user ID, they will probably just perform database queries that are invalid. No real harm done. But you shouldn't rely on pure luck. Good security should leave no doubts about what will happen. I therefore propose a slight change to your code. Instead of writing this:

if (isset($_SESSION['userid'])) {
    $userid = $_SESSION['userid'];
}

you could write this:

if (!isset($_SESSION['userid'])) die('Not logged in.');
$userid = $_SESSION['userid'];

This means that the PHP scripts will halt execution when there's no user, as they should.

PERFORMANCE

Your code is evidently not very efficient. Polling the database every second does not scale very well. There are other ways to do this. For instance with web sockets: https://developer.mozilla.org/en-US/docs/Web/API/Websockets_API (you would use a combination of the tools mentioned there). Updates will be quicker, without polling. For now polling will probably be fine for you; after all, you're still learning jQuery and that is a challenge in itself. It takes time to understand how everything hangs together.
{ "domain": "codereview.stackexchange", "id": 32831, "tags": "php, jquery, security, ajax" }
Shuffling an array of cards
Question: I have a deck of flashcards that I want to shuffle. Here's the method I'm using to shuffle them:

func (deck *Deck) Shuffle() {
    rand.Seed(time.Now().UnixNano())
    randomIndexes := rand.Perm(len(deck.Cards))
    shuffledCards := make([]Card, len(deck.Cards))
    for i := 0; i < len(deck.Cards); i++ {
        shuffledCards[i] = deck.Cards[randomIndexes[i]]
    }
    deck.Cards = shuffledCards
}

Is this an efficient way of doing it, or is there a better way? Answer: I would recommend something closer to a Knuth shuffle, where you swap items in place in the array, rather than allocating a new one. This is untested and is based on the Fisher-Yates shuffle:

func (deck *Deck) Shuffle() {
    rand.Seed(time.Now().UnixNano())
    for i := len(deck.Cards)-1; i > 0; i-- {
        j := rand.Intn(i+1) // i+1 rather than i; the upper bound is not inclusive
        deck.Cards[i], deck.Cards[j] = deck.Cards[j], deck.Cards[i]
    }
}

Unrelated to shuffling, you should also check out bucketed flashcards. Consider the Leitner system.
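For comparison, the same in-place Fisher-Yates idea sketched in Python (an illustration, not part of the original answer; in real Python code, random.shuffle already implements exactly this):

```python
import random

def fisher_yates(cards):
    """Shuffle the list in place, walking from the back and swapping each
    position with a uniformly chosen index at or below it."""
    for i in range(len(cards) - 1, 0, -1):
        j = random.randint(0, i)  # inclusive bound, like rand.Intn(i+1) in Go
        cards[i], cards[j] = cards[j], cards[i]

deck = list(range(52))
fisher_yates(deck)
print(sorted(deck) == list(range(52)))  # True: still a permutation, nothing lost
```

The in-place swaps avoid the extra slice allocation of the rand.Perm version while producing the same uniform distribution over permutations.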
{ "domain": "codereview.stackexchange", "id": 5493, "tags": "optimization, go, shuffle" }
$r$-representation of Operator
Question: I am watching this video https://www.youtube.com/watch?v=sYgX5pdncG8 at 14:30, it has $\langle r|H|r'\rangle = H(r) \delta(r-r') $ Can you help me to understand why it is so? I thought it should be $H(r,r')$. I believe I am missing some fundamental knowledge in quantum mechanics. Answer: In general you are correct, the notation $$H(r, r') = \langle r | \hat H |r' \rangle$$ is good. This indicates it is the matrix element of the Hamiltonian in the position representation and as such depends on both coordinates. In the case in question (and many physical cases) the Hamiltonian is diagonal in position space (potentials are local) and this is indicated by the delta function: $H(r, r') = H(r) \delta(r - r') = H(r') \delta(r - r') $. The first factor contains all the "interesting" physics about the form of the Hamiltonian and the delta function tells us that only the diagonal elements are non zero.
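One way to see what the delta function buys you (a standard manipulation, not from the video): insert a complete set of position states and let the Hamiltonian act on a state $|\psi\rangle$,

$$\langle r|\hat H|\psi\rangle = \int dr'\,\langle r|\hat H|r'\rangle\langle r'|\psi\rangle = \int dr'\,H(r)\,\delta(r-r')\,\psi(r') = H(r)\,\psi(r),$$

so a Hamiltonian that is diagonal in position space acts locally: the value of $\hat H\psi$ at $r$ depends only on $\psi$ near $r$. (Strictly, $H(r)$ may still contain derivatives, e.g. the kinetic term, which correspond to derivatives of the delta function, but the locality conclusion is the same.)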
{ "domain": "physics.stackexchange", "id": 63638, "tags": "quantum-mechanics, dirac-delta-distributions, matrix-elements" }
Finding the 20th percentile pixel of an image using a histogram instead of std::sort
Question: I have a 2-dimensional matrix (an image) in which I need to find the 20th percentile value. My first attempt was to sort the values and then index using std::floor(0.2*(srcSor.size())/100). My code was originally straightforward like this:

#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main(int argc, char* argv[])
{
    //Read the image .jpeg
    cv::Mat src = cv::imread(argv[1], CV_8UC1);
    std::vector<double> srcSor = src.reshape(0, 1);
    std::sort(srcSor.begin(), srcSor.end(), std::greater<double>());
    float A = srcSor[std::floor(0.2*(srcSor.size())/100)];
    //Then later I use A value..
    .
    .
    return 0;
}

I found that std::sort took a long time to sort a vector of 1.3m unsigned values. So now I'm using a histogram to do the same thing; this takes less time to find the value at that index. The idea comes from the fact that the values of the matrix are unsigned int in the range 0-255:

#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main(int argc, char* argv[])
{
    //Read the image .jpeg
    cv::Mat src = cv::imread(argv[1], CV_8UC1);
    std::vector<unsigned> histo(256, 0.0);
    for(unsigned j = 0; j < src.rows; j++)
        for(unsigned i = 0; i < src.cols; i++)
            histo[(int)src.at<uchar>(j,i)]++;
    std::vector<unsigned> sumHH(256, 0.0);
    sumHH[255] = histo[255];
    for(unsigned i = 254; i >= 1; i--)
        sumHH[i] = sumHH[i+1] + histo[i];
    int A = 0;
    int indx = std::floor(0.2*(src.rows*src.cols)/100);
    for(unsigned i = 1; i < 255; i++)
        if(indx < sumHH[255-i]) {
            A = 255-i;
            break;
        }
    //Then later I use A value..
    .
    .
    return 0;
}

But I think my code is not really readable; can anyone suggest a better or more optimized way to write it?

Answer: Well, index = std::floor(0.2*(srcSor.size())/100) is equivalent to the much simpler index = srcSor.size() / 500, ignoring some inaccuracy in representing 0.2. Lose the fixed-size std::vectors. You don't depend on any of the benefits of using dynamic memory, so a simple array is sufficient and far more efficient. Next, you really don't need the cumulative counts in sumHH at all. Just test whether you have reached the target immediately. Don't cast unless you have to. Casting means overriding the type-system, and is thus error-prone. Don't use obscure abbreviations. Resulting code:

unsigned histogram[256] = {};
for (auto j = src.rows; j--;)
    for (auto i = src.cols; i--;)
        ++histogram[src.at<uchar>(j,i)];
int A = 255;
for (auto index = src.rows * src.cols / 500; index > histogram[A]; --A)
    index -= histogram[A];
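The same histogram idea, sketched in pure Python (my own illustration, not part of the review; note that the question's std::floor(0.2*n/100) selects the top 0.2 % of pixels, not the 20th percentile that the title mentions):

```python
def value_at_rank(values, rank):
    """Value at position `rank` of the descending-sorted sequence of 8-bit
    values, found by scanning a 256-bin histogram instead of sorting (O(n))."""
    histogram = [0] * 256
    for x in values:
        histogram[x] += 1
    seen = 0
    for v in range(255, -1, -1):   # walk from the brightest bin down
        seen += histogram[v]
        if seen > rank:
            return v
    raise ValueError("rank out of range")

pixels = [5, 5, 200, 7, 7, 7, 255]
print(value_at_rank(pixels, 2))  # sorted descending is [255, 200, 7, ...] → 7
```

For the question's use case, rank would be len(values) * 2 // 1000 (the top 0.2 %).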
{ "domain": "codereview.stackexchange", "id": 26555, "tags": "c++, performance, image, statistics, opencv" }
CSS preview generator (in python)
Question: Given a CSS file the script generates an HTML preview of all the CSS definitions by recursively drilling down the chain of nested classes, ids and tag names. Rules with position: absolute|fixed|sticky are wrapped into iframe elements in order to avoid overlap. The output goes to stdout since I wanted to avoid keeping storing too much data in memory. Content that goes inside iframes is printed first to a temporary buffer rather than stdout because that text needs to be html-escaped. Would be interested to get feedback on: performance and efficiency (CPU, memory): is there anything inherently inefficient/slow with the chosen approach that can be optimised? code organisation obvious (or subtle) code smells naming and legibility possible criteria to check correctness or well-formedness of the generated output #!/usr/bin/env python3 # -*- coding: utf-8 -*- # # Generates an HTML file for previewing all styles defined in a CSS file # # dependencies: cssutils # USAGE: # css_preview_generator.py style.css > preview.html import html import io import sys import cssutils image_placeholder = "data:image/svg+xml;charset=UTF-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='300' height='150' viewBox='0 0 300 150'%3E%3Crect fill='yellow' width='300' height='150'/%3E%3Ctext fill='rgba(0,0,0,0.5)' x='50%25' y='50%25' text-anchor='middle'%3E300×150%3C/text%3E%3C/svg%3E" def render(s, out): if out and not out.closed: return print(s, end='', file=out) else: print(s, flush=True) def down_the_rabbit_hole(chunks, full_selector, out=None): if len(chunks): chunk = chunks.pop(0) render_open_tag(chunk, out) down_the_rabbit_hole(chunks, full_selector, out) render_close_tag(chunk, out) else: render(full_selector, out) prefix_map = { '.': 'class', '#': 'id' } def extract_class_id(defn, extracted_attrs=''): try: for prefix in prefix_map.keys(): if prefix in defn: items = defn.split(prefix) value = ' '.join(items[1:]) # return a tuple of (tagname, 'class="bla blu"') or (tagname, 
'id="abc"') tag = items[0] if any(suffix in tag for suffix in prefix_map.keys()): return extract_class_id(tag, f'{prefix_map[prefix]}="{value}"') else: return items[0], f'{extracted_attrs} {prefix_map[prefix]}="{value}"' except Exception as e: print(e, file=sys.stderr) return defn, '' def render_open_tag(definition, out): if definition.startswith(('.', '#')): _, class_or_id = extract_class_id(definition) render(f'<div {class_or_id}>', out) else: if definition == 'a' or definition.startswith(('a.', 'a#')): tag, class_or_id = extract_class_id(definition) render(f'''<a {class_or_id} href="#">''', out) elif definition == 'img' or definition.startswith(('img.','img#')): render(f'<img src="{image_placeholder}" alt="[image]">', out) else: tag, class_or_id = extract_class_id(definition) if tag.lower() == 'td': render(f'<table><thead><tr><th>[th]</th></thead><tbody><tr><td {class_or_id}>[td]<br/>', out) else: render(f'<{tag} {class_or_id}>', out) def render_close_tag(definition, out): if definition.startswith(('.', '#')): render('</div>', out) else: if definition == 'a' or definition.startswith(('a.', 'a#')): render(f'⚓️ {definition}</a>', out) else: tag, _ = extract_class_id(definition) if tag.lower() == 'td': render('</td></tr></tbody></table>', out) else: render(f'</{tag}>', out) if __name__ == '__main__': if len(sys.argv) == 1 or sys.argv[1] in ('-h', '--help'): print(f'Usage: {sys.argv[0]} style.css > preview.html') sys.exit(-1) already_seen = [] css_file = sys.argv[1] sheet = cssutils.parseFile(css_file) print(f'''<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>CSS preview: {css_file}</title> <link href="{css_file}" rel="stylesheet" type="text/css" /> </head> <body> ''') selectors_requiring_iframe = [] # build a list of absolute & fixed rules for rule in sheet: if isinstance(rule, cssutils.css.CSSStyleRule): position = getattr(rule.style, 'position', None) if position in ('fixed', 'absolute', 'sticky'): for single_selector in rule.selectorList: # 
type: cssutils.css.Selector selectors_requiring_iframe.append(single_selector.selectorText) # deduplicate list selectors_requiring_iframe = list(dict.fromkeys(selectors_requiring_iframe)) for rule in sheet: if isinstance(rule, cssutils.css.CSSStyleRule): selectors: cssutils.css.SelectorList = getattr(rule, 'selectorList', []) full_selectors_text = rule.selectorText print(f'CSS Rule: {full_selectors_text}', file=sys.stderr) for single_selector in selectors: # type: cssutils.css.Selector current_selector_text = single_selector.selectorText if not single_selector or current_selector_text.startswith(('html', 'body')): continue # 1. convert '>' to space # 2. '~' '*' '+' and '[]' (not supported, ignoring them, convert to space, breaks semantics FIXME) for c in '>*~+': if c in current_selector_text: current_selector_text = current_selector_text.replace(c, ' ') for c in ':[': if c in current_selector_text: current_selector_text = current_selector_text.split(c)[0] if current_selector_text in already_seen: continue else: already_seen.append(current_selector_text) if ' ' in current_selector_text: current_selector_text = current_selector_text.replace(' ', ' ') position = getattr(rule.style, 'position', None) # if current selector is a child of an absolute/fixed rule then also wrap it in an iframe matching_abs_parents = [sel for sel in selectors_requiring_iframe if sel in current_selector_text] need_iframe = position in ('fixed', 'absolute', 'sticky') or len(matching_abs_parents) need_table = False out = None if need_iframe: print( f'''<iframe style="border:1px dotted #acad9e;" width="400" height="300" srcdoc="{html.escape(f'<html><head><link href="{css_file}" rel="stylesheet" type="text/css"/></head><body style="background:#f6f4ee">')}''', end='') out = io.StringIO() print(f'\t{current_selector_text}', file=sys.stderr) down_the_rabbit_hole(current_selector_text.split(), full_selectors_text, out) if need_iframe: print(html.escape(out.getvalue()), end='') out.close() 
print('"></iframe>') print(''' </body> </html>''') The latest version is here. Answer: Since you only use already_seen to check if current_selector_text has been seen, consider changing the type of already_seen from list to set (slight performance improvement). The function name down_the_rabbit_hole is unclear as to what it does. This will confuse people who have not heard of this phrase before. Since you are already checking for arguments with if len(sys.argv) == 1 or sys.argv[1] in ('-h', '--help'):, consider handling the case when more than 1 argument is given (rather than just ignoring the rest of them). You assign False to need_table, but need_table is unused. The line # -*- coding: utf-8 -*- is unnecessary in Python 3 under most cases. You only have 1 type annotation on line 125 (selectors: cssutils.css.SelectorList = getattr(rule, 'selectorList', [])). Is there a reason why none of the other variables have an annotation (or why just this line has one)?
{ "domain": "codereview.stackexchange", "id": 38563, "tags": "python, html, css, generator" }
Light in a Medium Question
Question: For the medium let's use water. When light passes through water its wavelength decreases. Its frequency stays constant. It changes its direction upon entering the water unless it enters the water orthogonal to the surface. It exits the water at the same angle it enters. The current explanation for this is the absorption and reemission of photons by the atoms in the water. I can understand how this explanation works for the frequency remaining the same, but how does it explain the decrease in wavelength? As far as the change in direction goes, that's too much for one question. I'll just accept the marching band explanation we are all familiar with for now. It's the wavelength question that I would like answered. Answer: One of the Maxwell equations in the vacuum has the magnetic and electric constants: $$\nabla \times \mathbf B = \mu_0 \epsilon_0 \frac{\partial \mathbf E}{\partial t}$$ so that in the wave equation, derived from the above and the other three: $$\frac{\partial^2 \mathbf E}{\partial t^2} = \frac{1}{\mu_0 \epsilon_0}\frac{\partial^2 \mathbf E}{\partial x^2}$$ the wave speed is determined by them. In water, the constants are different: $\mu_w$ and $\epsilon_w$, and the speed is smaller. For a plane wave of the form $\mathbf E = E(k_w(x - vt))$, where $v = (\mu_w \epsilon_w)^{-1/2}$, if the frequency $\omega = k_wv$ is the same as in the vacuum, then $k_wv = kc$, so $$k_w = \frac{c}{v}k = \left(\frac {\mu_w \epsilon_w}{\mu_0 \epsilon_0}\right)^{1/2}k$$
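Numerically (a sketch; the value n ≈ 1.33 for water is the usual textbook approximation, not from the answer): since $c/v = n$, the result $k_w = nk$ means the wavelength shrinks by the refractive index while the frequency stays fixed.

```python
def wavelength_in_medium(lambda_vacuum_nm, n):
    # frequency f is unchanged; the speed becomes c/n, so
    # lambda_medium = (c/n)/f = lambda_vacuum / n
    return lambda_vacuum_nm / n

print(wavelength_in_medium(532.0, 1.33))  # green laser light: about 400 nm in water
```
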
{ "domain": "physics.stackexchange", "id": 68033, "tags": "electromagnetism, optics, waves, visible-light" }
White light should be black?
Question: If I understand the polarisation of electromagnetic waves correctly, two waves of the same polarisation but opposite phase can cancel each other. So the question is: since white light has all polarisations, with waves pointing equally in every direction around 360°, and all phases present, shouldn't everything cancel out and appear black? If the answer is that the electromagnetic waves are not exactly synchronized in opposition and so "stay alive", is it possible to estimate the loss from the ones that are synchronized (there must be some), or is it negligible? Answer: this is an animation of polarized light: Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields. This 3D animation shows a plane linearly polarized wave propagating from left to right. The electric and magnetic fields in such a wave are in phase with each other, reaching minima and maxima together. In order to have a situation where two such waves would cancel and produce your "black", the waves should have the exact same frequency and be in exactly opposite phase. This experiment can only be done with monochromatic sources such as lasers; it has been done and can be viewed here. The energy of the wave goes back to the source of the wave. White light depends on our perception of color and is composed of a large number of frequencies; see color perception here. To get a situation as in the experiment linked above, two white beams against each other are needed. The fact that white light is composed of innumerable frequencies means that the probability that two same-frequency, same-polarization parts of the white beams overlap is very small, so blackness (total extinction of white light) cannot result.
{ "domain": "physics.stackexchange", "id": 84657, "tags": "visible-light" }
Building an action server and action client located in two separate packages
Question: Hi, I have implemented an action server and an action client for my application. They are located in two different packages and I sometimes encounter an issue when running catkin_make: if I delete my /build and /devel folders the first catkin_make always fails (always because the action messages are not found during compilation) but if I run catkin_make again there are no issues anymore. I suspect that the first time all the action messages are created but not correctly linked with my libraries so they are not found, and the second time it's working since they have already been created. So I'm aware this issue comes from my CMakeList.txt and package.xml configurations. I've followed the different actionlib tutorials where this configuration is detailed, but it's always when the server and client are located in the same package. I don't want you to correct my configuration, I want to correctly understand how it should be done, so my questions are: Do I need to put the /action folder containing the action message in both packages? (I've currently created only one folder inside the package containing the action server) If not, what is the proper way to link the auto generated action message header files to the other package? What is the minimum required configuration for my CMakeList.txt and package.xml for my package containing the action server? Same question for the other package containing the action client? Originally posted by Delb on ROS Answers with karma: 3907 on 2018-04-17 Post score: 0 Answer: The best way is to create a third package that only contains the message definition. Otherwise you will need to build the server-package whenever you want to use the client-package and this can get messy if it has some external dependencies. This is especially true if you want to run an action client on an Arduino or other smaller system on which it is impossible or inconvenient to build the full project.
In most cases, this third package is called [base_name]_msgs to show that it only contains message (or action) definitions. Both other packages will only depend on this package and not on each other. Originally posted by NEngelhard with karma: 3519 on 2018-04-17 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Delb on 2018-04-18: Thank you, now it's working. I did it like the move_base_msgs package and now everything compiles smoothly each time.
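A minimal sketch of what the message-only package's build files could look like (package and action names here are hypothetical, ROS1/catkin conventions; not from the original answer). The package.xml additionally needs a build_depend on actionlib_msgs and message_generation, and an exec_depend on actionlib_msgs and message_runtime.

```cmake
# foo_msgs/CMakeLists.txt (message-only package)
cmake_minimum_required(VERSION 2.8.3)
project(foo_msgs)
find_package(catkin REQUIRED COMPONENTS actionlib_msgs message_generation)
add_action_files(DIRECTORY action FILES Foo.action)
generate_messages(DEPENDENCIES actionlib_msgs)
catkin_package(CATKIN_DEPENDS actionlib_msgs message_runtime)
```

In the server and client packages, depending on foo_msgs and adding the exported-targets dependency is what prevents the "first catkin_make fails" race the question describes, by making the node targets wait for message generation:

```cmake
find_package(catkin REQUIRED COMPONENTS roscpp actionlib foo_msgs)
add_executable(foo_server src/foo_server.cpp)
add_dependencies(foo_server ${catkin_EXPORTED_TARGETS})  # wait for generated headers
target_link_libraries(foo_server ${catkin_LIBRARIES})
```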
{ "domain": "robotics.stackexchange", "id": 30669, "tags": "catkin-make, ros-kinetic, actionlib, cmake" }
Structure of benzene dimer in a vibrationally excited state
Question: It is known that the benzene dimer has a few stable structures: T-shaped, parallel-displaced, and sandwich. Now suppose we excite the dimer to a higher vibrational level (close to the dissociation limit, but such that the dimer is still bound). What happens to the structures then? Can we still have different structures or will it be some kind of mixture? Answer: The binding energy of the benzene dimer is really quite small. Ref. [1] reports the parallel-displaced and T-shaped dimers to have a binding energy of about 2.6 kcal/mol. This is about 900 cm$^{-1}$. So, it is quite likely that if any vibrational mode with more energy than this is excited, then upon redistribution of the energy the dimer will fall apart. On the other hand, excitation of vibrations less than the binding energy will likely lead to a very rapid interconversion between the T-shaped and parallel-displaced (or maybe sandwich) isomers. I have read that it is believed this happens basically no matter what. Also, the sandwich isomer has a binding energy closer to 1.5 kcal/mol, so it is potentially possible to excite some low-frequency mode which would cause the sandwich isomer to dissociate but not the T-shaped or parallel-displaced isomers. Note that because this is such a weakly bound dimer, experimental measurement can only be made at very low temperatures. Ref. [2] describes one measurement of the conformers using supersonic jet expansion to form the dimers. This means temperatures close to milliKelvin. References: [1]: Janowski, T., & Pulay, P. (2007). High accuracy benchmark calculations on the benzene dimer potential energy surface. Chemical Physics Letters, 447(1-3), 27-32. [2]: Scherzer, W., Krätzschmar, O., Selzle, H. L., & Schlag, E. W. (1992). Structural isomers of the benzene dimer from mass selective hole-burning spectroscopy. Zeitschrift für Naturforschung A, 47(12), 1248-1252.
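The kcal/mol to cm$^{-1}$ conversion the answer relies on can be checked from standard constants (a sketch; CODATA values for the constants and the thermochemical calorie, 4184 J/kcal, are assumed):

```python
N_A = 6.02214076e23   # Avogadro constant, 1/mol
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e10     # speed of light in cm/s (cm, so the result is in cm^-1)

def kcal_mol_to_wavenumber(e_kcal_per_mol):
    """Convert a molar binding energy to spectroscopic wavenumbers (cm^-1)."""
    joules_per_molecule = e_kcal_per_mol * 4184.0 / N_A
    return joules_per_molecule / (h * c)

print(kcal_mol_to_wavenumber(2.6))  # about 909 cm^-1, the answer's "about 900"
```
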
{ "domain": "chemistry.stackexchange", "id": 10037, "tags": "organic-chemistry, physical-chemistry" }
Boundary Conditions for Static Structural Analysis of a Turbine Rotor Using FEM
Question: I am trying to perform static structural analysis of a turbine rotor which is rotating at a given angular speed, say 1000 rad/s. I also know the pressure distribution from CFD analysis. What are the boundary conditions for static structural analysis? I can use angular velocity as inertial load, pressure distribution from CFD as applied load. Do I have to restrain the translation/rotation of surface 1, shown in the figure? If yes, why? Answer: If your CFD analysis for the pressure loads was correct, the resultant of all the pressure loads should contain an axial load and a torque on the complete assembly. If there is no torque, the turbine is useless! The axial load is not useful for the complete machine, but you can't avoid it. Whether it is rotating or not, the turbine wheel can move in space as a rigid body. So you need to restrain it to prevent that, and create reaction forces that balance the applied axial load and torque. The best way to do this is to restrain it at the location where it would be attached to the turbine shaft. You will then get the "correct" stress distribution in the disk. If you restrain it at some other arbitrary point or points, you will probably get an unrealistically high stress at those points. (The turbine disk is probably not attached to the shaft at your "surface 1", unless it is a shrink fit on the shaft - for example there is no practical way that you could bolt it to the shaft through that surface.) Actually, a much more efficient way to model this would be to model a sector of the turbine wheel containing just one blade, and derive the correct boundary conditions from the fact that the deflections of all the other sectors are identical (when measured in cylindrical polar coordinates) - but going into the details of how to do that is outside the scope of your actual question.
{ "domain": "engineering.stackexchange", "id": 2372, "tags": "finite-element-method, nastran" }
Does a black hole become a normal star again?
Question: Stars which exceed the Tolman-Oppenheimer-Volkoff limit can become a black hole. What happens to the star after it becomes a black hole? Does it regain its status as a star? Answer: There are several ways this question could be answered, but they all come down to an emphatic "no": a black hole will not return to being a main sequence star. The simplest way to see this is probably that a black hole has a much higher entropy than a star or even another type of stellar remnant of even vaguely similar mass, and so there simply could not exist a spontaneous process by which a black hole develops back into a star. A black hole, once formed, will stay a black hole; however, it is believed that the Hawking process will lead to the black hole eventually evaporating. The time scale for evaporation of a stellar-remnant black hole, though, is mind-bogglingly long, and they will not evaporate until long after stellar formation has ceased in the Universe.
{ "domain": "astronomy.stackexchange", "id": 743, "tags": "black-hole" }
Why does UVA penetrate deeper than UVB, though it's weaker?
Question: Even though UVA radiation ranges in longer wavelengths (315–400 nm) than UVB (280–315 nm) and thus is less energetic, UVA is able to penetrate deeper into the skin and even reach the dermis. Why is there such a discrepancy? Answer: Our skin contains quite a lot of water and I think this answers the question: The absorption of UV light by this water goes drastically up when you shift to shorter wavelengths, with a minimum absorption in the UV-A spectrum. See this figure (from here): So water has a very low absorption around 340-350nm (UV-A) and a very high absorption at shorter wavelengths (UV-B and UV-C). This also explains why we get a lot more UV-A through the atmosphere than harder UV radiation.
{ "domain": "biology.stackexchange", "id": 8199, "tags": "biophysics, skin, radiation, uv" }
Correctness of the Betweenness centrality formula
Question: Betweenness centrality is defined as the number of shortest paths that go through a node in the graph. The formula is: $$\sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$ Where $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$ and $\sigma_{st}(v)$ is the number of those paths that pass through $v$. However it doesn't seem to me that the formula calculates what is defined. Why do we divide by the total number of shortest paths between $s$ and $t$ each time? Shouldn't we just divide by $2$ to compensate for the fact that $s$ and $t$ will appear twice in different orders? Answer: Suppose we want to quantify the extent to which $v$ is between $s$ and $t$. There could be a few ways. One way to describe that extent is the probability of passing through $v$ if we want to reach from $s$ to $t$ by a randomly-selected shortest path. Assuming each shortest path is selected with equal probability, we will get $\frac{\sigma_{st}(v)}{\sigma_{st}}$, where $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$ and $\sigma_{st}(v)$ is the number of those paths that pass through $v$. In particular, the extent of $v$ between $s$ and $t$ is 0 if none of the shortest paths from $s$ to $t$ goes through $v$, while it is 1 if all of them must go through $v$. Assigning the same weight to each pair of starting node and destination node, we can see that $\sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$ measures the extent to which $v$ is a center of betweenness. The graph is created by https://graphonline.ru/ If you use $\frac{\sigma_{st}(v)}{2}$ to quantify the extent to which $v$ is between $s$ and $t$, there is no problem if you just care about $v$ considering $s$ and $t$ as fixed. However, take a look at the above graph. How much is $v_3$ between $v_0$ and $v_4$? There are 3 shortest paths from $v_0$ to $v_4$, 2 of which pass through $v_3$. We get $\frac{\sigma_{v_0v_4}(v_3)}{2} = 2/2 = 1$.
How much is $v_5$ between $v_0$ and $v_6$? There is only 1 shortest path from $v_0$ to $v_6$, which passes through $v_5$. We get $\frac{\sigma_{v_0v_6}(v_5)}2 = 1/2=0.5$. Since $1>0.5$, we would like to conclude that $v_3$ is more between $v_0$ and $v_4$ than $v_5$ is between $v_0$ and $v_6$. However, we can go to $v_4$ without passing through $v_3$ while we must pass through $v_5$ to reach $v_6$ by a shortest path. So $v_3$ should be less between $v_0$ and $v_4$ than $v_5$ is between $v_0$ and $v_6$. This simple example shows that it does not make much sense to divide by 2 or, in fact, any constant if we want to normalize the measurement. Exercises Exercise 1. What are the centers of the graph above in terms of the betweenness centrality? (Note there could be multiple centers.) Exercise 2. Suppose we define the betweenness centrality of $v$ as $\sum_{s \neq v \neq t} \frac{\tau_{st}(v)}{\tau_{st}}$, where $\tau_{st}$ is the total number of distinct edges in the union of all shortest paths from $s$ to $t$ and $\tau_{st}(v)$ is the number of those edges that are also incident to $v$. Would you consider the above definition a better definition of betweenness centrality? Exercise 3. Suppose we define the betweenness centrality of $v$ as $\sum_{s \neq v \neq t} \frac{\rho_{st}(v)}{\rho_{st}}$, where $\rho_{st}$ is the total number of distinct edges in the union of all shortest paths from $s$ to $t$ and $\rho_{st}(v)$ is the number of distinct edges that are on a shortest path from $s$ to $t$ that passes through $v$. Would you consider the above definition a better definition of betweenness centrality?
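As a cross-check of the ratio definition, here is a minimal dependency-free Python sketch (the function names and the sample graph are illustrative, not from the answer) that computes the ordered-pair sum $\sum_{s \neq v \neq t} \sigma_{st}(v)/\sigma_{st}$ on an unweighted graph, using the identity $\sigma_{st}(v) = \sigma_{sv}\,\sigma_{vt}$ exactly when $d(s,v)+d(v,t)=d(s,t)$:

```python
from collections import deque

def bfs_counts(graph, s):
    """BFS from s: return (dist, sigma), the shortest-path distance and the
    number of shortest paths from s to every reachable node."""
    dist, sigma = {s: 0}, {s: 1}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:           # first time we reach w
                dist[w] = dist[u] + 1
                sigma[w] = 0
                queue.append(w)
            if dist[w] == dist[u] + 1:  # u lies on a shortest path to w
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(graph):
    """Sum of sigma_st(v)/sigma_st over ordered pairs s != v != t.
    Uses sigma_st(v) = sigma_sv * sigma_vt when d(s,v) + d(v,t) = d(s,t).
    (Halve the result if each unordered pair should count once.)"""
    info = {s: bfs_counts(graph, s) for s in graph}
    bc = {v: 0.0 for v in graph}
    for s in graph:
        dist_s, sigma_s = info[s]
        for t in dist_s:
            if t == s:
                continue
            for v in graph:
                if v in (s, t) or v not in dist_s:
                    continue
                dist_v, sigma_v = info[v]
                # v is on a shortest s-t path iff the distances add up
                if t in dist_v and dist_s[v] + dist_v[t] == dist_s[t]:
                    bc[v] += sigma_s[v] * sigma_v[t] / sigma_s[t]
    return bc

# Path graph 0-1-2-3: the middle nodes score 4 over ordered pairs,
# the endpoints score 0.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

Running betweenness(path) gives 4.0 for each middle node and 0.0 for the endpoints; divide by 2 if each unordered pair should be counted once.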
{ "domain": "cs.stackexchange", "id": 13825, "tags": "graphs, social-networks, network-analysis" }
Why are certain interactions of fields allowed but others are not?
Question: I read that the interaction $hhh$ is allowed but $\gamma\gamma\gamma$ is not allowed. But both fields correspond to bosons. Why should one kind of interaction be allowed but not the other? Answer: Because. The QED lagrangian is even under charge conjugation, and so not even effective lagrangians of it support this term (Furry's theorem). Specifically, photons are odd under C, the transformation that reverses charges. Reversing charges means reversing the vector potentials coupling to them, maintaining the lagrangian invariant. A three-photon term would pick up a factor $(-1)^3 = -1$ under C, so it cannot appear in a C-even lagrangian. It is our world, not another, notional one. Just so. By contrast, the Weak sector does not conserve C, or P, or even CP (!), so assigning a C eigenvalue to the Higgs that would constrain its couplings is meaningless.
{ "domain": "physics.stackexchange", "id": 74559, "tags": "particle-physics" }
How to unify all the subunits in a PDB file?
Question: A PDB file may contain a TER record. For example, hemoglobin has 4 subunits separated by a TER identifier. If we take hemoglobin's biological assembly pdb file we see: ATOM 1067 NH1 ARG A 141 26.176 8.362 17.810 1.00 11.11 N ATOM 1068 NH2 ARG A 141 24.650 9.068 16.200 1.00 13.86 N ATOM 1069 OXT ARG A 141 26.697 14.784 20.720 1.00 10.99 O TER 1070 ARG A 141 ATOM 1071 N HIS B 2 3.670 -13.643 19.447 1.00 38.58 N ATOM 1072 CA HIS B 2 2.695 -14.734 19.744 1.00 32.83 C ATOM 1073 C HIS B 2 1.379 -14.140 20.199 1.00 30.79 C Is there an easy way to merge all the subunits into a single VMD frame? Answer: If you don't care about residue and atom numbering, which you probably do, then sed '/^TER/d' < 1A3N.pdb > 1A3N_combined.pdb will do. Since the atom entries must be renumbered, it looks like MDAnalysis will strip the TER entries automatically: #!/usr/bin/env python2 import MDAnalysis u = MDAnalysis.Universe('1A3N.pdb') with MDAnalysis.Writer('1A3N_combined.pdb') as writer: writer.write(u) So, ATOM 1067 NH1 ARG A 141 26.176 8.362 17.810 1.00 11.11 N ATOM 1068 NH2 ARG A 141 24.650 9.068 16.200 1.00 13.86 N ATOM 1069 OXT ARG A 141 26.697 14.784 20.720 1.00 10.99 O TER 1070 ARG A 141 ATOM 1071 N HIS B 2 3.670 -13.643 19.447 1.00 38.58 N ATOM 1072 CA HIS B 2 2.695 -14.734 19.744 1.00 32.83 C ATOM 1073 C HIS B 2 1.379 -14.140 20.199 1.00 30.79 C becomes ATOM 1067 NH1 ARG A 141 26.176 8.362 17.810 1.00 11.11 A N ATOM 1068 NH2 ARG A 141 24.650 9.068 16.200 1.00 13.86 A N ATOM 1069 OXT ARG A 141 26.697 14.784 20.720 1.00 10.99 A O ATOM 1070 N HIS B 2 3.670 -13.643 19.447 1.00 38.58 B N ATOM 1071 CA HIS B 2 2.695 -14.734 19.744 1.00 32.83 B C ATOM 1072 C HIS B 2 1.379 -14.140 20.199 1.00 30.79 B C where there are now chain IDs in the last column. See how this doesn't renumber residues? 
Modify the script; resids is a property, so the in-place object mutation works properly: #!/usr/bin/env python2 import numpy as np import MDAnalysis u = MDAnalysis.Universe('1A3N.pdb') u.residues.resids = np.arange(1, 1 + len(u.residues.resids)) with MDAnalysis.Writer('1A3N_combined.pdb') as writer: writer.write(u) Finally: ATOM 1067 NH1 ARG A 141 26.176 8.362 17.810 1.00 11.11 A N ATOM 1068 NH2 ARG A 141 24.650 9.068 16.200 1.00 13.86 A N ATOM 1069 OXT ARG A 141 26.697 14.784 20.720 1.00 10.99 A O ATOM 1070 N HIS B 142 3.670 -13.643 19.447 1.00 38.58 B N ATOM 1071 CA HIS B 142 2.695 -14.734 19.744 1.00 32.83 B C ATOM 1072 C HIS B 142 1.379 -14.140 20.199 1.00 30.79 B C
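For completeness, the TER stripping and serial renumbering can also be sketched in plain Python with no third-party dependency, relying on the fixed-width PDB columns (record name in columns 1-6, atom serial in columns 7-11). The function name is illustrative, and this is only a sketch for ATOM/HETATM/TER records, not a full PDB parser:

```python
def merge_chains(pdb_lines):
    """Drop TER records and renumber atom serials (columns 7-11, 1-based)
    so that they stay consecutive after the TER lines are removed."""
    out, serial = [], 0
    for line in pdb_lines:
        record = line[:6].strip()
        if record == "TER":
            continue  # drop the chain terminator entirely
        if record in ("ATOM", "HETATM"):
            serial += 1
            # right-justify the new serial into the fixed-width field
            line = line[:6] + str(serial).rjust(5) + line[11:]
        out.append(line)
    return out

example = [
    "ATOM   1069  OXT ARG A 141      26.697  14.784  20.720  1.00 10.99           O",
    "TER    1070      ARG A 141",
    "ATOM   1071  N   HIS B   2       3.670 -13.643  19.447  1.00 38.58           N",
]
merged = merge_chains(example)  # 2 ATOM lines left, serials 1 and 2
```

Like the sed route, this keeps the original residue numbers; only the atom serials are rewritten.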
{ "domain": "chemistry.stackexchange", "id": 9397, "tags": "software, proteins" }
Manage Labels in a Gitlab Project
Question: Background This is the first (and only) Go program I've written so far. My employer just formed a small team to make a prototype written in Go ready for production, so I (and everyone else on my team) am learning Go as quickly as I can. We're setting up our documentation, version control, issue tracking, etc... using Gitlab. I was reading up on best practices for organizing and tagging Github issues, thinking about how to best organize the labels for our issues, and decided to write a Go program to manage the labels in our Gitlab projects. It's a good excuse to write my first Go program! We'll be able to use this to create the same labels across all our Gitlab projects. The gist is that this program defines labels and categories. Each category is assigned a color, and each label is assigned a category. Thus all labels in a category have the same color. I use Gitlab's REST API to create labels or update their colors. Feedback I'm Looking For I'll take any feedback you have! These things interest me most: Errors How to make my code more idiomatic A good approach to add concurrency (i.e. create or update labels in parallel). A good approach for testing The Code All my code can be found in my git project, but I'll include a slightly slimmed-down version in this question. labelmaker/main.go package main import ( "labelmaker/category" "labelmaker/label" "log" "os" ) var labels = []label.Label{ {"angular", category.Platform}, {"bug", category.Problem}, } func main() { if !label.Ready() { log.Println("Exiting because we don't have the current labels") return } for _, l := range labels { if l.Exists() { l.Update() } else { l.Create() } } } func init() { log.SetOutput(os.Stdout) } category/category.go package category type Category struct { Name string Description string Color string } var Problem Category = Category{ Name: "problems", Description: "Issues that make the product feel broken. 
High priority, especially if its present in production.", Color: "#CC0033", } var Platform Category = Category{ Name: "platform", Description: "If the repository covers multiple parts, this is how we designate where the issue lives. (i.e. iOS and Android for cross-platform tablet app).", Color: "#A295D6", } label/label.go package label import ( "labelmaker/category" "labelmaker/gitlab" ) type Label struct { Name string Category category.Category } func (l *Label) Exists() bool { return gitlab.Exists(l.Name) } func (l *Label) Update() { gitlab.Update(l.Name, l.Category.Color) } func (l *Label) Create() { gitlab.Create(l.Name, l.Category.Color) } func Ready() bool { return gitlab.Labels != nil } gitlab/gitlab.go package gitlab import ( "bytes" "encoding/json" "net/http" ) const ( personalAccessToken string = "abc123" projectId string = "1" labelsUrl string = "http://localhost:8080/api/v3/projects/" + projectId + "/labels?private_token=" + personalAccessToken ) type Label struct { Name string Color string Description string Open_issues_count int Closed_issues_count int Open_merge_requests_count int Subscribed bool } var Labels map[string]Label func Exists(name string) bool { _, ok := Labels[name] return ok } func Create(name string, color string) error { return send(name, color, "POST") } func Update(name string, color string) error { return send(name, color, "PUT") } func send(name string, color string, method string) (err error) { body := map[string]string{ "name": name, "color": color, } buff := new(bytes.Buffer) json.NewEncoder(buff).Encode(body) var req *http.Request req, err = http.NewRequest(method, labelsUrl, buff) if err != nil { return } req.Header.Set("Content-Type", "application/json; charset=utf-8") client := &http.Client{} var resp *http.Response resp, err = client.Do(req) if resp != nil { defer resp.Body.Close() } if err != nil { return } var labelData Label err = json.NewDecoder(resp.Body).Decode(&labelData) if err == nil { Labels[labelData.Name] = 
labelData } return } func fetch() (labels map[string]Label, err error) { labels = make(map[string]Label) var resp *http.Response resp, err = http.Get(labelsUrl) if resp != nil { defer resp.Body.Close() } if err != nil { Labels = nil return } var labelData []Label err = json.NewDecoder(resp.Body).Decode(&labelData) if err == nil { for _, l := range labelData { labels[l.Name] = l } } return } func init() { Labels, _ = fetch() } Answer: Don't overuse var. var resp *http.Response resp, err = http.Get(labelsUrl) // or resp, err := http.Get(labelsUrl) Stick to the latter, short variable declaration. No need to use var unless you really need one. Here we must use var because it is a package-level declaration. var Problem Category = Category{ Name: "problems", Description: "Issues that make the product feel broken. High priority, especially if its present in production.", Color: "#CC0033", } // or var Problem = Category{ // <-- no type here Name: "problems", Description: "Issues that make the product feel broken. High priority, especially if its present in production.", Color: "#CC0033", } If we have multiple definitions we may use var as a block: var ( Problem = Category{ Name: "problems", Description: "Issues that make the product feel broken. High priority, especially if its present in production.", Color: "#CC0033", } Platform = Category{ Name: "platform", Description: "If the repository covers multiple parts, this is how we designate where the issue lives. (i.e. iOS and Android for cross-platform tablet app).", Color: "#A295D6", } ) var blocks behave similarly to import, const, and type blocks. No need to repeat yourself. init is not a good place to do an HTTP request. It will block all other code and main for a rather long time until it succeeds or fails for some reason. Also it's important to handle possible networking errors, which you've ignored with _. It's nice that you've used the streaming API instead of general Marshal/Unmarshal calls, which may get very memory hungry. 
Consider using os.Exit(1) or the log.Fatal* functions to terminate the process with a non-zero return code. panic will do the trick as well. The fields of struct Label don't follow Go naming conventions. If you need to handle different JSON names, use field tags.
{ "domain": "codereview.stackexchange", "id": 31640, "tags": "beginner, go, rest, git" }
How to estimate condensation from air?
Question: How to estimate the amount of water condensing from air on a surface, given the air's temperature and relative humidity and how they change over time, the surface temperature, the material's thermal properties, roughness and whatever else needs to be given about the air and surface? For my purpose, we may assume the surface starts off dry, but the more general situation would be as interesting. If enough water condenses, it'll form drops and run off - can we account for that? Are there some fairly simple formulas or rules of thumb? High accuracy is not needed; I'll be happy to get grams per sq meter per second (or whatever) to within a factor of two. (What if we wanted higher accuracy?) Answer: So you want to know how much water a certain surface adsorbs. This is really dependent on the surface material/conditions. Check adsorption and relative humidity on Wikipedia. As far as I can tell, those two articles contain roughly enough information. I am not a specialist on the subject, so I might be missing some important factor.
{ "domain": "physics.stackexchange", "id": 51, "tags": "thermodynamics, thermal-radiation, water, estimation" }
PTAS (polynomial time approximation scheme) for Euclidean TSP/Minimum-Cost k-Connected subgraph problem
Question: Problem 1 I have read "On Approximation of the Minimum-Cost k-Connected Spanning Subgraph Problem" (by A. Czumaj, A. Lingas), and even in the abstract there are 2 statements: "We present a polynomial time approximation scheme for Minimum-Cost k-Connected subgraph problem", and in the second paragraph of the abstract they state that there is no PTAS unless P=NP. I am sure there is a small difference between the two problems, but I cannot see what it is. Could someone clarify for me which problem does not have a PTAS and how it's different from the problem in the first paragraph of the abstract? Problem 2 Because I didn't really get the paper from the first problem, I read "Polynomial Time Approximation Schemes for Euclidean TSP and other Geometric Problems" (by S. Arora), and I have the same problem here: he claims to give a PTAS for Euclidean TSP, but in the intro states "[...] showed that if P≠NP then metric TSP and many other problems do not have a PTAS". I see some kind of contradiction there: they are giving a PTAS, but say that someone showed that there cannot be any. What am I missing? Answer: Each of the papers shows that there is a polynomial-time approximation scheme (PTAS) for the problem it studies if the input instance is Euclidean, and that there is no PTAS if the input instance is arbitrary (if P≠NP).
{ "domain": "cstheory.stackexchange", "id": 2063, "tags": "graph-theory, np-hardness, approximation-algorithms, tsp" }
Bootstrap Comment post HTML markup
Question: I am creating styles for the comment posts for a project using Bootstrap. I am wondering whether this is good HTML markup and a good extension of Bootstrap's CSS. Using HTML5 markup recommendations, the HTML for the comment posts is as follows. <div class="container"> <div class="row"> <div class="col-md-8"> <h2 class="page-header">Comments</h2> <section class="comment-list"> <!-- First Comment --> <div class="row"> <div class="col-md-2 col-sm-2 hidden-xs"> <figure class="thumbnail"> <img class="img-responsive" src="http://www.keita-gaming.com/assets/profile/default-avatar-c5d8ec086224cb6fc4e395f4ba3018c2.jpg" /> <figcaption class="text-center">username</figcaption> </figure> </div> <div class="col-md-10 col-sm-10"> <div class="panel panel-default arrow left"> <div class="panel-body"> <header class="text-left"> <div class="comment-user"><i class="fa fa-user"></i> That Guy</div> <time class="comment-date" datetime="16-12-2014 01:05"><i class="fa fa-clock-o"></i> Dec 16, 2014</time> </header> <div class="comment-post"> <p> Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. </p> </div> <p class="text-right"><a href="#" class="btn btn-default btn-sm"><i class="fa fa-reply"></i> reply</a></p> </div> </div> </div> </div> </section> </div> I tried to work off Bootstrap's CSS and add my style to it. 
The CSS is as follows /*Comment List styles*/ .comment-list .row { margin-bottom: 0px; } .comment-list .panel .panel-heading { padding: 4px 15px; position: absolute; border:none; /*Panel-heading border radius*/ border-top-right-radius:0px; top: 1px; } .comment-list .panel .panel-heading.right { border-right-width: 0px; /*Panel-heading border radius*/ border-top-left-radius:0px; right: 16px; } .comment-list .panel .panel-heading .panel-body { padding-top: 6px; } .comment-list figcaption { /*For wrapping text in thumbnail*/ word-wrap: break-word; } /* Portrait tablets and medium desktops */ @media (min-width: 768px) { .comment-list .arrow:after, .comment-list .arrow:before { content: ""; position: absolute; width: 0; height: 0; border-style: solid; border-color: transparent; } .comment-list .panel.arrow.left:after, .comment-list .panel.arrow.left:before { border-left: 0; } /*****Left Arrow*****/ /*Outline effect style*/ .comment-list .panel.arrow.left:before { left: 0px; top: 30px; /*Use border color of panel*/ border-right-color: inherit; border-width: 16px; } /*Background color effect*/ .comment-list .panel.arrow.left:after { left: 1px; top: 31px; /*Change for different outline color*/ border-right-color: #FFFFFF; border-width: 15px; } /*****Right Arrow*****/ /*Outline effect style*/ .comment-list .panel.arrow.right:before { right: -16px; top: 30px; /*Use border color of panel*/ border-left-color: inherit; border-width: 16px; } /*Background color effect*/ .comment-list .panel.arrow.right:after { right: -14px; top: 31px; /*Change for different outline color*/ border-left-color: #FFFFFF; border-width: 15px; } } .comment-list .comment-post { margin-top: 6px; } I have included a Bootply link to the example. 
comment post example on Bootply Answer: About your HTML Removing all the div elements that seem to be needed only for presentation, you have this markup: <h2 class="page-header">Comments</h2> <section class="comment-list"> <!-- First Comment --> <figure class="thumbnail"> <img class="img-responsive" src="http://www.keita-gaming.com/assets/profile/default-avatar-c5d8ec086224cb6fc4e395f4ba3018c2.jpg" /> <figcaption class="text-center">username</figcaption> </figure> <header class="text-left"> <div class="comment-user"><i class="fa fa-user"></i> That Guy</div> <time class="comment-date" datetime="16-12-2014 01:05"><i class="fa fa-clock-o"></i> Dec 16, 2014</time> </header> <div class="comment-post"> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p> </div> <p class="text-right"><a href="#" class="btn btn-default btn-sm"><i class="fa fa-reply"></i> reply</a></p> </section> Element for all comments The class name comment-list suggests that the section groups all comments (which would be the appropriate element for a list of comments). If that’s the case, the heading ("Comments") should be a child of this section. <section> <h2>Comments</h2> <!-- all comments --> </section> Element for a single comment For each comment, you should use the article element. <section> <h2>Comments</h2> <article><!-- First comment --></article> <article><!-- Second comment --></article> </section> Element for CSS icons Using the i element for CSS/font icons is not appropriate. Use span instead. datetime format Your time element’s datetime value is not in the correct format. 
In your case it probably should be <time datetime="2014-12-16T01:05">Dec 16, 2014</time> Username / avatar I don’t understand why you seem to reference two usernames ("username" in figure, "That Guy" in header), but using figure for the avatar doesn’t seem to be the best choice to me (not saying that it would necessarily be wrong). Is the username really the caption of the avatar image? I’d rather think that these two, the avatar and the username, stand on their own. But if you think it makes sense, keep it like that.
{ "domain": "codereview.stackexchange", "id": 11453, "tags": "html5, twitter-bootstrap" }
How is the Hartree accuracy calculated between the exact and VQE results?
Question: In the Simulating Molecules using VQE section of the Qiskit textbook it states an accuracy of $0.0016$ Hartree between the exact and VQE results with Hartree energy values of $-1.86712098$ and $-1.80040360007339$ for the ground state of $H_2$, respectively. How is $0.0016$ computed, and is it an error metric similar to absolute error? Answer: I think you may have misread the section - the document says: When noise mitigation is enabled, even though the result does not fall within chemical accuracy (defined as being within 0.0016 Hartree of the exact result), it is fairly close to the exact solution. So, the VQE result is not within chemical accuracy, but it is fairly close to the exact solution. How is this number derived? As stated on Wikipedia: Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way it is necessary to use a series of post-Hartree–Fock methods and combine the results. So, if you convert kcal to hartrees and moles to molecules, you get about 0.00159 hartree/molecule, which is exactly the 0.0016 figure IBM quotes.
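The conversion behind that number can be checked in a couple of lines; the only assumed input is the standard factor 1 hartree ≈ 627.509 kcal/mol:

```python
# 1 kcal/mol expressed in hartree, using 1 hartree ≈ 627.509 kcal/mol
KCAL_PER_MOL_PER_HARTREE = 627.509
chemical_accuracy = 1.0 / KCAL_PER_MOL_PER_HARTREE

print(round(chemical_accuracy, 5))  # 0.00159

# The H2 energies quoted in the question differ by far more than this,
# consistent with the textbook's remark that the run misses chemical accuracy.
exact, vqe = -1.86712098, -1.80040360007339
print(abs(exact - vqe) <= chemical_accuracy)  # False
```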
{ "domain": "quantumcomputing.stackexchange", "id": 1959, "tags": "qiskit, mathematics, chemistry" }
A question about speed
Question: An object with mass 0.5 kg slides in a straight line for 200 m, where the friction acting on it is 0.4 times the normal reaction acting on it. Find its initial speed. I let $u$ be the speed. $u^2=2as=2*0.4mg*200=2*0.4*0.5*9.81*200, u=28.01ms^{-1}$. But the answer should be $40ms^{-1}$, can someone tell me what's wrong with my solution? Thank you. EDIT: Correct answer: $u^2=-2as=-2(f/m)s=-2(-0.4N/m)s=2(0.4mg/m)s=2*0.4*10*200=1600$ $u=40ms^{-1}$ Answer: Your solution is wrong. For one, the law is $v^2 = u^2 + 2aS$ where v is the final velocity and u is the initial velocity. You've substituted the initial velocity in the wrong place, can't you see? :) Secondly, why do you think the acceleration is equal to 0.4(N)? It is given in the question that the friction is equal to 0.4(N), not the acceleration! Think of a way to inter-relate the two?? ;) Thirdly, try taking the acceleration due to gravity (g) as 10, it'll simplify stuff. Try solving it now, and update your question with your answer!
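Following the hints, the corrected calculation in the question's EDIT is quickly verified numerically (taking g = 10 m/s² as suggested; note the mass cancels):

```python
import math

mu, g, s = 0.4, 10.0, 200.0  # coefficient of friction, gravity, sliding distance
a = -mu * g                  # a = f/m = -mu*m*g/m, so the mass cancels out
# v^2 = u^2 + 2*a*s with final speed v = 0, so u^2 = -2*a*s
u = math.sqrt(-2 * a * s)
print(u)  # 40.0
```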
{ "domain": "physics.stackexchange", "id": 8341, "tags": "homework-and-exercises, newtonian-mechanics, friction, speed, kinematics" }
Where can I find 6 DOF manipulator packages for gazebo?
Question: Hello. Where can I find packages for a 6 DOF manipulator with a 4-wheel mobile base for Gazebo? Thanks. Originally posted by NickRos on ROS Answers with karma: 1 on 2023-02-14 Post score: 0 Answer: You can use the KUKA youBot, https://github.com/ctruillet/youbot_description or you can use the URDF and links from https://github.com/zyjiao4728/Planning-on-VKC Note that you have to convert the ROS code to ROS 2. Originally posted by Ranjit Kathiriya with karma: 1622 on 2023-02-15 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38277, "tags": "ros, manipulator, mobile-base" }
Longest Common Prefix in an array of Strings
Question: Please be brutal, and treat this as if I were at an interview at a top 5 tech firm. Question: Write a function to find the longest common prefix string amongst an array of strings. Time it took: 17 minutes Worst case complexity analysis: n possible array elements, each can have length m that we are traversing, hence O(n*m); m could be a constant, since it's rare to find a string with very large length, so in a sense, I imagine this could be treated as O(n *constant length(m)) = O(n)? Space complexity analysis: O(n) public String longestCommonPrefix(String[] strs) { String longestPrefix = ""; if(strs.length>0){ longestPrefix = strs[0]; } for(int i=1; i<strs.length; i++){ String analyzing = strs[i]; int j=0; for(; j<Math.min(longestPrefix.length(), strs[i].length()); j++){ if(longestPrefix.charAt(j) != analyzing.charAt(j)){ break; } } longestPrefix = strs[i].substring(0, j); } return longestPrefix; } Answer: Pure functions should generally be declared static. You shouldn't need to take substrings in a loop — that's inefficient. Think of scanning a two-dimensional ragged array of characters. Check that all of the first characters match, then that all of the second characters match, and so on until you find a mismatch, or one of the strings is too short. public static String longestCommonPrefix(String[] strings) { if (strings.length == 0) { return ""; // Or maybe return null? } for (int prefixLen = 0; prefixLen < strings[0].length(); prefixLen++) { char c = strings[0].charAt(prefixLen); for (int i = 1; i < strings.length; i++) { if ( prefixLen >= strings[i].length() || strings[i].charAt(prefixLen) != c ) { // Mismatch found return strings[i].substring(0, prefixLen); } } } return strings[0]; } Space complexity: O(1). Worst-case time complexity: O(n m) to scan every character in every string.
{ "domain": "codereview.stackexchange", "id": 7050, "tags": "java, algorithm, interview-questions" }
NSUser defaults cellForRowAtIndexPath
Question: I have been going back and forth on one certain part of my code. It works, but it doesn't seem right for some reason, as I believe there must be an easier/cleaner way to implement the NSUserDefaults. All of which are dates. I apologize that the code is so long. A lot of it is repetitive, and I want to learn the correct way to do this. In my header file: #define UserDefault [NSUserDefaults standardUserDefaults] In my Implementation file, under cellForRowAtIndexPath: //------------------------------------------------------------------------------------------------------- - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath //------------------------------------------------------------------------------------------------------- { static NSString *simpleTableIdentifier = @"SimpleTableCell"; SimpleTableCell *cell = (SimpleTableCell *)[tableView dequeueReusableCellWithIdentifier:simpleTableIdentifier]; if (cell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"SimpleTableCell" owner:self options:nil]; cell = [nib objectAtIndex:0]; } ////////////////ADD EDIT GEAR ICON TO ACCESSORY CELL VIEW////////////////// UIImage *settingsImage = [UIImage imageNamed:@"Edit Wheel"]; UIButton *settingsButton = [UIButton buttonWithType:UIButtonTypeCustom]; [settingsButton setImage:settingsImage forState:UIControlStateNormal]; [settingsButton setFrame:CGRectMake(0, 0, 28.0, 28.0)]; settingsButton.backgroundColor = [UIColor clearColor]; settingsButton.showsTouchWhenHighlighted = YES; [settingsButton addTarget:self action:@selector(settingsButtonTapped:event:) forControlEvents:UIControlEventTouchUpInside]; cell.accessoryView = settingsButton; /////////////DATE FORMATTER///////////////////////// NSDateFormatter *df = [[NSDateFormatter alloc] init]; df.dateStyle = NSDateFormatterLongStyle; UILabel *dateLabel = (UILabel*) [cell viewWithTag:100]; UILabel *daysLeftLabel = (UILabel*) [cell viewWithTag:101]; 
///////////////////ADD IMAGE TO EACH CELL ACCORDING TO THUMBNAILS ARRAY////////////////////// cell.thumbnailImageView.image = [UIImage imageNamed:[thumbnails objectAtIndex:indexPath.row]]; dateLabel.numberOfLines = 1; dateLabel.minimumScaleFactor = 8./dateLabel.font.pointSize;; dateLabel.adjustsFontSizeToFitWidth = YES; /////////////////////////////////NSUSER DEFAULTS//////////////////////////////////////////// NSData *marriageDate = [UserDefault objectForKey:@"MarriageDate"]; NSDate *marriageAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:marriageDate]; NSData *engagedDate = [UserDefault objectForKey:@"EngagedDate"]; NSDate *engagementAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:engagedDate]; NSData *movedInDate = [UserDefault objectForKey:@"MovedInDate"]; NSDate *movedInAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:movedInDate]; NSData *inLoveDate = [UserDefault objectForKey:@"InLoveDate"]; NSDate *inLoveAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:inLoveDate]; NSData *startedDatingDate = [UserDefault objectForKey:@"StartedDatingDate"]; NSDate *startedDatingAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:startedDatingDate]; NSData *firstKissDate = [UserDefault objectForKey:@"FirstKissDate"]; NSDate *firstKissAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:firstKissDate]; NSData *firstMetDate = [UserDefault objectForKey:@"FirstMetDate"]; NSDate *firstMetAnniversary = [NSKeyedUnarchiver unarchiveObjectWithData:firstMetDate]; //////////SEE IF DATA EXISTS-IF IT DOES, LOAD IT, IF NOT LOAD STANDARD LABELS////////// if (indexPath.row == 0) { if ([UserDefault objectForKey:@"MarriageDate"]) { dateLabel.text = [NSString stringWithFormat:@"Married Since: %@",[df stringFromDate:marriageAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:marriageAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"You 
have been married for %ld days!", (long)dayss]; } /////////////LOAD STANDARD LABELS IF NO DATA////////////////// else { dateLabel.text = [NSString stringWithFormat:@"Date You Got Married"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } if (indexPath.row == 1) { if ([UserDefault objectForKey:@"EngagedDate"]) { dateLabel.text = [NSString stringWithFormat:@"Engaged On: %@",[df stringFromDate:engagementAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:engagementAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"First Engaged %ld Days Ago!", (long)dayss]; } else { dateLabel.text = [NSString stringWithFormat:@"Date You Got Engaged"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } if (indexPath.row == 2) { if ([UserDefault objectForKey:@"MovedInDate"]) { dateLabel.text = [NSString stringWithFormat:@"Moved In On: %@",[df stringFromDate:movedInAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:movedInAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"You first moved in together %ld days ago!", (long)dayss]; } else { dateLabel.text = [NSString stringWithFormat:@"Date You Moved In Together"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } if (indexPath.row == 3) { if ([UserDefault objectForKey:@"InLoveDate"]) { dateLabel.text = [NSString stringWithFormat:@"In Love On: %@",[df stringFromDate:inLoveAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:inLoveAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"You fell in love %ld days ago!", (long)dayss]; } else { dateLabel.text = [NSString stringWithFormat:@"Date You First 
Fell In Love"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } if (indexPath.row == 4) { if ([UserDefault objectForKey:@"StartedDatingDate"]) { dateLabel.text = [NSString stringWithFormat:@"Started Dating: %@",[df stringFromDate:startedDatingAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:startedDatingAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"You started dating %ld days ago!", (long)dayss]; } else { dateLabel.text = [NSString stringWithFormat:@"Date You Started Dating"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } if (indexPath.row == 5) { if ([UserDefault objectForKey:@"FirstKissDate"]) { dateLabel.text = [NSString stringWithFormat:@"First Kiss On: %@",[df stringFromDate:firstKissAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:firstKissAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"You had your first kiss %ld days ago!", (long)dayss]; } else { dateLabel.text = [NSString stringWithFormat:@"Date Of Your First Kiss"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } if (indexPath.row == 6) { if ([UserDefault objectForKey:@"FirstMetDate"]) { dateLabel.text = [NSString stringWithFormat:@"First Met On: %@",[df stringFromDate:firstMetAnniversary]]; components = [[NSCalendar currentCalendar] components: NSCalendarUnitDay fromDate:firstMetAnniversary toDate: todaysDate options: 0]; dayss = [components day]; daysLeftLabel.text = [NSString stringWithFormat:@"You first met your partner %ld days ago!", (long)dayss]; } else { dateLabel.text = [NSString stringWithFormat:@"Date You First Met"]; daysLeftLabel.text = [NSString stringWithFormat:@"Tap Gear Icon To Get Started"]; } } return cell; } Any help would be greatly 
appreciated. I know it's good practice to keep cellForRowAtIndexPath: as uncluttered as possible. Answer: #define UserDefault [NSUserDefaults standardUserDefaults] NO! There's a reason that Apple didn't bring #define pre-processor macros into the Swift language. Generally speaking, the primary function of these macros is to make debugging more difficult. In almost all cases of macros, you can get the same thing with constants or functions. Both of these allow for better, more accurate syntax highlighting, and they give an indication as to the actual return type (or depending on the macro, the argument types). So here, if we want to shorten the standardUserDefaults call down slightly, we can create a function: NSUserDefaults * standardUserDefaults() { return [NSUserDefaults standardUserDefaults]; } But I don't really know how much value this really has. Honestly, we should probably be calling into user defaults so infrequently that this macro should have relatively little value. For example, you already have some redundant calls into user defaults in your existing code base, but I'd make the case that we could minimize this further. If I've read your code correctly, we are storing seven values in user defaults, correct? They're all dates for particular events, correct? So why don't we stick all of these dates into a dictionary which then goes into a single key for user defaults? So now, we only access user defaults when we need to store or retrieve our dictionary of @"AnniversaryDates". Throughout your code, you have big sections marked off with comments looking something like this: ////////////////ADD EDIT GEAR ICON TO ACCESSORY CELL VIEW////////////////// This is a pretty clear indicator that you're already aware of how to break your method down into smaller methods. All you have to do is simply do it. But let's be clear... some of these things can and should be in the logic of the SimpleTableCell class (which probably deserves a better name).
I haven't seen the UI, but it seems like our cell has about four UI elements and an action. Well, these UI elements can be set up in the nib. The image for the settings button can be set in the nib. And the action for tapping that button can be tied to the cell. The cell can then forward that tap to whatever by whatever means (a delegate). But importantly, given that we have a nib, we're doing way, way too much UI work programmatically. The code for the date formatter doesn't belong in here. I'd make a case for an NSDateFormatter category. + (NSDateFormatter *)anniversaryDateFormatter { static NSDateFormatter *formatter; static dispatch_once_t onceToken; dispatch_once(&onceToken, ^{ formatter = [[NSDateFormatter alloc] init]; formatter.dateStyle = NSDateFormatterLongStyle; }); return formatter; } And now when we need it, we're just grabbing it via a simple call: [NSDateFormatter anniversaryDateFormatter]; As for this big if-else block you have, well... it simply shouldn't exist at all. We should have proper model objects. Something along the lines of this: @interface Anniversary : NSObject @property AnniversaryType anniversaryType; @property NSDate *anniversaryDate; @property (readonly) NSString *dateOfAnniversaryDescription; @property (readonly) NSString *daysAgoDescription; @end Where AnniversaryType is an enumeration we define to account for the different types of anniversaries we care about in the app. From here, we simply build out an array of these objects, and use these model objects to set up our cell: Anniversary *anniversary = anniversaries[indexPath.row]; cell.dateLabel.text = anniversary.dateOfAnniversaryDescription; cell.daysLeftLabel.text = anniversary.daysAgoDescription ?: @"Tap Gear Icon To Get Started"; And yes... we should either publicly expose these labels in the cell's header, or we should provide methods that allow us to set this text.
We absolutely shouldn't be hacking ourselves references to these labels using their tags as you did here: UILabel *dateLabel = (UILabel*) [cell viewWithTag:100]; UILabel *daysLeftLabel = (UILabel*) [cell viewWithTag:101]; To be explicitly clear about how much of your logic should be in the method you've shared with us, if I were writing this, I'd be shooting for something about like this: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *reuseID = @"SimpleTableCell"; SimpleTableCell *cell = (SimpleTableCell *)[tableView dequeueReusableCellWithIdentifier:reuseID]; if (!cell) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:reuseID owner:self options:nil]; cell = nib.firstObject; } Anniversary *anniversary = anniversaries[indexPath.row]; cell.thumbnail = anniversary.thumbnail; cell.anniversaryDate = anniversary.dateOfAnniversaryDescription; cell.daysSinceAnniversary = anniversary.daysAgoDescription ?: @"Tap Gear Icon To Get Started"; return cell; } And this is effectively how this method should look every time. It has two parts. The first part is the dequeuing or allocation of the necessary cell. The second part is setting all of the cell's properties based on your model object.
{ "domain": "codereview.stackexchange", "id": 17857, "tags": "objective-c, ios" }
Why Empty message in std_msg?
Question: As in the title, I don't understand why there is an Empty type in std_msgs. What is it used for? Thanks! Originally posted by Sparkle Eyes on ROS Answers with karma: 99 on 2019-05-26 Post score: 1 Answer: It can be useful in situations where you don't particularly care about the content of a message, but just that one has been received. Example: maybe a bump sensor would use an empty message because you just care that you hit something; there may be no other information available worth giving a client. Originally posted by stevemacenski with karma: 8272 on 2019-05-26 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Sparkle Eyes on 2019-05-27: Thanks Steve for the answer and the example!
{ "domain": "robotics.stackexchange", "id": 33068, "tags": "ros, std-msgs, ros-kinetic" }
Log probe requests of WiFi devices
Question: I developed a program that collects probe requests from WiFi devices. It already works but I think that a programmer can improve it. #!/usr/bin/env python import platform import threading import signal import sys import time import subprocess import re import os import os.path import argparse import distutils.spawn if (platform.system() == 'Windows'): print "Windows does not seem to be supported, try it at your own risk!" sys.exit(1) formatString = "{0: <18} {1: <32} {2: <18}\n" dontstop = False def flush_data(): with open(output,'w+') as f: if (onlyprobes == False): f.write(formatString.format("MAC", "SSID", "Last seen") ) for key, value in entries.iteritems(): if (onlyprobes == False): f.write(formatString.format(value.mac, value.ssid, time.strftime("%Y%m%d-%H:%M:%S", value.timeLastSeen))) else: f.write(value.ssid + "\n") f.flush() def signal_handler(signal, frame): exit_text = '' if (flush == True): exit_text=" flushing all to file and" print os.linesep + 'You pressed CTRL+C,' + exit_text + ' exiting...' 
if (dontstop == True): switchThread.running = False switchThread.join() if ((flush == True) and (is_stdout == False)): flush_data() sys.exit(0) class switchChannelThread (threading.Thread): def __init__(self, threadID, name, delayInSeconds, channels): threading.Thread.__init__(self) self.threadID = threadID self.name = name self.delayInSeconds = delayInSeconds self.channels = channels self.running = True def run(self): #print 'Starting switch channel thread using a delay of %d seconds' % self.delayInSeconds while self.running: for channel in self.channels: if verbose: print 'Switching to channel %d' % (channel) if osname != "Darwin": if subprocess.call([iwconfigPath, interface, "channel", str(channel)]) != 0: self.running = False sys.exit(4) else: if subprocess.call([airportPath, interface, "-c%d" % channel]) != 0: self.running = False sys.exit(4) time.sleep(float(self.delayInSeconds)) if not self.running: return class Entry (object): def __init__(self, mac, ssid, time): self.mac = mac self.ssid = ssid self.timeLastSeen = time osname = os.uname()[0] if osname != "Darwin": defaultInterface = "" else: defaultInterface = "en1" # command line parsing: parser = argparse.ArgumentParser(description='Show and collect wlan request probes') parser.add_argument('-i', '--interface', default=defaultInterface, help='the interface used for monitoring') parser.add_argument('--tshark-path', default=distutils.spawn.find_executable("tshark"), help='path to tshark binary') parser.add_argument('--ifconfig-path', default=distutils.spawn.find_executable("ifconfig"), help='path to ifconfig') parser.add_argument('--iwconfig-path', default=distutils.spawn.find_executable("iwconfig"), help='path to iwconfig') parser.add_argument('-o', '--output', default='-', help='output file (path or - for stdout)') parser.add_argument('-c', '--channel', default='all', help='channel/s to hop (i.e. 
3 or 3,5,7 or 3-9 or all or 0 for current') parser.add_argument('--verbose', action='store_true', help='verbose information') parser.add_argument('-p', '--only-probes', action='store_true', help='only saves probe data spit by newline') parser.add_argument('--flush', action='store_true', help='stores the data on the file only when interrupted') parser.add_argument('--delay', default=5, help='delay between channel change') args = parser.parse_args() tsharkPath = args.tshark_path ifconfigPath = args.ifconfig_path iwconfigPath = args.iwconfig_path interface = args.interface verbose = args.verbose onlyprobes = args.only_probes output = args.output flush = args.flush channel = args.channel delay = args.delay is_stdout = not ( (output != '') and (output != '-') ) if (interface == ""): print "Please specify interface" sys.exit(0) # only on osx: airportPath = "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport"; # check all params if not os.path.isfile(tsharkPath): print "tshark not found at path {0}".format(tsharkPath) sys.exit(1) if not os.path.isfile(ifconfigPath): print "ifconfig not found at path {0}".format(ifconfigPath) sys.exit(1) if osname != "Darwin": if not os.path.isfile(iwconfigPath): print "iwconfig not found at path {0}".format(iwconfigPath) sys.exit(1) # start interface if subprocess.call([ifconfigPath, interface, 'up']) != 0: print "cannot start interface: {0}".format(interface) sys.exit(2) # Set interface in monitor mode retVal = 0 if osname != 'Darwin': retVal = subprocess.call([iwconfigPath, interface, "mode", "monitor"]) else: retVal = subprocess.call([airportPath, interface, "-z"]) if retVal != 0: print "cannot set interface to monitor mode: {0}".format(interface) sys.exit(3) # start thread that switches channels #Regex made with regex101.com c_list = re.compile('^(([1-9]|1[0-4]),){2,14}$') c_range = re.compile('^([1-9]|1[0-4])-([1-9]|1[0-4])$') c_single = re.compile('^([1-9]|1[0-4])$') d_valid = 
re.compile('^(\d*\.*\d*)$') if (channel != '0'): dontstop = True try: float(delay) except: print "Wrong delay specified!" sys.exit(1) if (d_valid.match(str(delay)) and (float(delay) > 0)): if (channel == 'all'): channel = range(1, 13 if (osname == "Darwin") else 15) switchThread = switchChannelThread(1, 'SwitchChannel', delay, channel) switchThread.start() elif c_single.match(channel): channel = [int(channel)] switchThread = switchChannelThread(1, 'SwitchChannel', delay, channel) switchThread.start() elif c_range.match(channel): rchannel = channel.split('-') schannel = int(rchannel[0]) echannel = int(rchannel[1]) if (schannel > echannel): channel = range(echannel, schannel + 1) else: channel = range(schannel, echannel + 1) switchThread = switchChannelThread(1, 'SwitchChannel', delay, channel) switchThread.start() elif c_list.match(channel + ','): channel = channel.split(',') channel = [int(i) for i in channel] switchThread = switchChannelThread(1, 'SwitchChannel', delay, channel) switchThread.start() else: print "Wrong channel/s specified!" sys.exit(1) else: print "Wrong delay specified!" sys.exit(1) signal.signal(signal.SIGINT, signal_handler) print "Running..." 
# start tshark and read the results displayFilter = "wlan.fcs_good==1 and not wlan_mgt.ssid==\\\"\\\""; fieldParams = "-T fields -e wlan.sa -e wlan_mgt.ssid -Eseparator=,"; tsharkCommandLine = "{0} -i {1} -n -l {2}" if (osname != 'Darwin'): tsharkCommandLine += " subtype probereq -2 -R \"{3}\"" else: tsharkCommandLine += " -y PPI -2 -R \"wlan.fc.type_subtype==4 and {3}\"" tsharkCommandLine = tsharkCommandLine.format(tsharkPath, interface, fieldParams, displayFilter) if verbose: print 'tshark command: %s\n' % tsharkCommandLine, DEVNULL = open(os.devnull, 'w') popen = subprocess.Popen(tsharkCommandLine, shell=True, stdout=subprocess.PIPE, stderr=DEVNULL) # collect all Entry objects in entries entries = {} if ( (is_stdout == False) and (flush == False) ): try: f=open(output,'w+') if (onlyprobes == False): f.write(formatString.format("MAC", "SSID", "Last seen")) except Exception as e: print "An error has occurred: " + str(e) sys.exit(1) for line in iter(popen.stdout.readline, ''): line = line.rstrip() # if verbose: # print 'line: "%s"' % (line,) if line.find(',') > 0: mac, ssid = line.split(',', 1) if line in entries: #if verbose: # print "entry found (seen before): mac: '{0}', ssid: '{1}'".format(mac,ssid) entry = entries[line] entry.timeLastSeen = time.localtime() else: localtime=time.localtime() if ( (is_stdout == False) and (flush == False) ): if (onlyprobes == False): f.write(formatString.format(mac, ssid, time.strftime("%Y%m%d-%H:%M:%S", localtime))) else: f.write(ssid + "\n") f.flush() print "New entry found: mac: '{0}', ssid: '{1}'".format(mac,ssid) entries[line] = Entry(mac, ssid, localtime) I need a small orientation about code optimization. I think that many lines of code can be grouped to avoid code duplication in a clever way. My main style troubles are about code size reduction and readability. How can I simplify this code? if not ( (output != '') and (output != '-') ): Should I use myClassName, MyClassName, myclassname or my_class_name? 
Should I use myFunctionName, MyFunctionName, myfunctionname or my_function_name? Answer: Some points on style. Python has an official style guide which has a lot of information on how to properly lay out your code. Don't put if, elif, while, etc. conditions in brackets (i.e. if a == 2 and not if (a == 2)). This mostly comes from other languages which require this. Use newlines and indentation after if, else, etc. (Don't put the body on the same line as the condition, i.e. avoid if condition: code(); put code() on its own indented line after if condition: in all circumstances.) Don't compare to Booleans. (Change if condition == True to if condition and if condition == False to if not condition.) Use lower_case_with_underscores for variable names (see this). Boolean values should not be negative and should be verbs or verb+noun (is_stopping = True or stopping = True, not dont_stop = False or not_stopping = False; then do if not stopping rather than if not_stopping). Have an entry point to your program. Right now, your code has no entry point; it is just in the global namespace. Do this: #!shebang imports classes function definitions def main(): everything that is not an import, class or function if __name__ == "__main__": # i.e. The module is not being imported main()
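One concrete way to act on the "group lines to avoid code duplication" request from the question: the four near-identical channel-parsing branches (single channel, range, list, all) can collapse into one helper. A sketch in Python 3 — the function name and the choice to raise ValueError are my own, and the OP's special case '0' ("stay on the current channel") is deliberately left out:

```python
import re

def parse_channels(spec, max_channel=14):
    """Parse a channel spec: '3', '3,5,7', '3-9', or 'all'.

    Returns a list of channel numbers, or raises ValueError.
    (Illustrative sketch only; not the OP's exact API.)
    """
    if spec == "all":
        return list(range(1, max_channel + 1))
    if re.fullmatch(r"\d+", spec):
        channels = [int(spec)]
    elif re.fullmatch(r"\d+-\d+", spec):
        lo, hi = sorted(int(p) for p in spec.split("-"))
        channels = list(range(lo, hi + 1))
    elif re.fullmatch(r"\d+(,\d+)+", spec):
        channels = [int(p) for p in spec.split(",")]
    else:
        raise ValueError("invalid channel spec: %r" % spec)
    if any(not 1 <= c <= max_channel for c in channels):
        raise ValueError("channel out of range 1-%d" % max_channel)
    return channels
```

With something like this, the four copies of the switchChannelThread start-up code reduce to a single call: channels = parse_channels(args.channel), followed by one thread construction.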
{ "domain": "codereview.stackexchange", "id": 12571, "tags": "python, beginner, networking" }
Checking substring in 8086 ASM
Question: I have tried the following to check for a substring in a main string in 8086. Is there any shorter way of doing this? My implementation seems lengthy. DATA SEGMENT STR1 DB 'MADAM' LEN1 DW ($-STR1); storing the length of STR1 STR2 DB 'MADAA' LEN2 DW ($-STR2); storing the length of STR2 DATA ENDS CODE SEGMENT LEA SI, STR1 LEA DI, STR2 MOV DX, LEN1 MOV CX, LEN2 CMP CX, DX; comparing main & substring length JA EXIT; if the substring is bigger then there is no chance of finding it in the main string JE SAMELENGTH; if main & sub string both have the same length then we can compare them directly JB FIND; general case (substring length < mainstring length): we can apply our main process SAMELENGTH: CLD REPE CMPSB JNE RED JMP GREEN FIND: MOV AL, [SI]; storing the ascii value of current character of mainstring MOV AH, [DI]; storing the ascii value of current character of substring CMP AL,AH; comparing both characters JE CHECK; JNE NOTEQL NOTEQL: INC SI; if both characters don't match then we point to the next char of main string DEC DX; DX keeps track of how many characters of mainstring are left to process CMP DX, 0000H; checking if there are any characters left in the main string for further comparison JE RED; if no character is left in main string then obviously the substring doesn't exist in main string JMP FIND CHECK: MOV CX, LEN2; CX is used internally for REPE CMPSB. So storing the length of the substring in CX would limit the number of characters for comparison to the exact length of the substring.
; For example to compare between "madam" & "ada" we need to compare *ada* portion of main string with substring ada, no more, no less MOV SP, SI; storing the index of current character of main string so if the following REPE CMPSB find mismatch then the process can be started over from the next character of main string (SEE line 1 of TEMPRED) by going to TEMPRED > FIND ADD SP, 0001H CLD REPE CMPSB JNE TEMPRED JMP GREEN TEMPRED:; substring not found starting from the current character of main string, but it is possible to find match if we start from next character in main string MOV SI,SP; going to the next character of main string (after REPE CMPSB of CHECK segment) DEC DX LEA DI, STR2; reloading substring index in DI (after REPE CMPSB of CHECK segment) JMP FIND; if a character matches but the following substring mismatches in main string then we start over the same process from the next character of main string by going to FIND segment GREEN: MOV BX, 0001H; substring found JMP EXIT RED: MOV BX, 0000H; substring not found JMP EXIT EXIT: CODE ENDS END RET Answer: There are a number of things that could be improved with this code. I hope you find these suggestions helpful. Specify which assembler Unlike C or Python, there are a great many variations in assembler syntax, even for the same architecture, such as the x86 of this code. Generally, it's useful to note which assembler, which target processor and which OS (if any) in the comments at the top of the file. In this case, it looked most like 16-bit TASM, so that's the compiler I used to test this code. Use an ASSUME directive The code would not assemble for me until I added an ASSUME directive. The ASSUME directive doesn't actually generate any code. It simply specifies which assumptions the assembler should make when generating the output. It also helps human readers of your code understand the intended context. 
In this particular case, I added this line just after the CODE SEGMENT declaration: ASSUME CS:CODE, DS:DATA, ES:DATA The CS and DS assumptions are obvious, but the ES assumption is less so. However, the code uses the CMPSB instruction and based on the context, this means an implicit assumption that ES also points to the DATA segment. In my case, (emulated 16-bit DOS), I had to add a few statements to the start of the code to actually load the DS and ES segment registers appropriately. Avoid instructions outside any segment The EXIT code currently looks like this: EXIT: CODE ENDS END RET The problem is that the CODE ENDS closes the CODE segment and the END directive tells the assembler that there is no more code and thus the RET instruction may or may not be assembled, and may or may not actually be placed in the CODE segment. You probably meant instead to do this: EXIT: RET CODE ENDS END Eliminate convoluted branching Avoid needless branching. They make your code harder to read and slower to execute. For example, the code currently has this: JA EXIT JE SAMELENGTH JB FIND SAMELENGTH: CLD REPE CMPSB JNE RED JMP GREEN ; ... code elided GREEN: MOV BX, 0001H; substring found JMP EXIT RED: MOV BX, 0000H; substring not found JMP EXIT EXIT: This could be very much simplified: JA EXIT JB FIND ; fall through to same length SAMELENGTH: XOR BX,BX ; assume string not found CLD REPE CMPSB JNE EXIT INC BX ; indicate that string was found EXIT: There are a number of such simplifications possible with little effort. Know your instruction set The code currently has this set of instructions DEC DX CMP DX, 0000H JE RED However, the DEC instruction already sets the Z flag, so the CMP instruction is not needed. Use REPNE SCASB as appropriate The code at the location FIND is largely the same as would have been done by using REPNE SCASB. The only difference is in which registers are used. The code you have isn't necessarily wrong, but it could probably be shorter. 
Avoid using SP as a general register Just after CHECK, the code saves a copy of the pointer (not an index as the comment falsely claims) to the SP register. However, SP is a stack pointer, so this code can only be used in an environment in which the stack is not used. That could be the case, but it makes the code much less portable to code it that way, especially because the AX or BX registers could just as easily have been used here. Consider using standard length lines The comments in the code are very long and the semicolon is right after the instruction. Neither of these things is necessarily wrong, but they are different from the usual convention, which is to align the semicolon character in some column and to make sure that lines are no more than 72 characters long (some use 78).
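For reference, the overall algorithm this assembly implements is the classic naive substring scan: try each starting position in the main string, compare up to LEN2 bytes (the REPE CMPSB step), and on mismatch restart one position later. A minimal high-level model of that control flow, in Python for readability:

```python
def naive_contains(haystack, needle):
    """Naive substring search mirroring the FIND/CHECK/TEMPRED loop:
    at each position of the haystack, compare the needle byte by byte;
    on mismatch, restart one position further along."""
    n, m = len(haystack), len(needle)
    if m > n:
        return False                        # the JA EXIT case
    for i in range(n - m + 1):
        if haystack[i:i + m] == needle:     # the REPE CMPSB over LEN2 bytes
            return True
    return False
```

Usage mirrors the data segment: the search for 'MADAA' in 'MADAM' fails, while a search for 'ADA' would succeed. Keeping such a reference model around makes it much easier to test a shortened assembly version against known cases.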
{ "domain": "codereview.stackexchange", "id": 38016, "tags": "strings, assembly" }
Why do basis vectors transform covariantly?
Question: I have some trouble understanding the transformation rules of basis vectors. My question/goal is to obtain a mathematical derivation to see why basis vectors transform covariantly and other vectors (components) contravariantly. I have three questions. Question 1: why can't I use the vector component transformation equation (with $R_{ji}$) to transform the components of the old basis vectors into the new ones? It should work for any vector, right? Question 2: why is $R_{ji}$ sometimes expressed as $\frac{\partial x^{'i}}{\partial x^j}$, as I have read in some sources? Question 3: since the change of basis should be independent of any particular vector, why do I still have vector component terms in my derivation? As I have learned, the components of a vector transform under a coordinate transformation as \begin{equation} V'^i=\sum_j R_{ji}V^j \end{equation} where $R_{ij}$ is a rotation matrix. Now of course the vector V itself is a geometrical object and independent of coordinates. The vector should be defined as follows: \begin{equation} \boldsymbol{\vec{V}}=V^i\boldsymbol{\hat{e}_i} \end{equation} So I tried to express the vector in both old coordinate terms and new ones: \begin{equation} \boldsymbol{\vec{V}}=V^i\boldsymbol{\hat{e}_i}=V'^i\boldsymbol{\hat{e}^{'}_i} \end{equation} \begin{equation} \sum_j R_{ji}V^j \boldsymbol{\hat{e}^{'}_i}=V^i\boldsymbol{\hat{e}_i} \end{equation} And therefore \begin{equation} \boldsymbol{\hat{e}^{'}_i}=\frac{V^i}{\sum_j R_{ji}V^j }\boldsymbol{\hat{e}_i} \end{equation} This seems strange, since the new basis vector should not depend on any particular vector. When I try to set the components of the vector equal to one in order to just leave us with the basis vectors, I find the following expression: \begin{equation} \boldsymbol{\hat{e}^{'}_i}=\frac{1}{\sum_j R_{ji}}\boldsymbol{\hat{e}_i} \end{equation} Is this equal to $R_{ij}$, and would this imply covariant transformation?
I have a strong feeling that my derivation is very wrong, since my linear algebra knowledge is lacking. I hope someone can answer (some of) these questions. Thank you in advance! -Jesse Answer: First you need to decide whether you want to use sum symbols or whether you want Einstein summation convention (implicit sums). The vector decomposition you have written down is actually Einstein: $$\vec V = V^i \hat e_i := \sum_i V^i \hat e_i$$ Second, you should keep upper indices upper, and lower indices lower, on both sides. Rather than writing (wrongly, so to say) $$V^{\prime i}=\sum_j R_{ji}V^j$$ you should write $$V^{\prime i}=\sum_j R^i_jV^j$$ because then $i$ is upper on both sides. The upper or lower position indicates covariance/contravariance, which makes checking the correctness/consistency of your calculations much easier. Third, in the equation $$\sum_j R_{ij} V^j \hat e^{\prime}_i = V^i \hat e_i$$ you have combined the former mistakes in the most unfortunate ways. The correct (verbose) expression is $$\sum_{i}\sum_{j} R^i_j V^j \hat e^{\prime}_i = \sum_{i} V^i \hat e_i$$ That is why you can't just divide by the sum over $j$ and rearrange it to the other side. But what you actually can derive from this equation is the transformation behavior of the basis vectors. If you compare the left hand side and the right hand side (maybe by swapping/renaming indices $i,j$ on the left hand side, to make it clearer), you see that $$\sum_j R^j_i \hat e^{\prime}_j = \hat e_i$$ must be satisfied in order for the former equation to hold true. Note, however, that in the other transformation direction you have to use the inverse transform $$\hat e^{\prime}_j = \sum_i (R^{-1})^i_j \hat e_i$$ where $R^{-1}$ is defined such that $R^{-1}R=I$ (I=unity matrix), or $$\sum_j (R^{-1})^i_j R^j_k=\delta^i_k$$ Only for orthogonal transforms is the inverse of $R$ equal to its transpose.
As to question number 2, this is usually only used when the transformation is nonlinear (although you may use it for linear transforms as well). Since $$x^{\prime i} = x^{\prime i}(x)$$ the derivative of any function $f(x^{\prime})$ can be expressed by the chain rule of differentiation: $$\frac{\partial f}{\partial x^j}=\sum_i \frac{\partial f}{\partial x^{\prime i}}\frac{\partial x^{\prime i}}{\partial x^j}=\sum_i \frac{\partial f}{\partial x^{\prime i}}R^i_j$$
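The two rules can also be checked numerically: if the components pick up $R$, the basis must pick up $R^{-1}$ so that the geometric vector $V^i\hat e_i$ stays unchanged. A small sanity check in plain Python (2-D rotation, where the inverse happens to be the transpose):

```python
import math

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s], [s, c]]        # rotation matrix R^i_j
R_inv = [[c, s], [-s, c]]    # inverse = transpose for a rotation

e = [[1.0, 0.0], [0.0, 1.0]]  # old basis vectors, in a fixed background frame
V = [2.0, -3.0]               # components V^i in the old basis

# contravariant rule: V'^i = sum_j R^i_j V^j
V_new = [sum(R[i][j] * V[j] for j in range(2)) for i in range(2)]

# covariant rule: e'_j = sum_i (R^-1)^i_j e_i
e_new = [[sum(R_inv[i][j] * e[i][k] for i in range(2)) for k in range(2)]
         for j in range(2)]

# the geometric vector V^i e_i must be the same in both decompositions
old = [sum(V[i] * e[i][k] for i in range(2)) for k in range(2)]
new = [sum(V_new[i] * e_new[i][k] for i in range(2)) for k in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(old, new))
```

The assertion passes precisely because $\sum_i (R^{-1})^k{}_i R^i{}_j = \delta^k_j$, i.e. the two transformations cancel in the sum $V^{\prime i}\hat e^{\prime}_i$.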
{ "domain": "physics.stackexchange", "id": 77498, "tags": "coordinate-systems, vectors, linear-algebra, covariance" }
Is the "capacity to do work" of a body equivalent to the concept of "resistance to change in motion"?
Question: For any body we have $E = mc^2$, where $E$ is the capacity of the body to do work on its surroundings and $m$ is the resistance the body offers to being moved from its state of rest. Therefore, the capacity to do work is proportional to the body's inertia. It would seem at first glance that these properties should be unrelated. Why can't we have a feather-light body which is very potent in its capacity to perform work on its surroundings? Answer: The premise isn't quite correct; we could write $E=\sqrt{(mc^2)^2+(pc)^2}+V+E_0$, where $p$ is the momentum, $V$ is potential energy, and $E_0$ is a constant that sets the reference zero. At small speed $v$, this simplifies to $E=mc^2+\frac{1}{2}mv^2+V+E_0$. Thus, we could do work by converting matter to energy, by slowing the body, or by allowing the body to move to a lower potential level. Electrons, for example, are light but can provide work when allowed to move in an electric field.
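The small-speed limit quoted in the answer can be verified numerically. The sketch below uses illustrative numbers (1 kg at 3 km/s, so $v/c\approx10^{-5}$) and rewrites $E-mc^2$ as $(pc)^2/(E+mc^2)$ to avoid subtracting two nearly equal floating-point numbers:

```python
import math

c = 299_792_458.0      # speed of light, m/s
m = 1.0                # mass, kg
v = 3000.0             # speed, m/s -- fast, but still v << c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
p = gamma * m * v                              # relativistic momentum

E = math.sqrt((m * c**2) ** 2 + (p * c) ** 2)  # total energy, no V or E0 terms
# kinetic part: E - mc^2 = (pc)^2 / (E + mc^2), algebraically identical
# but numerically stable against cancellation
ke_exact = (p * c) ** 2 / (E + m * c**2)
ke_newton = 0.5 * m * v**2                     # Newtonian kinetic energy

rel_diff = abs(ke_exact - ke_newton) / ke_newton
assert rel_diff < 1e-9    # the limits agree to roughly (v/c)^2
```

The leftover relative difference is of order $\tfrac{3}{4}(v/c)^2$, the first relativistic correction to the kinetic energy.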
{ "domain": "physics.stackexchange", "id": 97822, "tags": "general-relativity, special-relativity" }
Transmission of Gaussian Beam Through Graded-Index Slab
Question: The $ABCD$ matrix of a glass graded-index slab with refractive index $n(y)=n_0(1-\frac{1}{2}\alpha^{2}y^{2})$ and length $d$ is $A=\cos(\alpha d)$, $B=\frac{1}{\alpha}\sin(\alpha d)$, $C=-\alpha \sin(\alpha d)$, $D=\cos(\alpha d)$ for paraxial rays along the z axis. Usually, $\alpha$ is chosen to be sufficiently small so that $\alpha^{2}y^{2} << 1$. A Gaussian beam of wavelength $\lambda_0$, waist radius $W_0$ in free space, and axis in the z direction enters the slab at its waist. How can I use the $ABCD$ law to get an expression for the beam width in the $y$ direction as a function of $d$? Answer: The ABCD law can be used for Gaussian beam propagation using the complex beam radius $q$. Defining $\frac{1}{q} = \frac{1}{R}-i\frac{2}{kW^2}$, $R = R(z)$ being the radius of curvature of the beam and $W = W(z)$ the halfwidth at point $z$ and $k = 2\pi/\lambda_0$, the complex beam radius transforms as $q \to \frac{Aq+B}{Cq+D}$. In your case the beam enters the medium at its waist, so the radius of curvature there is infinite and $q = ikW_0^2/2$ at the front of the medium.
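Putting the ABCD law and the quoted $q$ together gives $W(d)$ numerically. A sketch (Python) following the conventions stated in the answer; note it uses the free-space $k=2\pi/\lambda_0$ exactly as defined above, which glosses over the slab's refractive index $n_0$ — check that against your course's convention:

```python
import math

def width_in_slab(d, W0, lam0, alpha):
    """Beam half-width W(d) inside the GRIN slab via the ABCD law,
    with 1/q = 1/R - i*2/(k W^2), k = 2*pi/lam0, and q = i*k*W0^2/2
    at the entrance (beam waist)."""
    k = 2 * math.pi / lam0
    q0 = 1j * k * W0**2 / 2
    A, B = math.cos(alpha * d), math.sin(alpha * d) / alpha
    C, D = -alpha * math.sin(alpha * d), math.cos(alpha * d)
    q = (A * q0 + B) / (C * q0 + D)          # the ABCD law for q
    inv_q = 1 / q
    return math.sqrt(-2 / (k * inv_q.imag))  # W from Im(1/q) = -2/(k W^2)
```

The width oscillates with period $\pi/\alpha$: at $\alpha d = \pi/2$ it reaches $2/(k\alpha W_0)$, and at $\alpha d = 2\pi$ the matrix is the identity and the waist $W_0$ is recovered, consistent with the lens-like refocusing of a GRIN medium.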
{ "domain": "physics.stackexchange", "id": 4815, "tags": "homework-and-exercises, optics, waves, photons, visible-light" }
Design of a N-ary tree
Question: A friend of mine and I recently started a project and at this moment we are writing a common library. He began writing an N-ary tree and he is convinced that he has the best design, with which I do not agree. I want to convince him that his design of the N-ary tree is very bad and that we will face problems with it. Due to my lack of experience in complex projects, I can't convince him with my arguments. Here is the current design and implementation: template <typename T> class Tree { private: T data; vector< Tree<T> > successors; public: Tree(T x) { data = x; } T root() { return data; } vector< Tree<T> > succ() { return successors; } void addChild(Tree<T> t) { successors.push_back(t); } void member(T element) { if (element == data) return true; for (int i = 0; i < successors.size(); i++) { if(successors[i].member(element)) { return true; } } return false; } void fromRightParentToLeft() { if (successors.empty()) { cout << data << "\n"; return; } if (successors.size() == 1) { successors[0].fromRightParentToLeft(); cout << data << "\n"; } else { for (int i = succ().size(); i > 1; i--) { succ()[i-1].fromRightParentToLeft(); } cout << data << "\n"; succ()[0].fromRightParentToLeft(); } } }; So far I can see a few problems: We will face problems when he implements a method for node deletion, because it is somehow wrong for a node to have the ability to delete itself. He traverses the tree in an inorder-ish manner, which is pointless when dealing with general N-ary trees. Can someone point out other potential traps with this approach? The tree must meet the following requirements: Interface: node addition, node deletion, search for a particular node, basic tree traversal. Other: It should be: easily maintainable, with fast performance. In the general case the tree will have ~10k nodes, which will contain integers. Answer: Some things: By definition, an N-ary tree is a tree where any node must have no more than N children.
However, you never allow the user to define that N, and nowhere do you specify the invariant that for any node, said node must have at most N children. Your tree is not an N-ary tree, it's just a "tree" (with an arbitrary number of children per node). You're returning bool values in void member(T). You should decouple traversal from the tree itself. Instead of thinking about traversal like tree.traverse() you should think like traverse(tree). Reason: to separate and clear up responsibilities. A data structure holds data and nothing else. If you implement traversal behavior inside the tree, the user implementing his own would break any sort of consistency, unless he modifies the structure itself or extends it, which is not exactly desired in this kind of situations (class MySpecialPostOrderTraversalTree : Tree). Furthermore, separating the behavior from the data structure leaves room for other extensions (e.g. a generic traversal algorithm that works on your tree, and on some other kinds of trees, defined by your user). You should definitely remove the console output from your tree, even if you leave traversal there. At least, you should allow your user to specify a method to call for each visited node. If he wants to print the content of the nodes, he can do it himself. You don't want to give someone a tree that randomly starts to print out stuff to stdout. Also see the concept of mapping functions over collections. You keep working with values. For example, succ() returns a copy of the vector of children. On large trees this will have a negative effect, constantly copying vectors of elements which contain vectors of elements (and so on) is not exactly a good idea. Look into pointers, and base your tree on them. Start with std::unique_ptr and std::shared_ptr, don't use naked pointers. See janos's answer for naming suggestions that I feel there's no need to duplicate here :)
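The "decouple traversal from the tree" point can be sketched compactly. Python is used here for brevity; the same shape carries over to C++ as a free function template taking a visitor (or returning a range):

```python
class Node:
    """Pure data: a value and a list of children, nothing else --
    no traversal, no printing, no search baked into the structure."""
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def preorder(node):
    """Traversal as a free function (here a generator), decoupled from
    the data structure: each caller decides what to do with the values."""
    yield node.value
    for child in node.children:
        yield from preorder(child)

def member(tree, value):
    """Search expressed on top of the generic traversal."""
    return any(v == value for v in preorder(tree))
```

Printing then becomes the caller's choice (for v in preorder(tree): print(v)) rather than a side effect hard-wired into the tree, and new traversal orders can be added without touching Node at all.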
{ "domain": "codereview.stackexchange", "id": 10940, "tags": "c++, tree, library" }
Finding Safety Factor for Plastic Snap Fit Cantilever
Question: I'm trying to get a safety factor of $n\geq2$ for the root of my snap-fit beam (pic at bottom of the post). I have gone through the calculations to get the SF at the root's edge from the geometric and material properties (using a variety of plastics), and my SF seems unnaturally small. I don't have a lot of experience with this yet, so I wanted to check here to make sure I didn't forget something. If someone would be kind enough to check my work, I would greatly appreciate it. Is my work incorrect, or have I just overestimated the flexibility of my materials? Find: Safety factor at the root edge, $n_e$ (I also found the SF at the root center, $n_c$, from transverse shear, but this isn't critical since the stress from the bending moment is zero there.) Geometric properties: Beam length: $l=6.40~\text{mm}$ Maximum deflection: $y_{d,\text{max}}=1.20~\text{mm}$ Beam depth: $h=1.60~\text{mm}$ Beam width: $b=7.31~\text{mm}$ Root fillet radius: $r_f=0.48~\text{mm}$ Material properties (ex.) SABIC LNP STAT-KON 5E003M: $S_y=50~\text{MPa}$, $E=9060~\text{MPa}$ (this is a fairly brittle plastic, but I got very small SFs for many other plastics as well). Equations (beam root): $$I=\frac{1}{12}bh^3~~~~~~~~~~~~~~P=\frac{6y_dEI}{x^2(x-3l)}=-\frac{3y_{d,\text{max}}EI}{l^3}~~~~~~~~~~~~~~y_{d,\text{max}}=y_{d,x=l}$$ The equation for $P$ is based on the cantilever diagram at the bottom of the post. Variable shear force and bending moment: $$V(x)=P~~~~~~~~~M(x)=P(l-x)$$ At $x=0$: $$V_{x=0}=-\frac{3y_{d,\text{max}}EI}{l^3}~~~~~~~~~M_{x=0}=-\frac{3y_{d,\text{max}}EI}{l^2}$$ Normal stress at the root edge: $$\sigma=\frac{My}{I}~~~~~~y=\frac{1}{2}h~~~~~~\sigma_\text{max,nom}=\frac{Mh}{2I}~~~~~~\sigma_\text{max}=K_t\sigma_\text{max,nom}$$ The stress concentration factor $K_t$ can be found from a chart (not reproduced here); from my design, $K_t$ turns out to be $\approx1.40$.
Shear stress at root center: $$\tau_\text{max,nom}=\frac{3V}{2A}~~~~~~~~~~~\tau_\text{max}=\tau_\text{max,nom}$$ Von Mises stresses: $\sigma_c'=\sqrt{3\tau_\text{max}^2}$ at root center, $\sigma_e'=\sigma_\text{max}$ at root edge. Safety factor: $$n_c=\frac{S_y}{\sigma_c'}~~~~~~~~~~~~~~~~n_e=\frac{S_y}{\sigma_e'}$$ Results (beam root): For 5E003M, $n_e=0.056$ and $n_c=0.725$. As you can see, these safety factors are terrible! Besides changing the material, I could decrease the width or depth of the beam, but the length must stay the same. Any insight would be appreciated, thanks! :D Answer: This is a suggested procedure to achieve your goal. I'll simplify your model to a cantilever beam with a constant cross-section as shown below, and assume the critical section is located at the root of the fillet. The first step is to determine the maximum deflection $\delta_v$. The second step is to find the equation for the deflection. For this exercise, I'll ignore the shear deformation, but apply the method developed by Timoshenko to account for "large deflection". In his book, "Mechanics of Materials", co-authored with Gere, Timoshenko provided a table to ease pinpointing the correct equation, as shown below. In the table, column (m) is the numerical value of $\dfrac{PL^2}{EI}$, and column (n) is the numerical value of $\dfrac {\delta_v}{L}$. The equation for the deflection is simply (m)/(n). $(m)$ = $\dfrac{PL^2}{EI}$, $(n)$ = $\dfrac {\delta_v}{L}$, $\dfrac{(m)}{(n)}$ = $\dfrac{PL^2}{EI}$/$\dfrac {\delta_v}{L}$. Let $\lambda = \dfrac{m}{n}$ and rearrange the terms: $\delta_v = \dfrac {PL^3}{EI \lambda}$, or $P = \dfrac{EI \lambda\delta_v }{L^3}$. The third step is to check the shear stress: At this point, I'll conservatively introduce the form factor to account for the escalated shear stress due to shear deformation.
For a rectangular section, the form factor $f_s$ = 6/5 = 1.2, so $\sigma = f_s\cdot\dfrac{3P}{2A} \leq \dfrac{F_y}{n}$, where $n$ is the desired safety factor. The fourth step is to determine the bending stress. Assume elastic behavior, $\sigma_b = \dfrac{6M}{bh^2} \leq \dfrac{F_y}{n}$, or assume plastic behavior, $\sigma_b = \dfrac{4nM}{bh^2} \leq F_y$. The last step is to determine $P$: Since $M = P*a$, plug $M$ into the two equations above, and you will get the $P$ that satisfies the limit of the bending stress with the desired safety factor. However, you also need to back-check/compare the $P$ derived from steps two (deflection) and three (shear stress) before making the conclusion. If you still can't get a satisfactory force with the desired safety factor, you will have to increase the depth/thickness of the member or adjust the deflection limit. Hope this helps.
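As a cross-check on the original arithmetic, the question's root-edge safety factor can be reproduced in a few lines. This is a minimal sketch using only the formulas and values quoted in the question (variable names are mine), not a design tool:

```python
# Values copied from the question; units are mm, N, and MPa.
l, y_max, h, b = 6.40, 1.20, 1.60, 7.31   # beam length, max deflection, depth, width
E, Sy, Kt = 9060.0, 50.0, 1.40            # modulus, yield strength, stress concentration

I = b * h**3 / 12                 # second moment of area [mm^4]
P = 3 * y_max * E * I / l**3      # tip force producing the imposed deflection [N]
M = P * l                         # bending moment magnitude at the root, |M(x=0)| [N*mm]
sigma_max = Kt * M * h / (2 * I)  # peak bending stress at the root edge [MPa]
n_e = Sy / sigma_max              # safety factor at the root edge
print(round(n_e, 3))              # 0.056, matching the question's result
```

So the arithmetic in the question checks out; the tiny safety factor comes from the large imposed deflection, not a calculation error.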
{ "domain": "engineering.stackexchange", "id": 4181, "tags": "beam, plastic" }
Audio Signal Noise Filter Problem
Question: I'm currently working with audio signals and have a problem: C = A*B + N, where C = the recorded signal from the microphone, consisting of: A = known music file data played on speakers next to the microphone; B = some convolution applied to the recorded A-sound due to the speaker->mic round trip (I mean the recorded signal won't be 100% the same as the audio data from the file before it is played through the speaker and recorded by the mic; is this an impulse response?); N = some additional noise sounds recorded by the microphone. My goal: an approximate estimate of whether there is a signal N and how loud it is. I don't have a need for accurate data! Additional info: I'm working with Apple's vDSP API. I have cross-correlated the signals A and C, so I have the time window in which the signals overlap. In the overlapping window, I have both signals in the time and frequency domain. Currently I don't know whether, for example, a Wiener filter is the right approach, or whether I'm capable of applying one with my known parameters (is a known noise signal required? Or the impulse response of the environment?). I tried to apply a Wiener deconvolution by dividing C/A in the frequency domain, with no success. Once more: I don't need accurate data, just a rough guess of how much N is in the signal C. Actually, an SNR-like measure would be sufficient. Answer: Wiener filtering is one approach. It might even be the best approach. A Wiener filter is designed to minimize the noise (in the least squares sense) and invert the effect of the impulse response, given a known signal and a signal that is known to be tainted with noise and an impulse response. Once you have a Wiener filter, you can then compare the amplitude of the filtered signal with the amplitude of the direct signal to estimate the noise (I think).
Now I've never implemented a Wiener filter myself, so I will defer to a textbook with a chapter dedicated to the subject: Advanced Digital Signal Processing (Electrical and Computer Engineering) by Glenn Zelniker and Fred J. Taylor. It's out of print, but I'm sure you can pick up a used copy for cheap somewhere. I suggest it because it's more mathematical than engineering in its approach, and that might appeal to you. Many other textbooks have info on the subject. If a more ad-hoc approach suits you, here is a suggestion. I've never tried it, but it might work: Create a filterbank, $F$ (a properly windowed FFT will work), to analyze both the known music file A and the input C. You will also need some measurement function, M. I would suggest $M(x) = | x |$, rather than $M(x) = x^2$, but both will work, and you may also want to do some smoothing over time. By comparing the results of $M(F(C))$ to $M(F(A))$ when the noise is low, you should be able to determine the effect of B. Of course, you may not know that the noise is low, in which case you'll have to get a bit more clever, creating some estimate based on some statistics. The goal, however, is to get to a point where you can predict the output of $M(F(C))$ from $M(F(A))$, to within some bounds. The lower the bounds, the better you will be at detecting the noise. To estimate the noise, you will compare how far above those bounds the signal has actually gone. This will work well for band-limited noise, but for quieter, broadband noise, it may not work well. In that case, you may want to create an aggregate statistic based not just on how far above the bounds a signal has gone, but on how many bounds have been crossed, or simply apply your measurement function directly to your unfiltered signal: $M(C)$ vs $M(A)$.
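The ad-hoc filterbank idea can be sketched in a few lines of numpy. Everything below is illustrative: the signals are synthetic, the "room response" B is reduced to a pure gain, and a single least-squares gain stands in for the per-band statistics described above:

```python
import numpy as np

def band_energy(x, n_fft=1024, hop=512):
    """M(F(x)): magnitudes of a Hann-windowed FFT filterbank."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 8000
t = np.arange(fs) / fs
A = np.sin(2 * np.pi * 440 * t)                                   # "known music file"
C = 0.8 * A + 0.3 * np.random.default_rng(0).standard_normal(fs)  # gain + noise

MA, MC = band_energy(A), band_energy(C)
gain = (MC * MA).sum() / (MA ** 2).sum()        # least-squares fit C ~ gain * A
excess = np.maximum(MC - gain * MA, 0.0)        # energy not explained by the music
noise_ratio = excess.sum() / (gain * MA).sum()  # rough SNR-like measure
```

For the clean case C = 0.8*A the ratio collapses to numerically zero, while with the injected noise it comes out far larger, which is all the question asks for.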
{ "domain": "dsp.stackexchange", "id": 593, "tags": "filters, audio, noise, denoising, deconvolution" }
Why does a precessing wheel remain horizontal, instead of flipping?
Question: Suppose I have a wheel with an axle, such that one side of the axle is tied to a rope. I'm initially holding the wheel in such a way that the radius vectors of the wheel are perpendicular to a board. I release the other side of the axle. It is obvious that gravity produces a torque that goes into the board and turns the wheel in such a way that its face is now facing downwards. Now comes the non-intuitive second part. I've given the wheel some spin in the beginning. This means the wheel has some initial angular momentum. Now I release it. Gravity would apply a torque into the board, which would induce a small change in angular momentum. The resultant angular momentum would be somewhere between the direction of the torque and the initial angular momentum, which is sideways. As gravity keeps trying to produce the torque, the direction of the torque changes as the wheel turns ever so slightly. This causes the angular momentum to change again. Hence the angular momentum keeps on changing, causing the wheel to rotate horizontally, something we call precession. Now I seem to understand clearly why the angular momentum chases the torque, causing the wheel to turn. What I don't understand is how the wheel manages to remain horizontal. Let us consider the scenario again. Gravity induces an angular momentum into the board. However, there is also some angular momentum sideways due to the spin. Shouldn't the wheel go down while spinning at the same time? Does the wheel go down only when the total angular momentum and torque are in the same direction, as in the case of the non-spinning wheel? Moreover, in the spinning case, the angular momentum chases the torque but never catches up. Is that why it remains horizontal? Can anyone provide me with an intuitive explanation of why precession prevents the wheel from flipping under the torque due to gravity? Answer: Shouldn't the wheel go down while spinning at the same time? It does.
The axle tilts so that the wheel (and Earth's) gravitational potential energy decreases (centre of mass of wheel moves down towards the Earth) to provide the extra kinetic energy of the wheel due to the precession. Watch the video Veritasium - Gyroscopic Precession from $3:05$ at a speed of $0.25\times$ and you will see that the axle of the wheel which is horizontal just before release becomes inclined after release.
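The size of that dip can be estimated with the standard gyroscope result $\Omega = mgd/(I\omega)$ for the precession rate; all numbers below are made up for illustration and are not from the question:

```python
import math

# Illustrative numbers (assumed): a 2 kg wheel of radius 0.20 m, spun at
# 5 rev/s, with its centre of mass 0.15 m from the rope support.
m, r, d, g = 2.0, 0.20, 0.15, 9.81
I_spin = 0.5 * m * r**2             # solid-disk approximation [kg m^2]
omega = 2 * math.pi * 5.0           # spin rate [rad/s]

Omega = m * g * d / (I_spin * omega)  # precession rate [rad/s]

# The centre of mass now circles at radius d; the kinetic energy of that
# motion is paid for by a small drop in height, as the answer describes.
v_cm = Omega * d
drop = 0.5 * m * v_cm**2 / (m * g)  # height drop [m], of order millimetres
```

With these numbers the dip comes out to a few millimetres, which is why it is easy to miss unless the video is slowed down.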
{ "domain": "physics.stackexchange", "id": 83948, "tags": "rotational-dynamics, angular-momentum, torque, gyroscopes, precession" }
Can we perform an n-d range search over an arbitrary box without resorting to simplex methods?
Question: Suppose I have some set of points in d-dimensional space, each with some mass. Our problem size will be the number of points in this set. After some roughly (within polylog factors) linear initialisation using roughly linear storage, I would like to know the total mass of all the points that fall within various queried subspaces in polylogarithmic time. If our queries are all axis-parallel boxes, i.e. sets of points defined by a cartesian product of intervals on each axis, we can apply some standard, easily googleable range searching methods and achieve this without too much difficulty. If our queries are all simplices, there are currently no known methods which satisfy both these criteria: either we use polynomial space and initialisation to achieve a polylog query time, or we use roughly linear space and have sublinear, but not logarithmic, query times. What if our queries are all boxes, but each is rotated in some way? We can define such a box in a few ways, but for the sake of concreteness suppose I give you a sequence of sets of hyperplanes, each with exactly one parallel partner, intersecting all others orthogonally, defining some boxy subspace between them. Is there a way of solving this slightly simpler problem with roughly linear initialisation and storage but polylogarithmic queries? Alternatively, if I were to give you a method of doing this, would you be able to use it to solve the simplex case of this problem in a similarly easy way? Answer: No. Rotated-box queries and simplex queries are both generalizations of slab queries, where a slab is the volume between two parallel hyperplanes. Most lower bound proofs for simplex range searching actually assume that all query simplices are slabs of constant thickness. In particular, Chazelle [2] proved the following theorem. Let $P$ be a random set of $n$ points, generated independently and uniformly in the unit hypercube $[0,1]^d$.
With probability at least $1/2$, any data structure of size $s$ that answers queries in $P$ for slabs of width $1/24$ in time $t$ in the semigroup arithmetic model must satisfy the inequality $st^d = \Omega((n/\log n)^d)$, or $st^2 = \Omega(n^2)$ when $d=2$. The constants $1/2$ and $1/24$ are not particularly important here; different choices merely change the constant hidden in the $\Omega(\,)$ notation. Thus, if you are allowed only linear space, your query time must be $\Omega(n^{1-1/d} / \operatorname{polylog} n)$; on the other hand, if you require polylogarithmic query time, you must use $\Omega(n^d / \operatorname{polylog} n)$ space. More recently, Arya, Mount, and Xia [1] proved nearly identical lower bounds (under some reasonable assumptions about the underlying semigroup) for the even simpler case of halfspace queries, where a halfspace consists of all points on one side of a single hyperplane. tl;dr: Don't rotate your boxes. [1] Sunil Arya, David M. Mount, and Jian Xia. Tight lower bounds for halfspace range searching. Discrete & Computational Geometry 47:711–730, 2012. [2] Bernard Chazelle. Lower bounds on the complexity of polytope range searching. Journal of the AMS 2(40):637–666, 1989.
{ "domain": "cstheory.stackexchange", "id": 2156, "tags": "ds.algorithms, ds.data-structures, cg.comp-geom" }
Drug and antibiotic resistance in organisms
Question: Of gram-positive and gram-negative bacteria, which has the higher resistance to drugs or antibiotics? How does an organism gain the gene for drug or antibiotic resistance? Kindly explain it in detail. Answer: The major effect of drugs or antibiotics on a bacterium is on its cell wall. Gram-positive bacteria have a thick outer layer of a polymer called peptidoglycan (formed by repeated units of a disaccharide: NAG-NAM), which has many cross-linkages. Beta-lactam antibiotics such as penicillin, cephalosporin, etc. competitively inhibit an enzyme called transpeptidase (also called penicillin-binding protein), which forms the cross-linkages in peptidoglycan, thus weakening the cell wall. As a result, the osmotic pressure now becomes enough to cause lysis of the cell wall, and hence the bacterium dies. Also, drugs such as vancomycin inhibit the formation of the peptidoglycan polymer. On the other hand, gram-negative bacteria have an outer layer of lipopolysaccharide (endotoxin) beneath which lies a thin cell wall of peptidoglycan. Hence beta-lactam antibiotics do not have much of an effect on gram-negative bacteria. To conclude, gram-positive bacteria are much more sensitive to beta-lactam antibiotics and lysozyme than gram-negative bacteria. It is also worth mentioning that gram-negative bacteria are more sensitive to the lytic action of antibodies, because the lipopolysaccharide in their outer membrane produces a strong immune response (read about sepsis for more details on endotoxin). Hope this is helpful.
{ "domain": "biology.stackexchange", "id": 5868, "tags": "bacteriology, antibiotics, antibiotic-resistance" }
How hard is this combinatorial optimisation problem?
Question: Suppose we have multiple intervals $R_1,R_2,...,R_i$ of non-negative integers. These intervals may overlap, and we use $R_h(\mathrm{median})$ to denote the median integer in the $h$-th interval $R_h$, and $x_R$ to denote the interval that the integer $x$ comes from. It is possible that two integers $x$ and $y$ are equal but come from different intervals (i.e., $x=y$ but $x_R\neq y_R$). Suppose we have a threshold $T$, a positive integer $N$, and a function $f(x)$ whose input is an integer $x$, defined as: $f(x)=x$ if $x \leq N$; $f(x)=\max(0,2N-x)$ if $x>N$. I want to choose integers from the intervals $R_1,R_2,...,R_i$ to maximise the cumulative scores $f()$ without exceeding the threshold $T$. There are two constraints: Chosen integers must come from different intervals. Given two chosen integers $x$ and $y$, if $x_R(\mathrm{median}) \leq y_R(\mathrm{median})$, then $$ \frac{x_R(\mathrm{median})}{x} \leq \frac{y_R(\mathrm{median})}{y}. $$ The objective is maximising the following: $$ \sum_{x \in R_1,R_2,...,R_i} p_x f(x) $$ subject to: $p_x\in\{0,1\}$, $\sum_{x \in R_1,R_2,...,R_i} xp_x \leq T$, $\forall (x, y)$, if $p_x=p_y=1$, then $x_R \neq y_R$, $\forall (x, y)$, if $p_x=p_y=1$ and $x_R(\mathrm{median}) \leq y_R(\mathrm{median})$, then $$ \frac{x_R(\mathrm{median})}{x} \leq \frac{y_R(\mathrm{median})}{y}. $$ I have proved that this problem is weakly NP-hard by reduction from the knapsack problem when each range contains exactly one integer. But I think my problem should not be just weakly NP-hard. I know this problem can be reduced to the Knapsack Problem with conflict graphs, but not vice versa. Is it possible to perform a reduction from an existing strongly NP-hard problem to the one described above? Answer: This problem can be solved with dynamic programming in pseudo-polynomial time (proof below). Therefore, it is not possible to show that this problem is strongly NP-hard (unless P=NP).
First, let's restate the problem: Given: values $N$ and $T$ and positive integer intervals $R_1$, $R_2$, $\ldots$, and $R_n$ Output: the largest possible value of $\sum_{i=1}^nf(x_i)$ (where $f$ is defined in terms of $N$ as described above) such that the variables $x_1, x_2, \ldots, x_n$ satisfy all of the following: $x_i \in R_i$ for $i = 1,2,\ldots,n$ $\sum_{i=1}^nx_i \le T$ for all pairs $i,j$, if $\text{median}(R_i) \le \text{median}(R_j)$ then $\frac{\text{median}(R_i)}{x_i} \le \frac{\text{median}(R_j)}{x_j}$ We can assume without loss of generality that the $R_i$s are sorted by median. That is, assume that $\text{median}(R_1) \le \text{median}(R_2) \le \ldots \le \text{median}(R_n)$. If we do, the final condition can be simplified. It is both necessary and sufficient that $\frac{\text{median}(R_1)}{x_1} \le \frac{\text{median}(R_2)}{x_2} \le \ldots \le \frac{\text{median}(R_n)}{x_n}$. (Note: this is not precisely correct. Assuming no intervals have the same median, this is correct, but if multiple intervals have the same median it is actually also necessary that the $x_i$ values chosen for those intervals have to all be the same. This makes things more complicated without really making the problem any harder, so I'm going to ignore this possibility for the rest of this answer.) With that done, we can define the subproblems that we will use for our dynamic programming solution. Given: all the inputs for the problem as well as (1) an index $k$ with $1 \le k \le n$, (2) a threshold $t$ with $0 \le t \le T$, and (3) a value $X \in R_k$ Output: the largest possible value of $\sum_{i=1}^kf(x_i)$ such that the variables $x_1, x_2, \ldots, x_k$ satisfy all of the following: $x_i \in R_i$ for $i = 1,2, \ldots, k$ $x_k = X$ $\sum_{i = 1}^kx_i \le t$ for $i = 1,2, \ldots, k-1$, it is the case that $\frac{\text{median}(R_i)}{x_i} \le \frac{\text{median}(R_{i+1})}{x_{i+1}}$ Let $S[k][t][X]$ denote the output of this subproblem. 
Note that the overall problem's answer can be expressed in terms of several subproblems: the overall answer is $\max_{X \in R_n}S[n][T][X]$. Note also that the number of subproblems is small (polynomial in the size of the numbers in the input of the problem). In particular, $k$ takes on $n$ possible values, $t$ takes on $T+1$ possible values, and $X$ takes on $\sum_{i=1}^n|R_i|$ possible values. Each of these is polynomial in the numbers used as inputs to the problem. Therefore, the overall number of subproblems is also polynomial in the numeric value of the input. Thus, provided we can show that each subproblem can be solved in pseudo-polynomial time, it will also be the case that all the subproblems together can be solved in pseudo-polynomial time, and therefore that the overall problem can be solved in pseudo-polynomial time. In order to actually solve the subproblems, we will use dynamic programming. In other words, we will build up a table of values for $S[k][t][X]$. We can compute these subproblems in order primarily by $k$. In other words, we can assume that by the time we are computing $S[k][t][X]$, the value of $S[k'][t'][X']$ has already been computed provided $k' < k$. So how do we actually compute these values? The base case, where $k=1$, is easy. $S[1][t][X]$ is defined as the largest possible value of $\sum_{i=1}^kf(x_i) = f(x_1)$ where $x_1 = X \le t$. Therefore, provided $X \le t$, we have that $S[1][t][X] = f(X)$. Otherwise, no such value is possible and so we can set $S[1][t][X]$ to $-\infty$ to indicate that no real value is achievable. Otherwise, let's say we're trying to compute $S[k][t][X]$ for some $k > 1$. 
This value is defined as the largest possible value of $\sum_{i=1}^kf(x_i)$ subject to the following constraints: $x_i \in R_i$ for $i = 1,2, \ldots, k$ $x_k = X$ $\sum_{i = 1}^kx_i \le t$ for $i = 1,2, \ldots, k-1$, it is the case that $\frac{\text{median}(R_i)}{x_i} \le \frac{\text{median}(R_{i+1})}{x_{i+1}}$ Since we know $x_k = X$, the value we are maximizing is $\sum_{i=1}^kf(x_i) = f(x_k) + \sum_{i=1}^{k-1}f(x_i) = f(X) + \sum_{i=1}^{k-1}f(x_i)$. This is the same as just maximizing $\sum_{i=1}^{k-1}f(x_i)$. What conditions do the variables $x_1, \ldots, x_{k-1}$ have to satisfy? We can reframe the above conditions as follows: $x_i \in R_i$ for $i = 1,2, \ldots, k-1$ $\sum_{i = 1}^{k-1}x_i \le t - X$ for $i = 1,2, \ldots, k-2$, it is the case that $\frac{\text{median}(R_i)}{x_i} \le \frac{\text{median}(R_{i+1})}{x_{i+1}}$ it is the case that $\frac{\text{median}(R_{k-1})}{x_{k-1}} \le \frac{\text{median}(R_{k})}{X}$ By the final condition, only some of the possible values of $x_{k-1}$ are actually permissible values. In particular, we have the condition that $\frac{\text{median}(R_{k-1})}{x_{k-1}} \le \frac{\text{median}(R_{k})}{X}$, which can be rearranged to be $x_{k-1} \ge X \times \frac{\text{median}(R_{k-1})}{\text{median}(R_{k})}$. Thus, $x_{k-1}$ must be some value from $R_{k-1} \cap [X \times \frac{\text{median}(R_{k-1})}{\text{median}(R_{k})}, \infty)$. Call this other interval $R_{k-1}'(X)$ Suppose we fix some value $Y \in R_{k-1}'(X)$ and add the constraint that $x_{k-1} = Y$. At that point, we're trying to maximize $\sum_{i=1}^{k-1}f(x_i)$ subject to the following: $x_i \in R_i$ for $i = 1,2, \ldots, k-1$ $x_{k-1} = Y$ $\sum_{i = 1}^{k-1}x_i \le t - X$ for $i = 1,2, \ldots, k-2$, it is the case that $\frac{\text{median}(R_i)}{x_i} \le \frac{\text{median}(R_{i+1})}{x_{i+1}}$ But this is just the definition of $S[k-1][t-X][Y]$. 
It should be pretty clear at this point that $S[k][t][X]$ is just the maximum over $Y \in R_{k-1}'(X)$ of $f(X) + S[k-1][t-X][Y]$. Note: if $R_{k-1}'(X)$ is empty then no choice of values is possible and so we again set $S[k][t][X]$ to $-\infty$. Similarly, we can assume that if $t-X < 0$ then the value $S[k-1][t-X][Y]$ is just defined to be $-\infty$ by default. (Note that this formula appropriate handles the case where some of the $S[k-1][t-X][Y]$ values are $-\infty$). Anyway, assuming you deal with all the edge cases, this shows that you can compute $S[k][t][X]$ in terms of the previous values $S[k'][t'][X']$ with $k' < k$. The runtime of this computation is linear in the size of $R_{k-1}'(X)$, which is at most $|R_{k-1}|$. Therefore, the computation of each $S[k][t][X]$ is pseudo-polynomial in the input, implying (as previously argued) that the computation of all the subproblems together also runs in pseudo-polynomial time. EDIT: I was asked to address two specific issues in the comments. First of all, what happens if the function $f$ can depend on which interval $R_i$ it is taken from. And second, what happens if not every interval has to contribute a value (i.e. if you are allowed to skip an $x_i$). I believe the problem is still solvable with the same pseudo-polynomial time dynamic programming approach. I also mentioned earlier that the solution above ignores the possibility that multiple ranges share the same medians. I have not edited the solution above, but to address these concerns, I'm adding a coded up version of my algorithm (using python), fixed to address all these issues. I assume that the input consists of a value T (representing $T$) and a list Rs, whose elements represent the ranges $R_i$ and which can be looped over. I assume that the elements of Rs are in sorted order by median and that there exists a function median that can identify the median of any such range. 
I also assume that there exists a function f which takes as input two values: an index i and a value x. f(i, x) is used to represent $f_i(x)$.

def solve(T, Rs):
    if len(Rs) == 0:
        return 0

    # S is going to be the subproblem lookup table. S[k][t][X] will denote
    # the largest possible output value using xs only from ranges Rs[0]
    # through Rs[k] (with each range contributing at most one x), such that
    # the sum of the xs is at most t, and such that range Rs[k] does contribute
    # a value, and the value it contributes is X.
    S = []

    # base case
    layer = [{X: float('-inf') for X in Rs[0]} for t in range(T + 1)]  # default to -inf
    S.append(layer)
    for t in range(T + 1):
        for X in Rs[0]:
            if X <= t:
                S[0][t][X] = f(0, X)

    # inductive case
    for k in range(1, len(Rs)):
        layer = [{X: float('-inf') for X in Rs[k]} for t in range(T + 1)]  # default to -inf
        S.append(layer)
        for t in range(T + 1):
            for X in Rs[k]:
                # the minimum necessary condition is that X <= t
                if X <= t:
                    # if no other range contributes an x, then the smallest possible
                    # value is f(k, X)
                    layer[t][X] = f(k, X)
                    # if some other range does contribute an x, then there is a largest
                    # index range which does so; the index of that range is some i < k
                    # and we can loop over the possibilities
                    for i in range(k):
                        # we can also loop over the possible values of what x that range
                        # contributed
                        for Y in Rs[i]:
                            # in order for this to be valid, the following must hold:
                            if median(Rs[i]) / Y <= median(Rs[k]) / X:
                                # also, if the medians are the same, we also have the
                                # constraint that X = Y; therefore, this case is only
                                # valid if the following holds
                                if median(Rs[i]) != median(Rs[k]) or X == Y:
                                    # the best possible value in this case is the following
                                    value = f(k, X) + S[i][t - X][Y]
                                    # if this is better than anything so far, use this value
                                    S[k][t][X] = max(S[k][t][X], value)

    # Now let's actually get the overall answer. One possibility is to use no xs
    # from any range:
    best = 0
    # otherwise, there will be some largest index range which contributes an x
    # and it will contribute some specific X
    for k in range(len(Rs)):
        for X in Rs[k]:
            # the largest possible value in that case subject to all the constraints:
            value = S[k][T][X]
            # we can use that value instead of the best one saved if it's better
            best = max(best, value)
    return best
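A brute-force checker is handy for validating a dynamic-programming implementation like the one above on tiny instances. The sketch below simply enumerates every way of taking at most one integer per interval; the function names and the sample instance at the end are made up:

```python
from itertools import product
import statistics

def f(x, N):
    # scoring function from the question
    return x if x <= N else max(0, 2 * N - x)

def brute_force(T, N, Rs):
    meds = [statistics.median(list(R)) for R in Rs]
    best = 0
    for choice in product(*[[None] + list(R) for R in Rs]):
        picked = [(meds[i], x) for i, x in enumerate(choice) if x is not None]
        if sum(x for _, x in picked) > T:
            continue
        # ratio constraint over all pairs (also forces x == y when medians tie)
        if all(mx / x <= my / y
               for mx, x in picked for my, y in picked if mx <= my):
            best = max(best, sum(f(x, N) for _, x in picked))
    return best

print(brute_force(10, 4, [range(1, 4), range(2, 6)]))  # 7: pick x1 = 3, x2 = 4
```

This is exponential in the number of intervals, so it is only useful as a test oracle, never as a solution.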
{ "domain": "cstheory.stackexchange", "id": 5284, "tags": "ds.algorithms, np-hardness, co.combinatorics, integer-programming, proofs" }
Physical interpretation of radial null geodesics in Schwarzschild geometry
Question: (Note: $c =1$ throughout) The Schwarzschild metric is $$ds^2 = (1- \frac{2m}{r})dt^2 - \frac{1}{1-\frac{2m}{r}}dr^2 - r^2 d\Omega ^2,$$ with $d\Omega^2$ being the square of the solid angle element and $m = GM$, where $M$ is the mass of the object. Radial null geodesics in this geometry are given by $$t_\pm(r) = \pm(r+2m\log |r-2m|)+C,$$ where $r = \pm k\lambda$, $\lambda$ an affine parameter, with the plus sign indicating that the particle is outgoing, and the minus sign indicating that it is ingoing. My question is: what does this physically represent? After thinking about it for some time, I have considered that it might be the time taken for a light particle to reach a distance $r$ from the singularity, but where is it falling from? If anyone could clarify my confusion, it would be greatly appreciated. Edit: The photon would be falling from an initial position $r_0$, where $\mp (r_0 + 2m\log|r_0 -2m|) = C$, as mentioned by Triatticus and myself. Answer: The radial null geodesic $t_\pm(r)$ indeed represents the time taken for the light to reach a radial coordinate $r$ from the frame of reference of a distant observer (or an "observer at infinity"). The constant $C$ encodes the initial position and must be chosen in such a way that $t_\pm(r_0) = 0$. This implies that $C = \mp(r_0 + 2m\log|r_0 - 2m|)$, in which case all of the problems mentioned above (in both the question and the comments) are solved. For further reference see https://www.reed.edu/physics/courses/Physics411/html/411/page2/files/Lecture.31.pdf.
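A quick numerical illustration of this interpretation (in units with $G = c = 1$; the function names are mine): the coordinate time for an outgoing photon grows without bound as the emission point $r_0$ approaches the horizon at $2m$, which is the familiar statement that the distant observer never sees light climb out from $r = 2m$.

```python
import math

def tortoise(r, m=1.0):
    # r + 2m ln|r - 2m|, the radial profile of t_+/-(r); valid outside the horizon (r > 2m)
    return r + 2 * m * math.log(abs(r - 2 * m))

def coord_time_outgoing(r, r0, m=1.0):
    """t_+(r) with C fixed so that t_+(r0) = 0: the time, on the distant
    observer's clock, for an outgoing radial photon to go from r0 to r."""
    return tortoise(r, m) - tortoise(r0, m)

print(coord_time_outgoing(10, 5))      # ~6.96, already more than the flat-space value 5
print(coord_time_outgoing(10, 2.001))  # ~25.97, diverging as r0 -> 2m
```

The ingoing case $t_-$ works the same way with the overall sign flipped.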
{ "domain": "physics.stackexchange", "id": 91695, "tags": "general-relativity, black-holes, geodesics" }
Evaluation functions of Minimax algorithm
Question: Let's say we have the following relationship between $f_1$ and $f_2$: $$f_2(s) = \sqrt{1 + f_1(s)}$$ And $f_1$ returns a positive value. Why is it that minimax search using $f_2$ is guaranteed to return the same action as using $f_1$, but is not guaranteed to return the same action as $f_1$ when used in expectimax search? I understand that expectimax takes into account the weight of the arcs to compute the possible path. But I am still not able to understand how the aforementioned conclusion is reached. Answer: An example for expectimax (the root node is a Max node), from an image by CSE AI Faculty / Dan Klein, Stuart Russell, Andrew Moore (not reproduced here): changing the evaluation function changes the action taken. For minimax, since $f_1(s_i) \ge f_1(s_j) \implies f_2(s_i) \ge f_2(s_j)$, the best node (action taken) doesn't change; minimax is insensitive to monotonic transformations.
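This can be checked numerically with a toy two-ply game: a Max root choosing between two actions, whose children are treated either as Min nodes (minimax) or as uniform chance nodes (expectimax). The tree shape and leaf values below are made up for illustration:

```python
import math

def minimax(node, is_max, f):
    if not isinstance(node, list):          # leaf
        return f(node)
    vals = [minimax(c, not is_max, f) for c in node]
    return max(vals) if is_max else min(vals)

def expectimax(node, is_max, f):
    if not isinstance(node, list):          # leaf
        return f(node)
    vals = [expectimax(c, not is_max, f) for c in node]
    return max(vals) if is_max else sum(vals) / len(vals)  # uniform chance node

f1 = lambda s: s
f2 = lambda s: math.sqrt(1 + f1(s))
actions = {"A": [0, 100], "B": [40, 45]}    # children of the Max root

def best_action(search, f):
    return max(actions, key=lambda a: search(actions[a], False, f))

print(best_action(minimax, f1), best_action(minimax, f2))        # B B  (unchanged)
print(best_action(expectimax, f1), best_action(expectimax, f2))  # A B  (changed)
```

The monotone map $f_2$ preserves the ordering of leaf values (so min and max decisions are unchanged), but the concave square root compresses large values, so the averages at chance nodes reorder: the risky action A wins under $f_1$ (mean 50 vs 42.5) and loses under $f_2$.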
{ "domain": "cs.stackexchange", "id": 6862, "tags": "artificial-intelligence, game-theory" }
An integer is an even subset of another integer
Question: An integer m is defined to be an even subset of another integer n if every even factor of m is also a factor of n. 18 is an even subset of 12 because the even factors of 18 are 2 and 6, and these are both factors of 12. But 18 is not an even subset of 32 because 6 is not a factor of 32. I wrote the following code to check for even subsets.

public class EvenSubset {
    public static void main(String args[]) {
        System.out.println("The result is: " + isEvenSubset(18, 32));
    }

    public static boolean isEvenSubset(int m, int n) {
        boolean status = false;
        int firstNumber = 0;
        int secondNumber = 0;
        for (int i = 2; i < m; i++) {
            if ((m % i == 0) && (i % 2 == 0)) {
                firstNumber = i;
                for (int j = 2; j < n; j++) {
                    if ((n % j == 0) && (j % 2 == 0)) {
                        secondNumber = j;
                        if (firstNumber == secondNumber) {
                            status = true;
                            break;
                        } else {
                            status = false;
                        }
                    }
                }
                if (!status) {
                    status = false;
                }
            }
        }
        return status;
    }
}

Answer: I thought that this coding style looked familiar… and indeed it is. As I previously remarked, flag variables suck. You shouldn't need a variable like status. You also don't need firstNumber and secondNumber. They are just the same as i and j, respectively. In fact, the whole algorithm can be implemented more simply, just translating the problem description directly into code. Iterate through the possible even factors of m. If you find a candidate that is a factor of m but isn't a factor of n, then you can conclude that m is not an even subset of n.

public static boolean isEvenSubset(int m, int n) {
    for (int evenFactor = 2; evenFactor < m; evenFactor += 2) {
        if ((m % evenFactor == 0) && (n % evenFactor != 0)) {
            return false;
        }
    }
    return true;
}

However, there is an optimization that we can do, since factors always occur in pairs. There is also a quick test we can do to immediately detect an obvious result, if m is odd.
public static boolean isEvenSubset(int m, int n) {
    // Optional optimization: an odd m has no even factors at all
    if (m % 2 != 0) return true;
    // Note: the loop must also try odd candidates, because an odd factor
    // (e.g. 3 of 18) can pair with an even complement (6) that still
    // needs checking against n.
    for (int factor = 2; factor * factor <= m; factor++) {
        if (m % factor == 0) {
            if (factor % 2 == 0 && n % factor != 0) {
                return false;
            }
            int otherFactor = m / factor;
            if (otherFactor % 2 == 0 && n % otherFactor != 0) {
                return false;
            }
        }
    }
    return true;
}
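As a sanity check (a quick sketch in Python, not part of the review itself), a factor-pair version can be compared against a direct transcription of the definition over a small range; both must agree, and in particular report that 18 is an even subset of 12 but not of 32:

```python
def is_even_subset_direct(m, n):
    # Direct reading of the definition: every even factor of m (below m)
    # must also divide n.
    return all(n % f == 0 for f in range(2, m, 2) if m % f == 0)

def is_even_subset_pairs(m, n):
    # Same result via factor pairs up to sqrt(m).  The loop must try odd
    # candidates too: an odd factor (e.g. 3 of 18) can pair with an even
    # complement (6) that still needs checking.
    if m % 2 != 0:
        return True  # odd numbers have no even factors
    f = 2
    while f * f <= m:
        if m % f == 0:
            if f % 2 == 0 and n % f != 0:
                return False
            other = m // f
            if other % 2 == 0 and n % other != 0:
                return False
        f += 1
    return True
```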
{ "domain": "codereview.stackexchange", "id": 18110, "tags": "java, algorithm" }
A tiny threads pool in C
Question: I'm a newbie in C, and currently following Stanford CS107 - Programming Paradigms. For assignment 6, I find it'd be better to isolate the threads management from the service logic. The following code is a producer-consumer pattern using semaphore to limit the maximum number of threads. recycle_threads() itself is a thread, which automatically waits for previous threads to exit and free the resources. Users can call create_thread() function whenever they want to run a function in a new thread. Feel free to leave any comment about the coding style, design, or utility. Especially, I want to know for what purpose a real-life thread pool is designed, and what features it must provide. threads_pool.h #ifndef _THREADS_POOL_ #define _THREADS_POOL_ #define THREAD_POOL_INIT_OK 0 #define THREAD_POOL_INIT_ERR 1 typedef void *(*thread_fn)(void *arg); /** * Create a new thread pool */ int threads_pool_init(unsigned size); /** * Apply a new thread in threads pool. */ void create_thread(thread_fn thd_fn, void *arg); /** * Notice no more threads to be created. * Threads pool can start to recycle and destroy resources. */ void threads_pool_close(void); #endif // _THREADS_POOL_ threads_pool.c #include "threads_pool.h" #include <pthread.h> #include <semaphore.h> #include <fcntl.h> /* O_CREAT, O_EXCL for sem_open */ #include <sys/stat.h> /* S_IRUSR, S_IWUSR */ #include <signal.h> #include <stdio.h> #include <stdlib.h> /* threads pool */ static pthread_t *pool; static unsigned pool_size; static unsigned create_p; static unsigned recycle_p; /* semaphore */ static sem_t *create_sem; static sem_t *recycle_sem; static const char *kThreadPoolCreateSem = "/create_sem"; static const char *kThreadPoolRecycleSem = "/recycle_sem"; /* recycle thread id */ static pthread_t recycle_tid; /* signal recycle thread to exit */ static volatile sig_atomic_t recycle_sig; #define RECYCLE_RUN 0 #define RECYCLE_TO_EXIT 1 /** * Producer of producer-consumer pattern.
*/ void create_thread(thread_fn thd_fn, void *arg) { sem_wait(create_sem); pthread_create(pool + create_p++ % pool_size, NULL, thd_fn, arg); sem_post(recycle_sem); } /** * Consumer of producer-consumer pattern. * Automatically recycle exited threads. */ static void *recycle_threads(void *arg) { while (RECYCLE_RUN == recycle_sig) { sem_wait(recycle_sem); pthread_join(*(pool + recycle_p++ % pool_size), NULL); sem_post(create_sem); } while (recycle_p < create_p) { sem_wait(recycle_sem); pthread_join(*(pool + recycle_p++ % pool_size), NULL); sem_post(create_sem); } pthread_exit(NULL); } /** * Create the threads pool, and run recycle_thread() thread. */ int threads_pool_init(unsigned size) { pool = malloc(size * sizeof *pool); pool_size = size; create_p = 0; recycle_p = 0; create_sem = sem_open(kThreadPoolCreateSem, O_CREAT | O_EXCL, S_IRUSR | S_IWUSR, size); recycle_sem = sem_open(kThreadPoolRecycleSem, O_CREAT | O_EXCL, S_IRUSR | S_IWUSR, 0); recycle_sig = RECYCLE_RUN; int code = pthread_create(&recycle_tid, NULL, recycle_threads, NULL); if (0 == code) return THREAD_POOL_INIT_OK; return THREAD_POOL_INIT_ERR; } /** * Dispose resources */ static void threads_pool_destroy(void) { free(pool); sem_unlink(kThreadPoolCreateSem); sem_unlink(kThreadPoolRecycleSem); } /** * Signal recycle_threads() to exit, and wait it to finish it's job. */ void threads_pool_close(void) { recycle_sig = RECYCLE_TO_EXIT; pthread_join(recycle_tid,NULL); threads_pool_destroy(); } A simple use-case, simulating tickets selling: agents_tickets.c /** * Using semaphore to limit maximum thread number. 
* https://stackoverflow.com/questions/66404929/always-unlink-the-posix-named-semaphore-in-shared-memory?noredirect=1 */ #include "threads_pool.h" #include <pthread.h> #include <semaphore.h> #include <stdio.h> typedef struct { unsigned agent_id; // simulate an agent unsigned tickets_tosell; // agent's personal goal of the day unsigned *tickets_pool; // shared tickets pool pthread_mutex_t *pool_lock; // mutex lock for visiting the shared tickets pool } agent; /** * Constructor */ static void new_agent(agent *a, unsigned agentid, unsigned tickets_num, unsigned *pool, pthread_mutex_t *lock) { a->agent_id = agentid; a->tickets_tosell = tickets_num; a->tickets_pool = pool; a->pool_lock = lock; } /** * Implement void *(*start_rtn)(void *); * ------------------------------------- * Each thread execute this function. */ static void *sell_tickets(void *agent_addr) { agent *a = (agent *)agent_addr; while (a->tickets_tosell > 0) { pthread_mutex_lock(a->pool_lock); // begin of race condition (*a->tickets_pool)--; fprintf(stdout, "agent@%d sells a ticket, %d tickets left in pool.\n", a->agent_id, *a->tickets_pool); fflush(stdout); pthread_mutex_unlock(a->pool_lock); // end of race condition a->tickets_tosell--; fprintf(stdout, "agent@%d has %d tickets to sell.\n", a->agent_id, a->tickets_tosell); fflush(stdout); } pthread_exit((void *)&a->agent_id); } typedef struct { unsigned num_agents; unsigned num_tickets; } project; void run(project *p) { unsigned tickets_pool; pthread_mutex_t lock; agent agents[p->num_agents]; unsigned id; tickets_pool = p->num_tickets; // shared resource pthread_mutex_init(&lock, NULL); threads_pool_init(10); for (int i = 0; i < p->num_agents; i++) { id = i + 1; new_agent(&agents[i], id, p->num_tickets / p->num_agents, &tickets_pool, &lock); create_thread(sell_tickets, &agents[i]); } threads_pool_close(); pthread_mutex_destroy(&lock); } int main(void) { project p; p.num_agents = 30; p.num_tickets = 300; run(&p); } Answer: Keep threads alive What you 
implemented is not really a thread pool, but just an elaborate way of limiting how many concurrent threads there can be. You still spawn a separate thread for each task and destroy it afterwards. But it is exactly this overhead of creating and destroying threads that you typically want to reduce by using a thread pool. The normal solution is to keep threads in the pool alive indefinitely, and have an atomic queue of work that threads can pick work from. This can be implemented using mutexes and condition variables, see for example this StackOverflow question.
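A minimal sketch of that pattern (persistent workers pulling tasks from a queue guarded by a mutex and condition variable), written in Python for brevity; the mechanics map one-to-one onto pthread_mutex_t / pthread_cond_t in C:

```python
import threading
from collections import deque

class ThreadPool:
    """Sketch only: workers stay alive and pull tasks from a shared queue
    guarded by a lock + condition variable, instead of one thread per task."""

    def __init__(self, size):
        self._tasks = deque()
        self._cond = threading.Condition()   # owns its own mutex
        self._closed = False
        self._workers = [threading.Thread(target=self._worker)
                         for _ in range(size)]
        for w in self._workers:
            w.start()

    def submit(self, fn, *args):
        with self._cond:
            self._tasks.append((fn, args))
            self._cond.notify()              # wake one sleeping worker

    def _worker(self):
        while True:
            with self._cond:
                while not self._tasks and not self._closed:
                    self._cond.wait()        # sleep until work or shutdown
                if not self._tasks and self._closed:
                    return                   # queue drained, pool closed
                fn, args = self._tasks.popleft()
            fn(*args)                        # run the task outside the lock

    def close(self):
        # Call after the last submit; workers drain the queue, then exit.
        with self._cond:
            self._closed = True
            self._cond.notify_all()
        for w in self._workers:
            w.join()
```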
{ "domain": "codereview.stackexchange", "id": 40849, "tags": "beginner, c, multithreading, concurrency, pthreads" }
Approaches for matching leads to salesmen
Question: I'm starting to tackle a new problem where we are trying to optimally match new leads (prospective customers) for our product to our sales representatives in the hopes of improving bottom-line metrics like conversion rate, average sale price, etc. We have a bunch of data from the leads when they fill out their info on web forms and from 3rd party data providers we use to enrich the core web form data (we try and pull their soft credit score, income, etc. based on the info they provide; this is all automated). On the salesman side, we don't have nearly as much data on them (mainly just who they are and their sales performance history). I suppose we could actually run them through our data enrichment service to pull additional info on them, though. My question is simply: from an ML perspective, what would be the best way to structure this problem? I was thinking of just building models for each salesman and assigning the lead to the salesman with the highest predicted score (e.g. for conversion) but this seems a bit crude. I was also considering recommender systems given the matching nature of the problem, but my background is more in traditional ML so I'm not sure what subtype would be best to start with (content-based, collaborative, etc.). Any input is greatly appreciated. Answer: I'm going to make a bunch of assumptions about the shape of your data and model choice, just to make the setup simple and concrete. Hopefully the broader ideas will generalize from there. Suppose you wrangle your data into a matrix with a response vector of zeros and ones representing whether a sale was made. This is a nice simple supervised classification problem and logistic regression is probably the first thing you try in this case. If you ignore which sales rep each customer was assigned to, you will get a model that tells you the probability of a sale based on customer characteristics (income, etc). But it won't tell you anything about which is the best sales rep to assign.
If you fit a separate model for each sales rep then you could compare the outputs of each model. But I share your concern about this approach. Each model could pick up some idiosyncrasies. Also, if a sales rep got lucky with getting good leads in the past then they are likely to fit to a model with a high general probability of sale for all leads, simply because the constant term is higher in their model than in others. There might be another constraint in your ideal system here -- presumably you want to avoid building a model that just assigns all leads to the best sales rep. Another approach would be to fit a single logistic regression but include indicator (or dummy) variables for the sales reps: a column for each rep with a one if they worked with that customer and a zero otherwise. Your coefficient vector will then include a coefficient for each sales rep. This feels like a step in the right direction but will necessarily result in a model where all leads are assigned to a single sales rep -- the one with the highest coefficient. One nice aspect of this approach is that it requires no additional information about the sales reps. Only information about which customers they previously worked with is needed. A next step from there might be to add cross-terms. That is, features that are the product of a customer characteristic and a sales rep indicator. This isn't guaranteed to produce a model where leads are assigned evenly between sales reps but might produce recommendations of the form "assign low income customers to rep A and high income customers to rep B". (Whether or not such a recommendation is politically acceptable in your firm is a different question entirely.) I'm not sure that model is going to get you to something that you would be willing to use in production but it might be a nice first step to get a sense of the data and which variables tend to be predictive.
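To make the dummy-plus-cross-terms idea concrete, here is a sketch in Python/NumPy on simulated data (all names and numbers are invented for illustration; in practice you would use a proper GLM library rather than hand-rolled gradient ascent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one customer feature (income) and three sales reps.
n = 500
income = rng.normal(size=n)
rep = rng.integers(0, 3, size=n)            # which rep handled each lead

dummies = np.eye(3)[rep]                    # (n, 3) rep indicator columns
cross = income[:, None] * dummies           # (n, 3) income-by-rep cross-terms
X = np.column_stack([np.ones(n), income, dummies, cross])   # (n, 8) design

# Simulated ground truth: rep 2 closes high-income leads much better.
true_logit = -0.5 + 0.3 * income + 1.2 * cross[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Plain batch gradient ascent on the log-likelihood (no library needed).
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n

def conversion_prob(income_value, rep_id):
    """Predicted conversion probability if rep_id were assigned this lead."""
    d = np.eye(3)[rep_id]
    x = np.concatenate([[1.0, income_value], d, income_value * d])
    return 1.0 / (1.0 + np.exp(-x @ w))
```

Assigning a lead then means scoring conversion_prob(income, r) for each rep r and taking the argmax, subject to whatever load-balancing constraint you impose on top.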
One last thought: your dataset might include some information that precludes some sales reps working with some customers. The rep and the customer have to be in the same geographic region, perhaps. You're definitely going to want to work that in somehow. If customers and sales reps are split into disjoint regions then you just fit a model per region. If it is more complicated than that then your model will be, inevitably, more complicated.
{ "domain": "datascience.stackexchange", "id": 10502, "tags": "machine-learning, classification, regression, recommender-system, scoring" }
Is there a simple characterization of regular languages closed under circular shifts?
Question: A language $L$ is closed under circular shifts if, for every word $w = a_1 ... a_n$ and every circular shift $w' = a_i ... a_n a_1 ... a_{i-1}$ of $w$, we have $w \in L$ iff $w' \in L$. It is equivalent to require that $L$ is closed under conjugation, i.e., for every word $w = uv$, letting $w' = vu$, we have $w \in L$ iff $w' \in L$. These closure requirements are weaker than requiring the language to be closed under permutations, i.e., to be commutative: for any word $w = a_1 ... a_n$ and every permutation $\sigma$ of $\{1, ..., n\}$, letting $w' = a_{\sigma(1)} ... a_{\sigma(n)}$, we have $w \in L$ iff $w' \in L$. Is there a simple characterization of the regular languages that are closed under cyclic shifts? I'm thinking about a characterization that would make such languages easy to understand. For instance, the commutative regular languages can be easily understood: membership in the language is determined by the Parikh image, and then being regular means the language only imposes a threshold or modularity condition on the components of the Parikh image. Some related work: It is known that the closure of a regular language under cyclic shifts is also regular, and the same is also known of context-free languages. There is a study of the state complexity of the operation of closing under cyclic shift here, but it doesn't characterize the languages that are already closed. There is a notion of cyclic language that has been studied, e.g., here. A cyclic language is closed under conjugation and also satisfies the requirement that for every word $w$ and power $n$ we have $w \in L$ iff $w^n \in L$, i.e., membership in the language is determined by the primitive root of words. (This requirement is discussed in this question.) There is also a notion of strongly cyclic language in this paper, which is defined in terms of automata but does not seem to be the class I ask about.
There is a related notion of circular languages studied in bioinformatics (e.g., here), where words are quotiented so as to be seen as circular, but I'm not sure of the relationship. Answer: We can propose an automaton model characterizing regular circular languages: a C-automaton is an NFA where all states are initial. A run must see an accepting state somewhere, and must start and end in the same state. C-automata can clearly only accept regular circular languages, since a rotation of a run is still a run (circularity), and a normal NFA can guess the existence of a run (regularity). Moreover, starting from any NFA $A$, we can obtain a C-automaton for the circular closure of $L(A)$ (this is how we can prove the first item mentioned by @a3nm in the question), so the model is able to recognize any regular circular language, since such a language is its own circular closure. This automaton model can in turn help to prove that the proposition of @MarzioDeBiasi for an MSO characterization is correct. Let us consider the logic $MSO[S']$, where $S'$ is the successor relation $(x,x+1)$ augmented with the pair $(last,first)$. It is clear that this logic can only express regular circular languages, since $S'$ is invariant under rotation. Moreover, the fact that a C-automaton has an accepting run can be expressed in $MSO[S']$: the formula can guess a labeling by states, and verify all the conditions required of a run of a C-automaton. Notice that the condition that the first state $q_0$ is the same as the last will be verified in the same way as all other transitions, by verifying that there is a transition $(q_{n-1},a_n,q_0)$ on the last letter. In fact the formula cannot distinguish this case from the other transitions. Remark that the equality $x=y$ can be expressed by $\exists t. S'(t,x)\wedge S'(t,y)$, so we do not need to add it explicitly to the signature. This makes it possible to express languages such as "there is a unique occurrence of $a$".
We can conclude that both C-automata and $MSO[S']$ characterize the class of regular circular languages.
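As a side note, the defining closure property is easy to test by brute force on short words, which is handy for checking candidate examples (a quick Python sketch with my own toy languages, unrelated to the cited papers):

```python
from itertools import product

def shifts(w):
    # all circular shifts of w (w itself included)
    return {w[i:] + w[:i] for i in range(len(w))} or {w}

def closed_under_shifts(member, alphabet="ab", max_len=8):
    # Brute force up to max_len: every rotation of every word must agree
    # with the word itself on membership.
    for n in range(max_len + 1):
        for t in product(alphabet, repeat=n):
            w = "".join(t)
            if len({member(s) for s in shifts(w)}) > 1:
                return False
    return True

even_as = lambda w: w.count("a") % 2 == 0   # commutative, hence circular
starts_a = lambda w: w.startswith("a")      # not circular: "ab" vs "ba"
```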
{ "domain": "cstheory.stackexchange", "id": 5828, "tags": "fl.formal-languages, regular-language, permutations" }
C# Get All Diagonals Jagged Array
Question: How can I improve this? The idea is to store every left-to-right diagonal in a list. Not looking for a more efficient algorithm, instead something readable, LINQ perhaps? input:
3, 1
2, 5, 7
1, 5, 8, 3, 1, 4
6, 8, 7, 1
4
6
6, 2, 5
output: 1, 7, 3, 3, 5, 8, 1, 7, 3, 5, 8, 1, 2, 5, 7, 8, 1, 5, 7, 1, 8, 6, 2, -- var arr = new[] { new[]{3,1}, new[]{2,5,7}, new[]{1,5,8,3,1,4}, new[]{6,8,7,1}, new[]{4}, new[]{6}, new[]{6,2,5} }; GetAllDiagonals(arr); Console.ReadKey(); static void GetAllDiagonals(int[][] array) { var e = new List<List<int>>(); for (var i = 0; i < array.Length; i++) { for (var j = array[i].Length - 1; j >= 0; j--) { var n = i; var x = j; var next = i + 1 <= array.Length - 1 && j < array[n + 1].Length - 1; var index = 0; if (next) { e.Add(new List<int>()); index = e.Count - 1; } while (next) { e[index].Add(array[n][x]); next = n + 1 <= array.Length - 1 && x < array[n + 1].Length - 1; n++; x++; } } } for (var i = 0; i < e.Count; i++) { for (var j = 0; j < e[i].Count; j++) { Console.Write(e[i][j] + ", "); } Console.WriteLine(); } } Answer: Disclaimer: Apologies for my poor visualisation. I've used Excel to draw the diagrams below. Finding the diagonal step-by-step Let's play a little bit with your example. As you said in your question, you want to find all left-to-right diagonals.
I've used the following informal definition for the left-to-right diagonal:

- A descending line which starts either from the left or from the top side of the matrix, until there is a number in the way of it
- The minimum length of the line is two

I haven't read any requirements regarding the ordering, so let's suppose you want to find them from left to right.

Algorithm

I hope you have noticed the following part (highlighted with bold) in my informal definition: A descending line which starts either from the left or from the top side of the matrix, until there is a number in the way of it. That means you have to have two top-level iterations:

- On the 1st column, from bottom to the top
- On the 1st row, from left to right

To find a diagonal you need the following steps:

1. Increment both column and row indices
2. Check whether there is a number under the new indices
2.1 If yes, repeat steps 1 and 2
2.2 If not, then check the line's length
2.2.1 If it is greater than one then you have found a diagonal
2.2.2 If it is 1 then you continue the iteration on the top-level

Implementation

Now let's see how we implement the above algorithm.
Let's start with the diagonals search which starts from left static List<List<int>> FindDiagonalsWhichStartsFromLeft(int[][] input) { var diagonals = new List<List<int>>(); //Bottom top iteration on first column for (int row = input.Length - 1; row > 0; row--) { int rowIndex = row, columnIndex = 0; var diagonal = new List<int>() { input[rowIndex][columnIndex] }; //#2.1 If yes repeat step 1 and 2 while (true) { //#1 Increment both column and row indices rowIndex++; columnIndex++; //#2 Check whether there is a number under the new indices if (rowIndex >= input.Length || columnIndex >= input[rowIndex].Length) { break; } diagonal.Add(input[rowIndex][columnIndex]); } //#2.2.1 If it is greater than one then you have found a diagonal if (diagonal.Count > 1) { diagonals.Add(diagonal); } //#2.2.2 If it is 1 then you continue the iteration on the top-level } return diagonals; } Please note that we have done a reverse loop here from the last row till the 2nd row. We skipped the 1st row because otherwise the main diagonal will be found twice Of course you can avoid this duplicate in another way like starting from the 2nd column whenever you are searching for diagonals which is starting from the top. (Where to put the prevention logic is up to you.) 
Now let's see the other iteration static List<List<int>> FindDiagonalsWhichStartsFromTop(int[][] input) { var diagonals = new List<List<int>>(); //Left to Right iteration on first row for (int column = 0; column < input[0].Length; column++) { int rowIndex = 0, columnIndex = column; var diagonal = new List<int>() { input[rowIndex][columnIndex] }; //#2.1 If yes repeat step 1 and 2 while (true) { //#1 Increment both column and row indices rowIndex++; columnIndex++; //#2 Check whether there is a number under the new indices if (rowIndex >= input.Length || columnIndex >= input[rowIndex].Length) { break; } diagonal.Add(input[rowIndex][columnIndex]); } //#2.2.1 If it is greater than one then you have found a diagonal if (diagonal.Count > 1) { diagonals.Add(diagonal); } //#2.2.2 If it is 1 then you continue the iteration on the top-level } return diagonals; } As you can see the only difference here is the outer loop. So, the "core logic" is untouched which means we can extract that into its own method static void GetDiagonal(int[][] input, List<int> diagonal, int rowIndex, int columnIndex) { //#2.1 If yes repeat step 1 and 2 while (true) { //#1 Increment both column and row indices rowIndex++; columnIndex++; //#2 Check whether there is a number under the new indices if (rowIndex >= input.Length || columnIndex >= input[rowIndex].Length) { break; } diagonal.Add(input[rowIndex][columnIndex]); } } For the sake of completeness let me share with you the full source code (without comments for the sake of brevity) static void Main() { var arr = new[] { new[]{3,1}, new[]{2,5,7}, new[]{1,5,8,3,1,4}, new[]{6,8,7,1}, new[]{4}, new[]{6}, new[]{6,2,5} }; FindAndPrintDiagonals(arr); } static void FindAndPrintDiagonals(int[][] input) { var diagonalsFromLeft = FindDiagonalsWhichStartsFromLeft(input); var diagonalsFromTop = FindDiagonalsWhichStartsFromTop(input); foreach (var diagonal in diagonalsFromLeft.Union(diagonalsFromTop)) { Console.WriteLine(string.Join(" ", diagonal)); } } static 
List<List<int>> FindDiagonalsWhichStartsFromLeft(int[][] input) { var diagonals = new List<List<int>>(); for (int row = input.Length - 1; row > 0; row--) { int rowIndex = row, columnIndex = 0; var diagonal = new List<int>() { input[rowIndex][columnIndex] }; GetDiagonal(input, diagonal, rowIndex, columnIndex); if (diagonal.Count > 1) diagonals.Add(diagonal); } return diagonals; } static List<List<int>> FindDiagonalsWhichStartsFromTop(int[][] input) { var diagonals = new List<List<int>>(); for (int column = 0; column < input[0].Length; column++) { int rowIndex = 0, columnIndex = column; var diagonal = new List<int>() { input[rowIndex][columnIndex] }; GetDiagonal(input, diagonal, rowIndex, columnIndex); if (diagonal.Count > 1) diagonals.Add(diagonal); } return diagonals; } static void GetDiagonal(int[][] input, List<int> diagonal, int rowIndex, int columnIndex) { while (true) { rowIndex++; columnIndex++; if (rowIndex >= input.Length || columnIndex >= input[rowIndex].Length) break; diagonal.Add(input[rowIndex][columnIndex]); } } Here is a working dotnetfiddle link. You should see the following output on the console: 6 2 1 8 2 5 7 3 5 8 1 1 7 3
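For comparison, the same two outer iterations (first column bottom-to-top, then first row left-to-right, with the minimum length of two) fit in a few lines of Python; this is just a sketch of the traversal, not a C# review point:

```python
def all_diagonals(rows):
    # Start points: first column bottom-to-top (skipping row 0 so the main
    # diagonal is not found twice), then the first row left-to-right.
    starts = [(r, 0) for r in range(len(rows) - 1, 0, -1)]
    starts += [(0, c) for c in range(len(rows[0]))]
    diagonals = []
    for r, c in starts:
        diag = []
        # Walk down-right while the jagged row is long enough.
        while r < len(rows) and c < len(rows[r]):
            diag.append(rows[r][c])
            r += 1
            c += 1
        if len(diag) > 1:               # minimum length of a diagonal is two
            diagonals.append(diag)
    return diagonals
```

On the question's input this reproduces the same five diagonals, in the same order, as the C# version above.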
{ "domain": "codereview.stackexchange", "id": 44123, "tags": "c#, algorithm, array, jagged-array" }
Speech Recognition Part 2: Classifying Data
Question: Now that I have generated training data, I need to classify each example with a label to train a TensorFlow neural net (first building a suitable dataset). To streamline the process, I wrote this little Python script to help me. Any suggestions for improvement? classify.py: # Builtin modules import glob import sys import os import shutil import wave import time import re from threading import Thread # 3rd party modules import scipy.io.wavfile import pyaudio DATA_DIR = 'raw_data' LABELED_DIR = 'labeled_data' answer = None def classify_files(): global answer # instantiate PyAudio p = pyaudio.PyAudio() for filename in glob.glob('{}/*.wav'.format(DATA_DIR)): # define stream chunk chunk = 1024 #open a wav format music wf = wave.open(filename, 'rb') #open stream stream = p.open(format=p.get_format_from_width(wf.getsampwidth()), channels=wf.getnchannels(), rate=wf.getframerate(), output=True) #read data data = wf.readframes(chunk) #play stream while answer is None: stream.write(data) data = wf.readframes(chunk) if data == b'': # if file is over then rewind wf.rewind() time.sleep(1) data = wf.readframes(chunk) # don't know how to classify, skip sample if answer == '.': answer = None continue # sort spectogram based on input spec_filename = 'spec{}.jpeg'.format(str(re.findall(r'\d+', filename)[0])) os.makedirs('{}/{}'.format(LABELED_DIR, answer), exist_ok=True) shutil.copyfile('{}/{}'.format(DATA_DIR, spec_filename), '{}/{}/{}'.format(LABELED_DIR, answer, spec_filename)) # reset answer field answer = None #stop stream stream.stop_stream() stream.close() #close PyAudio p.terminate() if __name__ == '__main__': try: # exclude file from glob os.remove('{}/ALL.wav'.format(DATA_DIR)) num_files = len(glob.glob('{}/*.wav'.format(DATA_DIR))) Thread(target = classify_files).start() for i in range(0, num_files): answer = input("Enter letter of sound heard: ") except KeyboardInterrupt: sys.exit() Answer: Most of your comments aren't that great. 
Commenting about PEP8 compliance shouldn't be needed, and saying you're instantiating an object before doing it duplicates the amount we have to read for no actual gain. os.path.join is much better at joining file locations with the OS's separator than '{}/{}'.format. Please use it instead. An alternative to this in Python 3.4+ could be pathlib, as it allows you to extend the path by using the / operator. I have however not tested that this works with the functions you're using. Here's an example of using it: (untested) DATA_DIR = pathlib.PurePath('raw_data') ... os.remove(DATA_DIR / 'ALL.wav') You should move chunk out of the for loop; making it a function argument may be a good idea too. Making a function to infinitely read your wf may ease reading slightly, and giving it a good name such as cycle_wave would allow people to know what it's doing, as it'd work roughly the same way as itertools.cycle. Note that chunk has to be passed in explicitly, since it is no longer a global. This could be implemented as: def cycle_wave(wf, chunk): while True: data = wf.readframes(chunk) if data == b'': wf.rewind() time.sleep(1) data = wf.readframes(chunk) yield data For your spec_filename you can use re.search to get a single match, rather than all numbers in the file name. (re.match won't work here, as it anchors at the start of the string and the filename begins with the directory prefix.) You also don't need to use str on the object, as format will do that by default. Rather than removing a file from your directory, to then search the directory, you can instead remove the file from the resulting list from glob.glob. Since it returns a normal list, you can go about this the same way you would otherwise.
One way you can do this is as follows: files = glob.glob('D:/*') try: files.remove('D:/$RECYCLE.BIN') except ValueError: pass If you have multiple files you want to remove you could instead use sets, and instead use: files = set(glob.glob('D:/*')) - {'D:/$RECYCLE.BIN'} All of this together can get you: import glob import sys import os import shutil import wave import time import re from threading import Thread import scipy.io.wavfile import pyaudio DATA_DIR = 'raw_data' LABELED_DIR = 'labeled_data' answer = None def cycle_wave(wf, chunk): while True: data = wf.readframes(chunk) if data == b'': wf.rewind() time.sleep(1) data = wf.readframes(chunk) yield data def classify_files(chunk=1024): global answer join = os.path.join p = pyaudio.PyAudio() files = set(glob.glob(join(DATA_DIR, '*.wav'))) - {join(DATA_DIR, 'ALL.wav')} for filename in files: wf = wave.open(filename, 'rb') stream = p.open(format=p.get_format_from_width(wf.getsampwidth()), channels=wf.getnchannels(), rate=wf.getframerate(), output=True) for data in cycle_wave(wf, chunk): if answer is not None: break stream.write(data) # don't know how to classify, skip sample if answer == '.': answer = None continue # sort spectogram based on input spec_filename = 'spec{}.jpeg'.format(re.search(r'\d+', filename)[0]) os.makedirs(join(LABELED_DIR, answer), exist_ok=True) shutil.copyfile( join(DATA_DIR, spec_filename), join(LABELED_DIR, answer, spec_filename) ) # reset answer field answer = None #stop stream stream.stop_stream() stream.close() #close PyAudio p.terminate() if __name__ == '__main__': join = os.path.join try: # exclude file from glob files = set(glob.glob(join(DATA_DIR, '*.wav'))) - {join(DATA_DIR, 'ALL.wav')} num_files = len(files) Thread(target = classify_files).start() for _ in range(0, num_files): answer = input("Enter letter of sound heard: ") except KeyboardInterrupt: sys.exit() But I've left out proper handling of streams. In most languages that I've used streams in, it's recommended to always close the stream.
In Python it's the same. You can do this normally in two ways: Use with, this hides a lot of the code, so it makes using streams seamless. It also makes people know the lifetime of the stream, and so people won't try to use it after it's been closed. Here's an example of using it: with wave.open('<file location>') as wf: print(wf.readframes(1024)) Use a try-finally. You don't need to add an except clause to this, as if it errors you may not want to handle it here, but the finally is to ensure that the stream is closed. Here's an example of using it: p = pyaudio.PyAudio() try: stream = p.open(...) try: # do some stuff finally: stream.stop_stream() stream.close() finally: p.terminate() I'd personally recommend using one of the above in your code. I'd really recommend using with over a try-finally, but pyaudio doesn't support that interface. And so you'd have to add that interface to their code, if you wanted to go that way.
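If you end up wrapping several such open/close pairs, one further option (a generic sketch, not an existing pyaudio API) is a small contextlib adapter, so the try-finally lives in exactly one place:

```python
import contextlib

@contextlib.contextmanager
def managed(resource, close):
    # Generic adapter: run a block with `resource`, guaranteeing `close`
    # is called afterwards, even on error.  Hypothetical usage:
    #   with managed(pyaudio.PyAudio(), lambda p: p.terminate()) as p:
    #       ...
    try:
        yield resource
    finally:
        close(resource)
```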
{ "domain": "codereview.stackexchange", "id": 25366, "tags": "python, multithreading, sorting, file, audio" }
Kalman filter for position and velocity: introducing speed estimates
Question: Thanks to everyone who posted comments/answers to my query yesterday (Implementing a Kalman filter for position, velocity, acceleration). I've been looking at what was recommended, and in particular at both the wikipedia example on one dimensional position and velocity and another website that considers a similar thing. Update 26-Apr-2013: the original question here contained some errors, related to the fact that I hadn't properly understood the wikipedia example on one dimensional position and velocity. With my improved understanding of what's going on, I've now redrafted the question and focused it more tightly. Both examples that I refer to in the introductory paragraph above assume that it's only position that's measured. However, neither example has any kind of calculation $(x_k-x_{k-1})/dt$ for speed. For example, the Wikipedia example specifies the ${\bf H}$ matrix as ${\bf H} = [1\ \ \ 0]$, which means that only position is input. Focussing on the Wikipedia example, the state vector ${\bf x}_k$ of the Kalman filter contains position $x_k$ and speed $\dot{x}_{k}$, i.e. $$ \begin{align*} \mathbf{x}_{k} & =\left(\begin{array}[c]{c}x_{k}\\ \dot{x}_{k}\end{array} \right) \end{align*} $$ Suppose the measurement of position at time $k$ is $\hat{x}_k$. Then if the position and speed at time $k-1$ were $x_{k-1}$ and $\dot{x}_{k-1}$, and if $a$ is a constant acceleration that applies in the time interval $k-1$ to $k$, from the measurement of $\hat{x}$ it's possible to deduce a value for $a$ using the formula $$ \hat{x}_k = x_{k-1} + \dot{x}_{k-1} dt + \frac{1}{2} a\, dt^2 $$ This implies that at time $k$, a measurement $\hat{\dot{x}}_k$ of the speed is given by $$ \hat{\dot{x}}_k = \dot{x}_{k-1} + a\, dt = 2 \frac{\hat{x}_k - {x}_{k-1}}{dt} - \dot{x}_{k-1} $$ All the quantities on the right hand side of that equation (i.e.
$\hat{x}_k$, $x_{k-1}$ and $\dot{x}_{k-1}$) are normally distributed random variables with known means and standard deviations, so the $\bf R$ matrix for the measurement vector $$ \begin{align*} \mathbf{\hat{x}}_{k} & =\left(\begin{array}[c]{c}\hat{x}_{k}\\ \hat{\dot{x}}_{k}\end{array} \right) \end{align*} $$ can be calculated. Is this a valid way of introducing speed estimates into the process? Answer: Is this a valid way of introducing speed estimates into the process? If you choose your state appropriately, then the speed estimates come "for free". See the derivation of the signal model below (for the simple 1-D case we've been looking at). Signal Model, Take 2 So, we really need to agree on a signal model before we can move this forward. From your edit, it looks like your model of the position, $x_k$, is: $$ \begin{align*} x_{k+1} &= x_{k} + \dot{x}_{k} \Delta t + \frac{1}{2} a (\Delta t)^2\\ \dot{x}_{k+1} &= \dot{x}_{k} + a \Delta t \end{align*} $$ If our state is as before: $$ \begin{align*} \mathbf{x}_{k} & =\left(\begin{array}[c]{c}x_{k}\\ \dot{x}_{k}\end{array} \right) \end{align*} $$ then the state update equation is just: $$ \mathbf{x}_{k+1} = \left(\begin{array}{cc} 1 & \Delta t\\ 0 & 1\end{array} \right) \mathbf{x}_{k} + \left(\begin{array}[c]{c} \frac{(\Delta t)^2}{2} \\ \Delta t \end{array} \right) a_k $$ where now our $a_k$ is the normally distributed acceleration. That gives a different $\mathbf{G}$ matrix from the previous version, but the $\mathbf{F}$ and $\mathbf{H}$ matrices should be the same.
If I implement this in scilab (sorry, no access to matlab), it looks like: // Signal Model DeltaT = 0.1; F = [1 DeltaT; 0 1]; G = [DeltaT^2/2; DeltaT]; H = [1 0]; x0 = [0;0]; sigma_a = 0.1; Q = sigma_a^2; R = 0.1; N = 1000; a = rand(1,N,"normal")*sigma_a; x_truth(:,1) = x0; for t=1:N, x_truth(:,t+1) = F*x_truth(:,t) + G*a(t); y(t) = H*x_truth(:,t) + rand(1,1,"normal")*sqrt(R); end Then, I can apply the Kalman filter equations to this $y$ (the noisy measurements). // Kalman Filter p0 = 100*eye(2,2); xx(:,1) = x0; pp = p0; pp_norm(1) = norm(pp); for t=1:N, [x1,p1,x,p] = kalm(y(t),xx(:,t),pp,F,G,H,Q,R); xx(:,t+1) = x1; pp = p1; pp_norm(t+1) = norm(pp); end So we have our noisy measurements $y$, and we've applied the Kalman filter to them and used the same signal model to generate $y$ as we do to apply the Kalman filter (a pretty big assumption, sometimes!). Then the following plots show the result. Plot 1: $y$ and $x_k$ versus time. Plot 2: A zoomed view of the first few samples: Plot 3: Something you never get in real life, the true position vs the state estimate of the position. Plot 4: Something you also never get in real life, the true velocity vs the state estimate of the velocity. Plot 5: The norm of the state covariance matrix (something you should always monitor in real life!). Note that it very quickly goes from its initial very large value to something very small, so I've only shown the first few samples. Plot 6: Plots of the error between the true position and velocity and their estimates. If you study the case where the position measurements are exact, then you find that the Kalman update equations produce exact results for BOTH position and speed. Mathematically it's straightforward to see why. Using the same notation as the Wikipedia article, exact measurements mean that $\mathbf{z}_{k+1}=x_{k+1}$.
If you assume that the initial position and speed are known so that $\mathbf{P}_k=0$, then $\mathbf{P}_{k+1}^{-}=\mathbf{Q}$ and the Kalman gain matrix $\mathbf{K}_{k+1}$ is given by $$ \mathbf{K}_{k+1} = \left(\begin{array}[c]{c}1\\ 2/dt\end{array} \right) $$ This means that the Kalman update procedure produces $$ \begin{align*} \mathbf{\hat{x}}_{k+1} & = \mathbf{F}_{k+1}\mathbf{x}_k + \mathbf{K}_{k+1}\left(\mathbf{z}_{k+1} - \mathbf{H}_{k+1} \mathbf{F}_{k+1}\mathbf{x}_k\right)\\ & = \left(\begin{array}[c]{c}x_k + \dot{x}_k dt\\ \dot{x}_k\end{array} \right) + \left(\begin{array}[c]{c}1\\ 2/dt\end{array} \right) \left(x_{k+1} - \left( x_k + \dot{x}_k dt\right) \right)\\ & = \left(\begin{array}[c]{c}x_{k+1}\\ 2 \left(x_{k+1} - x_k \right) /dt - \dot{x}_k\end{array} \right) \end{align*} $$ As you can see, the value for the speed is given by exactly the formula you were proposing to use for the speed estimate. So although you couldn't see any kind of calculation $(x_k-x_{k-1})/dt$ for speed, in fact it is hidden in there after all.
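The exact-measurement limit can be checked numerically. Below is a minimal sketch in Python (hypothetical numbers; plain lists rather than a matrix library, which is easy here because $\mathbf{H}=[1\ 0]$ makes the gain a simple ratio) that reproduces the gain $\mathbf{K}=(1,\ 2/dt)^T$ and shows that the updated speed equals the finite-difference formula from the question:

```python
dt = 0.1
sigma_a = 0.5

# Process-noise covariance Q = sigma^2 * G G^T with G = [dt^2/2, dt]^T
g = [dt**2 / 2, dt]
Q = [[sigma_a**2 * g[i] * g[j] for j in range(2)] for i in range(2)]

# Exact measurement (R = 0) and perfectly known previous state (P_k = 0),
# so the predicted covariance is P_minus = F*0*F^T + Q = Q.
P_minus = Q
S = P_minus[0][0]                      # innovation covariance H P H^T + R, with R = 0
K = [P_minus[0][0] / S, P_minus[1][0] / S]
print(K)                               # gain -> [1, 2/dt] (here ~[1.0, 20.0])

# One update step with previous state (x, xdot) and exact measurement z
x, xdot = 2.0, 3.0
z = 2.35
x_pred = [x + xdot * dt, xdot]         # F x
innov = z - x_pred[0]                  # z - H F x
x_hat = [x_pred[0] + K[0] * innov, x_pred[1] + K[1] * innov]

# The speed component equals the "hidden" finite-difference formula
print(x_hat[1], 2 * (z - x) / dt - xdot)   # both ~4.0
```

Note that because $R=0$ and $\mathbf{P}_k=0$, the $\sigma_a^2$ factor cancels out of the gain; with noisy measurements ($R>0$) the filter instead blends the prediction and the measurement.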
{ "domain": "dsp.stackexchange", "id": 883, "tags": "kalman-filters" }
Examining a DocumentTermMatrix in RTextTools
Question: I created a DocumentTermMatrix for text mining using RTextTools. The rows for this DocumentTermMatrix correspond to dataframe rows and matrix columns correspond to words. My question is: how can I get the words (labels vector) for examining the DocumentTermMatrix? In other words, how can I get the vector of these 904 words? require(RTextTools,quietly=TRUE) data(USCongress) doc_matrix <- create_matrix(USCongress$text, language="english", removeNumbers=TRUE, stemWords=TRUE, removeSparseTerms=.998) dim(USCongress) [1] 4449 6 dim(doc_matrix) [1] 4449 904 Answer: A DocumentTermMatrix is a simple_triplet_matrix. You can turn this into a simple matrix with the as.matrix command and then use all matrix functions. # turn into simple matrix mat <- as.matrix(doc_matrix) # vector of the words word_vector <- colnames(mat) # Dataframe containing words and their frequency df_words <- data.frame(words = colnames(mat), frequency = colSums(mat), row.names = NULL)
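For readers outside R, the row/column structure of a document-term matrix can be illustrated with a tiny hand-rolled example in Python (a hypothetical mini-corpus, not the RTextTools API; no stemming or sparsity pruning):

```python
from collections import Counter

# Hypothetical mini-corpus; rows = documents, columns = words
docs = ["the cat sat", "the dog sat down", "cat and dog"]

# Vocabulary = sorted set of all words = the column labels of the matrix
vocab = sorted({w for d in docs for w in d.split()})

# Document-term matrix: count of each vocabulary word in each document
dtm = [[Counter(d.split())[w] for w in vocab] for d in docs]

print(vocab)   # the "word vector": one label per matrix column
print(dtm[0])  # first row corresponds to the first document

# Column sums give per-word frequencies across the corpus,
# like colSums(mat) in the R answer
freq = [sum(row[j] for row in dtm) for j in range(len(vocab))]
print(dict(zip(vocab, freq)))
```

The column labels play the role of colnames(mat) in the R answer, and the column sums mirror colSums(mat).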
{ "domain": "datascience.stackexchange", "id": 520, "tags": "r, text-mining" }
L1 / Variational Distance between distributions
Question: My statistics knowledge is somewhat poor, so I have to ask one (dumb) question. Let $\beta$ be a real number in the interval $\big[0, \frac{1}{2}\big)$ and $\mathcal{D}_1, \mathcal{D}_2, \mathcal{D}_3$ be three distributions over a space $\mathcal{X}$, with the property that $\mathcal{D}_1 = \beta \cdot \mathcal{D}_2 + (1-\beta) \cdot \mathcal{D}_3$. What is the statistical variational distance between $\mathcal{D}_1$ and $\mathcal{D}_2$? Thank you a lot! Answer: Using the relation between total variation and $L_1$/$\ell_1$ distance of the probability/distribution/mass functions, we have $$\begin{align} d_{\rm TV}(D_1, D_2) &= \frac{1}{2}\lVert D_1-D_2\rVert_1 = \frac{1}{2}\lVert \beta D_2 +(1-\beta)D_3 - D_2\rVert_1\\ &= \frac{1-\beta}{2}\lVert D_3 - D_2\rVert_1 = (1-\beta)d_{\rm TV}(D_2, D_3). \end{align}$$
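The identity $d_{\rm TV}(D_1,D_2)=(1-\beta)\,d_{\rm TV}(D_2,D_3)$ is easy to sanity-check numerically; here is a small Python sketch over a finite space with randomly generated (hypothetical) distributions:

```python
import random

random.seed(0)

def rand_dist(n=5):
    """A random probability mass function over n points."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

def tv(p, q):
    """Total variation distance = (1/2) * L1 distance of the mass functions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

beta = 0.3
D2, D3 = rand_dist(), rand_dist()

# The mixture D1 = beta*D2 + (1-beta)*D3 from the question
D1 = [beta * a + (1 - beta) * b for a, b in zip(D2, D3)]

print(tv(D1, D2), (1 - beta) * tv(D2, D3))   # the two values agree
```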
{ "domain": "cstheory.stackexchange", "id": 4113, "tags": "st.statistics, edit-distance" }
Pure Javascript to create forms and validation check Project
Question: Goal I have developed a random check that I have thought of. I have used purely JavaScript to get used to the language and learn it. I have also used JavaScript to create elements, classes, id etc... and at the end, I have a simple validation check to check if all inputs have been filled. I would to soon improve the validation check by prompting to show which field is empty. Finally, I'd like to only know where I can improve my code, in terms of simplicity, how I can make it simpler? I'll be happy to hear any recommendations! Code: <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous"> <div class="container mt-4"> <h3>Form:</h3> <form id="form" class="mt-4 mb-4" action="/reports_send/21-TEMP-01a" method="POST"> <div style="border: 1px solid black; padding: 40px; border-radius: 25px;"> <div class="container mt-4"> <div id="errors" class="mt-4"></div> </div> <h4>Select Room</h4> <div id="RoomSelect"> </div> <div id="RoomInputs"> </div> <button class="btn btn-primary">Submit</button> </div> </form> </div> <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script> <script> // -----------------------------PART-1--------------------------------------------------- // Will be used to add select element var 
RoomSelectId = document.getElementById("RoomSelect"); // Create select element var selectElement = document.createElement("select"); selectElement.setAttribute("id", "RoomMenu"); selectElement.setAttribute("class", "form-control mb-4"); // Drying room 1 var dryingRoom1 = document.createElement("option"); dryingRoom1.value = "DryingRoom1"; dryingRoom1.text = "Drying Room 1"; selectElement.appendChild(dryingRoom1); // Drying room 2 var dryingRoom2 = document.createElement("option"); dryingRoom2.value = "DryingRoom2"; dryingRoom2.text = "Drying Room 2"; selectElement.appendChild(dryingRoom2); // Dry Store var dryStore = document.createElement("option"); dryStore.value = "DryStore"; dryStore.text = "Dry Store"; selectElement.appendChild(dryStore); RoomSelectId.appendChild(selectElement); // -----------------------------PART-1-END----------------------------------------------- // -----------------------------PART-2--------------------------------------------------- // Creating inputs for temperature and humidity // Get div of room inputs var roomInputsId = document.getElementById("RoomInputs"); // Get all options var roomOptions = document.getElementById("RoomMenu"); // Create all inputs, such as temperature and humidity for(var i = 0; i < roomOptions.length ; i++) { var divElement = document.createElement("div"); divElement.setAttribute("class", `form-group RoomDivEl ${roomOptions.options[i].value}`); divElement.style.display = "none"; //Title var title = document.createElement("h4"); title.appendChild(document.createTextNode(roomOptions.options[i].innerHTML)); divElement.appendChild(title); // Temperature // Actual var actualTemp = document.createElement("label"); actualTemp.innerHTML = "Temperature °C - <strong>Actual</strong>"; divElement.appendChild(actualTemp); var actualTempInput = document.createElement("input"); actualTempInput.setAttribute("class", "form-control"); actualTempInput.setAttribute("type", "number"); actualTempInput.setAttribute("name", 
`ActualTemp${roomOptions.options[i].value}`); divElement.appendChild(actualTempInput); // Minimum var minTemp = document.createElement("label"); minTemp.innerHTML = "Temperature °C - <strong>Minimum</strong>"; divElement.appendChild(minTemp); var minTempInput = document.createElement("input"); minTempInput.setAttribute("class", "form-control"); minTempInput.setAttribute("type", "number"); minTempInput.setAttribute("name", `minTemp${roomOptions.options[i].value}`); divElement.appendChild(minTempInput); // Maximum var maxTemp = document.createElement("label"); maxTemp.innerHTML = "Temperature °C - <strong>Maximum</strong>"; divElement.appendChild(maxTemp); var maxTempInput = document.createElement("input"); maxTempInput.setAttribute("class", "form-control"); maxTempInput.setAttribute("type", "number"); maxTempInput.setAttribute("name", `maxTemp${roomOptions.options[i].value}`); divElement.appendChild(maxTempInput); // Actual var actualHumidity = document.createElement("label"); actualHumidity.innerHTML = "Relative Humidity - <strong>Actual</strong>"; divElement.appendChild(actualHumidity); var actualHumidityInput = document.createElement("input"); actualHumidityInput.setAttribute("class", "form-control"); actualHumidityInput.setAttribute("type", "number"); actualHumidityInput.setAttribute("name", `actualHumidity${roomOptions.options[i].value}`); divElement.appendChild(actualHumidityInput); // Invisible input box to be used to get Room Name var invisibleRoomName = document.createElement("input"); invisibleRoomName.setAttribute("name", `RoomName${roomOptions.options[i].value}`); invisibleRoomName.setAttribute("value", `${roomOptions.options[i].innerHTML}`); invisibleRoomName.style.display = "none"; divElement.appendChild(invisibleRoomName); // Minimum var minHumidity = document.createElement("label"); minHumidity.innerHTML = "Relative Humidity - <strong>Minimum</strong>"; divElement.appendChild(minHumidity); var minHumidityInput = document.createElement("input"); 
minHumidityInput.setAttribute("class", "form-control"); minHumidityInput.setAttribute("type", "number"); minHumidityInput.setAttribute("name", `minHumidity${roomOptions.options[i].value}`); divElement.appendChild(minHumidityInput); // Maximum var maxHumidity = document.createElement("label"); maxHumidity.innerHTML = "Relative Humidity - <strong>Maximum</strong>"; divElement.appendChild(maxHumidity); var maxHumidityInput = document.createElement("input"); maxHumidityInput.setAttribute("class", "form-control"); maxHumidityInput.setAttribute("type", "number"); maxHumidityInput.setAttribute("name", `maxHumidity${roomOptions.options[i].value}`); divElement.appendChild(maxHumidityInput); // Combine all into the div element roomInputsId.appendChild(divElement); } // Set default option to index of 0 for(var i = 0; i < roomInputsId.getElementsByClassName("RoomDivEl").length; i++) { roomInputsId.getElementsByClassName("RoomDivEl")[0].style.display = "block"; } // -----------------------------PART-2-END----------------------------------------------- // -----------------------------PART-3--------------------------------------------------- // Event listener to access its child class and target selected option roomOptions.addEventListener("change", (event) => { const selectOption = roomInputsId.getElementsByClassName(event.target.value); // Hide all Divs for(var i = 0; i < roomInputsId.getElementsByClassName("RoomDivEl").length; i++) { roomInputsId.getElementsByClassName("RoomDivEl")[i].style.display = "none"; } // Show selected div selectOption[0].style.display = "block"; }) // -----------------------------PART-3-END----------------------------------------------- // -----------------------------PART-4--------------------------------------------------- // Check if every temp and humidity has been done document.getElementById("form").addEventListener("submit", (e) => { // Error messages array used in the loop below var errorMessages = []; // For loop to go over each Fridge and
Freezer temperature value for(var i = 0; i < document.forms["form"].getElementsByTagName("input").length; i++) { // Checking if any values is empty if(!document.forms["form"].getElementsByTagName("input")[i].value) { errorMessages.push("Fill"); } } if(errorMessages.length > 0) { e.preventDefault(); document.getElementById("errors").innerHTML = '<div class="alert alert-danger" role="alert"><p><strong>Please complete all the Temperatures and Humidity Checks</strong></p></div>'; } }); // -----------------------------PART-4-END----------------------------------------------- </script> Answer: setAttribute? When assigning properties to elements, I'd prefer to use dot notation assignment instead of setAttribute - it's more concise and a bit easier to read and write. For example: selectElement.setAttribute("id", "RoomMenu"); selectElement.setAttribute("class", "form-control mb-4"); can turn into selectElement.id = 'RoomMenu'; selectElement.className = 'form-control mb-4'; Use modern syntax It's 2020. For clean, readable code in a reasonably professional project, I'd recommend writing in the latest and greatest version of the language - or at least in ES2015. Modern syntax offers quite a few benefits, such as the ability to use const, concise arrow functions, and much more. If you're worried about browser compatibility, use Babel to transpile your code automatically into ES5 for production, while keeping the source code modern, readable, and concise. You're already using template literals (which are ES2015) - might as well go the rest of the way. Element creation DRYing You create a lot of elements dynamically, and then assign various properties and attributes. You could do this more elegantly by abstracting it into a function, and then calling that function whenever you need to make an element. 
For example, for Part 1, you could do: const createElement = (tagName, parent, properties) => { const element = parent.appendChild(document.createElement(tagName)); Object.assign(element, properties); return element; }; // Will be used to add select element const RoomSelectId = document.getElementById("RoomSelect"); const selectElement = createElement( 'select', RoomSelectId, { id: 'RoomMenu', className: 'form-control mb-4' } ); createElement( 'option', selectElement, { value: 'DryingRoom1', text: 'Drying Room 1' } ); createElement( 'option', selectElement, { value: 'DryingRoom2', text: 'Drying Room 2' } ); createElement( 'option', selectElement, { value: 'DryStore', text: 'Dry Store' } ); And so on. Don't re-select elements you already have If you have a reference to an element already, eg: var selectElement = document.createElement("select"); selectElement.setAttribute("id", "RoomMenu"); Then there's no need to select it again later with var roomOptions = document.getElementById("RoomMenu"); That adds extra computation for no reason, and is confusing. Just keep using the old variable name of selectElement. Manual iteration? Having to mess with indicies of an array manually is a bit ugly. When you have a collection you want to iterate over, and you don't care about the indicies, don't iterate over the indicies if possible - instead, just iterate over the collection. 
For example, in Part 2, instead of for(var i = 0; i < roomOptions.length ; i++) { // numerous references to roomOptions.options[i] you can use for(const option of roomOptions.options) { // numerous references to option Text insertion You do: title.appendChild(document.createTextNode(roomOptions.options[i].innerHTML)); There are 2 issues here: When you start with an empty element and want to populate it with text, it's easier to assign to its textContent than to go through document.createTextNode and appendChild Unless you're deliberately setting or retrieving HTML markup from an element, it's more appropriate to use textContent than innerHTML. The code above can be replaced by: title.textContent = option.textContent; This applies to other areas of the code as well. You have many instances of .innerHTML = when you're only assigning text, so you should assign to the .textContent instead. (Using innerHTML is not only less appropriate and potentially slower, but it can result in arbitrary code execution when the HTML being set isn't trustworthy) Selectors and array methods are great You have: var errorMessages = []; // For loop to go over each Fridge and Freezer temperature value for(var i = 0; i < document.forms["form"].getElementsByTagName("input").length; i++) { // Checking if any values is empty if(!document.forms["form"].getElementsByTagName("input")[i].value) { errorMessages.push("Fill"); } } if(errorMessages.length > 0) { e.preventDefault(); document.getElementById("errors").innerHTML = '<div class="alert alert-danger" role="alert"><p><strong>Please complete all the Temperatures and Humidity Checks</strong></p></div>'; } Rather than document.forms['form'].getElementsByTagName("input"), you can use a selector string to select the inputs which are children of #form: document.querySelectorAll('#form input'); Selector strings are terse, flexible, and correspond directly to CSS selectors, and so are probably the preferred method of selecting elements. 
(You can change to this method in other places in the code as well, such as when you define selectOption) Since you want to check if any of the inputs have an empty value, rather than pushing unexamined values to an array, use Array.prototype.some instead: if ([...document.querySelectorAll('#form input')].some(input => !input.value)) { e.preventDefault(); document.getElementById("errors").innerHTML = '<div class="alert alert-danger" role="alert"><p><strong>Please complete all the Temperatures and Humidity Checks</strong></p></div>'; } Variable names You have: roomInputsId: This is an element, not an ID; better to remove the "Id" suffix. But since it's an element, not multiple elements: the s makes it sound plural when it's not. Maybe call it roomInputsContainer instead? RoomSelectId: Similar to above, but it's also using PascalCase. Ordinary variables in JS nearly always use camelCase - reserve PascalCase for classes, constructors, and namespaces, for the most part. roomOptions: Like the first - this is a single element, so it shouldn't be plural Overall If this is for something professional that needs to be maintained, and you have to do this sort of thing frequently on pages (dynamically creating, appending, removing, validating elements), I'd consider using a standard framework instead; they're a bit more maintainable in the long run over multiple developers and multiple years.
{ "domain": "codereview.stackexchange", "id": 39438, "tags": "javascript, html" }
What happens to a body if it rotates extremely fast?
Question: I am thinking of an object, e.g. a ball or planet, that starts rotating with increasing speed. Let's assume that its speed gets closer to the speed of light; what happens to this object? There are several forces acting. But I always get caught thinking that it will get heavier and heavier because of the additional energy which is needed to accelerate it. Is that all? Or does anything else interesting happen? Answer: A phenomenon which has been observed in stars and planets is that when the body rotates faster and faster, the object becomes more and more oblate, bulging at the equator, thus some points in the object are farther away from the rotation axis, increasing the relative moment of inertia and requiring more torque to accelerate the body by the same amount. This effect has also been observed in everyday objects and in everyday devices. For example, the governors on steam engines exhibit this effect. When it spins slowly, the weights are almost all the way in. When it rotates faster, the weights swing outward and are further away from the shaft, requiring more torque to maintain constant angular acceleration.
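The torque argument can be made concrete with a toy calculation: for a point mass $m$ at radius $r$ from the axis, $I = mr^2$ and $\tau = I\alpha$, so moving the mass outward raises the torque needed for the same angular acceleration quadratically. A sketch with made-up numbers:

```python
# Point mass m at radius r from the axis: I = m r^2, torque = I * alpha
def torque_needed(m, r, alpha):
    """Torque (N*m) to give a point mass m (kg) at radius r (m)
    an angular acceleration alpha (rad/s^2)."""
    return m * r**2 * alpha

m, alpha = 1.0, 2.0
print(torque_needed(m, 0.1, alpha))   # governor weights close to the shaft
print(torque_needed(m, 0.2, alpha))   # weights swung outward: 4x the torque
```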
{ "domain": "physics.stackexchange", "id": 28504, "tags": "newtonian-mechanics, special-relativity, rotation" }
how to formalize the class(?) of computational models and their equivalence
Question: Introductory books to theoretical computer science usually introduce the Turing machine and some of its variants, as well as the Random Access machine, as computational models. Sometimes more specialized or even exotic systems (such as quantum computers) can be found briefly sketched. The equivalence of these models (say two models A and B) is usually proven by mimicking the mechanics of A with the "gears" of B and then providing "cheap" conversion functions which convert input of A to input of B and output of B back to output of A - the same is then done for A and B reversed. Whereas proofs provide valid statements about well-defined mathematical objects, I don't think they prove what ought to be proved, namely the equivalence of two computational models, because that question is not mathematically defined. I doubt that this can be done. First of all, the conversion functions between the different input/output formats cannot necessarily be formalized as computations, because they act in "both worlds" (say binary strings and a cell setting in the game of life), and thus require a new computational model for themselves. Hence at least two further computational models are introduced. The triviality of the conversion functions doesn't seem to be formalized. - One might argue that the conversions are the most vital part of the proof, because the "mimicry" of one model A within another model B has no mathematically well-defined relation with the model A. Second, I wonder whether the "class" of computational models can be formalized. I don't think it is a set, because, for example, you could build several Turing machines with 0 and 1 represented by arbitrary unequal sets. - One might define a category of computational models, with the arrows, say from object A to object B, present when A can be simulated by B. This raises the question of what we would want the objects to be. I do not know.
I regard these points, namely (i) not having a formal definition of what it means for the conversion functions to be trivial, and (ii) having no definition of a (Turing-equivalent) model of computation, as unsatisfying. Does anybody know whether and how this has been treated in the literature so far? I am aware this touches on the question of what to call "computation", but so do the usual practices in computer science anyway, in my opinion. Answer: I recommend that you look at realizability theory. In realizability computational models are known as partial combinatory algebras (PCA). They cover a wide range of computational models. There is a 2-category of PCAs in which we can speak not only of equivalence, but also about general morphisms from one computational model to another. Some references: Jaap van Oosten's book on realizability. John Longley's Ph.D. thesis. John's Ph.D. is probably the best source to read about comparison of PCAs. An abridged version can be found in my Ph.D. thesis. I should point out that realizability theory is just one way of organizing and structuring the world of computational models in a mathematically precise way. Nevertheless, it is a very sophisticated machinery which shows what can be done.
{ "domain": "cstheory.stackexchange", "id": 954, "tags": "reference-request, lo.logic, computability" }
Understanding the quantum physics behind light polarization
Question: I'd like to use the following exercise to reinforce my understanding of some basic concepts of quantum states. Here's a picture of the setup: Now, I'll try and make my series of questions about the insights taken from this exercise as clear as possible. First question - why is the intensity after going through the first slit $1/2 \ I_0$ regardless of slit orientation? Firstly, I attempted the following, which was flat out wrong (also I don't know how to use Bra-ket notation here, I'm going to use unit vectors instead... hopefully that's okay? Please excuse this hideous 'notation', just trying to illustrate my confusions): $$V' = \cos{(7\pi/12)} \hat V + \sin{(7\pi/12)}\hat H$$ Where $V'$ is the new vertical polarization state, and $\hat V$ and $\hat H$ are the vertical and horizontal states respectively. I posited this because $\hat V$ and $\hat H$ represent the basis vectors $(0,1)$, $(1,0)$ that represent the vertical and horizontal aperture arrangements of the filters. We are instructed, as far as I could understand, to construct a new basis vector when there is an arbitrary aperture angle as a linear combination of the typical horizontal and vertical arrangements, which led to something like this. This led me to state the probability for absorption by the vertical filter to be: $$\cos^2(7\pi/12) \approx 0.06$$ It's clear that this stuff is fuzzy to me, but hopefully my confusion is laid relatively bare enough. $7\pi/12$ is merely the angle I took from $\theta_0 = 0$, so $\alpha = \pi/12 + \pi/2$. This was an attempt to answer the first part of the question, which I'm aware was an answer of probability and not intensity, but I wasn't sure how to do this any other way. Anyway, that garbage attempt for physics aside, I'm told that the answer to this is that the intensity is halved from the first aperture, regardless of orientation of the slit.
I don't really know why - my only guess is that it has something to do with the light being unpolarized. But, I mean, thinking off the cuff, if the light is a beam with a diameter roughly equal to the filter, then the filter would have to be a gap half the size of the circle in order for only half the light to make it through! I don't have any good justification as to why this doesn't make sense. Second question - what is the sense behind the approach to find the probability for a photon to be detected by a PMT placed after the final filter? My lecturer's answer states that after the light goes through the first slit, it is polarized along the $\alpha$ axis, which I think means it basically forms a line parallel and on top of the slightly diagonal, dotted line in the first filter. He contends that, since it makes an angle $\alpha + \beta$ with the transmission axis of the second filter (which is not apparent to me visually), the beam is reduced by a factor $$\cos^2(\alpha + \beta)$$ and by a similar argument for the third filter: $$\cos^2(\beta + \gamma)=\cos^2(55^\circ) \approx 0.33$$ Which leads to a final intensity of $I \approx 0.068 I_0$. So my specific confusions I need addressed are: Why the intensity after going through the first slit is $1/2 \ I_0$ regardless of slit orientation? Why we use a cosine squared term. I thought I knew why in my screw up above but I don't think I do anymore. Why, in order to find the proper new orientation of polarization, we add the previous angle with the new one as our cosine argument. Answer: Your question is interesting as the answer involves more than basic quantum mechanics. Unpolarized light is actually best described as a mixed state, rather than a pure state. Basically, unpolarized light is a statistical superposition of two perpendicular polarizations in the sense that half of the photons have one polarization, and half of the photons the other.
Mixed states are described by a density matrix, which here would take the form \begin{align} \rho &=\frac{1}{2}\vert H\rangle\langle H\vert +\frac{1}{2}\vert V\rangle\langle V\vert\, ,\\ &=\frac{1}{2}\vert \updownarrow \rangle\langle \updownarrow\vert +\frac{1}{2}\vert \leftrightarrow \rangle\langle \leftrightarrow\vert \tag{1} \end{align} In mixed states, there is no possibility for the vertically polarized and horizontally polarized states to interfere, and the factors of $\frac{1}{2}$ are the statistical weights of each polarization. Mixed states are represented by operators rather than kets. There is a subtle difference between the interpretation of statistical weights, which are also probabilities, and the probabilities obtained by overlaps like $\vert\langle\psi\vert\phi\rangle\vert^2$: both types of probabilities have different origins. One is tied to incoherent averages, as referred to in the linked wiki page. Moreover, by symmetry, "which one" of the two polarizations is completely undefined if the light is completely unpolarized. In other words, the completely unpolarized light is equally well described by the mixture $$ \rho =\frac{1}{2}\vert \nearrow \rangle\langle \nearrow\vert +\frac{1}{2}\vert \nwarrow\rangle\langle \nwarrow\vert\, . $$ (unfortunately, slanted double arrows are not so easily accessible so $\nearrow$ is a stand-in for $\updownarrow$ rotated by $45^\circ$ to the right, and $\nwarrow$ is a stand-in for $\leftrightarrow$ rotated by $45^\circ$ to the right) or for that matter, using any slanting of $\updownarrow$ and $\leftrightarrow$. If you set your polarizer along the vertical axis, you are making (in the parlance) a projective measurement and the outcome is always the pure state $\vert \updownarrow\rangle\langle \updownarrow\vert $. Since the statistical weight of this state is $1/2$, you eliminate half of the intensity.
Since there is nothing special about $\vert \updownarrow\rangle\langle \updownarrow\vert $, this will be true of any orientation. This is, in the formalism of mixed states, the content of the statement immediately below your figure. After the first polarizer, the state is pure, and pure states are those that you are familiar with. We can do away with all the density matrix stuff and think of the state as the ket $\vert \updownarrow\rangle$, or any slanting thereof where the slanting would be parallel to the axis of the polarizer. Your $V'$ state is described by a ket $$ \vert V'\rangle =\cos\theta \vert \updownarrow\rangle +\sin\theta \vert \leftrightarrow\rangle\, . \tag{2} $$ What is the difference between (1) and (2)? In (1) it is not possible to find an orientation of the polarizers that will result in no transmission of the light. If there were such an orientation, light would be polarized at $90^\circ$ with respect to this direction. In (2) on the other hand, the ket $$ \vert H'\rangle = -\sin\theta \vert \updownarrow\rangle +\cos\theta \vert \leftrightarrow\rangle $$ is orthogonal to $\vert V'\rangle$ so the probability of obtaining the outcome $H'$ as a result of having the system initially in $\vert V'\rangle$ is $ \vert \langle H'\vert V'\rangle\vert^2=0$, i.e. no light would pass through a filter aligned parallel to the $V'$ direction if light was initially described by the polarization ket $\vert H'\rangle$. Note that, from (2), you can recover Malus' law. The probability of light initially polarized as $\vert \updownarrow\rangle$ to exit in the state $\vert V'\rangle$ is $$ \vert \langle \updownarrow\vert V'\rangle \vert^2 =\cos^2\theta $$ if $V'$ makes an angle $\theta$ with the vertical. The attenuation in intensity from Malus' law follows immediately from the (discrete) probability of a photon going through two polarizers set at a relative angle $\theta$.
Finally, it is possible to have partially polarized states, in the sense that (1) generalizes to $$ \rho= a\vert H\rangle\langle H\vert +b\vert V\rangle\langle V\vert $$ where $a+b=1$: $a$ is the statistical probability of finding an "H" photon and $b$ the probability of finding a "V" photon in the beam. In this case, and contrary to (1), there is an orientation where the intensity of light will be minimal but non-zero. The best reference on density matrices in two-level systems, with a discussion framed in terms of a Stern-Gerlach magnet but otherwise immediately applicable to polarization, is the first chapter of Blum, Karl. Density Matrix Theory and Applications. Springer Science & Business Media, 2013, available on Google Books. Indeed, section 1.1.2 is on polarization (in terms of Pauli matrices).
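As an added illustration (not part of the original answer), the statements above can be checked numerically in a few lines of Python: the 50/50 mixture is the same operator $\mathbb{1}/2$ in any basis, a vertical polarizer transmits half the intensity of unpolarized light, and the overlap of the pure state (2) with $\vert\updownarrow\rangle$ reproduces Malus' law.

```python
import numpy as np

theta = 0.3  # polarizer angle in radians, arbitrary

# Basis kets standing in for |updownarrow> and |leftrightarrow>
V = np.array([1.0, 0.0])
H = np.array([0.0, 1.0])

# Unpolarized light: rho = 1/2 |V><V| + 1/2 |H><H|
rho = 0.5 * np.outer(V, V) + 0.5 * np.outer(H, H)

# The same mixture built from the 45-degree rotated pair gives the same rho
R = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
              [np.sin(np.pi / 4),  np.cos(np.pi / 4)]])
Vp, Hp = R @ V, R @ H
rho_rot = 0.5 * np.outer(Vp, Vp) + 0.5 * np.outer(Hp, Hp)
assert np.allclose(rho, rho_rot)       # the mixture is basis independent
assert np.allclose(rho, np.eye(2) / 2)

# Projective measurement along V transmits Tr(rho |V><V|) = 1/2 of the intensity
P_V = np.outer(V, V)
print(np.trace(rho @ P_V))             # 0.5

# Malus' law for the pure state (2): |<updownarrow|V'>|^2 = cos^2(theta)
V_theta = np.cos(theta) * V + np.sin(theta) * H
print(abs(V @ V_theta) ** 2, np.cos(theta) ** 2)
```

The first assertion is exactly the "equally well described" claim: no measurement can distinguish which decomposition of $\mathbb{1}/2$ was used.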
{ "domain": "physics.stackexchange", "id": 48073, "tags": "optics, visible-light, polarization, quantum-optics" }
change path of roscd
Question: I am following the tutorials of the ros man page. I have a query. Whenever I use roscd beginner_tutorials I get: /catkin_ws/install/share/beginner_tutorials$ How can I change roscd so that it gives me the path of beginner_tutorials in the /catkin_ws/devel/lib/beginner_tutorials folder Originally posted by Asfandyar Ashraf Malik on ROS Answers with karma: 729 on 2013-06-11 Post score: 7 Answer: This question is directly related to a problem you are trying to solve in your previous question. Make sure you are familiar with this tutorial. In short, there is an install space and a development space. If you source devel/setup.bash then roscd beginner_tutorials should bring you to ~/catkin_ws/src/beginner_tutorials. But if you source install/setup.bash then it will bring you to the directory where the package was installed. Originally posted by Boris with karma: 3060 on 2013-06-11 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14507, "tags": "ros, ros-groovy, roscd" }
Remove unwanted characters from a string
Question: From what I've seen in other posts, if I actually know the unwanted characters, then I can do string.replace(). But in my case, any characters can appear in my input string, and I only want to keep the characters that I want (without messing up the order, of course). private string RemoveUnwantedChar(string input) { string correctString = ""; for (int i = 0; i < input.Length; i++) { if (char.IsDigit(input[i]) || input[i] == '.' || input[i] == '-' || input[i] == 'n' || input[i] == 'u' || input[i] == 'm' || input[i] == 'k' || input[i] == 'M' || input[i] == 'G' || input[i] == 'H' || input[i] == 'z' || input[i] == 'V' || input[i] == 's' || input[i] == '%') correctString += input[i]; } return correctString; } Characters that I want: 0123456789 numkMGHzVs%- How can I tidy this code to be neater and more readable? Answer: By having a const string which contains all of your wanted chars, you could do either a simple call to Contains() or check if IndexOf() will return a value > -1. Using string concatenation in a loop is mostly a bad idea. Use a StringBuilder instead. Omitting braces {}, although they are optional for single-line if statements, is a bad idea because it makes your code error-prone. Implementing the mentioned points will lead to private const string allowedCharacters = "numkMGHzVs%-."; private string RemoveUnwantedChar(string input) { StringBuilder builder = new StringBuilder(input.Length); for (int i = 0; i < input.Length; i++) { if (char.IsDigit(input[i]) || allowedCharacters.Contains(input[i])) { builder.Append(input[i]); } } return builder.ToString(); } @Caricorc made a good suggestion in the comments: In my opinion allowedCharacters should be an argument to the function to allow reusability. So pass allowedCharacters as an optional parameter, with an additional check using IsNullOrEmpty(). 
If performance is an issue, you could also pass a HashSet<char> to the method or have an overloaded method like so private string RemoveUnwantedChar(string input, string allowedCharacters = "0123456789numkMGHzVs%-.") { if (string.IsNullOrEmpty(allowedCharacters)) { return input; } return RemoveUnwantedChar(input, new HashSet<char>(allowedCharacters)); } private string RemoveUnwantedChar(string input, HashSet<char> allowedCharacters) { if (allowedCharacters.Count == 0) { return input; } StringBuilder builder = new StringBuilder(input.Length); for (int i = 0; i < input.Length; i++) { if (allowedCharacters.Contains(input[i])) { builder.Append(input[i]); } } return builder.ToString(); } you can reuse it somewhere else.
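The allowed-set idea translates directly to other languages. Here is a sketch of the same filter in Python (an added illustration, not part of the original review), where a `set` plays the role of the `HashSet<char>` for O(1) membership tests and `str.join` over a generator plays the role of the `StringBuilder`:

```python
# Default allowed characters mirror the question: digits plus "numkMGHzVs%-."
ALLOWED = set("0123456789numkMGHzVs%-.")

def remove_unwanted_chars(text: str, allowed=ALLOWED) -> str:
    """Keep only the characters in `allowed`, preserving their order."""
    if not allowed:
        # Mirrors the IsNullOrEmpty / Count == 0 early return in the C# version
        return text
    return "".join(ch for ch in text if ch in allowed)

print(remove_unwanted_chars("12.5 kHz (approx!)"))  # -> 12.5kHz
```

As in the C# version, passing the allowed set as a parameter keeps the function reusable for other character whitelists.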
{ "domain": "codereview.stackexchange", "id": 16554, "tags": "c#, strings" }
Local vs. Global Parameters? In a Launch File?
Question: Hello, What is the difference between setting a parameter in a launch file like this (i.e. outside of a node tag): <param name="param01" value="value01" /> <node name="node01" pkg="package01" type="node01" /> and this (i.e. wrapped inside of a node tag): <node name="node01" pkg="package01" type="node01"> <param name="param01" value="value01" /> </node> Thanks a lot! Originally posted by gavinmachine on ROS Answers with karma: 353 on 2011-11-25 Post score: 1 Answer: The first example defines a "relative" name. The second defines a "private" parameter. Most ROS nodes look for parameters in their private namespace, so the second is generally preferred. But, there are sometimes reasons to define parameters at a higher level, as in the first example. Originally posted by joq with karma: 25443 on 2011-11-25 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by dornhege on 2011-11-25: It's relative, not global. When pushed into a namespace it will not be at the top level anymore (unless you use the /). When using the second variant, the parameters are private without the ~ already. Comment by gavinmachine on 2011-11-25: Would the top parameter also be considered a global? Also, does that mean that private param names should always start with "~" and global with "/" (following the link you provided)?
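As an added sketch (the package and node names are the question's own placeholders), pushing both forms into a namespace shows where each parameter ends up on the parameter server, which is the point raised in the comments:

```xml
<launch>
  <group ns="ns01">
    <!-- relative name: resolves to /ns01/param01 -->
    <param name="param01" value="value01" />
    <node name="node01" pkg="package01" type="node01">
      <!-- private name: resolves to /ns01/node01/param01,
           which node01 reads as ~param01 (no ~ needed in the launch file) -->
      <param name="param01" value="value01" />
    </node>
  </group>
</launch>
```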
{ "domain": "robotics.stackexchange", "id": 7422, "tags": "ros, roslaunch, parameter" }
How to know if a Feynman diagram is planar?
Question: A planar diagram is defined as being one of the leading diagrams for $N \to \infty$ (large $N$ expansion), and, as I understand it, it should have the lowest genus when compared to a non-planar diagram. It is of course very useful to be able to distinguish planar from non-planar just by looking at the diagram without having to compute color factors every time. To illustrate my misunderstanding, let us consider the two following diagrams: Here the circles top and bottom correspond to trace, i.e. these diagrams represent a two-point function with composite operators of the form: $$\mathcal{O}_k (x) \propto \text{Tr}\ T^{a_1} T^{a_2} T^{a_3} T^{a_4} \phi^{a_1} \phi^{a_2} \phi^{a_3} \phi^{a_4}, \tag{1}$$ with the $a$'s corresponding to color indices and the $\phi$'s are scalar fields. I would expect diagram $(2)$ to be non-planar, however when I do the computation for each of them I find that both diagrams have the same planarity, since: $$\begin{align} \text{Tr}\ T^a T^b T^c T^d\ \text{Tr}\ T^a T^b T^c T^d & = \frac{1}{2} \text{Tr}\ T^a T^b T^c T^a T^b T^c \\ &= \frac{1}{4} \text{Tr}\ T^a T^b\ \text{Tr}\ T^a T^b \\ & = \frac{1}{8} \text{Tr}\ T^a T^a \\ &= \frac{N^2}{16}, \end{align} \tag{2.a}$$ and $$\begin{align} \text{Tr}\ T^a T^b T^c T^d\ \text{Tr}\ T^b T^a T^c T^d & = \frac{1}{2} \text{Tr}\ T^a T^b T^c T^b T^a T^c \\ &= \frac{1}{4} \text{Tr}\ T^a T^b\ \text{Tr}\ T^b T^a \\ &= \frac{1}{4} \text{Tr}\ T^a T^b\ \text{Tr}\ T^a T^b \\ &= \frac{N^2}{16}, \end{align} \tag{2.b}$$ where I made heavy use of the following identities: $$\text{Tr}\ T^a A\ \text{Tr}\ T^a B = \frac{1}{2} \text{Tr}\ A B, \tag{3.a}$$ $$\text{Tr}\ T^a A T^a B = \frac{1}{2} \text{Tr}\ A\ \text{Tr}\ B. \tag{3.b}$$ Is that right? If yes, how can I distinguish planarity diagrammatically then? Answer: Concerning OP's diagram (1) & (2) and OP's calculations, note that the labelling of the second vertex is reversed, i.e. 
the color factor becomes $$ {\rm Tr}(T^a T^b T^c T^d) {\rm Tr}(T^d T^c T^b T^a)~\stackrel{(3.a')+(3.b')}{=}~({\rm Tr}\mathbb{1})^4+\text{subleading terms}, \tag{2.a'}$$ and $$ {\rm Tr}(T^a T^b T^c T^d) {\rm Tr}(T^d T^c T^{\color{red}{a}} T^{\color{red}{b}})~\stackrel{(3.a')+(3.b')}{=}~({\rm Tr}\mathbb{1})^3 +\text{subleading terms},\tag{2.b'}$$ respectively. We see that diagram (2) has fewer index contractions [i.e. factors of ${\rm Tr}\mathbb{1}=N$], which is a hallmark of a non-planar diagram. Here we have repeatedly used the formulas $${\rm Tr}( T^a A) {\rm Tr}( T^a B)~=~ {\rm Tr}(A B)+ \text{subleading terms}, \tag{3.a'}$$ and $${\rm Tr}( T^a A T^a B) ~=~ {\rm Tr}(A){\rm Tr}(B)+\text{subleading terms}. \tag{3.b'}$$ References: D. Tong, Gauge theory lecture notes; chapter 6.
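As an added numerical sanity check (not part of the original answer): the simplified identities (3.a)–(3.b) hold exactly, with no subleading terms, for a Hermitian basis of U(N) normalized so that ${\rm Tr}(T^aT^bT) = \delta^{ab}/2$ (for SU(N) they pick up $1/N$ corrections). With such a basis one can contract the two trace pairings by brute force and see that the planar pairing saturates the maximal power of $N$ while the reordered one is suppressed.

```python
import numpy as np

def un_generators(N):
    """Hermitian basis T^a of U(N), normalized so Tr(T^a T^b) = delta_ab / 2."""
    gens = []
    for i in range(N):
        D = np.zeros((N, N), dtype=complex)
        D[i, i] = 1.0 / np.sqrt(2.0)               # diagonal generators
        gens.append(D)
        for j in range(i + 1, N):
            S = np.zeros((N, N), dtype=complex)    # real symmetric combination
            S[i, j] = S[j, i] = 0.5
            A = np.zeros((N, N), dtype=complex)    # imaginary antisymmetric combination
            A[i, j], A[j, i] = -0.5j, 0.5j
            gens.extend([S, A])
    return np.array(gens)

N = 3
T = un_generators(N)

# t4[a, b, c, d] = Tr(T^a T^b T^c T^d)
t4 = np.einsum('aij,bjk,ckl,dli->abcd', T, T, T, T)

# Planar pairing Tr(TaTbTcTd) Tr(TdTcTbTa) versus the reordered pairing of (2.b')
planar = np.einsum('abcd,dcba->', t4, t4).real
nonplanar = np.einsum('abcd,dcab->', t4, t4).real

print(planar, N**4 / 16)   # planar contraction saturates the maximal power of N
print(nonplanar)           # suppressed: fewer closed index loops
```

With the exact U(N) completeness relation, the planar contraction evaluates to $N^4/16$, matching the $({\rm Tr}\,\mathbb{1})^4$ counting (up to the $1/2$ normalization factors dropped in (3.a')–(3.b')), while the reordered contraction comes out parametrically smaller.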
{ "domain": "physics.stackexchange", "id": 67824, "tags": "quantum-field-theory, feynman-diagrams, topology, large-n" }
Boyd & Vandenberghe, question 2.31(d). Stuck on simple problem regarding interior of a dual cone
Question: Crossposted at Mathematics SE and MathOverflow In Boyd & Vandenberghe's "Convex Optimization", question 2.31(d) asks to prove that the interior of the dual cone $K^*$ is equal to (1) $\text{int } K^* = \{ z \mid z^\top x > 0 $ for all $ x \in K \}.$ Recall that the dual cone of a cone K is the set: $K^* = \{ y \mid y^\top x \ge 0 $ for all $ x \in K \}.$ I've spent a solid chunk of time trying to prove this simple and seemingly evident statement about the interior of the dual cone but keep getting stuck on the problem of bounding the inner product of x with a perturbed version of z (details provided below). Frustratingly, the proof in the book's answer key (available online) takes as given this very fact that I am stuck on proving. My work in proving statement (1) is given below. I hope someone can show me the piece of mathematical technology I'm missing. Thanks! Question 2.31(d): Let $K$ be a convex cone and $K^* = \{ y \mid y^\top x \ge 0 $ for all $ x \in K \}$ be its dual cone. Let $S = \{ z \mid z^\top x > 0 $ for all $ x \in K \}.$ Show that $S = \text{int } K^*.$ Certainly $S \subseteq K^*.$ Now consider some arbitrary point $z_0 \in S$. For all $x \in K$ we have $z_0^\top x > 0$. It's clear that we need to find an $\epsilon$ such that for all $z' \in D(z_0, \epsilon)$, $~~~~ z'^\top x > 0 $ for all $ x \in K.$ Unfortunately, I don't know how to show $z'^\top x > 0$ for $z' \in D(z_0, \epsilon)$ when $\epsilon$ is chosen sufficiently small. I do know we can write $z'$ as $z_0 + \gamma u$ where $\|u\| = 1$ and $\gamma \in (0,\epsilon)$. And I do know that $~~~~ z_0^\top x - \gamma \|x\| ~\le~ z_0^\top x + \gamma u^\top x ~\le~ z_0^\top x + \gamma \|x\|$, where $z_0^\top x > 0$ and $\gamma \|x\| \ge 0.$ However, I don't know how to show the critical piece, that $~~~~ z_0^\top x - \gamma \|x\| > 0$ when $\epsilon$ is chosen sufficiently small, since $x$ can range over $K$ and therefore $\|x\|$ can be arbitrarily large. 
Frustratingly, the solution in B&V just takes it for granted that for sufficiently small $\epsilon$, $~~~~ z_0^\top x + \gamma u^\top x > 0$. I've looked online for some matrix perturbation theory results to apply but nothing I've found has been useful. Any help is greatly appreciated, Ted Answer: Arnaud is correct. The critical piece I was missing is that for any $z$, $z^\top x > 0 \iff z^\top x / \|x\| > 0.$ So pick $z \in S$. Let $x_0 = \text{arg inf } \{ z^\top x \mid x \in \text{cl}(K) \setminus \{0\}, \|x\|=1 \}$ and let $c_{x_0} = z^\top x_0 > 0$. We know $x_0$ exists because the set $\text{cl}(K)\cap \{x \mid \|x\| = 1 \}$ is closed and bounded. Now let $\varepsilon = c_{x_0} / 2$ so that for any $z' = z + \gamma u$ where $\|u\|=1$ and $\gamma \in (0,\varepsilon)$, we have $~~~~ z'^\top x_0 \ge c_{x_0} - \gamma > c_{x_0} / 2 > 0.$ Furthermore, for any other $x \in \text{cl}(K)\cap \{x \mid \|x\| = 1 \}$ the same bound gives $~~~~ z'^\top x \ge c_{x_0} - \gamma > 0.$ Therefore, for any $x \in \text{cl}(K) \setminus \{0\}$ we have $~~~~ z'^\top x / \|x\| > 0$ which proves $z'^\top x > 0$ whenever $z' \in D(z,\varepsilon)$.
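The compactness argument can also be seen numerically. This is an added sketch with a concrete cone (the nonnegative orthant in $\mathbb{R}^2$, for which $K^* = K$), not part of the original proof: compute $c = \inf\{z^\top x : x \in \text{cl}(K), \|x\|=1\}$ over the compact unit cap, set $\varepsilon = c/2$, and check that every perturbation of $z$ within $\varepsilon$ still has strictly positive inner product with the cap.

```python
import numpy as np

rng = np.random.default_rng(0)

# K = nonnegative orthant in R^2; z = (1, 2) lies strictly in int K* = int K
z = np.array([1.0, 2.0])

# Discretize the compact set cl(K) intersected with the unit sphere
angles = np.linspace(0.0, np.pi / 2, 1001)
sphere = np.stack([np.cos(angles), np.sin(angles)], axis=1)

c = (sphere @ z).min()     # c = inf of z.x over the unit cap; here c = 1 at x = e1
eps = c / 2

# Any z' = z + gamma*u with gamma < eps keeps z'.x >= c - gamma > 0 on the cap,
# hence z'.x > 0 for every x in K \ {0} after dividing by ||x||
for _ in range(200):
    u = rng.normal(size=2)
    u /= np.linalg.norm(u)
    gamma = rng.uniform(0.0, eps)
    z_prime = z + gamma * u
    assert (sphere @ z_prime).min() > 0

print("c =", c, "eps =", eps)
```

Restricting to the unit cap is exactly what defeats the "$\|x\|$ can be arbitrarily large" obstruction in the question: the bound is established on a compact set and then scaled back up by homogeneity.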
{ "domain": "cstheory.stackexchange", "id": 1607, "tags": "ds.algorithms, optimization, convex-optimization" }
Which learning rate should I choose?
Question: I'm training a segmentation model, Unet++, on 2d images and I am now trying to find the optimal learning rate. The backbone of the model is Resnet34, I use the Adam optimizer and the loss function is the dice loss. Also, I use a few callback functions: callbacks = [ keras.callbacks.EarlyStopping(monitor='val_loss', patience=15, verbose=1, min_delta=epsilon, mode='min'), keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, verbose=1, mode='min', cooldown=0, min_lr=1e-8), keras.callbacks.ModelCheckpoint(model_save_path, save_weights_only=True, save_best_only=True, mode='min'), keras.callbacks.ReduceLROnPlateau(), keras.callbacks.CSVLogger(logger_save_path) ] I plotted the curves of training loss over epochs for a few learning rates: The validation loss and training loss seem to decrease slowly. However, the validation loss isn't oscillating (it is almost always decreasing). The validation and training losses decreased quickly in the first 2 or 3 epochs. After 6 or 7 epochs, the validation loss increases again. I have a few questions (I hope it is not too much): What is normally the best way to find the learning rate, i.e. how many epochs should I wait before considering that a learning rate isn't good? What are the criteria on the loss function to determine whether a learning rate is "good"? Is there a big difference if I use a small learning rate (which still converges) instead of the "optimal" learning rate? Is it normal that the validation loss oscillates over the training? Which learning rate should I use according to my results? Even a partial response would help me a lot. Answer: I am afraid that, besides the learning rate, there are a lot of hyperparameters for which you have to choose values, especially if you're using ADAM optimization, etc. A principled order of importance for tuning is as follows: learning rate; then momentum term, number of hidden units in each layer, and batch size. 
Then the number of hidden layers and learning rate decay. To tune a set of hyperparameters, you need to define a range that makes sense for each parameter. Given a number of different values you want to try according to your budget, you can choose hyperparameter values by random sampling. Specifically for the learning rate, though, you may want to try a wide range of values, e.g. from 0.0001 to 1, and so you should avoid sampling values uniformly from 0.0001 to 1. You can instead draw $x\in[-4,0]$ uniformly and set $a=10^x$, essentially following a logarithmic scale. As far as the number of epochs goes, you should set an early-stopping callback with patience ~= 50, depending on your "exploration" budget. This means you give up training with a certain learning rate value if there is no improvement for a defined number of epochs. Parameter tuning for neural networks is a form of art, one could say. For this reason I suggest you look at basic methodologies for non-manual tuning, such as GridSearch and RandomSearch, which are implemented in the sklearn package. Additionally, it may be worth looking at more advanced techniques such as Bayesian optimisation with Gaussian processes and Tree Parzen Estimators. Good luck! 
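The log-scale sampling described above can be sketched in a few lines (an added illustration, not code from the question). The point is that sampling $x$ uniformly and taking $10^x$ spreads trials evenly across orders of magnitude, whereas uniform sampling on $[10^{-4}, 1]$ almost never tries small learning rates:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_learning_rates(n, low_exp=-4, high_exp=0):
    """Sample learning rates log-uniformly: a = 10**x with x ~ U[low_exp, high_exp]."""
    x = rng.uniform(low_exp, high_exp, size=n)
    return 10.0 ** x

lrs = sample_learning_rates(10)
print(np.sort(lrs))

# Contrast with naive uniform sampling on [1e-4, 1]: almost no mass below 1e-2
naive = rng.uniform(1e-4, 1.0, size=10_000)
print((naive < 1e-2).mean())        # roughly 0.01 of the draws
log_scale = sample_learning_rates(10_000)
print((log_scale < 1e-2).mean())    # roughly half of the draws
```

This is why the answer recommends the $a = 10^x$ trick before handing the range to RandomSearch-style tuners.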
Randomized Search for parameter tuning in Keras Define function that creates model instance # Model instance input_shape = X_train.shape[1] def create_model(n_hidden=1, n_neurons=30, learning_rate = 0.01, drop_rate = 0.5, act_func = 'relu', act_func_out = 'sigmoid', kernel_init = 'uniform', opt= 'Adadelta'): model = Sequential() model.add(Dense(n_neurons, input_shape=(input_shape,), activation=act_func, kernel_initializer = kernel_init)) model.add(BatchNormalization()) model.add(Dropout(drop_rate)) # Add as many hidden layers as specified in n_hidden for layer in range(n_hidden): # Each hidden layer has n_neurons neurons model.add(Dense(n_neurons, activation=act_func, kernel_initializer = kernel_init)) model.add(BatchNormalization()) model.add(Dropout(drop_rate)) model.add(Dense(1, activation=act_func_out, kernel_initializer = kernel_init)) opt= Adadelta(lr=learning_rate) model.compile(loss='binary_crossentropy',optimizer=opt, metrics=[f1_m]) return model Define parameter search space params = dict(n_hidden= randint(4, 32), epochs=[50], #, 20, 30], n_neurons= randint(512, 600), act_func=['relu'], act_func_out=['sigmoid'], learning_rate= [0.01, 0.1, 0.3, 0.5], opt = ['adam','Adadelta', 'Adagrad','Rmsprop'], kernel_init = ['uniform','normal', 'glorot_uniform'], batch_size=[256, 512, 1024, 2048], drop_rate= [np.random.uniform(0.1, 0.4)]) Wrap Keras model with sklearn API and instantiate random search model = KerasClassifier(build_fn=create_model) random_search = RandomizedSearchCV(model, params, n_iter=5, scoring='average_precision', cv=5) Search for optimal hyperparameters random_search_results = random_search.fit(X_train, y_train, validation_data =(X_test, y_test), callbacks=[EarlyStopping(patience=50)])
{ "domain": "datascience.stackexchange", "id": 8615, "tags": "deep-learning, training, loss-function, optimization, learning-rate" }
The weight of a 2cc glass pharmaceutical flame sealed ampoule?
Question: This is one of the flame sealed type whose neck is broken before use. There seems to be lots of info on dimensions but none with regard to weight. Answer: I grabbed 10 empty, unsealed ones and threw them on the analytical balance and got mean: 2.4071 g, s.d.: 0.01264 g for Wheaton amber glass pre-scored vials (part number 176796). The variation between vials isn't too bad, but I don't know if they can be filled and sealed consistently enough to determine gas density like that to any useful precision and other brands may be better or worse.
{ "domain": "chemistry.stackexchange", "id": 4321, "tags": "equipment" }
Feasibility of using ROS for commercial products
Question: Hello, I have a question about the feasibility of using ROS in (not so big) commercial products. We have developed a working prototype of a mobile robot that needs to run SLAM - we are using gmapping here - and has to drive around based on the inputs from some infrared sensors, and such. (Sorry that I can't disclose any more details here.) Now we want to go ahead with commercialising the product, but right now we have a couple of computers that run ROS and the packages that we developed in order to properly operate the robot. My question is: do we HAVE TO put a computer (e.g. an SBC) inside our commercial product in order to use our current ROS based software? Or is there another way to adapt or cross compile our current ROS based software / architecture so it can be put onto an embedded board for the production units? Or what is your suggestion about what we can do with our ROS based system that we have now for our purpose? Thank you and regards. Steve J. Originally posted by stevej_80 on ROS Answers with karma: 75 on 2015-10-15 Post score: 1 Answer: Putting an SBC onto your robot will be by far the easiest. There is a community using Yocto to create images for different embedded targets. There are instructions here; the title page says hydro, but it's also linked from the indigo install page, so it should work for indigo too. Originally posted by tfoote with karma: 58457 on 2015-10-15 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 22789, "tags": "ros, embedded" }
Spectrum in 1/3 octave bands from FFT
Question: I’m looking to use an FFT to generate a frequency spectrum in 1/3 octave bands. After reading many posts on this site as well as others, I believe the appropriate approach is set out below. I’ve dry-run the calculations in Excel and the results look reasonable, but reasonable is not necessarily correct. To ask a specific question, is the method I’m using valid? In particular, I question if the method applied in Step 2 is correct. Background. I’m sampling the electric signal on the output of a pre-amp and my goal is a frequency spectrum in dB. My reference (denominator in the dB calc) is arbitrary and chosen to set an appropriate scale for the spectrum output display. I’m not attempting to take measurements from the output and don’t need to calibrate it against an objective reference. My project will be implemented in C++. Step 1. Start with a 1024 sample FFT at 48,000 Hz (513 bins at 46.9 Hz each). Result is the DFT, Fn for n = 0 – 512. Units are in V. Note, normally, I’d normalize the FFT output by multiplying each bin by 2/N (N is number of samples, i.e. 1024). However, because we eventually plot the spectra in dB relative to an arbitrary amplitude, any normalizing I do here will be washed away when I compute the results in dB so I don’t bother with it in my project. Step 2. Convert the DFT into 1/3 octave bands. I obtained the following formulas from here: https://www.ap.com/technical-library/deriving-fractional-octave-spectra-from-the-fft-with-apx/ The amplitude in V for each 1/3 octave band is: Where Lb is the amplitude for each 1/3 octave band in V, for b = 1 to 32, and gn,b is the gain multiplier for FFT bin n and 1/3 octave band b: Where fn is the frequency of bin n and fb is the center frequency of band b. k is the octave bandwidth designator, 1 for full octave and 3 for 1/3 octave. Result is Lb for b = 1 to 32 in V. Step 3. Disregard lower bands. 
Below about 250 Hz, for an FFT of length 1024, there are not enough FFT bins within each band to make the results meaningful. Therefore, we disregard the lower bands and keep only the 1/3 octave bands starting with b13 = 251 Hz. Step 4. Convert the amplitude data in each band to dB. For this computation, I'm computing the dB value relative to the approximate maximum amplitude likely to appear in any frequency band. This will be Lref. Final value in dB for each 1/3 octave frequency band above 250 Hz is: Below is a sample of individual gn,b terms (FFT bins down the left, 1/3 octave bands across the top): Assistance, input, corrections and comments appreciated. Thanks! Answer: It seems like you've by and large got it, but I did notice a couple of things. You'll probably want to include support for windowing and the corresponding correction factor as shown by AP's article on FFT noise. Rectangular windows aren't great for RMS type measurements (at least in my opinion) for all the side lobe type explanations that are common with windows. Second, I'm pretty sure you need to sum from 1 to N, not N/2. You may have realized that because the input signal is real, there is redundant data that doesn't need to be calculated twice. That is true, with the exception of the 0 and N/2 (if N is even) bins. Granted this probably wouldn't make the output too wrong, but it would be off a bit. Edit: you know what, AP may have designed those coefficients so that windowing either isn't necessary, or would do more harm than good. You may still want to try it, since it's easy enough to support, but it may not help. Also, I don't see how this would be more computationally efficient than doing the calculations directly in the time domain.
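As an added sketch of the overall pipeline (this is NOT the AP gain-multiplier method from the question; it uses a simpler rectangular band-summation variant with assumed band edges at $f_b \cdot 2^{\pm 1/6}$, and the band centers below are illustrative): take the FFT of a frame, sum per-bin power into each 1/3-octave band, and convert to dB. A 1 kHz test tone should then light up the band centered on 1 kHz.

```python
import numpy as np

fs, n = 48_000, 1024
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 1000.0 * t)           # 1 kHz test tone

spectrum = np.fft.rfft(signal)
power = np.abs(spectrum) ** 2                      # per-bin power, arbitrary scale
freqs = np.fft.rfftfreq(n, d=1.0 / fs)             # 46.875 Hz bin spacing

# Illustrative 1/3-octave band centers from 250 Hz up (Step 3 discards lower bands)
centers = 250.0 * 2.0 ** (np.arange(0, 19) / 3.0)
centers = centers[centers < fs / 2]

band_db = []
for fc in centers:
    lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # rectangular band edges
    in_band = (freqs >= lo) & (freqs < hi)
    band_power = power[in_band].sum()
    band_db.append(10 * np.log10(band_power + 1e-30))  # floor avoids log(0)

loudest = centers[int(np.argmax(band_db))]
print(loudest)   # the band containing 1 kHz dominates
```

The AP-style soft gain curve replaces the hard `in_band` mask with per-bin weights $g_{n,b}$, which reduces the sharp edge effects of this rectangular variant.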
{ "domain": "dsp.stackexchange", "id": 9524, "tags": "fft, audio, frequency-spectrum" }
Randomizing and mutating algo class - style
Question: I've updated this question to be about code style only, as all of its answers focused on this aspect. For the code's function, see Randomizing and mutating algo class - functionality algo is an algorithm that's dynamic. algo instances can be created, asked to become random, mutated and also run. They can be set to remember changes they make to themselves between runs or not. They also output values; these could be used for anything from value sequences such as mathematical number sequences to controls for a bot in a game. It's straightforward to specify limits on memory or computation steps for each as well, and needless to say they are entirely sandboxed. By sandboxed I mean that they only compute and produce output as described; they cannot, for example, use local or global variables, print to the console, #include or write to files. algos can be used where algorithms need to be portable and must only be able to calculate/compute. There is no distinction between data and instructions in an algo. A use is as values for directed search for algorithms such as with evolutionary algorithms, MCTS or others. Another is in a data file that includes algorithms, like an image that includes its own way to decompress itself, which can therefore be constructed using the specific image that is to be decompressed. They are deliberately general, being a component that could be used in many contexts and conceptually simple, as a number is. Can this code be reviewed? 
// by Alan Tennant #include <iostream> #include <vector> #include <string> #include <time.h> // for rand class algo { public: std::vector<unsigned short> code; std::vector<unsigned int> output; bool debug_info; algo() { reset1(false); instructions_p = 11; } void random(unsigned int size) { code.clear(); output.clear(); for(unsigned int n = 0; n < size; n++) { code.push_back(rand() % instructions_p);} reset1(true); } void run( unsigned long most_run_time, unsigned int largest_program, unsigned int largest_output_size, bool reset) { if (reset && !is_reset_p) { reset1(true); output.clear(); code2 = code; code_pos = 0; } is_reset_p = false; size_t code_size = code2.size(); if (debug_info && !can_resume_p) std::cout<<"can't resume, reset first"<<std::endl; if(code_size == 0 || most_run_time == 0) { out_of_time_p = true; out_of_space_p = false; run_time_p = most_run_time; } else if (can_resume_p) { unsigned short instruction; bool cont = true; if(debug_info) { std::cout<<"size: "<<code_size<<std::endl<<std::endl;} while(cont) { instruction = code2[code_pos] % instructions_p; if(debug_info) {std::cout<<code_pos<<", ";} code_pos = (code_pos + 1) % code_size; switch(instruction) { case 0: if(debug_info) {std::cout<<"end";} cont = false; can_resume_p = false; break; case 1: if(debug_info) { std::cout<<"goto p1";} code_pos = code2[(code_pos + 1) % code_size]; break; case 2: if(debug_info) { std::cout<<"if value at p1 % 2 = 0 then goto p2";} if(code2[code2[code_pos] % code_size] % 2 == 0) { code_pos = code2[(code_pos + 1) % code_size];} else { code_pos += 2;} break; case 3: if(debug_info) {std::cout<<"value at p1 = value at p2";} code2[code2[code_pos] % code_size] = code2[code2[(code_pos + 1) % code_size] % code_size]; code_pos += 2; break; case 4: if(debug_info) { std::cout<<"value at p1 = value at p2 + value at p3";} code2[code2[code_pos] % code_size] = ( code2[code2[(code_pos + 1) % code_size] % code_size] + code2[code2[(code_pos + 2) % code_size] % code_size] ) % 
USHRT_MAX; code_pos += 3; break; case 5: { if(debug_info) {std::cout<<"value at p1 = value at p2 - value at p3";} long v1 = (long)code2[code2[(code_pos + 1) % code_size] % code_size] - code2[code2[(code_pos + 2) % code_size] % code_size]; code2[code2[code_pos] % code_size] = abs(v1) % USHRT_MAX; code_pos += 3; } break; case 6: { if(debug_info) {std::cout<<"toggle value at p1";} size_t v1 = code2[code_pos] % code_size; unsigned short v2 = code2[v1]; if(v2 == 0) {code2[v1] = 1;} else {code2[v1] = 0;} code_pos++; } break; case 7: if(debug_info) { std::cout<<"output value at p1";} output.push_back(code2[code2[code_pos] % code_size]); code_pos++; break; case 8: if(debug_info) {std::cout<<"increase size";} code2.push_back(0); break; case 9: { if(debug_info) {std::cout<<"increment value at p1";} size_t v1 = code2[code_pos] % code_size; code2[v1] = (code2[v1] + 1) % USHRT_MAX; code_pos++; } break; case 10: { if(debug_info) {std::cout<<"decrement value at p1";} size_t v1 = code2[code_pos] % code_size; code2[v1] = abs((code2[v1] - 1) % USHRT_MAX); code_pos++; } break; } if(debug_info) {std::cout<<std::endl;} run_time_p++; code_size = code2.size(); code_pos = code_pos % code_size; if(run_time_p == most_run_time) { cont = false; out_of_time_p = true;} if(code_size > largest_program) { cont = false; can_resume_p = false; out_of_space_p = true; if (debug_info) std::cout<<"became too large"<<std::endl; } if(output.size() > largest_output_size) { cont = false; can_resume_p = false; output.pop_back(); if (debug_info) std::cout<<"too much output"<<std::endl; } } if (debug_info) { std::cout<<std::endl<<"size: "<<code_size<<std::endl<< std::endl<<"output: "<<std::endl; size_t output_size = output.size(); for (size_t t = 0; t < output_size; t++) std::cout<<output[t]<<std::endl; } } } void mutate(unsigned int largest_program) { output.clear(); size_t size; // special mutations while(rand() % 4 != 0) // 3/4 chance { size = code.size(); if(rand() % 2 == 0) // 1/2 chance { // a bit of code 
is added to the end (would prefer inserted anywhere) if(size < largest_program) { code.push_back(rand() % instructions_p);} } else { // a bit of code is removed from the end (would prefer removed from anywhere) if(size != 0) code.pop_back(); } // a section of code is moved, not yet implemented. } // mutate bits of the code size = code.size(); if (size > 0) { unsigned int most_mutation = unsigned int(size * 0.1f); if(most_mutation < 9) most_mutation = 8; unsigned int mutation = rand() % most_mutation; for(unsigned int n = 0; n < mutation; n++) code[rand() % size] = rand() % instructions_p; } reset1(true); } #pragma region unsigned long run_time() { return run_time_p; } bool out_of_time() { return out_of_time_p; } bool out_of_space() { return out_of_space_p; } bool can_resume() { return can_resume_p; } bool is_reset() { return is_reset_p; } private: bool can_resume_p, is_reset_p, out_of_time_p, out_of_space_p; unsigned int code_pos; unsigned short instructions_p; unsigned long run_time_p; std::vector<unsigned short> code2; void reset1(bool say) { out_of_time_p = false; out_of_space_p = false; run_time_p = 0; code2 = code; code_pos = 0; can_resume_p = true; is_reset_p = true; if (say && debug_info) std::cout<<"reset"<<std::endl; } #pragma endregion }; void main() { srand((unsigned int)time(NULL)); algo a = algo(); a.random(50); std::cout<<std::endl<<std::endl; a.run(10, 100, 10, false); std::cout<<std::endl<<std::endl; a.run(10, 100, 10, false); } I've improved the code in some of the ways described, above is the original code. It's usual for me to prioritize in this way, but I'm avoiding function calls in the main loop of run because I've found in this program they made a big difference to performance. Apart from that I like HostileFork and Anders K idea of using numeric_limits<unsigned short>::max() over USHRT_MAX. I'm not keen on int main(), but in the interests of conformity it's been changed from void main(). <cstdlib> and <ctime> have replaced <time.h>. 
From the answers, GCC doesn't seem that good, but to support it I have changed unsigned int(size * 0.1) to static_cast<unsigned int>(size * 0.1). Answer: For starters, code as written won't compile in GCC. :-( "void main() is explicitly prohibited by the C++ standard and shouldn't be used" https://stackoverflow.com/questions/204476/what-should-main-return-in-c-c Rather than #include <time.h>, #include <cstdlib>. That will correctly give you rand and abs. There used to be some flak about including anything that ends in ".h" out of the standard library and instead use the "c-prefixed-and-non-suffixed" versions, but last I checked it didn't actually make a difference. Still, some people think it does, so best not to upset them. Read this post of mine about numeric_limits, and replace your USHRT_MAX and UINT_MAX appropriately. http://hostilefork.com/2009/03/31/modern_cpp_or_modern_art/ GCC doesn't like this line: unsigned int most_mutation = unsigned int(size * 0.1); I'm not sure what the point of that is supposed to be. If you want to static cast it for clarity, I guess that's fine: unsigned int most_mutation = static_cast<unsigned int>(size * 0.1); With those changes, it compiles in GCC. Speaking of which: whatever compiler you are using...it can be helpful to have a virtual machine around to use some different ones on your code (Clang, GCC, MSVC) and see what they report. The next best-practices step is to bump the warnings all-the-way-up. I'm now using settings I got from an answer by David Stone. 
Here's what that gives us: test.cpp:226:0: error: ignoring #pragma region [-Werror=unknown-pragmas] test.cpp:272:0: error: ignoring #pragma endregion [-Werror=unknown-pragmas] test.cpp: In member function ‘void algo::run(long unsigned int, unsigned int, unsigned int, bool)’: test.cpp:107:86: error: use of old-style cast [-Werror=old-style-cast] test.cpp:67:27: error: switch missing default case [-Werror=switch-default] test.cpp: In member function ‘void algo::mutate(unsigned int)’: test.cpp:212:40: error: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Werror=sign-conversion] test.cpp:215:77: error: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Werror=sign-conversion] test.cpp:218:50: error: conversion to ‘unsigned int’ from ‘int’ may change the sign of the result [-Werror=sign-conversion] test.cpp:220:35: error: conversion to ‘size_t {aka unsigned int}’ from ‘int’ may change the sign of the result [-Werror=sign-conversion] test.cpp: In function ‘int main()’: test.cpp:277:34: error: use of old-style cast [-Werror=old-style-cast] You can address those on your own, but I'll cover the pragma philosophy. Firstly: I never indent preprocessor directives--it calls them out more clearly. But even better: don't use 'em, especially not for a fluffy IDE feature (why doesn't it use a certain comment to cue that?) Pragmas are "implementation defined", and putting little bits of dirt in your program like this is a slippery slope. GCC used to discourage this: In some languages (including C), even the compiler is not bound to behave in a sensible manner once undefined behavior has been invoked. One instance of undefined behavior acting as an Easter egg is the behavior of early versions of the GCC C compiler when given a program containing the #pragma directive, which has implementation-defined behavior according to the C standard. 
In practice, many C implementations recognize, for example, #pragma once as a rough equivalent of #include guards — but GCC 1.17, upon finding a #pragma directive, would instead attempt to launch commonly distributed Unix games such as NetHack and Rogue, or start Emacs running a simulation of the Towers of Hanoi. Now I'll just ramble about formatting and style, without trying to grok the "big picture" of what your code is for. :) This is a matter of taste, but in implementation files using namespace std; can make your code less wordy. (Doing it in headers is not considered a good practice, as it would then be inherited by all implementation files that used the headers...giving them less control over potential name collisions.) Also, it's generally considered a bad idea to make data members in your classes public. Narrowing the interface through methods gives you more wiggle room to modify the implementation without clients of the class needing to be rewritten. So: using namespace std; class algo { private: vector<unsigned short> code; vector<unsigned int> output; bool debug_info; public: algo() { /* stuff... */ I personally like to see spaces between things and logical breaks in output. So following on that I'd turn: std::cout<<std::endl<<"size: "<<code_size<<std::endl<< std::endl<<"output: "<<std::endl; size_t output_size = output.size(); for (size_t t = 0; t < output_size; t++) std::cout<<output[t]<<std::endl; ...into the much-less-claustrophobic: cout << endl; cout << "size: " << code_size << endl; cout << endl; cout << "output: " << endl; for (size_t t = 0; t < output_size; t++) { cout << output[t] << endl; } Again in the "matter of taste" department, I prefer to use the C++ keywords for logical operations, so I would write: if (reset && !is_reset_p) ...as: if (reset and (not is_reset_p)) But that's just me, since I never try to write C++ code that will still compile in C...so I figure the keywords are there, why not use 'em for readability.
Plus I use a proportional font to edit, and the ! for not can be hard to see...e.g. !list.isNull(). Also, I indent the breaks in my switch statements and not the cases. It makes it more parallel to an "if" by putting the case at the same level as the switch, and makes the cases stand out better. You also don't chew up screen space as rapidly: switch(instruction) { case 0: if (debug_info) { cout << "end"; } cont = false; can_resume_p = false; break; case 1: I also would suggest always putting braces around code in if statements, and not putting code on the same line as the condition. Beyond aesthetics, this makes edits to the condition and edits to the code land on different lines in version control systems. (And always using braces means you don't have to generate a diff on the condition line when you go from one line of code to multiple lines of code or vice-versa.) The use of unsigned types is verbose and of questionable advantage. I never really thought that reclaiming half the numeric range mattered much (most of the time). But at one point I believed it was better for quantities that were always supposed to be unsigned to be labeled thusly, for type correctness. It isn't really the win you'd think, and probably obscures as many bugs as it prevents. I won't rehash the pros and cons here...I'll just say that if you want a real type-safe solution (e.g. in computer security code) you need something like SafeInt: http://safeint.codeplex.com/ Regarding the big picture I won't invest that time. I'll just say that if you're trying to write generalized algorithms for C++, then be sure to spend a little while looking at <algorithm>: http://en.cppreference.com/w/cpp/algorithm It may provide inspiration for your design. Perhaps what you really want is a templated class with some kind of iteration interface? You might consider asking a StackOverflow question that doesn't include your code above, but rather establishes a clear usage scenario.
Then ask for suggestions on what methodology to use to meet the need. Your description is too vague, and suffers from the problem that you don't clearly define what it doesn't do...and while a term like "sandboxed" might be meaningful in your mind, it doesn't convey a clear requirement for this code to me.
{ "domain": "codereview.stackexchange", "id": 2758, "tags": "c++, algorithm, search, random, simulation" }
Do swarms show intelligence?
Question: Can you think of collective and swarm behaviors as "intelligence"? Would such a concept apply to a) ant colonies and b) fish swarms? Answer: Intelligence is something which has to have a definition, and there are many, but I would cautiously say no. The reason I say this is that swarming behavior can be largely reproduced by a simple set of rules - matching distance, direction, and speed to your neighbors. To me this really removes any intention or even conscious element from swarm behavior. Although it's clear individuals do make decisions, they don't have to be more intelligent than a mouse looking for food or other kinds of animal behavior. That being said, others have literally built definitions of intelligence inspired by swarming - a decentralized sort of mind which makes decisions based on the interactions of component individuals. You could argue that our own brains are a similar sort of ensemble, and that swarm behavior is therefore no different from our own intelligence if we look closely enough. So take your pick really.
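As an illustration of how little machinery the "simple set of rules" needs, here is a toy flock in which the only rule is averaging your velocity with nearby neighbors (my own sketch, loosely in the spirit of Reynolds' boids; all constants are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)

N, STEPS, NEIGHBOR_R, MIX = 20, 50, 5.0, 0.5
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def heading_spread():
    """0 means every agent points the same way; near 1 means random headings."""
    angles = [math.atan2(vy, vx) for vx, vy in vel]
    mx = sum(math.cos(a) for a in angles) / N
    my = sum(math.sin(a) for a in angles) / N
    return 1 - math.hypot(mx, my)

def step():
    new_vel = []
    for i in range(N):
        sx = sy = cnt = 0
        for j in range(N):
            if j != i and math.hypot(pos[i][0] - pos[j][0],
                                     pos[i][1] - pos[j][1]) < NEIGHBOR_R:
                sx += vel[j][0]; sy += vel[j][1]; cnt += 1
        if cnt:  # blend own velocity with the local average (alignment rule)
            new_vel.append([(1 - MIX) * vel[i][0] + MIX * sx / cnt,
                            (1 - MIX) * vel[i][1] + MIX * sy / cnt])
        else:
            new_vel.append(vel[i][:])
    for i in range(N):
        vel[i] = new_vel[i]
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

before = heading_spread()
for _ in range(STEPS):
    step()
after = heading_spread()
print(round(before, 2), "->", round(after, 2))  # headings align with no leader, no plan
```

The flock's headings converge even though no individual has any global goal, which is the point of the "simple rules" argument above.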
{ "domain": "biology.stackexchange", "id": 1688, "tags": "terminology, ichthyology, sociality, intelligence" }
Replacing letters with numbers with its position in alphabet
Question: If anything in the text isn't a letter, ignore it and don't return it. a being 1, b being 2, etc. As an example: alphabet_position("The sunset sets at twelve o' clock.") Should return "20 8 5 19 21 14 19 5 20 19 5 20 19 1 20 20 23 5 12 22 5 15 3 12 15 3 11" as a string. This is my naive solution and my code is below in Python 2.7 def alphabet_position(text): dictt = {'a':'1','b':'2','c':'3','d':'4','e':'5','f':'6','g':'7','h':'8', 'i':'9','j':'10','k':'11','l':'12','m':'13','n':'14','o':'15','p':'16','q':'17', 'r':'18','s':'19','t':'20','u':'21','v':'22','w':'23','x':'24','y':'25','z':'26' } arr = [] new_text = text.lower() for i in list(new_text): for k, j in dictt.iteritems(): if k == i: arr.append(j) return ' '.join(arr) Answer: First of all, you don't need to hardcode the letters and their positions in the alphabet - you can use string.ascii_lowercase. Also, you don't have to call list() on new_text - you can just iterate over it character by character. Then we can construct a mapping between letters and letter indexes in the alphabet (with the help of enumerate()), and use a list comprehension to create an array of numbers which we then join to produce the result: from string import ascii_lowercase LETTERS = {letter: str(index) for index, letter in enumerate(ascii_lowercase, start=1)} def alphabet_position(text): text = text.lower() numbers = [LETTERS[character] for character in text if character in LETTERS] return ' '.join(numbers)
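For completeness, here is the suggested rewrite run against the question's own example (the code is reproduced verbatim from the answer; it also works unchanged on Python 3):

```python
from string import ascii_lowercase

# Map each letter to its 1-based position, built once up front
LETTERS = {letter: str(index) for index, letter in enumerate(ascii_lowercase, start=1)}

def alphabet_position(text):
    text = text.lower()
    numbers = [LETTERS[character] for character in text if character in LETTERS]
    return ' '.join(numbers)

print(alphabet_position("The sunset sets at twelve o' clock."))
# 20 8 5 19 21 14 19 5 20 19 5 20 19 1 20 20 23 5 12 22 5 15 3 12 15 3 11
```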
{ "domain": "codereview.stackexchange", "id": 33890, "tags": "python, strings, python-2.x" }
Do humans use the doppler effect to localize sources of sound?
Question: Consider a source of sound such as a person speaking or a party of people which makes a continual drone sound of the same frequency. If a human shakes their head side-to-side with sufficient angular speed, they are in effect obtaining different frequencies of the same sound source and should be able to apply the Doppler effect to approximately localize (from prior experience) the sound source. Do humans use the Doppler effect to localize sources of sound and have there been any studies proving this? Edit: A link to the Weber-Fechner law and a link to the wiki article discussing the just-noticeable-difference (JND) for music applications were added to the OP for reference, based on the accepted answer. Answer: A person would not be able to localize a sound using the Doppler effect created by shaking their head. Say a person shakes their head at 20 cm/s. The speed of sound is about 330 m/s. This gives a frequency change of 0.06%. The "just noticeable difference" to discern two frequencies played in succession is about 0.6% (source), so about an order of magnitude too coarse.
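The answer's estimate as a quick back-of-the-envelope check (the 20 cm/s head speed and the 0.6% JND are the answer's assumed figures, not measured data):

```python
v_head = 0.20      # assumed head speed, m/s
c_sound = 330.0    # speed of sound in air, m/s
jnd = 0.006        # ~0.6% just-noticeable frequency difference

# non-relativistic Doppler: fractional frequency change is about v/c
doppler_shift = v_head / c_sound
print("shift: %.3f%%  vs  JND: %.1f%%  (ratio %.2f)"
      % (100 * doppler_shift, 100 * jnd, doppler_shift / jnd))
```

The shift is roughly a tenth of the JND, which is the "order of magnitude too coarse" in the answer.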
{ "domain": "physics.stackexchange", "id": 77220, "tags": "newtonian-mechanics, acoustics, frequency, doppler-effect, ultrasound" }
Locus equation of a projectile in terms of $\tan\theta$
Question: This is the locus equation of a projectile projected from the ground at an angle $\theta$, with an initial velocity $u$ : $$ y = x\tan\theta - \frac{gx^2}{2u^2\cos^2\theta} \tag{1} $$ I just read that this quadratic equation can be expressed in terms of $\tan\theta$ too. Rewriting equation $\left(1\right) :$ $$ \begin{align} y &~=~ x\tan\theta - \frac{gx^2\sec^2\theta}{2u^2} \\[5px] &~=~ x\tan\theta - \frac{gx^2(1+ \tan^2\theta)}{2u^2} \end{align} $$ On expanding and rearranging the terms, we end up with this quadratic in terms of $\tan\theta$ $$ \frac{gx^2}{2u^2}\tan^2\theta - x\tan\theta + \left(\frac{gx^2}{2u^2}+y\right) = 0 \tag{2} $$ This is what I read: If values of $u$, $x$, and $y$ are constants, then we would get two values of $\tan\theta$, i.e., two values of angle of projection $\theta :$ $\theta_1$ and $\theta_2 .$ What it means in physical terms (according to what I read) is, if we are projecting a projectile with an initial velocity $u$ and we want it to touch a particular coordinate $\left(x,y\right) ,$ then we can make it pass through the given coordinate by projecting it at two different angles $\theta_1$ and $\theta_2$ that we get from Equation $\left(2\right) ,$ and no more than these two angles (keeping magnitude of initial velocity $u$ the same). (I had to type everything just to make my question clear. I could’ve asked my question directly, without typing the equations. But just wanted to get it across better.) My question is: I’m not able to figure out if discriminant of Equation $\left(2\right)$ is positive, is it? In order to get two values of $\theta$ from that equation, its discriminant must be positive. I have a doubt it’s not (I am probably wrong). But I can’t seem to figure out whether it’s positive or not. Secondly, is there such an equation as Equation $\left(2\right) ?$ I mean, is it a correct equation? I didn’t find it in my book, or in any other book. 
I came across some random study materials, and that's where I saw this. I want to know if this equation is correct. Answer: Your math is correct. The reason it is difficult to determine the sign of the discriminant is that it depends on the values for $x$, $y$, and $u$. There are three possibilities: Discriminant is positive: There are two values of $\theta$ that will hit the target $(x,y)$. Discriminant is zero: There is exactly one angle that will result in hitting the target. Any other angle falls short. Discriminant is negative: There are no firing angles that will hit the target. The target is too far away or too high or both. For an example of the last possibility, if your initial velocity is $u = 30\,\textrm{m/s}$ and your target is at $(x,y) = (1000\,\textrm{m}, 1000\,\textrm{m})$, then there is definitely no angle that will get the projectile to the target. The discriminant of equation (2) will be negative.
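A short numerical check of the cases (my own sketch; $g = 9.8\ \mathrm{m/s^2}$ is assumed, and the reachable target $(50\,\mathrm{m}, 10\,\mathrm{m})$ is an arbitrary example I picked):

```python
import math

g = 9.8  # m/s^2, assumed value

def launch_angles(u, x, y):
    """Solve eq. (2) for tan(theta); return the launch angles in degrees."""
    a = g * x**2 / (2 * u**2)
    b = -x
    c = a + y
    disc = b**2 - 4 * a * c
    if disc < 0:
        return []                      # negative discriminant: unreachable
    roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    return sorted(math.degrees(math.atan(t)) for t in roots)

print(launch_angles(30, 50, 10))       # two angles: positive discriminant
print(launch_angles(30, 1000, 1000))   # []: the answer's unreachable example
```

Substituting either returned angle back into equation (1) reproduces the target height, confirming the two-solution reading of equation (2).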
{ "domain": "physics.stackexchange", "id": 55933, "tags": "kinematics, projectile" }
Integrating Ampère's force law to find the force between wires
Question: I want to find the force between parallel current-carrying wires using Ampère's force law. Ampère's force law is given as \begin{align} dF&= \frac{\mu}{4\pi} \frac{I_1dL_1\times(I_2dL_2\times r_{21})}{r^2} \\\int \mathrm{d}F&=\frac{\mu}{4\pi} \int{}{}\frac{I_1dL_1\times(I_2dL_2\times r_{21})}{r^2} \\&=\frac{\mu}{4\pi} \int{}{}\frac{I_1dL_1\times(I_2dL_2r_{21}\sin(90))}{r^2} \\&=\frac{\mu}{4\pi} \int{}{}\frac{I_1dL_1I_2dL_2r_{21}\sin(90)}{r^2} \\&=\frac{\mu}{4\pi} \int{}{}\frac{I_1dL_1I_2dL_2}{r} \\&=\frac{\mu}{4\pi} \int_{A}^{B} \int_{C}^{D}(\frac{I_1I_2}{r}dL_1)dL_2 \\&=\frac{\mu}{4\pi} \int_{A}^{B} [\frac{I_1I_2}{r}L_1]_{C}^{D}dL_2 \\&=\frac{\mu}{4\pi} \int_{A}^{B} (\frac{I_1I_2}{r}(C-D))dL_2 \\&=\frac{\mu}{4\pi} (\frac{I_1I_2}{r}(C-D)(B-A)) \end{align} Since the length of the wire I'm calculating is the same, B-A=C-D $$F=\frac{\mu}{4\pi} (\frac{I_1I_2}{r}(B-A)^2)$$ However, when I calculate the force between two wires I get $$F=I_2 \Delta L B$$ $$F=I_2 \Delta L \frac{\mu I_1}{2 \pi r} $$ As you can see I get a slightly different answer. I really don't know where I am going wrong with this and I don't have anyone around me who I can ask about this. I would appreciate any help. Thank you. Answer: Your mistake was assuming that the angle between $dL_2$ and $r_{12}$ is a constant 90° everywhere. Pick one point $r$ in the vicinity of wire 2. The Biot-Savart law: $B = \int_A^B \frac{I_2dL_2 \times e_{12}}{r_{12}^2}$ where $e_{12}$ is the unit vector pointing from some point on the wire to the reference point $r$. Not only the current element $I_2dL_2$ (which indeed has relative angle 90°) will make the magnetic field at the point $r$; the current elements farther away (relative angle $\alpha$ as a function of $L_2-r$) will also contribute. Thus: $B(r) = \int_A^B \frac{I_2 \sin(\alpha(L_2-r))}{|L_2-r|^2} dL_2\, e_*$. The unit vector $e_*$ points perpendicular to the plane in which the reference point and the wire lie.
Parallelism of wires 1 and 2 implies that the angle between $I_1dL_1$ and $e_*$ is 90°. The cross product of $I_1 dL_1$ with $B(r)$ can therefore be simplified; hence: $F= \int_C^D \frac{\mu}{4 \pi} I_1|B(L_1)|dL_1$. Now you can switch to new variables $\delta L = L_2-L_1$, $L = L_1$ in this integral. The integration over $L$ can then be performed easily. Finally $F = \frac{\mu I_1 I_2 \Delta L}{4 \pi} \int \frac{\sin(\alpha(\delta L))}{\delta L^2}d \delta L$ where $\int \frac{\sin(\alpha(\delta L))}{\delta L^2}d\delta L$ is a number.
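A numerical sanity check on that final integral (my own sketch, not from the answer; it assumes the geometry $\sin\alpha = r/\sqrt{r^2+\delta L^2}$ with $|L_2-r|^2 = r^2+\delta L^2$ and an effectively infinite wire, which should recover the textbook result $F/\Delta L = \mu_0 I_1 I_2 / 2\pi r$):

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability
I1, I2, r = 2.0, 3.0, 0.05  # arbitrary currents (A) and wire separation (m)

# Trapezoid-rule integral of sin(alpha)/|L2 - r|^2 over the wire,
# with sin(alpha) = r / sqrt(r^2 + dL^2); L >> r stands in for infinity.
N, L = 200_000, 100.0
h = 2 * L / N
total = 0.0
for i in range(N + 1):
    dl = -L + i * h
    weight = 0.5 if i in (0, N) else 1.0
    total += weight * r / (r * r + dl * dl) ** 1.5
total *= h                  # analytically this integral equals 2/r

F_per_len = mu0 / (4 * math.pi) * I1 * I2 * total
F_expected = mu0 * I1 * I2 / (2 * math.pi * r)
print(F_per_len, F_expected)
```

The integral evaluates to $2/r$, so the prefactor $\mu_0/4\pi$ combines with it to give the familiar $\mu_0 I_1 I_2/2\pi r$ per unit length.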
{ "domain": "physics.stackexchange", "id": 52928, "tags": "homework-and-exercises, magnetic-fields, electric-current" }
Why is the power unchanged by a transformer?
Question: If we have a step-up transformer then the voltage at the secondary side will be more than the voltage at the primary side. Since we all know that POWER = VOLTAGE * CURRENT, and the voltage at the secondary side is now more than the voltage at the primary side, will it not make the power at the secondary side more than the power at the primary side according to the relation P=VI? Then why is it said that the power will remain the same on both sides of the transformer when the voltage on the two sides is not the same? Answer: will it not make the power at the secondary side more than the power No. A transformer cannot generate power out of thin air and so the power on both sides is (roughly) the same. That means if the voltage on the secondary side is higher then the secondary current is actually lower than the primary. Let's look at a simple example. Let's say you have a light bulb that consumes 100W at 200V. Assuming it's a simple resistor, the light bulb will draw a current of 0.5A and its resistance is 400 Ohm. Now we connect the light bulb to a 100V outlet using a 2:1 step-up transformer. On the secondary side all remains the same: we have 200V and consume 100W. On the primary side the voltage is half of the secondary, i.e. 100V, but the primary current is double: 1A. So the primary power is also 100W and that's the power that's drawn from the outlet. A transformer is similar to the transmission of a car. For a transformer: as the voltage goes up, the current goes down. For a transmission: as you go into high gear, the wheels turn faster but there is less force (or torque). Otherwise you could go infinitely fast by going to a higher and higher gear.
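The light-bulb example as explicit arithmetic (an ideal, lossless transformer is assumed throughout):

```python
P_load, V_secondary = 100.0, 200.0        # the bulb: 100 W at 200 V
I_secondary = P_load / V_secondary        # current through the bulb
R_bulb = V_secondary / I_secondary        # treating the bulb as a simple resistor

ratio = 2.0                               # step-up: secondary voltage is 2x primary
V_primary = V_secondary / ratio
I_primary = I_secondary * ratio           # current scales the *opposite* way
P_primary = V_primary * I_primary         # equals the load power

print(I_secondary, R_bulb, V_primary, I_primary, P_primary)
```

Voltage doubles, current halves, and the product V*I is identical on both sides.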
{ "domain": "physics.stackexchange", "id": 62972, "tags": "electromagnetism, energy-conservation, power, electronics, dissipation" }
How does the form of the electromagnetic wave equation indicate relativistic invariance?
Question: One question in a test I am going to take is: How does the form of the electromagnetic wave equation $$ \Delta \phi - \frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} = - 4\pi \rho $$ indicate relativistic invariance? Is there a way to directly conclude this? Answer: Scalars are Lorentz invariant, and so is the differential operator (the d'Alembertian $\Box = \Delta - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}$). Edit: The question contains only one component of the EM field, not a scalar/anything else. So for the answer see the comments below.
{ "domain": "physics.stackexchange", "id": 72327, "tags": "electromagnetism, special-relativity" }
How does LightGBM deal with value scale?
Question: I understand that the loss metric can be used as linear, or log, or other things. This is documented at http://lightgbm.readthedocs.io/en/latest/Parameters.html?highlight=logloss#metric-parameters I would like to understand how LightGBM works on variables with different scale. In other words, is it necessary for me to harmonize scale when running LightGBM? (I am used to linear regression, where you need to get into linear scale.) If I had inputs x1, x2, x3, output y and some noise N then here are a few examples of different scales. $y = x1 + x2 + x3 + N $ $y = exp(x1 + x2 + x3 + N) $ $y = log(x1 + x2 + x3 + N) $ $y = sqrt(x1 + x2 + x3 + N) $ $y = log(x1 * x2 * x3 * N) $ Answer: Generally, in tree-based models the scale of the features does not matter. This is because at each tree level, the score of a possible split will be equal whether the respective feature has been scaled or not. You can think of it like this: We're dealing with a binary classification problem and the feature we're splitting takes values from 0 to 1000. If you split it on 300, the samples <300 belong 90% to one category while those >300 belong 30% to one category. Now imagine this feature is scaled between 0 and 1. Again, if you split on 0.3, the samples <0.3 belong 90% to one category while those >0.3 belong 30% to one category. So you've changed the splitting point but the actual distribution of the samples remains the same with respect to the target variable.
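A pure-Python illustration of why that holds (a toy single-split stump, not LightGBM itself): only the ordering of feature values enters the split search, so any monotone rescaling of a feature yields the same partition of the samples.

```python
import math

def best_split_partition(xs, ys):
    """Left/right index sets of the SSE-minimizing single split (toy stump)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best_sse, best_k = None, None
    for k in range(1, len(xs)):   # try every split point in sorted order
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        sse = (sum((v - sum(left) / len(left)) ** 2 for v in left)
               + sum((v - sum(right) / len(right)) ** 2 for v in right))
        if best_sse is None or sse < best_sse:
            best_sse, best_k = sse, k
    return frozenset(order[:best_k]), frozenset(order[best_k:])

x = [3.0, 15.0, 7.0, 42.0, 1.0, 30.0]
y = [0.1, 1.0, 0.2, 1.3, 0.0, 1.1]

raw    = best_split_partition(x, y)
scaled = best_split_partition([v / 42.0 for v in x], y)      # rescaled feature
logged = best_split_partition([math.log(v) for v in x], y)   # log-transformed
print(raw == scaled == logged)  # True: same samples end up on each side
```

The chosen threshold value changes under each transform, but the induced grouping of samples (and hence the tree's predictions) does not.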
{ "domain": "datascience.stackexchange", "id": 1921, "tags": "regression, xgboost, gradient-descent" }
Slepian or DPSS window
Question: I am planning to use Slepian or DPSS window in my application where I want central lobe to be concentrated and also have low bandwidth: http://en.wikipedia.org/wiki/Window_function#DPSS_or_Slepian_window However, since the generating function is missing and looking at some online resources was not very helpful. So, I am wondering if someone can explain. OR If someone has DPSS window code (C++, Matlab) and would be willing to share. UPDATE (after getting answer from @jojek): Thanks, @jojek, I was just reading Numerical Recipes in C (third edition) to understand the Slepian window. In their terminology, every Slepian window is defined by two indices jres and kT. Here kT indicates eigenvectors and "jres" some sort of frequency resolution. In their terminology, I am interested in Slepian(2,0) and Slepian(3,0). (Please refer to sample page no. 664: http://www.nr.com/nr3sample.pdf) Question 1: If I understand it right, your solution gives me kT = 0 which is what I am also looking for. However, I am still confused about how to choose the frequency cut-off. Question 2: Numerical Recipes in C discusses the origin of this Slepian window and I am interested in knowing where the relevant expression [1] comes from: "Copying from Numerical Recipes in C" There are two key ideas in multitaper methods, somewhat independent of each other, originating in the work of Slepian. The first idea is that, for a given data length N and choice jres, one can actually solve for the best possible weights w, meaning the ones that make the leakage smallest among all possible choices. The beautiful and nonobvious answer is that the vector of optimal weights is the eigenvector corresponding to the smallest eigenvalue of the symmetric tridiagonal matrix with diagonal elements 1/4 [N^2 - (N-1-2j)^2 cos(2 pi jres/N)]; j = 0, 1, ..., N-1 and off-diagonal elements: -1/2 j (N-j) --------------------[1] Regards, Dushyant Answer: If you follow the reference link no.
43 from Wikipedia, then you will end up on this website of Stanford University. They provide all the necessary theory behind the DPSS window, together with this MATLAB function (not to mention that MATLAB already has the dpss function): function [w,A,V] = dpssw(M,Wc); % DPSSW - Compute Digital Prolate Spheroidal Sequence window of % length M, having cut-off frequency Wc in (0,pi). k = (1:M-1); s = sin(Wc*k)./ k; c0 = [Wc,s]; A = toeplitz(c0); [V,evals] = eig(A); % Only need the principal eigenvector [emax,imax] = max(abs(diag(evals))); w = V(:,imax); w = w / max(w);
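For a Python workflow, here is a rough NumPy port of that MATLAB function (my own translation, not verified against MATLAB output; note that SciPy also ships a DPSS window routine in scipy.signal.windows, parameterized by a time-bandwidth product rather than a cut-off frequency):

```python
import numpy as np

def dpssw(M, Wc):
    """DPSS window of length M with cut-off frequency Wc in (0, pi)."""
    k = np.arange(1, M)
    c0 = np.concatenate(([Wc], np.sin(Wc * k) / k))       # first column of A
    A = c0[np.abs(np.arange(M)[:, None] - np.arange(M))]  # symmetric Toeplitz
    evals, V = np.linalg.eigh(A)
    w = V[:, np.argmax(np.abs(evals))]                    # principal eigenvector
    w = w * np.sign(w[M // 2])                            # eigh's sign is arbitrary
    return w / w.max()

w = dpssw(64, np.pi / 8)
print(w.max(), w[0] < w[len(w) // 2])  # peak normalized to 1, tapered at the edges
```

The sign fix is the one addition over the MATLAB version: eigenvector sign from an eigensolver is arbitrary, so the window is oriented positive before normalizing.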
{ "domain": "dsp.stackexchange", "id": 2038, "tags": "filter-design, window-functions" }
Validating a CSV list of contacts and convert it to JSON
Question: I've written a class that takes a file, validates the formatting of the lines from an input file and writes the set of valid lines to an output file. Each line of the file should have a first name, last name, phone number, color, and zip code. A zip code is valid if it has only 5 characters, a phone number can have only 10 digits (in addition to dashes/parentheses in appropriate places). The accepted formats of each line of the input file are the following: Lastname, Firstname, (703)-742-0996, Blue, 10013 Firstname Lastname, Red, 11237, 703 955 0373 Firstname, Lastname, 10013, 646 111 0101, Green The program needs to write a JSON object with all of the valid lines from the input file in a list sorted in ascending alphabetical order by (last name, first name). These are the test cases I ran with it as well as the JSON output. I think I've identified all of the edge cases with the tests but I could have missed something. This code should exemplify good design choices and extensibility and should be production quality. Should anything be added/removed from the solution to meet these requirements? Also, any tests that would make the code fail are welcome. The code for the solution is below: __main__.py import sys from file_formatter import FileFormatter if __name__ == "__main__": formatter = FileFormatter(sys.argv[-1],"result.out") formatter.parse_file() file_formatter.py """ file_formatter module The class contained in this module validates a CSV file based on a set of internally specified accepted formats and generates a JSON file containing normalized forms of the valid lines from the CSV file. Example: The class in this module can be imported and passed an initial value for the input data file from the command line like this: $ python example_program.py name_of_data_file.in Classes: FileFormatter: Takes an input file and output its valid lines to a result file. 
""" import json class FileFormatter: """ Takes an input file and output its valid lines to a result file. Validates the formatting of the lines from an input file and writes the set of valid lines to an output file. Attributes: info_configs: A list containing lists of "accepted" configurations of the data from each line of the input file. in_file_name: Name of the input file. res_file_name: Name of the output file. """ info_configs = [["phone","color","zip"], ["color","zip","phone"], ["zip","phone","color"]] def __init__(self,start_file_name,out_file_name): """Initialize FileFormatter class with the input and output file names.""" self.in_file_name = start_file_name self.res_file_name = out_file_name def validate_line(self, line): """Validates that each line is in the correct format. Takes a line from a file, validate that the first two elements are properly formatted names, then validates that the remaining elements (phone number, zip code, color) in the line are properly formatted. Args: line: A line from a file Returns: A list of tokenized elements from the original line (string) in the correct order according to the specified format. For example: [Lastname, Firstname, (703)-742-0996, Blue, 10013] or [Firstname, Lastname, Red, 11237, 703 955 0373] or [Firstname, Lastname, 10013, 646 111 0101, Green] If a value of None is returned, some element in the line wasn't in the correct format. 
""" line = tokenize(line) if len(line) != 5: return None full_name = (line[0],line[1]) if not is_name(full_name): return None config = ["","",""] entry = { "color": "", "firstname": "", "lastname": "", "phonenumber": "", "zipcode": ""} phone_idx = 0 zip_idx = 0 color_idx = 0 for i in range(2,len(line)): if is_phone_number(line[i]): phone_idx = i-2 config[phone_idx] = "phone" if is_zip_code(line[i]): zip_idx = i-2 config[zip_idx] = "zip" if is_color(line[i]): color_idx = i-2 config[color_idx] = "color" if config in self.info_configs: # if the phone number, zip code, and color have been found and are in correct order if phone_idx == 0: line[0], line[1] = line[1], line[0] line = [token.strip(" ") for token in line] line = [token.replace(",","") for token in line] line[len(line)-1] = line[len(line)-1].replace("\n","") entry["firstname"] = line[0] entry["lastname"] = line[1] entry["color"] = line[color_idx+2] entry["phonenumber"] = line[phone_idx+2] entry["zipcode"] = line[zip_idx+2] return entry return None def parse_file(self): """Parses an input file, validates the formatting of its lines, and writes a JSON file with the properly formatted lines. Iterates through the input file validating each line. Creates a dictionary that contains a list of entries comprised of valid lines from the input file. Creates a JSON object of normalized data sorted in ascending order by a tuple of (lastname, firstname) for each line. 
""" lines_dict = {} json_dict = {} errors = [] with open(self.in_file_name,'r') as info_file: i = 0 for line in info_file: valid_line = self.validate_line(line) if valid_line: lines_dict[(valid_line["lastname"],valid_line["firstname"])] = valid_line else: errors.append(i) i += 1 json_dict["entries"] = [lines_dict[key] for key in sorted(lines_dict.keys(), reverse = True)] # sort by (lastname, firstname,) key value json_dict["errors"] = errors with open(self.res_file_name,'w') as out_file: json.dump(json_dict, out_file, indent = 2) # utility methods for parsing the file def tokenize(line): """Splits the passed in string on the delimiter and return a list of tokens. Takes a string and splits it on a delimter while maintaining the delimiter in its original position in the string. If the first word in the string doesn't end with a comma, the split operation will yield four tokens instead of five so the first two words (names) are broken up by the space character. Args: line: A string to be broken up into tokens based on a delimiter. Returns: A list of tokens (words) from the passed in line. """ delim = "," tokens = [e + delim for e in line.split(delim) if e] if len(tokens) == 4: names = tokens[0].split(" ") names[0] = names[0] + delim names[1] = " " + names[1] info = tokens[1:] tokens = [] tokens.extend(names) tokens.extend(info) return tokens def is_name(name_tuple): """Determines if the first two elements in a file line (names) are correctly formatted. Takes a tuple of elements and validates that they match one of two valid formats. Either both words end in a comma or the second one does while the first one doesn't. Args: name_tuple: A tuple of two elements (first and last name) from a line in a file Returns: A boolean indicating if the elements (names) in the tuple are correctly formatted. 
""" names = (name_tuple[0].strip(" "), name_tuple[1].strip(" ")) comma_first_case = False comma_second_case = False name1_comma = False name2_comma = False for i in range(2): curr_len = len(names[i]) for j in range(curr_len): if not names[i][j].isalpha() and j < curr_len -1: return False if j == curr_len - 1 and i == 0 and names[i][j] == ',': name1_comma = True if j == curr_len - 1 and i == 1 and names[i][j] == ',': name2_comma = True comma_first_case = name1_comma and name2_comma # both have commas comma_second_case = not name1_comma and name2_comma # name2 has comma, name 1 doesnt if not (comma_first_case or comma_second_case): return False return True def is_phone_number(token): """Determines if the passed in string represents a properly formatted 10-digit phone number. Takes a string and validates that it matches one of two valid formats specified for a phone number. Validates that the sequence of characters is an exact match to one of the valid formats. Args: token: A fragment of a line of a file Returns: A boolean indicating if the string is a properly formatted phone number. """ token = token.strip(" ") char_sequence = [] case_1 = ["paren","number","number","number","paren","dash","number","number","number","dash","number","number","number","number"] case_2 = ["number","number","number","space","number","number","number","space","number","number","number","number"] for char in token: is_paren = char == "(" or char == ")" is_dash = char == "-" is_ws = char == " " if represents_int(char): char_sequence.append("number") if is_paren: char_sequence.append("paren") if is_dash: char_sequence.append("dash") if is_ws: char_sequence.append("space") if char_sequence == case_1 or char_sequence == case_2: return True return False def is_color(token): """Determines if the passed in string represents a color. Takes a string and validates that it matches the valid formats specified for a color. Validates that it is only a one word color. 
Args: token: A fragment of a line of a file Returns: A boolean indicating if the string is a properly formatted color. """ token = token.strip(" ") for i in range(len(token)): if token[i] != "," and token[i] != "\n": if not token[i].isalpha() or not token[i].islower() : return False return True def is_zip_code(token): """Determines if the passed in string represents a properly formatted 5-digit zip code. Takes a string and validates that it matches the valid formats specified for a zip code. Validates that the string doesn't contain more than 5 numbers. Args: token: A fragment of a line of a file Returns: A boolean indicating if the string is a properly formatted zip code. """ token = token.strip(" ") digit_count = 0 for digit in token: if digit != "," and digit != "\n": if represents_int(digit): digit_count += 1 else: return False if digit_count != 5: return False return True def represents_int(char): """Determines if the passed in character represents an integer. Takes a char and attempts to convert it to an integer. Args: char: A character Returns: A boolean indicating if the passed in character represents an integer. Raises: ValueError: An error occured when trying to convert the character to an integer. """ try: int(char) return True except ValueError: return False if __name__ == "__main__": formatter= FileFormatter("data.in","result.out") formatter.parse_file() Answer: Your function is_phone_number is the prime example for the usage of regular expressions. You are basically trying to implement it yourself here! You can either use two different patterns here: import re def is_phone_number(token): token = token.strip(" ") return (re.match(r'\(\d{3}\)-\d{3}-\d{4}$', token) is not None or re.match(r'\d{3} \d{3} \d{4}$', token) is not None) Here, \d is any digit, \d{n} is a run of n digits and $ is the end of the string (to make sure there is nothing after the valid phone number). 
You could also combine it to one pattern: def is_phone_number(token): token = token.strip(" ") return re.match(r'\(?\d{3}\)?[ -]\d{3}[ -]\d{4}$', token) is not None This second pattern has the caveat that it allows phone numbers that are mixes of the two patterns, like (123 456-1235, so I would stick to the two patterns. Your functions is_color and is_zip_code seem broken to me. Since you skip over commas, "blue,green" would be a valid one-word color and "50,364" a valid ZIP-code. I would use something like this: def is_zip_code(token): return re.match(r'\d{5}$', token) is not None def is_color(token): return re.match(r'[a-z]*$', token) is not None represents_int is now unneeded. The former makes sure that token is a string of five digits and the latter makes sure that the token consists only of lower-case letters. The function is_name is more complicated. But I would use str.endswith and exit early: def is_name(name_tuple): name = list(map(str.strip, name_tuple)) if not name[1].endswith(","): return False if not name[1][:-1].isalpha(): return False if not (name[0].isalpha() or name[0].endswith(",") and name[0][:-1].isalpha()): return False return True Which can be combined to: def is_name(name_tuple): name = list(map(str.strip, name_tuple)) return (name[1].endswith(",") and name[1][:-1].isalpha() and (name[0].isalpha() or name[0].endswith(",") and name[0][:-1].isalpha())) In retrospect, I don't understand why you insist on keeping the delimiter on the string in the tokenize function. It seems like it would be way easier to drop it here and work with a tokenized list afterwards... 
You could also just write one regex to rule them all (actually three, one each for each of your three input formats): name_comma = r'[a-z]*, [a-z]*' name_no_comma = r'[a-z]* [a-z]*' phone_paren = r'\(\d{3}\)-\d{3}-\d{4}' phone_space = r'\d{3} \d{3} \d{4}' zip_code = r'\d{5}' color = r'[a-z]*' # Lastname, Firstname, (703)-742-0996, Blue, 10013 # Firstname Lastname, Red, 11237, 703 955 0373 # Firstname, Lastname, 10013, 646 111 0101, Green acceptable_formats = [", ".join([name_comma, phone_paren, color, zip_code]), ", ".join([name_no_comma, color, zip_code, phone_space]), ", ".join([name_comma, zip_code, phone_space, color])] def validate_line(line): return any(re.match(pattern, line) is not None for pattern in acceptable_formats)
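As a quick sanity check of the combined patterns, here is a self-contained sketch. Two assumptions on top of the answer: the sample lines are hypothetical and use lowercase names (the name patterns only match [a-z]), and each format is anchored with a trailing $ so that extra characters after a valid line are rejected:

```python
import re

# Building blocks from the answer (lowercase-only name patterns).
name_comma = r'[a-z]*, [a-z]*'
name_no_comma = r'[a-z]* [a-z]*'
phone_paren = r'\(\d{3}\)-\d{3}-\d{4}'
phone_space = r'\d{3} \d{3} \d{4}'
zip_code = r'\d{5}'
color = r'[a-z]*'

# Small tweak: a trailing $ rejects trailing junk after a valid line.
acceptable_formats = [
    ", ".join([name_comma, phone_paren, color, zip_code]) + r'$',
    ", ".join([name_no_comma, color, zip_code, phone_space]) + r'$',
    ", ".join([name_comma, zip_code, phone_space, color]) + r'$',
]

def validate_line(line):
    return any(re.match(p, line) is not None for p in acceptable_formats)

print(validate_line("doe, john, (703)-742-0996, blue, 10013"))  # format 1
print(validate_line("john doe, red, 11237, 703 955 0373"))      # format 2
print(validate_line("doe, john, 10013, 646 111 0101, green"))   # format 3
print(validate_line("doe, john, blue"))                         # no format
```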
{ "domain": "codereview.stackexchange", "id": 25093, "tags": "python, json, file, csv" }
Conservation of mass, momentum, and kinetic energy -- what did I do wrong?
Question: Motivation: I came across a question while perusing the internet: assuming that mass, momentum, and kinetic energy are conserved in a system, ($mv^0, mv^1$, and $\frac{1}{2}mv^2$, but we can ignore the 1/2 here...), are all further quantities $mv^n$, with $n$ an integer >2, conserved? My instinct was mathematical induction. But I'm pretty sure I have no clue what I'm doing, so I checked a simpler case first. Given that mass and momentum are conserved, we have $m_0=m_f$ and $m_0v_0=m_fv_f$. Here we're taking $m$ to be the mass of the system, and $v$ to be the velocity of the center of mass. Now, if $m_0=0$, we already know that $\frac{1}{2}m_0v_0^2=\frac{1}{2}m_fv_0^2=0=\frac{1}{2}m_fv_f^2$ (the last equality is because $m_0=m_f=0$). So without loss of generality, we'll assume that the mass of the system is not $0$. Now, since we have $m_0v_0=m_fv_f$, squaring both sides gives $$m_0^2v_0^2=m_f^2v_f^2.$$ Since $m_0$ is assumed to be nonzero now, we can divide both sides by it to get $$\frac{m_0^2v_0^2}{m_0}=\frac{m_f^2v_f^2}{m_0}.$$ But we also have $m_0=m_f$, so this becomes \begin{align} \frac{m_0^2v_0^2}{m_0}&=\frac{m_f^2v_f^2}{m_f}\\ m_0v_0^2&=m_fv_f^2\\ \frac{1}{2}m_0v_0^2&=\frac{1}{2}m_fv_f^2. \end{align} Which seems to say that kinetic energy is conserved if mass and momentum are conserved. But I was told that kinetic energy can be changed to thermal or positional (or re-stored in other ways), and it is not necessarily conserved even if momentum is conserved. So where have I gone wrong? And is there any way to repurpose or salvage this inductive argument to answer the italicized question? (or is the answer that, in general, it is not true?) My thoughts on what may have gone wrong: Does $\frac{1}{2} mv^2$ only give the kinetic energy when all the mass is moving in the same direction? If so, is there any interpretation to $\frac{1}{2} mv^2$ when we are treating $v$ as the velocity of the center of mass? Answer: The answer is no. 
Here's a counterexample. Start with two objects, each with a mass of $2m$, heading towards each other at the same speed. v1 v1 2m ----> <---- 2m The collision breaks apart the masses into a total of four pieces, all of equal mass $m$. They have various velocities as shown below: v3 v2 v2 v3 <---m <--m m--> m---> Let's check the stipulations of the problem: Mass is conserved: $2(2m) = 4m.$ Momentum is conserved: zero before and afterwards. Kinetic energy is conserved if: $$2\left(\frac{1}{2}\left(2m\right)v_1^2\right) = 2\left(\frac{1}{2}mv_2^2\right) + 2\left(\frac{1}{2}mv_3^2\right)$$ $$2v_1^2 = v_2^2 + v_3^2$$ Let's choose $m = 1$, $v_1 = 5/\sqrt{2}$, $v_2 = 3$, $v_3 = 4$ in whatever units. Before the collision for the quartic conservation law: $$\sum_i m_iv_i^4 = 2\left(2mv_1^4\right) = 4\left(\frac{5}{\sqrt2}\right)^4 = 625$$ After the collision: $$\sum_i m_iv_i^4 = 2\left(mv_2^4\right) + 2\left(mv_3^4\right) = 2\left(3^4\right) + 2\left(4^4\right) = 674$$ So, the higher powers are not necessarily conserved. A quick note about your attempt: you don't have to prove that kinetic energy is conserved. That's already assumed by the puzzle. You only have to figure out the higher powers.
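The numbers in this counterexample are easy to verify directly; a minimal Python check using the same $m$, $v_1$, $v_2$, $v_3$:

```python
from math import sqrt

m, v1, v2, v3 = 1.0, 5 / sqrt(2), 3.0, 4.0

# Before: two objects of mass 2m approaching with speed v1
# (total momentum is zero by symmetry, before and after).
mass_before = 2 * (2 * m)
ke_before = 2 * (0.5 * (2 * m) * v1**2)
quartic_before = 2 * ((2 * m) * v1**4)

# After: four fragments of mass m with speeds v2, v2, v3, v3.
mass_after = 4 * m
ke_after = 2 * (0.5 * m * v2**2) + 2 * (0.5 * m * v3**2)
quartic_after = 2 * (m * v2**4) + 2 * (m * v3**4)

print(mass_before, mass_after)        # mass conserved
print(ke_before, ke_after)            # kinetic energy conserved (both 25)
print(quartic_before, quartic_after)  # 625 vs 674: m*v^4 not conserved
```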
{ "domain": "physics.stackexchange", "id": 45940, "tags": "newtonian-mechanics, conservation-laws" }
Bending of Light in General Relativity using Perturbation
Question: It is a standard textbook calculation (e.g. Schutz's First Course in General Relativity page 294) that we can find a total angular change in light deflection due to gravity to be \begin{equation}\Delta\phi=\frac{4GM}{bc^{2}}\end{equation} where $b$ is the impact parameter. However, I am trying to do this via a perturbation method, since for typical stars like our Sun, $\frac{GM}{rc^{2}}\ll 1$. The idea is that from the Schwarzschild metric, we can show that \begin{equation}\frac{d\phi}{dr}=\frac{1}{r^{2}}\left(\frac{1}{b^{2}}-\frac{1}{r^{2}}\left(1-\frac{2GM}{rc^2}\right)\right)^{-1/2}\end{equation} From here, we consider \begin{equation}\frac{d\phi}{dr}=\frac{d\phi}{dr}\Bigr|_{M=0}+\delta(r)\end{equation} with $|\delta|\ll 1$. Then it should be true that $\Delta\phi=2\int_\infty^b\delta(r)dr$, with the extra factor of 2 accounting for symmetry of deflection. So the perturbation should be done making use of $\frac{GM}{rc^{2}}\ll 1$ on $\delta(r)$. I tried all sorts of approximations but it does not seem to work. In one case the integral diverges, while in other cases I am off by at least a factor of 4. Textbooks like Schutz or Hobson never do it this way either. Could anyone help? Answer: There's an important subtlety: in the Schwarzschild metric, the impact parameter $b$ is not equal to the radius of closest approach. Let's start with the geodesic equation $$ \frac{1}{2}\dot{r}^2 + \frac{L^2}{2r^2}\left(1-\frac{2GM}{c^2r}\right) = \frac{1}{2}E^2. $$ At the radius of closest approach $R_0$ we have $$ \frac{1}{2}E^2 = \frac{L^2}{2R_0^2}\left(1-\frac{2GM}{c^2R_0}\right) $$ or $$ b^2 = R_0^2\left(1-\frac{2GM}{c^2R_0}\right)^{\!-1} \tag{1}, $$ where $b = L/E$. In other words, $b$ depends on $M$, and we need to take this into account. 
If we substitute (1) into $$ \frac{d\phi}{dr}=\frac{1}{r^{2}}\left[\frac{1}{b^{2}}-\frac{1}{r^{2}}\left(1-\frac{2GM}{c^2r}\right)\right]^{-1/2}, $$ we get $$ \begin{align} \frac{d\phi}{dr}&=\frac{1}{r^{2}}\left[\frac{1}{R_0^{2}}-\frac{1}{r^{2}} -\frac{2GM}{c^2}\left(\frac{1}{R_0^3}-\frac{1}{r^3}\right)\right]^{-1/2}\\ &= \frac{1}{r^{2}}\left[\frac{1}{R_0^{2}}-\frac{1}{r^{2}}\right]^{-1/2} \left[1-\frac{2GM}{c^2}\left(\frac{R_0^{-3}-r^{-3}}{R_0^{-2}-r^{-2}}\right)\right]^{-1/2}. \end{align} $$ Now we can use the first-order approach $$ \frac{d\phi}{dr}\approx\left.\frac{d\phi}{dr}\right|_{M=0}+\delta(r), $$ with $$ \left.\frac{d\phi}{dr}\right|_{M=0} = r^{-2}\left(R_0^{-2}-r^{-2}\right)^{-1/2},\\ \delta(r) = \frac{GM}{c^2}r^{-2}\left(R_0^{-3}-r^{-3}\right)\left(R_0^{-2}-r^{-2}\right)^{-3/2}. $$ Therefore $$ \Delta\phi|_{M=0} = 2\int_{R_0}^{\infty}\frac{\text d r}{r^2\left(R_0^{-2}-r^{-2}\right)^{1/2}} = 2\int_0^{1}\frac{\text d u}{\left(1-u^{2}\right)^{1/2}} = 2\sin^{-1}(1) = \pi, $$ where we used $u=R_0r^{-1}$, and $$ \begin{align} \Delta\phi|_\delta &= \frac{2GM}{c^2}\int_{R_0}^{\infty}\frac{R_0^{-3}-r^{-3}}{r^2\left(R_0^{-2}-r^{-2}\right)^{3/2}}\text d r\\ &= \frac{2GM}{c^2R_0}\int_0^1\frac{1-u^{3}}{\left(1-u^{2}\right)^{3/2}}\text d u\\ &= \frac{2GM}{c^2R_0}\int_0^{\pi/2}\frac{1-\sin^{3} x}{\cos^2 x}\text d x\\ &= \frac{2GM}{c^2R_0}\left[\int_0^{\pi/2}\frac{1-\sin x}{\cos^2 x}\text d x + \int_0^{\pi/2}\sin x\,\text d x\right] \\ &= \frac{2GM}{c^2R_0}\left[\int_0^1 \frac{2\text d t}{(1+t)^2} + 1\right] \\ &= \frac{4GM}{c^2R_0}, \end{align} $$ where we used $t = \tan(x/2)$ and the tangent half-angle formulae. Finally, we can use (1) again to replace $R_0$ with $b$, so that $$ \Delta\phi|_\delta = \frac{4GM}{c^2b}, $$ to first order in $M$.
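The only nontrivial numerical ingredient is $\int_0^1(1-u^3)(1-u^2)^{-3/2}\,\text{d}u = 2$, which can be confirmed numerically. A minimal sketch, assuming nothing beyond the substitution $u = 1-t^2$ (which removes the integrable endpoint singularity at $u=1$) and a generic Simpson helper:

```python
def integrand(t):
    # (1-u^3)/(1-u^2)^(3/2) du with u = 1 - t^2, du = -2t dt,
    # simplifies to 2*(1+u+u^2)/(1+u)^(3/2), smooth on [0, 1].
    u = 1.0 - t * t
    return 2.0 * (1.0 + u + u * u) / (1.0 + u) ** 1.5

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

value = simpson(integrand, 0.0, 1.0)
print(value)  # approximately 2, giving Delta-phi = (2GM/c^2 R0) * 2
```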
{ "domain": "physics.stackexchange", "id": 29738, "tags": "homework-and-exercises, general-relativity, perturbation-theory, geodesics" }
Estimate timeline for an ML Project
Question: I am a novice data scientist and have been asked to provide an estimate for a data science project in our organization. From the problem statement, I am able to understand that it is a traditional binary classification problem. However, I am new to the domain, dataset, etc. (and I don't have access to the full dataset yet). Through my work, I will also have to interact with business users throughout to clarify my questions regarding data, domain, etc. How can I propose a timeline to my supervisor without any experience in this space? Are there any guidelines that I can follow to come up with a reasonable timeline? Answer: Look at your past experience. Even though you're a novice, you were hired as a data scientist, so you'll probably have some experience with data science projects. A simple binary classification problem with a few hundred datapoints can be solved in a productive afternoon, whereas a large project that requires significant upfront engineering for the acquisition of your dataset could take months. Honesty is always key, as it leads to proper expectation management. Just stating the different phases of the project with an indication of how long they could take will already be quite nice. This could even be very rudimentary, like: data acquisition: 1 week ~ 3 months EDA and preprocessing: ... If you don't have a better guess than 'somewhere between 1 week and 3 months', don't try to make a better guess, because it will only lead to disappointment. Trust me, I'm speaking from experience here. Your supervisor will probably know you're a novice, and should not be offended and/or surprised if you come up with a timeline that is still quite abstract and prone to change over the coming time period. Also always take into account Hofstadter's law: It always takes longer than you expect, even when you take into account Hofstadter's Law
{ "domain": "datascience.stackexchange", "id": 10458, "tags": "machine-learning, neural-network, classification, regression, data-mining" }
Does a DNA sequence have its own derivation tree or pattern?
Question: I am new to bioinformatics and natural language processing. In linguistics, they have treebanks: a derivation/parse tree for each sentence. For example, a sentence "Sara sleeps" can be visualized by a tree: Sentence -> Subject Verb, Subject -> Sara, Verb -> sleeps. I've heard about GenBank, a database for nucleotide sequences. Does DNA have its own derivation tree, or is any pattern in a DNA sequence already known with current technology? I have only heard about repeats. Answer: From your example I guess that by "deriving" you mean something like decomposing into elements with diverse functions. Maybe we could apply this concept to DNA sequences: Some parts code for proteins, and are decomposed into triplets, each of which codes an amino acid. Other parts have regulatory functions: they can be motifs not coding a protein, but instead allowing a protein or something else to bind and modify the expression of nearby protein-coding parts. Other parts may contribute to the physical structure of the chromosome, or may have no function at all.
{ "domain": "biology.stackexchange", "id": 4766, "tags": "bioinformatics" }
Successful implementation of Duplicate Files Finder in C++
Question: on my initial attempt of creating a duplicate files finder(see Duplicate files finder in C++ @ this code review site) I have finally came across the successful and optimized implementation of this code. The idea is same like the but I have made changes according to the advice of some professional here and also have fixed some bugs. Tangerine.cpp // SWAMI KARUPPASWAMI THUNNAI // Tangerine.cpp : Defines the entry point for the console application. // Programmed by VISWESWARAN N (C), 2016 #include<iostream> #include<fstream> #include<Windows.h> #include<string> #include"scan.h" int main() { std::cout << "Tangerine Solutions\n"; std::string location; std::cout << "Enter the location :" << std::endl; getline(std::cin, location); Process process; process.scan(location); int stay; std::cin >> stay; return 0; } scan.h #pragma once #include<iostream> #include<string> #include<map> #include<list> #include<map> #include<list> #include"preliminary.h" // Processing the files class Process:public PreliminaryTest { public: bool scan(std::string location); double getSize(std::string location); }; scan.cpp #include<iostream> #include<conio.h> #include<fstream> #include<Windows.h> #include<boost\filesystem.hpp> #include"scan.h" namespace v = boost::filesystem; bool Process::scan(std::string location) { for (v::recursive_directory_iterator end, file(location); file != end; ++file) { if (file->status().type() != v::regular_file) std::cout << "|| System file has been found skipping\n"; else { std::string loc = file->path().string(); // We will get the location std::cout << "|| Processing " << loc << "\n"; std::cout << "|| SIZE: " << getSize(loc) << "\n"; std::cout << "|| Performing the preliminary test...\n"; double size_of_current_file = getSize(loc); start_preliminary_test(size_of_current_file, loc); } } send_the_report_for_confirmatory_test(); return true; } double Process::getSize(std::string location) { std::ifstream in; in.open(location, std::ifstream::ate | 
std::ifstream::binary); double size = in.tellg(); return size; } preliminary.h // SWAMI KARUPPASWAMI THUNNAI #pragma once #include"headers.h" class PreliminaryTest:public ConfirmatoryTest { private: // This map is used to identify files with same sizes std::map<double, std::string> preliminary_tester; // This list will contain the files with same sizes std::list<std::string> preliminary_result; public: bool start_preliminary_test(double size,std::string location); void send_the_report_for_confirmatory_test(); }; preliminary.cpp // SWAMI KARUPPASWAAMI THUNNAI #include"headers.h" bool PreliminaryTest::start_preliminary_test(double size,std::string location) { std::map<double, std::string>::iterator test; test = preliminary_tester.find(size); if (test != preliminary_tester.end()) { // List of files with same sizes std::cout << "\n Files with similar sizes have been found\n"; preliminary_result.push_back(location); preliminary_result.push_back(test->second); std::ofstream file; file.open("SAME SIZED FILES.txt",std::ios::app); file << location << "\n"; file << test->second << "\n"; file.close(); } else preliminary_tester[size] = location; } void PreliminaryTest::send_the_report_for_confirmatory_test() { // Use iterators to access to the algorithms // I know this is lengthy but this is standard for almost all STL containers like deque, vectors etc., // This method is called half open and closed iterators // itr = iterators which links algo to the container std::list<std::string>::iterator itr1 = preliminary_result.begin(); std::list<std::string>::iterator itr2 = preliminary_result.end(); for (std::list<std::string>::iterator itr = itr1; itr != itr2; ++itr) { std::cout << "\nSending the result for confirmatory test...\n"; preliminary_test_result.push_back(*itr); std::cout << *itr; } std::list<std::string>::iterator itr3 = preliminary_test_result.begin(); std::list<std::string>::iterator itr4 = preliminary_test_result.end(); for (std::list<std::string>::iterator itr = 
itr3; itr != itr4; ++itr) { std::cout << "\nProcessing confirmatory test...\n"; std::cout << *itr; confirm(*itr); } } confirmatory.h // SWAMI KARUPPASWAMI THUNNAI #pragma once #include"headers.h" class ConfirmatoryTest { private: // Map to identify the hash matching std::map<std::string, std::string> confirmatory_tester; // Final Neat and Clean list of the locations of duplicate files are been stored here std::list<std::string> confirmatory_result; // The Member of This access specifier is used between member function of different class - so no provate protected: std::list<std::string> preliminary_test_result; public: std::string get_hash_for(const std::string location); void confirm(std::string location); }; confirmatory.cpp // SWAMI KARUPPASWAMI THUNNAI #include"md5.h" #include "headers.h" std::string ConfirmatoryTest::get_hash_for(const std::string location) { // I know this the below line may be omitted but it adds clarity std::string current_location = location; char* processed_location = new char[current_location.length() + 1]; strcpy(processed_location, location.c_str()); std::cout << "\n\n\n===>" << location; std::string md5; md5 = CALL_MD5_Function(processed_location); delete[] processed_location; // it is important to free up the memory return md5; } void ConfirmatoryTest::confirm(std::string location) { std::string current_hash; current_hash = get_hash_for(location); std::map<std::string, std::string>::iterator test; test = confirmatory_tester.find(current_hash); if (test != confirmatory_tester.end()) { confirmatory_result.push_back(location); confirmatory_result.push_back(test->second); // Some logging will help to identify the problem std::ofstream file; file.open("DUPLICATES.txt", std::ios::app); file << location << "\n"; file << test->second << "\n"; file.close(); } else { // This will save the current hash and locations confirmatory_tester[current_hash] = location; } } headers.h - came accross with a LNK2005 error so I have used this #pragma once 
#include<iostream> #include<string> #include<map> #include<list> #include<map> #include<list> #include<fstream> #include"confirmatory.h" #include"preliminary.h" ADVANTAGES 1. The previous one is windows dependent and this is capable of running in on any platform (cross-platformed) The previous one gained insane access to all system files when a manifest file is added but this could only scan the regular file Two way processing check file size and then compute hashes so time is greatly reduced! Every single bug has been fixed proper duplicate file location are stored in a DUPLICATES.txt file The initial code made used of an answer posted by the professional here which causes several license issue finally I've found my own solution so now I can happily host this on GitHub :) https://github.com/VISWESWARAN1998/Tangerine-Duplicate-Files-Finder I am a student and I will keep all the advices you drop here you can find that I have improved my code with previous answers and comments posted by the professionals like not used namespace std, endl, using const etc and etc so kindly inform the areas where I have to improve this code so that I can learn from you and gain knowledge and use on my future work :) Thank you Answer: Well apart from code review fix these bugs, Bug 1: The previous one is windows dependent and this is capable of running in on any platform (cross-platformed) - VISWESWARAN1998 Have you actually tested this? I am quite sure you are not. In Tangerine.cpp remove the windows header file #include<Windows.h> Bug 2: (May be called as a bug) remove this additional header file in scan.h and headers.h #include<list> #include<map> #include<list> Bug 3: No exception is handled if invalid location of the file is entered by an user fix it please... 
Bug 4: IMPORTANT. Your implementation works something like this: the first file of a given size is stored in the map, and whenever another file with the same size is found, both the newly found location and the one stored in the map are added to the list. Do you mean there can be at most two duplicates? Your implementation of course works, but it is not the proper way. Let me give you an example: say a folder contains 3 duplicate files, file1, file2, file3. Your program will do the following: file1 will be added to the map; when file2 is found to be a duplicate, both file1 and file2 will be added to the list; when another duplicate (file3) is found, file1 and file3 will be pushed back to the list. So finally your list contains: file1 file2 file3 file1. You can see that file1 is repeatedly added to the list; your program should handle this. A conditional statement before adding to the list will do the task. Now the above process is repeated for MD5, so the final list of duplicates may look like this: file1 file1 file2 file1 file3 file1 file1. You see there are 7 duplicate entries where there are actually 3-1 = 2 (at least one file must be kept). I still suspect this code will work, since you have handled the location check before removing the file - that is a good habit (following up on your GitHub update) - but this bug needs to be fixed, or time consumption will be greatly increased if there are multiple duplicate files (calculating the MD5 and size of huge files will actually take several seconds, and when repeated it will waste several minutes). Bug 5: "The previous one gained insane access to all system files when a manifest file is added but this could only scan the regular file" - VISWESWARAN1998. Well, I see no implementation of this! What will your program do when it accesses a file which requires administrative permission? I am quite sure it will throw an exception. Fix these bugs. Good work, by the way.
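The repeated-entry problem in Bug 4 disappears if candidates are bucketed into groups (size -> list of paths, then MD5 -> list of paths) rather than pairing each new file with the first one seen. A language-agnostic sketch of that idea in Python (this is not the poster's C++, and the demo file names are made up):

```python
import hashlib
import os
import tempfile
from collections import defaultdict

def md5_of(path, chunk=65536):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def find_duplicates(root):
    # Preliminary test: bucket every regular file by size.
    by_size = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            by_size[os.path.getsize(path)].append(path)
    # Confirmatory test: hash only size-collision candidates and bucket
    # by digest, so N identical files appear once as one group of N
    # paths -- no repeated entries.
    groups = []
    for paths in by_size.values():
        if len(paths) > 1:
            by_hash = defaultdict(list)
            for p in paths:
                by_hash[md5_of(p)].append(p)
            groups.extend(g for g in by_hash.values() if len(g) > 1)
    return groups

# Demo: three identical files plus one same-sized but different file.
with tempfile.TemporaryDirectory() as root:
    for name, text in [("f1.txt", "hello"), ("f2.txt", "hello"),
                       ("f3.txt", "hello"), ("f4.txt", "world")]:
        with open(os.path.join(root, name), "w") as f:
            f.write(text)
    dup_groups = find_duplicates(root)
    print(dup_groups)  # one group containing the three identical files
```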
{ "domain": "codereview.stackexchange", "id": 22951, "tags": "c++" }
What if a Point Mutation is seen in only half the coverage for its location?
Question: I've been looking at some sequenced exomes and found an interesting point mutation that causes a Proline-to-Leucine amino acid change in the protein. This seems like it could have a big impact on the protein's functionality, but before I go any further I want to explore whether or not the variant is a sequencing artifact. I looked at the coverage for this particular region of the genome and found that in some samples, the point mutation is seen in every single read covering the base in question, while in others the point mutation is seen in approximately half of the reads. In all my samples, the base in question is covered by at least 15 separate reads, but usually it's more than 20. My primary question is: how should I interpret the cases where the point mutation is seen in some but not all of the reads covering its location? I'm also interested in any suggestions/advice on the more general topic of determining whether or not the mutation I've found is a sequencing artifact. Answer: I don't know whether the organism you are working with is diploid, but I suspect it's an animal (or even a mammal), so the most parsimonious explanation would be that you have homozygotes and heterozygotes at this SNP position.
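The diploid interpretation can be made quantitative with a simple binomial model comparing genotype likelihoods for the observed alt-read count. A minimal sketch with illustrative numbers (the 1% per-read error rate is an assumed value, not something from the question):

```python
from math import comb

def binom_pmf(k, n, p):
    # Binomial probability of k successes in n trials.
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def genotype_likelihoods(alt_reads, depth, err=0.01):
    # Expected fraction of alt reads under each diploid genotype,
    # allowing a small per-read error rate (err is assumed here).
    models = {"hom_ref": err, "het": 0.5, "hom_alt": 1.0 - err}
    return {g: binom_pmf(alt_reads, depth, p) for g, p in models.items()}

# Variant in roughly half of ~20 reads: heterozygote wins by far.
het_case = genotype_likelihoods(11, 22)
print(max(het_case, key=het_case.get), het_case)

# Variant in every read: homozygous alternate wins.
hom_case = genotype_likelihoods(22, 22)
print(max(hom_case, key=hom_case.get), hom_case)
```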
{ "domain": "biology.stackexchange", "id": 2589, "tags": "genomics, mutations" }
Partition problem with constraint of equal size
Question: I see the problem here, which is the well-known partition problem but with the constraint that the size of both sets must be equal. I looked at the answer and I don't understand why Colin said that adding max(S)⋅length(S) to every element and running the algorithm as normal would make the size of both sets equal. I thought about it for a long time, and I think his claim to send each s∈S to 1+s⋅length(S) is wrong. Example: Initial set: 1 2 3 9 1+s⋅length(S) = 1+s(4) = 5 9 13 37 After running the traditional Boolean algorithm, what I got is one set of 5 9 13 and another set of 37? Can anyone explain this to me? Especially the max(S)⋅length(S) part. Answer: The idea of Colin's algorithm is to force a multiset that has partitions where the cardinalities of the subsets differ to have only partitions where the resulting subsets have the same size, so that the problem can only be solved if we can divide the multiset into two subsets with the same cardinality. For instance, consider the multiset {1,1,1,2,2,3}. This set has more than one way of partitioning it: We can partition it into {3,1,1} and {2,2,1} Or we can partition it into {1,1,1,2} and {3,2} In the variant they talk about, only the first way to partition this multiset is valid. So, what he does is add to each number the following quantity: Number + (number with highest value * cardinality of the set we want to divide) In our example this results in: {19,19,19,20,20,21} And we can partition this multiset of numbers only if we can partition it into two subsets with the same cardinality, in this case: {19,20,20} and {19,19,21} Why does it work? It works because you add to each number the same quantity. So in a partition where the two subsets have the same cardinality, we add the quantity the same number of times to each subset; in a partition with subsets of different cardinality, we add the quantity to each subset an unequal number of times, so the subsets can't have the same sum. 
For instance, we continue with the previous example: 1 + 18 + 2 + 18 + 2 + 18 and 1 + 18 + 1 + 18 + 3 + 18 This partition has the same sum because we add the number eighteen three times to each subset. But 1 + 18 + 1 + 18 + 1 + 18 + 2 + 18 and 3 + 18 + 2 + 18 don't give the same result, because we add eighteen to the first subset four times and only twice to the other subset. As the two had the same sum before the transformation, by adding a number to each subset an unequal number of times, the two don't have the same sum anymore. Hope this is helpful
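This is easy to confirm by brute force on the example multiset: before the shift both a 3+3 and a 4+2 split balance, but after adding max(S)⋅length(S) = 18 to every element, only equal-cardinality splits survive. A small Python sketch:

```python
from itertools import combinations

def balanced_splits(ms):
    # All splits of the multiset (by index) into two parts with equal
    # sums; requiring index 0 in the chosen part counts each split once.
    n, total = len(ms), sum(ms)
    splits = []
    for r in range(1, n):
        for idx in combinations(range(n), r):
            if 0 in idx and 2 * sum(ms[i] for i in idx) == total:
                splits.append(sorted(ms[i] for i in idx))
    return splits

original = [1, 1, 1, 2, 2, 3]
shift = max(original) * len(original)      # 3 * 6 = 18
shifted = [x + shift for x in original]    # [19, 19, 19, 20, 20, 21]

print(balanced_splits(original))  # splits of size 3 AND of size 4
print(balanced_splits(shifted))   # only size-3 (equal-cardinality) splits
```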
{ "domain": "cs.stackexchange", "id": 3701, "tags": "algorithms, partition-problem, pseudo-polynomial" }
Access ObstacleLayer Costmap2D in localplanner
Question: Hi, Is there a way I can access the Costmap2D of a layer in the local planner? I saw this question here, but I couldn't get it to work. The only way I can access the costmap is through layered_costmap_, but it is protected. Originally posted by aswin on ROS Answers with karma: 528 on 2015-01-09 Post score: 1 Answer: I got confused with the CostmapLayer doxygen docs here, which don't show inheritance from Costmap2D. std::vector<boost::shared_ptr<costmap_2d::Layer> >* plugins = costmap_ros_->getLayeredCostmap()->getPlugins(); for (std::vector<boost::shared_ptr<costmap_2d::Layer> >::iterator pluginp = plugins->begin(); pluginp != plugins->end(); ++pluginp) { boost::shared_ptr<costmap_2d::Layer> plugin = *pluginp; if(plugin->getName().find(layer_search_string_)!=std::string::npos) { boost::shared_ptr<costmap_2d::CostmapLayer> costmap; costmap = boost::static_pointer_cast<costmap_2d::CostmapLayer>(plugin); unsigned char* grid = costmap->getCharMap(); // do sth with it } } Originally posted by aswin with karma: 528 on 2015-01-09 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 20524, "tags": "ros, navigation, costmap-2d, move-base, layered-costmap" }
Why is my loss negative while training an SAE?
Question: I am using loss='binary_crossentropy'. Here is my code. I tried to increase the number of training images and epochs, but that did not help me. input_img = Input(shape=(28, 28, 1)) x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) encoded = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded) x = UpSampling2D((2, 2))(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) x = UpSampling2D((2, 2))(x) x = Convolution2D(16, 3, 3, activation='relu', border_mode='valid')(x) x = UpSampling2D((2, 2))(x) decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x) autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') autoencoder.fit(x_train, x_train, nb_epoch=10, batch_size=500, shuffle=True, validation_data=(x_test, x_test), verbose=1) Answer: Use a linear output and mean squared error loss, assuming you are predicting normalised pixel intensity values. Cross-entropy over sigmoid output layer activations can do odd things when the values are not strictly in $\{0,1\}$, depending on implementation.
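To see why the reported loss can dip below zero: binary cross-entropy is only guaranteed non-negative when targets lie in [0, 1], so feeding the autoencoder unscaled pixel values (0-255 instead of 0-1) can drive it negative, while mean squared error cannot be negative. A small illustration with made-up numbers:

```python
from math import log

def bce(t, p):
    # Binary cross-entropy for one target t and one prediction p in (0, 1).
    return -(t * log(p) + (1 - t) * log(1 - p))

print(bce(1.0, 0.9))     # target inside [0, 1]: loss is positive
print(bce(0.5, 0.5))     # still positive
print(bce(2.0, 0.9))     # target outside [0, 1]: loss is negative
print((2.0 - 0.9) ** 2)  # MSE stays non-negative regardless
```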
{ "domain": "datascience.stackexchange", "id": 1741, "tags": "unsupervised-learning, autoencoder" }
Are these languages regular?
Question: Consider the languages $L_1, L_2 \subseteq \Sigma^*$, where $\Sigma=\{a,b,c\}$. Define $$L_1/L_2 = \{x : \exists y \in L_2 \text{ such that } xy \in L_1 \}$$ Let $L_1 = \{a^nb^nc^{2n}: n \ge 0\}$ and $L_2 = \{b^nc^{2n}: n \ge 0\}$. Justify whether $L_1$ and $L_1/L_2$ are regular. $L_1$ will not be a CFL either, as it needs more than one stack to count. $L_1/L_2$ gives concatenation and the result will be $a^{n} b^{2n} c^{4n}$, which is again non-regular. Am I right? I am a little bit confused about the $L_1/L_2$ case, as for some $y$, $xy$ belongs to $L_1$. Answer: Since this looks like an exercise, I'll just hint at the solution for now; some of this has already been nudged at in comments. What we have: $L_1 = \{a^nb^nc^{2n}\ |\ n\ge 0\}$ $L_2 = \{b^nc^{2n}\ |\ n\ge 0\}$ $L_1/L_2 = \{x\ |\ xy\in L_1 \mbox{ for some } y\in L_2\}$. Plugging in the actual $L_1$ and $L_2$, we get $\begin{array}{ll} L_1/L_2 &= \{x\ |\ xb^nc^{2n}\in L_1 \mbox{ for some }n\ge 0\}\\ &= \{x\ |\ xb^nc^{2n}=a^mb^mc^{2m} \mbox{ for some }m,n\ge 0\} \end{array}$ Then it is straightforward to show that $m,n$ must actually be equal, and the rest is easy.
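For intuition, the quotient can also be explored by brute force: enumerate the words of $L_1$ up to a small bound, split each one as $xy$, and keep $x$ whenever $y \in L_2$. A minimal sketch (the bound 5 is arbitrary):

```python
def in_L2(w):
    # Membership test for L2 = { b^n c^(2n) : n >= 0 }.
    n = len(w) // 3
    return len(w) % 3 == 0 and w == "b" * n + "c" * (2 * n)

def quotient_elements(max_n):
    # x is in L1/L2 iff some w in L1 factors as w = x y with y in L2.
    found = set()
    for m in range(max_n + 1):
        w = "a" * m + "b" * m + "c" * (2 * m)   # a word of L1
        for cut in range(len(w) + 1):
            if in_L2(w[cut:]):
                found.add(w[:cut])
    return found

q = quotient_elements(5)
# Survivors: the full L1 words (y = empty string) and the prefixes a^m
# (y = b^m c^(2m)), matching the hint that m and n must coincide.
print(sorted(q, key=lambda s: (len(s), s)))
```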
{ "domain": "cs.stackexchange", "id": 6574, "tags": "automata" }
Word Wrapping and Boxing
Question: This was an experiment to take any text and wrap it to a given number of columns. It can also wrap the text in a single box, box multiple blocks of text, and limit the number of boxes per line. The main paragraph tested was one that I found on Yahoo and thought was really fun to use as a test. Would you be willing to look at my code for clarity and at the algorithm for efficiency?

WordWrap.java

package wordwrap;

public class WordWrap {

    private static String wordWrap(String text, int width, String delim) {
        String out = "";
        String[] words;
        int currentWidth = 0;

        //Parse out tabs and new lines
        text = text.replaceAll("[\t\n]", " ");
        words = text.split(delim);

        //Rewrap to new width
        for (String word : words) {
            if (word.length() >= width) {
                //If it's not the first word, put it on a new line
                if (!out.isEmpty()) {
                    out += "\n";
                }
                out += word + " ";
                currentWidth = word.length();
            } else if ((currentWidth + word.length()) <= width) {
                out += word + " ";
                currentWidth += word.length() + 1;
            } else {
                out = out.substring(0, out.length() - 1);
                out += "\n" + word + " ";
                currentWidth = word.length() + 1;
            }
        }
        return out.substring(0, out.length() - 1);
    }

    public static String wordWrap(String text, int width) {
        return wordWrap(text, width, " ");
    }

    public static String drawBox(String text, int width) {
        String out;
        String border;
        String[] lines;

        border = " ";
        for (int i = 0; i < (width - 2); i++) {
            border += "-";
        }
        border += " \n";
        out = border;

        if (width < 5) {
            width = 5;
        }
        width -= 4;

        lines = wordWrap(text, width).split("\n");
        for (String line : lines) {
            out += String.format("| %-" + width + "s |\n", line);
        }
        out += border;
        return out;
    }

    public static String drawBoxes(String[] messages, int width) {
        String out = "";
        String[] boxes = new String[messages.length];
        String[][] lines; //String[box's full string][array of box's string split at new line]
        int maxBoxLines = 0;

        if (messages.length == 1) {
            out = drawBox(messages[0], width);
        } else {
            width = width / messages.length - 1;
            for (int i = 0; i < messages.length; i++) {
                boxes[i] = drawBox(messages[i], width);
            }
            for (String box : boxes) {
                int boxLines = box.split("\n").length;
                if (boxLines > maxBoxLines) {
                    maxBoxLines = boxLines;
                }
            }
            lines = new String[boxes.length][maxBoxLines];
            for (int b = 0; b < boxes.length; b++) {
                lines[b] = boxes[b].split("\n");
            }
            for (int l = 0; l < maxBoxLines; l++) {
                String currentLine = "";
                for (int b = 0; b < boxes.length; b++) {
                    if (l >= lines[b].length) {
                        currentLine += String.format("%" + width + "s ", "");
                    } else {
                        currentLine += lines[b][l] + " ";
                    }
                }
                out += currentLine.substring(0, currentLine.length() - 1) + "\n";
            }
        }
        return out;
    }

    public static String drawBoxes(String[] messages, int width, int boxesWide) {
        String out = "";
        if (messages.length <= boxesWide) {
            out = WordWrap.drawBoxes(messages, width);
        } else {
            int currentBox = 0;
            String[] currentMessages;
            while (currentBox < messages.length) {
                if (boxesWide > messages.length - currentBox) {
                    boxesWide = messages.length - currentBox;
                }
                currentMessages = new String[boxesWide];
                for (int i = 0; i < boxesWide; i++) {
                    currentMessages[i] = messages[currentBox];
                    currentBox++;
                }
                out += WordWrap.drawBoxes(currentMessages, width) + "\n";
            }
        }
        return out.substring(0, out.length() - 1);
    }
}

WordWrapTest.java

package wordwrap;

import static wordwrap.WordWrap.*;

public class WordWrapTest {

    public static void main(String[] args) {
        final int WIDTH = 100;
        final String sample = "Once Jerry and 16 midgets set off on a journey. They were looking " +
                "for the great treasure of Ecrapolis. On their way\nthey got lost and camped " +
                "inside a giant elephant. they awoke the next morning to find that the elephant " +
                "had walked\nthem to Los Angeles. Being from an underground secret city, Jerry " +
                "and the midgets had no idea what to think of this.\nThey all went out exploring " +
                "the city, and got into all sorts of crazy-asss trouble. 
Jerry tried surfing and " +
                "was thrown off\nhis board into the sand, mouth-first. He proceeded to munch the " +
                "sand down, saying it as the best food he'd had in\nages. Suddenly while digging " +
                "through this delectable muck, he hit something hard. IT WAS A TREASURE CHEST!\n" +
                "He opened it slowly as the 16 midgets crowded around him. Unable to fathom what " +
                "was inside he tore it open. Inside\nwas a note, \"Ha Ha! There's no real " +
                "treasure You retarded egg goblin!!\" With that note, Jerry and the midgets " +
                "turned\npurple and floated into outer space, doomed to wander the universe.";
        final String sample2 = "Blah blah blah blah blah\n";

        System.out.println("Raw Sample:\n" + sample + "\n");
        System.out.println("Wrapped sample:\n" + wordWrap(sample, WIDTH) + "\n");
        System.out.println("Boxed sample:\n" + drawBox(sample, WIDTH));

        String[] samples = { sample2, sample };
        System.out.println("Boxed samples:\n" + drawBoxes(samples, WIDTH));

        String[] samples2 = { sample, sample2, sample };
        System.out.println("Boxed samples3:\n" + drawBoxes(samples2, WIDTH));

        String[] samples3 = { sample, sample2, sample2, sample };
        System.out.println("Boxed samples3:\n" + drawBoxes(samples3, WIDTH));

        String[] samples4 = { sample, sample2, sample, sample2, sample, };
        System.out.println("Boxed samples4:\n" + drawBoxes(samples4, WIDTH, 3));
    }
}

Test Output

Raw Sample:
Once Jerry and 16 midgets set off on a journey. They were looking for the great treasure of Ecrapolis. On their way
they got lost and camped inside a giant elephant. they awoke the next morning to find that the elephant had walked
them to Los Angeles. Being from an underground secret city, Jerry and the midgets had no idea what to think of this.
They all went out exploring the city, and got into all sorts of crazy-asss trouble. Jerry tried surfing and was thrown off
his board into the sand, mouth-first. He proceeded to munch the sand down, saying it as the best food he'd had in
ages. 
Suddenly while digging through this delectable muck, he hit something hard. IT WAS A TREASURE CHEST!
He opened it slowly as the 16 midgets crowded around him. Unable to fathom what was inside he tore it open. Inside
was a note, "Ha Ha! There's no real treasure You retarded egg goblin!!" With that note, Jerry and the midgets turned
purple and floated into outer space, doomed to wander the universe.

Wrapped sample:
Once Jerry and 16 midgets set off on a journey. They were looking for the great treasure of Ecrapolis. On their way they got lost and camped inside a giant elephant. they awoke the next morning to find that the elephant had walked them to Los Angeles. Being from an underground secret city, Jerry and the midgets had no idea what to think of this. They all went out exploring the city, and got into all sorts of crazy-asss trouble. Jerry tried surfing and was thrown off his board into the sand, mouth-first. He proceeded to munch the sand down, saying it as the best food he'd had in ages. Suddenly while digging through this delectable muck, he hit something hard. IT WAS A TREASURE CHEST! He opened it slowly as the 16 midgets crowded around him. Unable to fathom what was inside he tore it open. Inside was a note, "Ha Ha! There's no real treasure You retarded egg goblin!!" With that note, Jerry and the midgets turned purple and floated into outer space, doomed to wander the universe.

Boxed sample:
 --------------------------------------------------------------------------------------------------
| Once Jerry and 16 midgets set off on a journey. They were looking for the great treasure of      |
| Ecrapolis. On their way they got lost and camped inside a giant elephant. they awoke the next    |
| morning to find that the elephant had walked them to Los Angeles. Being from an underground      |
| secret city, Jerry and the midgets had no idea what to think of this. They all went out          |
| exploring the city, and got into all sorts of crazy-asss trouble. Jerry tried surfing and was    |
| thrown off his board into the sand, mouth-first. He proceeded to munch the sand down, saying it  |
| as the best food he'd had in ages. Suddenly while digging through this delectable muck, he hit   |
| something hard. IT WAS A TREASURE CHEST! He opened it slowly as the 16 midgets crowded around    |
| him. Unable to fathom what was inside he tore it open. Inside was a note, "Ha Ha! There's no     |
| real treasure You retarded egg goblin!!" With that note, Jerry and the midgets turned purple and |
| floated into outer space, doomed to wander the universe.                                         |
 --------------------------------------------------------------------------------------------------

Boxed samples / Boxed samples3 / Boxed samples4:
[The remaining runs render the same messages as two, three, and four boxes side by side, and as five boxes limited to three per row. The column alignment of those multi-box layouts did not survive transcription and is not reproduced here.]

Answer:

String concatenation

String concatenation using += is inefficient. Use a StringBuilder instead.

Preparation before splitting

The replacement before splitting could be improved here:

    //Parse out tabs and new lines
    text = text.replaceAll("[\t\n]", " ");
    words = text.split(delim);

Multiple consecutive whitespace characters will result in multiple consecutive space characters. And since the default delimiter is a single space, the words array may contain empty elements.
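The empty-element behavior is easy to see in isolation. The sketch below (the class name SplitDemo is my own, not part of the reviewed code) compares splitting after replacing only tabs and newlines with splitting after collapsing all whitespace runs:

```java
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        String text = "tabs\tand\nnewlines  with   runs";

        // Replacing only [\t\n] leaves runs of spaces behind,
        // so split(" ") yields empty elements between them:
        String[] naive = text.replaceAll("[\t\n]", " ").split(" ");
        System.out.println(Arrays.toString(naive));
        // [tabs, and, newlines, , with, , , runs]

        // Collapsing all whitespace first gives clean tokens:
        String[] clean = text.replaceAll("\\s+", " ").split(" ");
        System.out.println(Arrays.toString(clean));
        // [tabs, and, newlines, with, runs]
    }
}
```

Those empty tokens then flow into the wrapping loop as zero-length "words", producing stray spaces in the output.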
I recommend adjusting the replacement pattern for better results:

    text = text.replaceAll("\\s+", " ");

Declare variables right before you need them

In many places of the code you declare variables at the top of a function, even if they won't be used by all execution branches. This is not recommended. It's best to declare variables right before you need them. This minimizes their lifetime, which is a window of vulnerability during which the variable can be misused, leading to bugs.

Unit testing

Instead of printing formatted text to standard output, this kind of functionality really begs for unit testing, where you assert the expected outputs. That automates the verification step for you, so that you don't have to re-read the output and verify it with your eyes.
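To make the StringBuilder point concrete, here is a minimal sketch (the class and its join helper are my own illustration, not part of the reviewed code) of accumulating output without the repeated copying that String += causes:

```java
public class BuilderDemo {
    // Each String += allocates a new String and copies everything
    // accumulated so far; a StringBuilder appends into a growable
    // buffer instead, turning quadratic copying into linear work.
    static String join(String[] words) {
        StringBuilder out = new StringBuilder();
        for (String word : words) {
            if (out.length() > 0) {
                out.append(' ');
            }
            out.append(word);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(new String[] { "wrap", "these", "words" }));
        // wrap these words
    }
}
```

Appending a separator only when the builder is non-empty also removes the need for the trailing-space trimming done with substring in the original wordWrap.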
{ "domain": "codereview.stackexchange", "id": 19176, "tags": "java, performance, algorithm" }
How does some matter prevent other matter from reaching a lower potential energy state?
Question: Before answering the question, keep in mind that I am a second-year biology student with no experience studying physics, and I consider myself 'mathematically illiterate'.

An example of matter preventing other matter from reaching a lower potential energy state is a table carrying an object, preventing it from falling due to gravity. However, I find it hard to understand what energy interactions are occurring between the matter involved.

After Marcello Fonda kindly explained the terminology and concepts relevant to my question, I have refined my question and clarified my understanding of the subject. I understand that energy is 'a measure of an object's ability to interact with other objects, changing their state of motion' and that 'holding an object in place maintains its potential for interaction (energy), it doesn't change it'.

Initially, I suggested that the force keeping the weight of the object (which I assume is a product of the amount of matter in the object and the 'strength' of gravity) at a higher potential energy was possibly due to the repulsive and attractive forces between the atoms constituting the table. Marcello Fonda has agreed that the static force is caused by atomic repulsion. This makes me wonder: will increasing the weight of the object on top of the table cause the atoms constituting it to be 'squeezed' or compressed, thereby increasing their energy state? If so, where would the source of energy pressing the atoms together come from?

Side note: it would be nice to know the formal name for these 'barriers', or more specifically, the name for the matter preventing the other matter from falling to lower potential energy states.

Answer: The concept of energy is a rather mysterious one. Feynman's explanation is probably one of the most beautiful, simple, and clear I've seen so far; you should check it out for a good understanding. 
The main point of the answer to your question, though, is that we like to define energy as always being related to the force used for moving objects, not for holding them in place. Keeping an object still doesn't change its energy, even though force may be needed: you can thus think of energy as a measure of an object's ability to interact with other objects, changing their state of motion, for example by colliding with them or by pulling or pushing them around.

Holding an object in place maintains its potential for interaction (energy); it doesn't change it. If you release the object by removing the table, you can turn that potential for interaction into real interaction, for example by tying your object to a pendulum clock and having it move the dials while losing height (by the way, this is sometimes called a gravity battery).

A change in energy is called work, so for those forces which cause no change in an object's energy we say that they do no work. They are often called ideal constraint forces, so the table in this context would make for an ideal constraint.

The main reason all this stuff sounds counterintuitive is that we as animals consume energy even for static holds, but that is because our muscles work by continual micro-contractions, not by a single hold. A static hold is never truly static for our muscles; rather, it is a very fast cycle of pull-and-release oscillations (you might be aware of the actual mechanism) which actually does work at a microscopic level (it takes the energy from ATP bonds), though not at a macroscopic one. A table doesn't experience this kind of fatigue, since atomic repulsion is rather static in nature.

EDIT to address the further question: Increasing the weight will make the table bend and squeeze. Its atoms will then be at a higher energy state because of the stretching and squeezing of their bonds (which can, reasonably accurately, be thought of as acting like microscopic springs). 
This increase of energy will come at the expense of the energy of the object, which will fall a little distance by bending the table downwards.

What we expect is that the bending will eventually stop. This can be explained in terms of energy by the fact that, as you bend the table, the energy required to bend it further increases. You reach a point where lowering the object yields less energy than is required to bend the table, so the descent stops.

If no energy is dissipated in the process, lifting the weight will let the table go back to its initial, straight position. If there is dissipation (bonds are broken), then the table won't be able to get back to its initial position. If the dissipation is slow, the table will get saggy; if it is fast, the table will crack and break.

Note that all this is regardless of the table's height: unless the object is tied to the ground with some sort of spring, raising the table and the object together (like carrying them to the second floor) doesn't stress the table more. Otherwise we couldn't have desks in skyscrapers.
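The "descent stops" argument can be made quantitative with a toy model of my own (not from the original answer): treat the table as a linear spring of stiffness k supporting an object of weight mg.

```latex
% Toy model (assumption): the table acts as a linear spring of stiffness k.
% The bending stops where the spring force balances the weight:
F_{\mathrm{spring}} = kx = mg \quad\Rightarrow\quad x_{\mathrm{eq}} = \frac{mg}{k}.
% Energy bookkeeping over the descent to equilibrium:
\Delta U_{\mathrm{object}} = -mg\,x_{\mathrm{eq}}, \qquad
U_{\mathrm{spring}} = \tfrac{1}{2}\,k\,x_{\mathrm{eq}}^{2} = \tfrac{1}{2}\,mg\,x_{\mathrm{eq}}.
% Only half of the released gravitational energy is stored in the bent
% table; the remainder goes into oscillations that eventually dissipate.
```

On this model, doubling the weight doubles the deflection and quadruples the stored elastic energy, which matches the statement that further bending costs ever more energy.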
{ "domain": "physics.stackexchange", "id": 93655, "tags": "quantum-mechanics, energy, atomic-physics, potential-energy, matter" }