The existence of a solution of a multivariate polynomial
Question: Given a multivariate polynomial of degree 4 in $v$ (real) variables with $N$ terms, is there an efficient algorithm to verify whether it has a solution? I am not aware of any algorithm for this problem; can someone please comment? I think the solution itself can be calculated by the Newton-Raphson algorithm (but I am unaware of its running-time complexity). Answer: No. The problem is NP-hard. It is NP-hard to test whether a system of quadratic equations has any solution. See Reduction from vertex-cover to system of quadratic equations and Determine whether the system of equations is underdetermined or overdetermined (related: 3-SAT and Systems of Nonlinear Modular Equations). Let $f_1(x)=0$, ..., $f_k(x)=0$ be a system of quadratic equations, where each $f_i$ is a multivariate quadratic polynomial in the variables. Define $$g(x) = f_1(x)^2 + \dots + f_k(x)^2.$$ Since each square is non-negative, $g(x)=0$ forces every $f_i(x)=0$; conversely, any common root of the $f_i$ is a root of $g$. Thus the single equation $g(x)=0$ has a solution if and only if the system $f_1(x)=0$, ..., $f_k(x)=0$ has a solution. Moreover, $g$ has degree 4. This reduction proves that your problem is NP-hard.
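The sum-of-squares reduction can be sketched numerically; the two quadratics below are made-up examples (not from the question), chosen so that $(1, 2)$ is a common root:

```python
# Sketch of the reduction: g(x) = f1(x)^2 + f2(x)^2 vanishes exactly
# where every f_i vanishes, and deg(g) = 4 when the f_i are quadratic.

def f1(x, y):
    return x * y - 2          # quadratic, zero at (1, 2)

def f2(x, y):
    return x ** 2 + y - 3     # quadratic, zero at (1, 2)

def g(x, y):                  # degree-4 polynomial
    return f1(x, y) ** 2 + f2(x, y) ** 2

assert g(1, 2) == 0           # common root of the system => root of g
assert g(2, 2) > 0            # f1(2,2) != 0, so g stays strictly positive
```

Deciding whether such a $g$ has a real root is therefore at least as hard as solving the quadratic system.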
{ "domain": "cs.stackexchange", "id": 21032, "tags": "algorithms" }
How is walking related to Newton's third law
Question: How is walking related to Newton's third law? I know you push on the ground and the ground pushes back, but how does this happen? Answer: At the microscopic level, the electron charge clouds of the two interfaces repel each other when they come close together. So essentially, you never touch the ground; you are always separated from it by a distance of the order of an atomic radius.
{ "domain": "physics.stackexchange", "id": 54205, "tags": "newtonian-mechanics" }
Data structure for efficient searching when insertions and removals are only one-sided
Question: I need a data structure for storing a number $n$ of elements, each of which is associated with some different time $t_i$. $n$ varies, and while it has a theoretical upper limit, this limit is many orders of magnitude larger than what is typically used. Through my application I can ensure that: Inserted elements are always newer than all existing elements, i.e., if an element associated with a time $\check{t}$ is inserted, then $\check{t}>t_i ~∀ i ∈ \{1,…,n\}$. Elements are inserted one by one. Only the oldest elements are removed, i.e., if element $j$ is removed, then $t_j < t_i ~∀ i ∈ \{1,…,n\} \setminus \{j\}$. Removals happen mostly one by one, but there is no direct harm if an element's removal is delayed, as long as the fraction of spuriously stored elements remains smaller than 1. Apart from inserting and removing, the only thing I need to do is find the two neighbouring elements for some given time $\tilde{t}$ with $\min\limits_i t_i < \tilde{t} < \max\limits_{i} t_i$. In other words, I need to find the two elements $j$ and $k$ such that $t_j<\tilde{t}<t_k$ and $∄ l ∈ \{1,…,n\}: t_j<t_l<t_k$. My criteria for the data structure are: Finding elements as described above should be as quick as possible. Inserting and removing should be quick. The data structure should be comparably simple to implement. As long as we are not talking about a small runtime offset, each criterion has priority over the next. My research so far has yielded that the answer is likely some kind of self-balancing search tree, but I failed to find any information on which of them is best for the case of one-sided inserting and deleting, and it would probably cost me considerable time to find out myself. Also, I only found incomplete information about how well the trees self-organise and how quickly (e.g., AVL trees self-organise more rigidly than red-black trees), let alone how this is affected by one-sided inserting and deleting. 
Answer: Store the elements as a sequence, sorted by increasing timestamp. Use binary search to find the location where $\tilde{t}$ would occur if it were in the array; then you can easily find the two neighboring elements. Finding the two neighboring elements can be done in $O(\lg n)$ time. You'll also need to be able to append to the end of the sequence and delete from the beginning. Thus, basically you need a queue. There are standard constructions for a queue. For instance, you can store them in an array, with amortized $O(1)$-time insert and delete operations. Basically, you have an array for the elements of the sequence, and a start index (for the beginning of the sequence) and an end index (for the end of the sequence). To delete from the beginning, increment the start index. To add to the end, increment the end index; if this runs past the end of the existing array, allocate a new array of double the size and copy into the new array. Alternatively: you can store the elements in a balanced binary tree. This will achieve worst-case $O(\lg n)$-time for all operations.
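The array-with-two-indices approach from the answer can be sketched in a few lines of Python; the class name, the compaction threshold, and the use of the `bisect` module are my own choices for illustration, not from the answer:

```python
from bisect import bisect_left

class TimestampQueue:
    """Array-backed queue of (t, value) pairs; pushes are strictly newer,
    pops remove the oldest, and neighbour lookup is a binary search."""

    def __init__(self):
        self._items = []   # live elements are self._items[self._start:]
        self._start = 0    # index of the oldest live element

    def push(self, t, value):
        # Caller guarantees t is newer than every stored timestamp.
        assert not self._items or t > self._items[-1][0]
        self._items.append((t, value))

    def pop_oldest(self):
        item = self._items[self._start]
        self._start += 1
        # Amortized O(1): compact once more than half the array is dead.
        if self._start > len(self._items) // 2:
            self._items = self._items[self._start:]
            self._start = 0
        return item

    def neighbours(self, t):
        """Return the stored pairs (t_j, v_j), (t_k, v_k) with t_j < t < t_k,
        assuming t lies strictly between the oldest and newest timestamps."""
        i = bisect_left(self._items, (t,), self._start)
        return self._items[i - 1], self._items[i]
```

Both `push` and `pop_oldest` are amortized $O(1)$ and `neighbours` is $O(\lg n)$, matching the answer's analysis.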
{ "domain": "cs.stackexchange", "id": 6485, "tags": "data-structures, search-trees, algorithm-design" }
DC component: Seismic inversion
Question: Multiple papers (for example, Limitations on impedance inversion of band-limited reflection data, Ghosh [2000]) and textbooks covering seismic inversion mention a necessary component of any seismic inversion: the DC (or zero frequency) component. For lack of a better question, what does a "zero frequency" component look like? Also, why is this considered a necessary component of any complete seismic inversion scheme? Answer: The impedance inversion problem asks: "Recover acoustic impedance from seismic reflection waveforms". The forward modeling procedure from acoustic impedance to seismic reflection waveforms involves calculating reflection coefficients and then convolving these with a wavelet. Let's assume, for now, that you have perfect seismic resolution and therefore your wavelet is of infinite bandwidth. This means that the inversion problem becomes: "Recover the acoustic impedance from a given set of reflection coefficients". As shown by Peterson et al. (1955), for small contrasts in acoustic impedance, the reflection coefficient equation can be linearized as shown below: $$r_i = \frac{Z_{i+1}-Z_i}{Z_{i+1}+Z_i} \approx \frac{1}{2}\,\Delta \ln Z_i.$$ Linear problems are much easier to solve computationally than non-linear ones. This makes the forward model calculation a simple difference, the discrete analog of a derivative. Note: this is for normal incidence, but a similar analysis can be done for non-normal incidence. So... if it takes a derivative to calculate the reflection coefficients, it must take an integral to calculate the acoustic impedance. However, as everyone knows, when you integrate any function, you inevitably run into the non-uniqueness caused by that dreaded +C at the end of all integrals. That +C, in this context, means that there are an infinite number of acoustic impedance models that can give you the exact same reflection coefficients. So you need the +C (i.e. the DC, or zero frequency, or additive constant) information to do a complete inversion and really recover the true values of impedance in the earth. 
All you get from the seismic are the reflection coefficients (and that's with infinite resolution), so you will need to get this DC information from another independent source. In practice, the inversion is not done by a simple integration but by setting up a linear system and iteratively converging (by something like conjugate gradient) to the answer that minimizes some misfit function between the forward model and the data. As complex as that sounds, deep down, we are pretty much just doing a careful integration. Hope that helped. Peterson, R. A., W. R. Fillippone, and F. B. Coker, 1955, The synthesis of seismograms from well log data: Geophysics, 20 (3), 516–538.
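The "integration up to a constant" can be made concrete in a few lines of NumPy; the impedance profile below is made up purely for illustration:

```python
import numpy as np

Z = np.array([2.0, 2.1, 2.5, 2.4, 3.0])    # made-up impedance profile
r = (Z[1:] - Z[:-1]) / (Z[1:] + Z[:-1])    # normal-incidence reflection coefficients

# Small-contrast linearisation: r_i ~ (1/2) * d(ln Z), so summing 2*r
# ("integrating") recovers ln Z only up to the additive constant ln Z_0.
lnZ_relative = np.concatenate([[0.0], np.cumsum(2 * r)])
Z_wrong = np.exp(lnZ_relative)             # off by the unknown factor Z_0
Z_fixed = Z[0] * Z_wrong                   # the "DC" value Z_0 pins it down
```

Every choice of the constant gives (to first order) the same reflection coefficients, which is exactly the non-uniqueness the answer describes; only the externally supplied DC value selects the true profile.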
{ "domain": "earthscience.stackexchange", "id": 1419, "tags": "seismic, inversion" }
Vertical motion initial velocity given max height
Question: I'm having a hard time finding a formula that calculates the initial vertical velocity for a projectile that should reach a certain height. I'm using this formula to calculate both vertical and horizontal displacement: This works great if I have the initial velocity, with any angle (even for motion in the vertical axis only!). The problem is that I have the maximum height and the angle, and I need to calculate that initial velocity. I tried the following formula: And it works for angles other than 90 degrees (PI / 2). But I need to handle the case of vertical motion only. This angle works in the previous formula, but not here. Also, obviously, I need to handle the other angles too (for which this formula works well). Is there any other formula I can try? I have tried Wikipedia and Khan Academy, but couldn't find what I needed. Answer: Thanks to John Rennie and Goodies, I have reached the following equation: $$v_{0} = \sqrt{2gh}$$ This works fine in my case, and calculates the initial velocity for a projectile fired upwards at a 90 degree angle, which should reach a certain height.
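The answer's special case drops out of the general relation $h = (v_0 \sin\theta)^2 / (2g)$, which also covers every other launch angle; a tiny sketch (the function name is my own):

```python
import math

def launch_speed(h, angle_deg, g=9.81):
    """Initial speed needed for a projectile launched at angle_deg
    (degrees above horizontal) to peak at height h, solved from
    h = (v0*sin(theta))**2 / (2*g)  =>  v0 = sqrt(2*g*h) / sin(theta)."""
    return math.sqrt(2 * g * h) / math.sin(math.radians(angle_deg))

# At 90 degrees sin(theta) = 1, so this reduces to v0 = sqrt(2*g*h),
# the answer's formula; shallower angles need a larger initial speed.
```

So a single formula handles the vertical case and all other angles, which was the asker's difficulty.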
{ "domain": "physics.stackexchange", "id": 25591, "tags": "homework-and-exercises, kinematics, projectile" }
Yeast contamination with fungus patch
Question: I am working on a yeast mutant and I am observing its sporulation efficiency on agar plates. One of my plates got some small dark-green fungus patches. I can't grow the yeast again and dump the contaminated plate, because I am counting at specific time intervals to compare, and it would take a lot of time to go through the entire process again. What should I do? Should I remove the fungus with a scalpel? And is this fungus going to affect the yeast growth? Answer: Either you can cut out the fungus and get away with it, or the fungus will rapidly overgrow your plate regardless. You might as well try to cut out the fungus. If it overgrows your plate, the experiment with that plate is done. This is why you should have parallel plates for this sort of experiment. There is a saying: don't put all of your eggs in one basket. The reason is that if you sit on that basket, all the eggs are broken. If you are counting on just one plate for your experiment and you sit on it, or it gets overgrown with fungus, you must start over. The other reason to have several parallel plates going is that you can average the spore count. One plate might have more or fewer for some random reason, like fungus. But if you have several and average them out, you will reduce noise.
{ "domain": "biology.stackexchange", "id": 7757, "tags": "cell-biology, cell-culture, yeast" }
Translating Objective-C use of static and +(void)initialize to Swift
Question: I am converting an old Objective-C class into Swift. My actual question is at the very end, after all of the code. Here is a cut-down version of the Objective-C class:

DateInfo.h:

#import <Foundation/Foundation.h>

@interface DateInfo : NSObject

// There are also some instance properties but those aren't relevant to the question

+ (NSInteger)numberOfMonths;
+ (NSArray *)shortMonthNames;
+ (NSArray *)longMonthNames;
// several other class methods

@end

DateInfo.m:

#import "DateInfo.h"

static NSArray *shortMonthNames = nil;
static NSArray *longMonthNames = nil;
// there are several other statics as well

@implementation DateInfo

+ (void)reinitialize {
    NSCalendar *cal = [NSCalendar currentCalendar];
    NSDateFormatter *yearFormatter = [[NSDateFormatter alloc] init];
    shortMonthNames = [yearFormatter shortStandaloneMonthSymbols];
    longMonthNames = [yearFormatter standaloneMonthSymbols];
    // lots of other processing for the other statics
}

+ (void)initialize {
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(reinitialize)
                                                 name:NSCurrentLocaleDidChangeNotification
                                               object:nil];
    [self reinitialize];
}

+ (NSInteger)numberOfMonths {
    return shortMonthNames.count;
}

+ (NSArray *)shortMonthNames {
    return shortMonthNames;
}

+ (NSArray *)longMonthNames {
    return longMonthNames;
}

// Lots of other instance and class methods

As you can see, the initialize method sets up a notification handler so that all of the static variables can be reinitialized if the locale is changed while the app is running. Here is my Swift code. Since there is no initialize in Swift (any more), my solution is to use private backing variables for the public static variables.

import Foundation

public struct DateInfo {
    // some normal instance properties irrelevant to the question

    private static var _formatter: DateFormatter!
    private static var formatter: DateFormatter {
        if _formatter == nil {
            _formatter = DateFormatter()
            NotificationCenter.default.addObserver(forName: NSLocale.currentLocaleDidChangeNotification, object: nil, queue: nil) { (notification) in
                _formatter = DateFormatter()
                _shortMonthNames = nil
                _longMonthNames = nil
                // reset all of the other statics as well
            }
        }
        return _formatter
    }

    private static var _shortMonthNames: [String]!
    public static var shortMonthNames: [String] {
        if _shortMonthNames == nil {
            _shortMonthNames = formatter.shortStandaloneMonthSymbols
        }
        return _shortMonthNames
    }

    private static var _longMonthNames: [String]!
    public static var longMonthNames: [String] {
        if _longMonthNames == nil {
            _longMonthNames = formatter.standaloneMonthSymbols
        }
        return _longMonthNames
    }

    public static var numberOfMonths: Int {
        return shortMonthNames.count
    }

    // lots of other similar private/public pairs of statics
}

Is this an appropriate way to translate the functionality, given the need to be able to reinitialize the statics? I don't like having a private static backing each public static property.

Answer: Some thoughts:

Make formatter a static stored property (these are guaranteed to be lazily initialized only once). This allows you to get rid of the backing property _formatter.

For more clarity, move the reinitialization code to a separate method, as in your Objective-C version.

Do not cache the other static properties. For example, returning formatter.shortStandaloneMonthSymbols is only one indirection more than returning _shortMonthNames, but is simpler and allows you to get rid of the remaining backing properties.

A minor point: the notification closure does not access the (notification) argument, which can therefore be replaced by _. 
Putting it together, we have the following implementation:

public struct DateInfo {

    private static func reinitialize() {
        formatter = DateFormatter()
    }

    private static var formatter: DateFormatter = {
        // This closure is executed exactly once, on the first access of the `formatter` property.
        NotificationCenter.default.addObserver(forName: NSLocale.currentLocaleDidChangeNotification, object: nil, queue: nil) { _ in
            reinitialize()
        }
        return DateFormatter()
    }()

    public static var shortMonthNames: [String] {
        return formatter.shortStandaloneMonthSymbols
    }

    public static var longMonthNames: [String] {
        return formatter.standaloneMonthSymbols
    }

    public static var numberOfMonths: Int {
        return shortMonthNames.count
    }
}

Another option would be to use the typical Singleton pattern:

public class DateInfo {

    static let shared = DateInfo()

    private var formatter: DateFormatter

    private init() {
        formatter = DateFormatter()
        NotificationCenter.default.addObserver(forName: NSLocale.currentLocaleDidChangeNotification, object: nil, queue: nil) { _ in
            self.formatter = DateFormatter()
        }
    }

    public var shortMonthNames: [String] {
        return formatter.shortStandaloneMonthSymbols
    }

    public var longMonthNames: [String] {
        return formatter.standaloneMonthSymbols
    }

    public var numberOfMonths: Int {
        return shortMonthNames.count
    }
}

The advantage is that all initialization is clearly done in the init method. A small disadvantage might be that more typing is needed to access the properties (e.g. DateInfo.shared.shortMonthNames).
{ "domain": "codereview.stackexchange", "id": 34518, "tags": "swift, objective-c, static" }
Implement hash table using linear congruential probing in Python
Question: I have just read Chapter 5 of Data Structures and Algorithms with Python. The authors implemented hash sets using linear probing. However, linear probing may result in lots of clustering. So I decided to implement my hash table with a similar approach but using linear congruential probing instead. Below is my code:

from collections.abc import MutableMapping


def _probe_seq(key, list_len):
    """
    Generate the probing sequence of the key by the linear congruential
    generator:

        x = (5 * x + c) % list_len

    In order for the sequence to be a permutation of range(list_len),
    list_len must be a power of 2 and c must be odd. We choose to compute
    c by hashing str(key) prefixed with an underscore and

        c = (2 * hashed_string - 1) % list_len

    so that c is always odd. This way two colliding keys would likely
    (but not always) have different probing sequences.
    """
    x = hash(key) % list_len
    yield x
    hashed_string = hash('_' + str(key))
    c = (2 * hashed_string - 1) % list_len
    for _ in range(list_len - 1):
        x = (5 * x + c) % list_len
        yield x


class HashTable(MutableMapping):
    """A hash table using linear congruential probing as the collision
    resolution.

    Under the hood we use a private list self._items to store the items.
    We rehash the items to a larger list (resp. smaller list) every time
    the original list becomes too crowded (resp. too sparse). For probing
    to work properly, len(self._items) must always be a power of 2.
    """

    # _init_size must be a power of 2 and not too large, 8 is reasonable
    _init_size = 8

    # a placeholder for any deleted item
    _placeholder = object()

    def __init__(self, items=None):
        """
        :argument: items (iterable of tuples): an iterable of (key, value) pairs
        """
        self._items = [None] * HashTable._init_size
        self._len = 0
        if items is not None:
            for key, value in items:
                self[key] = value

    def __len__(self):
        """Return the number of items."""
        return self._len

    def __iter__(self):
        """Iterate over the keys."""
        for item in self._items:
            if item not in (None, HashTable._placeholder):
                yield item[0]

    def __getitem__(self, key):
        """Get the value corresponding to the key.

        Raise KeyError if no such key found.
        """
        probe = _probe_seq(key, len(self._items))
        idx = next(probe)
        # return the value if key found while probing self._items
        while self._items[idx] is not None:
            if (self._items[idx] is not HashTable._placeholder
                    and self._items[idx][0] == key):
                return self._items[idx][1]
            idx = next(probe)
        raise KeyError

    @staticmethod
    def _add(key, value, items):
        """Helper function for __setitem__ to probe the items list.

        Return False if found the key and True otherwise. In either case,
        set the value at the correct location.
        """
        loc = None
        probe = _probe_seq(key, len(items))
        idx = next(probe)
        while items[idx] is not None:
            # key found, set value at the same location
            if items[idx] is not HashTable._placeholder and items[idx][0] == key:
                items[idx] = (key, value)
                return False
            # remember the location of the first placeholder found during probing
            if loc is None and items[idx] is HashTable._placeholder:
                loc = idx
            idx = next(probe)
        # key not found, set the item at the location of the first placeholder
        # or at the location of None at the end of the probing sequence
        if loc is None:
            loc = idx
        items[loc] = (key, value)
        return True

    @staticmethod
    def _rehash(old_list, new_list):
        """Rehash the items from old_list to new_list."""
        for item in old_list:
            if item not in (None, HashTable._placeholder):
                HashTable._add(*item, new_list)
        return new_list

    def __setitem__(self, key, value):
        """Set self[key] to be value. Overwrite the old value if key found."""
        if HashTable._add(key, value, self._items):
            self._len += 1
            if self._len / len(self._items) > 0.75:
                # too crowded, rehash to a larger list
                # resizing factor is 2 so that the length remains a power of 2
                new_list = [None] * (len(self._items) * 2)
                self._items = HashTable._rehash(self._items, new_list)

    @staticmethod
    def _remove(key, items):
        """Helper function for __delitem__ to probe the items list.

        Return False if key not found. Otherwise, delete the item and
        return True. (Note that this is opposite to _add because for _add,
        returning True means an item has been added, while for _remove,
        returning True means an item has been removed.)
        """
        probe = _probe_seq(key, len(items))
        idx = next(probe)
        while items[idx] is not None:
            next_idx = next(probe)
            # key found, replace the item with the placeholder
            if items[idx] is not HashTable._placeholder and items[idx][0] == key:
                items[idx] = HashTable._placeholder
                return True
            idx = next_idx
        return False

    def __delitem__(self, key):
        """Delete self[key]. Raise KeyError if no such key found."""
        # key found, remove one item
        if HashTable._remove(key, self._items):
            self._len -= 1
            numerator = max(self._len, HashTable._init_size)
            if numerator / len(self._items) < 0.25:
                # too sparse, rehash to a smaller list
                # resizing factor is 1/2 so that the length remains a power of 2
                new_list = [None] * (len(self._items) // 2)
                self._items = HashTable._rehash(self._items, new_list)
        else:
            raise KeyError

I would like some feedback to improve my code. Thank you.

Reference: Data Structures and Algorithms with Python, Kent D. Lee and Steve Hubbard

Answer: Tests

Given something this low-level, as well as your claims that it solves specific clustering problems, you need to test it. The tests for something like this, thankfully, are relatively easy. You may also want to do some rough profiling to get an idea of how this scales in comparison to the built-in hash method.

Type hints

def __init__(self, items=None):

can probably be

HashableItems = Iterable[Tuple[Hashable, Any]]
# ...
def __init__(self, items: Optional[HashableItems] = None):

Class methods

_rehash and _remove should be @classmethod instead of @staticmethod, because they reference HashTable, which can be replaced with cls.
{ "domain": "codereview.stackexchange", "id": 38408, "tags": "python, algorithm, python-3.x, object-oriented, hash-map" }
Question regarding the sail attached to a boat.
Question: I came across a physics problem in which a boat is travelling 45 degrees south-east. The wind blows towards the north and exerts a constant force of 100 N on the sail. The issue I am having is that the initial kinetic energy is much less than the energy lost to the wind (force of wind × cos(45°) × distance travelled). Is it possible that the ship will cover the distance stated in the problem? Answer: I am going to make some general comments, expecting that they will lead to further clarification of the question. First, when a boat sails into the wind (as the one in this question does), it is important to realize that (all these are idealized statements... real sailing is a lot more complicated): a) the force of the wind is approximately normal to the sail; b) the sail approximately bisects the angle between the wind and the boat; c) the wind does not stop, but bends around the sail; d) there is a lateral force from the keel, which prevents the boat from drifting sideways; only the forward component of the wind force moves the boat. So with the boat at 45° to the wind, and the sail at 22.5° to the boat, the force pushing the boat forward is $$F = 100 \sin(22.5°) \approx 38\ \mathrm{N}$$ (I drew the boat as though the wind is coming from the top of the page, as that is how I have done it for years. You would have to call that "South" for your situation. I just can't bring myself to draw it the other way up. It doesn't change the concept. Sorry?) Now the drag of a boat is approximately quadratic in velocity, so there is a "sufficiently low" velocity at which the drag is less than this 38 N, and the boat arrives (eventually) at its destination. I realize this may be an oversimplification, so I invite you to ask questions to further clarify this.
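The closing argument (forward thrust versus quadratic drag) can be put in numbers; the drag constant k below is an arbitrary made-up value, since the real one depends on the hull:

```python
import math

F_wind = 100.0                                      # wind force on the sail, N
F_forward = F_wind * math.sin(math.radians(22.5))   # forward component, about 38 N

# Quadratic drag model: F_drag = k * v**2. The constant k is assumed
# purely for illustration; a real hull's value depends on its shape,
# wetted area, and so on.
k = 20.0                                            # N / (m/s)^2, assumed
v_terminal = math.sqrt(F_forward / k)               # speed where drag balances thrust
```

Below `v_terminal` the net force is forward, so the boat keeps accelerating toward that speed rather than being stopped by the drag, which is why it can cover the stated distance.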
{ "domain": "physics.stackexchange", "id": 21879, "tags": "homework-and-exercises, kinematics, energy-conservation" }
In a synchrotron, do electrons make periodic recoils?
Question: Synchrotron radiation happens because the circular motion of electrons produces a tangential acceleration, or something along those lines. The point is, photons are produced by these accelerated electrons. As far as I know, the emission of photons acts as a brake and slows electrons down. My question is this: given that photons are emitted in a quantized way, i.e. can only take certain energies, I would think that this is not a continuous process. Rather, these emissions would take place at a regular frequency (I assume this frequency is not the one of the emissions). Is this true? Do electrons emit photons one by one, with a distinct recoil each time? Answer: This is a more complicated question than it appears on its face. A free electron has a continuous energy spectrum. This means that the photons it emits are not constrained to quantized values like they would be for a bound system. Photons are unusual in that they don't have any mass. Formally, the number of photons emitted in bremsstrahlung radiation is infinite: an infinite number of very low energy photons that sum up to a finite energy. Since any method of detecting photons necessarily has an energy threshold, you can only ever detect a finite number of photons, but that exact number depends on how sensitive your detector is. That said, for a given threshold, yes, the electron appears to emit the photons one at a time, with a distinct recoil for each one. It's just that you can never be sure how many photons below your detection threshold you may have missed. And as a practical matter in a real synchrotron, you have so many electrons that you can ignore the recoils and trajectories of individual particles and deal with averages instead.
{ "domain": "physics.stackexchange", "id": 77433, "tags": "electromagnetic-radiation, photons, electrons, accelerator-physics" }
Correct nomenclature for reaction types
Question: What is the correct name for a reaction like this? $$ \ce{ 4MnO2 ->[500\ ^\circ \text{C}] 2Mn2O3 + O2 ^}$$ Is it a synthesis or a decomposition? I tried to find out with http://en.wikipedia.org/wiki/Chemical_reaction but I did not get the link between the formula and the nomenclature, as it seems to be both. Answer: This is a decomposition reaction. Synthesis requires a minimum of two reactants. Decomposition is where a reactant breaks down into two or more products. Here, one reactant (MnO2) goes to two products. It may seem like it is not a decomposition because MnO2 is becoming Mn2O3, but it is still one reactant going to two products.
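A quick way to convince yourself that the equation is balanced, and really is one reactant going to two products, is to count atoms on each side (a throwaway sketch, not chemistry software):

```python
from collections import Counter

def atoms(count, formula):
    """Scale a formula given as {element: subscript} by its coefficient."""
    return Counter({el: n * count for el, n in formula.items()})

# 4 MnO2 -> 2 Mn2O3 + O2: one reactant on the left, two products on the right
lhs = atoms(4, {'Mn': 1, 'O': 2})
rhs = atoms(2, {'Mn': 2, 'O': 3}) + atoms(1, {'O': 2})

assert lhs == rhs == Counter({'Mn': 4, 'O': 8})   # 4 Mn and 8 O on each side
```

Since the left-hand side contains a single species and the right-hand side two, the reaction fits the definition of a decomposition.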
{ "domain": "chemistry.stackexchange", "id": 368, "tags": "nomenclature, terminology" }
How can we say that the universe is expanding
Question: How can we say that the universe is expanding? If we are sure that energy is neither created nor destroyed, then where does this extra energy come from? Answer: The theory of the expanding universe comes from Hubble. Hubble studied galaxies and plotted their redshift vs. their estimated distance. From that he found that there seems to be a correlation between distance and redshift. And since redshift is caused by something flying away from us, stronger redshift means it's flying away faster. So the fact that the further away a galaxy is, the stronger its redshift, implies that the further away something is from Earth, the faster it flies away from us (that's where the Hubble constant comes from; it basically says how much the redshift increases if an object is a certain distance from Earth). From this the model formed that the reason for galaxies to be flying away from us faster, the further away they are, is that the universe is expanding. This model fits all of our current evidence and checks out with other implications as far as I know. There are, however, other explanations for why stuff that is further away is more redshifted. At the moment the "expanding universe" model is the most accepted one in the community of scientists.
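The correlation Hubble found is summarised by Hubble's law, $v = H_0 d$; a minimal sketch using the commonly quoted approximation $H_0 \approx 70$ km/s per megaparsec (the exact value is still debated, so this number is an assumption for illustration):

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (commonly quoted approximation)

def recession_velocity(distance_mpc):
    """Hubble's law v = H0 * d: recession velocity in km/s for a
    galaxy at the given distance in megaparsecs."""
    return H0 * distance_mpc

# The linearity is the whole point: a galaxy twice as far recedes twice as fast,
# which is what uniform expansion of space predicts.
assert recession_velocity(200) == 2 * recession_velocity(100)
```

So a galaxy 100 Mpc away recedes at roughly 7000 km/s under this approximation.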
{ "domain": "astronomy.stackexchange", "id": 2272, "tags": "universe, fate-of-universe" }
Pressure activated gel cooling pads for dogs: what are the physics behind them?
Question: I recently bought a pressure-activated gel cooling pad for my dog (comparable product: https://www.amazon.com/Hugs-Pet-Products-Pressure-Activated/dp/B00C65TXO6). The mat is filled with a gel that noticeably cools down when light pressure is applied. Now, in my understanding, compression should lead to an increase in temperature. Unfortunately I was not able to find any sources that explain the effect involved in pressure-induced cooling of the gel inside the cooling pad. I could also not find any further information about the gel itself that is inside the cooling mat. Can someone explain to me how such a cooling pad works and what the physics are behind it? I would also be happy if someone could point me towards any literature on this topic. Answer: As you might expect (well, if you have a background in chemistry or materials science anyway), here's the basic cycle: "The gel in a cooling pad is water-based with a polymer. The polymer is an activated chemical that changes properties when pressure-activated. When it's activated it pulls the heat from the body. It is based on endothermology, which means a chemical change that is accompanied by an absorption of heat. How does the polymer activate? When contact or pressure is removed and there is no longer heat to absorb, the pad re-activates. It typically takes 20-30 minutes depending upon the environment. ... To reactivate the cooling pad, simply remove it from pressure and let it sit in a cool place for approximately 30 minutes."
{ "domain": "physics.stackexchange", "id": 80805, "tags": "thermodynamics" }
Caesar Cipher implementation in Java
Question: I wanted to create a class that encrypts provided clear text using Caesar Cipher. Do you find my implementation clean? Any suggestions?

package biz.tugay.caesarcipher;

import java.util.Locale;

/* Encyrpts a clear text using Caeser Cipher (https://en.wikipedia.org/wiki/Caesar_cipher) with given shift amount.
   Provided shift amount (i.e. key) must be a positive integer less than 26.
   Only English alphabet is supported and encyrpted text will be in uppercase.
   Shift amount 0 will return the same clear text. */
public final class CaesarCipher {

    private final String clearText;
    private final int key;

    public CaesarCipher(final String clearText, final int key) {
        if (clearText == null) {
            throw new UnsupportedOperationException("Clear text to be encrypted can not be null!");
        }
        if (key < 0 || key > 26) {
            throw new UnsupportedOperationException("Key must be between 0 and 26");
        }
        this.clearText = clearText;
        this.key = key;
    }

    public String encryptText() {
        final StringBuilder cipherTextBuilder = new StringBuilder();
        final String clearTextUpperCase = clearText.toUpperCase(Locale.US);
        final char[] clearTextUpperCaseCharArray = clearTextUpperCase.toCharArray();
        for (final char c : clearTextUpperCaseCharArray) {
            if (c < 65 || c > 90) {
                // If the character is not between A .. Z, append white space.
                cipherTextBuilder.append(" ");
                continue;
            }
            final Character encryptedCharacter = encryptCharacter(c);
            cipherTextBuilder.append(encryptedCharacter);
        }
        return cipherTextBuilder.toString();
    }

    private Character encryptCharacter(final char c) {
        final int initialShift = c + key;
        final int finalShift;
        if (initialShift > 90) {
            // This is the case where we go beyond Z, we must cycle back to A.
            finalShift = (initialShift % 90) + 64;
        } else {
            // We are in the boundries so no need to cycle..
            finalShift = initialShift;
        }
        return (char) finalShift;
    }
}

and I have 2 simple tests:

package biz.tugay.caesarcipher;

import org.junit.Assert;
import org.junit.Test;

public class CaesarCipherTest {

    @Test
    public void shouldReturnBCDForClearTextABCAndKey1() {
        final String clearText = "abc";
        final CaesarCipher caesarCipher = new CaesarCipher(clearText, 1);
        final String encryptedText = caesarCipher.encryptText();
        Assert.assertTrue(encryptedText.equals("BCD"));
    }

    @Test
    public void shouldReturnAForZAndKey1() {
        final String clearText = "Z";
        final CaesarCipher caesarCipher = new CaesarCipher(clearText, 1);
        final String encryptedText = caesarCipher.encryptText();
        Assert.assertTrue(encryptedText.equals("A"));
    }
}

Answer:

package biz.tugay.caesarcipher;

import java.util.Locale;

/* Encyrpts a clear text using Caeser Cipher (https://en.wikipedia.org/wiki/Caesar_cipher) with given shift amount.

There is a typo in "Encrypts", and the URL should be a hyperlink, like <a href="https://...">Caesar Cipher</a>.

Provided shift amount (i.e. key) must be a positive integer less than 26.

Depending on who you ask, the word "positive" may exclude the 0. You should just say "must be between 0 and 25". But even more important: the documentation must match the code. Currently the code says <= 26.

Only English alphabet is supported and encyrpted text will be in uppercase.

The same typo as above. And, the Turkish lowercase dotless i (ı) is also "supported", although not intentionally.

Shift amount 0 will return the same clear text. */

public final class CaesarCipher {

    private final String clearText;

There is no reason that the Cipher class ever stores the clear text. Therefore this field should be replaced with a method parameter.

    private final int key;

    public CaesarCipher(final String clearText, final int key) {
        if (clearText == null) {
            throw new UnsupportedOperationException("Clear text to be encrypted can not be null!");
        }
        if (key < 0 || key > 26) {
            throw new UnsupportedOperationException("Key must be between 0 and 26");
        }
        this.clearText = clearText;
        this.key = key;
    }

    public String encryptText() {
        final StringBuilder cipherTextBuilder = new StringBuilder();
        final String clearTextUpperCase = clearText.toUpperCase(Locale.US);
        final char[] clearTextUpperCaseCharArray = clearTextUpperCase.toCharArray();
        for (final char c : clearTextUpperCaseCharArray) {
            if (c < 65 || c > 90) {
                // If the character is not between A .. Z, append white space.

This comment is redundant if you write c < 'A' || c > 'Z'.

                cipherTextBuilder.append(" ");
                continue;
            }
            final Character encryptedCharacter = encryptCharacter(c);
            cipherTextBuilder.append(encryptedCharacter);
        }
        return cipherTextBuilder.toString();
    }

    private Character encryptCharacter(final char c) {
        final int initialShift = c + key;
        final int finalShift;
        if (initialShift > 90) {
            // This is the case where we go beyond Z, we must cycle back to A.
            finalShift = (initialShift % 90) + 64;
        } else {

This can never happen since you already check the condition in the encryptText method.

            // We are in the boundries so no need to cycle..
            finalShift = initialShift;
        }
        return (char) finalShift;
    }
}

I would write the encryptCharacter method as follows and remove the bounds check from the encryptText method:

private char encryptCharacter(char c) {
    if ('A' <= c && c <= 'Z') {
        int position = c - 'A';
        int shiftedPosition = (position + key) % 26;
        return (char) ('A' + shiftedPosition);
    } else {
        return c; // Or ' ', as in your code
    }
}

The tests

package biz.tugay.caesarcipher;

import org.junit.Assert;
import org.junit.Test;

public class CaesarCipherTest {

    @Test
    public void shouldReturnBCDForClearTextABCAndKey1() {
        final String clearText = "abc";
        final CaesarCipher caesarCipher = new CaesarCipher(clearText, 1);
        final String encryptedText = caesarCipher.encryptText();
        Assert.assertTrue(encryptedText.equals("BCD"));
    }

I usually split each test into three paragraphs. The first prepares everything, the second does the interesting work, and the third asserts that the result is correct. When you follow this style, you can easily see which part of the code is worth stepping through with a debugger. Instead of assertTrue, you should call assertEquals("BCD", encryptedText), because when that assertion fails, the error message is much nicer, giving you the expected and the actual result.

    @Test
    public void shouldReturnAForZAndKey1() {
        final String clearText = "Z";
        final CaesarCipher caesarCipher = new CaesarCipher(clearText, 1);
        final String encryptedText = caesarCipher.encryptText();
        Assert.assertTrue(encryptedText.equals("A"));
    }
}

You forgot to test lowercase letters and non-alphabetic characters. And emojis, which by the way would result in two spaces per emoji.
{ "domain": "codereview.stackexchange", "id": 24797, "tags": "java, caesar-cipher" }
Curvature of a beam when subjected to an axial force
Question: How do you calculate the curvature of a beam due to any forces acting parallel to the beam? Intuitively, a beam in real life would bend perpendicular to the force to form an arc. How is this calculated? From my understanding, Euler-Bernoulli Beam theory cannot be used? For example, how do you solve for the equation of the massless beam in the diagram below, where the walls squash the inner beam to curve? Answer: You solve it as a slender column in compression, for which closed-form solutions are available. The failure mode is buckling (or outward bowing). Interestingly, the critical load for the onset of buckling depends on the stiffness of the beam and not its yield strength. This is because the buckling onset is defined as that point where the compressive load places the column into a state where a vanishingly small lateral perturbation will grow without bound and cause the column to suddenly bow outwards, and collapse in bending. Per Rob's comment, this is called Euler buckling. From Wikipedia (sorry, I do not know how to repair the formatting): When subjected to compressive forces it is possible for structural elements to deform significantly due to the destabilising effect of that load. The effect can be initiated or exacerbated by possible inaccuracies in manufacture or construction. The Euler buckling formula defines the axial compression force which will cause a strut (or column) to fail in buckling. $$F=\frac {\pi ^{2}EI}{(Kl)^{2}}$$ where $F$ = maximum or critical force (vertical load on column) $E$ = modulus of elasticity, $I$ = area moment of inertia, or second moment of area $l$ = unsupported length of column, $K$ = column effective length factor, whose value depends on the conditions of end support of the column, as follows. For both ends pinned (hinged, free to rotate), $K = 1.0$. For both ends fixed, $K = 0.50$. For one end fixed and the other end pinned, $K\approx 0.70$.
For one end fixed and the other end free to move laterally, $K = 2.0$. In the design of ultraminiature point probes for electrically testing extremely small circuitry, it is common to use gold-plated wires of fine gauge as probes. They are susceptible to buckling and for this reason are sometimes made of gold-plated tungsten wire instead of copper or (solid) gold, because Young's modulus for tungsten is a bit higher than that of the other metals. This furnishes greater stiffness for a given wire diameter and hence a more buckling-resistant probe.
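As a quick numeric sketch of the formula above (the material, diameter and length below are illustrative assumptions, not values taken from the answer):

```python
import math

def euler_critical_load(E, I, length, K):
    """Critical axial load F = pi^2 * E * I / (K * l)^2 for a slender column."""
    return math.pi ** 2 * E * I / (K * length) ** 2

# Hypothetical example: a 1 m steel rod of 10 mm diameter.
E = 200e9                  # Young's modulus of steel, Pa
d = 0.010                  # diameter, m
I = math.pi * d ** 4 / 64  # second moment of area of a circular cross-section
L = 1.0                    # unsupported length, m

F_pinned = euler_critical_load(E, I, L, K=1.0)  # both ends pinned
F_fixed = euler_critical_load(E, I, L, K=0.5)   # both ends fixed
```

Note how clamping both ends (K = 0.5) quadruples the critical load relative to the pinned-pinned case, since F scales as 1/K².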
{ "domain": "physics.stackexchange", "id": 46416, "tags": "stress-strain, structural-beam" }
'Advancing' basic models
Question: Good morning. I am a student running a project using medical data, predicting if the patient will or won't get a disease. The data has about 50k cases and 70 features. I proposed to train 5 models (SVM, KNN, LR, RF and a neural network) on this data, using cross-validation to optimize the hyperparameters and then report the best AUROC. However I've been told by my professor that using these models isn't 'advanced' enough and I need to do something at a higher level. First, I must admit I am not an expert at ML; I have run similar projects before successfully but nothing more advanced. I have also read a lot of literature on disease prediction using tabular data and I am struggling to find options that are more 'advanced' and yet still feasible for myself; I have only been learning computer science for 9 months so my abilities are limited. However, I do have about 5 weeks to develop these models so I know I can learn lots. Please can anybody suggest where I can start? I am really worried about failing the project based on the reaction of my professor to my proposal. Thanks in advance Answer: If some of your input features are categorical, you could consider a TabTransformer model (see paper and blog post). There is an official Keras implementation and tutorial here and a third-party Pytorch version on Github. It may count as advanced enough because it uses Transformers, while still having a relatively simple neural network architecture. There are other examples of structured data classification on the Tensorflow/Keras websites that you can use for inspiration or comparison/baselines, e.g. GBDTs. Another option is to search past Kaggle competitions that are similar to your project, see what kind of advanced models scored highly and learn from their code. Personally I think anybody who can use Sklearn can use Keras, and I don't have a computer science background.
Just look for a beginner-friendly tutorial that suits your needs, ideally one for the latest versions of TF/Keras from the official websites (they've changed quite a bit over the last few years).
{ "domain": "ai.stackexchange", "id": 3427, "tags": "python, scikit-learn" }
Why is there a λ⁴ in the spectral overlap integral in FRET calculations
Question: The Förster resonance energy transfer (FRET) spectral overlap integral looks like: $$ J=\int_{0}^{\infty}\bar{F}_{D}(λ)ε_{A}(λ)λ^4 dλ $$ where $λ$ is the wavelength, $\bar{F}_{D}(λ)$ is the normalized fluorescence emission spectrum of the donor (normalized to unity area), and $ε_{A}(λ)$ is the extinction coefficient ($M^{-1} \ cm^{-1}$). I am not very familiar with resonance energy transfer but was just curious why a $λ^4$ term would be contained in this spectral overlap calculation. The intuitive implication would be that the longer the wavelength, the larger the spectral overlap? See for example Eq. 12.1 on page 471 in FRET and FLIM Techniques (Theodorus W. J. Gadella, ed. 2008) or the corresponding $\omega^{-4}$ in the integrals of Eq. 14.30 of Sect. 14.5.1 in David L. Andrews' Chapter 14: Resonance Energy Transfer: Theoretical Foundations and Developing Applications.
Despite having worked in this field for a while, done my PhD in the general area, and even helped supervise a student whose Masters thesis gives a 5-page derivation of Foerster theory starting from page 30, I couldn't find where the $\lambda^4$ came from (the thesis presents the Foerster rate in a different way, which doesn't involve the $\lambda^4$, and I also looked at the reference in there to this paper on my friend Suggy Jang's generalization of Foerster-Dexter theory but it was done in the same spirit as my student's derivation, which unfortunately doesn't help us today with finding the source of the $\lambda^4$). I thought I was in luck, because my thorough studying of the field meant that I knew one of the best reviews on the topic (this PDF of a paper by Greg Scholes currently with 1238 citations on Google Scholar), but it turns out he literally just gives the integral on Pg. 62 with no explanation. The textbook by May and Kühn might have the desired explanation, but I couldn't find a copy online. Every single paper and/or article I found online with a title like "Introduction to Foerster Theory" or simply "Foerster Resonance Energy Transfer" gave me nothing (some examples of the better resources I found were this, this, some detailed discussions here, this very interesting article by someone who hung out with Foerster in real life entitled "Förster's resonance excitation transfer theory: not just a formula" but then disappointingly it did indeed give "just a formula" at Eq. 4. The original 1948 paper by Foerster in German (currently with 9994 citations on Google Scholar) doesn't seem to have anything resembling anything close to "the" desired equation. 
The only research paper by Foerster mentioned in his Wikipedia article doesn't have the equation either, and the famous 1953 paper (currently with 9977 citations on Google Scholar) by Dexter that earned him a spot in the name "Foerster-Dexter theory" doesn't have $\lambda^4$ or $1/\nu^4$ or $1/\omega^4$ anywhere, although now that I have the answer and double-checked all these articles to make sure I wasn't discrediting anyone, I do see that there's a $1/E^4$ in Eq. 16 but it's not as close to your equation as what I'm about to show you. The answer This 1965 "report" by Foerster actually tells us how he got the $1/\nu^4 = (\lambda /c)^4$ right here: Basically after 53 pages which largely set the reader up for it, he gets this expression for the transfer rate: In that equation, you can plug in the following: which will mean you'll get a factor of $c/\nu = \lambda$ in the numerator coming from the molar decadic excitation coefficient $\varepsilon(\nu)$ and a factor of $(c/\nu)^3 = \lambda^3$ coming from the normalized fluorescence quantum spectrum $f(\nu)$: the two combine together to form $\lambda^4$. Since Foerster tells us that these are "Einstein's well-known expressions" for $\varepsilon(\nu)$ and $f(\nu)$, I assume you don't need help knowing why those are proportional to $\nu$ and $\nu^3$ respectively... ... just kidding, explanations and more references about the Einstein coefficients can be found here and here and you'll see expressions proportional to $\nu$ and others proportional to $\nu^3$ in both of those resources.
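For what it's worth, the bookkeeping in Foerster's report can be sketched schematically as follows (my own shorthand, not Foerster's notation: $g_D$, $g_A$ are line-shape factors and $\mu_D$, $\mu_A$ the transition dipoles; all prefactors, the refractive index and normalization conventions are omitted):

```latex
% Absorption (Einstein B): the extinction coefficient carries one power of
% \nu relative to the acceptor dipole strength:
\varepsilon_A(\nu) \propto \nu\, |\mu_A|^2\, g_A(\nu)
% Emission (Einstein A): the normalized fluorescence spectrum carries \nu^3
% relative to the donor dipole strength:
f_D(\nu) \propto \nu^3\, |\mu_D|^2\, g_D(\nu)
% The transfer rate is built from the dipole strengths, so expressing it
% through the measured spectra divides by \nu and by \nu^3:
k_{DA} \propto \int |\mu_D|^2\, |\mu_A|^2\, g_D(\nu)\, g_A(\nu)\, d\nu
       \propto \int f_D(\nu)\, \varepsilon_A(\nu)\, \frac{d\nu}{\nu^4}
% Since 1/\nu^4 = (\lambda/c)^4, rewriting the overlap on a wavelength axis
% produces the \lambda^4 of the integral J quoted in the question.
```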
{ "domain": "biology.stackexchange", "id": 11877, "tags": "fluorescent-microscopy, spectroscopy, fret" }
Creating a Qiskit Circuit sending $|00\rangle$ to $|1,-\rangle$ and $|11\rangle$ to $|0,-\rangle$
Question: I am trying to create a circuit in Qiskit that performs the following transformations: starting in state |00⟩, it generates the state √(2)/2 * (-|10⟩+|11⟩); starting in state |11⟩, it generates the state √(2)/2 * (|00⟩-|01⟩). I created a basic circuit for creating entangled states but I do not know how to infer the relevant gates for such transformations. def create_circuit(): qr3 = qiskit.QuantumRegister(2) cr3 = qiskit.ClassicalRegister(2) qc3 = qiskit.QuantumCircuit(qr3, cr3) qc3.h(qr3[0]) # H qc3.cx(qr3[0], qr3[1]) # CNOT return qc3 Answer: Note that for the first state you have $$ \dfrac{|11 \rangle - |10\rangle}{\sqrt{2}} = -|1\rangle \otimes \dfrac{|0 \rangle - |1\rangle}{\sqrt{2}} $$ This is a product or separable state. Hence there is no entanglement and therefore you don't need a two-qubit gate. Recall the mappings $X|0\rangle =|1\rangle$ and $Z|1\rangle = -|1\rangle$ and $H|1\rangle = \dfrac{|0 \rangle - |1\rangle}{\sqrt{2}}$. So you are looking for something like the following circuit (diagram omitted). Note that the circuit is in little-endian convention (read from bottom to top). You should be able to get the second one now.
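As a numeric sanity check of these gate identities (plain Python rather than Qiskit, and using big-endian ordering |q₁q₀⟩ with amplitudes listed as |00⟩, |01⟩, |10⟩, |11⟩, unlike the little-endian circuit above):

```python
import math

# Single-qubit gates as 2x2 nested lists.
X = [[0.0, 1.0], [1.0, 0.0]]
Z = [[1.0, 0.0], [0.0, -1.0]]
s = 1.0 / math.sqrt(2.0)
H = [[s, s], [s, -s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(U, v):
    return [sum(U[i][k] * v[k] for k in range(2)) for i in range(2)]

def kron(a, b):
    # Tensor product of two 1-qubit states -> amplitudes [00, 01, 10, 11].
    return [a[i] * b[j] for i in range(2) for j in range(2)]

ket0 = [1.0, 0.0]

first = apply(matmul(Z, X), ket0)   # X then Z: |0> -> |1> -> -|1>
second = apply(matmul(H, X), ket0)  # X then H: |0> -> |1> -> (|0>-|1>)/sqrt2
state = kron(first, second)
# state is [0, 0, -1/sqrt2, +1/sqrt2], i.e. (-|10> + |11>)/sqrt2.
```

For the |11⟩ input, the same style of reasoning (X then Z on the first qubit, H alone on the second) reproduces the second target state.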
{ "domain": "quantumcomputing.stackexchange", "id": 3707, "tags": "qiskit, circuit-construction, textbook-and-exercises" }
How to calculate Signal-To-Noise Ratio
Question: I am doing some analysis in Python working with a photodiode. The signal is more or less periodic with some phase shifting in certain areas that gives rise to faster or slower periods, but overall it is still very periodic. I am trying to see how "good" our signal is by computing the signal-to-noise ratio (SNR), but so far I have only read from a book on DSP that SNR is defined as the mean divided by the standard deviation of the signal. Also the book defines the coefficient of variation (CV) as the standard deviation divided by the mean, multiplied by 100. They say a good signal has high SNR and low CV. Is this simply the definition of SNR or is it something that should be adjusted depending on the signal? How would I go about calculating SNR for my signal? Answer: If you can capture the input when it doesn't contain any useful signal (I mean only noise is present), you can first estimate the average noise power. Simply compute the power of such a signal: $P_n=1/N \cdot \displaystyle\sum_{k=0}^{N-1}|s(k)|^2$. Choose $N$ in the $2^{12}\ldots 2^{15}$ range, for example. Then you can measure the power of the $signal+noise$ mixture, $P_s$, in the same way, when the input signal is present. Caution: the measurement of $P_s$ is only valid if the signal is present for the entire measurement time. The SNR can then be calculated by: $SNR=10 \cdot \log_{10} \frac {P_s - P_n}{P_n}$. If your noise is stationary and white, it's possible to measure it once in the setup and use $P_n$ as a constant. If you can't measure the noise power for any reason, you can try to estimate the SNR in the frequency domain via the FFT. But you have to know your signal's band. The idea is the same, but you should use FFT frequency indexes instead of sample numbers $k$ while estimating the integral power. $P_s$ is now the power of $signal+noise$ in the band of interest, while $P_n$ is the noise power in the total band ($0 \ldots f_s$). In this case the power spectral density of the noise in your system should be evaluated first.
This method could be far too complex for your task. Both methods give meaningless results when the SNR in the signal band (not the total band in general) is low. The frequency-domain method will be preferable if your signal's band is much narrower than $f_s$; otherwise the time-domain method will be good enough. To estimate an SNR of the order of 5 dB or lower, it's better to know the signal and noise power (or power spectral density) a priori, so that the task has a robust analytical solution. Hope this helps.
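A minimal time-domain sketch of the first method in plain Python (the amplitude, frequency and sample count are illustrative assumptions, not values from the question):

```python
import math
import random

random.seed(42)
N = 1 << 14  # 2**14 samples, inside the 2**12 ... 2**15 range suggested above

# Noise-only capture: estimate the average noise power P_n.
noise_only = [random.gauss(0.0, 1.0) for _ in range(N)]
P_n = sum(v * v for v in noise_only) / N

# Signal + noise capture: a hypothetical sinusoid of amplitude 2
# (power A^2 / 2 = 2) in the same noise process.
A, f = 2.0, 0.01
mixture = [A * math.sin(2.0 * math.pi * f * k) + random.gauss(0.0, 1.0)
           for k in range(N)]
P_s = sum(v * v for v in mixture) / N

snr_db = 10.0 * math.log10((P_s - P_n) / P_n)
# The true SNR is 10*log10(2), about 3 dB; the estimate should land nearby.
```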
{ "domain": "dsp.stackexchange", "id": 2057, "tags": "signal-analysis, snr" }
Sign convention for writing heat released by or heat added to the system
Question: Suppose we have two objects A and B. $Q$ amount of heat flows from A to B. So, will it be correct to write the following? Heat released by A = $-Q$ Heat added to B = $Q$ I'm confused because we say this: Acceleration of car = $-5ms^{-2}$ Deceleration of car = $5ms^{-2}$ Do you understand my question? Answer: The first law of thermodynamics evaluates the change in internal energy (and kinetic and potential energy) of a system. Any energy added to a system that increases the internal energy of the system is considered positive, and any energy removed from the system that decreases the internal energy is considered negative. The first law considers the heat added to the system as positive, and the heat removed from the system as negative. The first law considers the work done on the system as positive and the work done by the system as negative. The first law considers mass flow into the system as positive and mass flow out of the system as negative. The heat added to the system is the negative of the heat lost by the surroundings. Similarly for a mass, a force that "increases motion" causes positive acceleration and a force that "retards motion" causes negative acceleration (deceleration). In your question, considering B as the system, the heat into B is +Q and the heat from A is -Q, as you say. In your question you have acceleration negative and deceleration positive, which is not correct considering the system as the mass affected by a force.
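For what it's worth, the sign bookkeeping can be written out explicitly (the 50 J figure is just an illustrative number, not from the question):

```python
Q = 50.0  # joules of heat flowing from A to B (illustrative value)

# Taking B as the system: heat added to the system is positive.
Q_B = +Q
# Taking A as the system: the same heat leaves A, so it is negative.
Q_A = -Q

# First law with no work and no mass flow: dU = Q for each system.
dU_A, dU_B = Q_A, Q_B
# The internal energy lost by A is exactly the energy gained by B.
```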
{ "domain": "physics.stackexchange", "id": 81764, "tags": "thermodynamics, conventions" }
Are all atoms spherically symmetric? If so, why are atoms with half-filled/filled sub-shells often quoted as 'especially' spherically symmetric?
Question: In my atomic physics notes they say In general, filled sub-shells are spherically symmetric and set up, to a good approximation, a central field. However, sources such as here say that, even for the case of a single electron in Hydrogen excited to the 2p sub-shell, the electron is really in a spherically symmetric superposition $\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$ (which I thought made sense since there should be no preferred direction in space). My question now is: why is the central field approximation only an approximation if all atoms are really perfectly spherically symmetric, and why are filled/half-filled sub-shells 'especially' spherically symmetric? Answer: In general, atoms need not be spherically symmetric. The source you've given is flat-out wrong. The wavefunction it mentions, $\varphi=\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$, is in no way spherically symmetric. This is easy to check: the wavefunction for the $2p_z$ orbital is $\psi_{2p_z}(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:z \:e^{-r/2a_{0}}$ (and similarly for $2p_x$ and $2p_y$), so the wavefunction of the combination is $$\varphi(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:\frac{x+y+z}{\sqrt 3} \:e^{-r/2a_{0}},$$ i.e., a $2p$ orbital oriented along the $(\hat{x}+\hat y+\hat z)/\sqrt3$ axis. This is an elementary fact and it can be verified at the level of an undergraduate text in quantum mechanics (and it was also obviously wrong in the 1960s). It is extremely alarming to see it published in an otherwise-reputable journal. On the other hand, there are some states of the hydrogen atom in the $2p$ shell which are spherically symmetric, if you allow for mixed states, i.e., a classical probabilistic mixture $\rho$ of hydrogen atoms prepared in the $2p_x$, $2p_y$ and $2p_z$ states with equal probabilities. It is important to emphasize that it is essential that the mixture be incoherent (i.e.
classical and probabilistic, as opposed to a quantum superposition) for the state to be spherically symmetric. As a general rule, if all you know is that you have "hydrogen in the $2p$ shell", then you do not have sufficient information to know whether it is in a spherically-symmetric or an anisotropic state. If that's all the information available, the initial presumption is to take a mixed state, but the next step is to look at how the state was prepared: The $2p$ shell can be prepared through isotropic processes, such as by excitation through collisions with a non-directional beam of electrons of the correct kinetic energy. In this case, the atom will be in a spherically-symmetric mixed state. On the other hand, it can also be prepared via anisotropic processes, such as photo-excitation with polarized light. In that case, the atom will be in an anisotropic state, and the direction of this anisotropy will be dictated by the process that produced it. It is extremely tempting to think (as discussed previously e.g. here, here and here, and links therein) that the spherical symmetry of the dynamics (of the nucleus-electron interactions) must imply spherical symmetry of the solutions, but this is obviously wrong $-$ to start with, it would apply equally well to the classical problem! The spherical symmetry implies that, for any anisotropic solution, there exist other, equivalent solutions with complementary anisotropies, but that's it. The hydrogen case is a bit special because the $2p$ shell is an excited state, and the ground state is symmetric. So, in that regard, it is valid to ask: what about the ground states of, say, atomic boron? If all you know is that you have atomic boron in gas phase in its ground state, then indeed you expect a spherically-symmetric mixed state, but this can still be polarized to align all the atoms into the same orientation. 
As a short quip: atoms can have nontrivial shapes, but the fact that we don't know which way those shapes are oriented does not make them spherically symmetric. So, given an atom (perhaps in a fixed excited state), what determines its shape? In short: its term symbol, which tells us its angular momentum characteristics, or, in other words, how it interacts with rotations. The only states with spherical symmetry are those with vanishing total angular momentum, $J=0$. If this is not the case, then there will be two or more states that are physically distinct and which can be related to each other by a rotation. It's important to note that this anisotropy could be in the spin state, such as with the $1s$ ground state of hydrogen. If you want to distinguish the states with isotropic vs anisotropic charge distributions, then you need to look at the total orbital angular momentum, $L$. The charge distribution will be spherically symmetric if and only if $L=0$. A good comprehensive source for term symbols of excited states is the Levels section of the NIST ASD.
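The contrast drawn above between the anisotropic superposition and the isotropic incoherent mixture can be checked numerically (a sketch with $a_0 = 1$ and all normalization constants dropped):

```python
import math

def envelope(x, y, z):
    # Common radial factor of the 2p densities: e^{-r/a0} with a0 = 1.
    r = math.sqrt(x * x + y * y + z * z)
    return math.exp(-r)

def coherent_density(x, y, z):
    # |(2p_x + 2p_y + 2p_z)/sqrt(3)|^2 ~ ((x + y + z)/sqrt(3))^2 e^{-r}
    return ((x + y + z) / math.sqrt(3)) ** 2 * envelope(x, y, z)

def mixed_density(x, y, z):
    # Equal-weight incoherent mixture:
    # (x^2 + y^2 + z^2)/3 * e^{-r} = (r^2 / 3) e^{-r}, a function of r only.
    return (x * x + y * y + z * z) / 3.0 * envelope(x, y, z)

# Two points at the same radius r = 1, along different directions.
p_x_axis = (1.0, 0.0, 0.0)
p_diagonal = (1.0 / math.sqrt(3),) * 3

# The coherent superposition is 3x denser along its own (1,1,1) axis ...
aniso_ratio = coherent_density(*p_diagonal) / coherent_density(*p_x_axis)
# ... while the incoherent mixture gives the same density at both points.
iso_ratio = mixed_density(*p_diagonal) / mixed_density(*p_x_axis)
```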
{ "domain": "physics.stackexchange", "id": 75713, "tags": "quantum-mechanics, angular-momentum, atomic-physics, atoms, orbitals" }
What is the meaning of undecidability in Rice Theorem?
Question: Rice's theorem says every non-trivial property of languages of Turing machines is undecidable. What is the meaning of undecidability here? Is it semi-decidable? As an example, the following language is r.e. but it corresponds to a non-trivial property: $A = \{x \mid \phi_x \text{ is defined for at least one input}\}$. This language is the complement of $EMPTY$ and it is clearly r.e. (m-complete, in fact), while the non-emptiness property is non-trivial. Answer: Undecidable means not decidable. Undecidable problems may or may not be semi-decidable. To see that an undecidable problem is not necessarily semi-decidable, observe that there are uncountably many problems but, since each decidable or semi-decidable problem corresponds to a Turing machine, there are only countably many decidable and semi-decidable problems.
{ "domain": "cs.stackexchange", "id": 4221, "tags": "terminology, computability, undecidability, semi-decidability, rice-theorem" }
Can AI be used for grading code copy exercises and adjust difficulty based on these scores?
Question: I'm a senior in a bachelor's programme in Multimedia and Creative Technology. My experience is mostly full-stack web app development. For my bachelor's thesis, I need to do research in a subject I have no experience in. I want to build an application where students can practice HTML and CSS. Teachers can upload simple code pieces (e.g. h1, h2, and a list with elements) with difficulty levels and students can try to copy these exercises with a code editor on the web with live preview. My question: Is it possible to use AI for grading these "copies", give the students scores, and, based on these scores, adjust the difficulty level so the next exercise is harder or easier? And if so, could you point me in the right direction? Answer: I might be wrong, but I would suggest you approach this problem more simply rather than using neural networks or other machine learning constructs. Machine learning is concerned with making a computer learn from a lot of data. You do not need a lot of data to score how well the student's code compares to the teacher's code. You also do not need recommender systems to suggest the next question. Recommender systems suggest by inferring the user's preferences, whereas in your case you can simply suggest the next question based on how well the student did on the current question and the type of the current question. I would first identify the following three subproblems: (1) scoring and assigning a difficulty score to a teacher's code piece; (2) scoring how similar the student's solution is to the true solution; (3) predicting the next question based on the current problem difficulty and the student's score for the current problem. For the first, you can develop some algorithms to do feature extraction from the HTML, CSS or whatever code piece you want to evaluate. Features can be the following: length of the code piece, number of tags, number of different tags, number of attributes and so on.
Combine them in a mathematical formula to calculate a difficulty score for the code piece. I would suggest a linear combination like the following: $Y = \sum{a_i X_i}$ where $X_i$ is the $i$th feature. Then normalize all the scores of all questions to a range of $[0, 100]$. You can even have an upper decision boundary such that whenever a certain score is reached, the normalized difficulty score will be $100$. For the second, you should start by checking if the student's code compiles, i.e. there are no syntax errors. Then, you can use the Levenshtein distance to calculate by how many characters the student's answer differs from the original solution. The goal of good training should be for the student to infer the exact tags and attributes, so the exact sequence of characters. Calculate the percentage of mistakes over the total length of the code piece and assign it to $x$. Use a mathematical formula of your liking to score how well the student did given $x$. I would suggest you consider the following formula: $e^{-0.1x}$. It gives a $100\%$ score for $0$ mistakes and about a $50\%$ score for roughly $7\%$ of mistakes (since $e^{-0.7} \approx 0.5$). Have a look at the graph at desmos.com. Similarly, you can construct a formula or recipe for choosing the difficulty of the next problem. It should be based on the current score and the current problem's difficulty. You can even force the student to repeat the question, or a question of similar difficulty, if the score is below $50\%$. If the score is above $50\%$, you can, for example, increase the difficulty level of the questions by 1 point every 3 correctly solved problems. Nevertheless, there are many ways you can approach this. Hope this helps. Good luck in your thesis.
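A minimal sketch of the second subproblem in Python (the reference/student HTML strings are hypothetical examples, and the compile/syntax-check step is omitted):

```python
import math

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

def score(student_code, reference_code):
    """Score = 100 * e^(-0.1 x), where x is the mistake percentage."""
    x = 100.0 * levenshtein(student_code, reference_code) \
        / max(len(reference_code), 1)
    return 100.0 * math.exp(-0.1 * x)

# Hypothetical exercise: the reference uses <em>, the student wrote <i>.
reference = "<p>Hello <em>world</em></p>"
student = "<p>Hello <i>world</i></p>"
```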
{ "domain": "ai.stackexchange", "id": 3112, "tags": "applications, research, algorithm-request, education" }
Phase response approaches towards the zeros
Question: Phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts an input and produces an output signal, such as a filter (https://en.wikipedia.org/wiki/Phase_response). For example, in this article (http://www.ijettcs.org/Volume1Issue3/IJETTCS-2012-10-23-080.pdf) a figure shows the phase response of the GA-designed filter approaching the zeros, and the paper concludes that the phase response of the GA design is therefore much better. Why would approaching the zeros mean that the phase response is better? Answer: I'm sorry to say that, but this is total nonsense. The paper you cite is bad; the authors don't know what they're talking about. All 3 methods discussed in the paper design linear-phase filters by the very formulation of the problem. So the phase responses of all three filters are perfectly linear, apart from phase jumps at the zeros of the transfer function (which are in the stopband, where the phase doesn't matter anyway). It's pointless to compare the phase responses of these filters since they're all linear by definition.
{ "domain": "dsp.stackexchange", "id": 5958, "tags": "filter-design, phase, finite-impulse-response, linear-phase" }
Merge sort: sorting and merging complexity $\Theta(n)$
Question: So this is the Master theorem recurrence for Merge Sort: $$ T(n) = 2T(n/2) + \Theta(n). $$ I am not able to understand why the time complexity for sorting and merging is $\Theta(n)$. Is sorting $O(1)$ and merging $O(n)$? Answer: Each call of merge sort consists of 2 phases: Merge sorting the first and the second half separately. Merging the two halves. So in your equation phase 1 is represented by $2T(n/2)$. This means that merge sort is called on the two halves. This is a recursive call, which is why $T$ is used here. Phase 2 is represented by $\Theta(n)$. Merging two lists of length $a$ and length $b$ takes $\Theta(a+b)$. In our case the two lists are both of size $\frac{1}{2}n$, so we get a total of $\Theta(n)$. In the end we get that $T(n) = $ phase 1 + phase 2, resulting in $T(n) = 2T(n/2) + \Theta(n)$.
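The two phases are easy to see in an instrumented implementation (a sketch; the merge_work counter is my own instrumentation, tallying the merge cost that the recurrence's $\Theta(n)$ term captures):

```python
def merge_sort(a, stats):
    """Merge sort that counts the elements written during merges,
    i.e. the Theta(n)-per-level cost in T(n) = 2T(n/2) + Theta(n)."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid], stats)    # phase 1: T(n/2)
    right = merge_sort(a[mid:], stats)   # phase 1: T(n/2)

    # Phase 2: merge two sorted halves in Theta(len(left) + len(right)).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    stats['merge_work'] += len(merged)
    return merged

stats = {'merge_work': 0}
data = [5, 3, 8, 1, 9, 2, 7, 4]
out = merge_sort(data, stats)
# 3 levels of merging for n = 8, each touching all 8 elements: 24 units total.
```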
{ "domain": "cs.stackexchange", "id": 13344, "tags": "algorithms, time-complexity, master-theorem, mergesort" }
Bit vector in Java supporting O(1) rank() and O(log n) select()
Question: Introduction I have this GitHub repository (version 1.0.0.). It implements a rank(i) operation in \$\Theta(1)\$ time, and a select(i) operation in \$\Theta(\log n)\$ time. (This post has a continuation post, version 1.0.1.) Code com.github.coderodde.util.RankSelectBitVector.java: /** * This class defines a packed bit vector that supports {@code rank()} operation * in {@code O(1)} time, and {@code select()} in {@code O(log n)} time. * * @version 1.0.0 * @since 1.0.0 */ public final class RankSelectBitVector { /** * Indicates whether some bits were changed since the previous building of * the index data structures. */ private boolean hasDirtyState = true; /** * The actual bit storage array. */ private final byte[] bytes; /** * The actual requested number of bits in this bit vector. Will be smaller * than the total capacity. */ private final int numberOfRequestedBits; /** * Denotes the index of the rightmost meaningful bit. Will be set to * {@code numberOfRequestedBits - 1}. */ private final int maximumBitIndex; /** * Caches the number of bits set to one (1). */ private int numberOfSetBits; /** * The block size in the {@code first} table. */ private int ell; /** * The block size in the {@code second} table. */ private int k; // The following three tables hold the index necessary for efficient rank // operation. According to the internet, this has space // O(sqrt(n) * log log n * log n). private int[] first; private int[] second; private int[][] third; /** * Constructs a new bit vector. * * @param numberOfRequestedBits the actual number of bits to support. */ public RankSelectBitVector(int numberOfRequestedBits) { checkNumberOfRequestedBits(numberOfRequestedBits); this.numberOfRequestedBits = numberOfRequestedBits; // Calculate the actual number of storage bytes: int numberOfBytes = numberOfRequestedBits / Byte.SIZE + (numberOfRequestedBits % Byte.SIZE != 0 ? 1 : 0); numberOfBytes++; // Padding tail byte in order to simplify the last // rank/select.
bytes = new byte[numberOfBytes]; // Set the rightmost, valid index: this.maximumBitIndex = this.bytes.length * Byte.SIZE - 1; } @Override public String toString() { StringBuilder sb = new StringBuilder().append("[Bit vector, size = "); sb.append(getNumberOfSupportedBits()) .append(" bits, data = "); int bitNumber = 0; for (int i = 0; i < getNumberOfSupportedBits(); i++) { sb.append(readBitImpl(i) ? "1" : "0"); bitNumber++; if (bitNumber % 8 == 0) { sb.append(" "); } } return sb.append("]").toString(); } /** * Preprocesses the internal data structures in {@code O(n)}. */ public void buildIndices() { if (hasDirtyState == false) { // Nothing to do. return; } //// Deal with the 'first'. // n - total number of bit slots: int n = bytes.length * Byte.SIZE; // elll - the l value: this.ell = (int) Math.pow(Math.ceil(log2(n) / 2.0), 2.0); this.first = new int[n / ell + 1]; for (int i = ell; i < n; i++) { if (i % ell == 0) { int firstArraySlotIndex = i / ell; int startIndex = i - ell; int endIndex = i - 1; first[firstArraySlotIndex] = first[firstArraySlotIndex - 1] + bruteForceRank(startIndex, endIndex); } } //// Deal with the 'second'. this.k = (int) Math.ceil(log2(n) / 2.0); this.second = new int[n / k + 1]; for (int i = k; i < n; i++) { if (i % k == 0) { second[i/k] = bruteForceRank(ell * (i / ell), i - 1); } } //// Deal with the 'third': four Russians' technique: this.third = new int[(int) Math.pow(2.0, k - 1)][]; for (int selectorIndex = 0; selectorIndex < third.length; selectorIndex++) { third[selectorIndex] = new int[k - 1]; // third[selectorIndex][0] is always zero (0). third[selectorIndex][0] = (bitIsSet(selectorIndex, k - 2) ? 1 : 0); for (int j = 1; j < k - 1; j++) { third[selectorIndex][j] = third[selectorIndex][j - 1] + (bitIsSet(selectorIndex, k - j - 2) ? 1 : 0); } } hasDirtyState = false; } /** * Returns the number of bits that are set (have value of one (1)). * * @return the number of set bits. 
*/ public int getNumberOfSetBits() { return numberOfSetBits; } /** * Returns the number of bits this bit vector supports. * * @return the number of bits supported. */ public int getNumberOfSupportedBits() { return numberOfRequestedBits; } /** * Sets the {@code index}th bit to one (1). * * @param index the index of the target bit. */ public void writeBitOn(int index) { writeBit(index, true); } /** * Sets the {@code index}th bit to zero (0). * * @param index the index of the target bit. */ public void writeBitOff(int index) { writeBit(index, false); } /** * Writes the {@code index}th bit to {@code on}. * * @param index the index of the target bit. * @param on the selector of the bit: if {@code true}, the bit will be * set to one, otherwise set zero. */ public void writeBit(int index, boolean on) { checkBitAccessIndex(index); writeBitImpl(index, on); } /** * Reads the {@code index}th bit where indexation starts from zero (0). * * @param index the bit index. * @return {@code true} if and only if the {@code index}th bit is set. */ public boolean readBit(int index) { checkBitAccessIndex(index); return readBitImpl(index); } /** * Returns the rank of {@code index}, i.e., the number of set bits in the * subvector {@code vector[1..index]}. Runs in {@code O((log n)^2)} time. * * @param index the target index. * @return the rank for the input target. */ public int rankFirst(int index) { checkBitIndexForRank(index); makeSureStateIsCompiled(); int startIndex = ell * (index / ell); int endIndex = index - 1; return first[index / ell] + bruteForceRank(startIndex, endIndex); } /** * Returns the {@code index}th rank. Runs in {@code O(log n)} time. * * @param index the target index. * @return the rank of the input index. 
*/ public int rankSecond(int index) { checkBitIndexForRank(index); makeSureStateIsCompiled(); int startIndex = k * (index / k); int endIndex = index - 1; return first[index / ell] + second[index / k] + bruteForceRank(startIndex, endIndex); } /** * Returns the {@code index}th rank. Runs in {@code O(1)} time. * * @param index the target index. * @return the rank of the input index. */ public int rankThird(int index) { checkBitIndexForRank(index); makeSureStateIsCompiled(); int f = first[index / ell]; int s = second[index / k]; int thirdEntryIndex = index % k - 1; if (thirdEntryIndex == -1) { return f + s; } int selectorIndex = extractBitVector(index) .toInteger(k - 1); return f + s + third[selectorIndex][thirdEntryIndex]; } /** * Returns the index of the {@code index}th 1-bit. Relies on * {@link #rankFirst(int)}, which runs in {@code O((log n)^2)}, which yields * {@code O((log n)^3)} running time for the {@code selectFirst}. * * @param bitIndex the target index. * @return the index of the {@code index}th 1-bit. */ public int selectFirst(int bitIndex) { checkBitIndexForSelect(bitIndex); return selectImplFirst(bitIndex, 0, getNumberOfSupportedBits()); } /** * Returns the index of the {@code index}th 1-bit. Relies on * {@link #rankSecond(int)}, which runs in {@code O(log n)}, which yields * {@code O((log n)^2)} running time for the {@code selectSecond}. * * @param bitIndex the target index. * @return the index of the {@code index}th 1-bit. */ public int selectSecond(int bitIndex) { checkBitIndexForSelect(bitIndex); return selectImplSecond(bitIndex, 0, getNumberOfSupportedBits()); } /** * Returns the index of the {@code index}th 1-bit. Relies on * {@link #rankThird(int)}, which runs in {@code O(1)}, which yields * {@code O(log n)} running time for the {@code selectThird}. * * @param bitIndex the target index. * @return the index of the {@code index}th 1-bit. 
*/ public int selectThird(int bitIndex) { checkBitIndexForSelect(bitIndex); return selectImplThird(bitIndex, 0, getNumberOfSupportedBits()); } private int selectImplFirst(int bitIndex, int rangeStartIndex, int rangeLength) { if (rangeLength == 1) { return rangeStartIndex; } int halfRangeLength = rangeLength / 2; int r = rankFirst(halfRangeLength + rangeStartIndex); if (r >= bitIndex) { return selectImplFirst(bitIndex, rangeStartIndex, halfRangeLength); } else { return selectImplFirst(bitIndex, rangeStartIndex + halfRangeLength, rangeLength - halfRangeLength); } } private int selectImplSecond(int bitIndex, int rangeStartIndex, int rangeLength) { if (rangeLength == 1) { return rangeStartIndex; } int halfRangeLength = rangeLength / 2; int r = rankSecond(halfRangeLength + rangeStartIndex); if (r >= bitIndex) { return selectImplSecond(bitIndex, rangeStartIndex, halfRangeLength); } else { return selectImplSecond(bitIndex, rangeStartIndex + halfRangeLength, rangeLength - halfRangeLength); } } private int selectImplThird(int bitIndex, int rangeStartIndex, int rangeLength) { if (rangeLength == 1) { return rangeStartIndex; } int halfRangeLength = rangeLength / 2; int r = rankThird(halfRangeLength + rangeStartIndex); if (r >= bitIndex) { return selectImplThird(bitIndex, rangeStartIndex, halfRangeLength); } else { return selectImplThird(bitIndex, rangeStartIndex + halfRangeLength, rangeLength - halfRangeLength); } } /** * The delegate for manipulating bits. * * @param index the index of the target bit. * @param on the flag deciding the value of the bit in question. */ private void writeBitImpl(int index, boolean on) { boolean previousBitValue = readBit(index); if (on) { if (previousBitValue == false) { hasDirtyState = true; numberOfSetBits++; } turnBitOn(index); } else { if (previousBitValue == true) { hasDirtyState = true; numberOfSetBits--; } turnBitOff(index); } } /** * Implements the actual reading of a bit. * * @param index the index of the target bit to read. 
* @return the value of the target bit. */ boolean readBitImpl(int index) { int byteIndex = index / Byte.SIZE; int targetByteBitIndex = index % Byte.SIZE; byte targetByte = bytes[byteIndex]; return (targetByte & (1 << targetByteBitIndex)) != 0; } /** * Makes sure that the state of the internal data structures is up to date. */ private void makeSureStateIsCompiled() { if (hasDirtyState) { buildIndices(); hasDirtyState = false; } } /** * Turns the {@code index}th bit on. Indexation is zero-based. * * @param index the target bit index. */ private void turnBitOn(int index) { int byteIndex = index / Byte.SIZE; int bitIndex = index % Byte.SIZE; byte mask = 1; mask <<= bitIndex; bytes[byteIndex] |= mask; } /** * Turns the {@code index}th bit off. Indexation is zero-based. * * @param index the target bit index. */ private void turnBitOff(int index) { int byteIndex = index / Byte.SIZE; int bitIndex = index % Byte.SIZE; byte mask = 1; mask <<= bitIndex; bytes[byteIndex] &= ~mask; } private void checkBitIndexForSelect(int selectionIndex) { if (selectionIndex < 0) { throw new IndexOutOfBoundsException( String.format( "The input selection index is negative: " + "(%d). Must be within range [1..%d].\n", selectionIndex, numberOfSetBits)); } if (selectionIndex == 0) { throw new IndexOutOfBoundsException( String.format( "The input selection index is zero (0). " + "Must be within range [1..%d].\n", numberOfSetBits)); } if (selectionIndex > numberOfSetBits) { throw new IndexOutOfBoundsException( String.format( "The input selection index is too large (%d). 
" + "Must be within range [1..%d].\n", selectionIndex, numberOfSetBits)); } } private void checkBitIndexForRank(int index) { if (index < 0) { throw new IndexOutOfBoundsException( String.format("Negative bit index: %d.", index)); } if (index > numberOfRequestedBits) { throw new IndexOutOfBoundsException( String.format( "Too large bit index (%d), number of bits " + "supported is %d.", index, numberOfRequestedBits)); } } private void checkBitAccessIndex(int accessIndex) { if (accessIndex < 0) { throw new IndexOutOfBoundsException( String.format( "Negative bit access index: %d.", accessIndex)); } if (accessIndex >= getNumberOfSupportedBits()) { throw new IndexOutOfBoundsException( String.format( "Too large bit access index (%d), number of bits " + "supported is %d.", accessIndex, getNumberOfSupportedBits())); } } /** * Returns {@code true} if and only if the {@code bitIndex}th bit in * {@code value} is set. * * @param value the value of which to inspect the bit. * @param bitIndex the bit index. * @return {@code true} if and only if the specified bit is set. 
*/ private boolean bitIsSet(int value, int bitIndex) { return (value & (1 << bitIndex)) != 0; } int toInteger(int numberOfBitsToRead) { int integer = 0; for (int i = 0; i < numberOfBitsToRead; i++) { boolean bit = readBitImpl(i); if (bit == true) { integer |= 1 << i; } } return integer; } private RankSelectBitVector extractBitVector(int i) { int startIndex = k * (i / k); int endIndex = Math.min(k * (i / k + 1) - 2, maximumBitIndex); int extractedBitVectorLength = endIndex - startIndex + 1; RankSelectBitVector extractedBitVector = new RankSelectBitVector(extractedBitVectorLength); for (int index = extractedBitVectorLength - 1, j = startIndex; j <= endIndex; j++, index--) { extractedBitVector.writeBitImpl(index, this.readBitImpl(j)); } return extractedBitVector; } private int bruteForceRank(int startIndex, int endIndex) { int rank = 0; for (int i = startIndex; i <= endIndex; i++) { if (readBitImpl(i)) { rank++; } } return rank; } private void checkNumberOfRequestedBits(int numberOfRequestedBits) { if (numberOfRequestedBits == 0) { throw new IllegalArgumentException("Requested zero (0) bits."); } if (numberOfRequestedBits < 0) { throw new IllegalArgumentException( String.format( "Requested negative number of bits (%d).\n", numberOfRequestedBits)); } } private static double log2(double v) { return Math.log(v) / Math.log(2.0); } } com.github.coderodde.util.RankSelectBitVectorBenchmark.java: package com.github.coderodde.util.benchmark; import com.github.coderodde.util.RankSelectBitVector; import java.util.Random; public final class RankSelectBitVectorBenchmark { /** * The number of bits in the benchmark bit vector. 
*/ private static final int BIT_VECTOR_LENGTH = 4_000_000; public static void main(String[] args) { long seed = parseSeed(args); System.out.printf("Seed = %d\n", seed); Random random = new Random(seed); long st = System.currentTimeMillis(); RankSelectBitVector rankSelectBitVector = createRandomBitVector(random); System.out.printf("Built the bit vector in %d milliseconds.\n", System.currentTimeMillis() - st); st = System.currentTimeMillis(); // st - start time. rankSelectBitVector.buildIndices(); System.out.printf("Preprocessed the bit vector in %d milliseconds.\n", System.currentTimeMillis() - st); System.out.println("--- Benchmarking rank operation ---"); benchmarkRanks(rankSelectBitVector); System.out.println("--- Benchmarking select operation ---"); benchmarkSelects(rankSelectBitVector); } private static RankSelectBitVector createRandomBitVector(Random random) { RankSelectBitVector rankSelectBitVector = new RankSelectBitVector(BIT_VECTOR_LENGTH); for (int bitIndex = 0; bitIndex != rankSelectBitVector.getNumberOfSupportedBits(); bitIndex++) { if (random.nextBoolean()) { rankSelectBitVector.writeBitOn(bitIndex); } } return rankSelectBitVector; } private static boolean rankArraysEqual(int[] rankArray1, int[] rankArray2) { if (rankArray1.length != rankArray2.length) { throw new IllegalArgumentException("Rank array length mismatch."); } int n = Math.max(rankArray1.length, rankArray2.length); for (int i = 0; i != n; i++) { int rank1 = rankArray1[i]; int rank2 = rankArray2[i]; if (rank1 != rank2) { System.err.printf( "ERROR: Mismatch at index = %d, " + "rank1 = %d, rank2 = %d.\n", i, rank1, rank2); return false; } } return true; } private static void benchmarkRanks(RankSelectBitVector rankSelectBitVector) { int numberOfBits = rankSelectBitVector.getNumberOfSupportedBits(); int[] answers1 = new int[numberOfBits]; int[] answers2 = new int[numberOfBits]; int[] answers3 = new int[numberOfBits]; long st = System.currentTimeMillis(); // st - start time. 
for (int i = 0; i != numberOfBits; i++) { answers1[i] = rankSelectBitVector.rankFirst(i); } long answersDuration1 = System.currentTimeMillis() - st; System.out.printf( "rankFirst() ran for %d milliseconds.\n", answersDuration1); st = System.currentTimeMillis(); for (int i = 0; i != numberOfBits; i++) { answers2[i] = rankSelectBitVector.rankSecond(i); } long answersDuration2 = System.currentTimeMillis() - st; System.out.printf( "rankSecond() ran for %d milliseconds.\n", answersDuration2); st = System.currentTimeMillis(); for (int i = 0; i != numberOfBits; i++) { answers3[i] = rankSelectBitVector.rankThird(i); } long answersDuration3 = System.currentTimeMillis() - st; System.out.printf( "rankThird() ran for %d milliseconds.\n", answersDuration3); if (!rankArraysEqual(answers1, answers2)) { System.err.println("Failed on rankFirst vs. rankSecond."); return; } if (!rankArraysEqual(answers1, answers3)) { System.err.println("Failed on rankFirst vs. rankThird."); } } private static void benchmarkSelects(RankSelectBitVector rankSelectBitVector) { int numberOfSetBits = rankSelectBitVector.getNumberOfSetBits(); int[] answers1 = new int[numberOfSetBits + 1]; int[] answers2 = new int[numberOfSetBits + 1]; int[] answers3 = new int[numberOfSetBits + 1]; long st = System.currentTimeMillis(); for (int i = 1; i <= numberOfSetBits; i++) { answers1[i] = rankSelectBitVector.selectFirst(i); } long answersDuration1 = System.currentTimeMillis() - st; System.out.printf( "selectFirst() ran for %d milliseconds.\n", answersDuration1); st = System.currentTimeMillis(); for (int i = 1; i <= numberOfSetBits; i++) { answers2[i] = rankSelectBitVector.selectSecond(i); } long answersDuration2 = System.currentTimeMillis() - st; System.out.printf( "selectSecond() ran for %d milliseconds.\n", answersDuration2); st = System.currentTimeMillis(); for (int i = 1; i <= numberOfSetBits; i++) { answers3[i] = rankSelectBitVector.selectThird(i); } long answersDuration3 = System.currentTimeMillis() - st; 
System.out.printf( "selectThird() ran for %d milliseconds.\n", answersDuration3); if (!rankArraysEqual(answers1, answers2)) { System.err.println("Failed on selectFirst vs. selectSecond."); return; } if (!rankArraysEqual(answers1, answers3)) { System.err.println("Failed on selectFirst vs. selectThird."); } } private static long parseSeed(String[] args) { if (args.length == 0) { return System.currentTimeMillis(); } try { return Long.parseLong(args[0]); } catch (NumberFormatException ex) { System.err.printf( "WARNING: Could not parse '%s' as an long value.", args[0]); return System.currentTimeMillis(); } } } com.github.coderodde.util.RankSelectBitVectorTest.java: package com.github.coderodde.util; import java.util.Random; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import org.junit.Test; public final class RankSelectBitVectorTest { @Test public void lastBitRank() { RankSelectBitVector bv = new RankSelectBitVector(8); bv.writeBitOn(2); bv.writeBitOn(6); bv.writeBitOn(7); assertEquals(3, bv.rankFirst(8)); assertEquals(3, bv.rankSecond(8)); assertEquals(3, bv.rankThird(8)); } @Test public void smallSelect() { RankSelectBitVector bv = new RankSelectBitVector(8); bv.writeBitOn(2); bv.writeBitOn(4); bv.writeBitOn(5); bv.writeBitOn(7); // 00101101 // select(1) = 2 // select(2) = 4 // select(3) = 5 // select(4) = 7 assertEquals(2, bv.selectFirst(1)); assertEquals(4, bv.selectFirst(2)); assertEquals(5, bv.selectFirst(3)); assertEquals(7, bv.selectFirst(4)); } @Test public void debugTest1() { // 00101101 RankSelectBitVector bv = new RankSelectBitVector(8); bv.writeBitOn(2); bv.writeBitOn(4); bv.writeBitOn(5); bv.writeBitOn(7); assertEquals(0, bv.rankThird(0)); assertEquals(0, bv.rankThird(1)); assertEquals(0, bv.rankThird(2)); assertEquals(1, bv.rankThird(3)); assertEquals(1, bv.rankThird(4)); assertEquals(2, bv.rankThird(5)); assertEquals(3, 
bv.rankThird(6)); assertEquals(3, bv.rankThird(7)); assertEquals(4, bv.rankThird(8)); assertEquals(2, bv.selectFirst(1)); assertEquals(4, bv.selectFirst(2)); assertEquals(5, bv.selectFirst(3)); assertEquals(7, bv.selectFirst(4)); } @Test public void debugTest2() { // 00101101 10101101 RankSelectBitVector bv = new RankSelectBitVector(16); bv.writeBitOn(2); bv.writeBitOn(4); bv.writeBitOn(5); bv.writeBitOn(7); bv.writeBitOn(8); bv.writeBitOn(10); bv.writeBitOn(12); bv.writeBitOn(13); bv.writeBitOn(15); assertEquals(0, bv.rankThird(0)); assertEquals(0, bv.rankThird(1)); assertEquals(0, bv.rankThird(2)); assertEquals(1, bv.rankThird(3)); assertEquals(1, bv.rankThird(4)); assertEquals(2, bv.rankThird(5)); assertEquals(3, bv.rankThird(6)); assertEquals(3, bv.rankThird(7)); assertEquals(4, bv.rankThird(8)); assertEquals(5, bv.rankThird(9)); assertEquals(5, bv.rankThird(10)); assertEquals(6, bv.rankThird(11)); assertEquals(6, bv.rankThird(12)); assertEquals(7, bv.rankThird(13)); assertEquals(8, bv.rankThird(14)); assertEquals(8, bv.rankThird(15)); assertEquals(9, bv.rankThird(16)); assertEquals(2, bv.selectFirst(1)); assertEquals(4, bv.selectFirst(2)); assertEquals(5, bv.selectFirst(3)); assertEquals(7, bv.selectFirst(4)); assertEquals(8, bv.selectFirst(5)); assertEquals(10, bv.selectFirst(6)); assertEquals(12, bv.selectFirst(7)); assertEquals(13, bv.selectFirst(8)); assertEquals(15, bv.selectFirst(9)); } @Test public void debugTest3() { // 00101101 10101101 00010010 RankSelectBitVector bv = new RankSelectBitVector(24); bv.writeBitOn(2); bv.writeBitOn(4); bv.writeBitOn(5); bv.writeBitOn(7); bv.writeBitOn(8); bv.writeBitOn(10); bv.writeBitOn(12); bv.writeBitOn(13); bv.writeBitOn(15); bv.writeBitOn(19); bv.writeBitOn(22); assertEquals(0, bv.rankThird(0)); assertEquals(0, bv.rankThird(1)); assertEquals(0, bv.rankThird(2)); assertEquals(1, bv.rankThird(3)); assertEquals(1, bv.rankThird(4)); assertEquals(2, bv.rankThird(5)); assertEquals(3, bv.rankThird(6)); assertEquals(3, 
bv.rankThird(7)); assertEquals(4, bv.rankThird(8)); assertEquals(5, bv.rankThird(9)); assertEquals(5, bv.rankThird(10)); assertEquals(6, bv.rankThird(11)); assertEquals(6, bv.rankThird(12)); assertEquals(7, bv.rankThird(13)); assertEquals(8, bv.rankThird(14)); assertEquals(8, bv.rankThird(15)); assertEquals(9, bv.rankThird(16)); // 00010010 assertEquals(9, bv.rankThird(17)); assertEquals(9, bv.rankThird(18)); assertEquals(9, bv.rankThird(19)); assertEquals(10, bv.rankThird(20)); assertEquals(10, bv.rankThird(21)); assertEquals(10, bv.rankThird(22)); assertEquals(11, bv.rankThird(23)); assertEquals(11, bv.rankThird(24)); // select(): assertEquals(2, bv.selectFirst(1)); assertEquals(4, bv.selectFirst(2)); assertEquals(5, bv.selectFirst(3)); assertEquals(7, bv.selectFirst(4)); assertEquals(8, bv.selectFirst(5)); assertEquals(10, bv.selectFirst(6)); assertEquals(12, bv.selectFirst(7)); assertEquals(13, bv.selectFirst(8)); assertEquals(15, bv.selectFirst(9)); assertEquals(19, bv.selectFirst(10)); assertEquals(22, bv.selectFirst(11)); } @Test public void bruteForceTest() { long seed = System.currentTimeMillis(); seed = 1706163778488L; Random random = new Random(seed); System.out.println("-- bruteForceTest, seed = " + seed); RankSelectBitVector bv = getRandomBitVector(random); BruteForceBitVector referenceBv = copy(bv); bv.buildIndices(); int numberOfOneBits = bv.rankThird(bv.getNumberOfSupportedBits()); for (int i = 0; i < bv.getNumberOfSupportedBits(); i++) { int actualRank = referenceBv.rank(i); int rank1 = bv.rankFirst(i); int rank2 = bv.rankSecond(i); int rank3 = bv.rankThird(i); int selectIndex = random.nextInt(numberOfOneBits) + 1; int actualSelect = referenceBv.select(selectIndex); int select1 = bv.selectFirst(selectIndex); if (select1 != actualSelect) { System.out.printf( "ERROR: i = %d, actualSelect = %d, select1 = %d.\n", i, actualSelect, select1); } if (rank3 != actualRank) { System.out.printf( "ERROR: i = %d, actual rank = %d, rank1 = %d, " + "rank2 = %d, 
rank3 = %d.\n", i, actualRank, rank1, rank2, rank3); } assertEquals(actualRank, rank1); assertEquals(actualRank, rank2); assertEquals(actualRank, rank3); assertEquals(actualSelect, select1); } } private static RankSelectBitVector getRandomBitVector(Random random) { RankSelectBitVector bv = new RankSelectBitVector(5973); for (int i = 0; i < bv.getNumberOfSupportedBits(); i++) { if (random.nextDouble() < 0.3) { bv.writeBitOn(i); } } return bv; } private static BruteForceBitVector copy(RankSelectBitVector bv) { BruteForceBitVector referenceBv = new BruteForceBitVector(bv.getNumberOfSupportedBits()); for (int i = 0; i < bv.getNumberOfSupportedBits(); i++) { if (bv.readBit(i)) { referenceBv.writeBitOn(i); } } return referenceBv; } @Test public void toInteger() { RankSelectBitVector bitVector = new RankSelectBitVector(31); assertEquals(0, bitVector.toInteger(20)); bitVector.writeBit(1, true); assertEquals(2, bitVector.toInteger(20)); bitVector.writeBit(2, true); assertEquals(6, bitVector.toInteger(20)); bitVector.writeBit(4, true); assertEquals(22, bitVector.toInteger(20)); } @Test public void readWriteBit() { RankSelectBitVector bitVector = new RankSelectBitVector(30); bitVector.writeBit(12, true); assertTrue(bitVector.readBit(12)); bitVector.writeBit(12, false); assertFalse(bitVector.readBit(12)); assertFalse(bitVector.readBit(13)); } // @Test public void bruteForceBitVectorSelect() { BruteForceBitVector bv = new BruteForceBitVector(8); bv.writeBitOn(2); bv.writeBitOn(4); bv.writeBitOn(6); bv.writeBitOn(7); assertEquals(2, bv.select(1)); assertEquals(4, bv.select(2)); assertEquals(6, bv.select(3)); assertEquals(7, bv.select(4)); } @Test public void countSetBits() { RankSelectBitVector bv = new RankSelectBitVector(11); assertEquals(0, bv.getNumberOfSetBits()); bv.writeBitOn(10); assertEquals(1, bv.getNumberOfSetBits()); bv.writeBitOn(5); assertEquals(2, bv.getNumberOfSetBits()); bv.writeBitOff(10); assertEquals(1, bv.getNumberOfSetBits()); bv.writeBitOff(5); 
assertEquals(0, bv.getNumberOfSetBits()); } } (The missing class BruteForceBitVector is here.) Typical benchmark demo output Seed = 1706175245835 Built the bit vector in 74 milliseconds. Preprocessed the bit vector in 117 milliseconds. --- Benchmarking rank operation --- rankFirst() ran for 623 milliseconds. rankSecond() ran for 110 milliseconds. rankThird() ran for 625 milliseconds. --- Benchmarking select operation --- selectFirst() ran for 7593 milliseconds. selectSecond() ran for 1399 milliseconds. selectThird() ran for 6006 milliseconds. Critique request I would like to hear about the following issues: Efficiency. rankThird(int) in particular. Unit tests. Did I cover all the possible corner cases? Anything else? Answer: Under this model you cannot do a bit-by-bit extractBitVector and preserve the desired time complexity. You should be able to extract a contiguous group of bits by masking out some bits and stitching together (a constant number of) parts (this would be cheaper if the backing storage were an array of words instead of bytes). Going beyond that set of arithmetic operations, you can use Long.bitCount to efficiently count up to 64 bits (this should compile to either a popcnt instruction or, at worst, a log(bits)-step bithack if popcnt is not available, so it's not equivalent to looping over the bits). You can use that in bruteForceRank. You can also use Integer.numberOfLeadingZeros to compute an integer base-2 logarithm without a scary floating point logarithm. Going even further beyond that set of operations, since Java 19 there is Long.expand, which you can use to implement the word-level select from A Fast x86 Implementation of Select (it's not really x86-specific): Transcribed into Java (not tested): static int PTSelect(long x, int j) { long i = 1L << j; long p = Long.expand(i, x); return Long.numberOfTrailingZeros(p); } That would reduce the amount of recursion that the full select needs to do.
{ "domain": "codereview.stackexchange", "id": 45419, "tags": "java, algorithm, bitwise, vectors, bitset" }
Is there a general method to implement a 'greater than' quantum circuit?
Question: I am interested in finding a circuit to implement the operation $f(x) > y$ for an arbitrary value of $y$. Below is the circuit I would like to build: I use the first three qubits to encode $|x⟩$, use the other three qubits to encode $|f(x) = x⟩$, and finally I want to filter out all of the solutions for which $f(x) \leq y$. So, if we set $y = 5$, the states would be: $$\Psi_{0} = |0⟩\otimes|0⟩ $$ $$\Psi_{1} = \frac{1}{\sqrt{8}}\sum_{i=0}^{7} (|i⟩\otimes|0⟩) $$ $$\Psi_{2} = \frac{1}{\sqrt{8}}\sum_{i=0}^{7} (|i⟩\otimes|i⟩) $$ $$\Psi_{3} = \frac{1}{\sqrt{2}}(|6⟩\otimes|6⟩ + |7⟩\otimes|7⟩)$$ Is there a general method to come up with such a filter, or is this non-trivial? Answer: What you are looking for, I think, is a quantum comparison circuit. These are built from adder circuits with a slight modification that turns them into comparators. For adders, you have for example one from Cuccaro et al. (this one gives the modification to adapt for comparison) and another from Himanshu et al.
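As a classical sanity check of the target state $\Psi_3$ (my own sketch, not a quantum circuit — on hardware the comparator constructions above do this job), one can start from $\Psi_2$, project onto the subspace where $f(x) = x > y$, and renormalise:

```python
import math

# Psi_2 = (1/sqrt(8)) * sum_x |x>|x> for 3-qubit registers, with y = 5
n, y = 3, 5
amps = {(x, x): 1 / math.sqrt(2**n) for x in range(2**n)}

kept = {s: a for s, a in amps.items() if s[1] > y}        # keep f(x) > y
norm = math.sqrt(sum(abs(a)**2 for a in kept.values()))   # renormalise
psi3 = {s: a / norm for s, a in kept.items()}

# matches Psi_3 = (|6>|6> + |7>|7>) / sqrt(2)
assert set(psi3) == {(6, 6), (7, 7)}
assert all(abs(a - 1 / math.sqrt(2)) < 1e-12 for a in psi3.values())
```

Note that on a real device this projection is not unitary; it corresponds to measuring a comparator's output qubit and post-selecting, or to amplitude amplification on the $f(x) > y$ subspace.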
{ "domain": "quantumcomputing.stackexchange", "id": 397, "tags": "quantum-algorithms, circuit-construction, gate-synthesis, mathematics" }
Can I solve this problem using Gay Lussac's law of pressure?
Question: This question came in the Khulna University admission exam 13-14 Q) When a tire is pumped at temperature $27^{\circ}C$, it bursts abruptly after its pressure becomes $2atm$. What is its final temperature $[\gamma=1.4]$? (a) $27^{\circ}C$ (b) $-30^{\circ}C$ (c) $-26.9^{\circ}C$ (d) $26.9^{\circ}C$ Third party question bank's attempt: $$T_1P_1^{\frac{1-\gamma}{\gamma}}=T_2P_2^{\frac{1-\gamma}{\gamma}}$$ $$300\cdot(2)^{\frac{1-1.4}{1.4}}=T_2\cdot(1)^{\frac{1-1.4}{1.4}}$$ $$T_2=246.1K=-26.9^{\circ}C$$ So, (c). My attempt: According to Gay-Lussac's law of pressure, $$\frac{P_1}{T_1}=\frac{P_2}{T_2}$$ $$\frac{2}{300}=\frac{1}{T_2}$$ $$T_2=150K=-123^{\circ}C$$ Why did I get the wrong answer in my attempt? Answer: The third-party answer assumes that the "abrupt burst" process is isentropic. When the tire bursts, the gas will expand (change its volume) until it reaches atmospheric pressure. Because this process is fast (i.e. abrupt), it is also assumed to be adiabatic (no heat is exchanged with the environment) and lossless (no friction, sound, etc.). Your answer (with Gay-Lussac's law) assumes that the transformation occurs at constant volume, which is clearly false; moreover, applying the first law of thermodynamics would show that heat is exchanged with the environment. The latter contradicts the assumption of a fast process.
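A quick numerical check of the two approaches (my own sketch; the values are taken from the problem statement):

```python
T1, P1, P2, gamma = 300.0, 2.0, 1.0, 1.4   # kelvin and atm

# isentropic expansion: T * P**((1 - gamma)/gamma) is constant
T2 = T1 * (P1 / P2) ** ((1 - gamma) / gamma)
assert abs(T2 - 246.1) < 0.1               # 246.1 K = -26.9 C, option (c)

# constant-volume (Gay-Lussac) relation would instead give 150 K = -123 C
assert abs(T1 * P2 / P1 - 150.0) < 1e-9
```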
{ "domain": "physics.stackexchange", "id": 89784, "tags": "homework-and-exercises, thermodynamics, ideal-gas, adiabatic" }
Median Filter one after another
Question: Operate on an image by performing Median Filtering in a 3x3 window. Operate on the resulting image by performing, again, Median Filtering in a 3x3 window. Can the resulting image be obtained from a single Median filtering? My initial thought is that it can be done with the right mask, maybe a median next to a median, but I'm not sure. Answer: OK, so here is the answer from my Prof. Hagit Hal-or: if there is such a mask, then it must be 5x5. A counterexample shows that this cannot be. Consider a 5x5 region of an image: we fill it with the values 0...0, 1, 2...2 (twelve 0's, a single 1, and twelve 2's). The 5x5 median on this region gives 1, regardless of where you place the numbers. Now we build the 5x5 region so that if we apply the median on the median we do NOT get 1. Set the following:
1 0 0 x x
0 0 0 x x
0 0 0 x x
x x x x x
x x x x x
where the x's are the rest of the numbers. The first pass of the median will set the top-left 3x3 to 0, so the 1 is "lost", and no placement of the remaining numbers will bring the 1 back. The median on all other regions will result in 0 or 2, so the second pass of the median sees only 0's and 2's and will NOT result in 1. Thanks all for helping.
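The counterexample can be run directly. Here is my own small demonstration (one concrete filling of the x's, using plain Python rather than an image library): two 3x3 median passes yield 0 at the centre, while the single 5x5 median over the same region is 1.

```python
def median3x3(img):
    """3x3 median over all fully interior pixels of a square image."""
    n = len(img)
    out = []
    for r in range(1, n - 1):
        row = []
        for c in range(1, n - 1):
            vals = sorted(img[rr][cc]
                          for rr in range(r - 1, r + 2)
                          for cc in range(c - 1, c + 2))
            row.append(vals[4])   # median of the 9 window values
        out.append(row)
    return out

# twelve 0's, a single isolated 1 in the corner, and twelve 2's
img = [
    [1, 0, 0, 2, 2],
    [0, 0, 0, 2, 2],
    [0, 0, 0, 2, 2],
    [0, 0, 2, 2, 2],
    [0, 0, 2, 2, 2],
]

assert sorted(v for row in img for v in row)[12] == 1   # single 5x5 median: 1
assert median3x3(median3x3(img))[0][0] == 0             # two 3x3 passes: 0
```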
{ "domain": "dsp.stackexchange", "id": 708, "tags": "image-processing, theory" }
Is a constant on the RHS of the equation of simple harmonic motion allowed?
Question: I read in a STEP booklet that we have to know how to bring a simple harmonic motion's equation to the form: $$\frac{\mathrm{d}^2x}{\mathrm{d}t^2} + \omega^2x= c$$ where $c$ is a constant. We also have to quote the solution. But shouldn't $c$ be zero always? (since $\omega^2=\frac{k}{m}$ and $k\cdot\text{displacement}/m = \text{acceleration}$) Could you also tell me what the solution I have to quote can be? Answer: The equation $$ \frac{d^2 x}{dt^2} + \omega^2 x = 0$$ is an example of a homogeneous second order linear differential equation, with a general solution of the form $$x = A\sin(\omega t) + B\cos(\omega t),$$ where $A$ and $B$ are constants to be determined from the boundary conditions of the problem. If you now put a constant on the right hand side of this equation, it becomes an inhomogeneous second order differential equation. The general solution to this will be the sum of the general solution you found for the homogeneous equation plus something that is normally called the particular solution $P(t)$, i.e. $$ x(t) = A\sin(\omega t) + B\cos(\omega t) + P(t)$$ Now, because we know that substituting the first two terms into the left hand side of the differential equation gives zero (because they are the general solution to the homogeneous equation), we can also say that $$ \frac{d^2 P}{dt^2} + \omega^2 P = c$$ For this to be true for all values of $t$, we see that $P$ must in fact also be a constant, so $P = c/\omega^2$. The final general solution would be $$ x(t) = A\sin(\omega t) + B\cos(\omega t) + \frac{c}{\omega^2}$$ Or (as JR has pointed out): $$ x(t) - \frac{c}{\omega^2} = A\sin(\omega t) + B\cos(\omega t)$$ You can confirm this works by substituting back into the original differential equation. The method I've used above is also well-suited to more complicated functions of time on the RHS, using an appropriate guess at the form of $P(t)$.
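One way to confirm the solution without substituting by hand is to integrate $x'' = c - \omega^2 x$ numerically and compare against the closed form. A minimal sketch (my own; the parameter values $\omega$, $c$, $A$, $B$ are arbitrary assumptions):

```python
import math

omega, c = 2.0, 3.0        # assumed equation parameters
A, B = 1.5, -0.5           # assumed integration constants

def x_closed(t):
    # homogeneous part plus the constant particular solution c / omega^2
    return A * math.sin(omega * t) + B * math.cos(omega * t) + c / omega**2

# integrate x'' = c - omega^2 * x from matching initial conditions
dt, steps = 1e-4, 10_000
x, v, t = x_closed(0.0), A * omega, 0.0   # x(0) and x'(0) = A * omega
for _ in range(steps):
    v += (c - omega**2 * x) * dt          # semi-implicit Euler step
    x += v * dt
    t += dt

assert abs(x - x_closed(t)) < 1e-2        # numerics agree with the closed form
```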
{ "domain": "physics.stackexchange", "id": 22450, "tags": "harmonic-oscillator" }
Partially persistent linked list data structure: would lookup of the first element at a specific version be O(|versions|) and not O(1)?
Question: I'm following course material from the course Advanced Data Structures. The result by Driscoll et al 1989 states the following (wording of the following theorem taken from lecture notes, page 4, which cites the original paper "Making data structures persistent") Any pointer-machine data structure with O(1) pointers to any node can be made partially persistent with O(1) amortized multiplicative overhead and O(1) space per change I understand the argument for why reads are O(1). The key idea is that during a write to n when a new node n' is created due to mod table overflow, we guarantee existence of a mod entry pointing to n' in any node x which has a field pointing to n. During a read, you can look at the appropriate mod entry in x which will point to either n or n' depending on the version, so you only have to look at O(mod table size) entries at every read. So in summary, the backpointer update guarantees efficient reads. However, in my program I start out with a pointer to the head of the linked list in a variable. With backpointer updates, it'll always point to the latest version. Which means that if I have to look up an older version of the list, I'll have to go through all mod tables of the first node of the linked list till I arrive at the correct version, making it O(|versions|) and not O(1). Is that correct? Answer: I think this is really a question about the API features. Access to an older version, in Driscoll et al.'s sense, requires a pointer to that version, not just a predicate used to traverse a version tree to locate the version. For a similar API/modelling question, you can look at the question of how long delete takes in a priority queue. Plenty of papers advertise that they offer $O(1)$ delete, but this assumes one already has a pointer to the item to be deleted.
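To make version-stamped reads concrete, here is a toy Python sketch (my own, illustrating the simpler fat-node variant with unbounded per-field logs, rather than the bounded mod tables of the node-copying scheme the theorem uses; all names are hypothetical):

```python
import bisect

class FatNode:
    """Fat-node persistence: each field keeps a version-stamped log."""

    def __init__(self, version, value, nxt=None):
        self.value_log = [(version, value)]  # kept sorted by version
        self.next_log = [(version, nxt)]

    @staticmethod
    def _read(log, version):
        # Latest entry whose stamp is <= the requested version
        stamps = [stamp for stamp, _ in log]
        i = bisect.bisect_right(stamps, version) - 1
        return log[i][1] if i >= 0 else None

    def value_at(self, version):
        return self._read(self.value_log, version)

    def next_at(self, version):
        return self._read(self.next_log, version)

# Version 0: the list [1, 2]; version 1: head's value changed to 99
tail = FatNode(0, 2)
head = FatNode(0, 1, tail)
head.value_log.append((1, 99))

print(head.value_at(0), head.value_at(1))  # → 1 99
```

Reads take a version number explicitly, which is exactly the "pointer to a version" the answer refers to: without it, you would indeed have to scan logs first just to locate the version.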
{ "domain": "cstheory.stackexchange", "id": 4762, "tags": "ds.data-structures" }
Fast phase calculation
Question: I need to detect the PSK modulation posted on my previous post (phase difference detection) with 8 to 16 phase constellations (depending on selected baud rate) over a max $60^\circ$ PSK phase range. For communication start and end the Tx generates carrier with $\Delta(\phi)= 0 \text{ or } -180^\circ$ to enable Rx to synchronize or detect end of communication. As speed of converging to the result is important to my application (theoretically real-time application) I was wondering if there is a fast algorithm with non- or low-iterative mechanism that can converge fast to the instantaneous and differential phase value (implementation with Verilog HDL). I think the $\sin(\Delta(\phi))$ value can be filtered out of I/Q component multiplication relatively fast, but it needs an LUT to save the $\sin(\Delta(\phi))$ values for all phase constellations. New ideas are always welcomed! Answer: Below summarizes efficient phase estimators for this application updated to include both a phase range of +/- 30 degrees and +/- 60 degrees. This is given in two parts, estimators for a real IF (intermediate frequency) signal, and estimators for a baseband complex signal. At the end are additional considerations related to acquisition. For an additional estimator provided by Richard Lyons, please see his answer at this other post. Efficient Phase Estimators for Real IF Signals Product detector: For real signals, a common phase estimator (detector) is a multiplier followed by a low pass. 
For this application where sensitivity is desired over $\pm 30$ degrees, the signals are nominally 90 degrees in phase, resulting in an estimate that is proportional to the sine of the phase between two signals: $$y(t,\phi) =A_1\cos(\omega_ct)A_2\sin(\omega_ct+\phi) = \frac{A_1 A_2\sin(\phi) + A_1A_2\sin(2\omega_ct+\phi)}{2} $$ which, when followed with a low pass filter to remove the time varying component, results in: $$y(\phi) = \text{LPF}[y(t,\phi)] =\frac{A_1A_2}{2}\sin(\phi) $$ Where $\text{LPF}[\cdot]$ is the time average provided by a low pass filter. As a demodulator, this can be implemented either in a coherent receiver where $A_1\cos(\omega_ct)$ is estimated during acquisition and provided as an NCO in a digital implementation (or VCO in an analog one), or in a non-coherent receiver (where the interest is in the phase difference between two successive symbols) where the demodulation is done by multiplying the received signal with a time delayed copy of itself, delayed by one symbol duration plus the duration of a quarter cycle of the carrier to convert $\cos$ to $\sin$ (when the IF carrier is sufficiently larger than the symbol rate): $$y(\phi_2-\phi_1) = \text{LPF}\bigg[A\cos(\omega_ct+\phi_1)A\cos(\omega_c(t-T_s-T_c)+\phi_2)\bigg]$$ Where $T_s$ is the symbol duration in seconds and $T_c = 1/(4f_c)$ is a quarter cycle of the IF carrier in seconds, with the IF carrier frequency as $f_c$ in Hz. Resulting in: $$y(\phi_2-\phi_1) = \text{LPF}\bigg[A\cos(\omega_ct+\phi_1)A\sin(\omega_c(t-T_s)+\phi_2)\bigg]$$ $$ y(\phi_2-\phi_1) = \frac{A^2}{2}\sin(\phi_2-\phi_1)$$ $$ y(\Delta\phi) = \frac{A^2}{2}\sin(\Delta\phi)$$ A very efficient way of implementing either of the above approaches digitally is by hard limiting the input signal, which reduces the above to a simple XOR of the most significant bit of the waveform. For accuracy, this requires ensuring the inputs to the XOR operation have a 50% duty cycle, but the result is linearly proportional to phase! 
It is usable over a $\pm 90°$ range with a linear phase result. Further, hard limiting a phase modulated waveform provides a 3 dB SNR improvement in positive SNR conditions (since all AM noise is removed), but can be more susceptible to jamming and interference (3 dB loss in negative SNR conditions). This is an approach to be considered due to the simplicity and high phase linearity. As above, the XOR phase detector could be used in a coherent receiver where the NCO is also simplified to a 1 bit output (basically the MSB of a counter and you increment the count rate to adjust the frequency as part of a carrier tracking loop), or non-coherently where the MSB of the received signal is XOR'd with a delayed copy. As with the multiplier, the two input signals would be in quadrature to center the detector over its unambiguous range. Efficient Phase Estimators for Complex Baseband Signals Given a generic signal as $Ae^{j\phi}= I + jQ$, the actual phase is given by $\phi =\tan^{-1}(Q/I)$ or $\phi =\sin^{-1}(Q/A)$. The following is a summary of efficient phase demodulation approximations for a variation over $\pm30$ degrees and $\pm60$ degrees, assuming carrier recovery and timing is established during the $0 / 180$ acquisition period. Initial thoughts on approaches to efficient acquisition are also included at the bottom of this post. Summary of Results Below is a table summarizing the peak and rms phase error for various estimators. Estimators that were included in earlier versions of this post that offer no advantage over those listed below have been removed. As Ben suggests in the comments, the Q/A estimators are attractive for FPGA implementation since A is assumed to be constant over the duration of the packet. 
Plots showing the relative performance are included below: Detailed Descriptions The estimators that are scaled by the envelope magnitude $A$ (Q/A, Q/A Juha and Q/Est(A)) are preferred since $A$ can be readily determined during acquisition of the 0/180 signal, and only needs to be determined once for relatively short packets, or is a parameter from the AGC otherwise. In a constant envelope phase modulated signal such as this, the received signal can be simply hard limited if there isn't a concern with potential 3dB loss with stronger out of band interference (or the complete loss from hard-limiting in the presence of a coherent jammer). Further, there is no need to actually divide by $A$, assuming $A$ is maintained to be constant over the packet duration the result will be linearly proportional to the phase and the decision thresholds can be set accordingly. Q/A $$\phi =\sin^{-1}\bigg(\frac{Q}{A}\bigg)$$ $$\frac{Q}{A} = sin(\phi)$$ for small $\phi$, $sin(\phi) \approx \phi$ for $\phi$ in radians: $$\phi \approx \frac{Q}{A}$$ Q/A Juha Similar to @JuhaP's suggestion in the comments of removing the linear slope error for the $Q/I$ estimator, here applied to the Q/A Estimator. The coefficient is found from the linear portion of the remaining terms in the Taylor Series expansion that weren't used, minimizing the error: For ±30° Operation: $$\phi \approx 1.0475\frac{Q}{A}$$ For ±60° Operation: $$\phi \approx 1.150\frac{Q}{A}$$ Q/Est(A) A fast and very efficient approach for estimating magnitude is the $\alpha$ max plus $\beta$ min algorithm where the maximum between $|I|$ and $|Q|$ scaled by coefficient $\alpha$ is added to the minimum scaled by coefficient $\beta$. At 30° range, $Q$ would always be the minimum and $I$ always positive so this would simplify to $\alpha I + \beta|Q|$. 
A common choice for FPGA implementation is $\alpha = 1$ and $\beta =1/2$ since this minimizes the error over all phases with bit shift divisions, but in this case $\alpha = 1$ and $\beta =1/4$ is a better choice given the narrowed phase range of $±30°$. If multipliers were acceptable, the optimized coefficients are $\alpha = 0.961$ and $\beta =0.239$. The plot below summarizes the two choices: $$\phi \approx \frac{Q}{\alpha I + \beta |Q|}$$ option 1: $\alpha =1$, $\beta = 0.25$ option 2: $\alpha =0.961$, $\beta = 0.239$ Also not plotted below but shown above is the option optimized for use over ±60°: $\alpha =0.85$, $\beta = 0.45$ Note these are not optimized for the estimate of $A$, but to minimize the phase estimation error. Q/I Phase Approximation $$\phi =\tan^{-1}\bigg(\frac{Q}{I}\bigg)$$ $$\frac{Q}{I} = \tan(\phi)$$ for small $\phi$, $\tan(\phi) \approx \phi$ for $\phi$ in radians: $$\phi \approx \frac{Q}{I}$$ As @JuhaP mentioned in the comment, the linear slope component of the error could be removed by multiplying by 0.9289 (this one is labeled Q/A JuhaP in the plot). The coefficient below is slightly different from his suggestion but minimizes the error, as it was found from the linear portion of the remaining terms in the Taylor Series expansion that weren't used, rather than his approach of a first order polynomial fit to arctan: $$\phi \approx 0.9289\frac{Q}{I}$$ Taylor Series Phase Approximations The first terms are the Q/A and Q/I approximations covered above for $\sin^{-1}$ and $\tan^{-1}$ respectively. Going beyond that is NOT recommended if efficiency is paramount but included for accuracy comparison. arcsin $$\sin^{-1}(x) = \sum_{n=0}^\infty \frac{(2n)!}{2^{2n}(n!)^2}\frac{x^{2n+1}}{2n+1} \text{ for } |x|\le1$$ $$\sin^{-1}\bigg(\frac{Q}{A}\bigg)= \frac{Q}{A} +\frac{1}{6} \bigg(\frac{Q}{A}\bigg)^3 +\frac{3}{40}\bigg(\frac{Q}{A}\bigg)^5 ... 
\text{ for } |Q/A|\le1$$ arctan $$\tan^{-1}(x) = \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{2n+1} \text{ for } |x|\le1$$ $$\tan^{-1}\bigg(\frac{Q}{I}\bigg) = \frac{Q}{I} -\frac{1}{3} \bigg(\frac{Q}{I}\bigg)^3 +\frac{1}{5}\bigg(\frac{Q}{I}\bigg)^5 ... \text{ for } |Q/I|\le1$$ Using the first two terms for each results in: $$\phi \approx \frac{Q}{A} +\frac{1}{6} \bigg(\frac{Q}{A}\bigg)^3$$ $$\phi \approx \frac{Q}{I} -\frac{1}{3} \bigg(\frac{Q}{I}\bigg)^3$$ A linear slope can also be removed from either of these with a gain constant multiplication, as was done with the Q/A and Q/I estimators, to further minimize the error. Other Estimators Juha Squared @JuhaP offered this interesting estimator in the comments. Not very efficient but highly accurate with square terms: $$\phi \approx \frac{3QI}{Q^2 + 3I^2}$$ Acquisition Efficient Acquisition for 0/180 preamble: One idea that comes to mind for acquisition during the $0/180$ transitions is to use $\text{sign}(I_2)Q_1-\text{sign}(I_1)Q_2$ to get the change in phase between two symbols, which can then be corrected in a fast-converging and simple loop by derotating the incoming signal. This approach would work well if the frequency offset is such that the phase does not rotate more than $\pm \pi/2$ between successive symbols; otherwise a coarse FLL can be used first to get the offset within this acquisition range. For a coherent receiver approach, a PLL would be used to lock/track an NCO or VCO to the carrier, and by squaring the received signal a reference tone at twice the carrier can be tracked for all modulations presented here (both the bi-phase acquisition interval and the 30 degree phase modulation when doubled will produce a distinct tone at 2x the carrier). Similarly a Costas Loop would track both signals while providing the reference signal that is nominally 90 degrees in phase with the carrier, thus providing both carrier recovery and phase demodulation. 
Sources: Taylor Series Expansions for arcsin and arctan: https://proofwiki.org/wiki/Book:Murray_R._Spiegel/Mathematical_Handbook_of_Formulas_and_Tables
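As a numerical illustration (my own sketch, not part of the answer; it assumes unit amplitude $A=1$ and noiseless I/Q samples), the peak error of the simplest estimators can be checked over the $\pm30°$ range:

```python
import math

def qa_est(phi, k=1.0):
    # Q/A estimator: phi_hat ≈ k * Q / A, with A = 1 here
    return k * math.sin(phi)

def alpha_max_beta_min(i, q, alpha=1.0, beta=0.25):
    # Efficient magnitude estimate: alpha*max(|I|,|Q|) + beta*min(|I|,|Q|)
    a, b = abs(i), abs(q)
    return alpha * max(a, b) + beta * min(a, b)

def q_over_est_a(phi):
    # Q / Est(A) estimator using the alpha-max-beta-min magnitude
    i, q = math.cos(phi), math.sin(phi)
    return q / alpha_max_beta_min(i, q)

phis = [math.radians(d) for d in range(-30, 31)]
for name, est in [("Q/A", qa_est),
                  ("Q/A Juha (k=1.0475)", lambda p: qa_est(p, 1.0475)),
                  ("Q/Est(A)", q_over_est_a)]:
    err = max(abs(math.degrees(est(p) - p)) for p in phis)
    print(f"{name}: peak error {err:.2f} deg")
```

The corrected-gain estimator trades a little mid-range error for a much smaller error at the band edge, which is what pulls its peak error below the plain Q/A estimator.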
{ "domain": "dsp.stackexchange", "id": 8676, "tags": "fft, phase, quadrature" }
Detecting if resistances are parallel or series in complex circuits
Question: I know how to detect when resistors are arranged in a parallel or series arrangement and I can also find their equivalent resistance in simple circuits, or when resistances are connected in the form of a triangle, but what happens when the arrangement is complex like this: Which resistors are parallel and which are in series? How can I find the equivalent resistance in such cases? Is there a rule or method for figuring this out? Answer: Alfred got in before me, but I have a diagram! I've marked all continuous bits of wire in the same colour, and marked the corresponding colours on the ends of the resistors. A quick redraw later and I get: which is a lot simpler!
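When redrawing by eye becomes impractical, a general fallback (my own sketch, not part of the answer) is nodal analysis: build the conductance matrix, inject 1 A between the two terminals, and the voltage that appears across them is the equivalent resistance. A minimal pure-Python version:

```python
def equivalent_resistance(num_nodes, edges, a, b):
    """Nodal analysis: inject 1 A at node a, extract it at grounded node b.

    edges: list of (u, v, resistance). Returns the voltage at a = R_eq.
    """
    n = num_nodes
    # Build the conductance (Laplacian) matrix
    G = [[0.0] * n for _ in range(n)]
    for u, v, r in edges:
        g = 1.0 / r
        G[u][u] += g
        G[v][v] += g
        G[u][v] -= g
        G[v][u] -= g
    I = [0.0] * n
    I[a] = 1.0
    # Ground node b: replace its equation with v[b] = 0
    G[b] = [1.0 if j == b else 0.0 for j in range(n)]
    I[b] = 0.0
    # Plain Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r_: abs(G[r_][col]))
        G[col], G[piv] = G[piv], G[col]
        I[col], I[piv] = I[piv], I[col]
        for row in range(col + 1, n):
            f = G[row][col] / G[col][col]
            for j in range(col, n):
                G[row][j] -= f * G[col][j]
            I[row] -= f * I[col]
    v = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = I[row] - sum(G[row][j] * v[j] for j in range(row + 1, n))
        v[row] = s / G[row][row]
    return v[a]

# Two 2-ohm resistors in parallel (1 ohm) in series with a 1-ohm resistor
edges = [(0, 1, 2.0), (0, 1, 2.0), (1, 2, 1.0)]
print(equivalent_resistance(3, edges, 0, 2))  # → 2.0
```

Here two 2 Ω resistors in parallel (1 Ω) in series with a 1 Ω resistor give 2 Ω, matching the hand reduction; the same function handles bridge networks that have no series/parallel decomposition at all.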
{ "domain": "physics.stackexchange", "id": 91600, "tags": "electric-circuits, electrical-resistance" }
Height of AVL Tree
Question: I found an AVL tree implementation on the internet and experimented: For a tree with node count of 2^20, the minimal and maximal tree heights are 16 and 24. While these heights are lg(n)-ish, I am concerned about their difference. Doesn't AVL guarantee maximal-minimal height difference to be 1? The repository from which I got the code Answer: The AVL invariant does not guarantee that, given any two tree paths, their length differs at most by one unit. They can differ by more than one unit, as shown by the following tree (Fibonacci AVL tree from Wikipedia): Instead, the AVL invariant only requires that the heights of the two subtrees of any nodes differ at most by one. Note that the height of a (sub)tree is the maximum length of its paths. Therefore, the AVL invariant is equivalent to requiring that, for all nodes, the maximum length of a path in the left subtree and the maximum length of a path in the right subtree differ at most by one unit. Note how we only take the paths giving the maximum length, and never the paths giving the minimum length.
{ "domain": "cs.stackexchange", "id": 19422, "tags": "search-trees, balanced-search-trees, binary-search-trees, avl-trees" }
Protecting elemental Li from the atmosphere
Question: I would like to make a small decorative object out of lithium, and keep it in an ordinary household (indoors) atmosphere. The making itself aside, what can I do to protect it long-term from reacting with the atmospheric water and gases? Is there something that could react with it to form a protective outer layer, like oxides do on some other metals? Is it perhaps possible to electroplate it with Mg or Al? Are there other solutions? To be clear, I don't want to just keep it submerged in oil; I want it to stand on a shelf and hopefully even get handled occasionally by curious hands. I have no access to advanced equipment or supplies - I'm only armed with a high-school level of chemical understanding, access to ordinary hardware/paint shops, and patience. I do understand that the question is unusual and has little broad practical utility, but it's still a proper chemistry problem to which I would like to see a proper chemistry solution (or an exhaustive explanation why a search for a solution is futile). Please don't answer along the lines of "your idea is stupid and you should feel stupid". Answer: Interesting question! (Though personally I've no idea why you want to make a decorative article out of lithium, but I guess it's best not to ask) You did consider using the process we know as electroplating; yes, it's one of the common, commercial methods of 'passivating' (rendering inert) a lot of metals that are prone to corrosion, like iron for example (Galvanization). However, electroplating is something that could never work out with lithium; here's why... (The above example, courtesy Google Images, suggests the use of copper as the electroplating metal, but since I can't seem to find a more generalized diagram, we'll just go with this) A quick crash course on the set-up for electroplating: Your cathode is the metal to be electroplated, while your anode is the metal (copper here) that is used to electroplate whatever you've hooked up as the cathode. 
Both the anode and cathode are immersed in an aqueous solution of a salt of the metal used as the anode (here, it's copper sulfate). Clear with that? Good, now here's the catch... You need to put lithium in an aqueous solution, that is, a solution that uses water as the solvent. In your question you voice concerns over lithium's reactivity towards stuff you typically come across in your surroundings (oxygen and carbon dioxide in the air, moisture/humidity, etc), so I take it that you are well aware of what happens when you dump a large, fresh piece of lithium in water... So yeah, don't even think about electroplating this. The making itself aside, what can I do to protect it long-term from reacting with the atmospheric water and gases? Well, there are a couple of other methods you could use: 1) Coating it in wax. Coating the lithium in a thin, uniform layer of wax (candle wax, beeswax, furniture wax or any sort of wax that you can easily get your hands on, and remains a solid at room temperatures) would do a pretty decent job of protecting lithium from your ordinary household environment. It shields the metal from both atmospheric gases and moisture. Heck you could even dump the whole thing in water and it wouldn't go off in your face. But evenly coating a "decorative object" made out of lithium with wax while still being able to see the object clearly under the wax will be a bit tricky...even more so, if this "decoration" of yours has any really intricate patterns. This is where we move on to the next method, 2) Immersing it in a jar full of mineral/paraffin oil This is far simpler than coating the thing in wax. Get an airtight, transparent jar, fill it all the way to the brim with mineral oil, slowly immerse the object in the jar so you don't trap any air bubbles, screw on the lid tightly and there you have it! 
True, a major 'disadvantage' here, is that you can't take the object in your hands and examine it closer as and when you wish (which wasn't the case in the waxing method), but maybe a shiny lithium decoration floating around in a jar might make up for this aesthetically. Which of the two methods you use, is entirely at your discretion, however, a word of caution: Lithium metal is a fire risk! Do handle it with care. Have necessary safety arrangements in place.
{ "domain": "chemistry.stackexchange", "id": 7072, "tags": "everyday-chemistry, metal" }
Hokuyo_node tutorial error
Question: Hi, I followed the exact steps on http://wiki.ros.org/hokuyo_node/Tutorials/UsingTheHokuyoNode Everything showed up as expected until I reached the final step, which is rosrun rviz rviz -d $(rospack find hokuyo_node)/hokuyo_test.vcg I got an error: ERROR: the config file '/opt/ros/indigo/share/hokuyo_node/hokuyo_test.vcg' is a .vcg file, which is the old rviz config format. New config files have a .rviz extension and use YAML formatting. The format changed between Fuerte and Groovy. There is not (yet) an automated conversion program. I tried to change the command to rosrun rviz rviz -d $(rospack find hokuyo_node)/hokuyo_test.rviz, rviz did launch, but I just could not change the fixed frame to '/laser' in the global options; the fixed frame is 'map' when rviz launched. Can anyone please give me some advice? Thanks. Steven Originally posted by StevenJiang on ROS Answers with karma: 3 on 2015-05-08 Post score: 0 Original comments Comment by sergiocruz86 on 2015-06-05: Hi, Steven. I'm on the same boat as you are! Did you get the solution? Regards. Answer: Run rviz without the .vcg file, i.e. rosrun rviz rviz. Set Global Options -> Fixed frame = /laser and Grid -> Reference Frame = /laser. Then Add -> LaserScan, and set its topic to /scan. You should see the laser scan on the grid. Originally posted by rj with karma: 46 on 2015-10-08 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by ahendrix on 2015-10-08: Just to elaborate on this: the Fixed Frame field is also a text entry field; you may have to manually type in '/laser' Comment by BoY on 2017-04-06: Thanks! Got the data successfully shown on the rviz!
{ "domain": "robotics.stackexchange", "id": 21636, "tags": "ros, hokuyo-node, hokuyo" }
Simple spinlock for C using ASM, revision #1
Question: Revision #1 for Simple spinlock for C using ASM The code: static inline void atomic_open(volatile int *gate) { asm volatile ( "jmp check\n" // Renegade selected, lets skip the line! "wait:\n" // Honest citizens wait in line. "pause\n" // Stroke beard, check phone/watch. "check:\n" // Ok, lets do this... "mov %[lock], %%eax\n" // eax = 1 "lock xchg %%eax, %[gate]\n" // Exchange eax with gate value. "test %%eax, %%eax\n" // 1 = closed, 0 = open. "jnz wait\n" // Ohhh man, here I go again... : [gate] "=m" (*gate) : [lock] "r" (1) : "eax" // Tell compiler you want to use eax register. ); } static inline void atomic_close(volatile int *gate) { asm volatile ( "mov %[unlock], %[gate]\n" : [gate] "=m" (*gate) : [unlock] "r" (0) ); } // Usage, example. volatile int atomic_gate_memory = 0; void *mymalloc(size_t size) { atomic_open(&atomic_gate_memory); void *ptr = malloc(size); atomic_close(&atomic_gate_memory); return ptr; } The question is the same as before: Will atomic_[open/close] make mymalloc both threadsafe and reentrant? If no, what is wrong? If yes, it is still wrong; isn't it?... Give me a good rant about what to consider, what is missing, or about a better approach. If you want to suggest libraries, please restrict yourself to C. I am not experienced enough to bind C++ stuff to other languages, so I often can't use the good stuff over there :'( Answer: As you have no doubt noticed, given your follow up post, whilst your current lock is thread safe, it isn't reentrant. If you perform a double call to atomic_open from the same thread, the second call will fail to enter the gate (because it's already locked) and become deadlocked against itself. lock xchg %%eax, %[gate]\n" // Exchange eax with gate value. "test %%eax, %%eax\n" // 1 = closed, 0 = open. This is because there's no tracking of which thread opened the gate. 
Not checking the thread has the potential to cause another issue (which you might not care about, because it's pretty common with spinlocks) whereby if you have a bug in your code, it is possible for a thread that doesn't own the lock to unlock it (anybody can call atomic_close and it will unlock, even if they didn't call atomic_open first).
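One way to address both issues is to track an owner and a recursion depth. The sketch below is my own (C11 atomics rather than inline asm; the caller supplies a non-zero thread id, which sidesteps portability questions around converting pthread_t to an integer):

```c
#include <stdatomic.h>

/* Sketch, not the original code: a reentrant spinlock with owner check. */
typedef struct {
    atomic_long owner; /* 0 = unlocked, otherwise the holder's thread id */
    int depth;         /* recursion depth; only touched by the owner */
} rspin;

static void rspin_lock(rspin *l, long self) {
    if (atomic_load(&l->owner) == self) { /* already ours: just recurse */
        l->depth++;
        return;
    }
    long expected = 0;
    while (!atomic_compare_exchange_weak(&l->owner, &expected, self))
        expected = 0; /* CAS overwrote expected with the current owner */
    l->depth = 1;
}

static void rspin_unlock(rspin *l, long self) {
    if (atomic_load(&l->owner) != self)
        return; /* refuse to unlock a lock we don't hold */
    if (--l->depth == 0)
        atomic_store(&l->owner, 0);
}
```

With this, a double lock from the same thread no longer deadlocks, and a stray unlock from a non-owner is ignored instead of releasing someone else's lock.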
{ "domain": "codereview.stackexchange", "id": 20454, "tags": "c, reinventing-the-wheel, assembly, locking" }
How to get coordinates from depthImage
Question: Hi I am working with depth images. This is my first time with camera models, so I do not understand how to get the information I need from the camera_info topic. My inputs are a CameraInfo and an Image (depth image) topic. I need to get the coordinates (in the odom frame) of one of the pixels. Can anyone explain to me how to do it? I have tried too many things but the results are wrong. I think it may be because of the matrices that define the camera or the focal length... but I am lost in this field. Thank you. Originally posted by arebot on ROS Answers with karma: 11 on 2014-10-23 Post score: 1 Answer: How you would do it in python (not tested) given a CameraInfo message info, pixel coordinates (u, v), and the depth at those coordinates, d: import image_geometry, numpy cam_model = image_geometry.PinholeCameraModel() cam_model.fromCameraInfo(info) ray = numpy.array(cam_model.projectPixelTo3dRay((u, v))) point3d = ray * d Originally posted by Dan Lazewatsky with karma: 9115 on 2014-10-23 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 19829, "tags": "ros, depth-image, camera-depth-points" }
Rating the quality of a response based on the number of typos
Question: I have this method which returns the number, but Rubocop said: Assignment Branch Condition size for rating is too high. [15.84/15] def rating response = response_quality if response == 1 && @typos.zero? 5 elsif response == 2 && @typos.zero? 4 elsif response == 3 && @typos.zero? 3 elsif @typos.percent_of(@word_length) < 20 2 elsif @typos.percent_of(@word_length) < 30 1 elsif @typos.percent_of(@word_length) > 30 0 end end What can I change? Answer: What happens when @typos.percent_of(@word_length) is exactly 30? You need to fill in the gaps. The last case should be an else rather than an elsif, to ensure that all cases are covered. Is it guaranteed that response_quality will be 1, 2, or 3? If it's not obvious from context, then there should be an explanatory comment that other values are or aren't possible. The solution below would reduce the code a bit, but it's not necessarily better than your original approach, which did have the advantage of regularity in the pattern. def rating if @typos.zero? case response_quality when 1; return 5 when 2; return 4 when 3; return 3 end end typo_pct = @typos.percent_of(@word_length) typo_pct < 20 ? 2 : typo_pct < 30 ? 1 : 0 end
{ "domain": "codereview.stackexchange", "id": 17291, "tags": "beginner, ruby, ruby-on-rails" }
Why do the charges on a parallel plate capacitor lie only on the inner surface?
Question: In most pictures I've seen of parallel plate capacitors, charges are drawn so that they're entirely on the inner surface of the plates. I accept that there can't be any net charge within the conducting plates, as that would lead to a non-zero electric field within the metal, and charges would move to the surface. However, I see no reason why charge can't reside on both sides of the same plate. I feel like there should be a way to have charge accumulate on both sides of the same plate, and still get a zero electric field within the metal. How can we show that there isn't? Or...am I incorrect to assume that charge only distributes itself on the inner surfaces? I'd like to understand this both for the infinite plate idealization, and for the finite plate reality. Answer: This is a good question and let us analyse a general situation and then come to your question. Let us consider the following diagram: Two conducting plates $A$ and $B$ are placed parallel to each other. Let us assume that we supply a charge $Q_1$ to the left plate and $Q_2$ to the right plate. The charge distribution in the surfaces $1$, $2$, $3$ and $4$ is shown above. We know that the electric field due to a surface charge density is given by $\frac{\sigma}{2\epsilon_0}$ where $\sigma$ is the surface charge density ($Q/A$ where $Q$ is the charge and $A$ is the surface area of the plate). Using this and the fact - net electric field inside a conductor in electrostatics is zero, let us write an expression for electric field inside the plate $A$ due to the charge densities on the four surfaces: $$\begin{align} \frac{Q_1-q}{2A\epsilon_0}-\frac{q}{2A\epsilon_0}+\frac{q}{2A\epsilon_0}-\frac{Q_2+q}{2A\epsilon_0} &=0 \\ Q_1-q-Q_2-q &=0 \\ Q_1-Q_2 &=2q \\ \end{align}$$ $$q=\frac{Q_1-Q_2}{2}$$ Using this value of $q$, the charge distribution in the four surfaces looks something like the image below: Now let us turn our attention to your question. 
Usually, the plates of a capacitor are not charged initially. When we connect the two plates of a parallel plate capacitor to the two terminals of a battery, the battery just acts as an electron pump and moves negative charge from the positive plate of the capacitor and pushes it to the negative plate of the capacitor. Or in short, the battery doesn't supply a net charge; it just moves it from one plate to the other. So the two plates have the same magnitude of charge but of opposite signs. Or, on the lines of the example we considered earlier, $Q_1=-Q_2$. Let the magnitude of this common charge be $Q$. Now the charge distribution looks like the following: Thus, the charge on the outer surfaces of an initially uncharged capacitor when connected to a battery is zero. Please click on the images to view them in higher resolution. Image courtesy: My own work :)
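A quick numeric check of the two results above (my own addition, not part of the answer): $q = (Q_1 - Q_2)/2$ zeroes the field inside plate A for any pair of charges, and in the battery-charged case ($Q_1 = -Q_2$) both outer-surface charges vanish:

```python
# Check that q = (Q1 - Q2)/2 makes the field inside plate A vanish.
# The common factor 1/(2*A*eps0) cancels, so it is dropped here.
def field_inside_A(Q1, Q2, q):
    return (Q1 - q) - q + q - (Q2 + q)

for Q1, Q2 in [(3.0, 1.0), (5.0, -2.0), (4.0, 4.0)]:
    q = (Q1 - Q2) / 2
    print(field_inside_A(Q1, Q2, q))  # → 0.0 each time

# Battery-charged case: Q1 = -Q2 = Q, so the outer surfaces carry
# Q1 - q and Q2 + q, which both vanish, as the answer concludes.
Q = 2.5
q = (Q - (-Q)) / 2
print(Q - q, -Q + q)  # → 0.0 0.0
```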
{ "domain": "physics.stackexchange", "id": 65476, "tags": "electrostatics, electric-circuits, charge, capacitance, conductors" }
Expansion of the universe a thermodynamic process or not?
Question: Can the expansion of the universe be thought of as a thermodynamic process? If so, is it a closed system? Is it a reversible system? Answer: The universe is not currently in (or even close to) thermodynamic equilibrium, so the kind of thermodynamics we teach in a first course (often called "equilibrium thermodynamics") is right out.
{ "domain": "physics.stackexchange", "id": 19508, "tags": "thermodynamics, cosmology, universe, space-expansion" }
Efficient recurrent network for sequences of varying length
Question: Suppose I have a bunch of sequences of varying lengths. The absolute majority of them are short, just a few dozen items long. However, very few of them are significantly longer - more than a hundred items long. The question is, how to organize them efficiently as input to a recurrent layer? Padding doesn't work, since many sequences need to be heavily padded. Limiting batch size is not an option - these sequences are obtained as parts of a larger structure. Answer: Sequence bucketing. Depending on the input length of the sequences you can dynamically change the padding and speed it up. Take a look here: Sequence bucketing
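A minimal sketch of the bucketing idea (my own illustration, not from the linked resource): sequences are grouped into coarse length buckets and padded only within each batch, so the rare long sequences never force padding on the short ones:

```python
from collections import defaultdict

def bucket_batches(sequences, batch_size, pad=0):
    """Group sequences of similar length, then pad only within each batch."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq) // 2].append(seq)  # coarse length buckets
    batches = []
    for key in sorted(buckets):
        group = buckets[key]
        for i in range(0, len(group), batch_size):
            batch = group[i:i + batch_size]
            width = max(len(s) for s in batch)
            batches.append([s + [pad] * (width - len(s)) for s in batch])
    return batches

seqs = [[1, 2], [3], [4, 5, 6, 7], [8, 9, 10]]
for b in bucket_batches(seqs, batch_size=2):
    print(b)
# → [[3]]
#   [[1, 2, 0], [8, 9, 10]]
#   [[4, 5, 6, 7]]
```

In a real pipeline the bucket width and batch size would be tuned so that padding waste stays small without making batches too inhomogeneous.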
{ "domain": "datascience.stackexchange", "id": 6576, "tags": "neural-network, rnn" }
Why do planes use 1-2 propellers in the front, but drones have 4 on top?
Question: Planes with propellers normally have one or two propellers in front, facing forward, helicopters have one on top facing up and one facing the side, and drones have four on top facing up. How do the propeller positions and number affect flight dynamics such as speed and stability, and why does each of these use what it does? Answer: Traditionally, a helicopter tilts its lift vector so it can propel itself laterally by cyclically varying the pitch of its main rotor blades. This requires a complex mechanical system manufactured to close tolerances which also has to withstand the stresses imposed on it which arise from its rapid rotation. It also requires a tail rotor to counteract the torque which the engine applies to the main rotor. A quadcopter drone can tilt its lift vector simply by slowing down one rotor and speeding up the opposite rotor, so it dispenses altogether with the cyclic pitch mechanism and instead uses electronic control to vary the speed of each of its main rotors. The motors and control systems which make this possible are now sufficiently inexpensive that the earlier designs of radio-controlled helicopter camera platforms, which used cyclic pitch control just like the big boys, are obsolete.
{ "domain": "physics.stackexchange", "id": 58771, "tags": "fluid-dynamics, aerodynamics, aircraft" }
Definition of the refractive index
Question: My textbook says the absolute refractive index of a medium $$n = \dfrac{c}{v}$$ where $c$ is the speed of light in vacuum and $v$ is the speed of light in the medium. Why hasn't it been chosen the other way round, i.e. $n = \dfrac{v}{c}$? Answer: It's just a definition - but using $c/v$ rather than $v/c$ means that objects with a larger refractive index bend light more, which seems the right way round. It is more natural to work with bending angles (lenses, telescopes, spectacles) rather than the actual velocity, which is not usually something you perceive directly (unless you're using fibre optics over long distances and worried about signal timing).
{ "domain": "physics.stackexchange", "id": 72652, "tags": "optics, conventions, definition, refraction" }
Color swatch selector with multiple mouseover and mouseout functions
Question: I am working on a simple mouseout and mouseover functions. Basically, I have color boxes (or swatches) and I have to get the color of the specific box when hovered and when selected. You can see how it works in the fiddle. So I came out with a working script. But I am not sure how I can optimize or shorten the script since I will be having several color swatches. I made 2 samples swatches in this demo. The question: is there a way to optimize/shorten it? Contents in the script were almost the same for each functions. The class names were the only ones changed. /* COLOR SWATCH 1*/ $(".color").mouseover(function() { var hex = $( this ).css("background-color"); var hexVal = $( this ).attr("value"); var styles = { backgroundColor : hex }; $('.selected-color').css(styles); $('.selected-color').text(hexVal); }); $(".color").mouseout(function() { var hex = $('.active').css( "background-color"); var hexValue = $('.active').attr("value"); var styles = { backgroundColor : hex }; $('.selected-color').empty(); $('.selected-color').css(styles); $('.selected-color').text(hexValue); }); $('.color').on('click', function() { $(this).addClass('active').siblings().removeClass('active'); var hex = $(this).css("background-color"); var hexVal = $('.active').attr("value"); var styles = { backgroundColor : hex }; $('.selected-color').css(styles); $('.selected-color').text(hexVal); }); /* COLOR SWATCH 2*/ $(".color-2").mouseover(function() { var hexs = $( this ).css("background-color"); var hexVal = $( this ).attr("value"); var styles = { backgroundColor : hexs }; $('.selected-color-2').css(styles); $('.selected-color-2').text(hexVal); }); $(".color-2").mouseout(function() { var hex = $('.active-2').css( "background-color"); var hexValue = $('.active-2').attr("value"); var styles = { backgroundColor : hex }; $('.selected-color-2').empty(); $('.selected-color-2').css(styles); $('.selected-color-2').text(hexValue); }); $('.color-2').on('click', function() { 
$(this).addClass('active-2').siblings().removeClass('active-2'); var hex = $(this).css("background-color"); var hexVal = $('.active-2').attr("value"); var styles = { backgroundColor : hex }; $('.selected-color-2').css(styles); $('.selected-color-2').text(hexVal); }); .color-swatch div { width: 25px; height: 25px; float: left; border: 1px solid #313131; margin-right: 5px; } .selected-color, .selected-color-2 { width: 90px; height: 20px; color: #FFF; text-align: center; padding: 2px; } .active, .active-2 { border: 3px solid #151515 !important; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script> <!-- COLOR SWATCH 1 --> <p>SELECTED COLOR 1: <span class="selected-color" style="background-color:#ef4060;" value="Pink">Pink</span></p> <div class="color-swatch"> <div class="color" style="background-color:#028482;" value="Aqua"></div> <div class="color" style="background-color:#4c0055;" value="Purple"></div> <div class="color active" style="background-color:#ef4060;" value="Pink"></div> </div> <!-- COLOR SWATCH 2 --> <br><br> <p>SELECTED COLOR 2: <span class="selected-color-2" style="background-color:#028482;" value="Aqua">Aqua</span> </p> <div class="color-swatch"> <div class="color-2 active-2" style="background-color:#028482;" value="Aqua"></div> <div class="color-2" style="background-color:#4c0055;" value="Purple"></div> <div class="color-2" style="background-color:#ef4060;" value="Pink"></div> </div> Answer: HTML abuse The HTML5 specification does not allow a <div> element to have a value attribute. You can use a data-* attribute (named data-color, for example). A title attribute would also be reasonable. You are misusing CSS classes as IDs. An ID is a unique name for an individual element. A class is a set of elements that should all be treated alike. You should not have color and color-2 as two class names. Rather, they should be a single class. Confused terminology Your wording is a bit off, in my opinion. 
Each individual sample is a "swatch". A color selector is a widget consisting of three swatches. When you make a decision by clicking on a swatch, the swatch isn't just "active" — I would say that that color has been selected. Based on the above, I would rename… the color and color-2 classes both to swatch the color-swatch class to swatch-selector the active class to selected jQuery Once you rename the classes so that both widgets have the same class names, it becomes possible to write code that treats the swatch selector as a generic widget. If you need to define both a mouseover and a mouseout handler, use hover() instead. $(function() { 'use strict'; function showActiveColor($selector) { var selectorId = $selector.attr('id'); var $selected = $selector.find('.swatch.selected'); $('#active-' + selectorId).empty() .text($selected.data('color')) .css({ backgroundColor: $selected.css('background-color') }); } $('.swatch-selector .swatch').hover( function mouseOver(event) { var colorName = $(this).data('color'); var color = $(this).css('background-color'); var selectorId = $(this).closest('.swatch-selector').attr('id'); $('#active-' + selectorId).text(colorName) .css({ backgroundColor: color }); }, function mouseOut(event) { showActiveColor($(this).closest('.swatch-selector')); } ).click(function onClick(event) { $(this).addClass('selected') .siblings().removeClass('selected'); }); // Initialization so that you don't have to hard-code the span's // background-color and text to match the initial selection $('.swatch-selector').each(function(index, el) { showActiveColor($(el)); }); }); .swatch-selector .swatch { width: 25px; height: 25px; float: left; border: 1px solid #313131; margin-right: 5px; } .swatch-selector .swatch.selected { border: 3px solid #151515; } .color-name { width: 90px; height: 20px; color: #FFF; text-align: center; padding: 2px; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script> <!-- COLOR SWATCH SELECTOR 1 --> 
<p>SELECTED COLOR 1: <span class="color-name" id="active-color-1"></span></p> <div class="swatch-selector" id="color-1"> <div class="swatch" style="background-color:#028482;" data-color="Aqua"></div> <div class="swatch" style="background-color:#4c0055;" data-color="Purple"></div> <div class="swatch selected" style="background-color:#ef4060;" data-color="Pink"></div> </div> <br><br> <!-- COLOR SWATCH SELECTOR 2 --> <p>SELECTED COLOR 2: <span class="color-name" id="active-color-2"></span></p> <div class="swatch-selector" id="color-2"> <div class="swatch selected" style="background-color:#028482;" data-color="Aqua"></div> <div class="swatch" style="background-color:#4c0055;" data-color="Purple"></div> <div class="swatch" style="background-color:#ef4060;" data-color="Pink"></div> </div>
{ "domain": "codereview.stackexchange", "id": 18220, "tags": "javascript, jquery, html" }
Packet Scout (show traffic on specified port)
Question: I've just made a github repository for a simple packet scout in C#. It is my first time using C# (other than Neoaxis 3D engine logic) since I am more familiar with C#. Below is the main bulk of the code that I'd like to improve. You can see all of the code here. If you have any suggestions please share them. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Net.Sockets; using System.Net; namespace Packet_Scout { public partial class Form1 : Form { private const string endl = "\n"; private UdpClient socket; private bool listening = false; private static string buff = ""; public Form1() { InitializeComponent(); } private void downloadbuffer() { rblog.Text += buff; buff = ""; } private void btnstart_Click(object sender, EventArgs e) { if (!listening) { rblog.Text = ""; socket = new UdpClient((int)ndoor.Value); tmbuf.Start(); bgw.RunWorkerAsync(); btnstart.Text = "Stop"; listening = true; } else { tmbuf.Stop(); downloadbuffer(); bgw.CancelAsync(); socket.Close(); rblog.Text += "Stopped!"; listening = false; btnstart.Text = "Start"; } } private void bgw_DoWork(object sender, DoWorkEventArgs e) { buff = "Listening on port " + ndoor.Value + endl; while (socket != null) { if (socket.Available > 0) { IPEndPoint ipep = new IPEndPoint(IPAddress.Any, 0); Byte[] recvbytes= socket.Receive(ref ipep); buff += "Client (" + ipep.Address + ":" + ipep.Port + "): "; foreach (Byte b in recvbytes) buff += b.ToString() + " "; buff += endl; } } } private void tmbuf_Tick(object sender, EventArgs e) { downloadbuffer(); } } } Answer: Naming Methods should be named using PascalCase casing and should reflect by its name what the responsibility of the method is. So instead of downloadbuffer() a more descriptive name would be AppendToRichtextBox() or something similiar. 
Talking about responsibilities, a class should usually have only one responsibility, so you should separate the UI-related code and the listening code into separate classes. BackgroundWorker The BackgroundWorker has an event to report the current progress, so no timer is needed. The call to CancelAsync() isn't working like you guess it is. CancelAsync submits a request to terminate the pending background operation and sets the CancellationPending property to true. When you call CancelAsync, your worker method has an opportunity to stop its execution and exit. The worker code should periodically check the CancellationPending property to see if it has been set to true. Now you will say, but it stops the worker. This is only partially true, because after the said call you are closing the socket, which will lead to an ObjectDisposedException when retrieving the socket.Available property. This exception will be passed to the RunWorkerCompleted event handler; see the DoWork() event documentation: If the operation raises an exception that your code does not handle, the BackgroundWorker catches the exception and passes it into the RunWorkerCompleted event handler, where it is exposed as the Error property of System.ComponentModel.RunWorkerCompletedEventArgs. If you are running under the Visual Studio debugger, the debugger will break at the point in the DoWork event handler where the unhandled exception was raised. If you have more than one BackgroundWorker, you should not reference any of them directly, as this would couple your DoWork event handler to a specific instance of BackgroundWorker. Instead, you should access your BackgroundWorker by casting the sender parameter in your DoWork event handler. Instead of adding to the static buffer variable I would suggest using the ReportProgress() event, and instead of using the string variable you should use a StringBuilder object and pass the built string to the event.
By doing so you avoid the fact that for each string concatenation a new string is created, because strings in .NET are immutable. Like private void bgw_DoWork(object sender, DoWorkEventArgs e) { StringBuilder sb = new StringBuilder(); sb.Append("Listening on port ") .Append(ndoor.Value) .Append(endl); while (socket != null) { if (socket.Available > 0) { IPEndPoint ipep = new IPEndPoint(IPAddress.Any, 0); Byte[] recvbytes = socket.Receive(ref ipep); sb.Append("Client (") .Append(ipep.Address) .Append(":") .Append(ipep.Port) .Append("): "); foreach (Byte b in recvbytes) { sb.Append(b.ToString()) .Append(" "); } sb.Append(endl); bgw.ReportProgress(0, sb.ToString()); sb.Length = 0; } } } private void AppendToRichtextBox(string value) { rblog.AppendText(value); rblog.ScrollToCaret(); } private void bgw_ProgressChanged(object sender, ProgressChangedEventArgs e) { AppendToRichtextBox(e.UserState.ToString()); } Because the Append() method of the StringBuilder returns a StringBuilder, you can fluently call this method like sb.Append("first ").Append("and second !");. If you want to add a carriage return + new line you can simply call one of the overloaded AppendLine() methods. You can also use the var type if the right hand side of the assignment makes the type obvious: Beginning in Visual C# 3.0, variables that are declared at method scope can have an implicit type var. An implicitly typed local variable is strongly typed just as if you had declared the type yourself, but the compiler determines the type. Like var sb = new StringBuilder();
{ "domain": "codereview.stackexchange", "id": 14016, "tags": "c#, beginner" }
Induced current in a loop - what about cancellation?
Question: We all know that a time-varying magnetic field through a coil will tend to induce a current in the coil, and the converse is also true. If you look at the field lines created by current in a coil, they travel in one direction inside the coil, and in the opposite direction outside the coil, like so: Suppose that we add a time-varying field that is perfectly uniform in magnitude AND direction over the entire area inside and outside the coil. The field inside the coil would tend to induce a current in the coil in one direction, while the field outside the coil would tend to induce a current in the other. Would the effect be that zero current flows in the coil? Answer: The field outside the coil does not contribute to the emf generated. The emf depends only on the rate of change of flux through the path considered. Refer to the Maxwell equations, which state (in differential form): $\nabla \times \vec E = -\frac{\partial \vec B}{\partial t}$ which in integral form can be written as: $\int_{C} \vec E \cdot d\vec l = -\int \int_{S} \frac{\partial \vec B}{\partial t} \cdot d\vec S$ Here the left-hand side represents the contour integral around the circuit/loop/path under consideration. The right side represents a surface integral over the surface $S$ bounded by the contour $C$. As you can see, the right-hand side depends only on how the magnetic field behaves inside the loop under consideration.
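A quick numeric check of the flux rule (a sketch with made-up numbers, not from the original answer): only $d\Phi/dt$ through the loop matters, so for a uniform field $B(t) = B_0 + kt$ through a loop of area $A$ the emf is simply $-Ak$, whatever the field does outside the loop.

```python
def flux(t, A=0.01, B0=0.5, k=2.0):
    """Flux (webers) through a loop of area A for a uniform ramping field B(t) = B0 + k*t."""
    return A * (B0 + k * t)

def emf(t, dt=1e-6):
    """Induced emf = -dPhi/dt, estimated with a central difference."""
    return -(flux(t + dt) - flux(t - dt)) / (2 * dt)

# For this linear ramp the emf is constant: -A*k = -0.02 V at every instant.
```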
{ "domain": "physics.stackexchange", "id": 41605, "tags": "electromagnetism, magnetic-fields" }
Messages not being generated before dependent package
Question: Hi All, I am having a very frustrating problem. I have a package pac_industrial_robot_driver that uses messages declared in another package ros_opto22. However, I cannot get the dependency recognised by the pac_industrial_robot_driver CMakeLists.txt file. The error message: [ 31%] In file included from /home/controller/catkin_ws/src/pac_industrial_robot_driver/lib/PacIndustrialDriver.cpp:8:0: /home/controller/catkin_ws/src/pac_industrial_robot_driver/include/PacIndustrialDriver.hpp:32:38: fatal error: ros_opto22/valve_command.h: No such file or directory Clearly it is not finding the include file. # Create executables and add dependencies. foreach(p ${ALL_EXECS}) add_executable(${p} ${${p}_SRC}) add_dependencies(${p} ${PROJECT_NAME}_generate_messages_cpp ${catkin_EXPORTED_TARGETS} ${${PROJECT_NAME}_EXPORTED_TARGETS} ros_opto22_EXPORTED_TARGETS ros_opto22_gencpp ros_opto22_generate_messages_cpp) target_link_libraries(${p} ${ALL_LIBS} ${catkin_LIBRARIES} industrial_robot_client simple_message industrial_utils) endforeach(p) Note the inclusion of ros_opto22_gencpp and ros_opto22_generate_messages_cpp. I also have the following earlier on in the CMakeLists.txt file: catkin_package( # INCLUDE_DIRS include # LIBRARIES pac_industrial_robot_driver CATKIN_DEPENDS ros_opto22 # DEPENDS system_lib ) and ## Specify additional locations of header files ## Your package locations should be listed before other locations include_directories(include ${catkin_INCLUDE_DIRS}) as well as a having ros_opto22 listed under the find_package call. I am very frustrated with this, not in the least because there does not seem to be a single definitive guide to solving this problem. What is the best way to go about solving this problem? I can run catkin_make twice but that only masks the problem. I want catkin_make to run properly first time every time even after deleting everything in the build and devel directories under my catkin workspace. 
Kind Regards Bart Originally posted by bjem85 on ROS Answers with karma: 163 on 2014-09-16 Post score: 1 Original comments Comment by fherrero on 2014-09-16: I think what you need is generate_messages(DEPENDENCIES ros_opto22) Comment by paulbovbel on 2014-09-16: I believe generate_messages is not for using message files from another package. Comment by fherrero on 2014-09-17: True, my mistake. I meant add_dependencies(${PROJECT_NAME} <msg_package_name>_cpp). We use this in order to ensure the necessary messages are compiled before the current package. Answer: EDIT2: This line in your CMake code can be simplified: add_dependencies(${p} ${catkin_EXPORTED_TARGETS} ${PROJECT_NAME}_generate_messages_cpp) These are the only two things you would ever need to do; catkin_EXPORTED_TARGETS is never empty, which is important because add_dependencies will fail if you give it variables which do not exist. I can't see why your code is not compiling on the first try from what you have shown us; is there any way you could share these two packages with us, maybe even in private? EDIT: I just noticed that you are actually using the right target dependency options here, let me look at it again. Sorry for the noise below. I'm sorry you're finding this frustrating, I can sympathize. Unfortunately, to resolve this, you need to have a target dependency on the message generation. We cannot do this automatically because catkin does not override the add_executable macro in CMake. Though we've tried to mention it everywhere we think it might be run into, if it is lacking from some piece of the documentation, please feel free to update it yourself or at least point someone to the deficient docs.
This extra step for depending on message generation targets is described in the ROS Tutorials: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29#roscpp_tutorials.2BAC8-Tutorials.2BAC8-WritingPublisherSubscriber.Building_your_nodes-1 It is also mentioned in the sort of cookbook for catkin CMakeLists.txt: http://wiki.ros.org/catkin/CMakeLists.txt#Important_Prerequisites.2BAC8-Constraints And in the official genmsg documentation (genmsg is the package which generates messages): http://docs.ros.org/api/genmsg/html/usermacros.html There is also a answer on this site, which I think you've found: http://answers.ros.org/question/111136/fatal-error-file-massageh-not-found/?comment=192767#comment-192767 Hope that helps! Originally posted by William with karma: 17335 on 2014-09-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by paulbovbel on 2014-09-16: I think the issue here is that the messages from package ros_opto22 are not being found by the package pac_industrial_robot_driver containing the above CMakeLists, even though the add_executable macro contains ros_opto22_generate_messages_cpp as a dependency Comment by William on 2014-09-16: You're right, in this case he needs ${catkin_EXPORTED_TARGETS} Comment by William on 2014-09-16: Oh, I guess I missed the part where he is using the EXPORTED TARGETS already, let me update my answer.
{ "domain": "robotics.stackexchange", "id": 19411, "tags": "ros, catkin-make, generate-messages" }
How to fairly conduct a model performance with 5-fold cross validation after augmentation?
Question: I have, say, a (balanced) data-set with 2k images for binary classification. What I have done is randomly divide the data-set into 5 folds; copy-pasted the full 5-fold data-set to have 5 exact copies of it (folder_1 to folder_5, all absolutely the same data-set) first fold in folder_1 is saved as test folder and remaining (fold_2, fold_3, fold_4, fold_5) are combined as one train folder second fold in folder_2 is saved as test folder and remaining (namely, fold_1, fold_3, fold_4, fold_5) are combined as one train folder third fold in folder_3 is saved as test folder and remaining (namely, fold_1, fold_2, fold_4, fold_5) are combined as one train folder. A similar process has been done on folder_4 and folder_5. I hope, by now, you got the idea of how I distributed the data-set. The reason I did so is as follows: I have augmented the training data (train folder) in each of the folders and used test folders respectively to evaluate (ROC-AUC score). Now I kind of have 5 ROC-AUC scores, and I can take the average value of those 5 scores (assuming the above cross-validation process is done right). If I were to perform some manual hyperparameter optimizations (like an optimizer, learning rate, batch size, dropout, activation) and perform the above cross-validation with data augmentation and find the best so-called "mean ROC-AUC", does it mean I successfully conducted hyper-parameter optimization? FYI: I have no problem with computing power OR/AND time at all to loop through the hyper-parameters for this type of cross-validation with data augmentation Answer: If you used your five $X_{test}$ sets multiple times (to measure the average AUC) to decide on the best set of hyperparameters (i.e. optimizer, learning rate, batch size, dropout, activation) then yes, you successfully conducted hyper-parameter optimization.
However, the AUC you received for the best set of hyperparameters found (by manual tuning) is not representative of the real performance of your model. This is because the act of using a test set to tune the parameters of your model degenerates it back to a "training" set, because the data is not being used to measure the performance of the model but to improve it instead (although on a different level of abstraction, i.e. not to directly influence the parameters of the model, such as neural network weights), making the resulting AUC an overly optimistic biased estimator of the real AUC (that it could have resulted in for an unseen test dataset). That's why, if you care both about hyperparameter optimisation and being able to measure the "real" performance of your model, you need to split your dataset into three "buckets": training set $X_{train}$, validation set $X_{val}$ and test set $X_{test}$. $X_{test}$ should only be used once (after you've trained and tuned the model), and assuming it has enough samples and the samples are representative of the unseen data, you should get a good estimate of your model performance. $X_{val}$, on the other hand, is your validation set which you can reuse as many times as you want to find the optimal set of hyperparameters that result in the highest performance (i.e. AUC).
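The bookkeeping of that protocol can be sketched in plain Python (a toy illustration; cv_score is an invented placeholder standing in for "mean ROC-AUC over the CV folds"): the test fold is carved out once and never touched during tuning, while the remaining folds are reused freely.

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle the sample indices once, then cut them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(100, 5)
test_idx = folds[0]   # held out: scored exactly once, after tuning
cv_folds = folds[1:]  # reused as often as needed while tuning

def cv_score(hyperparam, cv_folds):
    """Placeholder standing in for 'train on k-1 folds, average the AUCs'."""
    return sum(len(f) for f in cv_folds) / (100.0 * hyperparam)

# Pick the hyperparameter with the best CV score, then (and only then)
# evaluate that single winner on test_idx for an unbiased estimate.
best = max([0.5, 1.0, 2.0], key=lambda h: cv_score(h, cv_folds))
```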
{ "domain": "ai.stackexchange", "id": 2062, "tags": "deep-learning, datasets, hyperparameter-optimization, cross-validation" }
Where did the nickel in the Earth's core come from?
Question: Why is there nickel in the core of the earth? Does it come from collisions of two neutron stars? And how do we know the core of our planet is made of nickel-iron alloy? Thank you. Answer: Nickel is formed in supernovae. There are several different types of supernova. Wikipedia suggests that about 70% of nickel was formed in exploding white dwarfs, and 30% in explosions of massive stars. Unlike, say, gold, merging neutron stars do not produce a substantial proportion of the nickel on Earth. We know about the composition of the core from Earth science techniques (studies of earthquake waves that pass through the core tell us about its density; we also know about the mass of the Earth, and these are all consistent with an iron-nickel core), and from meteorites. Metallic meteorites have come from the cores of disrupted asteroids, and these asteroids formed at the same time as the Earth. They are made of the same mix of iron and nickel.
{ "domain": "astronomy.stackexchange", "id": 5152, "tags": "planet, earth" }
A Question on Singularity
Question: I am not aware of GR, but due to curiosity I have a question in my mind. Please let me know if it is inappropriate to ask here. My question is about singularity. I am under the assumption that a singularity is, in mathematical terms, equivalent to a discontinuity in a function. My question is: what type of discontinuity are the ones corresponding to the Penrose-Hawking singularity theorems and also to the naked singularity? By type I mean: removable (left and right limits exist and are equal but not equal to the function value at that point); a jump type discontinuity (left and right limits exist and are not equal to each other); oscillating discontinuity (as described in the above-given link). Answer: Note: Depending on one's definition of the discontinuity (whether one allows talking about the continuity of the function at a point where the function is undefined), a pole might also be considered an infinite discontinuity. A singularity is certainly not equivalent to a discontinuity (not even mathematically). It can also mean a pole, and this is the usual meaning it has in GR. I.e. the curvature of the space-time blows up. For example, the Schwarzschild metric's curvature diverges at the origin, as may be checked by looking at the Kretschmann invariant. There might also exist coordinate singularities, but these are unphysical and can be removed by a better choice of coordinates. Note: What does one mean by curvature? The basic object associated with the metric tensor is a connection, and for that one can define a curvature form. Now this is quite a complicated object and it is not quite clear how to detect singularities in it. One easy way is to define invariants (such as the above-mentioned Kretschmann invariant) associated with this form.
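To make the "curvature blows up" statement concrete (a sketch added here; geometric units $G = c = 1$ are assumed): for the Schwarzschild metric the Kretschmann invariant is $K = 48M^2/r^6$, which is finite at the horizon $r = 2M$ (a mere coordinate singularity there) but diverges as $r \to 0$, a true curvature pole.

```python
def kretschmann(r, M=1.0):
    """Kretschmann scalar K = 48 M^2 / r^6 for the Schwarzschild metric,
    in geometric units (G = c = 1)."""
    return 48.0 * M**2 / r**6

horizon = kretschmann(2.0)       # r = 2M: finite (48/64 = 0.75), nothing singular
near_origin = kretschmann(1e-3)  # r -> 0: grows without bound, a genuine pole
```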
{ "domain": "physics.stackexchange", "id": 129, "tags": "general-relativity, gravity" }
For a spacecraft orbiting a planet, orbital speed is inversely proportional to orbit radius. But speed must be increased to increase orbit radius?
Question: For a spacecraft in orbit with radius $r$ with speed $v$ around a planet, centripetal force $F_C$ is provided by gravity: $$\frac{GmM}{r^2}=\frac{mv^2}{r},$$ which simplifies to $$\frac{GM}{r}=v^2.$$ This means that orbits closer to the planet are required to have greater speeds. However, if we want to move a spacecraft to a higher orbit, we have to increase the semimajor axis (adding energy to the orbit) by increasing velocity (source: FAA). How is this reconciled with the above equation? Answer: The equation you have written there applies only to circular orbits, but the orbit is not circular during the time the spacecraft is climbing to the higher orbit. As the spacecraft climbs towards the higher orbit, its initially increased velocity slows down as kinetic energy is transformed into potential energy.
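Both halves of the puzzle can be checked numerically (an illustrative sketch using Earth's standard gravitational parameter; the two altitudes are just examples): circular-orbit speed $v = \sqrt{GM/r}$ falls with radius, while the specific orbital energy $E = -GM/(2a)$ rises with the semimajor axis, which is what the prograde burn pays for.

```python
import math

GM_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2

def circular_speed(r):
    """Speed on a circular orbit of radius r: v = sqrt(GM/r)."""
    return math.sqrt(GM_EARTH / r)

def specific_energy(a):
    """Specific orbital energy for semimajor axis a: E = -GM/(2a), in J/kg.
    A higher orbit has a less negative E, i.e. more total energy."""
    return -GM_EARTH / (2.0 * a)

low = circular_speed(6.778e6)    # ~400 km altitude: about 7.67 km/s
geo = circular_speed(4.2164e7)   # geostationary radius: about 3.07 km/s
```

The higher orbit is slower yet more energetic, so adding energy with a burn ends with a lower orbital speed once the transfer is complete.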
{ "domain": "physics.stackexchange", "id": 84386, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, centripetal-force, celestial-mechanics" }
Would anything limit an "invasive species" T. rex introduction now besides equilibration with available prey or intervention by humans?
Question: The video Two-and-a-half billion T. rex roamed Earth, study finds contains a short explanation of the results in Science Absolute abundance and preservation rate of Tyrannosaurus rex by the corresponding author of the paper. I'd like to understand better what is known about T. rex's adaptability and range of environments where it was thought to flourish. It's hard to imagine what things were like back then, so using modern flora/fauna and climate distributions as reference points will help me to understand this better. Based on that knowledge and reference frame, I'd like to ask about the following Gedankenexperiment: Question: Would anything limit an "invasive species" T. rex introduction now besides running out of food or intervention by humans? Would they basically thrive over much of the Earth's surface and eat all the other mammals? Or without a supply of dead and rotting dinosaur cadavers would these giant scavengers with excellent olfaction quickly deplete a small region and die of starvation? Currently there is not likely enough DNA to reconstruct a full genome nor a scheme to build and incubate a dinosaur egg for it (as far as I know), so we are not at any risk of this happening in the near future. Answer: Lots of things could impact a reintroduced invasive species. Disease/sickness: although no close relatives are present, that is not a guarantee; many diseases jump species. Parasites likewise could cause problems, especially newer parasites the dino may not have a good defense for. Even modern fungus could pose a problem for eggs. Predators: While an adult t-rex is predator-proof, eggs and newborns are not. Rats and other small mammals as well as many birds have been the demise of many ground-nesting birds. Poisons: there are plenty of animals and plants that are poisonous enough to kill a t-rex that the rex may not recognize as dangerous. Remember there are a lot of things around that a t-rex did not evolve with.
This is more dependent on location; a t-rex dropped in Australia might kill itself with its first meal. Human stuff: I am not talking about humans killing them. I am thinking of the things humans make killing them: things like walking into power lines, eating weird plastic human garbage, car accidents (a car moving at speed could easily cripple a rex), or even something as simple as getting hit by trains.
{ "domain": "biology.stackexchange", "id": 11343, "tags": "dinosaurs, invasive-species" }
Why do we need to use oxidation number in order to balance a complex chemical reaction easily in an acidic or alkaline medium?
Question: I have been trying to balance this chemical reaction in an "acidic" medium without using oxidation numbers or the ion electron method: [$\ce{Fe^2+ + Cr2O7^2- -> Cr^3+ + Fe^3+}$]. Though I can balance the number of atoms on both sides $[\ce{Fe^2+ + Cr2O7^2- + 14H+ -> 2 Cr^3+ + Fe^3+ + 7 H2O}]$, the charge is still unbalanced. I am not sure why? I don't have any intuition about what's going on here. Answer: A balanced equation must have a mass balance, i.e., the masses should be equal on both sides, and charges must be balanced as well. Oxidation number is just a way of book-keeping. Nothing fundamental there. $$\ce{Fe^2+ + Cr2O7 + 14H+ -> 2 Cr^3+ + Fe^3+ + 7 H2O}$$ What are you forgetting? It is mass balanced. Does $\ce{Cr2O7}$ exist? Hint: It is potassium dichromate.
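A quick bit of bookkeeping makes the hint concrete (a sketch added here, not part of the original answer): tallying the net ionic charge on each side shows why the attempted equation fails, and that the fix is the coefficient on iron.

```python
# Attempted equation: Fe^2+ + Cr2O7^2- + 14 H+ -> 2 Cr^3+ + Fe^3+ + 7 H2O
left = (+2) + (-2) + 14 * (+1)   # net charge on the left: +14
right = 2 * (+3) + (+3)          # net charge on the right: +9 (water is neutral)

# Mass-balanced but not charge-balanced.  Using 6 Fe^2+ / 6 Fe^3+ instead:
# 6 Fe^2+ + Cr2O7^2- + 14 H+ -> 2 Cr^3+ + 6 Fe^3+ + 7 H2O
left_fixed = 6 * (+2) + (-2) + 14 * (+1)   # +24
right_fixed = 2 * (+3) + 6 * (+3)          # +24: now both balances hold
```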
{ "domain": "chemistry.stackexchange", "id": 12414, "tags": "acid-base, stoichiometry, oxidation-state" }
What form of energy is magnetism stored in?
Question: In physics class I was told that there are 9 forms of energy: -Electrical -Light -Thermal -Nuclear -Elastic Potential -Gravitational Potential -Sound -Kinetic -Chemical So, my question is, what form of energy is the energy stored in when two magnets are pulled apart? Answer: Answer: Electrical (or electromagnetic) The formula of the energy in magnetic fields can be written in two forms, as follows, $$ W=\frac{1}{2}\int_V(\mathbf{A}\cdot\mathbf{J})d\tau \ , $$ and $$ W=\frac{1}{2\mu_0}\int_{all\ space}B^2d\tau\ , $$ where $\mathbf{A}$ is the magnetic vector potential, $\mathbf{J}$ is the volume current density, and $B$ is the magnitude of the magnetic field. So, we can say the energy is stored in the magnetic field, in the amount $B^2/2\mu_0$ per unit volume. And we can also say that the energy is stored in the current distribution in the amount $\frac{1}{2}(\mathbf{A}\cdot\mathbf{J})$ per unit volume. I recommend Griffiths' Introduction to Electrodynamics for you, and you can read it to get more details.
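As a rough feel for the magnitudes in the second formula (a sketch added here, not part of the original answer): the field energy density $u = B^2/2\mu_0$ for a 1 T field, around the strength of a strong laboratory magnet, comes out near $4 \times 10^5$ joules per cubic metre.

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def energy_density(B):
    """Magnetic energy per unit volume, u = B^2 / (2 mu_0), in J/m^3."""
    return B ** 2 / (2.0 * MU_0)

u_1T = energy_density(1.0)  # roughly 3.98e5 J/m^3
```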
{ "domain": "physics.stackexchange", "id": 26713, "tags": "electromagnetism, energy" }
$(\mu,P,T)$ pseudo-ensemble: why is it not a proper thermodynamic ensemble?
Question: While teaching statistical mechanics, and describing the common thermodynamic ensembles (microcanonical, canonical, grand canonical), I usually give a line on why there can be no $(\mu, P, T)$ thermodynamic ensemble ($\mu$ being the chemical potential, $P$ being the pressure, and $T$ being the temperature). My usual explanation is a handwaving argument that all of the control parameters would be intensive, which leaves all the conjugate extensive parameters unbounded, and you can't write your sums and integrals anymore. However, I have always felt uneasy because: I never was able to properly convince myself where exactly the problem arose in a formal derivation of the ensemble (I am a chemist and as such, my learning of statistical mechanics didn't dwell on formal derivations). I know that the $(\mu, P, T)$ ensemble can actually be used for example for numerical simulations, if you rely on limited sampling to keep you out of harm's way (see, for example, F. A. Escobedo, J. Chem. Phys., 1998). (Be sure to call it a pseudo-ensemble if you want to publish it, though.) So, I would like to ask how to address point #1 above: how can you properly demonstrate how chaos would arise from the equations of statistical mechanics if one defines a $(\mu, P, T)$ ensemble? Answer: If you have only one species of particles then working with the $(\mu,p,T)$ ensemble does not make sense, as its thermodynamic potential is $0$. $$U = TS -pV + \mu N,$$ so the Legendre transformation in all of its variables (i.e. $S-T$, $V-(-p)$ and $N-\mu$) $$U[T,p,\mu] = U - TS + pV - \mu N$$ is always zero. This fact is called the Gibbs-Duhem relation, i.e. $$0 = d(U[T,p,\mu]) = -S dT + V dp - N d\mu.$$ However, if you have more species of particles, you can work with such a thermodynamic potential as long as you have at least one extensive variable (e.g. the number of one species of particles).
{ "domain": "physics.stackexchange", "id": 3250, "tags": "thermodynamics, statistical-mechanics" }
Reduction: Does polytime reduction imply Turing reduction?
Question: I am unsure if given $A \leqslant_p B$, does that imply that $A \leqslant_T B$. If we can polytime reduce $A$ to $B$, that would imply there is a decider for $A$ that runs in polynomial time which can be mapped to the decider for $B$, meaning we would halt on any input that is run on the decider for $B$. Answer: I think you got yourself a bit confused. If $A \le_p B$ then there is a polynomial-time computable function $f$ that maps instances (not deciders) of $A$ into instances of $B$ such that $x \in A \iff f(x) \in B$. This means that $x$ is a yes-instance of $A$ if and only if $f(x)$ is a yes-instance of $B$. Notice how this definition does not involve deciders halting. In fact $A$ and $B$ could even be languages for which no decider exists. That said, if $A \le_p B$ then $A \le_T B$, since there exists a Turing machine with an oracle for $B$ that decides $A$. A possible Turing machine operates as follows on input $x$: It simulates a Turing machine that computes $y=f(x)$. It invokes the oracle for $B$ on $y$. If the oracle returns "yes", it accepts. Otherwise it rejects. Notice that we did not even need to use the hypothesis that $f$ can be computed in polynomial time.
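The oracle construction in the answer can be sketched in a few lines. This is only a toy illustration (not a real Turing machine): here `A` is the even integers, `B` is the multiples of 4, and `f(x) = 2x` plays the role of the polynomial-time mapping reduction, since x is even if and only if 2x is a multiple of 4. The function `oracle_B` is a stand-in for the oracle.

```python
def f(x):
    # the many-one reduction: maps an instance of A to an instance of B
    return 2 * x

def oracle_B(y):
    # stand-in for the B-oracle: decides membership in B = multiples of 4
    return y % 4 == 0

def decide_A(x):
    """Turing reduction built from the mapping reduction:
    compute f(x), ask the B-oracle once, return its answer."""
    return oracle_B(f(x))

print([decide_A(x) for x in range(5)])  # [True, False, True, False, True]
```

The one-query structure is exactly the point: a polytime many-one reduction is a very restricted special case of a Turing reduction.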
{ "domain": "cs.stackexchange", "id": 19263, "tags": "computability, reductions, polynomial-time-reductions" }
What is an equivalent?
Question: I want to know what an equivalent is and how it differs from equivalent mass. How is it calculated, and why is it used in chemistry for determining concentration? Answer: In reaction stoichiometry, it is the amount of one substance that reacts with one mole of another substance. Equivalent mass is molar mass divided by the n factor, which isn't always dependent on the other reactant. As a rule of thumb, "one equivalent of one reactant reacts completely with one equivalent of another." These concepts of equivalence are extremely important in acid-base titration to determine the neutralisation point: $xM_1V_1 = yM_2V_2$, where $M_i$ is molarity. For a relation in terms of equivalents, use normality. NOTE: Normality is a measure of concentration that is equal to the gram equivalent weight per liter of solution. Gram equivalent weight is a measure of the reactive capacity of a molecule. As an example, a solution containing $\pu{1M}$ $\ce{H2SO4}$ is actually $\pu{2N}$ as each molecule dissociates to give 2 $\ce{H+}$ ions. I hope that clears it up.
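The relations above can be put into a small numeric sketch. The helper names and the example concentrations are made up for illustration; `n_factor` is the assumed number of $\ce{H+}$ (or $\ce{OH-}$) supplied per formula unit.

```python
def normality(molarity, n_factor):
    # normality = molarity times the number of equivalents per mole
    return molarity * n_factor

def titration_volume(m1, n1, v1, m2, n2):
    """Solve n1*M1*V1 = n2*M2*V2 for V2 (same volume units in and out)."""
    return (n1 * m1 * v1) / (n2 * m2)

assert normality(1.0, 2) == 2.0  # 1 M H2SO4 is 2 N, as in the answer
# 25 mL of 0.1 M H2SO4 (n = 2) titrated against 0.2 M NaOH (n = 1):
print(titration_volume(0.1, 2, 25.0, 0.2, 1))  # -> ~25 mL of base needed
```

Working in equivalents is what lets a diprotic acid and a monoprotic base be compared volume-for-volume at the endpoint.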
{ "domain": "chemistry.stackexchange", "id": 14065, "tags": "solutions, stoichiometry, concentration" }
How to get pointcloud2 data from usb_cam or Pointgrey camera?
Question: Hello, experts~ I am new to ROS. Recently, I have been trying to get an image and have it detected by the ar_track_alvar node. However, this AR node requires sensor_msgs/PointCloud2 data. First I tried to use usb_cam_node, and I got sensor_msgs/Image. The same thing happened using a Point Grey camera with its driver, pointgrey_camera_driver. After searching some files, I find that it might work if I can use OpenNI to get the image; after all, it is described in the wiki: "Launch files to open an OpenNI device and load all nodelets to convert raw depth/RGB/IR streams to depth images, disparity images, and (registered) point clouds." here. But after I run the command roslaunch openni2_launch openni2.launch it says there is no device. I really don't know how to work this out. Can anyone help me answer the following questions: Can a Point Grey camera be directly used by the OpenNI launch file? Can a common USB camera be used by the OpenNI launch? If the answer to both is negative, how can I get PointCloud2 data, with what kind of device and with which driver? P.S. Here is what the shell says after I run roslaunch openni2_launch openni2.launch: [ INFO] [1431337093.836048605]: No matching device found.... waiting for devices. Reason: std::string openni2_wrapper::OpenNI2Driver::resolveDeviceURI(const string&) @ /tmp/buildd/ros-indigo-openni2-camera-0.2.3-0trusty-20150327-0611/src/openni2_driver.cpp @ 623 : Invalid device number 1, there are 0 devices connected. Originally posted by david059 on ROS Answers with karma: 13 on 2015-05-11 Post score: 0 Answer: To use the OpenNI package and generate a point cloud you need a Kinect or an Asus Xtion (a depth sensor). OpenNI won't work with a normal USB camera. The usb_cam package is for capturing RGB images with a USB camera.
For generating a point cloud you can use either a Kinect with the freenect package, or rgbdslam (available for Electric, an older version of ROS). If you are only interested in creating 3D, you can use lsd_slam or ORB-SLAM; they will give a nice 3D reconstruction in real time but won't publish any point-cloud message. Originally posted by KDROS with karma: 67 on 2015-05-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by david059 on 2015-05-11: WOW! So quick! Thank you very much for your guidance! So now I understand what can be used for OpenNI and what cannot. But can I take it that the pointcloud can only be generated by OpenNI, which means I have to have the devices you mentioned? Is there any other way I can get a pointcloud with usb_cam? Comment by KDROS on 2015-05-11: Yup, as I mentioned, with lsd_slam and ORB_Slam you can get a good pointcloud, but for a very good pointcloud either use lsd_slam with a global shutter camera with a fish eye lens, or a Kinect/Asus Xtion with the freenect/openNI2 package. Comment by david059 on 2015-05-11: I think you just really did me a great favor! Even though I can't find LSD_SLAM nor ORB_SLAM in http://www.ros.org/browse/list.php, I find them in github! I will try them right now and I hope this time I can make it. Thank you, KDROS! Comment by KDROS on 2015-05-11: Thanx. Yup, it is not available in ROS, check this link: lsd_slam. You can ask if you get into trouble. Comment by david059 on 2015-05-12: Hey KDROS, after reading what it says on github, I realize the problem you mentioned: LSD_SLAM indeed can not be used for live-pointcloud use in ROS. But I think I find another way to get PointCloud2 data. By using image_proc Comment by david059 on 2015-05-12: I will talk in more detail in my answer below
{ "domain": "robotics.stackexchange", "id": 21652, "tags": "ros, sensor-msgs, ar-track-alvar, camera, usb-cam" }
Commutator of field operator with function of number operator
Question: Consider the commutator $$\left[\hat{a},\sqrt{1-\hat{a}^\dagger\hat{a}}\right].$$ Because $\hat{a}^\dagger\hat{a}=\hat{n}$ is the familiar number operator, and $[\hat{a},\hat{n}]=\hat{a}$, I would expect that there is a simple expression for the commutator above. However, I don't see it, and writing out the square root with a series expansion seems very messy. Is there a simple way to address this commutator? Answer: $\hat{a} f(\hat{n})= f(\hat{n}+1)\hat{a}$, so that $$\left[\hat{a}, f(\hat{n}) \right]= \Bigl (f(1+\hat{n})- f(\hat{n})\Bigr )~ \hat{a} \\ = \hat{a} \Bigl (f(\hat{n})- f(\hat{n}-1)\Bigr ) .$$ Note its action on $|0\rangle$, $|1\rangle$, ...
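The identity $\hat a f(\hat n) = f(\hat n + 1)\hat a$ is easy to sanity-check numerically in a truncated Fock basis. The truncation dimension and the choice $f(n)=e^{-n}$ below are illustrative assumptions; $f$ is arbitrary here (an exponential keeps everything real, whereas the question's $\sqrt{1-\hat n}$ is only real on the $n\in\{0,1\}$ subspace).

```python
import numpy as np

N = 12                                    # truncation dimension (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator, a|k> = sqrt(k)|k-1>
n = a.conj().T @ a                        # number operator, diag(0, 1, ..., N-1)

f = lambda x: np.exp(-x)                  # any function of n works here
f_n  = np.diag(f(np.arange(N)))           # f(n)
f_n1 = np.diag(f(np.arange(N) + 1))       # f(n+1)

lhs = a @ f_n - f_n @ a                   # [a, f(n)]
rhs = (f_n1 - f_n) @ a                    # (f(n+1) - f(n)) a
print(np.allclose(lhs, rhs))              # True
```

Because $\hat a$ only has a superdiagonal, the identity holds exactly even in the truncated matrices, with no truncation error.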
{ "domain": "physics.stackexchange", "id": 70415, "tags": "quantum-mechanics, homework-and-exercises, operators, harmonic-oscillator, commutator" }
How does the weak interaction sense the instability of a nucleus?
Question: To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons (...) (via nasa.gov). How did it know it needs to reduce the disruptive energy? Example: Two electrons (elementary particles) coming together are repelled by the exchange of a photon (mediator). They've come so close that the force-carrier acted. This is easy to grasp. For the beta decay / weak interaction / W boson, I can't quite understand it in similar intuitive terms. In other words, how does the instability of a nucleus (how far off the beta stability line it is) make it act on a single quark via the very short-ranged W boson? This question stems from further reading after posting a related question (now deleted). I thought I needed to read more before asking basic questions, but I only found myself more confused at more basic levels. (That question was how does a rare W boson become not rare in elements like francium that beta decay very quickly.) Answer: There are basically two things to consider:
isospin: originally, when we did not know what neutrons and protons were made up of, this was the only QM characteristic to distinguish between them. Now we know the structure of nucleons, but isospin remains. Now it is very important to understand that a neutron and a proton can align their isospin, thus (coming a little closer, where the strong force is even stronger) creating a more stable, lower energy QM system. This means that the nucleus is trying to equalize the number of neutrons and protons. This is the only way to create a lowest-energy-state, most stable nucleus.
the strong force is very short-ranged, whereas the EM force's range is infinite. When you have a big nucleus, consisting of a lot of nucleons, the strong force will not be able to balance the EM repulsion between protons.
Now if you have a proton-rich nucleus that is too big, it will not be stable; the nucleus will try to go to a lower energy state, a more stable energy state, by converting protons to neutrons. Thus, because of isospin, the size of the whole nucleus will decrease too, and the neighboring nucleons will have a stronger (isospin-aligned) bond, with lower total energy, a more stable system. You are asking how the nucleus knows or feels that it has to go to a lower energy state. In QM, it is all about probabilities. It is a misconception that the proton is stable. It is only stable if it is free. Inside the nucleus, the proton will decay. Neutrons are unstable, so they will decay too. Now you might be asking about beta decay. In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which a beta particle (fast energetic electron or positron) is emitted from an atomic nucleus. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino, or conversely a proton is converted into a neutron by the emission of a positron (positron emission) with a neutrino, thus changing the nuclide type. You are saying that two electrons will repel, because they are repelled by a mediator. Now this mediator is a virtual photon. It is a mathematical description of how the EM field of the electron interacts with the other electron. In reality, we do not know how they interact. We have experiments, and the mathematical description of virtual photons fits the data. Now in beta decay, the whole nucleus's instability does not act directly on the neutron or the proton to make it decay. It is the forces: the strong force attracts, the EM force repels. Now the isospin matters too. The nucleus as a whole moves toward the lowest energy state possible. That is if the neutron and proton numbers are equalized. And if a nucleus is too big, the strong force can't balance the EM repulsion of the protons.
So you:
can't go too big, because big nuclei are unstable (strong vs EM force)
can't have too many protons or too many neutrons; they need to be balanced (isospin)
In QM, it is all about probabilities. The protons and neutrons inside the nucleus are unstable themselves, not just the whole nucleus. They will decay. Thus, they move toward a more stable nucleus. The answer to your question is that the whole nucleus's instability does not act on the quarks directly. It is the neutrons and protons themselves that are unstable inside the nucleus and will decay.
{ "domain": "physics.stackexchange", "id": 59932, "tags": "particle-physics, nuclear-physics, weak-interaction, popular-science" }
In any glass, why doesn't its shadow show when near a screen?
Question: [Figure 1. Here shadow is showing, but not when it is near. I had seen an explanation, but I am not sure about it.] [Figure 2. Here shadow is showing, but not when the glass is near.] Answer: This happens because glasses are basically lenses and lenses change direction of incoming light, causing it to converge or diverge. Close to a screen, the effect is insignificant, since the intensity of the converging or diverging light changes gradually, with distance. As the distance from the screen increases, the effect will become more obvious and will depend on the type of lenses, converging or diverging, and the focal length. At some distances, the light could intensify, at other distances, it may get dimmer and you'll see a shadow. This is illustrated below for short-sighted glasses (concave lenses) and for a magnifying glass (convex lens, similar to reading glasses), for different distances. With flat glass, you would not observe any of these changes.
{ "domain": "physics.stackexchange", "id": 53248, "tags": "optics, everyday-life" }
Finding Tours through Near-Hamiltonian Paths?
Question: Say I have a connected graph. I want to find a tour that visits each vertex at least once. It's not always possible, though, for there to be a solution if there is a bridge in the graph. Is there a term for this sort of problem or a recommended approach to solving it? The motivation, if it helps, that I ultimately have in mind is traversing every station in a subway system. Answer: If vertices may be revisited, such a tour always exists in a connected graph; the interesting problem is minimizing its length, which is equivalent to the travelling salesman problem (TSP) on the shortest-path distances between vertices, and those distances form a metric. There are no polynomial-time $\alpha$-approximation algorithms for general TSP where $\alpha$ is a constant unless $\mathsf{P} = \mathsf{NP}$. However, for metric TSP there are approximation algorithms, e.g. Christofides' algorithm. A simpler 2-approximation is obtained by taking an MST and shortcutting a walk around it.
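A minimal sketch of the MST-based 2-approximation for metric TSP: build a minimum spanning tree, walk it in preorder, and shortcut repeated vertices. The points and coordinates below are made up for illustration; by the triangle inequality the shortcut tour costs at most twice the MST weight, which is at most twice the optimum.

```python
import math

pts = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 1)]   # hypothetical metric instance
d = lambda i, j: math.dist(pts[i], pts[j])
n = len(pts)

# Prim's algorithm for the MST of the complete metric graph
in_tree, parent = {0}, {}
while len(in_tree) < n:
    i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
               key=lambda e: d(*e))
    in_tree.add(j)
    parent[j] = i

# adjacency lists of the MST, then a preorder DFS gives the shortcut tour
adj = {v: [] for v in range(n)}
for j, i in parent.items():
    adj[i].append(j)
    adj[j].append(i)

tour, seen = [], set()
def dfs(v):
    tour.append(v)
    seen.add(v)
    for w in adj[v]:
        if w not in seen:
            dfs(w)
dfs(0)
tour.append(0)  # close the tour

cost = sum(d(tour[k], tour[k + 1]) for k in range(len(tour) - 1))
mst_cost = sum(d(i, j) for j, i in parent.items())
print(tour, round(cost, 2))  # visits every vertex; cost <= 2 * MST <= 2 * OPT
```

For a subway map, `d` would be the shortest-path travel time between stations rather than straight-line distance; the algorithm is unchanged.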
{ "domain": "cstheory.stackexchange", "id": 2179, "tags": "graph-theory, co.combinatorics, optimization" }
Why is in the reaction of cyclic unsaturated aldehydes with hydrazine and hydrogen peroxide the aldehyde preserved?
Question: Question What is the product of this reaction? The answer given: At first, I thought this was a typical Wolff-Kishner reduction, but then I saw the reagents didn't match up (no $\ce{OH-}$). How is this getting formed? Shouldn't the $\ce{CHO}$ group be reduced? What am I missing here? Answer: The reaction of $\ce{H2O2}$ with hydrazine forms diazene, $\ce{HN=NH}$. This reacts in a syn fashion to deliver both hydrogens to the same side of the alkene.
{ "domain": "chemistry.stackexchange", "id": 15877, "tags": "organic-chemistry, redox, carbonyl-compounds" }
Cells "grown in LB broth to an OD600nm"
Question: Cells "grown in LB broth to an OD600nm" What does the OD stand for in this? Answer: OD = Optical Density As you grow cells in broth, the broth becomes cloudy. The more cells there are, the cloudier it is. After you grow your cells for a while, you can measure their progress by shining a light through your sample. The more light your cell culture absorbs (higher absorbance is equivalent to higher OD) the more cells there are in it.
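The measurement itself is just absorbance: $\mathrm{OD} = -\log_{10}(I_{\text{transmitted}}/I_{\text{incident}})$. The sketch below turns that into numbers; the cells-per-mL conversion factor is a rough rule of thumb for E. coli (on the order of $8\times10^8$ cells/mL at $\mathrm{OD}_{600} = 1.0$), not a universal constant, and it only holds in the linear range of the instrument.

```python
import math

def od(i_transmitted, i_incident):
    # optical density (absorbance) from measured light intensities
    return -math.log10(i_transmitted / i_incident)

CELLS_PER_ML_AT_OD1 = 8e8   # assumed calibration value for E. coli

def estimated_density(od600):
    # only roughly valid in the linear range (OD below about 1)
    return od600 * CELLS_PER_ML_AT_OD1

print(round(od(25.0, 100.0), 3))        # 0.602 — 25% of the light gets through
print(f"{estimated_density(0.6):.1e}")  # rough cell count per mL at OD 0.6
```

In practice each lab calibrates OD against plate counts for its own spectrophotometer and strain, since scattering depends on cell size and geometry.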
{ "domain": "biology.stackexchange", "id": 3000, "tags": "cell-culture" }
Relative Fluctuations in volume at thermodynamic limit
Question: Volume fluctuation of a pressure ensemble is given by $$\langle(\Delta V)^2\rangle = k_BT\langle V\rangle\kappa_T$$ where $\kappa_T$ is the isothermal compressibility. In the book, it is said that in the thermodynamic limit, the relative fluctuations vanish in the thermodynamic limit, which I don't seem to understand why? $$\frac{\sqrt{\langle(\Delta V)^2\rangle}}{\langle V\rangle} = \frac{\sqrt{k_BT\kappa_T}}{\sqrt{\langle V\rangle}}$$ Both $\kappa_T$ and $V$ are finite in the thermodynamic limit. So, how are the relative fluctuations vanishing? Answer: The thermodynamic limit (TL) is a theoretical tool necessary to justify the connection between Statistical Mechanics and Thermodynamics. Indeed, for interacting finite systems, the logarithm of the partition functions in different ensembles fails to have some key properties like additivity, convexity, and the possible presence of non-analytic behavior (aka phase transitions). The thermodynamic limit restores the expected thermodynamic behavior of macroscopic systems. However, the exact statement about how to perform the TL changes in each statistical ensemble. The general recipe is that the limit that should exist is the limit of the logarithm of the partition function divided by one extensive independent variable; all the extensive independent variables must go to infinity by keeping fixed their ratios. To list a few well-known and some less well-known cases, the way to implement the TL in different ensembles is the following. 
microcanonical ensemble $$ \lim_{{{E\rightarrow \infty} \atop {V \rightarrow \infty}} \atop {N \rightarrow \infty }} \frac{1}{V} \log W(E,V,N)~~~~~~~~~~~~~~~~~~~~~~{\mathrm {by~keeping~constant}}~~~~~~ \frac{E}{V}= e ~~~~ {\mathrm {and}}~~~~~~~~ \frac{N}{V}= \rho $$ canonical ensemble $$ \lim_{{ {V \rightarrow \infty}} \atop {N \rightarrow \infty }} \frac{1}{V} \log Z(\beta,V,N)~~~~~~~~~~~~~~~~~~~~~~{\mathrm {by~keeping~constant}}~~~~~~ ~~~~~ \frac{N}{V}= \rho $$ grand canonical ensemble $$ \lim_{V \rightarrow \infty} \frac{1}{V} \log \Xi(\beta,V,z=e^{\beta \mu}) $$ isothermal-isobaric ensemble $$ \lim_{N \rightarrow \infty} \frac{1}{N} \log X(\beta,\beta P,N) $$ Notice that there is only one extensive independent variable in the last two cases. The previous TLs, if they exist, correspond respectively to entropy density $s(e,\rho)$ (microcanonical); $-\beta f(\beta,\rho)$, $f$ density of free energy; $-\beta P(\beta,z)$; $-\beta \mu(T,P)$ In the case of the isothermal-isobaric ensemble, the existence of TL implies the extensiveness of the Gibbs free energy $G= N \mu$. Therefore, $\langle V \rangle=\frac{\partial G}{\partial P}$ grows like $N$ and the relative fluctuation of the volume vanishes at the TL.
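The vanishing of the relative fluctuation can be made concrete with the ideal gas, for which $\kappa_T = 1/P$ and $\langle V\rangle = N k_B T/P$, so $\sqrt{k_B T \kappa_T/\langle V\rangle} = 1/\sqrt{N}$. The temperature and pressure below are arbitrary illustrative values; the $1/\sqrt{N}$ scaling is independent of them.

```python
import math

kB, T, P = 1.380649e-23, 300.0, 1.0e5   # SI units; T and P chosen arbitrarily

def rel_fluctuation(N):
    V = N * kB * T / P          # mean volume of an ideal gas
    kappa_T = 1.0 / P           # ideal-gas isothermal compressibility
    return math.sqrt(kB * T * kappa_T / V)

for N in (1e3, 1e9, 6.022e23):
    print(f"N = {N:.0e}: dV/V = {rel_fluctuation(N):.1e}")
# scales as 1/sqrt(N): about 1e-12 for a mole of gas
```

This is the generic behaviour: since $\langle V\rangle$ grows like $N$ while $k_B T\kappa_T$ stays finite, the relative fluctuation decays as $N^{-1/2}$ and vanishes in the thermodynamic limit.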
{ "domain": "physics.stackexchange", "id": 93755, "tags": "thermodynamics, statistical-mechanics, volume" }
Vibrating Objects Part 2
Question: Since I have an answer to Vibrating Objects, I am asking a part two about much faster vibration. What about vibrating an object at 1 THz or higher? What methods can be used? How much energy is spent? Answer: I doubt you can do that. Let's do a back-of-the-envelope calculation. Suppose we are dealing with harmonic oscillation. $$v(t)= 2 \pi \nu A \cos (2\pi\nu t)$$ $$v_{max}=2\pi\nu A$$ where $A$ is the maximum displacement. You want the process to be macroscopic, so we could take $A=1\,\mathrm{\mu m}$, pretty small (note that $A=1\,\mathrm{mm}$ at this frequency would already give a speed far above $c$). If $\nu=1\,\mathrm{THz}$, we get $v_{max}\approx 0.02c$, where $c$ is the speed of light. For a mass of $1\,\mathrm{g}$, the associated kinetic energy would be similar to the kinetic energy of an Airbus A380 at cruising speed.
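The back-of-the-envelope numbers are easy to check. The amplitude below is an assumption: $A = 1\,\mu\mathrm{m}$ is roughly the largest macroscopic amplitude that keeps the motion sub-luminal at 1 THz (at $A = 1$ mm, $v_{max}=2\pi\nu A$ would already exceed $c$).

```python
import math

c = 2.998e8            # speed of light, m/s
nu = 1.0e12            # 1 THz
A = 1.0e-6             # 1 micrometre amplitude (assumed)

v_max = 2 * math.pi * nu * A
print(f"v_max = {v_max:.2e} m/s = {v_max / c:.3f} c")   # about 0.021 c

m = 1.0e-3             # 1 g test mass
ke = 0.5 * m * v_max**2
print(f"KE ~ {ke:.1e} J")  # ~2e10 J, comparable to an A380 at cruise
```

A fully loaded A380 (~5.6e5 kg at ~250 m/s) carries about 1.8e10 J of kinetic energy, so the comparison in the answer checks out for this amplitude.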
{ "domain": "physics.stackexchange", "id": 21893, "tags": "vibrations" }
Tidal effect on an object and length contraction in relativity theory
Question: According to the equivalence principle in general relativity, if an object is in free fall in a gravitational field, the object will not detect any gravitational force on it. From this principle, Einstein deduced that free fall is actually inertial motion. Objects in free fall do not really accelerate. In an inertial frame of reference, bodies (and light) obey Newton's first law, moving at constant velocity in straight lines. However, according to the classical approach: the difference between the force on the top of the object and the force on the bottom will create a stretching effect on the box, and we expect deformation of the box (a tidal effect) because of Newton's law of gravitation. As I understand it, according to general relativity the change of the box cannot be detected by the internal observer. (Assume that there is no friction in space during the free fall.) I believe that the force difference on an object can be huge near a black hole. I wonder: can the person in a box free-falling toward a huge mass feel any effect in the box? Can an outside observer at rest observe a deformation of the box? Can the general relativity formulas calculate that increasing length of the box in a gravitational field even if the internal observer does not detect any change in the box? If the answer is yes, isn't it a contradiction with the length contraction of special relativity, which states that the length will decrease via the formula below? $$L=L_0\sqrt{1-\frac{V^2}{c^2}}$$ where $L_0$ is the rest length of the object. Thanks for answers Answer: There are some qualifications on the equivalence principle that often aren't made clear in casual summaries.
One is that the equivalence principle only works in the limit as the size of the spacetime region where measurements are performed approaches zero (or to use another term, in an "infinitesimal" region of spacetime)--in other words, you'd have to consider a limit where both the difference in height between top and bottom of the box approaches zero, and the time interval over which the experimenter in the box makes measurements approaches zero as well. For example, p. 67 of From Special Relativity to Feynman Diagrams: A Course of Theoretical Particle Physics for Beginners states it as: In the presence of a gravitational field, locally, the physical laws observed in a free falling frame are those of special relativity in the absence of gravity. As explained above, locally means, mathematically, in an infinitesimal neighborhood or, more physically, in a sufficiently small neighborhood of a point such that, up to higher order terms, the approximation (3.5) holds. The previous statement is referred to as the equivalence principle in its strong form or simply strong equivalence principle. Another important qualification is the one alluded to in the comment "up to higher order terms" above--from what I've read the equivalence principle only holds for first-order effects in the Taylor series expansion of whatever physics equations you're looking at, and tidal effects are actually second-order, so the equivalence principle doesn't actually say they should disappear even in the infinitesimal limit. 
This page on the site of physicist John Baez mentions that the equivalence principle says a small patch of spacetime in general relativity behaves like SR Minkowski spacetime "to a first-order approximation", and that the truly "gravitational" effects are considered to be the second-order ones, which remain after you "subtract off the first-order effects by using a freely falling frame of reference" (like a local inertial frame used by an experimenter in an infinitesimal falling elevator): Principle of Equivalence The metric of spacetime induces a Minkowski metric on the tangent spaces. In other words, to a first-order approximation, a small patch of spacetime looks like a small patch of Minkowski spacetime. Freely falling bodies follow geodesics. Gravitation = Curvature A gravitational field due to matter exhibits itself as curvature in spacetime. In other words, once we subtract off the first-order effects by using a freely falling frame of reference, the remaining second-order effects betray the presence of a true gravitational field. And p. 7 of Cosmological Physics likewise says the "important aspects of gravitation" are the second-order ones, and mentions that these are tidal forces: It may seem that we have actually returned to something like the Newtonian viewpoint: gravitation is merely an artifact of looking at things from the 'wrong' point of view. This is not really so; rather, the important aspects of gravitation are not so much to do with first-order effects as second-order tidal forces; these cannot be transformed away and are the true signature of gravitating mass. So if I'm understanding correctly, even in the limit as the volume of the box and time of measurements approach zero, tidal forces would still in principle remain measurable inside the box, and this doesn't contradict the technical formulation of the equivalence principle.
{ "domain": "physics.stackexchange", "id": 16955, "tags": "general-relativity, tidal-effect" }
Conservation of energy in a solenoid
Question: I found a website this afternoon which stated that for a given solenoid, the force it exerts on an iron bar can be modeled as a conservation of energy problem. The page gives a method to find $\Delta u_{solenoid}$. It then says that the force exerted on the bar can be modeled as $$F= \frac{\Delta u_{solenoid}}{\text{length of solenoid}}$$ This is a problem for me because I need the work done on the bar. For a bar of identical length to the solenoid, if I try to utilize \begin{align} W &= |F|\cdot|s| *\text{cos }\theta\\ &= |\frac{\Delta u_{solenoid}}{\text{length of solenoid}}| \cdot|\text{length of solenoid}| *\text{cos }\theta \\ &=|\Delta u_{solenoid}|*\text{cos }\theta \end{align} Then I'm stuck with something dependent on $\theta$. Intuitively, and experimentally, I know that magnets pull on iron which suggests to me that in this scenario $\theta$ is always $0$. (Both poles of a magnet pull iron, etc.) However, this doesn't seem like the "proper" way to do it, and I was wondering if anyone could explain analytically what $\theta$ has to be and why. Or, considering the not-improbable scenario in which I'm approaching this problem all wrong, what would be the correct way to get $W$ from my $\Delta u_{solenoid}$? Edit: I should have explained a bit better. The way that I view this $$ 0=\Delta KE + \Delta u$$ $$\Delta u = -\Delta KE$$ Which suggests to me that when the iron bar enters the solenoid, $KE$ is reduced and negative work is done on the bar. Experimentally, this isn't the case, however, and I'm trying to determine what's going on. I suspect that it has to do with a change in potential energy of my power source, but I'm not familiar enough with electrical and magnetic potentials to say. Answer: I think I've worked out a solution after sketching some things out. 
If we take the iron bar to be motionless and the solenoid to be moving, then the force on the solenoid is $$F_{solenoid}=\frac{\Delta u_{solenoid}}{-L}$$ From Newton's Third Law, we know that there's an antiparallel force of the same magnitude acting on the bar. $$F_{solenoid}=-F_{bar}$$ Now if we consider the solenoid to be stationary and the iron bar to be in motion, then $$W_{bar} = |F_{bar}| \cdot |s| * \text{cos } \theta$$ $$W_{bar} = |-F_{solenoid}| \cdot |L| * \text{cos } 0^{\circ}$$ $$W_{bar} = |\frac{\Delta u_{solenoid}}{L}| \cdot |L|$$ $$W_{bar} = |\Delta u_{solenoid}|$$ Which answers my question. I think I was accidentally confusing everything by treating the $KE$ of my solenoid as the $KE$ of the iron bar. Edit: To satisfy Conservation of Energy: If one has an inductor, there must be a power source somewhere with $E=u_{battery}$. For the solenoid/battery system, which is stationary if the bar is moving: In general: $$W_{other}=\Delta KE_{solenoid} + \Delta u$$ $$-W_{bar} - W_{heating} = \Delta KE_{solenoid} + \Delta u_{field} + \Delta u_{battery}$$ $$-W_{bar} - W_{heating} = (0) + \Delta u_{field} + \Delta u_{battery}$$ and specifically for our situation: $$-|\Delta u_{field}| - W_{heating} =\Delta u_{field} + \Delta u_{battery}$$ $$-\Delta u_{field} -|\Delta u_{field}| - W_{heating} =\Delta u_{battery}$$ And because our $\Delta u_{field}>0$ $$-2\Delta u_{field} - W_{heating} =\Delta u_{battery}$$ $$\Delta u_{battery}= -2\Delta u_{field} - W_{heating} $$ Which shows that our example does indeed follow the Law of Conservation of Energy. We're not getting energy from nothing; it must come from the power source.
{ "domain": "physics.stackexchange", "id": 39023, "tags": "homework-and-exercises, energy-conservation, work, inductance" }
Wavefronts and phase velocity faster than $c$
Question: Let's assume we have parallel wavefronts in a glass of water: and we put an inclined rod on the water surface: for a very small inclination, the velocity Vy is greater or much greater than Vx (Vy means the speed at which the wavefronts' contact point sweeps along the rod). Now let's assume the environment is space and the waves are electromagnetic: would Vx be smaller than c? Would Vy and Vx be equal? I don't think Vy would be greater than c. What do you think? Assume the inclined rod is just a metal rod and the photoelectric effect is intact. Answer: Yes, $v_y$ can be greater than $c$, and in fact it could be as large as you want if you make the angle small enough. However nothing, i.e. no signal, is being transmitted at that velocity, so it doesn't cause any faster-than-light travel issues.
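The geometry behind the answer: if the wavefronts advance at speed $v$ and the rod makes angle $\alpha$ with the wavefronts, the contact point sweeps along the rod at $v/\sin\alpha$, which diverges as $\alpha \to 0$. A quick numeric sketch (angles chosen arbitrarily for illustration):

```python
import math

c = 2.998e8   # speed of the wavefronts (light in vacuum), m/s

def contact_speed(v, alpha_deg):
    # speed of the wavefront/rod intersection point along the rod
    return v / math.sin(math.radians(alpha_deg))

for alpha in (90, 10, 1, 0.1):
    print(f"alpha = {alpha:>5} deg: Vy = {contact_speed(c, alpha):.3e} m/s")
# for any alpha below 90 deg the sweep speed exceeds c, yet no information
# travels along the rod faster than c
```

The sweeping intersection is a pattern, not a thing: each point on the rod is triggered by a different part of the wavefront, so no signal propagates along the rod at $v_y$.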
{ "domain": "physics.stackexchange", "id": 4606, "tags": "electromagnetism, special-relativity, waves, faster-than-light, phase-velocity" }
capturing camera in electric and groovy
Question: Hi, I need to change my code for capturing images from electric to groovy and find that CvBridge has gone. Finding http://answers.ros.org/question/9765/how-to-convert-cvmat-to-sensor_msgsimageptr/?answer=14282#post-id-14282 and http://answers.ros.org/question/60279/sensor_msgscvbridgecvtoimgmsg-in-groovy/ they seem to offer a solution but also raise questions about the use of some variables that are just used without visible declaration. My current code: sensor_msgs::CvBridge cvbridge; image = cvbridge.imgMsgToCv(msgs_im, "bgr8"); Can somebody give an explanation? Thank you Originally posted by hvn on ROS Answers with karma: 72 on 2013-11-21 Post score: 0 Original comments Comment by hvn on 2013-11-21: My point is that in the first answer in that link i and ci are being used without explanation. That makes it hard for me to understand how it exactly happens. If you can tell me, I'd appreciate it. Answer: Your second link would seem to answer your own question ... have you tried it? What didn't work? Originally posted by lindzey with karma: 1780 on 2013-11-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16230, "tags": "ros, ros-groovy, image, ros-electric" }
A Structured FizzBuzz
Question: Warning: wall of text, little bit of code. This question is about as much about introducing the language as it is about whether or not I still know how to write the language.

Introduction

Programmable Logic Controllers (PLCs) are industrial, heavy-duty computers used to control (semi-)automated processes in factories. They can be programmed using a multitude of languages. Up till '93, every brand had its own language. Thanks to the IEC 61131-3, languages are more (but not completely) standardized now. One of the most used of the 5 standard languages described in the earlier mentioned IEC document is "Structured Text" (ST), not to be confused with reStructuredText. ST looks a bit like Pascal and is, usually, quite easy to read. In ST, there's a main program (traditionally called PLC_PRG) which can be accompanied by multiple Function Blocks (FBs). Every function gets its own FB. Variable declaration is separated from variable usage. To program a PLC, you need a programmer. Those are usually packed within an IDE. An ever growing amount of PLCs can be programmed with the standard languages using CODESYS. A typical set-up for a PLC is having a Human Machine Interface (HMI), a PLC, sensors and actuators. Most of those can be simulated and visualized, so no actual hardware is required.

Problem and approach

How FizzBuzz works doesn't need much explanation. Except mine doesn't work quite like that. You see, I had a little trouble getting the text-box to work and by the time I figured it out it already looked just fine. So unlike an ordinary FizzBuzz, mine will always show the number. There will be extra output when the number is divisible by 3, 5 or both (15). All working as intended. To keep it more interesting, I added a couple of extras. A switch to turn the program off, for example. After all, it's a PLC. It might be driving a chainsaw instead of a couple of indicators next.
Everything you build in hardware needs a switch to turn it off. The number indicator is limited to 90 or 255. Why? Because they're divisible by 15 and 100 isn't. It just looks better. There's also a reset button (self-explanatory) and an automated reset at 90. The automated reset can be toggled. If the reset isn't used, the count will go till 255 and overflow to 0. If the indicator is set to a scale end of 90, 91-255 will be indicated as 90. The FizzBuzzer has its own FB. I've considered putting the IF statements within it in their own FB as well, passing 3 and 5 as arguments from the FizzBuzzer, but that got surprisingly messy. Must've done something wrong. I'm also unable to make CONST variables out of fizz_num and buzz_num; I kind of forgot the proper syntax for that. The simulation is controlled by mouse. Works like a train. Code PLC_PRG (PRG) PROGRAM PLC_PRG VAR state : BOOL := FALSE; reset : BOOL := FALSE; toggle_reset: BOOL := FALSE; i: USINT; reset_above : USINT := 90; fizzbuzz: FIZZBUZZ; fizz: BOOL; buzz: BOOL; END_VAR IF state THEN i := i+1; fizzbuzz(in:=i, fizz=>fizz, buzz=>buzz); END_IF; IF reset THEN i := 0; END_IF IF toggle_reset AND i >= reset_above THEN i := 0; END_IF FIZZBUZZ (FB) FUNCTION_BLOCK FIZZBUZZ VAR_INPUT in : USINT; END_VAR VAR_OUTPUT fizz : BOOL := FALSE; buzz : BOOL := FALSE; END_VAR VAR fizz_num : USINT := 3; buzz_num : USINT := 5; END_VAR IF in MOD fizz_num = 0 THEN fizz := TRUE; ELSE fizz := FALSE; END_IF; IF in MOD buzz_num = 0 THEN buzz := TRUE; ELSE buzz := FALSE; END_IF Project overview Result To 90: To 255: From left to right, top to bottom, including the variables they're linked with: Index indicator (PLC_PRG.i) Fizz indicator (PLC_PRG.fizz) Buzz indicator (PLC_PRG.buzz) Reset button (PLC_PRG.reset) State switch (PLC_PRG.state) Automatic reset switch (PLC_PRG.toggle_reset) Whether or not that naming is logical depends on how you look at it and what you're used to, I guess. The State switch probably needs a better name. 
Both animations use a MainTask interval of 250 ms and a VISU_TASK interval of 100 ms. Footnote This particular FizzBuzz is written in ST using CODESYS V3.5 SP12. Answer: Thanks to Lundin for pointing out in the comments that the button bounce was improperly handled. This led to 'missed triggers', the system ignoring a button being pushed because it wasn't looking at that particular moment. Quite an unfortunate bug. Quite easily fixed as well. There are a couple of methods to solve this, but in this case the easiest is by using a (somewhat ridiculous looking) latch and a dedicated handler for every input. Since the problem is most pressing for the manual reset button (the behaviour of switches won't change much if the same technique is used on them as well), I'll show how that's done. A list of global variable names is introduced, which I'll name GVL. A new GVL.reset_triggered will be added which will be reset at the end of each PLC_PRG run. The top of GVL will carry an attribute 'qualified_only' pragma to force the programs only to call the global variables by their qualified name inside the list (GVL.reset is valid while reset is not valid with this pragma, keeps it clean). Afterwards, a task handler is created for the manual reset and a few modifications to PLC_PRG are required to make it all fit again. Note how at the end of PLC_PRG the 'reset flag' is reset. 
GVL {attribute 'qualified_only'} VAR_GLOBAL reset_triggered : BOOL := FALSE; fizz: BOOL; buzz: BOOL; END_VAR PLC_PRG (PRG) PROGRAM PLC_PRG VAR state : BOOL := FALSE; toggle_reset: BOOL := FALSE; i: USINT; reset_above : USINT := 90; fizzbuzz: FIZZBUZZ; END_VAR IF state THEN i := i+1; fizzbuzz(in:=i, fizz=>GVL.fizz, buzz=>GVL.buzz); END_IF IF GVL.reset_triggered THEN i := 0; END_IF IF toggle_reset AND i >= reset_above THEN i := 0; END_IF IF GVL.reset_triggered THEN GVL.reset_triggered := FALSE; END_IF MANUAL_RESET (PRG) PROGRAM MANUAL_RESET VAR reset : BOOL := FALSE; END_VAR IF reset THEN GVL.reset_triggered := TRUE; GVL.fizz := FALSE; GVL.buzz := FALSE; END_IF FIZZBUZZ (FB) has not changed. Now, the manual reset is set-prioritized (not necessarily set-dominant) latched. With the new task running every 10 ms, all is fine again. The PLC_PRG.fizz and PLC_PRG.buzz moved to GVL.fizz and GVL.buzz respectively, and now they can be reset by the MANUAL_RESET task to fix another bug (the lights kept burning if they were on before the reset was hit and PLC_PRG.state was no longer active). Just to be pedantic, the original code had a couple of semi-colons inconsistently placed behind END_IF statements. Those were not necessary. Is it clean code now? Not by a long shot. For starters, there is no proper separation of concerns. There should be a clear FizzBuzzer block, a clear HMI block and a clear LetsTestThisDeviceUnderTest block. That's insufficiently separated at the moment. Besides, I'm not convinced this is much prettier. Less buggy, by the looks of it. But not pretty.
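For readers who don't write ST, the FIZZBUZZ block and the USINT counter above are easy to mirror in a general-purpose language. A minimal Python sketch (names chosen to match the FB; nothing PLC-specific such as tasks or latching is modelled):

```python
def fizzbuzz(n, fizz_num=3, buzz_num=5):
    # Mirror of the FIZZBUZZ function block: two BOOL outputs,
    # one per modulo test.
    return n % fizz_num == 0, n % buzz_num == 0

def next_index(i):
    # USINT counter behaviour in PLC_PRG: 0..255, overflowing back to 0.
    return (i + 1) % 256

print(fizzbuzz(15))      # (True, True) -- both indicators light up
print(next_index(255))   # 0 -- the overflow mentioned in the question
```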
{ "domain": "codereview.stackexchange", "id": 33482, "tags": "programming-challenge, fizzbuzz, embedded, structured-text" }
Surface energy in a solid cylinder
Question: I have been trying this for over an hour and I still can't proceed: $$F = 2 \pi R T$$ $$W = F(\text{displacement})$$ $$W = 2 \pi R T (\text{displacement})$$ $$W = 2 \pi R T \Delta x \ \ \ \ \text{or} \ \ \ \ W = 2 \pi R T \, dx $$ I couldn't get beyond this, as I am struggling to think about the limits: should they be the height of the cylinder or the circumference? I also wanted to know if I could use the standard formula for a 2D cylinder in this case, $W = T \Delta x$. Answer: Let $f$ be the force per unit length, and $T(r)$ the tension (force per unit area): $$ F_{total} = f \times L = 2\pi r L T(r). $$ Pushing the radius from $r$ to $r+dr$, the amount of work done is: $$ dW = F_{total} \times dr = f(r) L \, dr = 2\pi r L T(r) \, dr. $$ The work per unit length is $w = W / L$: $$ dw = f(r) \, dr = 2 \pi T(r) \, r \, dr. $$ Therefore, the work per unit length from $r_1 \to r_2$ is: $$ w = \int dw = 2 \pi \int_{r_1}^{r_2} T(r) \, r \, dr. $$
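The final integral is easy to sanity-check numerically. A short Python sketch (illustrative values; a simple midpoint rule stands in for any quadrature) compares it against the closed form $w = \pi T_0 (r_2^2 - r_1^2)$ that holds when the tension is a constant $T_0$:

```python
import math

def work_per_unit_length(T, r1, r2, n=10_000):
    # Midpoint-rule evaluation of w = 2*pi * integral_{r1}^{r2} T(r) * r dr
    dr = (r2 - r1) / n
    total = 0.0
    for i in range(n):
        r = r1 + (i + 0.5) * dr
        total += T(r) * r * dr
    return 2 * math.pi * total

T0 = 0.05            # constant tension (force per unit area), illustrative value
r1, r2 = 0.01, 0.02  # inner and outer radius in metres, illustrative
w_numeric = work_per_unit_length(lambda r: T0, r1, r2)
w_exact = math.pi * T0 * (r2**2 - r1**2)
print(w_numeric, w_exact)   # the two agree to floating-point precision
```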
{ "domain": "physics.stackexchange", "id": 75796, "tags": "homework-and-exercises, surface-tension" }
Reading a config file
Question: I am relatively new to Perl and would like to check that the code I am writing doesn't have any major flaws in it or any bad practices. This is a fairly simple sub but one that gets used often. Seems to work OK. Simply reads a text file that has parameters in it set by the user. Config.txt #cucm username and password of user associated with phones username = bob password = bobPass #subnet details. Use /30 to start with to test 2 phones before entire subnet subnet = 10.64.97.216/32 #HFS web server details IP and folder webserver = http://10.64.164.230/Desktop/ #image file names. Full and Thumbnail size imgFullSize = screen_F.png imgThumSize = screen_S.png Perl sub that reads the file: sub readConfigFile { print $lfh "Reading Config.txt file\n"; #path to the config file my $configF = "config.txt"; if($DEBUG == 1){print "opening config File\n";} open(my $configFhandle, '<', $configF) or die "Unable to open file, $!"; my @arrFileData=<$configFhandle>; #Slurp! #go through each of the lines in the file for (my $i = 0; $i <= $#arrFileData; ++$i) { local $_ = $arrFileData[$i]; if($DEBUG == 1){print "Reading line:$_";} #check to see if the line is a comment... if it is skip it if($_ =~ /^#/) { if($DEBUG == 1){print "Line is comment. Skipping:$_";} next; } #split the line into the key and value my @dataKV = split /=/, $_; #check to see that Key and Value exist ... 
if not skip it my $arrSize = @dataKV; if($arrSize <= 1) { if($DEBUG == 1){print "Read Config: Error: No Key AND Value found:$_";} next; } #remove all leading and end white spaces around elements s{^\s+|\s+$}{}g foreach @dataKV; for (my $x = 0; $x <= $#dataKV; $x++) { if($DEBUG == 1){print "Key:$dataKV[$x]\n";} given($dataKV[$x]) { when (/^username/) { $usr = $dataKV[++$x]; if($DEBUG == 1){print "Value Username:$dataKV[$x]\n";} next; } when (/^password/) { $pass = $dataKV[++$x]; if($DEBUG == 1){print "Value password:$dataKV[$x]\n";} next; } when (/^subnet/) { @arrSubnets = $dataKV[++$x]; if($DEBUG == 1){print "Value Subnet:$dataKV[$x]\n";} next; } when (/^webserver/) { $webserver = $dataKV[++$x]; if($DEBUG == 1){print "Value webserver:$dataKV[$x]\n";} next; } when (/^imgFullSize/) { $imgFull = $dataKV[++$x]; if($DEBUG == 1){print "Value imgFullSize:$dataKV[$x]\n";} next } when (/^imgThumSize/) { $imgThum = $dataKV[++$x]; if($DEBUG == 1){print "Value imgThumSize:$dataKV[$x]\n";} next } }#end of switch statement }#end of for loop going through the key and value line }#end of For loop going though the lines in the config print "*********************************\n"; print "*Config imported\n"; print "*********************************\n"; print "*username :$usr\n"; print "*password :$pass\n"; print "*subnet :@arrSubnets\n"; print "*webserver :$webserver\n"; print "*imgFullSize :$imgFull\n"; print "*imgThumSize :$imgThum\n"; print "*********************************\n"; print "Is this above correct?[Y]:"; my $confirm = <>; if($confirm !~ /^Y/) { print $lfh "Config file needs to be changed. Closing script down.\n"; close($lfh) or warn "Unable to close the file handle: $!"; close($configFhandle) or warn "Unable to close the file handle: $!"; print "Script will now terminate. Please amend \"Config.txt\" file and re-run script. 
Press any key to terminate"; <>; exit; } #close the file close($configFhandle) or warn "Unable to close the file handle: $!"; print $lfh "Y pressed by User. Config all good.\n"; } Any advice on how to improve this would be great. Answer: Well, here is how I'd write that subroutine. I thoroughly commented the code to explain my choices. use strict; use warnings; use autodie; # automatic error handling for "open" and other builtins use IO::Handle; # object oriented syntax for filehandles # DEBUG is a compile-time constant initialized by an environment variable. # If it is false, the debug statements will get optimized away. use constant DEBUG => $ENV{DEBUG}; # the log file handle and debug file handle # Here, I opened both to STDERR open my $lfh, '>&', STDERR; open my $dfh, '>&', STDERR; # close the $lfh file handle regardless of how the script terminates. END { no autodie; close $lfh or warn "Unable to close the log file handle: $!"; } # the main part of our script { my $filename = "config.txt"; # "read_config_file" is a function that takes one argument and returns one value. my $config = read_config_file($filename); # if the check fails, exit this script with error code "1". # the default error code (zero) is considered a success. ui_check_config_ok(\*STDOUT, $config, $filename) or exit 1; # now we can do stuff with the $config, e.g. my $usr = $config->{username}; } # Name your variables in snake_case, not with camelCase. # This function *only* parses the config file. # It does not interact with the user. # Such separation of concern makes your code easier to maintain: # every function should do one thing only. sub read_config_file { # Subroutines can take an argument list. 
# We unpack it like this: my ($filename) = @_; $lfh->say("Reading config file $filename"); DEBUG and $dfh->say("Opening config file $filename"); my %known_keys = map { $_ => 1 } qw/ username password subnet webserver imgFullSize imgThumSize /; # we will store the config in this hash my %config; open my $fh, "<", $filename; # don't slurp the file, we loop over it line by line LINE: while (<$fh>) { # trim the line s{\A\s+}{}; s{\s+\z}{}; DEBUG and $dfh->say("Reading line: $_"); if (/^#/) { DEBUG and $dfh->say("Line is comment, skipping."); next LINE; } # * We use the /x flag on regexes to be able to structure the regex with spaces # they won't match, they're just decoration # * instead of assigning to an array, we assign to a list of scalars. # If the "split" fails, the last one should be "undef" # * We only accept one "=" per line, and produce 2 fragments max. So # foo = bar = baz # will produce $key="foo", $value="bar = baz". my ($key, $value) = split m{\s* [=] \s*}x, $_, 2; if (not defined $value) { DEBUG and $dfh->say("Ignoring config error: no key = value pair found on line: $_"); next LINE; } DEBUG and $dfh->say("Value $key:$value"); $config{$key} = $value; # a sanity check – you don't do anything on unknown keys if (not exists $known_keys{$key}) { $lfh->say("The config provided the unknown key $key, ignoring.") } } # another sanity check – did the user forget to specify some properties? for my $key (keys %known_keys) { if (not exists $config{$key}) { $lfh->say("The config didn't provide a key $key, ignoring.") } } return \%config; } # another subroutine that provides the user interface to ask whether the data is correct # It will return a truth value whether the config was OK. 
sub ui_check_config_ok { # Here, we see an argument list with three values my ($ui_fh, $config, $filename) = @_; $ui_fh->say($_) for "*********************************", "*Config imported", "*********************************", "*username :$config->{username}", "*password :$config->{password}", "*subnet :$config->{subnet}", "*webserver :$config->{webserver}", "*imgFullSize :$config->{imgFullSize}", "*imgThumSize :$config->{imgThumSize}", "*********************************"; $ui_fh->print("Is this above correct?[Y]:"); if (<STDIN> =~ /^\s*Y/) { $lfh->say("The user pressed Y. Config is good."); return 1; } $lfh->say("Config file contains false information, please correct."); $lfh->say("Terminating script"); $ui_fh->say("The script will now terminate."); $ui_fh->say("Please amend the file $filename and rerun the script."); $ui_fh->say("Press any key to terminate..."); <>; return 0; } Note how indentation made this code much easier to read than the dump you provided. Please use indentation! I actually do not generally use IO::Handle, but it makes stuff more obvious when having more than one file handle. The autodie module is a bit inflexible, but I highly encourage it when you are still new with Perl, as it provides sensible defaults. You do not manually have to close file handles if they are variables declared with my – they will get closed automatically, and it's highly unlikely that closing a file will fail (file handles are not only used for physical files, but also for other things like pipes, where the close can fail).
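The parsing strategy in read_config_file (skip comments, split on the first "=", trim both sides, flag unknown keys) is language-agnostic. For comparison, a rough Python sketch of the same logic (the sample data is made up for illustration):

```python
KNOWN_KEYS = {"username", "password", "subnet",
              "webserver", "imgFullSize", "imgThumSize"}

def parse_config(lines):
    """Skip comments/blank lines, split on the first '=', trim both sides."""
    config = {}
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                 # comment or blank line
        key, sep, value = line.partition("=")
        if not sep:
            continue                 # no '=': skip, as the Perl version does
        key, value = key.strip(), value.strip()
        if key not in KNOWN_KEYS:
            print(f"unknown key {key!r}")   # sanity check; still stored
        config[key] = value
    return config

sample = [
    "#cucm username and password",
    "username = bob",
    "subnet = 10.64.97.216/32",
    "not a key-value line",
]
print(parse_config(sample))
```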
{ "domain": "codereview.stackexchange", "id": 6564, "tags": "beginner, perl, file" }
Professor and textbook disagree on frequency at which a pendulum oscillates
Question: We had a guest lecturer today who told us that the frequency at which a pendulum oscillates is $\omega=\sqrt{mgL/I}$. However, the textbook states that it is $\omega=\sqrt{g/L}$. Why the discrepancy? Answer: It depends on the kind of pendulum. The ideal and simplest case is a point mass on a massless string of exactly constant length $L$. That gives $\omega = \sqrt{g/L}$; this is the mathematical pendulum. A physical pendulum takes into account that the mass is in an extended rigid body. See http://hyperphysics.phy-astr.gsu.edu/hbase/pendp.html for the derivation.
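To make the two formulas concrete, here is a quick Python check (values are illustrative): the physical-pendulum expression reduces to the textbook's $\sqrt{g/L}$ when all the mass sits at distance $L$ from the pivot, and differs for an extended body such as a uniform rod.

```python
import math

g = 9.81   # m/s^2
L = 1.0    # pendulum length in metres (illustrative)
m = 2.0    # mass in kg (it cancels out below)

# Textbook (mathematical) pendulum: point mass on a massless string.
omega_simple = math.sqrt(g / L)

# Lecturer's (physical) pendulum, omega = sqrt(m*g*d / I), with d the
# pivot-to-centre-of-mass distance. For a uniform rod pivoted at one end:
d, I = L / 2, m * L**2 / 3
omega_rod = math.sqrt(m * g * d / I)
print(omega_simple, omega_rod)   # ~3.13 vs ~3.84 rad/s

# The formulas coincide when all the mass sits at distance L (I = m*L**2):
print(math.isclose(math.sqrt(m * g * L / (m * L**2)), omega_simple))  # True
```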
{ "domain": "physics.stackexchange", "id": 53989, "tags": "newtonian-mechanics, oscillators" }
Why are osmoles not considered SI units?
Question: The Wikipedia article for osmoles is pretty clear on what they are and how they are used. Right off the bat, though, it states that the osmole is not an SI unit, and indeed the usual lists of derived SI units don't mention it. This seems weird to me - it looks like a pretty reasonable thing to be talking about and it looks reasonably uniquely defined for each solute to me. Is there some fundamental reason why this unit did not make the cut? Is it down to historical accident? Or did the unit stop being useful by the time the SI was designed? Or is it simply not that well defined, changing with respect to exactly what situation one is talking about? Answer: The unit osmole (symbol: $\mathrm{osmol}$) is a coherent derived unit with a special name and symbol like the similar unit equivalent (symbol: $\mathrm{eq}$). Both units have not been accepted for use with the International System of Units (SI). The main reason why these units have not been adopted is probably that such use is neither necessary nor convenient. All concerned quantities have the same dimension as the amount of substance $n$ $(\dim n = \mathsf{N})$. Therefore, all values can be expressed in terms of the base unit mole (symbol: $\mathrm{mol}$). Note that, according to the definition of the mole, The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12; its symbol is “mol”. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. Thus, the amount of substance can always be expressed in terms of the unit mole, regardless of the considered elementary entities. 
For example, amount of mercury atoms: $n(\ce{Hg})=1\ \mathrm{mol}$ amount of hydrogen molecules: $n(\ce{H2})=1\ \mathrm{mol}$ amount of chloride ions: $n(\ce{Cl^-})=1\ \mathrm{mol}$ amount of electrons: $n(\ce{e-})=1\ \mathrm{mol}$ amount of equivalent entities $\ce{1/2H2SO4}$ corresponding to the transfer of a $\ce{H+}$ ion in a neutralization reaction: $n(\ce{1/2H2SO4})=1\ \mathrm{mol}$ amount of equivalent entities $\ce{1/5MnO4-}$ corresponding to the transfer of an electron in a redox reaction: $n(\ce{1/5MnO4-})=1\ \mathrm{mol}$ amount of solute that contributes to the osmotic pressure of a solution: $n_\text{solute}=1\ \mathrm{mol}$ Nevertheless, certain coherent derived units with special names and symbols have been adopted for use with the SI. The main reason is convenience. The special names and symbols are simply a compact form for the expression of combinations of base units that are used frequently, for example “one kilogram metre squared per second squared” may be written “one joule” $(1\ \mathrm{kg\ m^2\ s^{-2}}=1\ \mathrm J)$. However, this reason does not apply to the units osmole and equivalent, since “osmole” and “equivalent” are not shorter than “mole” and $1\ \mathrm{osmol}$ and $1\ \mathrm{eq}$ are not more compact than $1\ \mathrm{mol}$. In many cases, coherent derived units with special names and symbols may also serve to remind the reader of the quantity involved. For example, a value expressed in the unit newton may help to remind the reader that the considered quantity is a force. However, the unit symbol should not be used to provide specific information about the quantity, and should never be the sole source of information on the quantity. Therefore, even if the special name “osmole” or “equivalent” is used, the special nature of the considered quantity would still have to be explained in the text. There are, however, a few important exceptions. 
For the quantity plane angle, the unit one is given the special name “radian” (symbol: rad), and for the quantity solid angle, the unit one is given the special name “steradian” (symbol sr). For the quantity frequency, the unit reciprocal second is given the special name “hertz” (symbol: Hz), and for the quantity activity, the unit reciprocal second is given the special name “becquerel” (symbol: Bq) in order to emphasize the different nature of the quantities. For the quantity absorbed dose $D$, the unit joule per kilogram is given the special name “gray” (symbol: Gy), and for the quantity equivalent dose $H$, the unit joule per kilogram is given the special name “sievert” (symbol: Sv) to avoid the grave risks of confusion between these quantities (e.g. in therapeutic work). Apparently, the concept of “osmotic concentration” (formerly known as “osmolarity”) is simply not important enough to justify a similar exception for the unit osmole.
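As a small worked example of why the base unit mole suffices here (numbers are illustrative, and complete dissociation is assumed): the osmotic concentration of roughly physiological saline can be stated directly in mol/L of osmotically active particles.

```python
# Osmotic concentration of ~0.9 % (w/v) NaCl solution, expressed directly
# in mol/L of dissolved particles -- no separate "osmole" unit needed.
mass_g_per_L = 9.0               # 0.9 g per 100 mL
molar_mass_NaCl = 58.44          # g/mol
particles_per_formula_unit = 2   # Na+ and Cl-, assuming complete dissociation

molar_conc = mass_g_per_L / molar_mass_NaCl             # mol/L of NaCl
osmotic_conc = molar_conc * particles_per_formula_unit  # mol/L of particles
print(round(molar_conc, 3), round(osmotic_conc, 3))     # 0.154 0.308
```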
{ "domain": "chemistry.stackexchange", "id": 5869, "tags": "mole, units" }
Using MATLAB's Spectrogram Function for Analysis
Question: In the documentation - spectrogram(). Question 1) s = spectrogram(signal) and spectrogram(signal) are two commands to plot the spectrogram. However, the variable s is complex valued. I am unable to understand which output of the spectrogram is used to generate the image plot? Question 2) How to determine the best values of the parameters window and noverlap? Should noverlap be 50% of the signal length (number of elements in the time series) or 90%, etc.? And what does it mean if it is zero? My dataset has sampling time = 1 sec. I remember reading somewhere that the window should be at least roughly twice as long as the period of the lowest frequency. So, for my case is w=2 since frequency = 1? I was thinking of using pspectrum(signal,'spectrogram'), which outputs the spectrogram, and using the output values as inputs to the spectrogram() function. But again, I don't know which output values from pspectrum can be used, if at all that is possible. Answer: See the MATLAB documentation: s = spectrogram(x) returns the short-time Fourier transform of the input signal, x. Each column of s contains an estimate of the short-term, time-localized frequency content of x. Namely, each column of the matrix s is the result of an fft() on some samples of the input. So the plot you see is the magnitude of the columns of s. Spectrogram is about analysis of non-stationary signals: something is changing over time, which means it makes no sense to look at the DFT of all samples. The window length is the time over which you think the signal has the same properties. The overlap time should be similar to the time over which the signal is changing. Something like the time of fade out / fade in, the transient length. 
Example The following code will recreate the figure from the function (up to the colorbar and the units of the axes): t = 0:0.001:2; x = chirp(t, 100, 1, 200, 'quadratic'); figure(); spectrogram(x, 128, 120, 128, 1e3); s = spectrogram(x, 128, 120, 128, 1e3); figure(); hA = axes(); imagesc(20 * log10(abs(s).')); set(hA, 'YDir', 'normal');
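The same computation can be sketched outside MATLAB. A NumPy sketch (a pure 100 Hz tone stands in for the chirp; window length and overlap mirror the MATLAB call) shows that each column of s is just the DFT of one windowed, overlapping segment, and that the image is its magnitude:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 100 * t)   # a 100 Hz tone as a stand-in for chirp()

nwin, noverlap = 128, 120         # mirrors spectrogram(x, 128, 120, 128, 1e3)
hop = nwin - noverlap
window = np.hanning(nwin)

# Each column of s is the DFT of one windowed segment -- this is all
# spectrogram() computes before plotting 20*log10(abs(s)).
starts = range(0, len(x) - nwin + 1, hop)
s = np.column_stack([np.fft.rfft(window * x[i:i + nwin]) for i in starts])

print(s.dtype)   # complex128 -- the image plot shows abs(s), not s itself
freqs = np.fft.rfftfreq(nwin, d=1 / fs)
peak = freqs[np.abs(s[:, s.shape[1] // 2]).argmax()]
print(peak)      # near 100 Hz, within one frequency bin
```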
{ "domain": "dsp.stackexchange", "id": 9337, "tags": "matlab, fft, discrete-signals, frequency-spectrum, spectrogram" }
How to use the functions of Navigation Stack separately?
Question: Hi all, I am interested in navigation on mobile robots. Following this tutorial I have successfully simulated the mobile robot running on rviz by giving a 2D Nav Goal. The rqt_graph is as attached. However, I want to test and learn how to use the functions inside the move_base package separately. I have no idea how to get the rqt_graph inside the move_base. More specifically, if I want to test the global planner (or local planner) alone, which information (topics) should I give? I am trying to use rosnode info /node_name to guess which topics I may receive and which to publish, but it doesn't seem to be a good way. My launch file is <launch> <master auto="start"/> <!-- Run the map server --> <arg name="map_file" default="$(find robot)/yaml/map.yaml"/> <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" /> <node pkg="global_planner" type="planner" respawn="false" name="planner" output="screen" /> <node pkg="costmap_2d" type="costmap_2d_node" respawn="false" name="costmap_2d_node" /> <node name="rviz" pkg="rviz" type="rviz" args="-d $(find robot)/rviz/config.rviz"/> </launch> I think there is still some work to do, like remapping the topic names. Could anyone give me some hints? Originally posted by kai1006 on ROS Answers with karma: 11 on 2016-04-15 Post score: 1 Original comments Comment by Procópio on 2016-04-27: Please, create a new question from your revised one. The way it looks now is you ask about planners (which has been answered) and then creates another question about costmaps, which is answered by yourself. Comment by kai1006 on 2016-04-27: sorry, thanks for your advice. Answer: Afaik, move_base always works as a single ROS node, so you cannot do such a thing. The internal components are of two types: plugins loaded at startup (but cannot be changed at runtime, I think) such as the local or global planners, recovery behaviors, etc. fixed components such as the global and local costmaps. 
But all the communications within move_base are function calls, no ROS messages at all. That said, you can instantiate any of these components in your own nodes, for example create a costmap for your own navigation. See Component API section at the bottom of move_base wiki Originally posted by jorge with karma: 2284 on 2016-04-15 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by spmaniato on 2016-04-15: jorge, I think you meant to link to http://wiki.ros.org/move_base or http://wiki.ros.org/costmap_2d Comment by jorge on 2016-04-15: exactly, thanks! (fixed) Comment by kai1006 on 2016-04-16: Thanks for your reply. Sorry for not replying immediately. I spend some time reread the costmap_2d. Comment by kai1006 on 2016-04-16: In fact, I don’t intend to use the global_planner package alone. My intention is to test every function properly. So if I want to test the global planner, I only need to subscribe the /move_base/NavfnROS/plan and close the marking and clearing function right?
{ "domain": "robotics.stackexchange", "id": 24382, "tags": "navigation, costmap-2d, base-local-planner" }
Chart showing the Percentage of Answered CR questions
Question: Every day Duga reports the number of unanswered questions and percentage of answered questions (after she calls the API endpoint /info) in The 2nd Monitor (so we can keep the Zombies at bay). I created a script to read the messages by her in the past few months and plot the percentage of answered questions on a line chart. A demonstration of it can be seen in this playground example. What would you change? Obviously there are other techniques for fetching and parsing the data... but is this approach acceptable? <?php $dd = new DomDocument(); $internalErrors = libxml_use_internal_errors(true); $html = file_get_contents('https://chat.stackexchange.com/search?q=%22RELOAD!+There+are%22&user=125580&room=8595&pagesize=150'); $data = []; $labels = []; if ($html) { $dd->loadHtml($html); $xml = simplexml_import_dom($dd); $messages = $xml->xpath('//div[@class="content"]'); $timestamps = $xml->xpath('//div[@class="timestamp"]'); if (count($messages) == count($timestamps)) { foreach(array_reverse($timestamps, true) as $index => $timestamp) { preg_match('#(\d{2}.\d{4})#',(string)$messages[$index], $matches); if (count($matches) > 1) { $labels[] = (string)$timestamp; $data[] = $matches[1]; } } } ?> <html> <head> <script src="//www.chartjs.org/dist/2.7.2/Chart.bundle.js"></script> <script src="//www.chartjs.org/samples/latest/utils.js"></script> </head> <body> <div style="width:75%;"> <canvas id="canvas"></canvas> </div> <script type="text/javascript"> Chart.defaults.global.legend.display = false; const config = { type: 'line', data: { labels: <?=json_encode($labels);?>, datasets: [{ label: 'Unanswered Percentage', backgroundColor: window.chartColors.red, borderColor: window.chartColors.red, data: <?=json_encode($data);?>, fill: false }] }, options: { title: { display: true, text: 'Answered percentage in recent months' }, scales: { xAxes: [{ display: true, scaleLabel: { display: true, labelString: 'Date' } }], yAxes: [{ display: true, scaleLabel: { display: true, 
labelString: 'Percentage' } }] } } }; document.addEventListener('DOMContentLoaded', _ => { var ctx = document.getElementById('canvas').getContext('2d'); window.myLine = new Chart(ctx, config); }); </script> </body> </html> <?php } else { echo 'Unable to fetch messages'; } ?> Sample output: Chart.defaults.global.legend.display = false; const config = { type: 'line', data: { labels: ["Mar 11 12:00 AM", "Mar 12 12:00 AM", "Mar 13 12:00 AM", "Mar 14 12:00 AM", "Mar 15 12:00 AM", "Mar 16 12:00 AM", "Mar 17 12:00 AM", "Mar 18 12:00 AM", "Mar 19 12:00 AM", "Mar 20 12:00 AM", "Mar 21 12:00 AM", "Mar 22 12:00 AM", "Mar 23 12:00 AM", "Mar 24 12:00 AM", "Mar 25 12:00 AM", "Mar 26 12:00 AM", "Mar 27 12:00 AM", "Mar 28 12:00 AM", "Mar 29 12:00 AM", "Mar 30 12:00 AM", "Mar 31 12:00 AM", "Apr 1 12:00 AM", "Apr 2 12:00 AM", "Apr 3 12:00 AM", "Apr 4 12:00 AM", "Apr 5 12:00 AM", "Apr 6 12:00 AM", "Apr 7 12:00 AM", "Apr 8 12:00 AM", "Apr 9 12:00 AM", "Apr 10 12:00 AM", "Apr 11 12:00 AM", "Apr 12 12:00 AM", "Apr 13 12:00 AM", "Apr 14 12:00 AM", "Apr 15 12:00 AM", "Apr 16 12:00 AM", "Apr 17 12:00 AM", "Apr 18 12:00 AM", "Apr 19 12:00 AM", "Apr 20 12:00 AM", "Apr 22 12:00 AM", "Apr 23 12:00 AM", "Apr 24 12:00 AM", "Apr 25 12:00 AM", "Apr 26 12:00 AM", "Apr 27 12:00 AM", "Apr 28 12:00 AM", "Apr 29 12:00 AM", "Apr 30 12:00 AM", "May 1 12:00 AM", "May 2 12:00 AM", "May 3 12:00 AM", "May 4 12:00 AM", "May 5 12:00 AM", "May 6 12:00 AM", "May 7 12:00 AM", "May 8 12:00 AM", "May 9 12:00 AM", "May 10 12:00 AM", "May 11 12:00 AM", "May 12 12:00 AM", "May 13 12:00 AM", "May 14 12:00 AM", "May 15 12:00 AM", "May 16 12:00 AM", "May 17 12:00 AM", "May 18 12:00 AM", "May 19 12:00 AM", "May 20 12:00 AM", "May 21 12:00 AM", "May 22 12:00 AM", "May 23 12:00 AM", "May 24 12:00 AM", "May 25 12:00 AM", "May 26 12:00 AM", "May 27 12:00 AM", "May 28 12:00 AM", "May 29 12:00 AM", "May 30 12:00 AM", "May 31 12:00 AM", "Jun 1 12:00 AM", "Jun 2 12:00 AM", "Jun 3 12:00 AM", "Jun 4 12:00 AM", "Jun 5 12:00 AM", 
"Jun 6 12:00 AM", "Jun 7 12:00 AM", "Jun 8 12:00 AM", "Jun 9 12:00 AM", "Jun 10 12:00 AM", "Jun 11 12:00 AM", "Jun 12 12:00 AM", "Jun 13 12:00 AM", "Jun 14 12:00 AM", "Fri 12:00 AM", "Sat 12:00 AM", "Sun 12:00 AM", "Mon 12:00 AM", "yst 12:00 AM", "12:00 AM"], datasets: [{ label: 'Answered Percentage', backgroundColor: window.chartColors.red, borderColor: window.chartColors.red, data: ["90.0229", "90.0027", "89.9802", "89.9748", "89.9868", "89.9923", "89.9685", "89.9861", "90.0081", "89.9973", "89.9849", "90.0006", "89.9790", "89.9817", "89.9911", "89.9979", "90.0127", "90.0040", "90.0019", "90.0069", "89.9896", "89.9731", "89.9829", "89.9804", "89.9781", "89.9643", "89.9741", "89.9783", "89.9808", "89.9760", "89.9895", "90.0025", "90.0084", "89.9738", "89.9727", "90.0107", "90.0017", "90.0162", "90.0069", "90.0036", "90.0080", "90.0120", "90.0086", "90.0099", "90.0004", "90.0032", "89.9888", "89.9899", "89.9688", "89.9671", "89.9620", "89.9778", "89.9668", "89.9790", "89.9968", "90.0233", "90.0313", "90.0419", "90.0261", "90.0170", "90.0119", "90.0255", "90.0359", "90.0234", "90.0302", "90.0006", "90.0009", "89.9923", "89.9883", "90.0026", "90.0047", "89.9938", "90.0006", "89.9940", "89.9750", "89.9927", "90.0197", "90.0032", "90.0248", "90.0182", "90.0248", "90.0349", "90.0272", "90.0409", "90.0289", "90.0345", "90.0425", "90.0421", "90.0376", "90.0264", "90.0095", "89.9877", "89.9987", "89.9850", "89.9815", "89.9815", "89.9681", "89.9765", "89.9815", "89.9830", "89.9776"], fill: false }] }, options: { title: { display: true, text: 'Answered percentage in recent months' }, scales: { xAxes: [{ display: true, scaleLabel: { display: true, labelString: 'Date' } }], yAxes: [{ display: true, scaleLabel: { display: true, labelString: 'Percentage' } }] } } }; document.addEventListener('DOMContentLoaded', _ => { var ctx = document.getElementById('canvas').getContext('2d'); window.myLine = new Chart(ctx, config); }); <script 
src="//www.chartjs.org/dist/2.7.2/Chart.bundle.js"></script> <script src="//www.chartjs.org/samples/latest/utils.js"></script> <div style="width:75%;"> <div class="chartjs-size-monitor" style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px; overflow: hidden; pointer-events: none; visibility: hidden; z-index: -1;"> <div class="chartjs-size-monitor-expand" style="position:absolute;left:0;top:0;right:0;bottom:0;overflow:hidden;pointer-events:none;visibility:hidden;z-index:-1;"> <div style="position:absolute;width:1000000px;height:1000000px;left:0;top:0"></div> </div> <div class="chartjs-size-monitor-shrink" style="position:absolute;left:0;top:0;right:0;bottom:0;overflow:hidden;pointer-events:none;visibility:hidden;z-index:-1;"> <div style="position:absolute;width:200%;height:200%;left:0; top:0"></div> </div> </div> <canvas id="canvas" width="481" height="240" class="chartjs-render-monitor" style="display: block; width: 481px; height: 240px;"></canvas> </div> Answer: When I ran your script and started digging through the dom, I noticed that Apr 21 12:00 AM was missing -- I wonder why that was. Anyhow, I wanted to encourage you to adjust the timestamp values because they are irregularly formatted. Then I started to play with iterated strtotime() calls to generate consistent Y-m-d stamps, but some of the strings needed to be prepared, so I wrote some conditionals and it just felt like hacky bloat. If you are happy with the time strings in your graph, I'll just leave that bit alone. As a matter of directness, I'll recommend: $dd->loadHTMLFile('https://chat.stackexchange.com/search?q=%22RELOAD!+There+are%22&user=125580&room=8595&pagesize=150'); Rather than the two step: $html = file_get_contents('https://chat.stackexchange.com/search?q=%22RELOAD!+There+are%22&user=125580&room=8595&pagesize=150'); $dd->loadHtml($html); The regex block can be tightened up slightly. Write the preg_match() in a conditional so that you don't need to count(). 
A capture group is not necessary and a literal dot is more accurate than an [any character] dot. if (preg_match('#\d{2}\.\d{4}#', (string)$messages[$index], $match)) { $labels[] = (string)$timestamp; $data[] = $match[0]; } These are very small adjustments, so I'd say your work is fine. p.s. I did also toy with //div[@class="messages"] so that I was guaranteed to be processing pairs of data (by making subsequent xpath calls to isolate the content and timestamp child elements). However, I abandoned that process, because it was adding too much complexity to an originally simple task and the effort (mine & php's) didn't seem to be worth the validation gains.
{ "domain": "codereview.stackexchange", "id": 30979, "tags": "javascript, php, ecmascript-6, web-scraping, data-visualization" }
What are internal and external exons?
Question: I read the book: Essential Genetics and Genomics. It has a table summarizing the properties of the "typical" human gene: It has a gene feature Size of internal exon; what does internal mean in this context? When I searched for it I found mentions of external exons, but found no definition. I strongly suspect they mean: External exons are the first exon right after the 5' untranslated region and the last exon just before the 3' untranslated region, and all the other exons are internal exons. As it turned out, this was almost right, but had a mistake which is wonderfully pointed out in the accepted answer. Answer: Yes, the internal exons are those that aren't at the ends, which are often referred to as terminal exons1. However, exons are sequences of nucleotides that are incorporated into the mature mRNA — i.e. they don't have to be (entirely) protein coding. It is probably simplest to think of exons as being the transcribed regions that are not introns — i.e. they are the sequences that don't get spliced out during transcript maturation. In other words, the 5' UTR is typically§ part of the first exon and the 3' UTR is typically§ part of the last exon. §Note: I say typically, because sometimes introns are found within the UTRs2,3. Reference: 1: Bolisetty, M. T., & Beemon, K. L. (2012). Splicing of internal large exons is defined by novel cis-acting sequence elements. Nucleic acids research, 40(18), 9244-9254. 2: Eden, E., & Brunak, S. (2004). Analysis and recognition of 5′ UTR intron splice sites in human pre‐mRNA. Nucleic acids research, 32(3), 1131-1142. 3: Paolantoni, C., Ricciardi, S., De Paolis, V., Okenwa, C., Catalanotto, C., Ciotti, M. T., ... & Giorgi, C. (2018). Arc 3′ UTR splicing leads to dual and antagonistic effects in fine-tuning arc expression upon BDNF signaling. Frontiers in molecular neuroscience, 11, 145.
{ "domain": "biology.stackexchange", "id": 10344, "tags": "genetics, terminology, splicing, exons" }
How do masses on a spring accelerate when the connection between them is cut?
Question: I'm learning about springs in Physics and have come across something I do not fully understand. As in the picture, there are three masses (A, B and C), with mass ma, mb and mc, respectively. In the configuration above, masses A and B are connected by a light inelastic string. The masses of the string and the two springs are negligible and the whole system is stationary (in equilibrium). What I want to work out is the acceleration of the three masses when the string between A and B is suddenly cut. I currently understand from slinkies that C would stay stationary while S2 contracts. So B is accelerating downwards. Therefore, C would have no acceleration. Also, S1 is carrying all three masses but after the string is cut, it only carries ma, so it would contract upwards with a force equivalent to g(mb+mc), because they are the forces taken away from the spring. But when I calculate the acceleration of A using F=ma, I end up with a=g and I am not sure if that is correct, because I also get a=g for the acceleration of B. I am trying to express each acceleration in terms of ma, mb, mc and g. I am not looking for answers, as I know this is not a homework site. All I ask is for pointers in the right direction and perhaps correct me if my understanding of the spring system is flawed. Thank you. Answer: When the connection between A & B is cut, the mass of B & C is removed from the spring. When the masses B & C were attached, they exerted some force on the spring causing extension. They are removed, so the spring will relax and the extension in it will be reduced. Mass A will be accelerated upwards and will start performing simple harmonic motion. As for the masses B & C, gravitational force will pull them down. They will be in free fall and therefore mass C will not pull the spring downward as it was doing in the initial conditions. So, the spring will relax and come back to its natural length.
Coming back to its natural length, it will exert an upward force on C and a downward force on B.
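As a pointer rather than a full worked solution, the qualitative claims above (A accelerates upward, C momentarily has zero acceleration) can be checked with a few lines of arithmetic, using the fact that spring extensions (and hence spring tensions) cannot change instantaneously at the moment the string is cut. The masses below are illustrative placeholders, not values from the question:

```python
# Sketch of the instantaneous accelerations just after the string is cut,
# assuming ideal massless springs whose tensions cannot change instantly.
g = 9.81
ma, mb, mc = 1.0, 2.0, 3.0  # illustrative masses, kg

# Before the cut: S1 supports all three masses, S2 supports only C.
T_s1 = (ma + mb + mc) * g   # tension in upper spring S1
T_s2 = mc * g               # tension in lower spring S2

# Just after the cut the tensions are momentarily unchanged:
a_A = (T_s1 - ma * g) / ma  # = g*(mb + mc)/ma, upward
a_B = (mb * g + T_s2) / mb  # = g*(mb + mc)/mb, downward (S2 pulls B toward C)
a_C = (mc * g - T_s2) / mc  # = 0: S2 still balances C's weight

print(a_A, a_B, a_C)
```

Note that a_C comes out zero, matching the statement that C is initially unaffected, while a_A and a_B depend on the mass ratios rather than being simply g.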
{ "domain": "physics.stackexchange", "id": 37314, "tags": "newtonian-mechanics, newtonian-gravity, spring" }
Performance measurement of an event extraction system
Question: I have developed an event extraction system from text documents. It first clusters the data corpus and extracts answers for the what, when and where questions. Final answers are determined by using a candidate scoring function. I am struggling to evaluate the performance of the system. What measurements should I consider? Any suggestion is highly appreciated. An image explaining the problem is attached. Answer: The standard evaluation would be to count the proportion of correct predictions. The most basic version would be to count 3 instances for every event: where, when, what. For example, if the three questions are answered correctly, the score for this event is 3/3. Note that the case where one of the questions has no answer should be counted normally, i.e. if the system doesn't give any answer it's correct, but it's an error if it does. You might also have the case where an event is not detected at all by the system; in this case it makes sense to count as if it has all three questions wrong: 0/3. It looks like you can also have several answers for one of the questions. In this case you might want to count partial answers, for example 0.5 if the system finds one correct answer out of 2. There can be different variants of this option. The final evaluation score is simply aggregated across all the events. Note that it would be common to count the detailed score for each type of question as well.
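To make the aggregation concrete, here is a minimal sketch of the per-event scoring described above (function and field names are illustrative, not from the original system):

```python
def event_score(gold, pred):
    """Score one event over its what/when/where slots, each worth 0..1."""
    score = 0.0
    for slot in ("what", "when", "where"):
        g = set(gold.get(slot, []))
        p = set(pred.get(slot, []))
        if not g:
            # No gold answer: correct only if the system also stays silent.
            score += 1.0 if not p else 0.0
        else:
            # Partial credit for multi-answer slots, e.g. 1 of 2 -> 0.5.
            score += len(g & p) / len(g)
    return score / 3.0

def corpus_score(gold_events, pred_events):
    """Aggregate over all events; undetected events count as 0/3."""
    total = sum(event_score(gold, pred_events.get(eid, {}))
                for eid, gold in gold_events.items())
    return total / len(gold_events)
```

A per-question breakdown (separate averages for what, when and where) follows the same pattern with the slot loop hoisted out.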
{ "domain": "datascience.stackexchange", "id": 9801, "tags": "clustering, performance, text, model-evaluations" }
Problems while Wick rotating the path integral
Question: I am trying to begin from the path integral of QM and write the Euclidean version of it by performing the Wick rotation, but it seems that I am missing a few things. For simplicity I work in 1 dimension and in natural units. The amplitude for a spinless particle of unit mass to go from the point $x_i$ to the point $x_f$ in a time interval $T$ is given by $$ \int D[x]\,e^{i\int_0^Tdt\,\mathcal{L}(t)} $$ $$ =\int D[x]\,e^{i\int_0^Tdt\,\big\{\frac{1}{2}\left(\frac{dx}{dt}\right)^2-V(x)\big\}} $$ Let's focus on the integral in the exponent: $$ \int_0^Tdt\,\big\{\frac{1}{2}\left(\frac{dx}{dt}\right)^2-V(x)\big\} $$ To get the Euclidean path integral I have to Wick-rotate this. In order to do this I write the Lagrangian for a general complex variable $z=t+i\beta$ $$ \mathcal{L}(z)=\frac{1}{2}\left(\frac{dx}{dz}\right)^2-V(x(z)) $$ and I consider the contour. I also assume (maybe naively) that there is no pole of $\mathcal{L}(z)$ bothering us. Cauchy's theorem allows us to write $$ \int_{L_R}dz\,\mathcal{L}(z)+\int_{L_I}dz\,\mathcal{L}(z)+\int_{C}dz\,\mathcal{L}(z)=0 $$ Let's go one by one.
For $L_R$ I parametrize $z(t)=t$ $$ \int_{L_R}dz\,\mathcal{L}(z)=\int_0^Tdt\,\big\{\frac{1}{2}\left(\frac{dx}{dt}\right)^2-V(x)\big\} $$ For $L_I$ I have $z(\beta)=i\beta$ $$ \int_{L_I}dz\,\mathcal{L}(z)=-i\int_0^Td\beta\,\big\{\frac{1}{2}\left(\frac{dx}{id\beta}\right)^2-V(x)\big\} $$ For $C$ I have $z(\phi)=Te^{i\phi}$ $$ \int_{C}dz\,\mathcal{L}(z)=iT\int_0^{\pi/2}d\phi\,e^{i\phi}\big\{\frac{1}{2}\left(\frac{dx}{iTe^{i\phi}d\phi}\right)^2-V(x)\big\} $$ By Cauchy's theorem then $$ \int_0^Tdt\,\big\{\frac{1}{2}\left(\frac{dx}{dt}\right)^2-V(x)\big\}=i\int_0^Td\beta\,\big\{-\frac{1}{2}\left(\frac{dx}{d\beta}\right)^2-V(x)\big\}-\int_{C}dz\,\mathcal{L}(z) $$ If I plug this in the path integral I get $$ \int D[x]\,e^{i\int_0^Tdt\,\big\{\frac{1}{2}\left(\frac{dx}{dt}\right)^2-V(x)\big\}} $$ $$ =\int D[x]\,e^{\int_0^Td\beta\,\big\{\frac{1}{2}\left(\frac{dx}{d\beta}\right)^2+V(x)\big\}}e^{-i\int_{C}dz\,\mathcal{L}(z)} $$ and you see the problem here. I lack a minus sign in the first exponential, and the second one shouldn't be there. Maybe I can get the correct expression by manipulating the second exponential, but right now I don't see how. Does anybody see my mistakes?
If instead of the first quadrant you continued into the fourth quadrant of the complex-$z$ plane you would get the correct minus sign in the suppression factor (but not quite the expected integrand). 3. The assumption of holomorphicity is justified, in that there exists a complete basis expansion for $\tilde{x}(t)$ (and its analytically continued extension, $\tilde{x}(z)$) such that the quantity: $$ \oint_{L_R}dz\,\mathcal{L}(z)+\oint_{L_I}dz\,\mathcal{L}(z)+\oint_{C}dz\,\mathcal{L}(z), $$ indeed vanishes, so that the closed contour can be contracted to a point. 4. Boundary conditions are important: it is the quantum fluctuations terms involving $\tilde{x}(t)$ that need continuing, not the zero mode (classical) piece, $x_{\rm cl}(t)$, where $$ x(t)=x_{\rm cl}(t)+\tilde{x}(t), $$ subject to $x_{\rm cl}(0)=x_i$, $x_{\rm cl}(T)=x_f$, $\tilde{x}(0)=\tilde{x}(T)=0$. The last two conditions make your life slightly easier. 5. It is much more efficient and natural to analytically continue $\tilde{x}(t)$ in problems such as these, when there are finite time intervals involved. This is also true in QFT. 6. When you expand $\tilde{x}(t)$ as a Fourier series subject to the above boundary conditions (I'm forgetting about interactions because these are irrelevant for Wick rotations, at least within perturbation theory), $$ \tilde{x}(t) = \sum_{n\neq0}a_n\psi_n(t),\qquad {\rm with}\qquad \psi_n(t)=\sqrt{\frac{2}{T}}\sin\frac{n\pi t}{T}, $$ it becomes clear that the (unique) analytically continued expression, obtained from the above by replacing $t\rightarrow z$, does not have a well-behaved basis: $\{\psi_n(z)\}$ is no longer complete, nor orthonormal and in fact diverges for large enough $\beta$, where $z=t+i\beta$. 
But it is holomorphic within its radius of convergence, and then you might expect Cauchy's theorem to come to the rescue because for any closed contour: $$ \oint dz\,\psi_n(z)\psi_m(z)=0, $$ and so given that $\int_0^Tdt\,\psi_n(t)\psi_m(t)=\delta_{n,m}$ one can say something meaningful about the integrals in the remaining regions of the complex plane. And a comment (which is mainly addressed to some of the comments you have received @giulio bullsaver and @flippiefanus): the path integral is general enough to be able to capture both finite and infinite limits in your action of interest, in both QM and QFT. One complication is that sometimes it is not possible to define asymptotic states when the limits are finite (the usual notion of a particle only makes sense in the absence of interactions, and at infinity the separation between particles can usually be taken to be large enough so that interactions are turned off), and although this is no problem of principle one needs to work harder to make progress. In quantum mechanics where there is no particle creation things are simpler and one can consider a single particle state space. As I mentioned above this is just a taster: when I find some time I will add flesh to my claims. DETAILS: Consider the following path integral for a free non-relativistic particle: $$ \boxed{Z= \int \mathcal{D}x\,e^{\frac{i}{\hbar}\,I[x]},\qquad I[x]=\int_0^Tdt\,\Big(\frac{dx}{dt}\Big)^2} $$ (We set the mass $m=2$ throughout to avoid unnecessary clutter, but I want to keep $\hbar$ explicit. We can restore $m$ at any point by replacing $\hbar\rightarrow \hbar 2/m$.) This path integral is clearly completely trivial. However, the question we are aiming to address (i.e. to understand Wick rotation in relation to Cauchy's theorem) is (within perturbation theory) independent of interactions. Stripping the problem down to its ''bare bones essentials'' will be sufficient. 
Here is my justification: we will not perform any manipulations that we would not be able to also perform in the presence of interactions within perturbation theory. So this completely justifies considering the free theory. For pedagogical reasons I will first describe how to go about evaluating such path integrals unambiguously, including a detailed discussion of how to use Cauchy's theorem to make sense of the oscillating exponential, and only after we have reached a result will we discuss the problems associated to following the approach suggested in the question. How to Wick Rotate Path Integrals Unambiguously: To compute any path integral the first thing one needs to do is specify boundary conditions. So suppose our particle is at $x_i=x(0)$ at $t=0$ and at $x_f=x(T)$ at $t=T$. To implement these let us factor out a classical piece and quantum fluctuations: $$ x(t) = x_{\rm cl}(t)+\tilde{x}(t), $$ and satisfy the boundary conditions by demanding that quantum fluctuations are turned off at $t=0$ and $t=T$: $$ \tilde{x}(0)=\tilde{x}(T)=0, $$ so the classical piece must then inherit the boundary conditions of $x(t)$: $$ x_{\rm cl}(0)=x_i,\qquad x_{\rm cl}(T)=x_f. $$ In addition to taking care of the boundary conditions, the decomposition into a classical piece and quantum fluctuations plays the following very important role: integrating out $x(t)$ requires that you be able to invert the operator $-d^2/dt^2$. This is only possible when what it acts on is not annihilated by it, i.e. when the eigenvalues of this operator are non-vanishing. We call the set of things that are annihilated by $-d^2/dt^2$ the kernel of $-d^2/dt^2$, so then the classical piece $x_{\rm cl}(t)$ is precisely the kernel of $-d^2/dt^2$: \begin{equation} -\frac{d^2}{dt^2}x_{\rm cl}(t)=0.
\end{equation} This is of course precisely the classical equation of motion of a free non-relativistic particle with the unique solution (subject to the above boundary conditions): $$ x_{\rm cl}(t) = \frac{x_f-x_i}{T}t+x_i. $$ So now we implement the above into the path integral, starting from the action. The decomposition $x(t) = x_{\rm cl}(t)+\tilde{x}(t)$ leads to: \begin{equation} \begin{aligned} I[x]&=\int_0^Tdt\Big(\frac{dx}{dt}\Big)^2\\ &=\int_0^Tdt\Big(\frac{dx_{\rm cl}}{dt}\Big)^2+\int_0^Tdt\Big(\frac{d\tilde{x}}{dt}\Big)^2+2\int_0^Tdt\frac{dx_{\rm cl}}{dt}\frac{d\tilde{x}}{dt}\\ \end{aligned} \end{equation} In the first term we substitute the solution to the equations of motion given above. In the second term we integrate by parts taking into account the boundary conditions on $\tilde{x}(t)$. In the third term we integrate by parts taking into account the boundary conditions on $\tilde{x}(t)$ and use the fact that $d^2x_{\rm cl}/dt^2=0$ for all $t$. All in all, \begin{equation} I[x]=\frac{(x_f-x_i)^2}{T}+\int_0^Tdt\,\tilde{x}(t)\Big(-\frac{d^2}{dt^2}\Big)\tilde{x}(t),\quad{\rm with}\quad \tilde{x}(0)=\tilde{x}(T)=0, \end{equation} and now we substitute this back into the path integral, $Z$, in order to consider the Wick rotation in detail: \begin{equation} \boxed{Z= e^{i(x_f-x_i)^2/\hbar T}\int \mathcal{D}\tilde{x}\,\exp\, \frac{i}{\hbar}\int_0^T\!\!\!dt\,\tilde{x}(t)\Big(-\frac{d^2}{dt^2}\Big)\tilde{x}(t), \quad{\rm with}\quad \tilde{x}(0)=\tilde{x}(T)=0} \end{equation} Clearly, given that we have fixed the boundary values of $x(t)$, we also have that: $\mathcal{D}x=\mathcal{D}\tilde{x}$. That is, we are only integrating over quantities that are not fixed by the boundary conditions on $x(t)$. So the first point to notice is that only the quantum fluctuations piece might need Wick rotation. 
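The decomposition just performed is easy to verify symbolically. The sketch below keeps a single Fourier mode of the fluctuation, which is enough to see the cross term cancel and the classical action come out as $(x_f-x_i)^2/T$; the mode amplitude $a$ is an illustrative stand-in for the general expansion used later:

```python
import sympy as sp

t, T, xi, xf = sp.symbols('t T x_i x_f', positive=True)
a = sp.symbols('a', real=True)

x_cl = (xf - xi) * t / T + xi                      # solves x_cl'' = 0
x_fl = a * sp.sqrt(2 / T) * sp.sin(sp.pi * t / T)  # vanishes at t = 0, T
x = x_cl + x_fl

I = sp.integrate(sp.diff(x, t) ** 2, (t, 0, T))
# Expected: classical piece plus lambda_1 * a^2 with lambda_1 = (pi/T)^2,
# and no cross term between x_cl and the fluctuation.
I_expected = (xf - xi) ** 2 / T + (sp.pi / T) ** 2 * a ** 2
print(sp.simplify(I - I_expected))  # 0
```

The vanishing difference confirms both the boundary-term argument and the eigenvalue $(\pi/T)^2$ of the lowest mode.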
Isolated comment: Returning to a point I made in the beginning of the DETAILS section: if our objective was to simply solve the free particle theory we'd be (almost) done! We wouldn't even have to Wick-rotate. We would simply introduce a new time variable, $t\rightarrow t'=t/T$, and then redefine the path integral field at every point $t$, $\tilde{x}\rightarrow \tilde{x}'=\tilde{x}/\sqrt{T}$. The measure would then (using zeta function regularisation) transform as $\mathcal{D}\tilde{x}\rightarrow \mathcal{D}\tilde{x}'=\sqrt{T}\mathcal{D}\tilde{x}$, so the result would be: $$ Z=\frac{N}{\sqrt{T}}e^{i(x_f-x_i)^2/\hbar T}, $$ the quantity $N$ being a normalisation (see below). But as I promised, we will not perform any manipulations that cannot also be performed in fully interacting theories (and within perturbation theory). So we take the long route. There is value in mentioning the shortcut however: it serves as an important consistency check for what follows. Wick Rotation: Consider the quantum fluctuations terms in the action, $$ \int_0^T\!\!\!dt\,\tilde{x}(t)\Big(-\frac{d^2}{dt^2}\Big)\tilde{x}(t), \quad{\rm with}\quad \tilde{x}(0)=\tilde{x}(T)=0, $$ and search for a complete (and ideally orthonormal) basis, $\{\psi_n(t)\}$, in which to expand $\tilde{x}(t)$. We can either think of such an expansion as a Fourier series expansion of $\tilde{x}(t)$ (where it is obvious the basis will be complete), or we can equivalently define the basis as the full set of eigenvectors of $-d^2/dt^2$. There are three requirements that the basis must satisfy (and a fourth optional one): (a) it must not live in the kernel of $-d^2/dt^2$ (the kernel has already been extracted out and called $x_{\rm cl}(t)$).
(b) it must be real (because $\tilde{x}(t)$ is real); (c) it must satisfy the correct boundary conditions inherited by $\tilde{x}(0)=\tilde{x}(T)=0$; (d) it is convenient for it to be orthonormal with respect to some natural inner product, $(\psi_n,\psi_m)=\delta_{n,m}$, but this is not necessary. The unique (up to a constant factor) solution satisfying these requirements is: $$ \tilde{x}(t)=\sum_{n\neq0}a_n\psi_n(t),\qquad {\rm with}\qquad \psi_n(t)=\sqrt{\frac{2}{T}}\sin \frac{n\pi t}{T}, $$ where the normalisation of $\psi_n(t)$ is fixed by our choice of inner product: $$ (\psi_n,\psi_m)\equiv \int_0^Tdt\,\psi_n(t)\,\psi_m(t)=\delta_{n,m}. $$ (In the present context this is a natural inner product, but more generally and without referring to a particular basis, $(\delta \tilde{x},\delta \tilde{x})$ is such that it preserves as much of the classical symmetries as possible. Incidentally, not being able to find a natural inner product that preserves all of the classical symmetries is the source of potential anomalies.) This basis $\{\psi_n(t)\}$ is real, orthonormal, satisfies the correct boundary conditions at $t=0,T$ and corresponds to a complete set of eigenvectors of $-d^2/dt^2$: $$ -\frac{d^2}{dt^2}\psi_n(t)=\lambda_n\psi_n(t),\qquad {\rm with}\qquad \lambda_n=\Big(\frac{n\pi}{T}\Big)^2. $$ From the explicit expression for $\lambda_n$ it should be clear why $n=0$ has been omitted from the sum over $n$ in $\tilde{x}(t)$. To complete the story we need to define the path integral measure. I will mention two equivalent choices, starting from the pedagogical one: $$ \mathcal{D}\tilde{x}=\prod_{n\neq0}\frac{d a_n}{K}, $$ for some choice of $K$ which is fixed at a later stage by any of the, e.g., two methods mentioned below (we'll find $K=\sqrt{4T}$). (The second equivalent choice of measure is less transparent, but because it is more efficient it will also be mentioned below.) 
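As a quick numerical sanity check (a sketch, with an arbitrary illustrative $T$), one can confirm on a grid that this basis is orthonormal and diagonalises $-d^2/dt^2$ with the stated eigenvalues:

```python
import numpy as np

T = 2.0  # arbitrary illustrative interval length
t = np.linspace(0.0, T, 40001)
h = t[1] - t[0]

def psi(n):
    return np.sqrt(2.0 / T) * np.sin(n * np.pi * t / T)

def inner(f, g):
    # composite trapezoidal rule for (f, g) = int_0^T f g dt
    fg = f * g
    return (fg.sum() - 0.5 * (fg[0] + fg[-1])) * h

print(abs(inner(psi(1), psi(1)) - 1.0) < 1e-6)   # True
print(abs(inner(psi(1), psi(2))) < 1e-6)         # True

# Eigenvalue check: -psi_n'' = (n*pi/T)^2 psi_n at interior grid points.
n = 3
d2 = np.gradient(np.gradient(psi(n), t), t)
lam = (n * np.pi / T) ** 2
print(np.allclose(-d2[5:-5], lam * psi(n)[5:-5], atol=1e-4))  # True
```

The edge points are excluded from the eigenvalue check because np.gradient uses lower-order one-sided differences there.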
To evaluate the path integral we now rewrite the quantum fluctuations terms in $Z$ in terms of the basis expansion of $\tilde{x}(t)$, and make use of the above relations: \begin{equation} \begin{aligned} \int \mathcal{D}\tilde{x}\,&\exp\, \frac{i}{\hbar}\int_0^T\!\!\!dt\,\tilde{x}(t)\Big(-\frac{d^2}{dt^2}\Big)\tilde{x}(t)\\ &=\int \prod_{n\neq0}\frac{d a_n}{K}\,\exp\, i\sum_{n\neq0}\frac{1}{\hbar}\Big(\frac{n\pi}{T}\Big)^2a_n^2\\ &=\prod_{n\neq0}\Big( \frac{T\sqrt{\hbar}}{K\pi |n|}\Big)\prod_{n\neq0}\Big(\int_{-\infty}^{\infty} d a\,e^{ia^2}\Big), \end{aligned} \end{equation} where in the last equality we redefined the integration variables, $a_n\rightarrow a=\frac{|n|\pi}{\sqrt{\hbar}T}a_n$. The evaluation of the infinite products is somewhat tangential to the main point of thinking about Wick rotation, so I leave it as an Exercise: Use zeta function regularisation to show that: $$ \prod_{n\neq0}c=\frac{1}{c},\qquad \prod_{n\neq0}|n| = 2\pi, $$ for any $n$-independent quantity, $c$. (Hint: recall that $\zeta(s)=\sum_{n>0}1/n^s$, which has the properties $\zeta(0)=-1/2$ and $\zeta'(0)=-\frac{1}{2}\ln2\pi$.) All that remains is to evaluate: $$ \int_{-\infty}^{\infty} d a\,e^{ia^2}, $$ which is also a standard exercise in complex analysis, but I think there is some value in me going through the reasoning as it is central to the notion of Wick rotation: the integrand is analytic in $a$, so for any closed contour, $C$, Cauchy's theorem ensures that, $$ \oint_Cda\,e^{ia^2}=0. $$ Let us choose in particular the contour shown in the figure: Considering each contribution separately and taking the limit $R\rightarrow \infty$ leads to: $$ \int_{-\infty}^{\infty} d a\,e^{ia^2}=\sqrt{i}\int_{-\infty}^{\infty}da\,e^{-a^2}=\sqrt{\pi i} $$ (Food for thought: what is the result for a more general choice of angle, $\theta\in (0,\frac{\pi}{2}]$? which in the displayed equation and figure is $\theta=\pi/4$.) 
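Both ingredients just used, the zeta-regularised products and the rotated-contour Gaussian, can be checked numerically. The sketch below uses mpmath for $\zeta(0)$ and $\zeta'(0)$, and evaluates $\int e^{ia^2}da$ on the rotated contour $a=e^{i\pi/4}u$, where the integrand becomes a plain damped Gaussian:

```python
import numpy as np
from mpmath import mp, zeta, diff, exp, pi as mppi

mp.dps = 30
# zeta(0) = -1/2 gives prod_{n != 0} c = c^{2*zeta(0)} = 1/c, and
# zeta'(0) = -ln(2*pi)/2 gives prod_{n != 0} |n| = exp(-2*zeta'(0)) = 2*pi.
print(abs(zeta(0) + mp.mpf(1) / 2) < mp.mpf('1e-25'))             # True
print(abs(exp(-2 * diff(zeta, 0)) - 2 * mppi) < mp.mpf('1e-15'))  # True

# Rotated-contour evaluation of the oscillating Gaussian:
# with a = e^{i pi/4} u, the exponent i a^2 becomes -u^2.
u = np.linspace(-8.0, 8.0, 200001)
w = np.exp(1j * np.pi / 4)
val = (w * np.exp(1j * (w * u) ** 2)).sum() * (u[1] - u[0])
print(abs(val - np.sqrt(np.pi) * w) < 1e-8)                       # True
```

The last check reproduces $\int e^{ia^2}da=\sqrt{\pi}\,e^{i\pi/4}=\sqrt{\pi i}$, with no imaginary-time rotation anywhere.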
An important conclusion is that Gaussian integrals with oscillating exponentials are perfectly well-defined. There was clearly no need to Wick rotate to imaginary time at any point of the calculation. The whole analysis above was to bring the original path integral into a form that contains a product of well-defined ordinary integrals. Using this result for the $a$ integral, zeta function regularisation for the infinite products (see above), and rearranging leads to: \begin{equation} \begin{aligned} \int \mathcal{D}\tilde{x}\,&\exp\, \frac{i}{\hbar}\int_0^T\!\!\!dt\,\tilde{x}(t)\Big(-\frac{d^2}{dt^2}\Big)\tilde{x}(t)=\frac{1}{2\pi}\frac{K\pi}{T\sqrt{\hbar}}\frac{1}{\sqrt{\pi i}}. \end{aligned} \end{equation} Substituting this result into the boxed expression above for the full path integral we learn that: $$ Z[x_f,T;x_i,0]=\frac{e^{i(x_f-x_i)^2/\hbar T}}{\sqrt{\pi i\hbar T}}\,\frac{K}{\sqrt{4 T}}. $$ Normalisation: Although somewhat tangential, some readers may benefit from a few comments regarding the normalisation: $K$ may be determined by demanding consistent factorisation, $$ Z[x_f,T;x_i,0] \equiv \int_{-\infty}^{\infty}dy\,Z[x_f,T;y,t]Z[y,t;x_i,0], $$ (the result is independent of $0\leq t\leq T$) or, if one (justifiably) questions the uniqueness of the normalisation of the $\int dy$ integral one may instead determine $K$ by demanding that the path integral reproduce the Schrodinger equation: the (position space) wavefunction at $t=T$ given a wavefunction at $t=0$, $\psi(x_i,0)$, is by definition, $$ \psi(x,t) = \int_{-\infty}^{\infty}dy\,Z[x,t;y,0]\psi(y,0), $$ and then Taylor expanding in $t$ and $\eta$ upon redefining the integration variable $y\rightarrow \eta=x-y$ leads to Schrodinger's equation, $$ i\hbar \partial_t\psi(x,t)=-\frac{\hbar^2}{4}\partial^2_x\psi(x,t), $$ provided the normalisation, $K=\sqrt{4T}$, (recall $m=2$).
(A more boring but equivalent method to determine $K$ is to demand consistency with the operator approach, where $Z=\langle x_f,t|x_i,0\rangle$, leading to the same result.) So, the full correctly normalised path integral for a free non-relativistic particle (of mass $m=2$) is therefore, $$ \boxed{Z[x_f,T;x_i,0]=\frac{e^{i(x_f-x_i)^2/\hbar T}}{\sqrt{\pi i\hbar T}}} $$ (Recall from above that to reintroduce the mass we may effectively replace $\hbar\rightarrow 2\hbar/m$.) Alternative measure definition: I mentioned above that there is a more efficient but equivalent definition for the measure of the path integral, but as this is not central to this post I'll only list it as an Exercise 1: Show that, for any constants $c,K$, the following measure definitions are equivalent: $$ \mathcal{D}\tilde{x}=\prod_{n\neq0}\frac{d a_n}{K}, \qquad \Leftrightarrow\qquad \int \mathcal{D}\tilde{x}e^{\frac{i}{\hbar c}(\tilde{x},\tilde{x})}=\frac{K}{\sqrt{\pi i\hbar c}}, $$ where the inner product was defined above. Exercise 2: From the latter measure definition one has immediately that, $$ \int \mathcal{D}\tilde{x}e^{\frac{i}{\hbar }(\tilde{x},-\partial_t^2\tilde{x})}=\frac{K}{\sqrt{\pi i\hbar }}\,{\rm det}^{-\frac{1}{2}}\!(-\partial_t^2). $$ Show using zeta function regularisation that ${\rm det}^{-\frac{1}{2}}\!(-\partial_t^2)\equiv(\prod_{n\neq0}\lambda_n)^{-1/2}=\frac{1}{2T}$, thus confirming (after including the classical contribution, $e^{i(x_f-x_i)^2/\hbar T}$) precise agreement with the above boxed result for $Z$ when $K=\sqrt{4T}$. (Notice that again we have not had to Wick rotate time, and the path integral measure is perfectly well-defined if one is willing to accept zeta function regularisation as an interpretation for infinite products.) WICK ROTATING TIME? maybe not.. Having discussed how to evaluate path integrals without Wick-rotating time, we now use the above results in order to understand what might go wrong when one does Wick rotate time.
So we now follow your reasoning (but with a twist): We return to the path integral over fluctuations. We analytically continue $t\rightarrow z=t+i\beta$ and wish to construct a contour (I'll call the full closed contour $C$), such that: $$ \oint_C dz\,\tilde{x}(z)\Big(-\frac{d^2}{dz^2}\Big)\tilde{x}(z)=0. $$ Our work above immediately implies that any choice of $C$ can indeed be contracted to a point without obstruction. This is because using the above basis we have a unique analytically continued expression for $\tilde{x}(z)$ given $\tilde{x}(t)$: $$ \tilde{x}(z)=\sqrt{\frac{2}{T}}\sum_{n\neq0}a_n\sin \frac{n\pi z}{T}. $$ This is clearly analytic in $z$ (as are its derivatives) with no singularities except possibly outside of the radius of convergence. The first indication that this might be a bad idea is to notice that by continuing $t\rightarrow z$ we end up with a bad basis that ceases to be orthonormal and the sum over $n$ need not converge for sufficiently large imaginary part of $z$. So this immediately spells trouble, but let us try to persist with this reasoning, in the hope that it might solve more problems than it creates (it doesn't). Choose the following contour (note the different choice of contour compared to your choice): I chose the fourth quadrant instead of the first, because (as you showed in your question) the first quadrant leads to the wrong sign Gaussian, whereas the fourth quadrant cures this. So we may apply Cauchy's theorem to the contour $C=a+b+c$.
Using coordinates $z=re^{i\theta}$ and $z=t+i\beta$ for contour $b$ and $c$ respectively, \begin{equation} \begin{aligned} \int_0^T& dt\,\tilde{x}\Big(-\frac{d^2}{dt^2}\Big)\tilde{x}\\ &=- \int_0^{-\pi/2}(iTe^{i\theta}d\theta)\,\tilde{x}(Te^{i\theta})\frac{1}{T^2e^{2i\theta}}\Big(\frac{\partial^2}{\partial \theta^2}-i\frac{\partial}{\partial \theta}\Big)\tilde{x}(Te^{i\theta})\\ &\qquad+i\int_{-T}^0d\beta\,\tilde{x}(i\beta)\Big(-\frac{d^2}{d\beta^2}\Big)\tilde{x}(i\beta) \end{aligned} \end{equation} where by the chain rule, along the $b$ contour: $$ dz|_{r=T}=iTe^{i\theta}d\theta,\qquad {\rm and}\qquad -\frac{d^2}{dz^2}\Big|_{r=T} = \frac{1}{T^2e^{2i\theta}}\Big(\frac{\partial^2}{\partial \theta^2}-i\frac{\partial}{\partial \theta}\Big), $$ are evaluated at $z=Te^{i\theta}$. Regarding the integral along contour $b$ (i.e. the $\theta$ integral) there was the question above as to whether this quantity actually contributes or not. That it does contribute follows from an elementary theorem of complex analysis: the Cauchy-Riemann equations. Roughly speaking, the statement is that if a function, $f(z)$, is holomorphic in $z$ then the derivative of this function with respect to $z$ at any point, $p$, is independent of the direction of the derivative. E.g., if $z=x+iy$, then $\partial_zf(z) = \partial_xf(z)=-i\partial_yf(z)$ at any point $z=x+iy$. Applying this to our case, using the above notation, $z=re^{i\theta}$, this means that derivatives along the $\theta$ direction evaluated at any $\theta$ and at $r=T$ equal corresponding derivatives along the $r$ direction at the same $\theta$ and $r=T$, which in turn equal $z$ derivatives at the same $z=Te^{i\theta}$. 
So we conclude immediately from this that the integral along the $b$ contour is: \begin{equation} \begin{aligned} - \int_0^{-\pi/2}(iTe^{i\theta}d\theta)&\,\tilde{x}(Te^{i\theta})\frac{1}{T^2e^{2i\theta}}\Big(\frac{\partial^2}{\partial \theta^2}-i\frac{\partial}{\partial \theta}\Big)\tilde{x}(Te^{i\theta})\\ &=- \int_bdz\,\tilde{x}(z)\Big(-\frac{d^2}{dz^2}\Big)\tilde{x}(z)\Big|_{z=Te^{i\theta}}\\ &=- \sum_{n,m\neq0}\Big(\frac{n\pi}{T}\Big)^2a_na_m\int_bdz\,\psi_n(z)\psi_m(z)\Big|_{z=Te^{i\theta}}, \end{aligned} \end{equation} where we made use of the analytically-continued basis expansion of $\tilde{x}(z)$. We have an explicit expression for the integral along the $b$ contour: \begin{equation} \begin{aligned} \int_bdz\,\psi_n(z)\psi_m(z)\Big|_{z=Te^{i\theta}}&=2i\int_0^{-\pi/2}d\theta \,e^{i\theta}\sin (n\pi e^{i\theta})\sin (m\pi e^{i\theta})\\ &=\frac{2i}{\pi}\frac{m\cosh m\pi \sinh n\pi-n\cosh n\pi \sinh m\pi}{(m-n)(m+n)} \end{aligned} \end{equation} where we took into account that $m,n\in \mathbb{Z}$ with $n\neq m$. It is worth emphasising that this result follows directly from the analytic continuation of $\tilde{x}(t)$ with the physical boundary conditions $\tilde{x}(0)=\tilde{x}(T)=0$. Notice now that the basis $\psi_n(z)$ is no longer orthogonal (the inner product is no longer a simple Kronecker delta as it was above), and the presence of hyperbolic sines and cosines implies the sum over $n,m$ is not well defined. Of course, we can diagonalise it if we so please, but clearly this analytic continuation of $t$ is not making life easier. It is clear that the integral along the $\theta$ direction contributes non-trivially, and this follows directly and inevitably from the Cauchy-Riemann equations, which link derivatives of holomorphic functions in different directions. Given $\partial_t^m\psi_n(t)$ do not generically vanish at $t=T$, the $\theta$ integral along the $b$ contour cannot vanish identically and will contribute non-trivially, as we have shown by direct computation.
Notice furthermore that in the path integral there is a factor of $i=\sqrt{-1}$ multiplying the action $I[x]$; since the above explicit result is purely imaginary, the exponent associated to the $b$ contour is actually real and of indefinite sign, so it contributes non-trivially and provides no uniform damping. Finally, consider the action associated to the $c$ contour. From above: \begin{equation} \begin{aligned} i\int_{-T}^0d\beta\,&\tilde{x}(i\beta)\Big(-\frac{d^2}{d\beta^2}\Big)\tilde{x}(i\beta). \end{aligned} \end{equation} In the path integral this contributes as: $$ \exp -\frac{1}{\hbar}\int_{-T}^0d\beta\,\tilde{x}(i\beta)\Big(-\frac{d^2}{d\beta^2}\Big)\tilde{x}(i\beta), $$ so we seemingly might have succeeded in obtaining an exponential damping at least along the $c$ contour. What have we gained? To bring it to the desired form we shift the integration variable, $\beta\rightarrow \beta'=\beta+T$, $$ \exp -\frac{1}{\hbar}\int_0^Td\beta'\,\tilde{x}(i\beta'-iT)\Big(-\frac{d^2}{d{\beta'}^2}\Big)\tilde{x}(i\beta'-iT) $$ This looks like it might have the desired form, but we must remember that the $\tilde{x}(i\beta)$ is already determined by the analytic continuation of $\tilde{x}(t)$, and as a consequence of this it is determined by the analytic continuation of the basis $\psi_n(t)$. This basis along the $\beta$ axis is no longer periodic, so we have lost the good boundary conditions on $\tilde{x}(i\beta)$. In particular, although $\tilde{x}(0)=0$, we have $\tilde{x}(iT)\neq0$, so we can't even integrate by parts without picking up boundary contributions. One might try to redefine the path integral fields, $\tilde{x}(i\beta)\rightarrow \tilde{x}'(\beta)$, and then Fourier series expand $\tilde{x}'(\beta)$, but then we lose the connection with Cauchy's theorem.
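For what it's worth, the $b$-contour inner product can be spot-checked numerically. The sketch below evaluates $2i\int_0^{-\pi/2}d\theta\,e^{i\theta}\sin(n\pi e^{i\theta})\sin(m\pi e^{i\theta})$ on a grid and compares it with the closed form which, as far as I can tell for integers $n\neq m$, comes out purely imaginary: $\frac{2i}{\pi}\frac{m\cosh m\pi\sinh n\pi-n\cosh n\pi\sinh m\pi}{(m-n)(m+n)}$:

```python
import numpy as np

def b_integral(n, m, N=400001):
    # trapezoidal rule along theta from 0 to -pi/2 (negative step)
    th = np.linspace(0.0, -np.pi / 2, N)
    w = np.exp(1j * th)
    f = 2j * w * np.sin(n * np.pi * w) * np.sin(m * np.pi * w)
    return ((f[:-1] + f[1:]) / 2).sum() * (th[1] - th[0])

def b_closed(n, m):
    num = m * np.cosh(m * np.pi) * np.sinh(n * np.pi) \
        - n * np.cosh(n * np.pi) * np.sinh(m * np.pi)
    return 2j / np.pi * num / ((m - n) * (m + n))

for n, m in [(1, 2), (1, 3), (2, 3)]:
    approx, ref = b_integral(n, m), b_closed(n, m)
    print(abs(approx - ref) / abs(ref) < 1e-6)  # True
```

The hyperbolic growth with $n,m$ visible here is exactly what spoils the sum over modes.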
I could go on, but I think a conclusion has already emerged: analytically continuing time in the integrand of the action is generically a bad idea (at least I can't see the point of it as things stand), and it is much more efficient to analytically continue the full field $\tilde{x}(t)$ as we did above. (Above we continued the $a_n$ which is equivalent to continuing $\tilde{x}(t)$.) I'm not saying it's wrong, just that it is inefficient and one has to work a lot harder to make sense of it. I want to emphasise one point: Gaussian integrals with oscillating exponentials are perfectly well-defined. There is no need to Wick rotate to imaginary time at any point of the calculation.
{ "domain": "physics.stackexchange", "id": 98677, "tags": "quantum-mechanics, path-integral, wick-rotation" }
Negative curvature of zero sound dispersion
Question: In the theory of a Landau-Fermi liquid, one of the major predictions is the dispersion of zero sound. From the linearized kinetic equation, we know that the dimensionless dispersion $s$ is given by $$ s=\frac{\omega}{qv_F}=\begin{cases} 1+2e^{-2(1+1/F_0^s)},\quad &F_0^s\ll1\\ \sqrt{F_0^s/3},\quad &F_0^s\gg1 \end{cases} $$ where $F_0^s$ is the Landau parameter quantifying interactions, $\omega/q$ is the phase velocity of the excitation, and $v_F$ is the Fermi velocity. However, I read this work (Two-dimensional Fermi liquids sustain surprising roton-like plasmons beyond the particle-hole band, by Sultan et al.) which gives a schematic representation of the elementary excitations of a Fermi liquid in Fig. 1. The authors then state that At relatively low wave-vectors, zero-sound is observed as a well-defined mode with a linear dispersion relation, located above the PHB. It displays then a negative curvature, finally entering the PHB. where PHB means particle-hole band. My question is if there is any in-depth study that talks about this "negative curvature" of the zero-sound dispersion. I would think that this would amount to taking higher-order terms of $q$ in the above expression for $s$, but I have not found any reference that discusses this apparent "plateauing" of the zero sound mode. Answer: To preface the answer below, this is not a subject I have ever actively participated in, so I may be interpreting the history wrong. In this paper by Cowley here, the distinguishing factor between zero sound and first sound (ordinary acoustic sound) is the relationship between measured frequency $\omega$ and the excitation lifetime $\frac{1}{\tau}$. If the frequency is much slower than the lifetime $\omega \ll \frac{1}{\tau}$, it is called first sound and is the usual vibration mode of a condensed matter phase. If the frequency is much faster than the lifetime, then it is called zero sound $\omega \gg \frac{1}{\tau}$. 
So, technically, zero and first sound are smoothly connected, but in practice you often need different experimental approaches to measure each one. Let's focus on the zero sound regime (phonon), since that is most relevant to the roton zero-sound mode. The roton was originally thought to be a completely separate excitation from the zero-sound phonon, as shown in the image below. But experimentally it was found that the zero-sound phonon and roton are connected and are the same excitation (read this for references), as shown in the image below. Both of these images are from this reference. In fact this minimum seems to be found in all sorts of non-superfluid liquids, from liquid hydrogen to supercritical liquid argon (see here). So the modern understanding is that the roton has little to do with vortices etc., but represents the fact that density fluctuations with wavelengths close to the interatomic lattice spacing of the nearby solid phase cost relatively lower energy (i.e. energy lowering when $q\sim \frac{2\pi}{a}$, where $a$ is the solid-phase lattice spacing). A discussion of this fact is found here. Nonetheless, it does seem that some researchers are still studying theories of the roton that distinguish it from the phonon, but I don't know how they reconcile such theories with the fact that many non-superfluids also exhibit the roton.
{ "domain": "physics.stackexchange", "id": 64868, "tags": "condensed-matter, solid-state-physics, many-body, fermi-liquids, collective-excitations" }
unique_ptr basic implementation for single objects
Question: This is an implementation to simulate the basic functionality of unique_ptr. This doesn't provide features like custom deleter and make_unique(). I would really appreciate any feedback to improve the below code, any other api's that I should be providing etc. my_unique_ptr.h #ifndef MY_UNIQUE_PTR_H_ #define MY_UNIQUE_PTR_H_ #include <utility> namespace kapil { template <typename T> class unique_ptr final { private: T* ptr_; unique_ptr(const unique_ptr&) = delete; // Make unique_ptr non copy constructible unique_ptr& operator = (const unique_ptr&) = delete; // Make unique_ptr non copy assignable public: explicit unique_ptr (T* ptr = nullptr) noexcept : ptr_{ptr} { } unique_ptr(unique_ptr<T>&& rval) noexcept // Move constructor : ptr_{rval.ptr_} { rval.ptr_ = nullptr; } unique_ptr& operator = (unique_ptr&& rhs) noexcept { // Move assignment delete ptr_; ptr_ = rhs.ptr_; rhs.ptr_ = nullptr; return *this; } ~unique_ptr() noexcept { delete ptr_; } T* release() noexcept { T* old_ptr = ptr_; ptr_ = nullptr; return old_ptr; } void reset(T* ptr = nullptr) noexcept { delete ptr_; ptr_ = ptr; } void swap(unique_ptr& rhs) noexcept { std::swap(ptr_, rhs.ptr_); } T* get() const noexcept { return ptr_; } explicit operator bool() const noexcept { return (ptr_ != nullptr); } T& operator * () const { return *ptr_; } T* operator -> () const noexcept { return ptr_; } friend bool operator == (const unique_ptr& lhs, const unique_ptr& rhs) { return lhs.get() == rhs.get(); } friend bool operator != (const unique_ptr& lhs, const unique_ptr& rhs) { return !(lhs == rhs); } }; template <typename T> void swap(unique_ptr<T>& lhs, unique_ptr<T>& rhs) { lhs.swap(rhs); } } //kapil #endif Answer: Don't mark your class final without a good reason. It inhibits user-freedom. The default-access for members of a class is already private. 
Explicitly deleting copy-constructor and copy-assignment-operator is superfluous, as you define move-constructor and move-assignment, which already suppress them. Still, some would assert that being explicit adds clarity. I wonder why you didn't declare construction from T* to be constexpr... Try to consistently use the injected class-name (unique_ptr), instead of sporadically naming the template-arguments (unique_ptr<T>). You are missing implicit upcasting in the move-ctor and move-assignment-operator. template <class U, class = std::enable_if_t< std::has_virtual_destructor<T>() && std::is_convertible<U*, T*>()>> unique_ptr(unique_ptr<U>&& other) noexcept : ptr_(other.release()) {} template <class U> auto operator=(unique_ptr<U>&& other) noexcept -> decltype((*this = unique_ptr(std::move(other)))) { return *this = unique_ptr(std::move(other)); } Take a look at std::exchange(object, value) from <utility>. It allows you to simplify some of your code. If you use move-and-swap, you could isolate freeing of the pointee to the dtor. Having it in only one place ensures you always do it the same way, and is a good first step for retrofitting custom deleters. Not to mention that it in many cases simplifies the implementation. (ptr != nullptr) can be simplified to ptr. In contexts where you have to force the type, !!ptr. Why are op==() and op!=() inline-friend-functions, but swap() isn't? That's inconsistent. It's especially puzzling as they are all written to use the public interface only. There is exactly one place where you don't have a single empty line between two functions, but two. Yes, that's nothing big.
{ "domain": "codereview.stackexchange", "id": 33234, "tags": "c++, c++11, reinventing-the-wheel, pointers" }
Nature of magnetic field
Question: If I understand correctly a magnetic field is created by the exchange of photons between charged particles. Is it correct to assume that the magnetic field lines are the trajectories of said photons between the magnet poles? Answer: No, it would be incorrect to assume as much. Magnetic field lines are drawn by "connecting" the field vectors at each point so that they form a solid line. This is more or less a visual aid that is useful in calculations, as you can consider "how many" of the field lines pass through a surface to calculate something like flux. When people say that electromagnetism is mediated by photons, they mean that it is mediated by the photon field of which photons are quanta. This is represented on a Feynman diagram by the exchange of virtual photons but this should not be taken too literally. See the answers here and here for more discussion. Once you get to the level of talking about how a force is mediated by quanta, you are considering a quantum theory where the idea of a definite trajectory gets a bit murky.
{ "domain": "physics.stackexchange", "id": 60264, "tags": "electromagnetism" }
Something wrong with this definition of factorial with structural recursion?
Question: In The Algebra of Programming page 5, the authors defined structural recursion foldn (c, h) over natural numbers: f 0 = c f (n+1) = h (f n) They then went on to define factorial as follows: fact = outr . foldn ((0, 1), f) outr (m, n) = n f (m, n) = (m + 1, (m + 1) * n) This doesn't seem right: first of all, foldn ((0, 1), f) does not comply to its definition; secondly, this fold will never terminate, will it? Answer: This definition is correct. The syntax is almost Haskell code, so you can run it in a Haskell interpreter: foldn (c, h) = g where g 0 = c g n = h (g (n - 1)) fact = outr . foldn ((0, 1), f) outr (m, n) = n f (m, n) = (m + 1, (m + 1) * n) Usage in ghci: Prelude> :load fact.hs [1 of 1] Compiling Main ( fact.hs, interpreted ) Ok, modules loaded: Main. *Main> fact 0 1 *Main> fact 1 1 *Main> fact 2 2 *Main> fact 3 6 *Main> fact 4 24 *Main> fact 5 120 I don't know what you mean by “foldn ((0, 1), f) does not comply to its definition”. If you mean that it's missing an argument, then this is not a problem. It's a function waiting to be applied. foldn is a curried function: it takes an argument and returns a function, which itself takes an argument and returns something; this makes foldn look like a function that takes two arguments (the first of which is a pair). You can't evaluate foldn ((0, 1), f) any further (except to replace it by an anonymous function for which there is no source syntax) because its definition is by case on its argument: there's no way to know which one of g 0 = … or g (n+1) = … to use until you know whether the argument is 0. Once you pass an argument, you can evaluate the call. foldn ((0, 1), f) 0 = (0, 1) so fact 0 = (outr . foldn ((0, 1), f)) 0 = outr (foldn ((0, 1), f) 0) = outr (0, 1) = 1 fact 1 = (outr . foldn ((0, 1), f)) 1 = outr (foldn ((0, 1), f) 1) = outr (f (foldn ((0, 1), f) 0)) = outr (f (0, 1)) = outr (0 + 1, (0 + 1) * 1) = outr (1, 1) = 1 I'll let you work out other values by yourself. 
The computation always terminates, regardless of the function h (assuming only that h terminates), by a simple induction argument: each nested call to foldn is applied to a smaller integer. For any nonnegative parameter, eventually the argument will be 0 which does not involve recursion.
{ "domain": "cs.stackexchange", "id": 4261, "tags": "recursion, functional-programming" }
Witnesses for mathematical software
Question: I, like many people, am a keen user of mathematical software such as Mathematica and Maple. However, I have become increasingly frustrated by the many cases where such software simply gives you the wrong answer without warning. This can occur when performing all sorts of operations from simple sums to optimization amongst many other examples. I have been wondering what could be done about this serious problem. What is needed is some way to allow the user to verify the correctness of an answer that is given so that they have some confidence in what they are being told. If you were to get a solution from a math colleague she/he might just sit down and show you their working. However this is not feasible for a computer to do in most cases. Could the computer instead give you a simple and easily checkable witness of the correctness of their answer? Checking may have to be done by computer but hopefully checking the checking algorithm will be much easier than checking the algorithm to produce the witness in the first place. When would this be feasible and how exactly could this be formalized? So, in summary, my question is the following. Could it be possible at least in theory for mathematical software to provide a short checkable proof along with the answer you have asked for? A trivial case where we can do this immediately is for factorization of integers of course or many of the classic NP-complete problems (e.g. Hamiltonian circuit etc.). Answer: The concept of "witnesses" or "checkable proofs" is not totally new: as mentioned in the comments, look for the concept of "certificate". Three examples come to mind; there are more (in the comments and elsewhere): Kurt Mehlhorn described in 1999 a similar problem in computational geometry algorithms (e.g. 
minor errors in coordinates can yield big errors in the results of some algorithm), solved in a similar way in the library Leda, by insisting that each algorithm produces a "certificate" of its answer in addition to the answer itself. Demaine, Lopez-Ortiz and Munro in 2000 used the concept of certificates (they call them "proofs") to show adaptive lower bounds on the computation of the union and intersection (and difference, but this one is trivial) of sorted sets. Don't exclude their work because they did not use certificates to protect against computing errors: they showed that even though the certificate can be linear in the size of the instance in the worst case, it is often shorter, and hence can be "checked" in sublinear time (given random access to the input as a sorted array or a B-Tree), and in particular in time less than required to compute such a certificate. I have been using the concept of certificates on various other problems since seeing Ian Munro presenting their implementation at Alenex 2001, and in particular for permutations (apologies for the shameless plug, another one is coming), where the certificate is shorter in the best case than in the worst or average case, which yields a compressed data structure for permutations. Here again, checking the certificate (i.e. the order) takes at most linear time, less than computing it (i.e. sorting). The concept is not always useful for error checking: there are problems where checking the certificate takes as much time as producing it (or simply producing the result). Two examples come to mind, one trivial and one complicated; Blum and Kannan (mentioned in the comments) give others. The certificate for proving that an element is not in an unsorted array of $n$ elements is obtained in $n$ comparisons and checked in the same time. 
The certificate for the convex hull in two and three dimensions, if the points are given in random order, takes as much bits to encode as comparisons to compute [FOCS 2009] (other shameless plug). The library Leda is the most general effort (that I know of) toward making deterministic certificate-producing algorithms the norm in practice. Blum and Kannan's paper is the best effort I saw to make it the norm in theory, but they do show the limits of this approach. Hope it helps...
{ "domain": "cstheory.stackexchange", "id": 2026, "tags": "cc.complexity-theory" }
Heat produced when dielectric inserted in a capacitor
Question: When a capacitor is connected to a battery, it stores $\frac{CV^2}{2}$ of energy, while the battery supplies $CV^2$. Therefore, $\frac{CV^2}{2}$ of energy gets lost as heat. When a capacitor is already charged and a dielectric is inserted in this charged capacitor (which is still connected to the battery), will there be any heat produced? Answer: The calculation by Ben clearly shows that the total energy stored in the battery and the capacitor is lower for the final situation than for the initial situation. Some energy has thus gone somewhere else. The question is: Where did this energy go? And are we allowed to calculate the force on the dielectric by taking the derivative of the energy with respect to the position of the dielectric? The answer is that it depends on the way you let the dielectric slide into the capacitor. (I consider a solid dielectric here.) If the dielectric is slowly inserted into the capacitor, there will be no energy converted into heat at all. A force is needed to prevent the dielectric from sliding in. The dielectric is thus performing work on the object that is holding it back. All the missing energy will be transferred to the object holding back the dielectric. In this situation calculating the force from the change in energy is justified. Note that in this situation, the voltage over the capacitor will remain constant during the insertion of the dielectric and the current that is required to charge the capacitor can be made arbitrarily low by choosing a low enough insertion velocity. The situation changes when instead of slowly inserting the dielectric, you let go of the dielectric and it is just left to move freely into the capacitor. In that case a large current is needed to increase the charge on the capacitor. The electrical resistance in the circuit will dissipate some energy into heat. The rest of the energy is converted into kinetic energy of the dielectric. 
If there is little mechanical resistance, the dielectric will shoot out the other side of the capacitor and will be pulled back again. It will oscillate in this way, until the oscillations are damped by the electrical and mechanical resistance. In this case it is not justified to calculate the force by just considering the energy in the capacitor and battery. One should also consider the energy dissipated in the resistance. Note that in this case the voltage over the capacitor is no longer constant. The voltage drop over the electrical resistance will cause a voltage difference between the battery and the capacitor.
{ "domain": "physics.stackexchange", "id": 46100, "tags": "electrostatics, classical-electrodynamics" }
unable to find gui node in gazebo ros package
Question: Hello, I am trying to execute a drone simulator in gazebo7/ros-kinetic. I've installed 'tum_simulator' from https://github.com/angelsantamaria/tum_simulator Next, I tried to test the following launch file <launch> <param name='/use_sim_time' value='true' /> <node name='empty_world_sever' pkg='gazebo_ros' type='gazebo' args="$(find cvg_sim_gazebo)/worlds/emtpy.world" respawn='false' output='screen'> </node> <node name='gazebo_gui' pkg='gazebo' type='gui' respawn='false' output='screen'/> </launch> When I execute the above launch file, it says 'cannot launch node of type gazebo/gazebo'. I heard there is some change between ros-kinetic, gazebo 7 and the prior version, so I found I need to change the pkg argument from gazebo to gazebo_ros. Now it says 'cannot launch node of type [gazebo_ros/gui]: can't locate node [gui] in package [gazebo_ros]' and I can't find what I should do to execute such a launch file. Please tell me what I should do. Originally posted by yoo on ROS Answers with karma: 1 on 2017-09-11 Post score: 0 Answer: Hello, If you installed the tum_simulator, you should have example launch files in "tum_simulator/cvg_sim_gazebo/launch". You can launch them with roslaunch: roslaunch cvg_sim_gazebo ardrone_testworld.launch In this version the gui client and server are both launched by default. 
If you only want the server then you need to add the option <arg name="gui" value="false"/> in the launch file: <?xml version="1.0"?> <launch> <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched --> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="world_name" value="$(find cvg_sim_gazebo)/worlds/empty_sky_AR.world"/> <arg name="gui" value="false"/> </include> <!-- Spawn simulated quadrotor uav --> <include file="$(find cvg_sim_gazebo)/launch/spawn_quadrotor.launch" > <arg name="model" value="$(find cvg_sim_gazebo)/urdf/quadrotor_sensors.urdf.xacro"/> </include> </launch> You may also want to look at that answer. Originally posted by mahe_antoine with karma: 26 on 2017-09-18 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 28822, "tags": "ros, gazebo-ros" }
Photon-Photon-scattering (Feynman diagram)
Question: The Feynman diagram for Delbrück scattering (photon-photon scattering) in the lowest order looks as in the picture: But why is the following diagram equal to one of the upper three? There are six possibilities to order the indices in the loop, but there are only three independent diagrams. Why? Answer: There are three inequivalent permutations of the external legs (with the same orientation of the loop momenta) which give rise to the three independent diagrams that you have drawn. Start off by labelling each of the external photons by $1,2,3$ and $4$ say. Then the possible configurations are given by, for example, sandwiching $1$ between $2$ and $4$, $1$ between $2$ and $3$ and $1$ between $3$ and $4$. This gives the following, 
{ "domain": "physics.stackexchange", "id": 37959, "tags": "scattering, feynman-diagrams" }
Finding a function from a name within a string
Question: The code is designed to see if the first alphabetic characters in the *str argument refers to a known function, as part of a calculator app, and sets *str to the first character after the name of the mathematical function if found, similar to the endptr argument in strtol, strtod, etc. I seek to make this code as direct as possible, eliminating redundant/unnecessary steps. static int c(const void *const restrict a, const void *const restrict b) { const char *const sa = *(const char *const *)a, *const sb = *(const char *const *)b; const size_t l = strlen(sb); const int cmp = memcmp(sa, sb, l); return cmp ? cmp : isalpha(sa[l]) ? sa[l]-sb[l] : 0; } static double (*func(const char **const str))(double) { static const char *const s[] = {"abs", "acos", "acosh", "asin", "asinh", "atan", "atanh", "cbrt", "ceil", "cos", "cosh", "erf", "erfc", "floor", "gamma", "ln", "log", "round", "sin", "sinh", "sqrt", "tan", "tanh", "trunc"}; static double (*const f[])(double) = {fabs, acos, acosh, asin, asinh, atan, atanh, cbrt, ceil, cos, cosh, erf, erfc, floor, tgamma, log, log10, round, sin, sinh, sqrt, tan, tanh, trunc}; const char *const *const r = bsearch(str, s, sizeof(s)/sizeof(*s), sizeof(*s), c); return r ? *str += strlen(r), f[r-s] : NULL; } Example Usage: const char *s = "log100"; printf("%g\n", func(&s)(strtod(s, NULL))); // Prints 2 Answer: Missing includes We need a definition for size_t and the mathematical functions for this to compile; we also use undefined functions whose return type is not int. I recommend including prototypes for all of them: #include <ctype.h> #include <math.h> #include <stdlib.h> #include <string.h> Even then, abs, gamma and ln are not defined. I'm guessing fabs, tgamma and log were intended (and log10 where log is used). restrict is unnecessary in the comparison function c(), since no values are modified. 
memcmp(sa, sb, l) is undefined behaviour unless we already know that string sa is longer than sb, as is the subsequent dereference sa[l]. On the other hand, sb[l] is known to be zero. Don't forget that we should be casting to unsigned char before passing characters to <ctype.h> functions. And if we ever want to use function names such as log2, we'll want to use isalnum() rather than isalpha(). I'm not a fan of the side-by-side arrays that must agree; that's a maintenance nightmare. Prefer an array of pairs to a pair of arrays. And we definitely need a comment telling future maintainers that the elements are required to be in sorted order. strlen(r) doesn't make sense given that r is a char** - was strlen(*r) intended? I'm surprised that it passes basic unit tests. Modified code I've addressed all the issues I identified above. The testing is minimal, and should be greatly expanded, probably using one of the available test frameworks. #include <ctype.h> #include <math.h> #include <stdlib.h> #include <string.h> struct math_fun { const char* name; double (*func)(double); }; static int compare_function_name(const void *a, const void *b) { const char *const *const key = a; const struct math_fun *const entry = b; const size_t entry_len = strlen(entry->name); const int cmp = strncmp(*key, entry->name, entry_len); if (cmp) { /* mismatch: return as is */ return cmp; } /* else b is a prefix of a - match only a complete word, else a > b */ return isalpha((unsigned char)(*key)[entry_len]); } /* If the first word of `str` matches a function name, returns the corresponding function and advances `str` to the next character, else returns a null function-pointer. */ static double (*func(const char **const str))(double) { static const struct math_fun functions[] = { /* N.B. 
must be in `strcmp()` order */ { "abs", fabs }, { "acos", acos }, { "acosh", acosh }, { "asin", asin }, { "asinh", asinh }, { "atan", atan }, { "atanh", atanh }, { "cbrt", cbrt }, { "ceil", ceil }, { "cos", cos }, { "cosh", cosh }, { "erf", erf }, { "erfc", erfc }, { "floor", floor }, { "gamma", tgamma }, { "ln", log }, { "log", log10 }, { "round", round }, { "sin", sin }, { "sinh", sinh }, { "sqrt", sqrt }, { "tan", tan }, { "tanh", tanh }, { "trunc", trunc } }; struct math_fun *match = bsearch(str, /* count */ functions, /* array */ sizeof functions / sizeof *functions, /* array len */ sizeof *functions, /* element size */ compare_function_name); /* comparator */ if (!match) { return 0; } /* modify argument to point after the function name */ *str += strlen(match->name); return match->func; } int main(void) { const char *input_tan = "tan"; const char *input_tanh = "tanh"; return func(&input_tan) != tan || *input_tan || func(&input_tanh) != tanh || *input_tanh; }
{ "domain": "codereview.stackexchange", "id": 44376, "tags": "performance, c" }
Is a literal instance of Russell's teapot possible?
Question: Much has been said about Russell's teapot, and I accept as obvious that a teapot would be too small to be detectable in an orbit around the Sun by any reasonable method today, or by any method that can reasonably be expected to develop in the near future. But how small is "too small"? Is it possible that there are bodies the size of Phobos (or even the Moon) in orbit around Uranus, for example? If there is anything I know about the Solar System, it's that the gaps in the distances between planets are huge. How can we be sure that there are no large bodies inside those gaps? I guess our observations are mostly based on (the absence of) gravitational anomalies, but how precise are they? Is the "limit of detectability" smaller for orbits around the Sun than for much larger orbits e.g. at Jupiter's distance? Answer: The brightness of light received from a light source (or an object that reflects light) is inversely proportional to the square of the distance. So if an astronomical object A which reflects light from the Sun back to Earth orbits at a distance of two AU from the Sun and an identical astronomical object B orbits at twice that distance, or at 4 AU, how much light will Earth get from the two objects at their oppositions? Since object B is twice as far away from the Sun as object A, its surface receives one quarter as much light from the Sun as object A. Since object B will be three times as far from Earth at opposition as object A will be at opposition, its reflected light will be one ninth as bright. So together, those two factors will make the light that Earth gets from object B at its opposition one thirty-sixth, or about 0.0278, as bright as the light that Earth gets from object A at its opposition. So the farther away from the Sun an astronomical object is, the less bright its reflected light will be as seen from Earth. 
And if an object C has a higher albedo, or is more reflective than object D of the same size, it will seem brighter at the same distance. The brighter a solar system object is as seen from Earth, the sooner it will be noticed and discovered. The dimmer a solar system object is as seen from Earth, the longer it can remain undiscovered. Thus the undiscovered objects in the solar system should be the ones which appear fainter as seen from Earth. They may be fainter and dimmer as seen from Earth because their albedo is lower than objects at the same distance. Or maybe because they are much smaller than other objects with the same albedo and distance. Or maybe they are very large objects with high albedos but are very faint because they are very far away. Over time, astronomers discover dimmer and dimmer objects in the solar system, objects which have darker surfaces, or are smaller, or are much farther from the Sun. So the largest solar system objects likely to be discovered in the future are likely to be very far from the Sun and the Earth and thus appear many times dimmer than objects of equal size in the inner solar system. You may have heard of the hypothetical Planet Vulcan, inside the orbit of Mercury, once used to explain certain problems with the orbit of Mercury. It is now known that Vulcan could not possibly exist. It would have been discovered long ago if it did exist. None of these claims has ever been substantiated after more than forty years of observation. It has been surmised that some of these objects—and other alleged intra-Mercurial objects—may exist, being nothing more than previously unknown comets or small asteroids. 
No vulcanoid asteroids have been found, and searches have ruled out any such asteroids larger than about 6 km (3.7 mi).[4] Neither SOHO nor STEREO has detected a planet inside the orbit of Mercury.[4][23] https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)[1] The largest solar system objects discovered in recent times have all been beyond the orbit of Neptune, and have been rather small. Only a few of the largest ones have been large enough to classify as dwarf planets, and they are all smaller than Earth's moon. It is possible that there may be one or more as yet undiscovered planets in the outer solar system. But since there are limits on how bright those planets could be and remain undiscovered, there are limits on how large and/or close they could be. As of 2016 the following observations severely constrain the mass and distance of any possible additional Solar System planet: An analysis of mid-infrared observations with the WISE telescope have ruled out the possibility of a Saturn-sized object (95 Earth masses) out to 10,000 AU, and a Jupiter-sized or larger object out to 26,000 AU.[6] WISE has continued to take more data since then, and NASA has invited the public to help search this data for evidence of planets beyond these limits, via the Backyard Worlds: Planet 9 citizen science project.[96] Using modern data on the anomalous precession of the perihelia of Saturn, Earth, and Mars, Lorenzo Iorio concluded that any unknown planet with a mass of 0.7 times that of Earth must be farther than 350–400 AU; one with a mass of 2 times that of Earth, farther than 496–570 AU; and finally one with a mass of 15 times that of Earth, farther than 970–1,111 AU.[97] Moreover, Iorio stated that the modern ephemerides of the Solar System outer planets has provided even tighter constraints: no celestial body with a mass of 15 times that of Earth can exist closer than 1,100–1,300 AU.[98] However, work by another group of astronomers using a more comprehensive model of 
the Solar System found that Iorio's conclusion was only partially correct. Their analysis of Cassini data on Saturn's orbital residuals found that observations were inconsistent with a planetary body with the orbit and mass similar to those of Batygin and Brown's Planet Nine having a true anomaly of −130° to −110° or −65° to 85°. Furthermore, the analysis found that Saturn's orbit is slightly better explained if such a body is located at a true anomaly of 117.8° (+11°, −10°). At this location, it would be approximately 630 AU from the Sun.[99] https://en.wikipedia.org/wiki/Planets_beyond_Neptune New moons of the four giant planets are still being discovered, but the recent discoveries are all of tiny objects, less than 10 kilometers in diameter.
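The brightness argument above can be made quantitative with a rough model (my own illustrative sketch, not from the answer): for an object shining by reflected sunlight, the flux received at Earth scales roughly as albedo times radius squared over (Sun distance squared times Earth distance squared), which for distant objects is approximately albedo times R squared over d to the fourth, phase effects ignored.

```python
# Rough reflected-brightness model (illustrative sketch): flux at Earth
# ~ albedo * R^2 / (d_sun^2 * d_earth^2) ~ albedo * R^2 / d^4 for
# objects far from both the Sun and the Earth.
def relative_flux(albedo, radius_km, dist_au):
    return albedo * radius_km**2 / dist_au**4

# An otherwise identical object moved twice as far away appears ~16x
# fainter, which is why a large planet can hide only at great distance.
dim_ratio = relative_flux(0.1, 500.0, 80.0) / relative_flux(0.1, 500.0, 40.0)
```

This is why the constraints quoted above trade off mass against distance: a Saturn-sized body could only have escaped detection if it were extremely far away.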
{ "domain": "astronomy.stackexchange", "id": 4825, "tags": "observational-astronomy, gravity, asteroids, natural-satellites, size" }
Code Generator Generator
Question: I've created a code generator generator, which is hosted here. I need its parser portion reviewed for OOP, OOD, and C++ best practices. gengenparser.h #ifndef GENGENPARSER_H #define GENGENPARSER_H #include <string> #include <boost/algorithm/string.hpp> #include "linecodegenerator.h" #include "staticcodegetter.h" #include "codeappender.h" #include "codejoiner.h" #include "postparser.h" enum SingleLineParseMode { LINEMODE_CODE, LINEMODE_TEMPLATE }; enum BlockType { BLOCK_PREHEADER, BLOCK_HEADER, BLOCK_FOOTER, BLOCK_POSTFOOTER, BLOCK_CODE, BLOCK_UNKNOWN }; enum BlockMode { BLOCKMODE_CODE, BLOCKMODE_TEMPLATE }; static std::string TOKEN_LINEDUMP("$$$"); static std::string TOKEN_INDENTNEXT("$$>"); static std::string TOKEN_INDENTEQUAL("$=>"); static std::string TOKEN_INDENTDEPTHOFTWO("$>>"); static std::string TOKEN_UNINDENTNEXT("<$$"); static std::string TOKEN_UNINDENTEQUAL("<=$"); static std::string TOKEN_UNINDENTDEPTHOFTWO("<<$"); static std::string TOKEN_INLINE_START("{$$"); static std::string TOKEN_INLINE_END("$$}"); static std::string TOKEN_PREHEADER("$$PREHEADER"); static std::string TOKEN_HEADER("$$HEADER"); static std::string TOKEN_FOOTER("$$FOOTER"); static std::string TOKEN_POSTFOOTER("$$POSTFOOTER"); static std::string TOKEN_CODEBLOCK("$$CODE"); static std::string TOKEN_ENDBLOCK("$$END"); class GenGenParser { private: LineCodeGenerator* mLinecode; StaticCodeGetter* mStaticGetter; PostParser* mPostParser; CodeAppender mAppender; unsigned int indentCount; bool IsStrIn(std::string& str, int pos, std::string& checkStr); void LineModeParse(std::string& line, int size); public: GenGenParser(LineCodeGenerator* linecode, StaticCodeGetter* staticGetter, PostParser* postParser); void Parse(); void PostParse(); }; #endif // GENGENPARSER_H gengenparser.cpp #include "gengenparser.h" bool GenGenParser::IsStrIn(std::string& str, int pos, std::string& checkStr) { int check_str_size = checkStr.length(); int str_size = str.length(); if (pos + check_str_size > 
str_size) { return false; } for (int i = 0; i < check_str_size; i++) { if (str[pos + i] != checkStr[i]) { return false; } } return true; } void GenGenParser::LineModeParse(std::string& line, int size) { std::string token(""); SingleLineParseMode mode = LINEMODE_CODE; mLinecode->StartLine(); for (int i = 0; i < size; ++i) { if (this->IsStrIn(line, i, TOKEN_INLINE_START)) { i += 2; if (!token.empty()) { mLinecode->WriteCodePrintingCode(token); token.clear(); } mode = LINEMODE_TEMPLATE; continue; } else if (this->IsStrIn(line, i, TOKEN_INLINE_END)) { i += 2; if (!token.empty()) { mLinecode->WriteCode(token); token.clear(); } mode = LINEMODE_CODE; continue; } char chr = line[i]; if (mode == LINEMODE_CODE) { mLinecode->EscapedAppend(token, chr); } else { token.push_back(chr); } } mLinecode->WriteCodePrintingCode(token); mLinecode->EndLine(); mAppender.AppendToCodeBody(mLinecode->CalculateIndent(indentCount) + mLinecode->GetGeneratedCode()); } GenGenParser::GenGenParser(LineCodeGenerator *linecode, StaticCodeGetter *staticGetter, PostParser *postParser) { mLinecode = linecode; mStaticGetter = staticGetter; mPostParser = postParser; indentCount = staticGetter->GetStartingIndent(); } void GenGenParser::Parse() { std::string line; BlockMode blockMode = BLOCKMODE_TEMPLATE; BlockType blockType = BLOCK_UNKNOWN; while (std::getline(std::cin, line)) { std::string trimmedLine = boost::trim_copy(line); if (boost::equal(trimmedLine, TOKEN_ENDBLOCK)) { blockMode = BLOCKMODE_TEMPLATE; blockType = BLOCK_UNKNOWN; continue; } else if (boost::equal(trimmedLine, TOKEN_PREHEADER)) { blockMode = BLOCKMODE_CODE; blockType = BLOCK_PREHEADER; continue; } else if (boost::equal(trimmedLine, TOKEN_HEADER)) { blockMode = BLOCKMODE_CODE; blockType = BLOCK_HEADER; continue; } else if (boost::equal(trimmedLine, TOKEN_FOOTER)) { blockMode = BLOCKMODE_CODE; blockType = BLOCK_FOOTER; continue; } else if (boost::equal(trimmedLine, TOKEN_POSTFOOTER)) { blockMode = BLOCKMODE_CODE; blockType = 
BLOCK_POSTFOOTER; continue; } else if (boost::equal(trimmedLine, TOKEN_CODEBLOCK)) { blockMode = BLOCKMODE_CODE; blockType = BLOCK_CODE; continue; } if (blockMode == BLOCKMODE_CODE) { switch (blockType) { case BLOCK_PREHEADER: mAppender.AppendToPreHeader(line); break; case BLOCK_HEADER: mAppender.AppendToHeader(line); break; case BLOCK_FOOTER: mAppender.AppendToFooter(line); break; case BLOCK_POSTFOOTER: mAppender.AppendToPostFooter(line); break; case BLOCK_CODE: mAppender.AppendToCodeBody(line); break; default: break; } continue; } if (this->IsStrIn(line, 0, TOKEN_LINEDUMP)) { mAppender.AppendToCodeBody(mLinecode->CalculateIndent(indentCount) + line.substr(TOKEN_LINEDUMP.length())); continue; } else if (this->IsStrIn(line, 0, TOKEN_INDENTNEXT)) { mAppender.AppendToCodeBody(mLinecode->CalculateIndent(indentCount++) + line.substr(TOKEN_INDENTNEXT.length())); continue; } else if (this->IsStrIn(line, 0, TOKEN_INDENTEQUAL)) { mAppender.AppendToCodeBody(mLinecode->CalculateIndent(++indentCount) + line.substr(TOKEN_INDENTEQUAL.length())); continue; } else if (this->IsStrIn(line, 0, TOKEN_INDENTDEPTHOFTWO)) { mAppender.AppendToCodeBody(mLinecode->CalculateIndent(++indentCount) + line.substr(TOKEN_INDENTDEPTHOFTWO.length())); indentCount++; continue; } else if (this->IsStrIn(line, 0, TOKEN_UNINDENTNEXT)) { mAppender.AppendToCodeBody(mLinecode->CalculateIndent(indentCount) + line.substr(TOKEN_UNINDENTNEXT.length())); if (indentCount > 0) { --indentCount; } continue; } else if (this->IsStrIn(line, 0, TOKEN_UNINDENTEQUAL)) { if (indentCount > 0) { --indentCount; } mAppender.AppendToCodeBody(mLinecode->CalculateIndent(indentCount) + line.substr(TOKEN_UNINDENTEQUAL.length())); continue; } else if (this->IsStrIn(line, 0, TOKEN_UNINDENTDEPTHOFTWO)) { if (indentCount > 0) { --indentCount; } mAppender.AppendToCodeBody(mLinecode->CalculateIndent(indentCount) + line.substr(TOKEN_UNINDENTDEPTHOFTWO.length())); if (indentCount > 0) { --indentCount; } continue; } 
this->LineModeParse(line, line.length()); } } void GenGenParser::PostParse() { CodeJoiner joiner(mAppender, mStaticGetter); mPostParser->PostParse(joiner.GetCode()); } codeappender.h #ifndef CODEAPPENDER_H #define CODEAPPENDER_H #include <string> class CodeAppender { private: std::string mStdStrPreHeader; std::string mStdStrHeader; std::string mStdStrCodeBody; std::string mStdStrFooter; std::string mStdStrPostFooter; public: virtual void AppendToPreHeader(const std::string& code); virtual std::string GetPreHeader(); virtual void AppendToHeader(const std::string& code); virtual std::string GetHeader(); virtual void AppendToCodeBody(const std::string& code); virtual std::string GetCodeBody(); virtual void AppendToFooter(const std::string& code); virtual std::string GetFooter(); virtual void AppendToPostFooter(const std::string& code); virtual std::string GetPostFooter(); }; #endif // CODEAPPENDER_H codejoiner.h #ifndef CODEJOINER_H #define CODEJOINER_H #include <string> #include "codeappender.h" #include "staticcodegetter.h" class CodeJoiner { private: std::string mStdStrCode; public: CodeJoiner(CodeAppender codeAppender, StaticCodeGetter* staticCodeGetter); virtual std::string GetCode(); }; #endif // CODEJOINER_H postparser.h #ifndef POSTPARSER_H #define POSTPARSER_H #include <iostream> #include <string> class PostParser { public: virtual void PostParse(std::string code); }; #endif // POSTPARSER_H staticcodegetter.h #ifndef STATICCODEGETTER_H #define STATICCODEGETTER_H #include <string> class StaticCodeGetter { public: virtual std::string GetBeforePreHeader() = 0; virtual std::string GetAfterPreHeader() = 0; virtual std::string GetAfterHeader() = 0; virtual std::string GetBeforeFooter() = 0; virtual std::string GetAfterFooter() = 0; virtual std::string GetAfterPostFooter() = 0; virtual unsigned int GetStartingIndent() = 0; }; #endif linecodegenerator #ifndef LINECODEGENERATOR_H #define LINECODEGENERATOR_H #include <string> class LineCodeGenerator { public: virtual 
void StartLine() = 0; virtual void EndLine() = 0; virtual void EscapedAppend(std::string& token, char c) = 0; virtual void WriteCodePrintingCode(const std::string& escapedCodeToPrint) = 0; virtual void WriteCode(const std::string& code) = 0; virtual std::string GetGeneratedCode() = 0; virtual std::string CalculateIndent(unsigned int amount) = 0; }; #endif // LINECODEGENERATOR_H Answer: const is good to use on tokens that do not change, just in case. e.g. static const std::string TOKEN_LINEDUMP("$$$"); Your class GenGenParser contains a number of raw pointers, e.g. LineCodeGenerator*, which could be replaced with smart pointers to make ownership clear and memory handling easier. std::unique_ptr<LineCodeGenerator> mLineCode; ... Prefer references instead of pointers when you pass arguments to functions; that way you are sure in the function that they are defined and not null. Also, you can see that it is not clear from the function prototype GenGenParser(LineCodeGenerator* linecode, ... ) whether ownership is to be passed to it or not; using a smart pointer eliminates that ambiguity. e.g. here you state directly that GenGenParser only shares the object, it does not delete it. GenGenParser(shared_ptr<LineCodeGenerator> linecode, ...); There seems to be no error handling in your code; maybe it would be useful to print out a syntax error and the location where it occurred. Prefer to put implementation details, i.e. the private/protected parts, at the end of the class declaration. In a perfect world, a user who wants to use your class should not need to know the implementation. In general I find your code nicely structured. EDIT: rephrased according to comment, hopefully making it more clear: When you declare a class, put the private and protected parts below the public part, because the user of the class should not need to know about implementation details (design goal). class X { public: ... protected: ... private: ...
}; or go one step further and use the pimpl idiom and move all implementation details away from the class declaration to the .cpp file. example: class CodeAppender { public: ... private: struct sections; std::unique_ptr<sections> sectionsImpl; ... in the cpp file (note the CodeAppender:: qualifier needed to define the nested struct out of line) struct CodeAppender::sections { std::string mStdStrPreHeader; std::string mStdStrHeader; std::string mStdStrCodeBody; std::string mStdStrFooter; std::string mStdStrPostFooter; }; CodeAppender::CodeAppender() : sectionsImpl(std::make_unique<sections>()) {}
{ "domain": "codereview.stackexchange", "id": 12045, "tags": "c++, beginner, c++11, parsing, boost" }
Meaning of wavelet and scaling coefficients
Question: What is the meaning of wavelet coefficients and scaling coefficients? E.g. for a sequence I obtained the following wavelet coefficients. How am I supposed to interpret them? I used the wavelets package in R. The sequence concerns a price time series of 21 days. I used the Haar wavelet. Answer: Hope this helps:... The fast wavelet transform (FWT) is a mathematical transform which results in a series of orthogonal coefficients of varying resolution. Assume an input signal or time series $S$ of length $n=2^{j}$. The length of $S$ must be equal to some power of 2, that is, $n$ equal to 8, 16, 32, 64, 128, 256, 512, 1024, etc. To transform a signal into wavelets, at each resolution the signal is convolved using a series of scaling coefficients, $a_0$, $a_1$, $a_2$, $\ldots$ representing a low-pass filter and a sequence of $d_0$, $d_1$, $d_2$, $\ldots$ of a high-pass filter. Scaling function coefficients, $\phi$. At the $(j-1)$th resolution, the convolution is taken over the signal's original length, $2^j$, in the form \begin{equation} \phi_k = a_{-1}s_{2k+1} + a_0 s_{2k}, \end{equation} where $k=0,1,2,\ldots,2^{j-1}-1$ are the indices of each scaling function, $s_0, s_1, \ldots, s_{2^{j}-1}$ are the original input signal elements, and $a_{-1}$ and $a_0$ are constants from the Haar scaling function. An example of expanding the notation for the $j-1$ resolution is \begin{equation} \begin{split} \phi_0 &= a_{-1}s_1 + a_0 s_0\\ \phi_1 &= a_{-1}s_3 + a_0 s_2\\ \vdots\\ \phi_{2^{j-1}-1} &= a_{-1}s_{(2^j-1)} + a_0 s_{(2^j-2)}.\\ \end{split} \end{equation} Consider a signal $S$ of length $n=1024=2^{10}$ $(j=10)$, and assume Haar scaling function values of $a_{-1}=a_0=0.5$. Although the length of $S$ is 1024, the notation used to represent the series of values is $s_0$, $s_1$, $s_2$, $\ldots$, $s_{1023}$.
The first convolution of $S$ will result in 512 ($2^{10-1}=2^9$) scaling function coefficients, which are obtained using the relationship \begin{equation} \phi_k = 0.5 s_{2k+1} + 0.5 s_{2k} \end{equation} where $k=0,1,\ldots,511$. This translates to \begin{equation} \begin{split} \phi_0 &= 0.5 s_{1} + 0.5 s_{0}\\ \phi_1 &= 0.5 s_{3} + 0.5 s_{2}\\ \vdots\\ \phi_{511} &= 0.5 s_{1023} + 0.5 s_{1022}\\ \end{split} \end{equation} It is easily noticed that the scaling function coefficients are the average of each neighboring pair of signal elements. Wavelet coefficients, $\psi$. The wavelet coefficient is essentially based on the difference between each neighboring pair of signal elements. At the $j-1$ resolution, these are \begin{equation} \begin{split} \psi_0 &= -d_{-1}s_1 + d_0 s_0\\ \psi_1 &= -d_{-1}s_3 + d_0 s_2\\ \vdots\\ \psi_{2^{j-1}-1} &= -d_{-1}s_{(2^j-1)} + d_0 s_{(2^j-2)}\\ \end{split} \end{equation} where $d_{-1}$ and $d_0$ are both 0.5 based on the Haar wavelet. For our first convolution of signal $S$ of length 1024 at resolution $j-1$, the 512 wavelet coefficients are \begin{equation} \begin{split} \psi_0 &= -0.5 s_{1} + 0.5 s_{0}\\ \psi_1 &= -0.5 s_{3} + 0.5 s_{2}\\ \vdots\\ \psi_{511} &= -0.5 s_{1023} + 0.5 s_{1022}\\ \end{split} \end{equation}
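The pairwise averages and half-differences above are easy to reproduce directly. Here is a minimal sketch of one Haar decomposition level with $a_{-1}=a_0=d_{-1}=d_0=0.5$ (note the power-of-2 length requirement, which is why a 21-day series has to be padded or truncated by the software):

```python
# One level of the Haar FWT as written above: scaling coefficients are
# pairwise averages, wavelet coefficients are pairwise half-differences.
def haar_level(s):
    assert len(s) % 2 == 0, "length must be even (ideally a power of 2)"
    phi = [0.5 * s[2*k + 1] + 0.5 * s[2*k] for k in range(len(s) // 2)]
    psi = [-0.5 * s[2*k + 1] + 0.5 * s[2*k] for k in range(len(s) // 2)]
    return phi, psi

phi, psi = haar_level([4, 2, 6, 8])   # phi = [3.0, 7.0], psi = [1.0, -1.0]
```

For a price series, a large $|\psi_k|$ flags a big change between the $k$-th pair of neighboring prices, while the $\phi_k$ carry the smoothed trend that is decomposed again at the next level.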
{ "domain": "dsp.stackexchange", "id": 11850, "tags": "wavelet, r" }
What do I need to do to find the stopping time of a decelerating car?
Question: The question is: A car can be stopped from initial velocity 84 km/h to rest in 55 meters. Assuming constant acceleration, find the stopping time. Sorry for my ignorance, but I need to review my physics knowledge. Can anyone explain this question to me? Answer: Well, as the acceleration is constant you can use the equations of uniformly accelerated motion. Here, you are given $v_0 = $ the initial velocity and $d = $ the distance traveled by the car before stopping, and as the car eventually stops, you also know the final velocity $v$, which is zero in this case. Using two equations of motion, (a) $v^2 - v_0^2 = 2 a d$ and (b) $v = v_0 + at $ you can get the formula for time as, $$ t = \frac{2d}{(v+v_0)}$$ Hence, calculate the time taken before stopping.
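Plugging in the numbers from the question gives about 4.7 s (the helper name below is my own; note the km/h to m/s conversion):

```python
# Numeric check of t = 2d / (v + v0) with v0 = 84 km/h, d = 55 m, v = 0.
def stopping_time(v0_kmh, d_m, v_ms=0.0):
    v0_ms = v0_kmh / 3.6          # convert km/h to m/s (84 km/h -> ~23.33 m/s)
    return 2.0 * d_m / (v_ms + v0_ms)

t = stopping_time(84, 55)         # about 4.71 s
```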
{ "domain": "physics.stackexchange", "id": 2147, "tags": "homework-and-exercises, kinematics" }
Potential Energy of Interaction Between a Sphere and a Particle Formula Derivation
Question: A sphere of radius R has density described by ρ=ρ(r). Derive an equation for the potential energy of interaction between the sphere and a point particle of mass m at distance r from the center of the sphere. The sphere and the point particle interact according to the universal law of gravitation. Attempt: the gravitational potential generated by the sphere is $$ Φ=-\frac{4πGρ}r \int_0^r r'^2 \,dr'-4πGρ\int_r^R r' \,dr'$$ Is this correct? If yes, how do I relate this to the particle and derive the potential energy of interaction? Thanks in advance. Answer: The gravitational force acting on the particle as a function of radius $r$ is [Gauss's law] $$F(r) = -\frac{GmM(r)}{r^2}$$ where $M(r) = \int_0^r4\pi \rho(r')r'^2dr'$ is the mass contained within a radius $r$. Note that for $r>R$ we have $\rho(r) = 0$ and $M(r) = M(R) \equiv M$, the total mass of the whole sphere. The potential energy is the work needed to bring the particle to $r = \infty$. This work is given by $$V(r) = \int_r^\infty F(r')dr'$$ To evaluate it, consider the two cases separately: 1) the particle is outside the sphere, for which $$F(r) = -\frac{GmM}{r^2}\implies V(r) = -\frac{GMm}{r}$$ and 2) the particle is inside the sphere, for which $$F(r) = -\frac{GmM}{r^2}~~~~~~~\text{for}~~~~~~r>R$$ $$F(r) = -\frac{Gm\int_0^r4\pi r'^2\rho(r')dr'}{r^2} = -\frac{GmM r}{R^3}~~~~~~~\text{for}~~~~~~r<R$$ where the last equality only holds if we have a sphere of constant density $\rho$. In that case $$V(r) = \int_r^R F(r')dr' + \int_R^\infty F(r')dr' = -\int_r^R \frac{GmM r'}{R^3}dr' - \int_R^\infty \frac{GMm}{r'^2}dr' ~~~~\text{for}~~~r<R$$ again the last equality only holds if $\rho(r)$ is constant (and I leave the simple evaluation of the integral to you). This expression is just the sum of the potential energy for bringing the particle from $r$ out to $R$ and from $R$ to $\infty$, as expected.
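Carrying out the remaining integral for the constant-density case gives $V(r) = -\frac{GmM}{2R^3}\left(3R^2 - r^2\right)$ for $r<R$. A quick numeric sketch (my own check: trapezoidal integration, unit constants, "infinity" truncated at $200R$) confirms this matches the sum of the two integrals above:

```python
# Numeric check of V(r) for r < R with constant density, G = m = M = R = 1.
G = m = M = R = 1.0
r = 0.5

def trapz(f, a, b, n=100_000):
    # simple trapezoidal rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

inside = trapz(lambda x: -G * m * M * x / R**3, r, R)    # work from r out to R
outside = trapz(lambda x: -G * m * M / x**2, R, 200.0)   # from R outward (truncated)
V_numeric = inside + outside                             # ~ -1.370 (truncation ~1/200)
V_closed = -G * m * M * (3 * R**2 - r**2) / (2 * R**3)   # exactly -1.375 here
```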
{ "domain": "physics.stackexchange", "id": 20004, "tags": "homework-and-exercises, classical-mechanics, gravity, potential-energy, interactions" }
colcon build fails when adding pkg in setup.py
Question: Greetings: I am new to ROS2 and paid for a tutorial that has 10 sections. I understand everything so far. The course has covered building packages, nodes, topics, and servers. It shows me how to do this in python and C++. I use Ubuntu 20 and Visual code. The instructor provided source code for all the examples. I'm using Foxy. QUESTION: Where might I look to resolve a colcon build failure when I add the last item in this entry list? i.e. "add_two_ints_client = my_py_pkg.add_two_ints_client:main entry_points={ 'console_scripts':[ "py_node = my_py_pkg.my_first_node:main", "robot_news_station = my_py_pkg.robot_news_station:main", "smartphone = my_py_pkg.smartphone:main", "number_publisher = my_py_pkg.number_publisher:main", "number_counter = my_py_pkg.number_counter:main", "add_two_ints_server = my_py_pkg.add_two_ints_server:main", "add_two_ints_client_no_oop = my_py_pkg.add_two_ints_client_no_oop:main", "add_two_ints_client = my_py_pkg.add_two_ints_client:main)" ], I build correctly when I don't add the last line. The code registers clean and I even replaced mine with the instructor's. The result from colcon build is rather cryptic to me. I include it here in hopes someone has an idea. BTW, I did an update and upgrade and verified I have the latest colcon-extensions.
report from "stdout_stderr.log" file Traceback (most recent call last): File "/usr/lib/python3/dist-packages/colcon_core/executor/__init__.py", line 91, in __call__ rc = await self.task(*args, **kwargs) File "/usr/lib/python3/dist-packages/colcon_core/task/__init__.py", line 93, in __call__ return await task_method(*args, **kwargs) File "/usr/lib/python3/dist-packages/colcon_ros/task/ament_python/build.py", line 51, in build setup_py_data = get_setup_data(self.context.pkg, env) File "/usr/lib/python3/dist-packages/colcon_core/task/python/__init__.py", line 20, in get_setup_data return dict(pkg.metadata[key](env)) File "/usr/lib/python3/dist-packages/colcon_ros/package_identification/ros.py", line 129, in getter return get_setup_information( File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 241, in get_setup_information _setup_information_cache[hashable_env] = _get_setup_information( File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 281, in _get_setup_information result = subprocess.run( File "/usr/lib/python3.8/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/usr/bin/python3', '-c', "import sys;from setuptools.extern.packaging.specifiers import SpecifierSet;from distutils.core import run_setup;dist = run_setup( 'setup.py', script_args=('--dry-run',), stop_after='config');skip_keys = ('cmdclass', 'distclass', 'ext_modules', 'metadata');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith('_') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data['metadata'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in ('license_files', 'provides_extras')};sys.stdout.buffer.write(repr(data).encode('utf-8'))"]' returned non-zero exit status 1. 
Originally posted by Clark_Clark on ROS Answers with karma: 11 on 2021-07-05 Post score: 0 Original comments Comment by Clark_Clark on 2021-07-08: Oh my! Mea culpa. I feel so foolish but also grateful for the answer. Yes, my code works now. I will remember this as it is such a newbie error and I was drilling through the depths of complex reasons that include browsing through "core.py". Again, thank you for taking the time. Answer: "add_two_ints_client = my_py_pkg.add_two_ints_client:main)" Can you please have a look at the extra ")" at the end of this entry? Remove it and try to build again. I think this will solve your problem. If it does not solve your problem, feel free to drop a comment. Originally posted by Ranjit Kathiriya with karma: 1622 on 2021-07-06 This answer was ACCEPTED on the original site Post score: 0
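For reference, the corrected console_scripts list looks like this; the regex sanity check is my own addition, not part of the original setup.py, but it shows how the stray ")" makes the last entry fail to parse as "name = module.path:function":

```python
import re

# Corrected console_scripts list: the stray ")" at the end of the last
# entry is gone.
console_scripts = [
    "py_node = my_py_pkg.my_first_node:main",
    "robot_news_station = my_py_pkg.robot_news_station:main",
    "smartphone = my_py_pkg.smartphone:main",
    "number_publisher = my_py_pkg.number_publisher:main",
    "number_counter = my_py_pkg.number_counter:main",
    "add_two_ints_server = my_py_pkg.add_two_ints_server:main",
    "add_two_ints_client_no_oop = my_py_pkg.add_two_ints_client_no_oop:main",
    "add_two_ints_client = my_py_pkg.add_two_ints_client:main",
]

# Every entry must look like "name = module.path:function".
entry_re = re.compile(r"^\w+ = [\w.]+:\w+$")
bad = [e for e in console_scripts if not entry_re.match(e)]   # [] when all parse
```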
{ "domain": "robotics.stackexchange", "id": 36649, "tags": "ros, ros2, python3, colcon" }
How to remove and respawn robot
Question: Hi everyone I am a new user of ROS with Gazebo 5. I am wondering how to remove and respawn hector quadrotors during the simulation. More specifically, robots should be removed or respawned when certain conditions are met. Thank you very much Originally posted by jasonwang538@gmail.com on ROS Answers with karma: 31 on 2017-04-20 Post score: 0 Answer: Gazebo, when run through ROS, provides a ROS API for accessing and controlling the simulation. There is a good tutorial available in the Gazebo documentation: http://gazebosim.org/tutorials?tut=ros_comm Specifically for creating and destroying robots, you can use the spawn_model and delete_model services. If you want to do this based on conditions, you should create a node to manage your simulated robots. It should use these services and decide when to call them. You can also use the spawn_model node in the gazebo_ros package to spawn models from a launch file. For example: <node name="spawn_hector" pkg="gazebo_ros" type="spawn_model" args="-file hector.srdf -sdf -model hector_the_quadrotor"/> This is an alternative way to set up your simulation, but I think that if you are going to manage them as the simulation runs, then the node responsible for managing the models should also be responsible for the initial creation of them. Originally posted by Geoff with karma: 4203 on 2017-04-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jasonwang538@gmail.com on 2017-04-21: Thanks for telling me this Comment by jasonwang538@gmail.com on 2017-04-21: Hi, Geoff Sorry to bother you again. I have 2 questions. I use gazebo 5 with ROS jade. I cannot find service /gazebo/spawn_urdf_model. Should I install something? Which .hh file should I include? I cannot find that header file Thank you very much. Comment by Geoff on 2017-04-23: The gazebo_msgs package contains the service definitions. You will need to include the header files for the SpawnModel and DeleteModel messages (e.g.
gazebo_msgs/SpawnModel.h). By "cannot find service..." do you mean that the service is not visible in rosservice list?
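A minimal sketch of the manager-node idea from the answer above (names and structure are mine, not from the answer): keeping the decision logic free of ROS imports makes it testable without a running simulation. In a real node, delete_srv would be rospy.ServiceProxy('/gazebo/delete_model', DeleteModel) using gazebo_msgs.srv, and spawning would go through the SpawnModel service the same way.

```python
# Decide which models to remove; the delete service is injected so this
# logic can run without ROS. (Hypothetical helper, not from the answer.)
def manage_models(models, should_remove, delete_srv):
    """Call delete_srv(name) for every model whose condition holds;
    return the list of surviving model names."""
    kept = []
    for name in models:
        if should_remove(name):
            delete_srv(name)   # e.g. a proxy for gazebo_msgs/DeleteModel
        else:
            kept.append(name)
    return kept
```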
{ "domain": "robotics.stackexchange", "id": 27667, "tags": "hector-quadrotor" }
How close to circular is the Earth's equator
Question: Earth's orbit is about 99% circular. How circular is Earth around the equator? I know it bulges around the equator and is a spheroid. I also know it is not smooth and has oceans and mountains and all that. Is there a similar measure of eccentricity to describe how circular it is around the equator? Is it stretched into an ellipse by the sun/moon, and if so by how much? Just for comparison, does the Earth's equator deviate from a circle more than the Earth's orbit around the Sun deviates from a circle? Answer: My answer won't be complete, for lack of time and resources, but I still wanted to share some interesting aspects here that could be helpful. The difficulty in answering this question revolves around the complex and irregular shapes involved here. Also, finding a "best-fit" ellipse for comparison is not as easy as it seems, because it depends on what and how you want to model. The actual shape of the equator is rather complex and irregular. Generally speaking, it is pretty much circular, but indeed topography and the geoid complicate matters. The Earth's movement in the solar system is not a perfect ellipse either, because of the gravitational interactions with other celestial bodies. Let's review the different irregularities involved here. The Earth is usually modeled as an ellipsoid of revolution (oblate spheroid), and the equator as a circle. A good example of that is the WGS 1984 geodetic reference system used by the Global Navigation Satellite System. Of course, the equator is not a perfect circle, it has irregularities mainly because of topography, and even sea level itself is a little irregular too.
We can approximate sea level with a geoid, for example, here is a map of the EGM2008, a geoid used with WGS 1984 to transform ellipsoidal heights to geoid heights: Basically, this map shows the height of the geoid (the idealized sea level without the effects of tides and currents) with respect to the WGS84 reference ellipsoid of revolution (semi-major axis 6,378,137 m, semi-minor axis 6,356,752.314 m). The differences are mostly less than 100 meters, and are caused by the irregular distribution of mass inside the Earth itself. Now, some studies show that the Earth's shape could be slightly better modeled by a triaxial ellipsoid, and one could try to model the equator as an ellipse, and the Earth as a triaxial ellipsoid; however, even with a best-fit triaxial, we would still need geoid corrections for the irregular mean sea level, let alone topography, and geodetic computations would be more complex on a triaxial. Other funny models and names have come up over time, like a pear-shaped (because of a slight bulge in mid southern latitudes) model as a best-fit shape. But if you look at the map above, good luck visually finding the pear shape in there, or other aspects of these bumps that could be modeled mathematically. We are talking about very subtle differences here, that do not necessarily need to be taken into account for most purposes when describing the general shape of the Earth. So depending on how you consider the shape of the equator (i.e. by topography and ocean floor, or sea level) you will arrive at a shape that is mostly circular with irregular bumps along the way. There is no authoritative agreement that I know of about an eccentricity of the equator. For instance, this study proposes a flattening of about 70 meters for the equator's ellipse, and this article on Encyclopaedia Britannica proposes 80 meters. For the Earth's orbit, for the sake of this comparison, we can use a best-fit ellipse of 149.598 million km by 149.577 million km.
Of course, that is only an idealized ellipse; the real movement of the Earth in the Solar System is more complex. Finally, say we scale down and superimpose the Earth's orbit's ellipse on the equator to compare. The eccentricity of the Earth's orbit is 0.0167, the semi-major axis is 149.598 million km and the semi-minor axis is 149.577 million km. Scaled down by a factor of 23,455 to the equator's size, this corresponds to a difference of about 900 meters in both orbit axes. So I think we can agree that a best-fit ellipse of sea level along the equator is more circular than the Earth's orbit. However, topography-wise, there are bumps up to over 4,000 meters in the Andes, and the sea floor reaches 5,000 meters below sea level in several places. So the "topographic surface" equator would, for one thing, definitely appear more bumpy than the Earth's orbit. Unfortunately, I haven't found an example or study showing what a best-fit ellipse of the equator (including topography and bathymetry) could look like, mainly because we tend to approximate sea level, not topography itself, but with more time, tools and data, it might be possible to work out the answer with a "topographic best-fit" ellipse.
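The scaling comparison above can be reproduced in a few lines (orbit axes taken from the answer; the WGS84 equatorial radius sets the scale factor):

```python
import math

# Scale the orbit's best-fit ellipse down to the equator's size and
# compare: the axis difference shrinks to roughly 0.9 km (~900 m).
a_orbit = 149.598e6    # km, semi-major axis of Earth's orbit
b_orbit = 149.577e6    # km, semi-minor axis
r_equator = 6378.137   # km, WGS84 equatorial radius

eccentricity = math.sqrt(1.0 - (b_orbit / a_orbit) ** 2)   # ~0.0167
scale = a_orbit / r_equator                                # ~23,455
axis_diff_scaled_km = (a_orbit - b_orbit) / scale          # ~0.9 km
```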
{ "domain": "astronomy.stackexchange", "id": 3495, "tags": "orbit, earth, eccentric-orbit, eccentricity" }