Does the salt in the ocean act as a binding agent on sand?
Question: When I walk on the beach, there appears to be an upper layer of the sand that is 'crusty'. This appears to be the opposite of what you see in sand in a desert setting, where the individual particles are not bound together, and blow to form 'dunes'. I'm trying to figure out what causes the sand to bind together like this. To me the crystalline properties of salt could cause this (i.e., the way that salt binds together to form crystals could cause this salt-binding that leads to 'crustiness'). My question is: Does the salt in the ocean act as a binding agent on sand? Answer: Good observations! Seawater contains salts in small amounts, such as $\ce{LiCl}$ and $\ce{MgCl2}$, that are quite hygroscopic, which could keep the sand wet longer. There is also dissolved organic matter (http://www.eoearth.org/view/article/154471/) in seawater that might act as a hydrating agent or as a wetting agent, helping to hold onto the water and to bond it to the sand. You might try an experiment comparing the appearance of (washed) sand after being wet with plain water, with $\ce{NaCl}$ salt water of the same concentration as seawater, and with seawater itself to see if there is a difference. Though this seems like a casual test, it could be of significance in cases where it's necessary to drive on a sandy shore.
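The suggested control sample can be prepared from a quick back-of-the-envelope estimate. A minimal sketch, assuming the commonly quoted mean ocean salinity of roughly 35 g of dissolved salt per litre and treating it as if it were all $\ce{NaCl}$ (a simplification, since seawater also contains $\ce{MgCl2}$, sulfates, and more):

```python
# Estimate the molarity of an NaCl solution matching seawater, for the
# proposed comparison experiment.  Assumes ~35 g/L total salinity and
# treats all of it as NaCl -- an approximation for illustration only.
salinity_g_per_L = 35.0        # approximate mean ocean salinity, g/L
molar_mass_NaCl = 58.44        # g/mol
molarity = salinity_g_per_L / molar_mass_NaCl
print(f"~{molarity:.2f} mol/L NaCl")   # roughly 0.6 M
```

So dissolving about 35 g of table salt per litre of water gives a control solution close to seawater's ionic strength.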
{ "domain": "chemistry.stackexchange", "id": 4899, "tags": "water, aqueous-solution, solubility, home-experiment" }
Alternative representations of the momentum operator in position space
Question: The fundamental relation between the position and momentum basis in quantum mechanics is summed up in the canonical commutation relation: $[x,p]=i\hbar I$. From here, one can get to the matrix elements $\langle x |P|x'\rangle =-i\hbar\frac{\partial}{\partial x}\delta(x-x')$ using the fact: $(x-x')\,\delta(x-x')=0$. However, this representation is not unique. One can see this from the very fact that $-i\hbar \frac {\partial} {\partial x}+f(x)$ as the momentum operator in the position basis will satisfy the commutation relation just as perfectly. But I am struggling to see how exactly one takes care of this extra degree of freedom from the fundamental CCR. Shankar suggests that if one makes a unitary transformation (I guess inspired by the Stone-von Neumann theorem) of the X basis: $|x \rangle \to |y\rangle=e^{\frac{ig(X)}{\hbar}}|x\rangle=e^{\frac{ig(x)}{\hbar}}|x\rangle $, the X operator will have the same matrix elements as before, which is easily seen. But I can't figure out a way to get the P matrix elements in this rotated basis, which are supposed to be: $\langle y |P|y'\rangle =[-i\hbar\frac{\partial}{\partial x}+ f(x)]\; \delta(x-x')$ where $f(x)=\frac{d g}{d x}$. I gave it a try as shown below but wasn't able to calculate the messy integral at the end. Any help in this regard would be really appreciated! $\begin{align} \langle y |P|y'\rangle &=\int dx_1\int dx_2 \; \;\langle y |x_1 \rangle \langle x_1 |P| x_2\rangle \langle x_2 |y' \rangle \\ & = \int dx_1\int dx_2 \; \;(e^{\frac{-ig(x_1)}{\hbar}} \delta(x_1-x) ).( -i\hbar\frac{\partial}{\partial x_1}\delta(x_1-x_2)). (e^{\frac{ig(x_2)}{\hbar}}\delta(x_2-x'))\\ &= \qquad ? \end{align}$ I suspect there is some kind of integration by parts involved here, but there are just too many delta functions for me to handle this carefully. Answer: You are doing fine. Just keep going. 
Collapse both underived Dirac δ-functions, and apply the obvious identity $$ \delta(x-x') = \delta (x-x') e^{{i\over\hbar} (g(x)-g(x'))}, $$ to get $$ \langle y |P|y'\rangle =\int dx_1\int dx_2 \; \;\langle y |x_1 \rangle \langle x_1 |P| x_2\rangle \langle x_2 |y' \rangle \\ = \int dx_1\int dx_2 \; \;(e^{\frac{-ig(x_1)}{\hbar}} \delta(x_1-x) )\bigl( -i\hbar\frac{\partial}{\partial x_1}\delta(x_1-x_2)\bigr ) (e^{\frac{ig(x_2)}{\hbar}}\delta(x_2-x'))\\ = e^{\frac{-ig(x)}{\hbar}} e^{\frac{ig(x')}{\hbar}} ( -i\hbar)\frac{\partial}{\partial x} \Bigl ( \delta(x-x')e^{{i\over\hbar} (g(x)-g(x'))}\Bigr ) \\ = ( -i\hbar)\frac{\partial}{\partial x} \delta(x-x') +\frac{\partial g}{\partial x} \delta(x-x') ~. $$
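The non-uniqueness noted in the question is also easy to verify symbolically: adding any $f(x)$ to $-i\hbar\,\partial_x$ leaves the CCR intact. A quick SymPy check (the names `P`, `X`, `psi` are illustrative, not from the post):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)     # arbitrary test wavefunction
f = sp.Function('f')(x)         # arbitrary real function added to p

def P(w):
    # Candidate momentum operator in position space: -i*hbar*d/dx + f(x)
    return -sp.I * hbar * sp.diff(w, x) + f * w

def X(w):
    # Position operator acts by multiplication
    return x * w

# [X, P] applied to psi: the f(x) terms cancel, leaving i*hbar*psi(x)
comm = sp.simplify(X(P(psi)) - P(X(psi)))
print(comm)
```

The commutator evaluates to $i\hbar\,\psi(x)$ for every choice of $f$, which is exactly the degree of freedom the question asks about.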
{ "domain": "physics.stackexchange", "id": 80682, "tags": "quantum-mechanics, operators, momentum, commutator, dirac-delta-distributions" }
Magnetic field caused by electric field
Question: Does a sudden change in an electric field cause just a magnetic field, or a changing magnetic field? Once the electric field is established and is not changing, what will happen to the magnetic field which was caused by the changing electric field? Will the magnetic field (which was caused by the changing electric field) remain constant, or will it collapse? Answer: A changing electric field causes a magnetic field and vice versa. This is captured by two of Maxwell's equations. $$\oint\mathbf E\cdot d\mathbf l=-\dfrac{d\phi_B}{dt}$$ This is the well-known Faraday's law and states that a changing magnetic field creates an electric field (or induces an EMF). This induced electric field will be constant as long as the time derivative of the magnetic flux is constant. Now we have another equation, which describes the induction of a magnetic field by a changing electric field. This equation is also known as the Ampere-Maxwell equation. $$\oint\mathbf B\cdot d\mathbf l=\mu_0\epsilon_0\dfrac{d\phi_E}{dt}+\mu_0I$$ Let's assume the term $\mu_0I$ to be zero for a while, because it only matters when there is a current moving through a wire; then we're just left with $$\oint\mathbf B\cdot d\mathbf l=\mu_0\epsilon_0\dfrac{d\phi_E}{dt}$$ which is the displacement current term and will be non-zero in case of a changing electric field. It ensures the continuity of current, as in the case of a charging-discharging capacitor connected to an AC supply. It works in the same manner as the previous one. That means, if the rate of change of the electric field is constant, the magnetic field thus produced will also be constant. Once the electric field is established and is not changing then what will happen to the magnetic field which was caused by the changing electric field? We should notice that in both cases, once the change stops, i.e. the derivative becomes $0$, the left-hand side of the equation also goes to $0$. 
So yes, if the change stops, the generation of the other field is also stopped. Will the magnetic field (which was caused by the changing electric field) remain constant or collapse? It will remain constant as long as $\dfrac{d\phi_E}{dt}$ is constant. But as you slow down and finally stop changing the electric field, the magnetic field will collapse as in the case of a DC capacitor circuit after a very long time when the current goes to $0$.
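As a numerical illustration of the Ampere-Maxwell term: for a parallel-plate capacitor with a uniform field ramping at a constant rate, the displacement current (and hence the induced magnetic field) is constant, and it vanishes the moment the field stops changing. All numbers below are made up for illustration:

```python
# Displacement current I_d = eps0 * A * dE/dt for a parallel-plate
# capacitor (uniform field, fringing ignored).  A constant dE/dt gives
# a constant I_d, hence a steady induced B; dE/dt = 0 gives I_d = 0.
eps0 = 8.854e-12          # vacuum permittivity, F/m
area = 1.0e-4             # plate area, m^2 (illustrative value)

def displacement_current(dE_dt):
    return eps0 * area * dE_dt

print(displacement_current(1.0e12))  # steady ramp -> steady I_d (in A)
print(displacement_current(0.0))    # field established -> I_d vanishes
```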
{ "domain": "physics.stackexchange", "id": 63481, "tags": "electromagnetism, magnetic-fields, electric-fields" }
The proper self-energy diagrams for the Anderson model
Question: According to Fetter's book, the Feynman diagrams that contribute to the proper self-energy are as follows (figure not shown), where the first and second orders here refer to the perturbation expansion of the interacting Green's function. The question is: Why do only two figures (${\rm{(a)'}}$ and ${\rm{(e)}}$) contribute to the proper self-energy of the Anderson model? (see for example Fig. 6.20 in here) Answer: The Fetter&Walecka diagrams are written for a general Coulomb interaction, something like $$ V=\frac{1}{2}\sum_{k,k',q, q'}\sum_{\sigma,\sigma'}v_{k,k';q,q'}c_{k,\sigma}^\dagger c_{k',\sigma'}^\dagger c_{q',\sigma'}c_{q,\sigma}, $$ whereas the interaction in the Anderson model is $$ V=Un_\uparrow n_\downarrow=\frac{U}{2}\sum_{\sigma}n_\sigma n_{\bar{\sigma}}= \frac{U}{2}\sum_{\sigma}d_\sigma^\dagger d_\sigma d_{\bar{\sigma}}^\dagger d_{\bar{\sigma}}= \frac{U}{2}\sum_{\sigma,\sigma'}d_\sigma^\dagger d_{\sigma'}^\dagger d_{\sigma'}d_\sigma $$ There are important differences here in comparison to the full Coulomb interaction: only the exchange term is present, i.e., there is no term with equal spin projections ($\sigma'=\bar{\sigma}$ but no $\sigma'=\sigma$). There is only one orbital state for electrons, that is, we cannot have more than one electron of each spin projection, and we cannot have more than 2 electrons. The following should be checked by derivation (which I strongly recommend doing oneself, e.g., at zero temperature), but I would claim the following: b, c, d are really included in the self-energy by dressing the inner lines; f is actually the Kondo diagram and should not be neglected (although one usually arrives at the Kondo effect by expansion in tunneling). See also: Higher-order perturbation in Kondo problem. Anderson model without tunneling: It might be, however, that the reference cited considers the Anderson model without tunneling. In this case most of the diagrams would vanish due to the restricted phase space. 
In this case the Green's function can be calculated exactly with respect to any state: $$ |\psi\rangle=p_0|0\rangle+\sum_\sigma p_\sigma d_\sigma^\dagger |0\rangle + p_2d_\uparrow^\dagger d_\downarrow^\dagger |0\rangle= p_0|0\rangle+\sum_\sigma p_\sigma |\sigma\rangle + p_2|\uparrow\downarrow\rangle, $$ with the result that it contains only terms $$\frac{1}{\omega-\epsilon_\sigma\pm i0}\text{ and }\frac{1}{\omega-\epsilon_\sigma-U\pm i0},$$ the latter corresponding to the sum of the ladder diagrams, i.e. a' and e. TL;DR: An easy way to see which diagrams give zero contribution is by assigning a spin to every electron line. Electron spin does not change between two vertices, whereas an interaction line necessarily connects two electron lines of opposite spin.
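In the no-tunneling (atomic) limit just described, the pole positions can be read off directly from the many-body energies. A minimal numeric sketch, with arbitrary example values for $\epsilon_\sigma$ and $U$:

```python
# Atomic-limit Anderson model: many-body energy E(N) for occupation
# N = 0, 1, 2 (spin degenerate, so E(1) is the same for both spins).
# The addition energies E(1)-E(0) and E(2)-E(1) are exactly the pole
# positions eps and eps + U quoted in the answer above.
eps, U = -0.5, 2.0                      # illustrative values
E = {0: 0.0, 1: eps, 2: 2 * eps + U}    # U acts only on double occupancy
poles = [E[1] - E[0], E[2] - E[1]]
print(poles)   # [eps, eps + U]
```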
{ "domain": "physics.stackexchange", "id": 94281, "tags": "quantum-field-theory, feynman-diagrams, perturbation-theory, self-energy" }
Simple File Manager with recursion in C#
Question: Simple class to simplify runtime file loading of resources. The class reads all files from a given directory including all subdirectories and stores the name together with the path in a simple struct. If a file needs to be used by the program just pass the name to GetPath() and the class returns the location of the file. Any suggestions/improvements about my implementation of this? I'm especially not sure if making this static is the right choice. using System.IO; using System.Collections.Generic; namespace Sample { struct File { public string Name; public string Path; } static class FileManager { private static List<File> Files = new List<File>(); public static void AddFiles(string directory) { foreach(string file in Directory.GetFiles(directory)) { Files.Add(new File() { Name = Path.GetFileName(file), Path = directory }); } foreach(string subdirectory in Directory.GetDirectories(directory)) { AddFiles(subdirectory); } } public static string GetPath(string filename) { var File = Files.Find(x => x.Name == filename); return File.Path; } public static void ClearFiles() { Files.Clear(); } } } Answer: I don't think I would make this class static, unless it's going to be used extensively throughout the program. If not you can build up a large list of files, that may hang around in memory for no use, unless you remember to call ClearFiles(). Instead you could make a static method that could return an initialized object like: public static FileManager Create(string directoryPath) { FileManager fm = new FileManager(); fm.AddFiles(directoryPath); return fm; } If you have a need for it, then make this instance static somewhere in the application. public static string GetPath(string filename) { var File = Files.Find(x => x.Name == filename); return File.Path; } It returns only the first match of possibly more matches, which will be in a directory high in the hierarchy, but what if you actually seek a path to a file in a subdirectory? 
I think I would return a list/array/IEnumerable instead and let the client filter as needed. Besides that, file names are case-insensitive (on Windows), so you should do: Files.Find(x => string.Equals(x.Name, filename, StringComparison.CurrentCultureIgnoreCase)); public static void AddFiles(string directory) { foreach (string file in Directory.GetFiles(directory)) { Files.Add(new File() { Name = Path.GetFileName(file), Path = directory }); } foreach (string subdirectory in Directory.GetDirectories(directory)) { AddFiles(subdirectory); } } Nice recursive method. As an alternative you could consider using DirectoryInfo instead - it can handle the recursive search for you: DirectoryInfo directory = new DirectoryInfo(directoryPath); Files.AddRange( directory .GetFiles("*.*", SearchOption.AllDirectories) .Select(fi => new File { Name = fi.Name, Path = fi.DirectoryName })); There is no way to iterate through all the found File objects because the Files static member is private. I would consider providing a public IEnumerable of some kind. 
All in all, my implementation would look something like: public struct File { public string Name; public string Path; public override string ToString() { return $"{Name} => {Path}"; } } public class FileManager : IEnumerable<File> { private List<File> Files = new List<File>(); public void AddFiles(string directoryPath) { DirectoryInfo directory = new DirectoryInfo(directoryPath); Files.AddRange( directory .GetFiles("*.*", SearchOption.AllDirectories) .Select(fi => new File { Name = fi.Name, Path = fi.DirectoryName })); } public IEnumerable<string> GetPaths(string filename) { return Files .Where(x => string.Equals(x.Name, filename, StringComparison.CurrentCultureIgnoreCase)) .Select(f => f.Path); } public void Clear() { Files.Clear(); } public IEnumerator<File> GetEnumerator() { return Files.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } public static FileManager Create(string directoryPath) { FileManager fm = new FileManager(); fm.AddFiles(directoryPath); return fm; } }
{ "domain": "codereview.stackexchange", "id": 34198, "tags": "c#" }
Algebraic effects and handlers, dynamic effects
Question: What exactly are dynamic effects? What does it mean to dynamically create an effect? In a language with algebraic effects and handlers (such as Eff or Koka) one could already do different operations based on runtime information, for example: if x > 10 then get () else put 10 In this case the actual operation done will only be known at runtime. But in the Frank paper ("Do be do be do") they talk about dynamic effects in the sense of ML-style references. In what sense are references a dynamic effect that cannot be done in Eff (without resources) or Koka? Also why were resources removed again from Eff? Answer: Let us take the example, the input/output effect. Ordinarily, one presents this with two algebraic operations print and read. We can then imagine that things get "printed out" and "read in" from some sort of a communication channel. In fact, that's how I/O works in a typical operating system. Except that in a typical operating system one can open lots of communication channels: open files, bind to internet sockets, etc. Whenever a new channel opens, something new gets created dynamically (i.e., while the program is running). An operating system typically just creates a new integer, the "file handle", but that's just a cheap version of what we really want: an identifier that is guaranteed to be unique and that cannot be guessed by any part of the program, unless it was already given to it. (Integers can be guessed.) Let us call such a thing an instance. (In cryptography it is often called a nonce, but they cheat and think of it as an "unguessable integer". In programming languages we can make sure that instances are abstract tokens that really are unguessable.) There are other examples where we need to create new instances. One is state, i.e., a memory location with operations update and lookup. Typical programs want to allocate any amount of memory locations, each of which is then an instance of the state effect. 
In general, instances allow us to create local effects, such as local exceptions and local references. In many applications it is essential to have such effects, for instance so that we can guarantee that only a certain part of the program is allowed to use a certain effect. There is a tendency among the theoreticians to ignore effect instances because they complicate the theory. I feel that they should not be ignored because a programming language without dynamic creation of effects is next to useless. Who wants to write programs in which all memory allocations, files, and sockets, have to be specified ahead of time?
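The point about unguessable instances can be mimicked even in a language without effect handlers. In the sketch below (plain Python, not Eff/Koka/Frank syntax), each `Ref` is a freshly created identity that no other part of the program can forge, which is exactly what an integer file handle fails to guarantee:

```python
class Ref:
    """A dynamically created instance of the state effect (ML-style ref)."""

    def __init__(self, value):
        self._value = value      # the object identity *is* the instance

    def lookup(self):
        return self._value

    def update(self, value):
        self._value = value

# Two fresh instances created at runtime: they are distinct identities,
# even though they were created by identical expressions.
r1, r2 = Ref(0), Ref(0)
r1.update(10)                    # updating one cannot affect the other
```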
{ "domain": "cstheory.stackexchange", "id": 4213, "tags": "pl.programming-languages" }
Is spacetime not static inside a black hole?
Question: As I understood, spacetime is just spacetime inside and out of a black hole, except it is extremely curved inside. But the structure remains the same. I have read this question: What's the proper distance from the event horizon to the singularity? Inside the horizon, we can't have a ruler at rest. The spacetime inside the horizon is not static. In GR, is 'Static' the same as 'Time-symmetric'? A stationary spacetime is one that has a timelike Killing vector. There is also a notion of an asymptotically stationary spacetime, which is what some authors mean by "stationary." Although a stationary spacetime does not have a uniquely preferred time, it does prefer some time coordinates over others. In a stationary spacetime, it is always possible to find a “nice” t such that the metric can be expressed without any t-dependence in its components. A static spacetime is one that is not only stationary but also has the property that coordinates exist in which it is diagonal. (Coordinates will also exist in which it is not diagonal.) I need some clarification as to what static means here. Does this mean that the dimensions change, is it the same as GWs stretching and squeezing spacetime itself? Is the non-static spacetime one where the tidal forces change the distances between events? What do the off-diagonal elements of the metric tensor represent? Non-diagonal elements of the Schwarzschild metric As I understand based on the comments, when spacetime is not static, this means that the metric is not diagonal. When the metric is not diagonal, does that mean that spacetime is changing, like stretching/squeezing always, as if GWs would pass by always? For example, the Kerr metric is not diagonal because of the rotation, which is a physical phenomenon. I am asking for other physical phenomena inside the BH that are described by the non-diagonal metric, like stretching and squeezing. 
Question: What do we mean when we say that spacetime is not static inside a black hole? Answer: Static means that there is a family of observers, covering spacetime (which means that you can find one such observer at each point of space and time), such that spacetime doesn't change from their perspective.* In other words, as far as these observers are concerned, spacetime today is physically indistinguishable from spacetime yesterday. Observers with these properties only exist outside a black hole. Inside, every observer sees change, because in particular every observer eventually hits the singularity. Spacetime looks like a weird expanding/contracting universe, not like the inside of a static sphere. * Actually, there is one more technical requirement to distinguish static from stationary, but that doesn't matter here.
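The sign of $g_{tt}$ in the Schwarzschild metric makes this concrete: outside the horizon the Killing vector $\partial_t$ is timelike and the static observers of the answer exist; inside it becomes spacelike, so no observer can "stand still". A one-line check in geometric units ($G=c=1$, $M=1$):

```python
def g_tt(r, M=1.0):
    """Schwarzschild metric component g_tt in units G = c = 1."""
    return -(1.0 - 2.0 * M / r)

# Outside the horizon (r > 2M): g_tt < 0, partial_t timelike -> static observers.
# Inside the horizon (r < 2M): g_tt > 0, partial_t spacelike -> everyone sees change.
print(g_tt(3.0), g_tt(1.0))
```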
{ "domain": "physics.stackexchange", "id": 64533, "tags": "quantum-mechanics, general-relativity, black-holes, spacetime" }
Programming Challenge: Python 3 DNS query resolver using socket
Question: This is a DNS query resolver written in Python 3 using socket. I wrote it entirely by myself; it supports 8 primary DNS query types (A, NS, CNAME, SOA, PTR, MX, TXT, AAAA) and it is working correctly, albeit the code is a little ugly. Sample output: In [22]: print(json.dumps(dns_query('deviantart.com', '8.8.8.8', 'SOA'), indent=4)) { "Question": { "ID": "0425", "Flags": { "Hexadecimal": "8180", "Binary": "1000000110000000", "Breakdown": { "Response": true, "Operation Code": "Query", "Authoritative Answer": false, "Truncated": false, "Recursion Desired": true, "Recursion Available": true, "Reserved": 0, "Authenticated Answer": false, "Non-authenticated Answer": "Unacceptable", "Error Code": "NoError" } }, "Questions": 1, "Answers": 1, "Authorative Answers": 0, "Additional Resources": 0, "Name": "deviantart.com", "Type": "SOA", "Class": "INTERNET" }, "Answers": [ { "QName": "deviantart.com", "QType": "A", "QClass": "INTERNET", "Time-to-live": 232, "Data length": 4, "RData": "50.117.117.42" } ] } In [23]: print(json.dumps(dns_query('deviantart.com', '8.8.8.8', 'SOA'), indent=4)) { "Question": { "ID": "e52f", "Flags": { "Hexadecimal": "8180", "Binary": "1000000110000000", "Breakdown": { "Response": true, "Operation Code": "Query", "Authoritative Answer": false, "Truncated": false, "Recursion Desired": true, "Recursion Available": true, "Reserved": 0, "Authenticated Answer": false, "Non-authenticated Answer": "Unacceptable", "Error Code": "NoError" } }, "Questions": 1, "Answers": 1, "Authorative Answers": 0, "Additional Resources": 0, "Name": "deviantart.com", "Type": "SOA", "Class": "INTERNET" }, "Answers": [ { "QName": "deviantart.com", "QType": "A", "QClass": "INTERNET", "Time-to-live": 176, "Data length": 4, "RData": "103.97.3.19" } ] } In [24]: print(json.dumps(dns_query('google.com', '8.8.8.8', 'SOA'), indent=4)) { "Question": { "ID": "3d0a", "Flags": { "Hexadecimal": "85b0", "Binary": "1000010110110000", "Breakdown": { "Response": true, "Operation 
Code": "Query", "Authoritative Answer": true, "Truncated": false, "Recursion Desired": true, "Recursion Available": true, "Reserved": 0, "Authenticated Answer": true, "Non-authenticated Answer": "Acceptable", "Error Code": "NoError" } }, "Questions": 1, "Answers": 1, "Authorative Answers": 0, "Additional Resources": 0, "Name": "google.com", "Type": "SOA", "Class": "INTERNET" }, "Answers": [ { "QName": "google.com", "QType": "A", "QClass": "INTERNET", "Time-to-live": 60, "Data length": 4, "RData": "59.24.3.174" } ] } In [25]: print(json.dumps(dns_query('baidu.com', '8.8.8.8', 'SOA'), indent=4)) { "Question": { "ID": "9a50", "Flags": { "Hexadecimal": "8180", "Binary": "1000000110000000", "Breakdown": { "Response": true, "Operation Code": "Query", "Authoritative Answer": false, "Truncated": false, "Recursion Desired": true, "Recursion Available": true, "Reserved": 0, "Authenticated Answer": false, "Non-authenticated Answer": "Unacceptable", "Error Code": "NoError" } }, "Questions": 1, "Answers": 1, "Authorative Answers": 0, "Additional Resources": 0, "Name": "baidu.com", "Type": "SOA", "Class": "INTERNET" }, "Answers": [], "Authorative Answers": [ { "QName": "baidu.com", "QType": "SOA", "QClass": "INTERNET", "Time-to-live": 7200, "Data length": 31, "RData": { "Primary Name Server": "dns.baidu.com", "Responsible Authority's Mailbox": "sa.baidu.com", "Serial Number": 2012145250, "Refresh Interval": 300, "Retry Interval": 300, "Expire Limit": 2592000, "Minimum TTL": 7200 } } ] } Code import ipaddress import publicsuffix2 as psl import random import socket import validators from collections import defaultdict QTYPE = { 1: 'A', 2: 'NS', 5: 'CNAME', 6: 'SOA', 12: 'PTR', 15: 'MX', 16: 'TXT', 28: 'AAAA', 'A': 1, 'NS': 2, 'CNAME': 5, 'SOA': 6, 'PTR': 12, 'MX': 15, 'TXT': 16, 'AAAA': 28 } OPCODE = { 0: 'Query', 1: 'IQuery', 2: 'Status', 4: 'Notify', 5: 'Update', 6: 'DSO' } RCODE = { 0: 'NoError', 1: 'FormErr', 2: 'ServFail', 3: 'NXDomain', 4: 'NotImp', 5: 'Refused', 6: 
'YXDomain', 7: 'YXRRSet', 8: 'NXRRSet', 9: 'NotAuth', 10: 'NotZone', 11: 'DSOTYPENI' } def byte2int(by: bytes) -> int: if not isinstance(by, bytes): raise TypeError() return int.from_bytes(by, 'big') def byte2hex(by: bytes) -> str: if not isinstance(by, bytes): raise TypeError() return by.hex() def dns_opcode(n: int): if not isinstance(n, int): raise TypeError() if n not in OPCODE: raise ValueError() return OPCODE[n] def dns_rcode(n: int): if not isinstance(n, int): raise TypeError() if n not in RCODE: raise ValueError() return RCODE[n] def dns_cd(n: int): if n not in (0, 1): raise ValueError() return ['Unacceptable', 'Acceptable'][n] def dns_qclass(n: int): if n == 1: return 'INTERNET' else: raise ValueError('Invalid QCLASS value received') DECODE_HEADER = [byte2hex, byte2hex, byte2int, byte2int, byte2int, byte2int] FLAG_LENGTH = [1, 4, 1, 1, 1, 1, 1, 1, 1, 4] HEADERS = ['ID', 'Flags', 'Questions', 'Answers', 'Authorative Answers', 'Additional Resources'] FLAGS = [ 'Response', 'Operation Code', 'Authoritative Answer', 'Truncated', 'Recursion Desired', 'Recursion Available', 'Reserved', 'Authenticated Answer', 'Non-authenticated Answer', 'Error Code' ] SOA_NUMBERS = ['Serial Number', 'Refresh Interval', 'Retry Interval', 'Expire Limit', 'Minimum TTL'] DECODE_FLAG = [bool, dns_opcode, bool, bool, bool, bool, int, bool, dns_cd, dns_rcode] def decode_flags(flag: str) -> dict: if not isinstance(flag, str): raise TypeError() if not (len(flag) == 4 and all(i in '0123456789abcdef' for i in flag)): raise ValueError() flag = '{:016b}'.format(int(flag, 16)) index = 0 flags = [] for i, f in zip(FLAG_LENGTH, DECODE_FLAG): flags.append(f(int(flag[index:index+i], 2))) index += i return dict(zip(FLAGS, flags)) def decode_response(fields: bytes) -> dict: if not isinstance(fields, bytes): raise TypeError() if len(fields) != 10: raise ValueError() qtype = QTYPE[byte2int(fields[:2])] qclass = dns_qclass(byte2int(fields[2:4])) ttl = byte2int(fields[4:8]) length = 
byte2int(fields[8:10]) return { 'QType': qtype, 'QClass': qclass, 'Time-to-live': ttl, 'Data length': length } def valid_domain(domain): return (validators.domain(domain) and psl.get_sld(domain, strict=True)) def make_query(query, qtype): if not (isinstance(query, str) and isinstance(qtype, str)): raise TypeError('Parameters must be instances of `str`') qtype = QTYPE.get(qtype.upper(), None) if not qtype: raise ValueError('QTYPE is invalid or unsupported') if qtype == 12: if validators.ipv4(query): query = ipaddress.IPv4Address(query).reverse_pointer elif validators.ipv6(query): query = ipaddress.IPv6Address(query).reverse_pointer else: raise ValueError('QUERY is not a valid IPv4 or IPv6 address') else: if not (validators.domain(query) and (sld := psl.get_sld(query, strict=True))): raise ValueError('QUERY is not a valid web domain') if qtype in (2, 15, 16): query = sld return b''.join([ random.randbytes(2), b'\1\0\0\1\0\0\0\0\0\0', ''.join(chr(len(i)) + i for i in query.split('.')).encode('utf8'), b'\0', qtype.to_bytes(2, 'big'), b'\0\1' ]) class DNS_Parser: def __init__(self, response: bytes) -> None: if not isinstance(response, bytes): raise TypeError('Argument must be an instance of `bytes`') self.response = response self.names = dict() self.question = dict() self.answers = [] self.soa = [] self.position = 0 self.raw = dict() self.simple = dict() def check_bounds(self, pos: int): if not isinstance(pos, int): raise TypeError('Argument must be an instance of `int`') if pos >= len(self.response): raise IndexError('Index exceeds the maximum possible value') def read_stream(self, pos: int, recur: bool=False, length: int=0) -> str: self.check_bounds(pos) chunks = [] count = 0 while True: hint = self.response[pos] if hint == 0: if not recur: self.position = pos break elif hint == 192: index = self.response[pos+1] self.position = pos+1 if index in self.names: name = self.names[index] else: name = self.read_stream(index, True) self.names[index] = name chunks.append(name) 
pos += 2 count += 2 if not length or count == length: break else: continue pos += 1 count += 1 chunk = self.response[pos:pos+hint].decode('utf8') chunks.append(chunk) pos += hint count += hint return '.'.join(chunks) def parse_dns_query(self): pos = self.response[12:].index(0) query = self.response[:pos+17] headers = [f(query[:12][i:i+2]) for f, i in zip(DECODE_HEADER, range(0, 12, 2))] self.question = dict(zip(HEADERS, headers)) flags = self.question['Flags'] self.question['Flags'] = { 'Hexadecimal': flags, 'Binary': f'{int(flags, 16):016b}', 'Breakdown': decode_flags(flags) } name = self.read_stream(12) self.names[12] = name qtype = QTYPE[byte2int(query[pos+13:pos+15])] self.position = pos + 16 self.question.update({ 'Name': name, 'Type': qtype, 'Class': dns_qclass(byte2int(query[-2:])) }) def rdata_ipv4(self, pos: int) -> str: self.check_bounds(pos+3) return '.'.join([str(i) for i in self.response[pos:pos+4]]) def rdata_ipv6(self, pos: int) -> str: self.check_bounds(pos+15) return str(ipaddress.IPv6Address(self.response[pos:pos+16])) def rdata_txt(self, pos: int, length: int) -> dict: self.check_bounds(pos+length-1) return {'Text length': self.response[pos], 'Text': self.response[pos+1:pos+length+1].decode('utf8')} def rdata_mx(self, pos: int, length: int) -> dict: return {'Preference': byte2int(self.response[pos:pos+2]), 'Mail Exchange': self.read_stream(pos+2, length-2)} def rdata_soa(self, pos: int) -> dict: pns = self.read_stream(pos) ramx = self.read_stream(self.position+1) fields = self.response[self.position+1:self.position+21] soa = [byte2int(fields[i:i+4]) for i in range(0, 20, 4)] soa = dict(zip(SOA_NUMBERS, soa)) rdata = {'Primary Name Server': pns, "Responsible Authority's Mailbox": ramx} rdata.update(soa) self.position += 20 return rdata def parse_dns_answer(self): qname = self.read_stream(self.position+1) headers = decode_response(self.response[self.position+1:self.position+11]) answer = {'QName': qname} answer.update(headers) qtype = 
headers['QType'] length = headers['Data length'] if length == 0: raise ValueError('DNS message is malformed or invalid') if qtype == 'A': if length != 4: raise ValueError('DNS message is malformed or invalid') rdata = self.rdata_ipv4(self.position+11) self.position += 14 elif qtype == 'AAAA': if length != 16: raise ValueError('DNS message is malformed or invalid') rdata = self.rdata_ipv6(self.position+11) self.position += 26 elif qtype == 'TXT': rdata = self.rdata_txt(self.position+11, length) if length - rdata['Text length'] != 1: raise ValueError('DNS message is malformed or invalid') self.position += (10 + length) elif qtype in ('CNAME', 'NS', 'PTR'): rdata = self.read_stream(self.position+11, length) if not valid_domain(rdata): raise ValueError('DNS message is malformed or invalid') elif qtype == 'MX': if length == 3 and self.response[self.position+13] == 0: prefs = byte2int(self.response[self.position+11:self.position+13]) rdata = {'Preference': prefs, 'Mail Exchange': '<Root>'} self.position += 13 else: rdata = self.rdata_mx(self.position+11, length) mx = rdata['Mail Exchange'] if not valid_domain(mx): raise ValueError('DNS message is malformed or invalid') elif qtype == 'SOA': rdata = self.rdata_soa(self.position+11) pns, ramx = rdata['Primary Name Server'], rdata["Responsible Authority's Mailbox"] if not (valid_domain(pns) and valid_domain(ramx)): raise ValueError('DNS message is malformed or invalid') answer['RData'] = rdata self.answers.append(answer) if qtype != 'SOA' else self.soa.append(answer) def parse_dns_response(self): self.parse_dns_query() total = sum(( self.question['Answers'], self.question['Authorative Answers'], self.question['Additional Resources'] )) count = 0 while count < total: self.parse_dns_answer() count += 1 self.raw['Question'] = self.question self.raw['Answers'] = self.answers if self.soa: self.raw['Authorative Answers'] = self.soa def dns_query(query, address, qtype): request = make_query(query, qtype) sock = 
socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.settimeout(2) try: sock.sendto(request, (address, 53)) response = sock.recv(8192) except Exception as e: print(e) return finally: sock.close() parser = DNS_Parser(response) parser.parse_dns_response() return parser.raw It isn't complete yet, but it is indeed working properly and there are no bugs. How can I improve its performance, refactor the code, improve readability, make it more structured, group the functions into classes, reduce code duplication and increase reuse rate, etc? Well, I think I need to make something clear. Obviously this project wasn't done for practicality, I am not some sort of egomaniac arrogant enough to think my code is better than library code written by experienced professionals; This project was done in the name of learning only, it was a self-imposed challenge, I only did it to learn, in the hopes of improving my skills. This script is poorly written and hacked together, but I really did learn a lot from the experience, I aimed for the process, not the result; So if you don't like it and don't want to help me to improve, that's fine, just don't try to discourage me or say the script should be deleted. Well I have found a bug in the code, but since there are answers I can't edit the code. The length parameter in read_stream is useless, remove that. There is no check against recursion after the pointer jumped back, so the position indicator (self.position) might be falsely decremented, thus breaking the code. The solution is to put the indicator change inside if not recur: block. Unfortunately fixing the previously mentioned bug introduces yet another bug, I have only encountered the bug just now. Trying to query anything with TXT as QTYPE will raise the following: In [164]: dns_query('example.com', '114.114.114.114', 'TXT').raw --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) ... 
<ipython-input-161-6d0c784748c1> in rdata_txt(self, pos, length) 198 199 def rdata_txt(self, pos: int, length: int) -> dict: --> 200 return {'Length': self.response[pos], 'Text': self.response[pos+1:pos+length+1].decode('utf8')} 201 202 def rdata_mx(self, pos: int) -> dict: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 11: invalid start byte It can be fixed by simply removing +1 after length. The error wasn't there before. Anyways I have obtained something like this: In [178]: print(json.dumps(multi_query('en.wikipedia.org', '8.8.8.8', 'SOA'), indent=4)) { "Question": { "Name": "en.wikipedia.org", "Type": "SOA", "Class": "INTERNET" }, "Answers": { "A": [ "202.160.128.210", "173.252.105.21" ], "CNAME": [ "dyna.wikimedia.org" ] }, "Authority": { "SOA": [ { "MNAME": "ns0.wikimedia.org", "RNAME": "hostmaster.wikimedia.org", "Serial": 2022031717 } ] } } It combines information from multiple responses to one single query into a single dictionary: Answer: Don't repeat yourself You currently store the reverse mappings explicitly in QTYPE: QTYPE = { 1: 'A', 2: 'NS', 5: 'CNAME', 6: 'SOA', 12: 'PTR', 15: 'MX', 16: 'TXT', 28: 'AAAA', 'A': 1, 'NS': 2, 'CNAME': 5, 'SOA': 6, 'PTR': 12, 'MX': 15, 'TXT': 16, 'AAAA': 28 } Maybe just do it once and add then reverse dict programmatically: QTYPE = { 1: 'A', 2: 'NS', 5: 'CNAME', 6: 'SOA', 12: 'PTR', 15: 'MX', 16: 'TXT', 28: 'AAAA' } QTYPE = {**QTYPE, **{value: key for key, value in QTYPE.items()}} Runtime type checks Remove runtime type checks and with them the useless conversion functions. Python is a dynamically typed language. The parser will complain if an object does not support a certain method via an AttributeError. Also they are unnecessarily costly. Don't micromanage types (again) Consider: def dns_opcode(n: int): if not isinstance(n, int): raise TypeError() if n not in OPCODE: raise ValueError() return OPCODE[n] vs. 
def dns_opcode(n: int): return OPCODE[n] # Will throw a KeyError on invalid opcodes And with this the function becomes virtually useless, since you can call OPCODE[n] directly at the call site. Avoid magic numbers I have no idea what b'\1\0\0\1\0\0\0\0\0\0' is. Put it into a global variable with a descriptive name. Divide and conquer Some functions are currently pretty long, especially read_stream() and parse_dns_answer(). Consider splitting them into smaller functions, each dealing with a part of the problem. Since the latter has a long if/else block, it is a good candidate for splitting the parser into functions for each response type.
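On the magic-numbers point: that 10-byte literal is just the fixed part of the DNS header that follows the 16-bit transaction ID. A possible named version (a sketch — the constant names are my own invention, with the field layout taken from RFC 1035):

```python
import struct

# Header fields after the transaction ID: flags, QDCOUNT, ANCOUNT,
# NSCOUNT, ARCOUNT (all big-endian 16-bit integers).
# 0x0100 sets only the RD (recursion desired) bit; one question, no records.
FLAGS_RECURSION_DESIRED = 0x0100
HEADER_RD_ONE_QUESTION = struct.pack('>HHHHH', FLAGS_RECURSION_DESIRED, 1, 0, 0, 0)

# Same bytes as the magic literal in the original code:
assert HEADER_RD_ONE_QUESTION == b'\1\0\0\1\0\0\0\0\0\0'
```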
{ "domain": "codereview.stackexchange", "id": 43094, "tags": "python, python-3.x, programming-challenge, reinventing-the-wheel, socket" }
Does anyone use Julia programming language?
Question: Is anyone using Julia (http://julialang.org/) for professional jobs? Or using it instead of R, Matlab, or Mathematica? Is it a good language? If you had to predict the next 5-10 years: do you think it will grow enough to become a standard in data science, like R? Answer: I personally have used Julia for a good number of professional projects, and while, as Dirk mentioned, this is purely conjecture, I can give some insights on where Julia really stands out. The question of whether or not these reasons will prove enough to have Julia succeed as a language is anyone's guess. Distributed systems: Julia is the easiest language I've ever dealt with in terms of building distributed systems. This is becoming more and more relevant in computing, and will potentially become a deciding factor, but whether or not Julia's relative ease decides this is up for debate. JIT performance: Julia's JIT compiler is extremely fast, and while there is a lot of debate as to how accurate these benchmark numbers are, the Julia website shows a series of relevant benchmarks. Community: This is an area where Julia just isn't quite there. The community that is there is generally supportive, but not quite as knowledgeable as the R or Python communities, which is a definite minus. Extensibility: This is another place where Julia is currently lacking: there is a large disconnect between the implied code patterns that Julia steers you toward and what it can actually support. The type system is currently overly bulky and difficult to use effectively. Again, I can't say what this means for the future, but these are just a couple of relevant points when it comes to evaluating Julia, in my opinion.
{ "domain": "datascience.stackexchange", "id": 38, "tags": "tools, julia" }
Wrap INSERT statements to PDO transaction (PHP OOP)
Question: so Im trying to wrap SQL INSERT statements to PDO transaction in PHP OOP. I don't know if I'm doing it right and this is the best/easiest way to do it. So what is important for me is: There can be multiple(unpredictable) INSERT statement (which will be in loop in the final code) I want to wrap all of these queries to transaction and commit(execute) only at the end of the code (after loop in final code) And of course in PHP OOP. My code looks like this right now: class.inc.php class MyQuery { protected $DB_CONN; private $DB_insert_query; public function __construct($DB_CONN) { $this->PDO_CONN = $DB_CONN->DB_CONN(); } public function insert_query($insert_query) { $this->DB_insert_query = $insert_query; } public function commit() { try { $this->PDO_CONN->beginTransaction(); foreach ($this->DB_insert_query as $query) { $stmt = $this->PDO_CONN->prepare($query["query"]); $stmt->execute($query["params"]); } $this->PDO_CONN->commit(); } catch (PDOException $e) { $this->PDO_CONN->rollBack(); error_log("Error occurred. Error message: " . $e->getMessage() . ". File: " . $e->getFile() . ". Line: " . $e->getLine(), 0); } if ($stmt->rowCount()) { return true; } else { return false; } } } insert.php $DB_CONN = new DataBase; $myQuery = new MyQuery($DB_CONN); $insert_query[] = array("query" => "INSERT INTO mytable (name, text) VALUES (:name, :text);", "params" => array(":name" => "Name Test", ":text" => "Text Test") ); $myQuery->insert_query($insert_query); $insert_query[] = array("query" => "INSERT INTO mytable (name, text) VALUES (:name, :text);", "params" => array(":name" => "Another Name", ":text" => "Another Text") ); $myQuery->insert_query($insert_query); print_r($myQuery->commit()); Thank you for your help! Answer: I don't like the approach from the other answer as it violates the single responsibility principle. A function called insert_query shouldn't do things unrelated to, well, insert. And at the same time it is severely limiting your SQL. 
Let alone a straight up SQL injection. But I understand the desire to encapsulate the regular transaction routine, i.e. try { $pdo->beginTransaction(); foreach ($data as $row) { $pdo->prepare($row['sql'])->execute($row['params']); } $pdo->commit(); }catch (\Throwable $e){ $pdo->rollback(); throw $e; } to make this code less boilerplate-looking. I would suggest the approach used in Laravel's Eloquent: a function that accepts an anonymous function as a parameter. It gives you separation of concerns and enormous flexibility: not only insert queries are allowed, but any kind of query, or even PHP code in between. So it could be like class MyQuery { public $dbConn; public function __construct($dbConn) { $this->dbConn = $dbConn; } public function query($query, $params) { $stmt = $this->dbConn->prepare($query); $stmt->execute($params); return $stmt; } public function transaction(Callable $f) { try { $this->dbConn->beginTransaction(); $return = $f($this); $this->dbConn->commit(); return $return; } catch (\Throwable $e) { $this->dbConn->rollBack(); throw $e; } } } So it can be used like this $myQuery->transaction( function () use (/* variables you need*/) { // write any code you want to wrap into transaction }); and then using this generic transaction() method we can create a helper function for the specific multiple queries case public function multiQueryTransaction($queries) { $this->transaction( function () use ($queries) { foreach ($queries as $row) { $this->query($row['query'], $row['params']); } }); } So the final code would be like $insert_query[] = [ "query" => "INSERT INTO mytable (name, text) VALUES (:name, :text);", "params" => [":name" => "Name Test", ":text" => "Text Test"], ]; $insert_query[] = [ "query" => "INSERT INTO mytable (name, text) VALUES (:name, :text);", "params" => [":name" => "Another Name", ":text" => "Another Text"], ]; $pdo = new PDO ...; $myQuery = new MyQuery($pdo); $myQuery->multiQueryTransaction($insert_query); Also, regarding the MyQuery()
class in general, I would recommend to read this coding standard and this article of mine, Your first database wrapper's childhood diseases that can explain some flaws and obsoleted parts in the existing code that I fixed in my version.
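For readers coming from other languages, the same callback-style transaction pattern can be sketched in Python with sqlite3 (illustration only — the names and the sqlite3 choice are mine, not part of the reviewed PHP code):

```python
import sqlite3

def transaction(conn, fn):
    """Run fn(conn) inside a transaction: commit on success, roll back on error."""
    try:
        conn.execute("BEGIN")
        result = fn(conn)
        conn.commit()
        return result
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.isolation_level = None   # manage transactions explicitly
conn.execute("CREATE TABLE mytable (name TEXT, text TEXT)")

# All inserts commit together...
transaction(conn, lambda c: c.executemany(
    "INSERT INTO mytable VALUES (?, ?)",
    [("Name Test", "Text Test"), ("Another Name", "Another Text")]))

# ...and a failure inside the callable rolls everything in it back.
def bad(c):
    c.execute("INSERT INTO mytable VALUES (?, ?)", ("x", "y"))
    raise RuntimeError("boom")

try:
    transaction(conn, bad)
except RuntimeError:
    pass

count = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]   # 2
```

Only the two rows from the successful callable survive; the failed one is rolled back, which is exactly the behavior the PHP version aims for.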
{ "domain": "codereview.stackexchange", "id": 43587, "tags": "php, pdo" }
Why do we need to take the derivative of the activation function in backwards propagation?
Question: I was reading this article here: https://towardsdatascience.com/how-does-back-propagation-in-artificial-neural-networks-work-c7cad873ea7. When he gets to the part where he calculates the loss at every node, he says to use the following formula: "delta_0 = w . delta_1 . f'(z) where values delta_0, w and f’(z) are those of the same unit’s, while delta_1 is the loss of the unit on the other side of the weighted link." And $f$ is the activation function. He then says: "You can think of it this way, in order to get the loss of a node (e.g. Z0), we multiply the value of its corresponding f’(z) by the loss of the node it is connected to in the next layer (delta_1), by the weight of the link connecting both nodes." However, he doesn't actually explain why we need the derivative term. Where does that term come from and why do we need it? My idea so far is this: The fact that the identity activation function causes the term to disappear is a hint. The node doesn't feed into the next exactly as is, it depends on the activation function. When the activation function is the identity, the loss at that node just passes to the next one based on the weight. Basically, you just need to factor in the activation function somehow, specifically in a way that doesn't matter when it's the identity, and of course the derivative is a way to do this. The issue is that this isn't very rigorous, so I'm looking for a slightly more detailed explanation. Answer: As a disclaimer, I didn't read that page, but I can certainly explain where the derivatives come from. The backpropagation algorithm is actually a variant of the gradient descent algorithm. Think about a single function for the moment. Suppose you have a function which responds to a single input and a single weight: $f(w, x)$. We want to adjust $w$ so that it gives expected answers for various known values of $x$. 
Then by Taylor's theorem: $$f(w + \delta w, x) \approx f(w, x) + \delta w \frac{\partial f}{\partial w}(w, x)$$ Or to put it another way: $$f(w + \delta w, x) - f(w, x) \approx \delta w \frac{\partial f}{\partial w}(w, x)$$ So if you find a value for $x$ which gives the wrong output, this gives you an estimate of how "wrong" $w$ is, and it's related to the derivative of $f$. This idea extends to multi-variate calculus. If you have a vector of weights and inputs $\mathbf{w}$ and $\mathbf{x}$, then Taylor's theorem says: $$f(\mathbf{w} + \delta \mathbf{w},\mathbf{x}) \approx f(\mathbf{w}, \mathbf{x}) + \nabla_{\mathbf{w}}f \cdot \delta \mathbf{w}$$ Where $\nabla_{\mathbf{w}}f$ is the gradient of $f$ with respect to the weights only. Now let's think about your case, where the function $f$ has a specific form: it's a function applied to the dot product of $\mathbf{w}$ and $\mathbf{x}$: $$f(\mathbf{w},\mathbf{x}) = h(\mathbf{w} \cdot \mathbf{x})$$ Then: $$\begin{align*} f(\mathbf{w} + \delta \mathbf{w},\mathbf{x}) - f(\mathbf{w}, \mathbf{x}) & = h((\mathbf{w} + \delta \mathbf{w}) \cdot \mathbf{x}) - h(\mathbf{w} \cdot \mathbf{x}) \\ & \approx (\mathbf{x} \cdot \delta \mathbf{w})\, h'(\mathbf{w} \cdot \mathbf{x}) \end{align*}$$ Again, this gives you an approximation to how "wrong" the weights are given a "wrong" observation. The backpropagation algorithm works by combining all of the training data into a single loss function which measures how "bad" the current set of weights is in training the data, and then does more or less exactly the above procedure, estimating the gradient of the loss function with respect to the weights, and then moving all of the weights in that direction.
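A quick numerical check of that first-order Taylor estimate, taking $h=\tanh$ as the activation (the concrete numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # weights
x = rng.normal(size=3)   # fixed input

def f(w):
    # one unit: activation h = tanh applied to the dot product w . x
    return np.tanh(w @ x)

dw = 1e-6 * rng.normal(size=3)   # a small perturbation of the weights
z = w @ x
predicted = (x @ dw) * (1 - np.tanh(z) ** 2)   # first-order term; h'(z) = 1 - tanh(z)^2
actual = f(w + dw) - f(w)
error = abs(actual - predicted)   # should be second order, i.e. tiny
```

The mismatch is of order $|\delta \mathbf{w}|^2$, which is exactly why gradient-based updates work for small steps.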
{ "domain": "cs.stackexchange", "id": 15142, "tags": "machine-learning, neural-networks" }
Karp hardness of an equidistant vertex set
Question: What is the hardness of the following problem? Input: An undirected graph $G(V, E)$ and a natural number $k$ Output: YES if $G$ has an equidistant vertex set of size $k$, otherwise NO $\DeclareMathOperator{\dist}{dist}$An equidistant vertex set is a set of vertices $V'\subseteq V$ such that for every two pairs of vertices $u, v\in V'$ and $w, s\in V'$, we have $\dist(u, v) = \dist(w, s)$, where $\dist(u, v)$ is the length of a shortest path between $u$, $v$. Answer: Your problem is NP-hard, by reduction from E3SAT, an NP-hard variant of 3SAT in which every clause involves 3 different variables. Let $C_1 \land \cdots \land C_m$ be an instance of E3SAT; we can assume that $m$ is larger than some constant, since otherwise we can brute force the answer in constant time. We construct the following graph: Vertices: There is a vertex $(C_i,\ell)$ for each clause $C_i$ and for each literal $\ell$ appearing in the clause. Edges: We connect $(C_i,\ell)$ and $(C_j,\ell')$ if $i \neq j$ and $\ell \neq \lnot \ell'$. Here are a few claims: The formula is satisfiable iff the graph contains an $m$-clique. Indeed, if the formula is satisfiable, we choose one satisfied literal from each clause, and these are the vertices of the clique. In the other direction, an $m$-clique must identify a literal from each clause. These literals are non-contradictory, and so correspond to a (possibly partial) satisfying truth assignment. The diameter of the graph is 2. Indeed, given two vertices $(C_i,\ell),(C_j,\ell')$ (where possibly $i=j$), assuming $m \geq 3$ we can find a vertex $(C_k,\ell'')$ such that $k \neq i,j$ and $\ell''$ involves a different variable from $\ell,\ell'$. If $(C_i,\ell)$ and $(C_j,\ell')$ are at distance exactly 2 then either $i = j$ or $\ell = \lnot \ell'$. Suppose that $K$ is a "2-clique", that is, a set of vertices in which any two vertices are at distance exactly 2. Pick some $(C_i,\ell) \in K$.
If all vertices in $K$ are of the form $(C_i,\cdot)$ then $|K| \leq 3$. Otherwise, let $(C_j, \lnot \ell) \in K$, where $j \neq i$. Every other vertex $(C_k, \ell') \in K$ must satisfy $k \neq i$ or $k \neq j$; suppose, without loss of generality, that $k \neq i$. Then $\ell' = \lnot \ell$. Since $(C_k, \lnot \ell)$ and $(C_j, \lnot \ell)$ are at distance 2, necessarily $j = k$. That is, in this case $|K| \leq 2$. Therefore if $m \geq 4$, there is an equidistant vertex set of size $m$ iff the formula is satisfiable.
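The construction itself is easy to mechanize. A small sketch (my own encoding: literals as signed integers, $-v$ standing for $\lnot x_v$):

```python
from itertools import combinations

def reduction_graph(clauses):
    """Vertices: (clause index, literal). Edge iff the clauses differ
    and the two literals are not complementary."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = {frozenset((u, v))
             for u, v in combinations(vertices, 2)
             if u[0] != v[0] and u[1] != -v[1]}
    return vertices, edges

# (x1 or x2 or x3) and (not x1 or x2 or x4)
vertices, edges = reduction_graph([(1, 2, 3), (-1, 2, 4)])
```

On this two-clause example there are 6 vertices; of the 9 cross-clause pairs, only the complementary pair $x_1/\lnot x_1$ is missing, giving 8 edges.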
{ "domain": "cs.stackexchange", "id": 12197, "tags": "np-complete, np-hard, np" }
How to find distance between two objects in free fall?
Question: So if I have two identical rocks and drop one, then the other 2 seconds later, how would I find the distance between the two rocks when the first hits the ground? I know the height; let's say it's 40 m. I want to know HOW to do this. Would I find the time it takes the rocks to hit the ground first, since both are the same? Then set each equal? Mostly just having issues with what equations to use, I suppose. Answer: Find the time it takes for the first rock to hit the ground using your constant-acceleration formulas, and then substitute this time minus 2 seconds back into your formula to find the position of the second rock at that moment.
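Putting numbers to that recipe, with $h = 40\ \mathrm{m}$ and taking $g \approx 9.8\ \mathrm{m/s^2}$ (a worked example of the answer's suggestion, not part of the original post):

```python
g, h = 9.8, 40.0   # m/s^2, m (assumed values)

# Time for the first rock to fall the full height: h = g*t^2/2
t1 = (2 * h / g) ** 0.5        # ~2.86 s

# When the first rock lands, the second has only been falling for (t1 - 2) s:
d2 = 0.5 * g * (t1 - 2) ** 2   # ~3.6 m covered by the second rock

separation = h - d2            # ~36.4 m between the rocks
```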
{ "domain": "physics.stackexchange", "id": 37892, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, kinematics, free-fall" }
Why does a spinning rod create transverse waves?
Question: I attached a ballpoint pen refill to a DC motor and made it spin very fast. Instead of just turning along its axis, the refill started to wobble around to make a transverse wave. You can see that there is a node located about 10 cm away from the base of the refill. (You can click on the images to see video) I know that a spinning object tends to spread out its mass away from the axis of rotation. But then, I would expect the tube to bend away from the axis all the way from base to tip. Why would it incline towards the axis instead near the node and form a wave? Note that these transverse waves only arise if there is some kind of perturbation. I confirmed this by dipping the spinning refill in a glass of water (which would dampen any vibrations) and found that it spins stably along its axis. Answer: Consider a coordinate system with the $x$-axis parallel to the initial position of the rod and let $y(x)$ describe the shape of the spinning rod. The potential energy of a small piece of the spinning rod is the sum of the elastic energy, $\frac{\kappa y''^2}{2} dx$ (here we assume that $y' \approx 0$), and the energy due to the rotation, $\frac{-\rho \omega^2 y^2}{2}dx$, where $\rho$ is the linear density and $\kappa$ characterises the bending stiffness of the rod (Young's modulus multiplied by the second moment of area of the cross-section). Thus $$U = \int_0^L \left(\frac{\kappa y''^2}{2} - \frac{\rho \omega^2 y^2}{2}\right)dx$$ where $L$ is the length of the rod. In a stable position we must have $\delta U=0$. We also have $y(0) = 0$. The extremum can be found by the calculus of variations; the solution will be $$y(x) = A\sinh(ax) + B\sin(ax)$$ where $a = (\frac{\omega ^ 2 \rho}{\kappa})^\frac{1}{4}$ and $A, B$ are solutions of the linear system $$\begin{cases} A\sinh(aL) - B\sin(aL) = 0 \\ A\cosh(aL) - B\cos(aL) = 0 \end{cases}$$ If $B$ were zero, we would indeed have a bend all the way to the tip, without the node, in a hyperbolic shape. But it is due to this additional $B\sin(ax)$ term that we have a node.
Indeed, if your pen were longer, you might have been able to observe multiple nodes, with the positions of the nodes, $x_n$, given by the equation $$y(x_n) = A\sinh(ax_n) + B\sin(ax_n) = 0$$
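As a numerical footnote (my own gloss, not from the answer): a nonzero $(A, B)$ exists only when the determinant of that linear system vanishes, i.e. $\sin(aL)\cosh(aL) = \sinh(aL)\cos(aL)$, equivalently $\tan(aL) = \tanh(aL)$. The first nonzero root can be found by bisection:

```python
import math

def det(u):
    # Determinant of the boundary system; zero iff tan(u) = tanh(u)
    return math.sin(u) * math.cosh(u) - math.sinh(u) * math.cos(u)

# The first nonzero root lies in (pi, 3*pi/2); bisect on a bracketing interval.
lo, hi = 3.2, 4.6
for _ in range(60):
    mid = (lo + hi) / 2
    if det(lo) * det(mid) <= 0:
        hi = mid
    else:
        lo = mid

critical_aL = (lo + hi) / 2   # ~3.9266, the first critical value of a*L
```

So the wobble can only set in once the spin rate makes $aL$ reach this critical value, which ties the observed shape to $\omega$, $\rho$, $\kappa$ and $L$.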
{ "domain": "physics.stackexchange", "id": 65991, "tags": "classical-mechanics, waves, home-experiment" }
Calculating doubling times from data points
Question: In the code below, noisy data points with unique errors are created. From this, an exponential function is fitted to the data points, and then doubling times (10 unit windows) are calculated. I'm uncertain how to show the unique errors in the data points in the fitted function or doubling times. Output: from scipy import optimize from matplotlib import pylab as plt import numpy as np import pdb from numpy import log def exp_growth(t, x0, r): return x0 * ((1 + r) ** t) def doubling_time(m, x_pts, y_pts): window = 10 x1 = x_pts[m] y1 = y_pts[m] x2 = x_pts[m+window] y2 = y_pts[m+window] return (x2 - x1) * log(2) / log(y2 / y1) # First, artificially create data points to work with data_points = 42 # Create the x-axis x_pts = range(0, data_points) # Create noisy points with: y = x^2 + noise, with unique possible errors y_pts = [] y_err = [] for i in range(data_points): random_scale = np.random.random() y_pts.append((i * i) + data_points * random_scale) y_err.append(random_scale * 100 + 100) x_pts = np.array(x_pts) y_pts = np.array(y_pts) y_err = np.array(y_err) # Fit to function [x0, r], pcov = optimize.curve_fit(exp_growth, x_pts, y_pts, p0=(0.001, 1.0)) fitted_data = exp_growth(x_pts, x0, r) # Find doubling times x_t2 = range(32) t2 = [] t2_fit = [] for i in range(32): t2.append(doubling_time(i, x_pts, y_pts)) t2_fit.append(doubling_time(i, x_pts, fitted_data)) # Plot fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True) ax1.plot(x_pts, y_pts, 'bo') ax1.errorbar(x_pts, y_pts, yerr=y_err) ax1.set_ylim([0, 2000]) ax1.set_title('Artificially created raw data points with unique errors', fontsize=8) ax2.plot(fitted_data, 'g-') ax2.set_ylim([0, 2000]) ax2.set_title('Fitted exponential function', fontsize=8) ax3.plot(x_t2, t2, 'ro', label='From points') ax3.plot(x_t2, t2_fit, 'bo', label='From fitted') ax3.set_title('Doubling time at each point (10 unit window)', fontsize=8) ax3.legend(fontsize='8') plt.show() Answer: Your code is doing the job, but I think there 
are a few ways in which it could be improved. Consistent use of parameters for functions. def doubling_time(m, x_pts, y_pts): window = 10 x1 = x_pts[m] y1 = y_pts[m] x2 = x_pts[m+window] y2 = y_pts[m+window] return (x2 - x1) * log(2) / log(y2 / y1) Why is window not a parameter in the function? Having it as a parameter will make it much easier to change, and more importantly, it will make future users of the code (including yourself six months/weeks/days from now) realize that in fact this function depends on a window parameter to compute its result. If you do def find_local_doubling_time(index, x_pts, y_pts, window=10):[...], then window will be a parameter with a default value of 10, so you don't have to pass it in if you don't want to. I would personally find a name like index or something to be far more illustrative and informative than m. Also, the function itself could do with a slightly more informative name, such as local_doubling_time() or find_local_doubling_time(). The way you are generating y_pts feels very unnatural to me. Since you have x_pts already, you can just use numpy to define a y_pts without any for loops, like this: x_pts = np.array(range(0, data_points)) random_scales = np.random.random(size=data_points) y_pts = x_pts**2 + data_points*random_scales y_err = random_scales * 100 + 100 BTW, the more common choice for the error model would be Gaussian noise instead of uniformly distributed noise. You might consider adding a comment to the code to explain your choice of the uniform model. Finding the "local" or "instantaneous" doubling times does not require for loops either. t2 = [doubling_time(i, x_pts, y_pts) for i in x_t2] t2_fit = [doubling_time(i, x_pts, fitted_data) for i in x_t2] I didn't know about the sharex option for plt.subplots(). Very cool to learn! I like your graphs; the only improvement would be to plot the points and the curve on the same graph (i.e. combine the top panel and mid panel of the graph).
Scipy's optimize uses nonlinear least squares regression. In addition to comparing to the "local" results, you might also compare the NLSR results to the results of doing linear-regression on log-transformed data.
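One sanity check worth adding for the window-parameterized version: on exact exponential data, the local doubling time must come out constant (a sketch along the lines suggested above, not the OP's final code):

```python
import numpy as np

def local_doubling_time(index, x_pts, y_pts, window=10):
    """Doubling time estimated from two samples `window` indices apart."""
    x1, y1 = x_pts[index], y_pts[index]
    x2, y2 = x_pts[index + window], y_pts[index + window]
    return (x2 - x1) * np.log(2) / np.log(y2 / y1)

x = np.arange(42)
y = 2.0 ** x   # doubles once per x-unit, exactly

t2 = [local_doubling_time(i, x, y) for i in range(30)]   # every entry ~1.0
```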
{ "domain": "codereview.stackexchange", "id": 14674, "tags": "python, numpy, matplotlib" }
Replace Multiple Matches with different Values
Question: I want to replace multiple matches of a Regex with different values from a map. I have for example the following string #id#_#date#_#value#_additional_text. I now want to replace the parts #xxx# with the corresponding values from a map. (The string could change so that I don't exactly know what kind of patterns are in there. What I'm currently doing are the following steps: Use std::sregex_token_iterator to go through the string and store all patterns I found in a std::vector. Go through all the patterns in the vector and use std::regex_replace to replace them with the values from the map. This is the code for the steps above: int main() { std::map<std::string, std::string> metadata{ {"value", "9"}, {"id", "1234"}, {"date", "1234"}, {"more", "abc"}}; std::vector<std::string> patterns{}; std::string input_data = "#id#_#date#_#value#_additional_text"; std::regex reg{R"(#([a-zA-Z]+)#)"}; const std::sregex_token_iterator end; for (std::sregex_token_iterator iter{std::cbegin(input_data), std::cend(input_data), reg, 1}; iter != end; ++iter) { std::cout << iter->str() << '\n'; patterns.push_back(iter->str()); } for (const auto pattern : patterns) { std::cout << pattern << '\n'; std::regex regex_pattern{"#" + pattern + "#"}; input_data = std::regex_replace(input_data, regex_pattern, metadata[pattern]); std::cout << input_data << '\n'; } std::cout << input_data << '\n'; } My question now is, is there a better way to achieve this? Answer: Library includes The code doesn't compile as presented. 
I needed to add a few headers: #include <iostream> #include <map> #include <regex> #include <string> #include <vector> Unnecessary output The problem statement just refers to replacing parts of the string, but we seem to be writing lots of other things to std::cout: std::cout << iter->str() << '\n'; std::cout << pattern << '\n'; std::cout << input_data << '\n'; These look like leftover debugging prints (that would normally go to std::clog rather than std::cout, and be removed before the program is ready for review). I'm assuming this output is not required. Unnecessary copying We don't need to copy each pattern here: for (const auto pattern : patterns) { Instead, we can just bind a reference: for (const auto& pattern : patterns) { Consider a single-pass algorithm without regular expressions Since # acts as a delimiter, we can implement these substitutions much more simply - just search for # and then look to see if it's followed by one of our translation keys and another #. That would look something like this: #include <iostream> #include <map> #include <string> #include <string_view> std::string replace_in_string(std::string_view s, const std::map<std::string, std::string>& replacements) { std::string result; for (;;) { auto pos = s.find('#'); auto end = s.find('#', pos+1); if (end == std::string_view::npos) { return result.append(s); } auto const key = s.substr(pos+1, end - pos - 1); auto const it = replacements.find(std::string{key}); if (it != replacements.end()) { result.append(s.substr(0, pos)).append(it->second); s = s.substr(end+1); } else { result.append(s.substr(0, end)); s = s.substr(end); } } } int main() { const std::map<std::string, std::string> metadata{ {"value", "9"}, {"id", "1234"}, {"date", "1234"}, {"more", "abc"}}; const std::string input_data = "#id#_#date#_#value#_additional_text"; std::cout << replace_in_string(input_data, metadata) << '\n'; }
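As a cross-language aside (not a fix to the C++ itself): the same single-pass idea is essentially a one-liner in Python, using re.sub with a replacement callback:

```python
import re

def replace_tokens(s, replacements):
    # Unknown #keys# are left untouched via m.group(0)
    return re.sub(r"#([A-Za-z]+)#",
                  lambda m: replacements.get(m.group(1), m.group(0)),
                  s)

metadata = {"value": "9", "id": "1234", "date": "1234", "more": "abc"}
result = replace_tokens("#id#_#date#_#value#_additional_text", metadata)
# result == "1234_1234_9_additional_text"
```

The regex engine scans the string once, left to right, which avoids the repeated-rescan cost of calling regex_replace per pattern.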
{ "domain": "codereview.stackexchange", "id": 44192, "tags": "c++, regex" }
Pointing PR2 head at something in the camera frame
Question: I'm running a face detector, and would like to be able to point the head at a detected face. As it stands now, I get the face pixel coordinates, use the camera geometry to project that point to a 3D ray, and add a frame with that ray as a child of the optical frame of the camera I'm using. This all works fine - the problem comes when I point the head at that new frame. Since it's a child of the optical frame, when the head moves, the face frame moves along with it making it so once the move head action is done, the face frame is in a different location, so the head moves again. I've tried adding the face frame as a child of some frame that doesn't move with the head (base_link), but am having trouble getting this to work right since to lookup that transform, I have to broadcast the face frame first, then lookup base_link -> face not knowing if the head is currently moving. This seems like a common task that should have a standard solution. What's the right way to point the head at something detected in a camera image? Originally posted by Dan Lazewatsky on ROS Answers with karma: 9115 on 2011-07-28 Post score: 0 Answer: I would use TF's transformPose to get a pose in some fixed frame and use that pose to point the head. Create a new tf::Stamped<tf::Pose> with the frame id of your face frame and the current time as stamp and make it an identity pose, i.e. with a vector of (0 0 0) and a quaternion of (0 0 0 1). Then use tf::TransformListener::transformPose to transform that pose into some fixed frame, e.g. base_footprint (if the robot doesn't move) or map. Then use the resulting pose for pointing the head. Originally posted by Lorenz with karma: 22731 on 2011-07-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Dan Lazewatsky on 2011-07-28: That did it! I actually used transformPoint instead since I don't care about orientation.
{ "domain": "robotics.stackexchange", "id": 6288, "tags": "pr2, transform" }
How can the surface integral's contribution go to zero while the volume integral's does not in this particular derivation?
Question: When we derive the formula for the energy of a continuous charge distribution $\rho$ using this equation $$W = \frac{1}{2}\int\rho V \,\text d\tau$$ with $V$ being the electric potential, we get this expression for the work done $$W = \frac{\epsilon_0}{2}\left[\int E^2\,\text d\tau + \oint V\mathbf E\cdot \text d\mathbf a\right]$$ where $\mathbf E$ is the electric field. Now in the book it is mentioned that the contribution from the term $\oint V\mathbf E\cdot\text d\mathbf a$ will approach zero as we make our Gaussian surface larger and larger, because $VE$ will vary as $\frac{1}{r^3}$ and multiplying it by $r^2$ roughly makes it $\frac{1}{r}$. By the same logic, shouldn't the contribution from the term $\int E^2\,\text d\tau$ approach zero as well? But it's written that this whole integral is positive, since $E^2$ will only increase. Answer: No. The volume integral is over all space contained in the surface, so using a larger volume just adds to the integral. You aren't just looking at $E^2$ at the boundary of the surface. In other words, you are looking at all $r$ values, not just where the boundary is. Contrast this with the surface integral, where you actually are just looking at everything at the boundary of the surface. At this point you can think of how the values drop off or grow with $r$, because you're only looking at a single $r$ value when evaluating the integral.
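A concrete way to see this is a point charge, in units where the prefactor $kq = 1$ (my own illustration): the surface term $VE \cdot 4\pi r^2 \sim 1/r$ shrinks as the Gaussian sphere grows, while the volume integral of $E^2$ taken from a fixed inner radius only accumulates:

```python
import math

def surface_term(r):
    # V * E * (sphere area) for a unit point charge: (1/r)(1/r^2)(4*pi*r^2) = 4*pi/r
    return (1 / r) * (1 / r**2) * 4 * math.pi * r**2

def volume_term(r0, R):
    # integral of E^2 over the shell r0..R: (1/r^4)(4*pi*r^2) dr = 4*pi*(1/r0 - 1/R)
    return 4 * math.pi * (1 / r0 - 1 / R)

# Growing the Gaussian surface kills the surface term...
shrinking = surface_term(1000) < surface_term(10)
# ...but only adds to the volume term.
growing = volume_term(1, 1000) > volume_term(1, 10)
```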
{ "domain": "physics.stackexchange", "id": 60015, "tags": "electrostatics, energy, vector-fields" }
How is this trigonometric substitution achieved for this simple capacitor circuit equation?
Question: In a simple circuit with one capacitor and one AC source, where the equation of the source voltage is v(t) = Acos(ωt), I was trying to follow how they found the equation for the current as a function of time, and they made a substitution I was unable to follow. It can be seen on page 3 of this PDF, around 1.9. The equation i(t) = -CAωsin(ωt) magically transforms into i(t) = CAωcos(ωt + pi/2). I don't know why the negative went away, and I have less than a great understanding of the pi/2. Answer: This is more of a math question, but it's just a trigonometric identity. $$\cos\Big(\frac{\pi}{2}-x\Big)=\sin(x)\\$$ Or $$\cos\Big(x+\frac{\pi}{2}\Big)=-\sin(x)$$
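The identity is easy to sanity-check numerically in a couple of lines:

```python
import math

# cos(x + pi/2) = -sin(x), checked over a spread of sample points
samples = [k * 0.1 for k in range(-50, 51)]
max_err = max(abs(math.cos(x + math.pi / 2) + math.sin(x)) for x in samples)
```

The maximum residual is at the level of floating-point rounding, confirming both the sign flip and the phase shift.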
{ "domain": "physics.stackexchange", "id": 22892, "tags": "electric-circuits, differentiation" }
Rough/ballpark thermodynamics and black body temperature question
Question: This is probably pretty basic, but I got into a debate with someone and the temperature of Titan came up, so I did some quick and dirty calculations, as follows. Titan is about 9.5 times as far from the Sun as the Earth is, so, per square foot, it gets about 1/90th the solar energy the Earth does. According to the Stefan-Boltzmann law, heat radiates off an object at the 4th power of the temperature, so if the Earth receives 90 times the energy per square foot that Titan does, it makes sense that it would also, given time to reach equilibrium, radiate about 90 times the energy per square foot as Titan, and so its temperature should be roughly the 4th root of 90 (about 3) times Titan's temperature, and that seems roughly true. Earth: average temp about 288 K. Titan: average temp about 94 K. 3 times 94 = 282 - pretty close. Now, I know this quick and dirty calculation doesn't take into account internal heat, albedo, tides or the greenhouse effect, but my question is - is that roughly correct - 2 identical objects, the one that gets 90 times as much energy (or 81 times) as the other should be three times as hot, or is that a bad way to look at it? Thanks. Answer: I think this should be roughly correct. If you want to also roughly estimate the difference that internal heat, tides and atmosphere (the Earth's) would have made, you can look at the Moon, which is at the same distance from the Sun as the Earth but has an average surface temperature of $268\,\mathrm{K}$. So those effects account for an error of about $20\,\mathrm{K}$ when estimating the Earth's temperature this way. And $268\,\mathrm{K}/3\approx 89\,\mathrm{K}$, which is again close to Titan's actual temperature.
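For the record, the chain of arithmetic made explicit (my numbers, same assumptions as the question):

```python
flux_ratio = 9.5 ** 2              # Titan is ~9.5x farther, so ~1/90th the flux
temp_ratio = flux_ratio ** 0.25    # Stefan-Boltzmann: T scales as (flux)^(1/4)

earth_T = 288.0                            # K, assumed mean surface temperature
predicted_titan_T = earth_T / temp_ratio   # ~93 K vs. the measured ~94 K
```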
{ "domain": "physics.stackexchange", "id": 20608, "tags": "thermodynamics" }
Theory of computation introductory curriculum
Question: I want to study theory of computation on my own, so I am looking for books. What set of books would you recommend for the equivalent of a one-semester course that introduces theory of computation? Please post answers that describe a complete curriculum, explaining which chapters of each book are relevant at which stage of the course or self-study. Answer: Introduction to the Theory of Computation by Michael Sipser is a relatively recent entry into this field. It was the required book for a class my friend was taking, and I asked him for the PDF so I could browse through at my leisure. I ended up reading almost the whole book, even the chapters on topics I was already very familiar with, just because the book is such a joy to read. It's written at an introductory level, which means less notation, more exposition, and more intuition. The motivation behind every idea and theorem is crystal clear. He precedes every proof with a "proof idea" section that lays out the path the proof is going to take without getting into the gory details. The book covers Automata Theory, Computability Theory, and Complexity Theory to a satisfactory depth for an undergraduate level. I've read many textbooks in computer science and math, and this is probably my favorite.
{ "domain": "cs.stackexchange", "id": 7763, "tags": "computability, automata, computation-models, books" }
How do people historically have come to use the Yang-Mills theory in physics?
Question: There are many books in which Yang-Mills theory is introduced "just like that". But I couldn't find a book with the set of historical arguments that led people to use it in quantum field theory. Can you tell me about this? Maybe my question leads to the next question: how did people guess that they needed to expand the group of local gauge invariance for describing, for example, quarks? Answer: I believe the milestone about the introduction of Yang-Mills theory from gauge invariance is the 1973 article by Ernest Abers and Benjamin Lee. You can easily find the original article on the web through a simple google search. This is a fundamental article I would recommend to everybody interested in Quantum Field Theory. Also, I remember I found good historical, as well as logical, introductions to Y-M theory in the book by Aitchison and Hey, and in the older one by Cheng and Li from 1983. These are certainly good references for the history of the development of gauge theories. Another milestone is the 1980 article by Gerardus 't Hooft, which I think could provide a great answer to your second question.
{ "domain": "physics.stackexchange", "id": 9462, "tags": "soft-question, gauge-theory, history, yang-mills" }
How to edit audios so that they have the same length
Question: I have a dataset of audio files, which differ in their lengths by a couple of milliseconds. Some are a little shorter than 1s and some are a little longer than 1s. They all have the same sampling rate, namely 20kHz; what differs is the number of samples. Does anyone know a way to automatically transform them into 1s audio files with a 20kHz sampling rate? EDIT: I need to transform the audio files to the same length so that I can input them into a standard fully-connected neural network. Cutting the audio at the beginning and at the end would be the best solution, I think, if the audio is longer than 1 second. For example, if the audio is 1.1s long, then I would like to cut 50ms from the beginning and the end to transform it into a 1s audio. And if the audio is shorter, somehow put some "silence" at the end of the data? Thank you in advance! Answer: ImageMagick is a great tool to edit multiple images in the command line. Thanks to your question, I just discovered SoX, or Sound eXchange: SoX is a cross-platform (Windows, Linux, MacOS X, etc.) command line utility that can convert various formats of computer audio files into other formats. It can also apply various effects to these sound files, and, as an added bonus, SoX can play and record audio files on most platforms. It can trim, crop, cut multiple files (and resample too), see for instance: Editing Multiple Mono Sound Files in SoX, Audio format conversion cheat sheet (aka how to). For shorter files, an option is to create a silence file, append it, and crop again, see for instance: Linux command to extend the duration of audio files.
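If the files are loaded as sample arrays (e.g. with scipy.io.wavfile or soundfile), the trim/pad logic described in the question is a few lines of NumPy (a sketch; the 1 s target at 20 kHz means 20000 samples):

```python
import numpy as np

def fix_length(samples: np.ndarray, target_len: int = 20000) -> np.ndarray:
    """Trim equally from both ends, or zero-pad ("silence") at the end,
    so the returned array has exactly target_len samples."""
    n = len(samples)
    if n > target_len:
        start = (n - target_len) // 2                # cut the excess symmetrically
        return samples[start:start + target_len]
    if n < target_len:
        return np.pad(samples, (0, target_len - n))  # zeros (silence) at the end
    return samples

# 1.1 s clip -> trimmed, 0.9 s clip -> padded
long_clip = np.ones(22000)
short_clip = np.ones(18000)
print(len(fix_length(long_clip)), len(fix_length(short_clip)))  # 20000 20000
```
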
{ "domain": "dsp.stackexchange", "id": 10334, "tags": "signal-analysis, audio, speech-processing, speech-recognition" }
Directional derivative of the potential energy in the direction of the displacement in three dimensions
Question: For a conservative force $\vec{F}=-\vec{\nabla } U \implies \mathrm dW= -\vec{\nabla} U \cdot \mathrm d\vec{s} $ Where $\mathrm d\vec{s}$ is the infinitesimal displacement. For a differentiable function $f$ the directional derivative in the direction of a vector $\vec{a}$ is $$\frac{\partial f}{\partial \vec{a}}= \vec{\nabla} f \cdot \vec{a}\;.$$ So is it possible to say the following? $$\mathrm dW= -\vec{\nabla} U \cdot \mathrm d\vec{s}= -\frac{\partial U}{\partial \vec{s}}$$ I don't think that this i right because, in one dimension, $\vec{F}= -\frac{\mathrm dU}{\mathrm dx}\;.$ The spatial derivative of the potential energy is not the infinitesimal work but the force. How's that possible? What's the directional derivative of the potential energy, in the direction of $\vec{\mathrm ds}\,,$ in the case of potential energy in three dimensions? And what is its physical meaning? Answer: Your first line is fine, everything else is wrong (except the one time you repeated something from the first line). How wrong? You don't even have the right units, so pretty much as wrong as you can be. It's like if I asked for the surface area of a house and you said 5m or 8s or 20N or 80K. For a differentiable function $f$ the directional derivative in the direction, $\hat a,$ of a nonzero vector $\vec{a}$ is $$\frac{\partial f}{\partial a}= \hat a\cdot\vec{\nabla} f,$$ where $\hat a =\vec a/\|\vec a\|.$ So since $\mathrm dW= -\vec{\nabla} U \cdot \mathrm d\vec s$ you get: $$\mathrm dW= -\left(\frac{\partial U}{\partial x}\hat x+\frac{\partial U}{\partial y}\hat y+\frac{\partial U}{\partial z}\hat z\right)\cdot\left(\mathrm dx\hat x+\mathrm dy\hat y+\mathrm dz\hat z\right),$$ or equivalently $$ \mathrm dW= -\frac{\partial U}{\partial x}\mathrm dx-\frac{\partial U}{\partial y}\mathrm dy-\frac{\partial U}{\partial z}\mathrm dz.$$ Notice that now the units are correct. Any time your units are wrong it's a sign that you made at least one mistake. 
What's the directional derivative of the potential energy, in the direction of $\vec{\mathrm ds}\,,$ in the case of potential energy in three dimensions? It's the (negative of the) component of the force in the direction of $\mathrm d \vec s$. And note that the direction is a dimensionless unit vector equal to $\mathrm d\vec s/\|\mathrm d\vec s\|$. For perspective, notice that each of the terms $\mathrm dx,$ $\mathrm dy,$ and $\mathrm dz$ is proportional to $\mathrm ds$ (the longer your segment, the longer each of its three components), so when you divide by the length of your segment $\mathrm ds=\|\mathrm d \vec s\|$, instead of getting how much $U$ changed, you get the rate at which it changes per length in that particular direction. That's what a directional derivative such as $\partial U/\partial s$ is. And what is its physical meaning? The physical meaning of a component of the force along a direction is the component of a force along a direction. Sure, multiplying by the length of the displacement, when the direction is the direction of the displacement, gives you the amount of work done in that displacement. But really you just need to know what potential energy is, what a directional derivative is, and how unit vectors (directions) differ from vectors that carry units and magnitudes. A directional derivative is a scalar that tells you the rate (per length of movement in the domain) at which a function changes when you evaluate it at places in the domain displaced along a particular direction. You could imagine a line in the domain, evaluate the function along that line, and then take its slope at that point.
Or, if you have a restricted domain, you might need to take a curve through the point that has that direction as its tangent, and then find the slope of the function evaluated along that curve at the point. It's just about rates of change in various directions. It's as fundamental as that.
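The relation $\partial U/\partial s = \hat s\cdot\vec\nabla U$ is easy to verify numerically with a central finite difference (a sketch in Python; the potential below is just an arbitrary smooth example, not from the question):

```python
import numpy as np

def U(r):
    x, y, z = r
    return x**2 * y + 3.0 * z                     # arbitrary smooth potential

def grad_U(r):
    x, y, z = r
    return np.array([2.0 * x * y, x**2, 3.0])     # its analytic gradient

p = np.array([1.0, 2.0, -1.0])                    # evaluation point
s_hat = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # unit direction vector

# Directional derivative two ways: dot product vs central finite difference
h = 1e-6
dU_dot = s_hat @ grad_U(p)
dU_fd = (U(p + h * s_hat) - U(p - h * s_hat)) / (2.0 * h)
print(dU_dot, dU_fd)   # both ~3.5355 (= 5/sqrt(2) for this example)
```
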
{ "domain": "physics.stackexchange", "id": 29791, "tags": "work, potential-energy, differentiation" }
Is Combinatorial Chemistry related to combinatorics in math?
Question: If the answer is no, then I don't understand the adjective "Combinatorial". Barring decompositions, aren't the five main types of chemical reactions combinatorial? You must combine at least two reactants for a reaction! Answer: Yes, the term "combinatorial" chemistry is related to mathematical combinatorics. Imagine that we have three positions (numbered 1, 2, 3) on a molecule, and at each position, we can place any one of 5 functional groups (A, B, C, D, E). We can synthesize all possible molecules in parallel by making all possible derivatives at position 1 (5 total), then reacting all of those molecules to make all possible derivatives at position 2 (5 for each of the 5 intermediates) and then again at position 3 (again, 5 new molecules for each product of step 2). In this example, we would produce 5x5x5=125 combinations as our final output.
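The counting in the answer is exactly a Cartesian product, which is one line in code (a sketch; positions and groups named as above):

```python
from itertools import product

groups = "ABCDE"      # 5 possible functional groups
positions = 3         # 3 substitution positions on the scaffold

library = ["".join(combo) for combo in product(groups, repeat=positions)]
print(len(library))   # 125 = 5**3
print(library[:4])    # ['AAA', 'AAB', 'AAC', 'AAD']
```
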
{ "domain": "chemistry.stackexchange", "id": 15098, "tags": "terminology" }
Why do we disorder-average before/after taking the logarithm of the partition function for annealed/quenched disorder?
Question: Pg. 19 of these notes says Crucially, the [disorder] average $\overline{\log Z}$ has to be computed after taking the logarithm. Such an average is called quenched ... Computing the average first, i.e. on the partition function itself, is called annealed averaging. Physically, this corresponds to a situation in which the couplings themselves are fluctuating variables. The sentence spanning the first two pages of this paper says pretty much the same thing. I understand the physical difference between quenched and annealed disorder, but why does disorder-averaging after vs. before taking the logarithm correctly capture their respective statistics? Answer: To fix the idea, let's consider a spin glass Hamiltonian $H(\sigma,J)$, where $\sigma$ are the spins and $J$ is a random variable with distribution $p(J)$ representing the couplings. An example is the Edwards-Anderson spin glass: $$H = - \sum_{\langle i,j \rangle} J_{ij} \sigma_i \sigma_j$$ where $J_{ij}$ are Gaussian random variables. The annealed free energy is $$F_a = -\frac{1}{\beta} \ \log \int dJ \ p(J) \int d\sigma\ e^{-\beta H(\sigma,J)} = -\frac{1}{\beta} \log[\overline{Z(\beta,J)}]\tag{1}\label{1}$$ while the quenched free energy is $$F_q = -\frac{1}{\beta} \int dJ\ p(J) \ \log \int d\sigma\ e^{-\beta H(\sigma,J)} =-\frac{1}{\beta} \overline{\log[Z(\beta,J)]}\tag{2}\label{2}$$ In \ref{1} you are treating $J$ and $\sigma$ on equal footing, so that $J$ becomes just another degree of freedom: $J$ and $\sigma$ fluctuate "together". You could actually define an "effective Hamiltonian" $$\tilde H_\beta(\sigma,J) = H-\frac 1 \beta \log p(J)$$ and write $$F_a = -\frac{1}{\beta} \log \int dJ d\sigma e^{-\beta \tilde H_\beta(\sigma,J)}$$ In \ref{2}, the situation is different, because you are First, creating a realization of the system with a certain (fixed) disorder $J$, and calculating the corresponding free energy. 
Then, averaging over all the free energies obtained this way with respect to the disorder $J$. The variables $J$ and $\sigma$ are not anymore on equal footing: $J$ is fixed when you average over $\sigma$, and this is the crucial point. References T. Castellani, A. Cavagna, Spin-Glass Theory for Pedestrians (2005)
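The gap between the two averages is Jensen's inequality, $\overline{\log Z} \le \log \overline{Z}$, which a quick Monte Carlo check makes concrete (a sketch; the lognormal samples below are stand-ins for the partition functions $Z(\beta,J)$ of different disorder realizations $J$, not a real spin-glass calculation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each sample is Z(beta, J) for one disorder realization J
# (positive and broadly distributed).
Z = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

annealed = np.log(Z.mean())     # log of the disorder-averaged Z
quenched = np.log(Z).mean()     # disorder average of log Z

print(annealed, quenched)       # annealed ~ 2, quenched ~ 0: they differ
assert quenched <= annealed     # Jensen: E[log Z] <= log E[Z]
```

For this lognormal toy model the gap is known exactly ($\log \overline Z = \sigma^2/2 = 2$ while $\overline{\log Z} = 0$), showing that the annealed average is dominated by rare, atypically large realizations of $Z$.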
{ "domain": "physics.stackexchange", "id": 47204, "tags": "statistical-mechanics, many-body, glass, disorder, spin-models" }
How much energy is required to perform the Saitama's moon-jump?
Question: I have no real knowledge of physics beyond the very basics of classical Newtonian mechanics, but as far as I understand, when a particle moves closer to the speed of light, its "relativistic mass" becomes greater, which means it requires more and more energy to further accelerate it. I was watching an anime called "One Punch Man", and in this anime there is a character, with absurd strength, that physically jumps from the Moon back to Earth. Watching that scene made me curious about the relativistic effects, and requirements, of such a feat. Considering that the Moon is +/- 1 lightsecond away from Earth, and that the trip took just a few seconds (10 seconds maybe?), it's safe to assume that the character achieved "relativistic speeds", right? So, assuming that the character weighs 70 kilograms, the trip took 10 seconds (from the character's perspective), and the character "crashes" into Earth (does not decelerate upon entrance): What speed did the character achieve? How much energy did the character "consume" to perform the jump? If the 10 seconds were measured from the perspective of someone on Earth, how long did the trip take for the character? Answer: The average velocity is given by the same formula from Newtonian physics, so $v=\frac{1 ls}{10 s}=0.1 c$. The Lorentz factor corresponding to this velocity is $\gamma=\frac{1}{\sqrt{1-0.1^2}}=1.005$, so relativistic effects are not that great. By energy "consumed," I'm guessing you mean the kinetic energy they have, which is also equal to the work done to reach that energy. The relativistic formula for this is $KE=(\gamma-1)mc^2$, so substituting in the values you gave yields $KE=3.169*10^{16} J$. For the last part, you are asking for something called the proper time, which is given by the time divided by the Lorentz factor, so $\tau=\frac{t}{\gamma}=9.95 s$, so they experience a very small amount of time dilation.
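The three numbers in the answer are reproduced by a few lines (a sketch; the 70 kg mass and 10 s trip time are the values assumed above):

```python
import math

c = 2.998e8            # speed of light, m/s
m = 70.0               # kg
v = 0.1 * c            # 1 light-second covered in 10 s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
KE = (gamma - 1.0) * m * c ** 2      # relativistic kinetic energy
tau = 10.0 / gamma                   # proper time experienced by the jumper

print(round(gamma, 4))   # 1.005
print(f"{KE:.3e}")       # ~3.17e+16 J
print(round(tau, 2))     # 9.95 s
```
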
{ "domain": "physics.stackexchange", "id": 58051, "tags": "homework-and-exercises, special-relativity, estimation" }
Wounded Pigeon in my balcony
Question: There is a, probably wounded, pigeon (common rock pigeon) on my balcony. It is dark right now and thus I have very little idea about its condition. But it seems it is finding it difficult to fly/move its wings, because when I approached the balcony door, it 'walked' towards the farther end of the balcony while flapping its wings very slowly. What is the right way to handle the bird and what is the best thing I can do for him/her? Answer: If you have a cat carrier then trying to entice it inside with some food would be a good idea. Try to contact your local wildlife vet for information on whether they would be willing to treat the animal. If you think it may be injured then I wouldn't recommend handling the bird, as you may hurt it further, but if you do need to, then handle it by placing both hands over the wings in order to pick it up (https://www.rspb.org.uk/birds-and-wildlife/advice/how-you-can-help-birds/injured-and-baby-birds/if-you-find-an-injured-bird/). In addition, the bird could have gotten used to its injury and learned to live with it, so you could monitor it for a bit and see how it's getting on. Hope it turns out alright. If you live in the UK here is some advice from the RSPB: https://www.rspb.org.uk/birds-and-wildlife/advice/how-you-can-help-birds/injured-and-baby-birds/sick-and-injured-birds-faqs/#:~:text=For%20most%20injured%20birds%2C%20place,how%20you%20can%20help%20it.
{ "domain": "biology.stackexchange", "id": 11070, "tags": "ornithology" }
obstacle_range & raytrace_range - precise explanation?
Question: Hey folks, could somebody please explain how exactly obstacle_range and raytrace_range work? Assume I have an IR sensor which publishes a point cloud with one point. That IR sensor can measure obstacles from 40 to 800mm. If it doesn't measure anything (there's nothing within its range) its values will exceed this range and heavily jump around (never accidentally jumping into its valid range, however) and can be considered infinite/free. How would I have to correctly configure the mentioned parameters? From the documentation I don't really understand what the parameters do. A precise explanation would be greatly appreciated. Thanks. Originally posted by Hendrik Wiese on ROS Answers with karma: 1145 on 2013-08-12 Post score: 11 Answer: Short answer: I think setting both parameters to .8 would work for you. Long answer: For each sensor, you specify a sensor_frame and the distance from that frame to each observation is measured. If the observation source is set to 'marking', then it will place a lethal obstacle on the costmap if that distance is less than obstacle_range. Similarly, if the source is set to 'clearing' then it will mark all of the space between the sensor frame and the observation as free space if the distance is less than raytrace_range. Otherwise, it will only clear the line that is raytrace_range long (closest to the sensor). In your particular case, you MIGHT want different values for these two parameters if, for instance, you only want to mark with certainty points <800mm, but want to clear up to 1200mm (for example). Originally posted by David Lu with karma: 10932 on 2013-08-12 This answer was ACCEPTED on the original site Post score: 10 Original comments Comment by Hendrik Wiese on 2013-08-12: Perfect answer! Thanks a lot. Comment by pk99 on 2021-04-24: @David Lu, Is there a relationship between raytrace_range and obstacle_range? Should it be greater than the obstacle_range?
Thanks Comment by David Lu on 2021-05-13: They are not technically related. obstacle range is often less but does not have to be. If you have further questions please open your own question.
{ "domain": "robotics.stackexchange", "id": 15230, "tags": "navigation, costmap-2d" }
Recursive Implementation of the Gaussian Filter (1D & 2D)
Question: I'm trying to implement an IIR form to approximate the Gaussian blur filter. I'm working with the article "Recursive Implementation of the Gaussian Filter" by Ian T. Young and Lucas J. van Vliet. They suggest a form and a way to calculate the coefficients. I'm trying to reproduce their example for $ q = 5.0 $. Here is the MATLAB code I wrote:

qFactor = 5;
b0Coeff = 1.57825 + (2.44413 * qFactor) + (1.4281 * qFactor * qFactor) + (0.422205 * qFactor * qFactor * qFactor);
b1Coeff = (2.44413 * qFactor) + (2.85619 * qFactor * qFactor) + (1.26661 * qFactor * qFactor * qFactor);
b2Coeff = (-1.4281 * qFactor * qFactor) + (-1.26661 * qFactor * qFactor * qFactor);
b3Coeff = 0.422205 * qFactor * qFactor * qFactor;
normalizationCoeff = 1 - ((b1Coeff + b2Coeff + b3Coeff) / b0Coeff);
vDenCoeff = [b0Coeff, b1Coeff, b2Coeff, b3Coeff] / b0Coeff;
vXSignal = zeros(61, 1);
vXSignal(31) = 10;
vYSignal = filter(normalizationCoeff, vDenCoeff, vXSignal);
vYSignal = filter(normalizationCoeff, vDenCoeff, vYSignal(end:-1:1));
figure();
plot(vYSignal);

The result I get is a filter which isn't stable, yet it seems I get the same coefficients as they get in their example. What am I missing? Has anyone ever implemented this method? Thank you. Answer: The answer was simple: the article writes the recursion with the coefficient values directly, while MATLAB's filter() uses the opposite sign convention for the denominator coefficients. Namely, a minus sign should be added.
Here's the correct code:

qFactor = 5;
b0Coeff = 1.57825 + (2.44413 * qFactor) + (1.4281 * qFactor * qFactor) + (0.422205 * qFactor * qFactor * qFactor);
b1Coeff = (2.44413 * qFactor) + (2.85619 * qFactor * qFactor) + (1.26661 * qFactor * qFactor * qFactor);
b2Coeff = (-1.4281 * qFactor * qFactor) + (-1.26661 * qFactor * qFactor * qFactor);
b3Coeff = 0.422205 * qFactor * qFactor * qFactor;
normalizationCoeff = 1 - ((b1Coeff + b2Coeff + b3Coeff) / b0Coeff);
vDenCoeff = [b0Coeff, -b1Coeff, -b2Coeff, -b3Coeff] / b0Coeff;
vXSignal = zeros(61, 1);
vXSignal(31) = 10;
vYSignal = filter(normalizationCoeff, vDenCoeff, vXSignal);
vYSignal = filter(normalizationCoeff, vDenCoeff, vYSignal(end:-1:1));
figure();
plot(vYSignal);
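For reference, the same corrected forward-backward recursion ported to Python/SciPy (a sketch; the coefficient formulas for q = 5 are the ones from the Young & van Vliet paper quoted above):

```python
import numpy as np
from scipy.signal import lfilter

q = 5.0
b0 = 1.57825 + 2.44413 * q + 1.4281 * q**2 + 0.422205 * q**3
b1 = 2.44413 * q + 2.85619 * q**2 + 1.26661 * q**3
b2 = -1.4281 * q**2 - 1.26661 * q**3
b3 = 0.422205 * q**3
B = 1.0 - (b1 + b2 + b3) / b0

# Note the minus signs: lfilter's denominator convention, like MATLAB's
# filter(), is a[0]*y[n] = b[0]*x[n] - a[1]*y[n-1] - a[2]*y[n-2] - ...
a = np.array([1.0, -b1 / b0, -b2 / b0, -b3 / b0])

x = np.zeros(61)
x[30] = 10.0
y = lfilter([B], a, x)           # forward (causal) pass
y = lfilter([B], a, y[::-1])     # backward pass on the reversed signal

assert np.all(np.isfinite(y))    # stable now
print(int(np.argmax(y)))         # peak near sample 30: a Gaussian-like bump
```

Each pass has unit DC gain (B divided by the same B in the denominator at z = 1), so the output integrates to approximately the input's sum of 10.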
{ "domain": "dsp.stackexchange", "id": 2510, "tags": "image-processing, matlab, filters, filter-design, software-implementation" }
Boundary condition of continuity for a barrier of 1D wave function with variable effective mass
Question: When dealing with potentials, quantum wells, etc. I've usually used the following conditions for assuring the continuity of a wave function: $$1. \ \psi_I(x)|_{x=0} = \psi_{II}(x)|_{x=0}$$ $$2. \ \frac{d \psi_I(x)}{dx}|_{x=0} = \frac{d\psi_{II}(x)}{dx}|_{x=0}$$ Now I'm working on a problem in which an electron is in a system of two layers. In one layer its effective mass is equal to $m^*=m_1>0$, whereas in the other one $m^*=-m_2<0$. In the solution the second condition is written as: $$\frac{1}{m_1}\frac{d \psi_I(x)}{dx}|_{x=0} = -\frac{1}{m_2} \frac{d\psi_{II}(x)}{dx}|_{x=0}$$ I've never seen such a condition, for sure it takes into account that the mass is different but could somebody explain why it takes such form? Answer: The continuity of $$x~\mapsto~\pi(x)~:=~\frac{1}{m^{\ast}(x)}\frac{d\psi(x)}{dx}\tag{1}$$ and $$x~\mapsto~\psi(x)\tag{2}$$ follows from a mathematical bootstrap argument similar to my Phys.SE answer here. Proof of (1). Rewrite the TISE $$ - \frac{d}{dx} \frac{\hbar^2}{2m^{\ast}(x)}\frac{d\psi(x)}{dx} +V(x)\psi(x) ~=~E \psi(x) \tag{3}$$ as a differential-integral equation $$ \frac{\hbar^2}{2}\pi(x) ~\equiv~\frac{\hbar^2}{2m^{\ast}(x)}\frac{d\psi(x)}{dx} ~=~\int^x\!\mathrm{d}y ~(V(y)-E)\psi(y). \tag{4} $$ If we assume that $V,\psi \in {\cal L}^2_{\rm loc}(\mathbb{R})$ are locally square integrable functions, then the product $(V-E)\psi\in {\cal L}^1_{\rm loc}(\mathbb{R})$ due to Cauchy–Schwarz inequality. Then the integral $x\mapsto \int^{x}\mathrm{d}y\ (V(y)-E)\psi(y)$ is continuous. Hence the LHS of eq. (4) is continuous as well. $\Box$ Proof of (2). Rewrite eq. (1) as an integral equation $$ \psi(x)~=~ \int^x\!\mathrm{d}y ~m^{\ast}(y)\pi(y).\tag{5} $$ If we assume that $m^{\ast} \in {\cal L}^2_{\rm loc}(\mathbb{R})$, then we can repeat the previous proof technique to conclude that the LHS of eq. (5) is continuous as well. $\Box$
{ "domain": "physics.stackexchange", "id": 84318, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, mathematical-physics, boundary-conditions" }
Why do we need distances in an ideal LPF?
Question: Here I have MATLAB code for ideal low-pass filtering an image. In the code below they have used D=sqrt(U.^2+V.^2); which finds some distances. Then this distance matrix D is compared with the cut-off frequency to create a filter function H ( H=double(D<=P); ) that is then multiplied with the Fourier-transformed image F, i.e., G=H.*F; to get the output image. My questions: What is this distance? Why do we need this distance matrix D? What information does it give us? How does it help to low-pass filter an image? (I understood that it helps to create the filter function H, but why do we need the distances to create the filter function?) I need to understand this clearly. This may be a simple question, but could anyone please explain this to me patiently? Thank you for reading my question. Reference: MATHWORKS: IDEAL LOW PASS FILTER

%IDEAL LOW-PASS FILTER
function idealfilter(X,P) % X is the input image and P is the cut-off freq
f=imread(X); % reading an image X
[M,N]=size(f); % Saving the rows of X in M and columns in N
F=fft2(double(f)); % Taking the Fourier transform of the input image
u=0:(M-1);
v=0:(N-1);
idx=find(u>M/2);
u(idx)=u(idx)-M;
idy=find(v>N/2);
v(idy)=v(idy)-N;
[V,U]=meshgrid(v,u);
D=sqrt(U.^2+V.^2); % finding out the distance
H=double(D<=P); % Comparing with the cut-off frequency
G=H.*F; % Multiplying the Fourier transformed image with H
g=real(ifft2(double(G))); % Inverse Fourier transform
imshow(f),figure,imshow(g,[ ]); % Displaying input and output image
end

Answer: In the Fourier domain, the frequency increases with the distance from the origin. So an ideal low-pass filter in the Fourier domain is shaped like a disk centered at the origin. Here they work with only one quadrant of the Fourier domain. Therefore, they create a pie-shaped mask H for the quadrant by computing the Euclidean distance from the quadrant corner and store the distance values in D.
The mask H is then chosen as all elements in D with values less than or equal to the parameter P (which is the radius of the disk). So the answer to your question(s) is that the distance computation helps the method compute the filter shape. The key is that a constant Euclidean distance from a point defines the shape of a circle, and all elements inside that circle form the disk that is used as the filter mask. So, naturally, after filtering, only the low-frequency components of the image are left in the Fourier domain. Then the low-pass filtered image is retrieved by inverse transforming the filtered image back from the Fourier domain to the spatial domain. EDIT Here is an illustrated example to clarify the answer. A typical way of illustrating filtering in the Fourier domain is to show the input image. Then we show the Fourier transform of the image. Here the origin of the Fourier space is in the center. Since the frequencies increase with the distance from the origin, the ideal filter is a disk with its center at the origin. Here we see the disk. The disk radius is the input parameter P. We use the disk to filter the image by multiplying the filter with the transformed image. We then get the filtered transform. To get the filtered image we take the inverse transform of the filtered result. So far so good. As I said, this is usually how low-pass filtering in the Fourier domain is explained. Now, what I was trying to explain when talking about quadrants of the Fourier domain, which got a bit confusing, is that the result of the Fourier transform in the MATLAB code provided does not have the origin centered as in the second image in the example above (the image showing the unfiltered Fourier transform). Instead the result of the Fourier transform F=fft2(double(f)) looks like this (we refer to it as the transform image). This means that since the filter is a disk with its center at the origin, the disk must be placed in the corner of the transform image.
Since the Fourier transform is cyclic, the filter should wrap around to cover all four corners of the transform image. Here is where the distance matrix D comes in. It is used to compute the filter mask above. D is computed by taking the Euclidean distance from each corner (using a neat trick with meshgrid). So D is the Euclidean distance map looking like this, where blue values represent low distances and red values represent high.

% Top left corner of D
0.00000 1.00000 2.00000 3.00000 4.00000 5.00000
1.00000 1.41421 2.23607 3.16228 4.12311 5.09902
2.00000 2.23607 2.82843 3.60555 4.47214 5.38516
3.00000 3.16228 3.60555 4.24264 5.00000 5.83095
4.00000 4.12311 4.47214 5.00000 5.65685 6.40312
5.00000 5.09902 5.38516 5.83095 6.40312 7.07107

The filter mask can therefore simply be acquired by selecting the elements in D that have a distance value less than or equal to P. The rest of the code works as in the example described above. That is, the filter mask is multiplied by the transform image, and the result is inverse transformed to get the low-pass filtered image. I hope this clarifies what role the distance computation has in creating the filter.
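The wrap-around distance map and disk mask are easy to reproduce in Python/NumPy, mirroring the MATLAB code in the question (a sketch; M, N, and the cutoff P are small arbitrary values for illustration):

```python
import numpy as np

M, N, P = 8, 8, 2.0    # image size and cutoff "radius" in frequency bins

u = np.arange(M)
v = np.arange(N)
u[u > M // 2] -= M     # map frequencies to the signed range, so the
v[v > N // 2] -= N     # distance wraps around all four corners of fft2

V, U = np.meshgrid(v, u)
D = np.sqrt(U**2 + V**2)    # distance from the zero-frequency corner
H = (D <= P).astype(float)  # ideal low-pass disk mask

print(H[0, :4])   # [1. 1. 1. 0.]  low frequencies kept near the corner
print(H[4, 4])    # 0.0            the highest frequency is removed
```

Multiplying this H elementwise with np.fft.fft2 of an image and inverse transforming gives the same ideal low-pass result as the MATLAB function.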
{ "domain": "dsp.stackexchange", "id": 1298, "tags": "image-processing, matlab, lowpass-filter" }
Canonical field momentum in quantum field theory
Question: In the context of second quantization and the use of fields in canonical quantization, the canonical momentum of the field is defined as the derivative of the field with respect to the time coordinate. But if we're talking relativistically, shouldn't it be the derivative of the field with respect to the proper time? What am I missing? Thanks. Answer: The second quantisation you mentioned is an equal-time quantisation, so it is specific to the frame one starts with, and for this reason time and spatial indices are not treated equally. For details, one can see, for example, the canonical quantisation of the scalar field in Chapter 3 of Srednicki's QFT book. However, we do need to check that the canonical quantisation is compatible with Lorentz transformations, and one way to do this is to check that the following diagram commutes (for notational convenience I only write one-particle states, but this should be checked for $n$-particle states): $$\require{AMScd} \begin{CD} p^u @>{Quantization }>> |p^u >;\\ @V{\Lambda}VV @V{U(\Lambda)}VV \\ \Lambda^u{}_vp^v @>{Quantization}>>|\Lambda^u{}_v p^v > ; \end{CD}$$ where $U(\Lambda)$ is, in Srednicki's notation, the representation of the Lorentz transformation.
{ "domain": "physics.stackexchange", "id": 29128, "tags": "quantum-field-theory, special-relativity, second-quantization" }
Why aren't all photons black holes?
Question: According to Special Relativity, there is no preferred inertial reference frame. And shifting reference frames can cause blue shifting or red shifting of photons. And according to General Relativity, photons are both affected by the shape of space time as well as have an effect on space time such that a Kugelblitz, a black hole formed from nothing but photons is possible. But then, for every photon, shouldn't there be some inertial reference frame where the photon is sufficiently blue shifted that it has enough energy confined within a small enough space that it would be a black hole? And since it appears to not be the case by all empirical evidence (my room isn't filled with black holes when I turn on the light), how does adding additional photons let one create a Kugelblitz? Answer: As your question indicates, if gravity depended only on energy then it would be frame dependent, which would conflict with relativity (or at least produce very odd effects like the ones you describe). This was one of the motivations for Einstein to come up with his theory of gravity, general relativity. In fact gravity (spacetime curvature) is produced not only by energy, but by the whole stress-energy tensor, which includes energy, momentum, pressure, and other terms, and which is frame independent. When changing frames, any change to the energy part of the stress-energy tensor is offset by changes to other parts of the tensor. Newton's law of gravity is only an approximation, valid when the non-energy terms of the stress-energy tensor are tiny. For light this is not the case, and to properly deal with the gravitational effects of light one needs to use the Einstein field equations, not Newton's.
{ "domain": "physics.stackexchange", "id": 88201, "tags": "general-relativity, special-relativity" }
What is the body density of insects on average?
Question: What is the body density (in $\text{g}/\text{cm}^3$) of insects and is there a list of animals and their value of body density? Answer: A recent studya measured the volumes (using a 3D scanner) and dry masses of 113 different insect species. They found the following relationship between the dry mass of the insects and their volumes ($V[mm^3]$ and $m[mg]$): $\ln (V) = 1.019 \ln (m) + 1.46$ $\Leftrightarrow V = 4.30596 m^{1.019}$ Thus, since $\rho = \frac m V$ $\rho(m) = \frac m {4.30596 m^{1.019}} = \frac {0.232236} {m^{0.019}} $ Figure: Scatter plot of the measured dry masses and volumes Reference: a Kühsel, S., Brückner, A., Schmelzle, S., Heethoff, M. and Blüthgen, N. (2016), Surface area–volume ratios in insects. Insect Science. doi:10.1111/1744-7917.12362
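Plugging numbers into the fitted relation from the answer (a sketch; mass in mg, density in g/cm³ as above, since mg/mm³ equals g/cm³):

```python
def insect_density(mass_mg: float) -> float:
    """Approximate insect body density (g/cm^3) from dry mass (mg), using
    the fitted scaling V = 4.30596 * m**1.019 from Kuhsel et al. (2016)."""
    volume_mm3 = 4.30596 * mass_mg ** 1.019
    return mass_mg / volume_mm3          # mg/mm^3 == g/cm^3

for m in (0.1, 1.0, 10.0, 100.0):
    print(m, round(insect_density(m), 3))
# Density is nearly constant (~0.21-0.24 g/cm^3 over this mass range)
# because the fitted exponent 1.019 is so close to 1 (near-isometric scaling).
```

Note that this is a density based on dry mass, so it understates the density of a living, hydrated insect.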
{ "domain": "biology.stackexchange", "id": 7218, "tags": "zoology, entomology, ecology" }
urdf beginner error
Question: In the first tutorial, 'creating a urdf file', on part 2 on creating the tree structure, I saved the xml as advised. Then 'rosmake urdf_parser' compiles without any errors. But when I run the 'rosrun urdf_parser check_urdf my_urdf.xml' line, the following error occurs: Could not find the 'robot' element in the xml file ERROR: Model Parsing the xml failed If anyone has any ideas for what I could do, I would be very grateful. Originally posted by Patrick on ROS Answers with karma: 40 on 2012-02-02 Post score: 1 Answer: Thank you both for your help. I've updated my Ubuntu and am using a different text editor, and the error no longer appears. Originally posted by Patrick with karma: 40 on 2012-02-06 This answer was ACCEPTED on the original site Post score: 0
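For reference, that parser error means check_urdf could not find a <robot> root element in the document. A minimal valid file looks like the sketch below (the names are made up for illustration); note also that an editor saving with a byte-order mark or smart quotes can trigger the same parse failure, which fits the accepted answer's fix of switching editors:

```xml
<?xml version="1.0"?>
<robot name="my_robot">
  <link name="base_link"/>
  <link name="child_link"/>
  <joint name="base_to_child" type="fixed">
    <parent link="base_link"/>
    <child link="child_link"/>
  </joint>
</robot>
```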
{ "domain": "robotics.stackexchange", "id": 8085, "tags": "ros, urdf, urdf-tutorial" }
Is there a way for the robot to go through goal points without stopping?
Question: Hello, I want to limit the travel path of the robot to some extent, so I set goals at finely spaced intervals and published them to move_base using SimpleActionServer. This plan was successful in that the robot went through the route I envisioned, but the robot stopped at each goal point. Is there a way for the robot to pass through a goal point without stopping? I've found that teb_local_planner has a parameter called free_goal_vel, which seems to be close to what I want. But I don't want to use teb_local_planner, because it seems to retreat frequently. Now I use base_local_planner. Thanks in advance. Originally posted by ayato on ROS Answers with karma: 25 on 2020-09-14 Post score: 1 Answer: There isn't a super clean way to do this - but you can approximate this behavior. Have the node that is sending goals to move_base monitor the progress and when you get "close" to your previously sent goal, send the next goal. The new goal will supersede the old one seamlessly. You can determine the robot's position either through TF or by monitoring the feedback topic of the move_base action. Originally posted by fergs with karma: 13902 on 2020-09-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ayato on 2020-09-22: Thank you @fergs. I will try to implement the node the way you taught me. Comment by Thazz on 2020-09-23: This kind of logic is implemented in the yocs_waypoints_navi package so you can take a look there.
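The accepted answer's supervisor logic can be sketched in plain Python (my sketch; the ROS pieces such as the actionlib client and the TF/feedback lookup are omitted, so only the goal-advancing rule is shown):

```python
import math

def close_enough(pose, goal, threshold=0.5):
    """True when the robot is within `threshold` meters of the active goal."""
    return math.hypot(pose[0] - goal[0], pose[1] - goal[1]) <= threshold

def next_goal_index(pose, waypoints, current, threshold=0.5):
    """Supersede the active goal with the next waypoint as soon as the robot
    gets close, so the planner never decelerates to a stop at intermediate
    points; the final waypoint is kept so the robot actually stops there."""
    if current < len(waypoints) - 1 and close_enough(pose, waypoints[current], threshold):
        return current + 1
    return current
```

In a real node, whenever next_goal_index advances you would publish waypoints[new_index] as a fresh goal to move_base; as the answer notes, sending the new goal cancels the old one seamlessly.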
{ "domain": "robotics.stackexchange", "id": 35540, "tags": "navigation, move-base, ros-kinetic, base-local-planner" }
Reference request for the evolution of faces
Question: We humans have faces, as do many other animals, such as cats, dogs, and monkeys. But there was a time on planet Earth before there were any faces in living things. I am currently very interested in how faces evolved. Is there any book or paper on this topic? I would like some references.
{ "domain": "biology.stackexchange", "id": 12461, "tags": "evolution, literature" }
Basic Sudoku Solver
Question: For my programming class I had to make a sudoku solver.

import time  # Used to add a delay for user readability

ROWS = COLS = possibleValues = 9  # The rows and columns of the board
GRID_ROWS = GRID_COLS = 3  # The rows and columns in which there are lines
possibleBoard = []

"""Functions"""


def board_filler():
    """Creates the sudoku board from user input"""
    board = [[] for _ in range(ROWS)]  # Creates the nested list to contain the board
    for x in range(ROWS):
        for y in range(COLS):
            # Takes an input, makes sure it is good, and if not asks for another one; if it is, adds it to the list
            while True:
                number = input(
                    f"Please enter an integer for the square in column {x + 1} and in row {y + 1} (hit enter for no number): ")
                try:
                    number = int(number)  # Makes the input that was a string into a number
                    if number > 9 or number < 1:
                        raise ValueError
                    else:
                        board[x].append(number)  # Add the number to the list
                        break  # Exit the loop and let it move on to the next number
                # If it's not a number, or a number more than 9 or less than 1, runs this
                except (TypeError, ValueError):
                    # If it's empty, adds just a space to the list
                    if not number:
                        board[x].append(" ")
                        break
                    else:
                        print("Please enter an integer between 1 and 9, or just hit enter")
    return board


def board_printer(board):
    """Prints the sudoku board"""
    counter = 0  # Makes sure it does not print extra lines
    for row in range(ROWS):
        s = ''  # A variable to contain the row before it is printed
        # Adds the items from the list to the variable
        for col in range(COLS):
            s += str(board[row][col]) + ' '
            if not (col + 1) % GRID_COLS:
                s += '| '
        s = s[:-2]  # Removes trailing characters
        print(s)
        # Prints the line of lines
        if not (row + 1) % GRID_ROWS and counter < 2:
            print('-' * len(s))
            counter += 1


def line_solver(board):
    """Remove confirmed values from the possible values in the lines"""
    global possibleBoard
    # Checks to see if there are any duplicate numbers in each row, then removes them from the possible board
    for x in range(ROWS):
        for y in range(COLS):
            if board[x][y] == " ":
                for z in range(COLS):
                    try:
                        # Removes values from the possibleBoard that are in the same row as a number on the board
                        possibleBoard[x][y].remove(board[x][z])
                    # If the number that the code is trying to remove has already been removed, do nothing
                    except (ValueError, AttributeError):
                        pass
    for x in range(ROWS):
        for y in range(COLS):
            if board[x][y] == " ":
                for z in range(ROWS):
                    try:
                        # Removes values from the possibleBoard that are in the same column as a number on the board
                        possibleBoard[x][y].remove(board[z][y])
                    # If the number that the code is trying to remove has already been removed, do nothing
                    except (ValueError, AttributeError):
                        pass
    return board


def square_solver(board):
    """Remove confirmed values from the possible values in the squares"""
    global possibleBoard
    # Sets up a modulator to multiply by to get the 3x3 grid of one square,
    # with the first value being the row and the second being the column
    blockNum = [0, 0]
    for _ in range(9):  # A loop that checks the 9 numbers in one of the squares
        for x in range(3):
            for y in range(3):
                if not board[(blockNum[0] * 3) + x][(blockNum[1] * 3) + y] == " ":  # Checks if that square has a number
                    # Checks all the empty spots in one of the squares for that number, then removes them
                    for z in range(3):
                        for w in range(3):
                            try:
                                # Removes the number from the possible board
                                possibleBoard[(blockNum[0] * 3) + z][(blockNum[1] * 3) + w].remove(
                                    board[(blockNum[0] * 3) + x][(blockNum[1] * 3) + y])
                            # If it can't do anything, run this
                            except (ValueError, AttributeError):
                                pass
        blockNum = block_num(blockNum)
    return board


def board_updater(board):
    """Makes it so if there is any number on the board, that number is a definite on the possible board"""
    global possibleBoard
    for x in range(ROWS):
        for y in range(COLS):
            if not board[x][y] == " ":
                possibleBoard[x][y] = board[x][y]


def solver(board):
    """Solves a few numbers of the sudoku board"""
    global possibleBoard
    board_updater(board)
    board = line_solver(board)
    board = square_solver(board)
    # Sets up the counter and a modulator to multiply by to get the 3x3 grid of one square,
    # with the first value being the row and the second being the column
    counter = [0] * 9
    blockNum = [0, 0]
    for _ in range(9):
        for x in range(3):
            for y in range(3):
                # Checks the possible board and counts how many times a possible number appears
                if type(possibleBoard[(blockNum[0] * 3) + x][(blockNum[1] * 3) + y]) == list:
                    for z in range(len(possibleBoard[(blockNum[0] * 3) + x][(blockNum[1] * 3) + y])):
                        counter[possibleBoard[(blockNum[0] * 3) + x][(blockNum[1] * 3) + y][z] - 1] += 1
        for x in range(len(counter)):
            # Checks to see if there were any times only one number appeared
            if counter[x] == 1:
                for y in range(3):
                    for z in range(3):
                        try:
                            # Finds the solo number, and makes that number definite
                            if (x + 1) in possibleBoard[(blockNum[0] * 3) + y][(blockNum[1] * 3) + z]:
                                board[(blockNum[0] * 3) + y][(blockNum[1] * 3) + z] = x + 1
                        except TypeError:
                            pass
        blockNum = block_num(blockNum)
        # Resets the counter
        counter = [0] * 9
    for x in range(ROWS):
        for y in range(COLS):
            # If there is only one number in the possibleBoard list, set that as a definite value
            # on the possibleBoard list and add it to the board list
            if type(possibleBoard[x][y]) == list and len(possibleBoard[x][y]) == 1:
                board[x][y] = possibleBoard[x][y][0]
    return board


def filler():
    """Fills the possible board"""
    listOfLists = [[], [], [], [], [], [], [], [], []]
    numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]  # All numbers are possible on an empty board so it fills it with all numbers
    # Adds 9 empty lists to each row list to represent the 9 squares
    for x in range(ROWS):
        for _ in range(ROWS):
            listOfLists[x].append([])
    # Puts the list with the numbers 1-9 in each square
    for x in range(ROWS):
        for y in range(COLS):
            listOfLists[x][y] = numbers.copy()
    return listOfLists


def solve_check():
    """Checks if board is solved"""
    for x in range(ROWS):
        for y in range(COLS):
            if type(possibleBoard[x][y]) == list:
                return False
    return True


"""Repeated code segments"""


def block_num(blockNum):
    """Increments the square"""
    blockNum[1] += 1
    if blockNum[1] > 2:
        blockNum[0] += 1
        blockNum[1] = 0
    return blockNum


possibleBoard = filler()
board = board_filler()
# Solves some numbers, prints the new board, then waits to allow the user to see changes
while True:
    if solve_check():
        break
    board_printer(board)
    time.sleep(1)
    prevBoard = board
    board = solver(board)
    print("")
# Loops so that if entered from the command line, it does not close
while True:
    pass

As this was the most complicated program I have ever done, I focused on making it work rather than making it pretty or readable (which caused lots of headaches when doing the solver function). Looking at the code, there are a lot of things that I wish I had done differently.

Things I Don't Like
- There are a lot of nested for loops that get very confusing
- It's not modular
- solver is really long
- It takes longer to solve than I like

What would be the best way to fix these things, and are there any other things I should change?

Things to Note
- board_filler and board_printer were reviewed in this question
- solver was made after everything else, as I forgot to add it originally
- The reason it goes into the loop at the end is so that if it is run from the command line, it does not close once it is done solving
- Despite wanting it to go faster, I still want the user to be able to see it solve the sudoku puzzle

Answer: I think you're over-commenting, but you're a student, so your professor probably is requiring more than needed. I think your code has some pretty big problems even after my answer: overuse of globals, lack of SRP, and I don't think your code works with all Sudoku boards. But my answer is long enough.

In board_filler:
- I'd prefer the name create_board; it's not really filling something passed to it, it's creating something.
- Don't raise types, raise instances: raise ValueError(...). When raising an error, always include a description.
- Don't use errors for standard control flow. Your if that raises the error can very easily be in the else of the try.
- You can clean up the function by only using two ifs to handle correct numbers and empty cells.
- You can populate board while you are in the loops.
- You can use a generator function to make the entire function a little cleaner too.
- IMO your code violates SRP. By making the innermost while loop its own function, you can easily use two comprehensions to make create_board really clean.
- Stop relying on globals and pass in the cols and rows.

def get_cell(x, y):
    """Get cell from user input."""
    while True:
        number = input(
            f"Please enter an integer for the square in column {x + 1}"
            f" and in row {y + 1} (hit enter for no number): "
        )
        try:
            number = int(number)
        except (TypeError, ValueError):
            if not number:
                return " "
        else:
            if 1 <= number <= 9:
                return number
        print("Please enter an integer between 1 and 9"
              ", or just hit enter")


def create_board(cols, rows):
    """Create a Sudoku board from user input."""
    return [
        [get_cell(x, y) for y in range(cols)]
        for x in range(rows)
    ]

In board_printer:
- Stop relying on globals. You're just limiting the functionality and reusability of your code without really any benefit.
- Use enumerate rather than range and indexes.
- Rather than using modulo arithmetic for each row, you can just make a new list using slices.
- You don't need to use modulo arithmetic to display the line. You can just check if the row index is 3 or 6.

def board_printer(board):
    """Prints the sudoku board."""
    for y, row in enumerate(board, 1):
        format_row = (
            row[0:3] + ['|']
            + row[3:6] + ['|']
            + row[6:9]
        )
        line = ' '.join(map(str, format_row))
        print(line)
        if y in (3, 6):
            print('-' * len(line))

In filler:
- You should merge both loops into one. You can also merge listOfLists into this loop too.
- Your name listOfLists isn't PEP 8 compliant.
- To make the code more reliable you can pass the number of rows and columns, and also pass what you want to default to.

import copy


def filler(rows, columns, value):
    """Fills the possible board."""
    board = []
    for _ in range(rows):
        row = []
        for _ in range(columns):
            row.append(copy.deepcopy(value))
        board.append(row)
    return board

In line_solver:
- You don't need two nested loops; the if board[x][y] == " ": and the loops that follow it are the same both times. Make two new functions for the horizontal and vertical passes.
- There is no need to return.
- Stop relying on globals and just use enumerate.

def remove_existing_horizontal(board, x, y):
    for z in range(len(board[0])):
        try:
            possibleBoard[x][y].remove(board[x][z])
        except (ValueError, AttributeError):
            pass


def remove_existing_vertical(board, x, y):
    for z in range(len(board)):
        try:
            possibleBoard[x][y].remove(board[z][y])
        except (ValueError, AttributeError):
            pass


def line_solver(board):
    """Remove confirmed values from the possible values in the lines."""
    for x, row in enumerate(board):
        for y, item in enumerate(row):
            if item == " ":
                remove_existing_horizontal(board, x, y)
                remove_existing_vertical(board, x, y)

In square_solver:
- blockNum is a PEP 8 naming violation.
- Use != rather than not foo == bar.
- You can simplify your code by making a function that does all the blockNum[0] * 3 + x noise.
- Don't return board.

def square_positions(x, y):
    x *= 3
    y *= 3
    return (
        (x + i, y + j)
        for i in range(3)
        for j in range(3)
    )


def square_solver(board):
    """Remove confirmed values from the possible values in the squares"""
    for block_y in range(3):
        for block_x in range(3):
            for x, y in square_positions(block_x, block_y):
                if board[x][y] != " ":
                    for i, j in square_positions(block_x, block_y):
                        try:
                            possibleBoard[i][j].remove(board[x][y])
                        except (ValueError, AttributeError):
                            pass

In solver:
- Use the square_positions that we defined before.
- Use isinstance, not type, to check the type of a value.
- Use enumerate, not for i in range(len(foo)): foo[i].
- Stop using ROWS and COLS and instead use enumerate.

def solver(board):
    """Solves a few numbers of the sudoku board"""
    global possibleBoard
    board_updater(board)
    line_solver(board)
    square_solver(board)
    counters = [0] * 9
    for block_y in range(3):
        for block_x in range(3):
            for x, y in square_positions(block_x, block_y):
                if isinstance(possibleBoard[x][y], list):
                    for z in range(len(possibleBoard[x][y])):
                        counters[possibleBoard[x][y][z] - 1] += 1
            for i, counter in enumerate(counters, 1):
                # Checks to see if there were any times only one number appeared
                if counter == 1:
                    for x, y in square_positions(block_x, block_y):
                        try:
                            if i in possibleBoard[x][y]:
                                board[x][y] = i
                        except TypeError:
                            pass
            counters = [0] * 9
    for x, row in enumerate(possibleBoard):
        for y, item in enumerate(row):
            if (isinstance(item, list)
                    and len(item) == 1):
                board[x][y] = item[0]
    return board
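As a self-contained sanity check of the 3x3-block indexing idea the review proposes (my re-statement, returning a list instead of a generator so the positions can be inspected):

```python
def square_positions(block_x, block_y):
    """All nine (row, col) pairs inside one 3x3 block of the 9x9 board."""
    bx, by = block_x * 3, block_y * 3
    return [(bx + i, by + j) for i in range(3) for j in range(3)]

# The nine blocks tile the whole board with no overlaps:
all_cells = {pos
             for bx in range(3)
             for by in range(3)
             for pos in square_positions(bx, by)}
```

Since the nine blocks cover all 81 cells exactly once, the per-square elimination passes visit every cell of the board.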
{ "domain": "codereview.stackexchange", "id": 37093, "tags": "python, beginner, python-3.x, sudoku" }
Which is the currently accepted mechanism of a Wittig reaction?
Question: The Wittig reaction is one of the most significant advances in synthetic organic chemistry in the 20th century and rightfully won its discoverer, Georg Wittig, the Nobel Prize in Chemistry. A Wittig reaction is the addition of a phosphorus ylide (previously thought to be an ylene with a $\ce{C=P}$ bond) to an aldehyde or a ketone resulting in a $\ce{C=C}$ double bond. Mechanistically, there is little debate that the final step is a cycloreversion of an oxaphosphetane liberating a phosphane oxide. However, as far as I know there is no agreement on the preceding step(s) to form said oxaphosphetane, with two different competing mechanistic proposals:
1. Nucleophilic attack of the ylide onto the carbonyl carbon to give a charge-separated betaine structure. This rotates around the newly-formed $\ce{C-C}$ bond until oxide and phosphonium are in proximity, whereupon a bond is formed to close the oxaphosphetane (stepwise mechanism).
2. [2+2] cycloaddition of the ylide and the carbonyl to immediately form the oxaphosphetane in a concerted manner.
Wikipedia just presents these two mechanisms side by side, hence it is not really helpful. Which of the two mechanisms is, at present, considered the most likely reaction pathway? Which principal pieces of evidence point towards it and disfavour the competing mechanism? Please answer including references to the corresponding journal articles. Answer: tl;dr: I don't think there is any mechanism that is 100% correct, and, in cases like this especially, I think it would completely depend upon what set of carbonyl/ylid/base/solvent etc. was used. But, of course, we like being able to generalise, and to my knowledge there's a lot more evidence to support a concerted-type mechanism.
General Background
The question is hopefully summarised in the scheme below (no geometry; that, in itself, is a pretty lengthy discussion).
Reaction of a carbonyl (in this case a ketone) with a phosphorus ylid is able to give rise to two species: either an oxaphosphetane directly, or a betaine (an internal salt, for the benefit of Loong) which then goes on to form the oxaphosphetane, which itself is able to eliminate to afford the olefin with generation of triphenylphosphine oxide as a thermodynamic driving force. This answer will present the case for a cycloaddition mechanism, and evidence against the betaine pathway. Importantly, it considers only Wittig reactions of unstabilised ylids, as with stabilised ylids the rules of the game change due to reversibility of intermediate formation (see a Horner–Wadsworth–Emmons).
Evidence against the initial formation of a betaine
Betaines have never been observed during the course of a 'normal' Wittig reaction; that is, spectroscopically, we are unable to see them (this may, of course, just be due to the fact that the formation of the oxaphosphetane is so rapid relative to the timescale of the methods used). Many reports of betaines have been from accidental (or otherwise) opening of oxaphosphetane intermediates, and indeed this seems to have been what led Wittig down the betaine pathway in the initial reports (though in the first ever paper he suggested that oxaphosphetane was the sole intermediate, but subsequently couldn't find any evidence for it back in the days before NMR etc.). Wittig[2] and others[3] reported the isolation of crystalline betaine salts. Phosphonium bromides were treated with PhLi to give a ylid, to which a carbonyl was added; subsequent addition of hydrogen bromide afforded crystalline solids which were able to be elucidated, proving the formation of a betaine. Whilst this was convincing at the time, we now know that the betaine isn't observed, but rather the oxaphosphetane is.
Vedejs (whom we'll come back to) has gone on to show that the earlier findings could be explained by quenching of the oxaphosphetane rather than as a result of directly trapping the betaine[4], which is more consistent with the other data available to us. Scheme 1: Generation of the crystalline betaine derivative from the oxaphosphetane. There are more recent reports of 'stable' betaines for very specific substrates where subsequent collapse isn't possible, again generated from oxaphosphetane intermediates. Stefan Berger's group at Leipzig reported a betaine stabilised by a bipyridyl group, allowing NMR data to be reported for the first time[1]. Scheme 2: Reversible formation of a stabilised betaine. In this case, the lithium holds a chelated structure together, preventing it from forming the alkene (and also to a certain extent preventing reformation of the oxaphosphetane on steric grounds). Interestingly, with addition of 18-crown-6 to sequester the lithium, the reaction proceeded normally in the forward direction with alkene signals observed. The fact that Berger was able to do this does not, of course, imply that such betaines are real intermediates along the Wittig pathway, but it was a nice piece of mechanistic work that I thought deserved mention. In addition to this mechanistic work, there are some empirical issues with invoking a betaine intermediate that cannot be easily explained. Namely, all Wittig reactions using non-stabilised ylids should be under kinetic control and irreversible (hence highly (Z)-selective due to the inability of the initial intermediates to reverse and hence equilibrate to the thermodynamic product); however, this is frequently observed not to be the case, and in certain cases high (E)-selectivity can be achieved.
Evidence for direct oxaphosphetane formation
Vedejs was one of the first to propose direct (irreversible) cycloaddition of the ylid and carbonyl to give rise to the oxaphosphetane, quickly followed by cycloreversion to form the desired alkene and a phosphine oxide. Scheme 3: cycloaddition/cycloreversion mechanism for the Wittig. In his early reports, direct observation of the oxaphosphetane by $^{31}$P NMR was conducted[5]; the paper reasons that the high-field chemical shift observed for the phosphorus-containing intermediate ruled out charged, 4-valent species such as betaines, and was instead more consistent with formation of a 5-valent neutral species such as an oxaphosphetane. Scheme 4: Vedejs' observation of oxaphosphetane. This NMR analysis isn't really too convincing by itself; however, in the scheme above, compound 4 had previously been characterised by X-ray crystallography[6] and various NMR methods, and as such gave some weight to the species observed by Vedejs being similar to the known compound. More recent work[7] has also directly observed the cis/trans oxaphosphetanes formed via competing cycloadditions.
In conclusion, I think the cycloaddition mechanism is safest, and indeed it was the one I was taught as an undergraduate (and the one I know several other leading universities use), so if in doubt I'd go for that explanation, but on the firm understanding that more mechanistic work is needed.
References
Generally: Modern Carbonyl Olefination; Carey, Advanced Organic Chemistry Part A; and Comprehensive Organic Synthesis I.
[1] Neumann, R. A.; Berger, S. Observation of a Betaine Lithium Salt Adduct During the Course of a Wittig Reaction. Eur. J. Org. Chem. 1998, 6, 1085. Subramanyam had reported similar chemistry earlier, but the NMR data was incomplete.
[2] Wittig, G.; Haag, A. Über Phosphin-alkylene als olefinbildende Reagenzien, VIII. Allenderivate aus Ketenen. Chem. Ber. 1963, 96, 1535. DOI: 10.1002/cber.19630960609. A very old paper in German.
[3] Schlosser, M.; Christmann, K. F. Olefinierungen mit Phosphor-Yliden, I. Mechanismus und Stereochemie der Wittig-Reaktion. Liebigs Ann. Chem. 1967, 708, 1. DOI: 10.1002/jlac.19677080102.
[4] Vedejs, E.; Meier, G. P.; Snoble, K. A. J. Low-temperature characterization of the intermediates in the Wittig reaction. J. Am. Chem. Soc. 1981, 103, 2823. DOI: 10.1021/ja00400a055.
[5] Vedejs, E.; Snoble, K. A. J. Direct observation of oxaphosphetanes from typical Wittig reactions. J. Am. Chem. Soc. 1973, 95, 5778. DOI: 10.1021/ja00798a066.
[6] Mazhar-Ul-Haque; Caughlan, C. N.; Ramirez, F.; Pilot, J. F.; Smith, C. P. Crystal and molecular structure of a four-membered cyclic oxyphosphorane with pentavalent phosphorus, PO2(C6H5)2(CF3)4C3H2. J. Am. Chem. Soc. 1971, 93, 5229. DOI: 10.1021/ja00749a044.
[7] Maryanoff, B. E.; Reitz, A. B.; Mutter, M. S.; Whittle, R. R.; Olofson, R. A. Stereochemistry and mechanism of the Wittig reaction. Diasteromeric reaction intermediates and analysis of the reaction course. J. Am. Chem. Soc. 1986, 108, 7664. DOI: 10.1021/ja00284a034.
{ "domain": "chemistry.stackexchange", "id": 8765, "tags": "organic-chemistry, reaction-mechanism, reference-request, wittig-reactions" }
Why isn't the Time-Independent Schrödinger Equation an equation of motion?
Question: I thought an equation of motion was something where you are given a Lagrangian and, using the Euler-Lagrange equation, you then find the equations of motion for that system. Same basic idea for the Hamiltonian but with Hamilton's equations. But the time-dependent Schrödinger equation is written as \begin{equation} i\hbar \frac{\partial}{\partial t} \psi = \hat{H}\psi \end{equation} and although I gather this is an equation of motion, I never see anyone plugging it into Hamilton's equations, so I assume it must work differently somehow. I also assumed \begin{equation} \hat{H}\psi = E\psi \end{equation} was an equation of motion, but I gather it isn't. My Question: Can someone explain why the time-independent Schrödinger equation isn't an eom? Can someone explain in what sense exactly the time-dependent Schrödinger equation is an equation of motion? Answer: Can someone explain why the time-independent Schrödinger equation isn't an eom? The TISE is an eigenvalue equation due to applying separation of variables to the TDSE; it is an equation for the spatial function alone. Can someone explain in what sense exactly the time-dependent Schrödinger equation is an equation of motion? A Lagrangian (density) for which the TDSE (and its conjugate) is an EOM is $$\mathcal L = \frac{i}{2}\left(\phi^*\partial_t\phi - \phi\,\partial_t\phi^* \right) - \frac{1}{2}\partial_x\phi^*\partial_x\phi - V(x)\phi^*\phi$$
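As a quick check (my own sketch, with $\hbar = m = 1$ and the potential term taken as $-V\phi^{*}\phi$ so that the conventional sign of the TDSE comes out), treating $\phi$ and $\phi^{*}$ as independent fields, the Euler–Lagrange equation for $\phi^{*}$ is

$$\frac{\partial\mathcal L}{\partial\phi^{*}} - \partial_t\frac{\partial\mathcal L}{\partial(\partial_t\phi^{*})} - \partial_x\frac{\partial\mathcal L}{\partial(\partial_x\phi^{*})} = 0$$

$$\Rightarrow\quad \frac{i}{2}\,\partial_t\phi - V\phi + \frac{i}{2}\,\partial_t\phi + \frac{1}{2}\,\partial_x^{2}\phi = 0 \quad\Rightarrow\quad i\,\partial_t\phi = -\frac{1}{2}\,\partial_x^{2}\phi + V\phi,$$

which is exactly the TDSE. The TISE only appears afterwards, when the separation ansatz $\phi(x,t)=\psi(x)\,e^{-iEt}$ is substituted into this equation of motion.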
{ "domain": "physics.stackexchange", "id": 20617, "tags": "quantum-mechanics, schroedinger-equation, terminology, time-evolution, equations-of-motion" }
Can spin and macroscopic angular momentum convert to each other?
Question: Suppose an isolated system with a number of particles with parallel spins. Can the macroscopic angular momentum of the system increase at the expense of the number of particles having parallel spins (that is, by inverting the direction of the spins of some of them) or by converting all particles into spinless ones? Conversely, in a rotating system of spinless particles, can the macroscopic angular momentum be decreased by converting some of the particles into ones having parallel spins? Can one choose a rotating frame of reference in such a way that a particle that has spin in an inertial frame does not have one in it? Answer: First of all yes, you can convert angular momentum to spin orientations, as only the total angular momentum $\vec J=\vec L+\vec S$ is conserved. The setup you describe is very similar to a famous experimental effect dubbed the "Einstein-de Haas" effect. You can read more about it on Wikipedia. The main idea is that you have a ferromagnet hung on a string and when you magnetize it (which really only means that you orient the spins of the electrons in a coordinated way) the magnet has to pick up mechanical angular momentum, since angular momentum is conserved. What you observe is that it starts to turn seemingly out of nothing once you magnetize it. Now the part where you are mistaken is that you cannot change the spins of particles. The spin of a particle is a property of the particle itself, much like its mass (in fact those are the two properties particles have as a result of their being representations of the symmetry group of spacetime, cf. Wigner's classification). You can only orient a component of the spin, but not change the total spin magnitude. A spin-1/2 particle (fermion) will always stay a fermion.
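An order-of-magnitude version of the Einstein-de Haas setup (all sample numbers are my own illustrative assumptions, not from the answer): aligning roughly one spin's worth of angular momentum per iron atom forces a hanging cylinder to rotate, and the resulting angular velocity is tiny, which is why the original experiment was so delicate:

```python
import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
N_A = 6.02214076e23        # Avogadro constant, 1/mol

# Illustrative sample: a small iron cylinder hung on a fiber (assumed dimensions).
radius = 2e-3              # m
length = 2e-2              # m
rho_fe = 7870.0            # kg/m^3
molar_mass = 55.845e-3     # kg/mol
spin_per_atom = 1.1        # iron's ~2.2 Bohr-magneton moment ~ 1.1 hbar of spin per atom

volume = math.pi * radius ** 2 * length
mass = rho_fe * volume
atoms = mass / molar_mass * N_A

# Angular momentum handed to the lattice when the spins are aligned,
# and the rotation rate it forces on the cylinder (I = m r^2 / 2 about its axis).
delta_L = atoms * spin_per_atom * HBAR
inertia = 0.5 * mass * radius ** 2
omega = delta_L / inertia  # rad/s
```

With these numbers the rod spins up at well below a millirad per second, so a torsion fiber and resonant driving are needed to detect the effect.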
{ "domain": "physics.stackexchange", "id": 30673, "tags": "quantum-spin" }
Where do Newton's Laws not work?
Question: I'm working on a high-school-level project about Newton's Laws and I picked a topic that describes situations where they don't work. Can you name any practical cases where they do not work? Why do they not work in special and general relativity or quantum mechanics? Why do they not work on very light things (atoms), super heavy ones (black holes), or super-fast particles in an accelerator? Thanks for any advice! Answer: A quick search suggests this has not been asked before. For a high school project I suggest that you might think about atomic physics: the way the negatively charged electrons travel around the positively charged nucleus. I suggest this for two reasons: 1) it is a clear case where Newton's laws do not work, and 2) there are simple experiments that you could try to get evidence for it. Newton's laws do not work on the small scale of atoms and molecules, and we need to use quantum mechanics. For example, electrons, as you may already know, are organized in shells in atoms (1s, 2s, 2p, and so on), which all have different energies. The experiments you could try would be to look at the spectrum of light from a street light or a fluorescent (strip) light. If you can observe the spectrum with a prism, for example, you should find bright lines in the spectrum. These bright lines are due to electrons moving from one energy level to another. The wavelength (or colour) of the light depends on the amount of energy emitted when an electron drops from a higher level to a lower level. It is possible to make a simple spectrometer using an old CD or DVD as a diffraction grating. The link has some advice about how to do this in practice and examples. So you could build a spectrometer to observe transitions between different states, which are not predicted by Newtonian mechanics. The question of why Newtonian mechanics does not work at small scales and why we need to use quantum mechanics is not straightforward to answer.
One way of answering it is to say that experiments have found that Newtonian mechanics does not work for atoms, and it turns out that quantum mechanics does a good job of predicting energies. Another answer could be based on the quantum-mechanical example of a particle in a 1-dimensional box: for an electron in a box the size of an atom there are large energy gaps between the different states the electron can occupy, but if the box is larger, for example the size of a cup, the levels are so close together that there is effectively a continuous band of states the electron can occupy, so it can have almost any energy, which is like Newtonian mechanics.
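The particle-in-a-box comparison in the last paragraph is easy to make quantitative (my numbers, using the standard infinite-well formula $E_n = n^2 h^2 / (8 m L^2)$):

```python
H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def level_gap_ev(L, n=1):
    """Gap E_{n+1} - E_n (in eV) for an electron in an infinite 1-D well of width L."""
    e1 = H ** 2 / (8 * M_E * L ** 2)      # ground-state energy in joules
    return ((n + 1) ** 2 - n ** 2) * e1 / EV

gap_atom = level_gap_ev(1e-10)   # box the size of an atom (~1 angstrom)
gap_cup = level_gap_ev(0.1)      # box the size of a cup (~10 cm)
```

The atom-sized box has level gaps of order 100 eV, visible as discrete spectral lines, while the cup-sized box's gaps are many orders of magnitude below thermal energies, so its spectrum looks continuous and the electron behaves classically.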
{ "domain": "physics.stackexchange", "id": 17983, "tags": "newtonian-mechanics" }
Operator in coherent state basis
Question: I am reading "Introductory to Quantum Optics" by Christopher C. Gerry and Peter L. Knight but I don't understand a solution from which you can obtain the matrix elements of an operator in the number basis if you know the diagonal coherent-state matrix elements of that operator. page 56: The diagonal elements of an operator $\hat{F}$ in a coherent state basis completely determine the operator. From Eqs. (3.76) and (3.77) we have $$ \langle\alpha|\hat{F}|\alpha\rangle e^{\alpha^*\alpha} = \sum_n\sum_m \frac{\alpha^{*m}\alpha^n}{\sqrt{m!n!}} \langle m |\hat{F} | n \rangle$$ Treating $\alpha$ and $\alpha^*$ as independent variables it is apparent that $$ \frac{1}{\sqrt{m!n!}} \left. \left[ \frac{\partial^{n+m} \left( \langle \alpha | \hat{F} | \alpha \rangle e^{\alpha^*\alpha} \right)}{\partial\alpha^{*m}\partial\alpha^n }\right] \right|_{\alpha^*=0 \\ {\alpha=0}} = \langle m | \hat{F} | n \rangle $$ I am a bit confused right now. In the first equation, $n$ and $m$ are just indices but when I take the derivatives in the second equation into account, $n$ and $m$ are both outside of the sum and that doesn't really make sense to me. $$ \frac{1}{\sqrt{m!n!}} \left. \left[ \frac{\partial^{n+m} \left( \langle \alpha | \hat{F} | \alpha \rangle e^{\alpha^*\alpha} \right)}{\partial\alpha^{*m}\partial\alpha^n }\right] \right|_{\alpha^*=0 \\ {\alpha=0}} = \frac{1}{\sqrt{m!n!}} \left. \left[ \frac{\partial^{n+m}}{\partial\alpha^{*m}\partial\alpha^n} \sum_n\sum_m \frac{\alpha^{*m}\alpha^n}{\sqrt{m!n!}} \langle m |\hat{F} | n \rangle \right] \right|_{\alpha^*=0 \\ {\alpha=0}}$$ As I understood it, $n$ and $m$ only have a real meaning inside of the sums and I don't really know how to apply the derivatives to prove the solution. Answer: Your confusion is due to a slight abuse of notation, but is fairly common practice. One should not treat $m$ and $n$ to have the same meaning in the first and second equation within your quote. 
In the first equation, $m,n$ are dummy variables; in the second they are specific choices. Let us rename the $m,n$ in the second equation to $m',n'$. Now, when performing the derivative, you will pull out precisely the term in the sum of the first equation where $m=m',n=n'$, arriving at the desired result. To get this, you also assume $\langle m | \hat{F} |n \rangle$ is independent of $\alpha, \alpha^*$.
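To make the term extraction explicit, here is the derivative applied to a single term of the double sum, with the renamed indices $m',n'$ (my own worked step, not from the book):

```latex
\left.\frac{\partial^{\,m'+n'}}{\partial\alpha^{*m'}\,\partial\alpha^{n'}}
\,\frac{\alpha^{*m}\alpha^{n}}{\sqrt{m!\,n!}}\,\right|_{\alpha=\alpha^{*}=0}
= \frac{m!\,n!}{\sqrt{m!\,n!}}\,\delta_{mm'}\,\delta_{nn'}
= \sqrt{m!\,n!}\,\delta_{mm'}\,\delta_{nn'}
```

The derivative $\partial^{m'}\alpha^{*m}/\partial\alpha^{*m'}$ evaluated at $\alpha^*=0$ vanishes unless $m=m'$: for $m<m'$ the derivative itself is zero, and for $m>m'$ a leftover power of $\alpha^*$ is killed by setting $\alpha^*=0$; the same holds for $n$ and $n'$. Summing over $m,n$ and dividing by $\sqrt{m'!\,n'!}$ then leaves exactly $\langle m'|\hat F|n'\rangle$.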
{ "domain": "physics.stackexchange", "id": 50011, "tags": "homework-and-exercises, operators, hilbert-space, coherent-states" }
actionlib tutorial errors
Question: Hi! I'm new to ROS and I've been having some trouble getting started with the actionlib tutorial under Fuerte and Xubuntu 12.04. I'm following the steps to create a Simple Action Server. In step 1.1, I defined the Fibonacci action and then used the following command to generate the message files: $ rosrun actionlib_msgs genaction.py . Fibonacci.action This didn't work at first, and I needed to follow the guidance in another ROS Answers topic to continue. The code that worked was as follows: $ rosrun actionlib_msgs genaction.py -o msg/ action/Fibonacci.action I followed the steps to edit CMakeLists.txt, created fibonacci_server.cpp, and copied the code given by the tutorial. If I make the project (with rosmake), though, the build fails with the following error. Scanning dependencies of target fibonacci_server make[3]: Leaving directory `/home/will/ros_workspace/tutorials/learning_actionlib/build' make[3]: Entering directory `/home/will/ros_workspace/tutorials/learning_actionlib/build' [100%] Building CXX object CMakeFiles/fibonacci_server.dir/src/fibonacci_server.o Linking CXX executable ../bin/fibonacci_server /usr/bin/ld: CMakeFiles/fibonacci_server.dir/src/fibonacci_server.o: undefined reference to symbol 'vtable for boost::detail::thread_data_base' /usr/bin/ld: note: 'vtable for boost::detail::thread_data_base' is defined in DSO /usr/lib64/libboost_thread.so.1.46.1 so try adding it to the linker command line /usr/lib64/libboost_thread.so.1.46.1: could not read symbols: Invalid operation collect2: ld returned 1 exit status make[3]: *** [../bin/fibonacci_server] Error 1 make[3]: Leaving directory `/home/will/ros_workspace/tutorials/learning_actionlib/build' make[2]: *** [CMakeFiles/fibonacci_server.dir/all] Error 2 make[2]: Leaving directory `/home/will/ros_workspace/tutorials/learning_actionlib/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/will/ros_workspace/tutorials/learning_actionlib/build' Any
assistance would be greatly appreciated. Thanks! Originally posted by willshepherdson on ROS Answers with karma: 33 on 2012-06-13 Post score: 3 Answer: You need this in your CMakeLists.txt: rosbuild_add_boost_directories() rosbuild_add_executable(fibonacci_server src/fibonacci_server.cpp) rosbuild_link_boost(fibonacci_server thread) This changed in Fuerte. The tutorial explains it correctly, but you need to click on the fuerte button at the top of the page to see the new CMakeLists.txt update. Originally posted by joq with karma: 25443 on 2012-06-16 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by joq on 2012-06-17: If this answer is correct, please mark it so we know to update that Tutorial. Comment by willshepherdson on 2012-06-18: Works like a charm; thank you very much! Comment by joq on 2012-06-18: That Tutorial is already correct if you select Fuerte up at the top of the page. Unfortunately, the default version is Electric. Comment by jbohren on 2012-06-20: Jack, I actually added the distro switch and the doc update after you answered the question. (: Comment by joq on 2012-06-20: Ah, good job. Thanks! There is an open defect ticket to change the Version() default to Fuerte. Comment by jbohren on 2012-06-20: Where is that ticket? (Never mind, found it: https://code.ros.org/trac/ros/ticket/3967 )
{ "domain": "robotics.stackexchange", "id": 9788, "tags": "ros, boost, actionlib, actionlib-tutorials" }
What is the procedure of antigen recognition by B cells?
Question: Before presenting my confusion, I sincerely thank everyone for any advice or clarification; every single comment is helpful. My English writing is still poor, so just ask if anything is unclear. I have first referenced Janeway's immunobiology textbook and this question about roughly the same topic. However, the exact process is still not very clear in my mind. So for clarity, let's imagine the entire process from the beginning, that is, the moment a pathogen has already successfully invaded our body and started circulating through the blood vessels. So the first question: antigens can be divided into two types, those our body has already identified and those that are totally new to it. Is this understanding correct? Then the second question: for an identified pathogen, antibodies circulating throughout our body will somehow bind the pathogen and trigger a series of immune responses. Is this understanding correct? And then the third and seemingly most nontrivial question: for an antigen that is totally new to our body (not remembered by our immune cells), how do our cells know in the first place that it is indeed a pathogen? Even just some references would be enormously helpful. Thank you! Answer: I can't pull resources for you at this time, and my response is too long for a comment, but just to give you some direction to research: Firstly, if you are not clear, I would try to understand how antigen and pathogen are different terms. Understanding "epitope" and "hapten" may also help with your understanding. Yes, BUT it's less "type of antigen" and more "state of immune system" - either there is immunologic memory [to the particular antigen] or there is not. Yes, there is an antibody binding domain on the variable region of the antibody. Keywords: "Fc/Fab, Constant region/Variable region, Heavy chain/Light chain, V(D)J recombination, CDR1/2/3." It does not "know."
Ideally, it recognizes all foreign particles (of sufficient immunogenicity) as "antigens," but this process is not perfect. Common examples of "improper" recognition are autoimmune disorders and allergies. Try looking into T/B Cell Maturation/Activation, Thymus cells, and B Cell Dual Recognition (the classic/dominant method of activation in response to pathogenic infection). I think this will give you a better understanding of how the body tries to limit its immune response to what we consider "pathogenic." If the question doesn't get locked, hopefully someone can provide a more complete response to you.
{ "domain": "biology.stackexchange", "id": 12158, "tags": "immunology" }
NITE Ubuntu 10.04
Question: Is there any way to install NITE on Ubuntu 10.04? In ROS the page seems to be empty. The only download that I could find was on <openni.org>, and it was for Ubuntu 10.10. Update: What commands do I use to run the NITE package? I want to use the "Prime Sense User Tracker Viewer" that it shows on the page. Originally posted by qdocehf on ROS Answers with karma: 208 on 2011-06-29 Post score: 1 Answer: The package is named nite-dev -> sudo apt-get install nite-dev Its source is listed as a ROS source, so you will have to have Ubuntu set up for a package-based installation. (You don't need to install from packages though.) Originally posted by dornhege with karma: 31395 on 2011-06-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by qdocehf on 2011-06-30: I was able to install the package, but how do I use it?
{ "domain": "robotics.stackexchange", "id": 6001, "tags": "ros, kinect, nite, ubuntu" }
Non-Fock representation of quantum field theory
Question: I cannot find the reference, but I read that in curved spacetime there exist representations that are not Fock representations, yet satisfy the CCR and are unitarily inequivalent to a Fock representation. In the usual understanding of quantum field theory in flat spacetime, the CCR is, or can be, written between annihilation and creation operators as well, so any representation satisfying the CCR is automatically a Fock representation. So is the "satisfying CCR" part referring to a CCR that is not between annihilation and creation operators, so that we cannot arrive at creation and annihilation operators, or something like that? OR is this referring to the fact that once we pick one Fock representation, other Fock representations are unitarily inequivalent to it and are considered not to be Fock representations? Answer: Independently of the background spacetime, be it curved or flat, there are uncountably many inequivalent irreducible representations of the algebra of canonical commutation and anticommutation relations. Many of them are Fock representations, corresponding to fields of the same spin (and charge eventually) but with different masses. There are, however, also interacting representations that are inequivalent to the free Fock representations and that are not of Fock type.
{ "domain": "physics.stackexchange", "id": 47759, "tags": "quantum-field-theory, hilbert-space, commutator, qft-in-curved-spacetime" }
If NP in BPP then NP equals RP
Question: I am looking for a reference to the fact that if NP is included in BPP then NP is equal to RP. See for instance these links: https://cs.stackexchange.com/q/80509 http://www.inf.ed.ac.uk/teaching/courses/cmc/cw3_solns.pdf https://www.csie.ntu.edu.tw/~lyuu/complexity/2011/20120103s.pdf I know that this is folklore, but I'd still like to cite something that is published and where this would be properly proved. Answer: An actual factual reference is K. Ko. Some observations on the probabilistic algorithms and NP-hard problems. Information Processing Letters, 14(1):39–43, 1982. (When I first saw this result --- I don't remember where it was now --- it was called "Ko's Theorem". Googling suggests that another theorem has that name as well...)
{ "domain": "cstheory.stackexchange", "id": 4885, "tags": "reference-request, complexity-classes, approximation" }
Inverse $\mathcal Z$-transform of rational functions
Question: What will be the inverse $\mathcal Z$-transform of this function: $$H(z) = \frac{\left(1+\beta z^{-1}\right)\left(1+\beta z\right)}{\left(1+\alpha z^{-1}\right)\left(1+\alpha z\right)}$$ Answer: Given the $\mathcal Z$-transform: $$H(z) = \frac{\left(1+\beta z^{-1}\right)\left(1+\beta z\right)}{\left(1+\alpha z^{-1}\right)\left(1+\alpha z\right)}$$ The inverse $\mathcal Z$-transform can be found by the method of partial fraction expansion, guided by the region of convergence (ROC) associated with $H(z)$. I will proceed with the most fundamental method and apply the necessary algebraic manipulations to get the required standard form as follows: \begin{align} H(z) &= \frac {\left(1 + \beta z^{-1}\right)\left(1 + \beta z\right)}{\left(1 + \alpha z^{-1}\right)\left(1 + \alpha z\right)} &\scriptstyle{\text{convert to negative power of $z$}}\\ &= \frac{\left(1 + \beta z^{-1}\right)\left(z^{-1} + \beta\right)}{\left(1 + \alpha z^{-1}\right)\left(z^{-1} + \alpha\right)} &\scriptstyle{\text{convert to standard form of } (1-d_k z^{-1})}\\ &= \frac{\beta}{\alpha} \frac{\left(1 + \beta z^{-1}\right)\left(1 + \frac{1}{\beta} z^{-1}\right)}{\left(1 + \alpha z^{-1}\right)\left(1 + \frac{1}{\alpha}z^{-1}\right)} &\scriptstyle{\text{expand the brackets}}\\ &= \frac{\beta}{\alpha} \frac{\left(1 + \left(\beta + \frac{1}{\beta}\right) z^{-1} + z^{-2}\right) }{\left(1 + \left(\alpha + \frac{1}{\alpha}\right) z^{-1} + z^{-2}\right)} &\scriptstyle{\text{apply the long division}}\\ &= \frac{\beta}{\alpha} \left( 1 + \frac{ \left(\beta + \frac{1}{\beta} - \alpha - \frac{1}{\alpha}\right)z^{-1} }{\left(1 + \alpha z^{-1}\right)\left(1 + \frac{1}{\alpha}z^{-1}\right)} \right) &\scriptstyle{\text{go further...}}\\ &= \frac{\beta}{\alpha} \left( 1 + \frac{A}{1 + \alpha z^{-1}} + \frac{B}{1 + \frac{1}{\alpha}z^{-1}} \right) &\scriptstyle{\text{where $A$ and $B$ are found from the right side quotient of $H(z)$}}\\ \end{align} Now assuming $A$ and $B$ are found as functions of $\alpha$ and
$\beta$, we shall define all possible ROC of $H(z)$ to find the corresponding inverse transform $h[n]$. Now If $0<|\alpha|<1<\frac{1}{|\alpha|}$ and ROC is such that $|z|<|\alpha|$ then: $h[n]$ will be unstable and anti-causal as $$h[n] = \frac{\beta}{\alpha} \delta[n] + \frac{\beta}{\alpha} \left(- A\left(-\alpha\right)^n \text{u}[-n-1] - B\left(- \frac{1}{\alpha}\right)^n \text{u}[-n-1] \right)$$ Or else if $0<|\alpha|<1<\frac{1}{|\alpha|}$ and ROC is such that $|\alpha| <|z|<\frac{1}{|\alpha|}$ then: $h[n]$ will be stable and two-sided as $$h[n] = \frac{\beta}{\alpha} \delta[n] + \frac{\beta}{\alpha} \left( A(-\alpha)^n \text{u}[n] - B\left(- \frac{1}{\alpha}\right)^n \text{u}[-n-1] \right)$$ Or else if $0<|\alpha|<1<\frac{1}{|\alpha|}$ and ROC is such that $|z|>\frac{1}{|\alpha|}$ then: $h[n]$ will be unstable and causal as $$h[n] = \frac{\beta}{\alpha} \delta[n] + \frac{\beta}{\alpha} \left( A(-\alpha)^n \text{u}[n] + B\left(- \frac{1}{\alpha}\right)^n \text{u}[n] \right)$$ The cases for $0<\frac 1\alpha<1<\alpha$ are simply the same, merely interchanging $\alpha$ with $\frac 1\alpha$
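As a quick numerical sanity check of the expansion above: with $C = \beta + \frac{1}{\beta} - \alpha - \frac{1}{\alpha}$, matching coefficients gives $A = \frac{C\alpha}{1-\alpha^2}$ and $B = -A$. The sketch below uses illustrative values $\alpha=0.4$, $\beta=0.7$; these closed forms are my own evaluation, not stated in the answer:

```python
a, b = 0.4, 0.7  # illustrative alpha, beta with |a| != 1

# Original transfer function H(z)
H = lambda z: ((1 + b/z) * (1 + b*z)) / ((1 + a/z) * (1 + a*z))

# Partial-fraction coefficients, solved from C*z^-1 = A*(1 + z^-1/a) + B*(1 + a*z^-1)
C = b + 1/b - a - 1/a
A = C * a / (1 - a**2)
B = -A

# Expanded form from the answer: H(z) = (b/a) * (1 + A/(1 + a z^-1) + B/(1 + z^-1/a))
H_pf = lambda z: (b/a) * (1 + A/(1 + a/z) + B/(1 + 1/(a*z)))

for z in (0.9 + 0.3j, 2.0, -1.5j):  # sample points away from the poles z = -a, -1/a
    assert abs(H(z) - H_pf(z)) < 1e-12
print("partial-fraction form matches H(z) at all test points")
```

Agreement at several points off the poles confirms the algebra before picking an ROC and inverting term by term.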
{ "domain": "dsp.stackexchange", "id": 3806, "tags": "z-transform, transfer-function" }
Reverse Polish notation calculator similar to dc on Unix/Linux systems using dynamic libraries
Question: This program uses dynamic libraries so that additional calculator functions can be added by dropping a library into a specific directory. What I'd like to get out of this code review is: What do I still need to do to make this more C++ and less C? Is the object-oriented nature of the code good, or am I missing something in the object-oriented design? Is the debug and test part of the code something I should keep around or toss? I've used the Boost headers and libraries in some portions of the code to decrease the amount of code I need to write and make it more portable. I can't find anything to make the portion of the program that deals with dynamic/shared libraries more portable. Is there some library I can use to be able to port the code from Linux/Unix to Windows and Mac? An example of what I'm looking for: This morning looking at other questions I found out about nullptr. I should have used nullptr rather than NULL in the constructor in RpnDLData.cpp. I started using C++ in 1989, ten years before C++98 was implemented. I never learned C++98, C++03 or C++11 until now. To decrease the amount of code here in the question, I have excluded the objects that deal with I/O or the Operating System Interface (they rely heavily on Boost for portability, e.g. parsing command lines or environment variables).
TstDbgCommon.h #ifndef TSTDBGCOMMON_H_ #define TSTDBGCOMMON_H_ const unsigned int NODEBUGORTEST = 0; const unsigned int NODEBUG = 0; const unsigned int NOTEST = 0; const unsigned int DEFAULTOBJECTTESTLEVEL = 1; const unsigned int DEFAULTOBJECTDEBUGLEVEL = 0; const unsigned int LEVEL1 = 1; const unsigned int LEVEL2 = 2; const unsigned int LEVEL3 = 3; const unsigned int LEVEL4 = 4; const unsigned int LEVEL5 = 5; const unsigned int LEVEL6 = 6; const unsigned int LEVEL7 = 7; const unsigned int LEVEL8 = 8; const unsigned int LEVEL9 = 9; const unsigned int LEVEL10 = 10; const unsigned int LEVEL11 = 11; const unsigned int LEVEL12 = 12; const unsigned int LEVEL13 = 13; const unsigned int LEVEL14 = 14; const unsigned int LEVEL15 = 15; const unsigned int LEVEL16 = 16; const unsigned int LEVEL17 = 17; const unsigned int LEVEL18 = 18; const unsigned int LEVEL19 = 19; const unsigned int LEVEL20 = 20; #endif /* TSTDBGCOMMON_H_ */ TestBase.h #ifndef TESTBASE_H_ #define TESTBASE_H_ #include "TstDbgCommon.h" using namespace std; class TestBase { private: static unsigned int mA_Level; unsigned int mA_ObjectMinimumLevel; inline unsigned int mF_CheckLevelAgainstObjectLevel(unsigned int level) { return (mA_ObjectMinimumLevel < level); }; inline unsigned int mF_CheckLevelAgainstProjectLevel(unsigned int level) { return (GetProjectTestLevel() > level); }; inline unsigned int mF_CheckLevel(unsigned int level) { return ( (mF_GetObjectLevel() < level) && (level < GetProjectTestLevel()) ); }; protected: void mF_SetLevel(unsigned int Level) { this->mA_Level = Level; }; inline unsigned int mF_GetLevel() { return this->mA_Level; }; inline void mF_SetObjectLevel(unsigned int Level) { this->mA_ObjectMinimumLevel = Level; }; inline unsigned int mF_GetObjectLevel() { return this->mA_ObjectMinimumLevel; }; inline unsigned int mF_CheckTestLevel(unsigned int level) { return mF_CheckLevel(level); }; public: TestBase(); virtual ~TestBase(); inline void SetProjectTestLevel(unsigned int TestLevel) { 
mF_SetLevel(TestLevel); }; inline unsigned int GetProjectTestLevel() { return mF_GetLevel(); }; inline void SetObjectTestLevel(unsigned int level) { mF_SetObjectLevel(level); }; inline unsigned int GetObjectTestLevel() { return mF_GetObjectLevel(); }; inline unsigned int IsTesting() { return ( (this->mA_Level) ? ( (this->mA_ObjectMinimumLevel) ? ( this->mA_Level >= this->mA_ObjectMinimumLevel ) : 0 ) : 0 ); }; void ObjectLevelTesting(const char *OutputBuffer); virtual void ShowOnlyIfLevelGreaterThan(int forceLevel, const char* format, ...) = 0; virtual void TestReportThisObject() = 0; }; #endif /* TESTBASE_H_ */ TestBase.cpp #include "TestBase.h" #include <iostream> using namespace std; unsigned int TestBase::mA_Level = NOTEST; TestBase::TestBase() { mA_ObjectMinimumLevel = DEFAULTOBJECTTESTLEVEL; } TestBase::~TestBase() { } void TestBase::ObjectLevelTesting(const char *OutputBuffer) { if (IsTesting()) { cout << OutputBuffer; } } DebugBase.h #ifndef DEBUGBASE_H_ #define DEBUGBASE_H_ #include "TstDbgCommon.h" #include <functional> using namespace std; class DebugBase { private: static unsigned int mA_Level; unsigned int mA_ObjectMinimumLevel; protected: inline void mF_SetProjectLevel(unsigned int Level) { this->mA_Level = Level; }; inline unsigned int mF_GetProjectLevel() { return this->mA_Level; }; inline void mF_SetObjectLevel(unsigned int level) { this->mA_ObjectMinimumLevel = level; }; inline unsigned int mF_GetObjectLevel() { return this->mA_ObjectMinimumLevel; }; inline unsigned int mF_CheckLevelAgainstObjectLevel(unsigned int level) { return (mF_GetObjectLevel() < level); }; inline unsigned int mF_CheckLevelAgainstProjectLevel(unsigned int level) { return (mF_GetProjectLevel() > level); }; inline unsigned int mF_CheckLevel(unsigned int level) { return ( (mF_GetObjectLevel() < level) && (level < mF_GetProjectLevel()) ); }; public: DebugBase(); virtual ~DebugBase(); inline void SetProjectDebugLevel(unsigned int TestLevel) { mF_SetProjectLevel(TestLevel); }; 
inline unsigned int GetProjectDebugLevel() { return mF_GetProjectLevel(); }; inline void SetObjectDebugLevel(unsigned int TestLevel) { mF_SetObjectLevel(TestLevel); }; inline unsigned int GetObjectDebugLevel() { return mF_GetObjectLevel(); }; inline unsigned int IsDebugging() { return ( (this->mA_Level) ? ( (this->mA_ObjectMinimumLevel) ? (this->mA_Level >= this->mA_ObjectMinimumLevel) : 0) : 0 ); }; void ObjectLevelDebugging(const char *OutputBuffer); virtual void ShowOnlyIfLevelGreaterThan(int forceLevel, const char* format, ...) = 0; }; #endif /* DEBUGBASE_H_ */ DebugBase.cpp #include "DebugBase.h" #include <iostream> using namespace std; unsigned int DebugBase::mA_Level = 0; DebugBase::DebugBase() { SetObjectDebugLevel(0); } DebugBase::~DebugBase() { } void DebugBase::ObjectLevelDebugging(const char *OutputBuffer) { if (IsDebugging()) { cout << OutputBuffer; } } DbgTstHandling.h #ifndef DBGTSTHANDLING_H_ #define DBGTSTHANDLING_H_ #include "TestBase.h" #include "DebugBase.h" #include <vector> using namespace std; class DebugAndTestHandling : protected TestBase, protected DebugBase { private: void mF_CommonDebugAndTestReporting(const char *Output); inline int mF_CheckLevelAgainstObjectLevels(unsigned int level) { return ( ((TestBase::mF_GetObjectLevel() > level)) || (DebugBase::mF_GetObjectLevel()> level) ); }; inline int mF_CheckLevelAgainstProjectDebug(unsigned int level) { return (GetProjectDebugLevel() > level); }; inline int mF_CheckLevelAgainstProjectTest(unsigned int level) { return (GetProjectTestLevel() > level); }; inline int mF_CheckLevelAgainstProjectLevels(unsigned int level) { return ( (mF_CheckLevelAgainstProjectDebug(level)) || (mF_CheckLevelAgainstProjectTest(level)) ); }; inline int mF_CheckLevels(int level) { return ( (TestBase::mF_CheckTestLevel(level)) || (DebugBase::mF_CheckLevel(level)) ); }; protected: void ObjectLevelDebuggingOrTesting(const char *format, ...); void ShowOnlyIfLevelGreaterThan(int forceLevel, const char* format, ...); 
inline unsigned int IsTestingOrDebugging() { return ((IsTesting()) || (IsDebugging())); }; public: DebugAndTestHandling(); virtual ~DebugAndTestHandling(); inline void SetObjectMinimumDebugOrTestLevel(int level) { TestBase::SetObjectTestLevel(level); DebugBase::SetObjectDebugLevel(level); }; }; #endif /* DBGTSTHANDLING_H_ */ DbgTstHandling.cpp #include "DbgTstHandling.h" #include <iostream> #include <cstdarg> #include <ctype.h> #include <string.h> DebugAndTestHandling::DebugAndTestHandling() { TestBase::SetObjectTestLevel(DEFAULTOBJECTTESTLEVEL); DebugBase::SetObjectDebugLevel(DEFAULTOBJECTDEBUGLEVEL); } DebugAndTestHandling::~DebugAndTestHandling() { } void DebugAndTestHandling::mF_CommonDebugAndTestReporting( const char *OutputBuffer) { if (IsDebugging()) { ObjectLevelDebugging(OutputBuffer); return; } if (IsTesting()) { ObjectLevelTesting(OutputBuffer); return; } } void DebugAndTestHandling::ObjectLevelDebuggingOrTesting( const char *format, ...) { if (IsTestingOrDebugging()) { char localBuffer[2028]; va_list args; va_start(args, format); vsprintf(localBuffer, format, args); mF_CommonDebugAndTestReporting(localBuffer); va_end(args); } } void DebugAndTestHandling::ShowOnlyIfLevelGreaterThan(int forceLevel, const char *format, ...) 
{ if (mF_CheckLevels(forceLevel)) { // Indent output by level for (int TabOutput = forceLevel; --TabOutput; ) { cout << "\t"; } char localBuffer[2028]; va_list args; va_start(args, format); vsprintf(localBuffer, format, args); mF_CommonDebugAndTestReporting(localBuffer); va_end(args); } } RpnDLData.h #ifndef RPNDLDATA_H_ #define RPNDLDATA_H_ #include <string> using namespace std; #include "plugins.h" #include "DbgTstHandling.h" using namespace std; class RpnDLData : protected DebugAndTestHandling { private: void *m_LibHandle; string *m_LibPath; OpTableEntry *m_Data; void m_OpenLibrary(); void m_CloseLibrary(); void m_FindRpnHubSymbol(); public: RpnDLData(string FullLibraryPath, int ObjectDebugTestLevel=DEFAULTOBJECTTESTLEVEL); virtual ~RpnDLData(); inline const string *GetPath() { return m_LibPath; }; inline int IsLibraryOpen() { return ((m_LibHandle) ? 1 : 0); }; const OpTableEntry *GetOperationTableData(); inline int IsRpnLibrary() { return (IsLibraryOpen() && m_Data); }; void TestReportThisObject(); void Test_ReportM_LibHandle(); void Test_ReportM_LibPath(); void Test_ReportM_Data(); void Test_Reportm_IsOpen(); }; #endif /* RPNDLDATA_H_ */ RpnDLData.cpp #include <iostream> #include <dlfcn.h> #include <string> #include <typeinfo> #include "RpnDLData.h" using namespace std; void RpnDLData::Test_Reportm_IsOpen() { ObjectLevelDebuggingOrTesting("\tRpnDLData->m_IsOpen: %d\n", IsLibraryOpen()); } void RpnDLData::Test_ReportM_LibPath() { ObjectLevelDebuggingOrTesting("\tRpnDLData->m_LibPath %s\n", m_LibPath->c_str()); } void RpnDLData::Test_ReportM_LibHandle() { ObjectLevelDebuggingOrTesting("\tRpnDLData->m_LibHandle 0x%x\n", m_LibHandle); } void RpnDLData::Test_ReportM_Data() { if (m_Data) { ObjectLevelDebuggingOrTesting("\tRpnDLData->m_Data = 0x%x\n", m_Data); ObjectLevelDebuggingOrTesting("\tRpnDLData->m_Data->FuncPtr = 0x%x\n", m_Data->FuncPtr); ObjectLevelDebuggingOrTesting("\tRpnDLData->m_Data->name = %s\n", m_Data->name); } else { 
ObjectLevelDebuggingOrTesting("\tRpnDLData->m_data = NULL\n"); } } void RpnDLData::TestReportThisObject() { ObjectLevelDebuggingOrTesting("RPN Dynamic Library Object Test Report\n"); ObjectLevelDebuggingOrTesting("\tRpnDLData->TestLevel = %d\n", GetObjectTestLevel()); ObjectLevelDebuggingOrTesting("\tRpnDLData->DebugLevel = %d\n", GetObjectDebugLevel()); Test_ReportM_LibPath(); Test_Reportm_IsOpen(); Test_ReportM_Data(); Test_ReportM_LibHandle(); } RpnDLData::RpnDLData(string FullLibraryPath, int ObjectDebugTestLevel) { m_LibPath = NULL; m_LibHandle = NULL; m_Data = NULL; SetObjectMinimumDebugOrTestLevel(ObjectDebugTestLevel); m_LibPath = new string(FullLibraryPath); m_OpenLibrary(); m_FindRpnHubSymbol(); TestReportThisObject(); } RpnDLData::~RpnDLData() { m_CloseLibrary(); delete m_LibPath; } void RpnDLData::m_CloseLibrary() { if (m_LibHandle) { dlclose(m_LibHandle); m_LibHandle = 0; } } void RpnDLData::m_OpenLibrary() { if (!IsLibraryOpen()) { if (!(m_LibHandle = dlopen(m_LibPath->c_str(), (RTLD_NOW | RTLD_LOCAL)))) { ShowOnlyIfLevelGreaterThan(LEVEL3, "Can't open shared library %s\n", m_LibPath); } } } void RpnDLData::m_FindRpnHubSymbol() { if (IsLibraryOpen()) { if (m_Data) { return; } void *Found = NULL; dlerror(); // Clear any previous errors Found = dlsym(m_LibHandle, "rpnhub_plugin"); if (Found) { OpTableEntry *TableEntry = static_cast<OpTableEntry *>(Found); if (!TableEntry) { m_CloseLibrary(); ShowOnlyIfLevelGreaterThan(LEVEL3, "Symbol does not convert %s\n", m_LibPath); } else { ShowOnlyIfLevelGreaterThan(LEVEL3, "TableEntry = {%s,0x%x}\n", TableEntry->name, TableEntry->FuncPtr); } m_Data = TableEntry; } else { ShowOnlyIfLevelGreaterThan(LEVEL3, "Can't find symbol rpnhub_plugin %s\n", m_LibPath); } } } const OpTableEntry *RpnDLData::GetOperationTableData() { if (!IsRpnLibrary()) { m_OpenLibrary(); m_FindRpnHubSymbol(); } return m_Data; } RpnOpsTab.h #ifndef RPNOPSTAB_H_ #define RPNOPSTAB_H_ using namespace std; #include <vector> #include <stack> #include 
<string> #include <map> #include "plugins.h" #include "DbgTstHandling.h" class RpnOperationsTable : protected DebugAndTestHandling { private: string *m_SearchPath; map<string, const OpTableEntry *> m_Operations; vector<class RpnDLData *> m_OpenedLibraries; int m_FindAndAddPluginLibraries(); int m_AddLibraryToTable(string Library); void m_CloseAllLibraries(); public: RpnOperationsTable(string *LibPath, int ObjectDebugTestLevel=DEFAULTOBJECTTESTLEVEL); virtual ~RpnOperationsTable(); void ExecuteOperation(string InputToken, stack<double>& Operands); void TestReportThisObject(); }; #endif /* RPNOPSTAB_H_ */ RpnOpsTab.cpp #include <stdexcept> #include <cstdlib> #include <error_code.hpp> #include <range.hpp> #include <filesystem.hpp> #include "RpnDLData.h" #include "RpnOpsTab.h" using namespace std; using namespace boost; using namespace boost::system; using namespace boost::filesystem; using namespace boost::range; RpnOperationsTable::RpnOperationsTable(string *PathSpec, int ObjectDebugTestLevel) { SetObjectMinimumDebugOrTestLevel(ObjectDebugTestLevel); if (!PathSpec) { string Emsg = "In RpnOperationsTable Constructor: Drop in directory path not specified"; throw runtime_error(Emsg); } m_SearchPath = new string(*PathSpec); ObjectLevelDebuggingOrTesting("Current Path = %s\n", m_SearchPath->c_str()); if (!m_FindAndAddPluginLibraries()) { string Emsg = "No plugin libraries found for rpn in the search directory "; Emsg.append(*m_SearchPath); throw runtime_error(Emsg); } } RpnOperationsTable::~RpnOperationsTable() { m_CloseAllLibraries(); delete m_SearchPath; } void RpnOperationsTable::m_CloseAllLibraries() { for (auto OpenSharedLib : m_OpenedLibraries) { RpnDLData *DLCloseData = OpenSharedLib; ShowOnlyIfLevelGreaterThan(LEVEL2, "Closing shared Library: %s\n", DLCloseData->GetPath()->c_str()); delete DLCloseData; } } int RpnOperationsTable::m_FindAndAddPluginLibraries() { int Found = 0; path plugins_dir(*m_SearchPath); path SharedLibExtention(".so"); // Change to .dll on 
Microsoft Windows if ((exists(plugins_dir)) && (is_directory(plugins_dir))) { for(auto& File_Iter : make_iterator_range(directory_iterator(plugins_dir), {})) { path PathToCheck = File_Iter; ShowOnlyIfLevelGreaterThan(LEVEL2, "Found library: %s\n", PathToCheck.c_str()); if ((!is_directory(File_Iter)) && (PathToCheck.extension() == SharedLibExtention)) { if (m_AddLibraryToTable(PathToCheck.string())) { Found++; } } } } else { string Emsg = "The search path : "; Emsg.append(*m_SearchPath); Emsg.append(" either doesn't exist or is not a directory"); throw runtime_error(Emsg); } return Found; } int RpnOperationsTable::m_AddLibraryToTable(string Library) { int Added = 0; ObjectLevelDebuggingOrTesting("Attempting to insert library %s\n", Library.c_str()); RpnDLData *DLCloseData = new RpnDLData(Library, GetObjectDebugLevel()); if (!DLCloseData->IsLibraryOpen()) { ShowOnlyIfLevelGreaterThan(LEVEL3, "Can't open shared library : %s\n", Library.c_str()); return Added; // If errors occur then ignore this library } const OpTableEntry *TableEntry = DLCloseData->GetOperationTableData(); if (!TableEntry) { delete DLCloseData; ShowOnlyIfLevelGreaterThan(LEVEL3, "Can't find symbol rpnhub_plugin in : %s\n", Library.c_str()); } else { m_OpenedLibraries.push_back(DLCloseData); m_Operations[TableEntry->name] = TableEntry; Added++; } return Added; } void RpnOperationsTable::ExecuteOperation(string InputToken, stack<double>& Operands) { OpTableEntry const *TableEntry = m_Operations[InputToken]; if (TableEntry) { TableEntry->FuncPtr(Operands); ShowOnlyIfLevelGreaterThan(LEVEL2, "Performed : %s\n", InputToken.c_str()); } else { char Operator[32]; strncpy(Operator, InputToken.c_str(), 32); Operands.push(atof(Operator)); ShowOnlyIfLevelGreaterThan(LEVEL2, "Added %s to Operands\n", InputToken.c_str()); } } void RpnOperationsTable::TestReportThisObject() { ObjectLevelDebuggingOrTesting("RPN Operations Table Object Test Report\n"); ObjectLevelDebuggingOrTesting( "\tRpnOpsTable->TestLevel = %d\n", 
GetObjectTestLevel()); ObjectLevelDebuggingOrTesting( "\tRpnOpsTable->DebugLevel = %d\n", GetObjectDebugLevel()); ObjectLevelDebuggingOrTesting( "\tCurrent Path : %s\n", m_SearchPath->c_str()); ObjectLevelDebuggingOrTesting( "\t%d Operations were added to the Operations Table\n", static_cast<int> (m_Operations.size())); for (auto& kv : m_Operations) { ObjectLevelDebuggingOrTesting("\t\tKey '%s' 0x%x\n", kv.first.c_str(), kv.second); } } RpnCalc.h #ifndef RPNCALC_H_ #define RPNCALC_H_ #include "DbgTstHandling.h" using namespace std; class RpnCalculator : protected DebugAndTestHandling { private: class RpnOperationsTable *m_OpsTable; class RpnCalculatorIOSystem *m_IOSystem; protected: inline int Test_DoesOperationsTableExist() { return ((m_OpsTable) ? 1 : 0); }; inline int Test_DoesIOSystemExist() { return ((m_IOSystem) ? 1 : 0); }; virtual void CalCulatorRunLoop(); public: RpnCalculator(int argc, char const * const argv[], int ObjectDebugTestLevel=DEFAULTOBJECTTESTLEVEL); virtual ~RpnCalculator(); virtual void RunUntilQuit(); inline int Test_InternalTestsPassed() { return ( Test_DoesOperationsTableExist() && Test_DoesIOSystemExist() ); }; void TestReportThisObject(); }; #endif /* RPNCALC_H_ */ Answer: I see a number of things that may help you improve your code. Don't abuse using namespace std Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid. It's especially bad to put it into header files, so please don't do that. Use standard Boost directory structure In files such as RpnOpsTab.cpp, there are a number of included Boost headers, but unfortunately, they're not in the standard hierarchy. 
For example, the code currently includes these lines: #include <error_code.hpp> #include <range.hpp> #include <filesystem.hpp> but they should normally be written like this: #include <boost/system/error_code.hpp> #include <boost/range.hpp> #include <boost/filesystem.hpp> Use the newer style parametric constructors In many of the class constructors, there is code like this: TestBase::TestBase() { mA_ObjectMinimumLevel = DEFAULTOBJECTTESTLEVEL; } However, the more modern style would be to write it like this instead: TestBase::TestBase() : mA_ObjectMinimumLevel{DEFAULTOBJECTTESTLEVEL} { } Let the compiler create member functions In TestBase the virtual destructor has no body and no effect. Rather than writing one manually, you could simply let the compiler write it for you. virtual ~TestBase() = default; Avoid explicitly using this Many of the classes include explicit references to this that aren't really needed and just add to visual clutter. For example, instead of this: inline unsigned int mF_GetObjectLevel() { return this->mA_ObjectMinimumLevel; }; You can write this: unsigned int mF_GetObjectLevel() { return mA_ObjectMinimumLevel; } Note that I've omitted the inline keyword and the trailing ;, neither of which are necessary. The function is likely to be inlined anyway (the keyword is only a suggestion) and the semicolon is syntactically superfluous. Use const where practical Functions such as the above-mentioned mF_GetObjectLevel() do not and should not alter the underlying object. For that reason, they should be declared const: unsigned int mF_GetObjectLevel() const Simplify naming There is not really any useful reason to have mA_ or mF_ prefixes. I'd recommend using the common m_ prefix for member data items, and no prefix for functions. Rethink your class design The DebugBase and TestBase classes are very similar and the DebugAndTestHandling class inherits from both. 
It may make more sense instead to have a single Log class and have the DebugAndTestHandling class contain two instances of it (one for test and one for debug). It would seem to simplify and rationalize the interface. Avoid new and delete Modern C++ uses new and delete and raw pointers much much less often than the version you and I both learned in the 1980s. For instance, within RpnOperationsTable::m_AddLibraryToTable() you could simply create the RpnDLData object to be added to the std::vector and rely on the object going out of scope to implicitly delete it. int RpnOperationsTable::addLibraryToTable(std::string Library) { int Added = 0; ObjectLevelDebuggingOrTesting("Attempting to insert library %s\n", Library.c_str()); RpnDLData DLCloseData(Library, GetObjectDebugLevel()); if (!DLCloseData.IsLibraryOpen()) { ShowOnlyIfLevelGreaterThan(LEVEL3, "Can't open shared library : %s\n", Library.c_str()); return Added; // If errors occur then ignore this library } const OpTableEntry *TableEntry = DLCloseData.GetOperationTableData(); if (!TableEntry) { ShowOnlyIfLevelGreaterThan(LEVEL3, "Can't find symbol rpnhub_plugin in : %s\n", Library.c_str()); } else { m_OpenedLibraries.push_back(DLCloseData); m_Operations[TableEntry->name] = TableEntry; Added++; } return Added; } Note that this change also assumes that the m_OpenedLibraries becomes an array of objects rather than an array of pointers. That is, it would be declared like this: std::vector<class RpnDLData> m_OpenedLibraries; This also simplifies other things considerably. 
For example, the routine to close all libraries can now be as simple as this: void RpnOperationsTable::m_CloseAllLibraries() { for (auto& OpenSharedLib : m_OpenedLibraries) { ShowOnlyIfLevelGreaterThan(LEVEL2, "Closing shared Library: %s\n", OpenSharedLib.GetPath()->c_str()); } m_OpenedLibraries.clear(); } Simplify class interfaces Most of the instances in which the code calls the various logging functions tend to look like this: ShowOnlyIfLevelGreaterThan(LEVEL2, "Found library: %s\n", PathToCheck.c_str()); Note that the c_str() operation is called in very many instances and further, that there is no real prevention against passing an incorrect number of arguments. That is, this would also compile and run without complaint: ShowOnlyIfLevelGreaterThan(LEVEL2, "Found library:\n", PathToCheck.c_str()); The difference is just that the "%s" is missing from the format string. I'd suggest instead something that supports operator << so that it could be written like this: if (logLevel > LEVEL2) { log << "Found library: " << PathToCheck << '\n'; } This also has the advantage that if the log level is such that it would not be printed anyway, the rest of the statement is never evaluated. This may have a performance advantage. Eliminate "magic numbers" This code has a number of "magic numbers," that is, unnamed constants such as 32, 2048, etc. Generally it's better to avoid that and give such constants meaningful names. That way, if anything ever needs to be changed, you won't have to go hunting through the code for all instances of "32" and then trying to determine if this particular 32 means the maximum size of an operator or some other constant that happens to have the same value.
{ "domain": "codereview.stackexchange", "id": 18868, "tags": "c++, c++11, boost, portability, dynamic-loading" }
Convertible templated math vector
Question: I've made a templated math vector struct with a templated type and templated dimension count. I want my vectors to be convertible so I can easily make, for example, a vec<int, 3> from a vec<float, 3> I want to do it in a safe way, or at least I'd like to know the ways it's unsafe. I've never used static_cast or dynamic_cast and I've never used the operator user-defined conversion. Here is what I came up with (only the relevant parts of the struct): template<class t, size_t dimensions> struct vec{ typedef t coord_type; static constexpr size_t dimension_count = dimensions; std::array<coord_type, dimension_count> coords; template<class new_coord_type> operator vec<new_coord_type, dimension_count>() const{ std::array<new_coord_type, dimension_count> casted_coords; std::transform( coords.begin(), coords.end(), casted_coords.begin(), [](coord_type coord){return static_cast<new_coord_type>(coord);} ); return {casted_coords}; } }; It seems to work how I want it to. For example, this compiles and gives me the correct results: vec<float, 3> v1{1.1f, 2.5f, 3.9f}; vec<int, 3> v2 = v1; // converted correctly to {1, 2, 3} The only way I could figure out how to cast the contents of an array is to use std::transform. I'm not sure if there's a better way. Final questions: Is this unsafe or incorrect? How can this be improved? Answer: I had to adjust your code style a bit before I could easily read the code. 
In particular: Use InitialCaps for template parameters Add a space before { A newline after template<...> generally improves readability Thus: template<class T, size_t Dimensions> struct vec { using coord_type = T; static constexpr size_t dimension_count = Dimensions; std::array<T, Dimensions> coords; template<class U> operator vec<U, Dimensions>() const { std::array<U, Dimensions> casted_coords; std::transform( coords.begin(), coords.end(), casted_coords.begin(), [](T coord) { return static_cast<U>(coord); } ); return {casted_coords}; } }; Notice that I personally prefer to use the original template parameters T, Dimensions, etc. in places where your code used the member typedefs coord_type, dimension_count etc.; I find that this improves readability but I believe reasonable people may differ on the subject. (In particular, the C++ Standard always uses member typedefs in declarations such as reference operator*() const as opposed to T& operator*() const; but I think that's because the name reference is part of the API of the class, whereas the name T is given for exposition only. That's a Standard-ism that we don't necessarily need to emulate when writing our own code.) Nit: Dimensions is technically the wrong name for a single dimension. Also, by capitalizing the template parameter, we've freed up the shorter name dimensions (or dimension) for the constexpr member variable. You never have to use a tediously long name like dimension_count if you manage your naming real estate effectively! return {casted_coords}; The extra braces here are a pessimization; they inhibit move-elision (a.k.a. Named Return Value Optimization). Remove them... oh wait, I see, casted_coords is a std::array, not a vec! Why are you constructing a separate array and then copying it into the vec? That seems pointless! 
At the very least, this should be return { std::move(casted_coords) }; but ideally you'd define casted_coords to be of type vec<U, Dimensions> and then transform the data straight into casted_coords.coords.begin(). [](T coord) { return static_cast<U>(coord); } This lambda takes its parameter by-copy, which is going to be expensive if T is, say, std::string. Prefer to take generic parameters by const& or (in C++14) by perfect-forward: [](const T& coord) { return static_cast<U>(coord); } [](auto&& coord) { return static_cast<U>(std::forward<decltype(coord)>(coord)); } In this case the former is better because it's less typing (and thus fewer chances to screw something up). Your operator vec<U, Dimensions>() should be explicit, unless you have a darn good reason for it not to be. Most possible reasons I can think of do not qualify as "darn good." In fact, I'd prefer to go further, and say that you shouldn't be using C++'s native "conversion" mechanisms for this kind of conversion at all. IMO you should write a user-defined function, along the lines of static_pointer_cast or any_cast, that does precisely and explicitly what you want. template<class To, class From, size_t Dim> auto static_vec_cast(const vec<From, Dim>& from) -> vec<To, Dim> { vec<To, Dim> to; std::transform( from.coords.begin(), from.coords.end(), to.coords.begin(), [](const From& x) { return static_cast<To>(x); } ); return to; } int main() { vec<int, 3> a = {{1,2,3}}; vec<float, 3> b = static_vec_cast<float>(a); } Season with SFINAE to taste. However, this still has a major flaw: You're accidentally requiring that To be default-constructible! That's no good (especially if it does happen to be default-constructible but the default constructor is expensive). You can fix this issue, but it requires metaprogramming. template<class To, class From, size_t Dim, size_t... 
Is> auto static_vec_cast(const vec<From, Dim>& from, std::index_sequence<Is...>) -> vec<To, Dim> { vec<To, Dim> to = {{ static_cast<To>(from.coords[Is])... }}; return to; } template<class To, class From, size_t Dim> auto static_vec_cast(const vec<From, Dim>& from) -> vec<To, Dim> { return static_vec_cast<To>(from, std::make_index_sequence<Dim>{}); } The advantage of this approach is that it's hardly any more lines of code, but it's more correct, and it doesn't rely on running std::transform at runtime — it just generates the correct code directly inline. Writing an efficient static_vec_cast<U>(vec<T, Dim>&&) is left as an exercise for the reader.
{ "domain": "codereview.stackexchange", "id": 26806, "tags": "c++, coordinate-system, type-safety, casting" }
Is a magnetic monopole really necessary for charge quantization?
Question: The usual remark that goes about in a first encounter of Dirac monopoles is that it solves the problem of electric charge quantization. I have also studied t'Hooft Polyakov monopoles which asymptotically quantize the charge. Basically they prove that $ e g \in k\mathbb Z$ where $k$ is a dimensionless number, $e$ is the electric charge and $g$ is the magnetic charge. First of all, what do we mean by saying that charge should be quantized? Are we trying to say that all electrical charges we find in nature are integer multiples of the electronic charge ($e=1.6\times10^{-19}C$)? In that case, isn't the obvious explanation the fact that everything is made up of electrons and protons? Why take the trouble to invent monopoles to explain this matter of fact? Perhaps we are trying to explain why the electronic charge is that particular number. Surely, the $U(1)$ charge can be any real number. But I do not see how quantization of the electrical charge helps in explaining this number anyway. Instead, it raises more questions. If $U(1)$ charge is quantized, which is to say that several integral multiples of a charge quantum are allowed to exist, where are all the other elementary particles with all the integral multiple charges that are allowed? How many times the charge quantum is the electronic charge? What particle has the minimum allowed charge? If that particle happens to be the electron (and why is that?), and you are going to explain charges of other composites of electrons in terms of the electronic charge, why bother invent the magnetic monopole machinery in the first place? Answer: Yes, "quantization of charge" means that all charges are a multiple of some fundamental charge unit $e$. Of course, everything being made up of electrons and protons with a fixed charge explains quantization. But it doesn't explain why there are only electrons and protons, or why the charge of the proton is a multiple of that of the electron. 
What Dirac quantization explains a priori, i.e. without any further experimental input about the number of charged fundamental particles, is that all charged particles must have a charge that is a multiple of $e$, regardless of whether they are fundamental or composite. This is different from the indeed rather trivial observation that composite particles made of two charged fundamental particles with charges $e^+ = -e^-$ are charged as multiples of $e^+$. That other fundamental particles with other integral multiples of $e$ may exist doesn't mean they must exist. Dirac quantization only says that if other charged particles exist, they must be quantized in terms of the fundamental unit $e$. Note that, since we now know of the existence of quarks, the fundamental charge unit of our universe would be one third of the electron charge. Note also that the electron is not composite, so your explanation for the charges of atoms because they're composites of electron and proton utterly fails to explain why the electron charge is an integral multiple of the quark charge (as it does with the electron and the proton charge, but perhaps two equal fundamental charges could be seen as natural, while a 1:3 relationship certainly demands explanation for why it could not be irrational). As an aside, Dirac quantization is not a useful argument if you already know the gauge group of electromagnetism is $\mathrm{U}(1)$ - the representations of $\mathrm{U}(1)$ are classified by integers, and their charges are integral multiples of the fundamental representation's charge. But, classically, it is impossible to decide whether the gauge group of electromagnetism is $\mathbb{R}$ or $\mathrm{U}(1)$. Dirac's argument in modern language essentially shows that if the Aharonov-Bohm effect exists and if magnetic monopoles exist, then the gauge group of quantum electromagnetism must be $\mathrm{U}(1)$, not $\mathbb{R}$, if it is to be consistent.
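The quantization condition itself can be sketched in a couple of lines (a rough sketch in Gaussian units; the normalization and factors of 2 here are my own choice of conventions, not taken from the answer). A charge $e$ transported around the monopole's Dirac string must pick up an unobservable phase:

```latex
% Total flux of a monopole g through a surrounding sphere is 4\pi g.
% A charge e encircling the Dirac string picks up the phase e\Phi/\hbar c,
% which must be an integer multiple of 2\pi:
\frac{e}{\hbar c}\, 4\pi g = 2\pi n
\quad\Longrightarrow\quad
e\, g = \frac{n\,\hbar c}{2}, \qquad n \in \mathbb{Z}
```

so a single monopole anywhere in the universe forces every electric charge to be an integer multiple of $\hbar c / 2g$.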
{ "domain": "physics.stackexchange", "id": 45024, "tags": "quantum-mechanics, charge, discrete, magnetic-monopoles, dirac-monopole" }
Is it a good practice to pad signal before feature extraction?
Question: Is padding, before feature extraction with VGGish, a good practice? Our padding technique is to find the longest signal (which is a loaded .wav signal) and then, in every shorter signal, append zeros up to the size of the longest one. We need to use it because a single input size is desirable. Perhaps there are other techniques you would recommend? The difference in accuracy between padding before and after the feature extraction is quite big - more than 20%. Using padding before extraction gives 97% accuracy. I'd be glad to read your feedback, have someone explain why that happens, and learn whether this kind of padding is the correct approach or whether there is a better solution. Answer: Padding is a common practice both in image-processing (typically via CNNs) and in sequence-processing tasks (RNNs, Transformers). For CNNs, all the standard convolutional layers - Conv1D, Conv2D and Conv3D - have the padding argument. The padding values can be valid or same for 2d and 3d convolutions. An extra causal type of padding is possible for 1d convolutions, and the documentation refers to this paper: WaveNet: A Generative Model for Raw Audio - which sounds quite close to what you are interested in. These animations might be useful to get a bit more intuition about convolutions and strides/padding. The general consensus is that using same padding is advantageous for model performance - your network gets more information about the borders of your inputs (and deeper networks are possible). For sequential models padding is even more important. Training samples are usually of unequal lengths, so you have to pad them with a special token (usually called [PAD]) that gets encoded as 0. Here are some examples of this mentioned in tensorflow docs, huggingface.transformers or BERT tutorial.
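The padding scheme from the question can be sketched in a few lines of plain Python (the function name and list-of-lists representation are my own; real code would pad numpy arrays of .wav samples, but the logic is identical):

```python
def pad_to_longest(signals, pad_value=0):
    """Right-pad every signal with pad_value so all have equal length."""
    longest = max(len(s) for s in signals)
    return [list(s) + [pad_value] * (longest - len(s)) for s in signals]

signals = [[3, 1, 4], [1, 5], [9]]
padded = pad_to_longest(signals)
assert padded == [[3, 1, 4], [1, 5, 0], [9, 0, 0]]
assert len({len(s) for s in padded}) == 1  # one input size, as desired
```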
{ "domain": "ai.stackexchange", "id": 2792, "tags": "feature-extraction, accuracy, signal-processing, feature-engineering, padding" }
Finding median from unsorted array with duplicate elements without sorting it
Question: I am implementing a method to find the median of an unsorted array using a counting sort. I would happily go for a median of medians or selection algorithm for better performance but they are essentially sorting the array (or partially sorting the array if I choose to go for minHeap) which I am not in favor of. int getRange(int *array, int count) { int i, max = 0; for(i = 0; i < count; i++) { if(array[i] > max) { max = array[i]; } } return max; } int countFreq(int *array, int size_array, int item) { int i, freq = 0; for(i = 0; i < size_array; i++) { if(array[i] == item) freq++; } return freq; } int median(int *array, int count) { int range = getRange(array, count); int i, mid_index, addition = 0; //Yes I can use calloc here int *freq = (int *)malloc(sizeof(int) * range + 1); memset(freq, 0, sizeof(int)* range + 1); for(i = 0; i < range + 1; i++) { //Count i in array and insert at freq[i] freq[i] = countFreq(array, count, i); } if(count % 2 == 0) { mid_index = count / 2; } else { mid_index = count / 2 + 1; } for(i = 0; i < range + 1; i++) { addition += freq[i]; if(addition >= mid_index) { break; } } free(freq); return i; } I followed this answer to implement using C. Certainly, I want to improve upon this or maybe a better algorithm that doesn't sort the array. For me, this algorithm has some problems: What if there are just 2 elements, say {10, 10000}; this will still go on for creating an array of size 10000 which essentially has zeros in it except at the last index. I find it hard to digest the performance of this algorithm with larger arrays to sort, for now, this is O(n³) as far as I can think of. Answer: If we never modify the elements of array, then it should be passed as a pointer to const: int const *array. The frequency counts can be unsigned, and ought to be able to represent any size of input array (that suggests that count should be a size_t). 
We absolutely must test that the return value of malloc() (and family) is not null before trying to dereference it (including passing it to memset()). Additionally, it's not necessary or desirable to cast it to the target type: size_t *freq = calloc(sizeof *freq, range + 1); if (!freq) { fputs("Memory allocation failed!\n", stderr); exit(EXIT_FAILURE); } The algorithm has undefined behaviour if any element in the array is less than zero. We need to find the minimum as well as maximum value, or perhaps change the input to be an unsigned type. The counting is strange, with the nested loop. Normally, we'd loop just once, incrementing the index for each element we look at - something like this: for (int i = 0; i < count; ++i) { ++freq[array[i]]; } To avoid excessive temporary memory use for the count array, we could use a multi-pass approach. Divide the range [min,max] into into (say) 256 buckets. Count the inputs into those buckets. Identify the median bucket. Call that M. Now divide the range represented by M into buckets. Make another pass over the inputs, counting values within M into these new buckets (discarding values not in M). Repeat until the bucket size is 1.
{ "domain": "codereview.stackexchange", "id": 34323, "tags": "algorithm, c, median" }
Given a network flow, are there bounds on the change in weight on nodes?
Question: Here's my precise situation: I have a graph with nodes $V$ and edges $E$, and the nodes have some non-negative integer weights $w_i$. In one step of the protocol, I am now allowed to move weight around among nodes. This is expressed through a flow $f$ defined on the edges: $f(i,j)$ tells me how much weight I transfer from $i$ to $j$. The flow cannot create new weight. The flow must be integer. $f(i,j)$ is allowed to be larger than $w_i$, but after the entire flow has been applied all the $w_i$ must be non-negative again. Let $\Delta_i$ be the total change in the weight on node $i$, with the sign convention such that $\Delta_i$ is positive if more tasks leave node $i$ than arrive at $i$. An upper bound on $\Delta_i$ is $w_i$, by virtue of the third condition on the flow. My question now is: Is there also a lower bound? A naive lower bound for each $\Delta_i$ would be given by the sum of the $\Delta_j$ for all neighbors of $i$, but I wonder if some network- and graph-theory can find better bounds? If a good lower bound on the $\Delta_i$ is not possible, maybe there is a good *upper* bound on the quantity $$\sum_{i \in V} \Delta_i^2$$? Answer: I think you're using a confusing sign convention, but I'll stick with it. It's pretty easy to see that for any connected graph you can have all weight flowing into a single vertex (unless I'm misunderstanding something), so the lower bound you'll get is $$\Delta_i\geq -\sum_{i\in V}w_i.$$ Things won't really be better for bounding the square. For example, if your graph is a star in which every leaf has $w_i=1$, then $$\sum_{i\in V}\Delta_i^2 = (|V|-1)+(|V|-1)^2.$$ You can't get a neighbourhood restriction, either, because you can take the example of a $k$-ary tree in which every leaf has weight 1 and all the weight goes to the root. 
What you can get, however, is the possibly useful bound $$\sum_{i\in V}{|\Delta_i|} \leq 2\sum_{i\in V}w_i.$$ To see this, just note that when charge leaves a vertex you'll count it twice.
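The star example and the final bound are easy to sanity-check numerically (a small sketch with my own helper names; the sign convention is the question's, so $\Delta_i > 0$ when weight leaves node $i$):

```python
# Star with |V|-1 leaves, each leaf of weight 1, all weight flowing to
# the centre: each leaf has Delta = 1, the centre has Delta = -(|V|-1).
def star_deltas(n_vertices):
    leaves = n_vertices - 1
    return [-leaves] + [1] * leaves

V = 10
deltas = star_deltas(V)
weights = [0] + [1] * (V - 1)  # centre starts empty, leaves hold 1 each

# sum of squares = (|V|-1) + (|V|-1)^2, as in the answer
assert sum(d * d for d in deltas) == (V - 1) + (V - 1) ** 2
# and sum |Delta_i| <= 2 * sum w_i (here with equality: every unit of
# weight is counted once leaving a leaf and once arriving at the centre)
assert sum(abs(d) for d in deltas) <= 2 * sum(weights)
```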
{ "domain": "cstheory.stackexchange", "id": 778, "tags": "ds.algorithms, graph-theory, application-of-theory, network-modeling" }
Free boundary conditions
Question: I am trying to simulate liquid film evaporation with free boundary conditions (in cartesian coordinates) and my boundary conditions are thus: $$ \frac{\partial h}{\partial x} = 0, \qquad (1) $$ $$ \frac{\partial^2 h}{\partial x^2} = 0, \qquad (2) $$ $$ \frac{\partial^3 h}{\partial x^3}=0. \qquad (3) $$ However, I need only two of the above three conditions to satisfy my 4th order non-linear partial differential equation for film thickness, which looks something like. $$ \frac{\partial h}{\partial t} + h^3\frac{\partial^3 h}{\partial x^3} + ... = 0 $$ My question is: what does a combination of 1st and 2nd derivative conditions mean and what does a combination of 2nd and 3rd derivatives mean? If I apply (1) and (2), does it mean that slope and curvature are zero and if I apply (1) and (3), does it mean that slope and shear stress are zero (from analogies of bending beams etc.) Answer: Here is the answer that I gathered from months of looking at these boundary conditions: (1) and (2) would mean that the slope is zero and the bending moment / curvature at the ends is zero. (1) and (3) mean that the slope is zero and the shear stress at the end is zero.
{ "domain": "physics.stackexchange", "id": 5055, "tags": "mathematical-physics, boundary-conditions, navier-stokes" }
Why does this barbell use two split barrel washers?
Question: Why does this barbell use two half washers? For example: see this link Navigate to the “Taking Apart A Barbell Sleeve” section. They use a washer, two half washers, a shim washer, another washer and then retaining rings. The split washer is not the traditional split washer that acts as a positive locking mechanism. It looks like this. My question is this: why did the designers use this type of washer? I do not see the advantage over a normal washer. Answer: I'm looking at a video and I don't have a completely clear view: https://youtu.be/zUFIJQqsoQE?t=90 But from what I can see, I think those half-washers are primary load bearing components that hold everything on the shaft in the axial direction, not the retaining rings. That would make sense since retaining rings aren't very strong. Retaining rings are meant to be springy pieces of metal that can be removed which means they are limited in thickness. From the glimpses I can see, there is a circular groove cut into the central shaft and the half washers sit into that groove from opposite sides. Sliding the sleeve over them keeps them in place by preventing motion in the radial direction. The sleeve must have a lip or shelf at the bottom of the hole inside it that hits the half washers if anything tries to drag the sleeve off the barbell. Obviously, a full washer that must slide on from the end can't sit snugly in a groove cut into the shaft. That means they're the most important piece out of everything you listed...I think? But then what stops the sleeve from sliding too deep onto the barbell? It seems like it would need to be the retaining rings but it doesn't seem like they would be strong enough, and obviously the sleeve itself slides back by design when you remove the retaining rings so you can access the half washers.
{ "domain": "engineering.stackexchange", "id": 5014, "tags": "mechanical-engineering" }
Is there a common name to refer to the groups 13 and 14?
Question: The group 15, 16, and 17 are called the pnictogens, chalcogens, and halogens respectively. Is there a name for the groups 13 and 14 as well? Answer: It seems that the carbon group is known as either the tetrels or the crystallogens; while the boron group is known as triels or icosagens. These names are very rarely used. Source: Wikipedia articles on Carbon family (also, crystallogen) and Boron family. Although other (seemingly) respectable sites use them.
{ "domain": "chemistry.stackexchange", "id": 3020, "tags": "periodic-table, terminology" }
Breaking Bad name generator
Question: I've created a script to print all periodic table symbolic permutations of a string. As seen in the opening credits of each episode of Breaking Bad: © 2010-2022 AMC Networks Entertainment LLC. Aaron Paul contains the symbol for Argon (Ar) which is highlighted green, and overwrites the casing on the original (ar). I have made this just for fun, and given the script the following restrictions/scope (which are inherited from the TV show): Text replaced by an element's symbol must: Be green Have the casing of the element as seen in the periodic table (H,Ar) There is only one replacement per permutation A[Ar]on and Aaro[N] are separate. I.e., don't have any more than 1 symbol in the string at a time Case is ignored for matches (h & H should both be replaced with a green H) And an extra that's not in the show: Print the element's name afterwards Example output: A[Ar]on - Argon This is what I have created on a basic level (no user input or text entering loops). import colorama as col import re data = {'H': 'Hydrogen', 'He': 'Helium', 'Li': 'Lithium', 'Be': 'Beryllium', 'B': 'Boron', 'C': 'Carbon', 'N': 'Nitrogen', 'O': 'Oxygen', 'F': 'Fluorine', 'Ne': 'Neon', 'Na': 'Sodium', 'Mg': 'Magnesium', 'Al': 'Aluminium', 'Si': 'Silicon', 'P': 'Phosphorus', 'S': 'Sulphur', 'Cl': 'Chlorine', 'Ar': 'Argon', 'K': 'Potassium', 'Ca': 'Calcium', 'Sc': 'Scandium', 'Ti': 'Titanium', 'V': 'Vanadium', 'Cr': 'Chromium', 'Mn': 'Manganese', 'Fe': 'Iron', 'Co': 'Cobalt', 'Ni': 'Nickel', 'Cu': 'Copper', 'Zn': 'Zinc', 'Ga': 'Gallium', 'Ge': 'Germanium', 'As': 'Arsenic', 'Se': 'Selenium', 'Br': 'Bromine', 'Kr': 'Krypton', 'Rb': 'Rubidium', 'Sr': 'Strontium', 'Y': 'Yttrium', 'Zr': 'Zirconium', 'Nb': 'Niobium', 'Mo': 'Molybdenum', 'Tc': 'Technetium', 'Ru': 'Ruthenium', 'Rh': 'Rhodium', 'Pd': 'Palladium', 'Ag': 'Silver', 'Cd': 'Cadmium', 'In': 'Indium', 'Sn': 'Tin', 'Sb': 'Antimony', 'Te': 'Tellurium', 'I': 'Iodine', 'Xe': 'Xenon', 'Cs': 'Caesium', 'Ba': 'Barium', 'La': 'Lanthanum', 'Ce': 
'Cerium', 'Pr': 'Praseodymium', 'Nd': 'Neodymium', 'Pm': 'Promethium', 'Sm': 'Samarium', 'Eu': 'Europium', 'Gd': 'Gadolinium', 'Tb': 'Terbium', 'Dy': 'Dysprosium', 'Ho': 'Holmium', 'Er': 'Erbium', 'Tm': 'Thulium', 'Yb': 'Ytterbium', 'Lu': 'Lutetium', 'Hf': 'Hafnium', 'Ta': 'Tantalum', 'W': 'Tungsten', 'Re': 'Rhenium', 'Os': 'Osmium', 'Ir': 'Iridium', 'Pt': 'Platinum', 'Au': 'Gold', 'Hg': 'Mercury', 'Tl': 'Thallium', 'Pb': 'Lead', 'Bi': 'Bismuth', 'Po': 'Polonium', 'At': 'Astatine', 'Rn': 'Radon', 'Fr': 'Francium', 'Ra': 'Radium', 'Ac': 'Actinium', 'Th': 'Thorium', 'Pa': 'Protactinium', 'U': 'Uranium', 'Np': 'Neptunium', 'Pu': 'Plutonium', 'Am': 'Americium', 'Cm': 'Curium', 'Bk': 'Berkelium', 'Cf': 'Californium', 'Es': 'Einsteinium', 'Fm': 'Fermium', 'Md': 'Mendelevium', 'No': 'Nobelium', 'Lr': 'Lawrencium', 'Rf': 'Rutherfordium', 'Db': 'Dubnium', 'Sg': 'Seaborgium', 'Bh': 'Bohrium', 'Hs': 'Hassium', 'Mt': 'Meitnerium', 'Ds': 'Darmstadtium', 'Rg': 'Roentgenium', 'Cn': 'Copernicium', 'Nh': 'Nihonium', 'Fl': 'Flerovium', 'Mc': 'Moscovium', 'Lv': 'Livermorium', 'Ts': 'Tennessine', 'Og': 'Oganesson'} user_input = "Aaron" for symbol, name in data.items(): for i in [x.start() for x in re.finditer(symbol, user_input, re.IGNORECASE)]: temp = list(user_input) temp[i:i+len(symbol)] = col.Fore.GREEN + symbol + col.Fore.RESET # [V]ince - Vanadium print(f"{''.join(temp)} - {name}") How could I improve on the cleanliness and efficiency of the text substitution? As I'm literally just putting it in a list and replacing with re index matches: temp = list(user_input) temp[i:i+len(symbol)] = col.Fore.GREEN + symbol + col.Fore.RESET And what other general constructive comments can be made about my attempt? Answer: Define a proper constant for the elements. The name data is both vague and improperly formatted for a constant. One option would be something like ELEMENTS. Put your code in functions. Even for small scripts. Parameterize the script. Take the name from the command line. 
Don't use an extra layer of iteration as a workaround for a simple assignment. Drop the innermost iteration and just use this instead: i = x.start(). [And after some further edits, you might not need i at all.] You don't need to listify the name. Python strings are already sequences, so converting from string to list doesn't help in this situation. Just grab the substrings you need and glue everything together. Don't print in the function responsible for core computation. The function to compute the breaking-bad names already has a primary job. Leave the printing to a different part of the program. Among other benefits, this type of discipline aids testing. Optionally, shift some of the calculation out of the innermost loop. Values derived from the symbol can be computed at the level of the outer loop, and making this change has some benefits in terms of readability and helps instill good habits. In a different context -- where performance is a consideration -- you don't want to repeat calculations that can be done once at a higher level. import colorama as col import sys import re ELEMENTS = {...} def main(args): name = args[0] for display, element in breaking_bad_names(name): print(display, element) def breaking_bad_names(name): for symbol, element in ELEMENTS.items(): green_sym = col.Fore.GREEN + symbol + col.Fore.RESET for m in re.finditer(symbol, name, re.IGNORECASE): display = name[0 : m.start()] + green_sym + name[m.end() :] yield (display, element) if __name__ == '__main__': main(sys.argv[1:])
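A dependency-free sketch of the same splice, with plain bracket markers standing in for the colorama colour codes so the behaviour can be checked without a terminal (the three-entry table is a deliberately tiny stand-in for the full ELEMENTS constant):

```python
import re

# Tiny stand-in for the full ELEMENTS constant -- just enough to demo.
ELEMENTS = {"Ar": "Argon", "N": "Nitrogen", "O": "Oxygen"}

def breaking_bad_names(name, elements=ELEMENTS, wrap=("[", "]")):
    """Yield one (display, element_name) pair per symbol match.

    `wrap` replaces the colorama colour codes; string slicing replaces
    the list() round-trip from the original question.
    """
    left, right = wrap
    for symbol, element in elements.items():
        marked = left + symbol + right  # canonical casing, as on the table
        for m in re.finditer(symbol, name, re.IGNORECASE):
            yield name[:m.start()] + marked + name[m.end():], element

results = list(breaking_bad_names("Aaron"))
```

Swapping `wrap` for `(col.Fore.GREEN, col.Fore.RESET)` restores the coloured output without changing anything else.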
{ "domain": "codereview.stackexchange", "id": 43231, "tags": "python, python-3.x, regex" }
Retrieving the lotto numbers - Node.js
Question: I'm looking for feedback as to how I've structured the async calls, and If anything should be written differently. endpoints.js module.exports = { lotto: 'https://www.norsk-tipping.no/api-lotto/getResultInfo.json?drawID=', keno: 'https://www.norsk-tipping.no/api-keno/getResultInfo.json?drawID=', extra: 'https://www.norsk-tipping.no/api-extra/getResultInfo.json?drawID=', vikinglotto: 'https://www.norsk-tipping.no/api-vikinglotto/getResultInfo.json?drawID=', joker: 'https://www.norsk-tipping.no/api-joker/getResultInfo.json?drawID=', eurojackpot: 'https://www.norsk-tipping.no/api-eurojackpot/getResultInfo.json?drawID=' }; service.js var request = require('request'); var gametypes = require('./endpoints'); function toJSON(response){ var data = response.toString(), result = data.match(/(^{[\s\w\W]+}$)/gm).join(''); return JSON.parse(result); } function getResults(opts, callback){ var url = gametypes[opts.type], fromDrawID = opts.fromDrawID, toDrawID = opts.toDrawID, numberOfRequestsWanted = toDrawID - fromDrawID, numberOfRequestsDone = 0, data = []; //If either the object is missing or //the object with no key of 'type' is passed in - return error. 
if(!opts.type){ return callback(new Error('Missing required param: Object with key: type'), null); } if(fromDrawID && toDrawID) { console.log('Fetching the requested results from https://www.norsk-tipping.no...'); while(fromDrawID <= toDrawID) { doRequest(url + fromDrawID++, callback); } } else { console.log('Fetching the latest result from https://www.norsk-tipping.no...'); doRequest(url, callback); } function doRequest(url, callback) { request(url, function(error, response, body) { if(!error && response.statusCode === 200) { data.push(toJSON(body)); if(numberOfRequestsDone++ === numberOfRequestsWanted) { return callback(null, data); } } else { return callback(error, null); } }); } } module.exports = getResults; example usage var lotto = require('./service'); var options = { type: 'vikinglotto', fromDrawID: 1, toDrawID: 500}; lotto(options, function(err, results) { if (err) console.log(err); else { console.log(data); } }); Answer: Suggesting you move to using promises instead of callbacks. This way, you don't have to do the tango with calling callbacks, collecting results and all. You can simply replace the request module with the request-promise module. function getResult(options){ ... return doRequest(options); } ... lotto(options).then(results => console.log(data), err => console.log(err)); There's little merit of putting doRequest inside getResults. It's only being used in getResults but it's not exposed anyways. Suggesting you move it out into the module. JavaScript doesn't have default argument values (yet) but you can easily do the same thing using Object.assign. Provide a default object, and merge to a new object the defaults and the one from the arguments. This way, you avoid having to do a lot of default logic and value checks. var defaults = {...} function getResults(options){ var mergedOptions = Object.assign({}, defaults, options); ... 
} Back to promises: firing multiple async calls in parallel using callbacks will require you to collect results and keep checking if all the requests have responded. When using Promises, you can easily use Promise.all to listen to an array of promises. var request = require('request-promise'); function getResult(options){ ... var drawRange = toDrawID - fromDrawID + 1; // + 1 because the range is inclusive, matching the original while loop // Create an array of drawIds and map them to promises var promises = Array(drawRange).fill(fromDrawID).map((fromDrawID, i) => { return request(url + (fromDrawID + i)); }); return Promise.all(promises); }
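The pattern can be exercised without any network access by stubbing the request function; note the + 1 needed to cover the fromDrawID..toDrawID range inclusively, as the original while loop did. The stub and the URL below are placeholders, not the real norsk-tipping endpoint:

```javascript
// Stub standing in for request-promise: resolves to the URL it was given.
const request = (url) => Promise.resolve(url);

// Build one URL per draw in the inclusive range [fromDrawID, toDrawID].
function buildUrls(baseUrl, fromDrawID, toDrawID) {
    const drawRange = toDrawID - fromDrawID + 1; // + 1: the range is inclusive
    return Array(drawRange)
        .fill(fromDrawID)
        .map((start, i) => baseUrl + (start + i));
}

function getResults(options) {
    // Object.assign gives us default option values, as suggested above.
    const opts = Object.assign({ fromDrawID: 1, toDrawID: 1 }, options);
    const urls = buildUrls("https://example.test/api?drawID=", opts.fromDrawID, opts.toDrawID);
    return Promise.all(urls.map(request));
}

const urls = buildUrls("https://example.test/api?drawID=", 3, 5);
```

Replacing the stub with the real request-promise module restores the networked behaviour unchanged.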
{ "domain": "codereview.stackexchange", "id": 20688, "tags": "javascript, node.js, asynchronous, https" }
When is Zaitsev product formed in pyrolytic elimination reaction?
Question: I recently came across the problem of choosing which product is formed in pyrolytic elimination reactions. I was taught that usually Hofmann products are formed in such reactions. E.g.: In this reaction, I sort of got the chills in my spine to make the Zaitsev product because of the aromaticity obtained in the product, but of course I obeyed the Hofmann rule that was taught to me and made the open-ring structure with the five-membered nitrogen ring connected at the bottom. The answer is of course what I have illustrated, but I need to understand how we decide this. Another example is: Can someone please explain how I should go about this, i.e., how do I know when it is Hofmann and when it is Zaitsev? Answer: Cope elimination is a pericyclic reaction that always prefers syn-elimination. Because hydroxylamine is a poor leaving group, these reactions often tend to prefer Hofmann products. However, the reason for the preference towards Hofmann products is the stability of the carbanion formed: fewer $\ce{\alpha-H}$ means less destabilization by the +H (hyperconjugative) effect. (Since the leaving group is poor, the acidic H departs before the leaving group does, inducing negative charge at the $\ce{\beta-C}$.) The carbanion's stability is enhanced by the resonance effect in your first and last examples, and we know that the resonance effect is often more stabilizing than hyperconjugation, hence the preference for the Zaitsev product. The resonance effect almost always gets preference, even in reactions that usually give the Hofmann product. (Since the leaving group can always be bulkier, there are edge cases.) Regarding your 2nd and 3rd examples: they are probably giving Zaitsev products because they are outside the temperature range for the Hofmann product in Cope elimination. Where the resonance effect is not in play, the choice between Hofmann and Zaitsev becomes a balance between the thermodynamic stability of the final product and the kinetic stability of the carbanion transition state.
At very high temperature, all of the above examples will give the Zaitsev product, as the thermodynamically favored product is formed. However, in a certain temperature range these reactions give the Hofmann product, where kinetic stability dominates. Cope elimination produces the Hofmann product near $T \approx 100\ ^\circ\mathrm{C}$, pyrolysis of xanthates produces Hofmann near $200\ ^\circ\mathrm{C}$, and esters give pyrolytic syn elimination near $500\ ^\circ\mathrm{C}$.* The conclusion is that unless the temperature is given, it is not exactly clear that you need to produce Hofmann, which is also stated in Wikipedia: There are many factors that affect the product composition of Ei reactions, but typically they follow Hofmann's rule and lose a β-hydrogen from the least substituted position, giving the alkene that is less substituted (the opposite of Zaitsev's rule).1 Some factors affecting product composition include steric effects, conjugation, and stability of the forming alkene. The example of the five-membered T.S. in Wikipedia also supports Zaitsev products. Basically, the selectivity is not very good, and for a correct answer we can only refer to the data given to us. This is why the temperature is important to mention. *Source: Peter Sykes, 6th edition, section 9.9 (Pyrolytic syn elimination)
{ "domain": "chemistry.stackexchange", "id": 15762, "tags": "organic-chemistry, elimination" }
In counting degrees of freedom of a linear molecule, why is rotation about the axis not counted?
Question: I was reading about the equipartition theorem and I got the following quotations from my books: A diatomic molecule like oxygen can rotate about two different axes. But rotation about the axis down the length of the molecule doesn't count. - Daniel V. Schroeder's Thermal Physics. A diatomic molecule can rotate like a top only about axes perpendicular to the line connecting the atoms but not about that line itself. - Resnick, Halliday, and Walker's Fundamentals of Physics. Why is it so? Doesn't the rotation take place that way? Answer: The rotational energy levels of a diatomic molecule are $E = BJ(J+1)$, i.e. $2B, 6B, 12B$ and so on for $J = 1, 2, 3, \ldots$, where $B$ is: $$ B = \frac{\hbar^2}{2I} $$ Most of the mass of the molecule is in the nuclei, so when calculating the moment of inertia $I$ we can ignore the electrons and just use the nuclei. But the size of the nuclei is around $10^{-5}$ times the bond length. This means the moment of inertia around an axis along the bond is going to be about $10^{10}$ times smaller than the moment of inertia around an axis normal to the bond. Therefore the energy level spacings will be around $10^{10}$ times bigger along the bond than normal to it. In principle we can still excite rotations about the axis along the bond, but you'd need huge energies to do it.
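The size of the effect can be sanity-checked numerically. The values below are rough generic magnitudes (not data for any particular molecule), treating each nucleus as a uniform sphere for the spin about the bond axis:

```python
hbar = 1.054571817e-34   # J*s

m = 2.7e-26              # kg, roughly the mass of one oxygen nucleus
bond = 1.2e-10           # m, a typical diatomic bond length
r_nuc = 3e-15            # m, a typical nuclear radius

# Two point nuclei rotating about an axis perpendicular to the bond:
I_perp = 2 * m * (bond / 2) ** 2

# About the bond axis, only the nuclei's own size contributes
# (uniform sphere: I = (2/5) m r^2 per nucleus):
I_axis = 2 * (2 / 5) * m * r_nuc ** 2

B_perp = hbar ** 2 / (2 * I_perp)
B_axis = hbar ** 2 / (2 * I_axis)

ratio = B_axis / B_perp   # energy spacing along the bond vs normal to it
```

With these round numbers the ratio comes out near 10^9–10^10, in line with the estimate in the answer.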
{ "domain": "physics.stackexchange", "id": 20267, "tags": "thermodynamics, rotation, molecules, vibrations, degrees-of-freedom" }
Date validation with html5 dates
Question: This code validates the following: enddate should not be less than or equal to startdate. startdate should not be less than or equal to the date today unless document.getElementById("ltype").value == "1". The maximum gap between startdate and enddate is one year, more than that should cause an error alert. Please review my code. I think I covered all test cases, but I am extremely new to JavaScript, so this code might have bugs. And is it fine to compare dates as strings? I did this because as far as I know HTML5 date is a string. // creates a date object then converts it to a string with yyyy-mm-dd format function dateFormat(date, addyear) { // addyear is boolean, true means add another year to the date var dd = date.getDate(); var mm = date.getMonth()+1; var yyyy = date.getFullYear(); if (addyear) { yyyy++; } if(dd<10) { dd='0'+dd; } if(mm<10) { mm='0'+mm; } return yyyy+'-'+mm+'-'+dd; } function date_compare() { var today = dateFormat(new Date(),false); // from and to are html5 date fields var d1=document.getElementById("from").value; var d2=document.getElementById("to").value; var startdate = dateFormat(new Date(d1),false); var enddate = dateFormat(new Date(d2),false); var yeardate = dateFormat(new Date(d1),true); if (enddate <= startdate || enddate > yeardate) { alert("Dates out of range"); return false; } if (document.getElementById("ltype").value != "1" && startdate <= today) { alert("Error! date goes backwards!"); return false; } return true } Answer: Instead of giving your variables names like dd/mm/yyyy, you should give them names like day, month, year. Other names like d1 or d2. Should have better names. You need some space between operators. For example, you have this condition in your code: if(dd<10). Stuff like this can be expanded to if(dd < 10). Variable definitions should also not look like this: var x=...;, but rather, var x = ...;. 
You should add some more comments, e.g., describe what the functions/code blocks do, and how the processes behind them work. Finally, just a small nitpicky thing: both of your functions should be in camelCase, but one of them is in underscore_case.
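On the string-comparison question: ISO yyyy-mm-dd strings (what an HTML5 date input yields) do compare correctly as plain strings, so only the "one year later" bound needs real Date arithmetic. A sketch of the checks extracted into a testable function — the names and the allowPastStart flag standing in for the ltype check are illustrative:

```javascript
// "yyyy-mm-dd" one year after the given ISO date (UTC avoids DST edge cases).
function addOneYear(isoDate) {
    const d = new Date(isoDate + "T00:00:00Z");
    d.setUTCFullYear(d.getUTCFullYear() + 1);
    return d.toISOString().slice(0, 10);
}

// Returns an error message, or null when the range is valid.
function validateRange(startDate, endDate, today, allowPastStart) {
    if (endDate <= startDate || endDate > addOneYear(startDate)) {
        return "Dates out of range";
    }
    if (!allowPastStart && startDate <= today) {
        return "Error! date goes backwards!";
    }
    return null;
}

const err = validateRange("2024-01-10", "2024-06-01", "2024-01-01", false);
```

Keeping the alerts out of this function (as with the printing advice above) makes the logic unit-testable.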
{ "domain": "codereview.stackexchange", "id": 14273, "tags": "javascript, beginner, strings, datetime, html5" }
pi_tracker/Skeleton.msg message interpretation
Question: Hello all, I am using ROS Electric and the pi_tracker package for skeletal tracking. The launch file "skeleton.launch" publishes the "/skeleton" topic, which is a "pi_tracker/Skeleton.msg" message. Using "rosbag record", I recorded the "/skeleton" topic. The message definition is: Header header int32 user_id string[] name float32[] confidence geometry_msgs/Vector3[] position geometry_msgs/Quaternion[] orientation And dumping the /skeleton topic to a txt file, the output is like: ------------------------------------------ header: seq: 1 stamp: secs: 1363464929 nsecs: 445623850 frame_id: openni_depth_frame user_id: 1 name: ['head', 'neck', 'torso', 'left_shoulder', 'left_elbow', 'left_hand', 'right_shoulder', 'right_elbow', 'right_hand', 'left_hip', 'left_knee', 'left_foot', 'right_hip', 'right_knee', 'right_foot'] confidence: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0] position: - x: -0.377875793457 y: 0.0851003723145 z: 2.09605566406 - SIMILAR FOR OTHER 14 JOINTS orientation: - x: 0.213657463007 y: 0.188112339353 z: 0.0245842250896 w: 0.958310941594 - SIMILAR FOR OTHER 14 JOINTS ------------------------------------------ Now, my question is: if I want to know the position of the joint 'head', how can I get it? That is, what is the relation between the 'position' and 'orientation' fields? What is the unit of (x, y, z)? And if I only want the (x, y) coordinates in the image frame, how can I convert to 2-D (x, y) coordinates? Thanks in advance. Originally posted by Tariq on ROS Answers with karma: 3 on 2013-03-24 Post score: 0 Answer: Hi, The position of the joint head is the "position" field. The orientation gives the orientation of the coordinate frame associated with the point. It means that if, for example, I turn my head to the right, the orientation will change but the position will remain the same. In order to take only the x, y coordinates, you have to project the 3D data into 2D.
You can find the projection matrix of the Kinect on the internet; see this topic for more information... EDIT: this link is also very useful for understanding. Have a good day, Bests, Stéphane Originally posted by Stephane.M with karma: 1304 on 2013-03-24 This answer was ACCEPTED on the original site Post score: 2
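The projection step can be sketched with a plain pinhole model. The intrinsics below are often-quoted defaults for the Kinect depth camera, and the sketch ignores the axis shuffle between openni_depth_frame and the optical frame — calibrate your own device and apply the proper frame transform for real use:

```python
fx, fy = 525.0, 525.0    # focal lengths in pixels (assumed defaults)
cx, cy = 319.5, 239.5    # principal point for a 640x480 image (assumed)

def project(x, y, z):
    """Map a joint position in the camera frame (metres) to pixels (u, v)."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Head joint from the recorded /skeleton message above:
u, v = project(-0.377875793457, 0.0851003723145, 2.09605566406)
```

The z value (about 2.1 here) is the depth in metres, which also answers the unit question.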
{ "domain": "robotics.stackexchange", "id": 13511, "tags": "ros, pi-tracker, quaternion, skeleton, skeletal-tracker" }
Database accessor functions
Question: I've been away from PHP for at least 5 years and I'm just starting to look into it again. I have a set of functions I created and used for database access and I'm wondering if they are still good to use. <?php // // A group of database function to hide any errors that may occur and // allow for some form of fallback. If dbConnect() fails for any reason // an error is displayed on the page, all other functions return empty // values. For example dbSQL() will return an empty recordset. Pages will // display the error once but still function to some degree. // $dbConnection = false; if (empty($dbType)) { $dbType = "MYSQL"; } // INPUTS: $type -- The type of database that is going to be used // MYSQL, MSSQL // OUTPUTS: None // // EXAMPLE: dbSetType("MYSQL"); function dbSetType($type) { global $dbType; $dbType=$type; } // INPUTS: NONE // OUTPUTS: true if the database API for PHP is installed false if no // // EXAMPLE: if(!dbOK()) { print "Error"; } function dbOK() { global $dbType; if($dbType == "MYSQL") { if(function_exists('mysql_connect')) { return true; } return false; } elseif ($dbType == "MSSQL") { if(function_exists('mssql_connect')) { return true; } return false; } return false; } // INPUTS: $server ---- Server name, "localhost" if same server as web server // $database -- The database name to use // $username -- Username to connect to $server with // $password -- Password of the user // OUTPUTS: None // // EXAMPLE: dbConnect("localhost", "test", "root", ""); function dbConnect ($server, $database, $username, $password) { global $dbType; global $dbConnection; if($dbType == "MYSQL") { if (dbOK()) { $dbConnection = mysql_connect ($server, $username, $password); if (!$dbConnection) { print "<h1>Can not connect to ".$server." with user ".$username."</h1>"; } else { $db_select = mysql_select_db ($database); if (!$db_select) { print "<h1>Database ".$database." 
does not exist</h1>"; } } } else { print "<h1>mySQL module is not installed</h1>"; } } elseif ($dbType == "MSSQL") { if (dbOK()) { $dbConnection = mssql_connect ($server, $username, $password); if (!$dbConnection) { print "<h1>Can not connect to ".$server." with user ".$username."</h1>"; } else { $db_select = mssql_select_db ($database); if (!$db_select) { print "<h1>Database ".$database." does not exist</h1>"; } } } else { print "<h1>MSSQL module is not installed</h1>"; } } } // Internal function should never be called outside of the dbSQL() function. // This function returns the Nth parameter passed into dbSQL during the // replacement of the $1, $2, $3, etc. in the $sql string. function dbSQL__callback($at) { global $dbSQL__parameters; return $dbSQL__parameters[$at[1]-1]; } // INPUTS: $sql --------- A SQL statement with $1, $2, etc. to be replaced by cleaned data // $parameters -- An array of unclean data to be inserted into the SQL // OUTPUTS: A recordset resulting from the SQL statement if appropriate, false on // error // // EXAMPLE: dbSQL("SELECT * FROM t WHERE col1=$1 AND col2=$2", array(1, "hi")); function dbSQL($sql, $parameters = array(), $debug = false) { global $dbType; global $dbConnection; global $dbSQL__parameters; if (dbOK()) { if ($dbConnection) { foreach ($parameters as $k=>$v) { $v = trim($v); if (is_int($v)) { $parameters[$k] = $v; } else { if (is_null($v)) { $parameters[$k] = "'BLANK'"; } else { if (get_magic_quotes_gpc()) { $v = stripslashes($v); } if ($dbType == "MYSQL") { $parameters[$k] = "'".mysql_real_escape_string($v)."'"; } elseif ($dbType == "MSSQL") { $parameters[$k] = "'".mssql_escape_string($v)."'"; } } } } $dbSQL__parameters = $parameters; $safeSQL = preg_replace_callback('/\$([0-9]+)/', 'dbSQL__callback', $sql); if ($debug == true) { print "<p>SQL: ".$safeSQL."</p><br />"; } if ($dbType == "MYSQL") { $ret = mysql_query($safeSQL, $dbConnection) or die(mysql_error()); } elseif ($dbType == "MSSQL") { $ret = mssql_query($safeSQL, $dbConnection) or
die(mssql_get_last_message()); } return $ret; } } return false; } // INPUTS: $recordset -- A recordset as returned by dbSQL() // OUTPUTS: Number of rows in the recordset // // EXAMPLE: $rows = dbRecordTotalRows($rs); function dbRecordTotalRows($recordset) { global $dbType; global $dbConnection; if (dbOK()) { if ($dbConnection) { if ($dbType == "MYSQL") { return mysql_num_rows($recordset); } elseif($dbType == "MSSQL") { return mssql_num_rows($recordset); } } } return 0; } // INPUTS: $recordset -- A recordset as returned by dbSQL() // OUTPUTS: None // // EXAMPLE: dbRecordNextRow($rs); function dbRecordNextRow($recordset) { global $dbConnection; if (dbOK()) { if ($dbConnection) { $recordset->MoveNext(); } } } // INPUTS: $recordset -- A recordset as returned by dbSQL() // OUTPUTS: Array of key=value pair for the current row, false if // past last row of recordset // // EXAMPLE: $row = dbRecordGetRow($rs); function dbRecordGetRow($recordset) { global $dbType; global $dbConnection; if (dbOK()) { if ($dbConnection) { if ($dbType == "MYSQL") { $row = mysql_fetch_array($recordset); } elseif ($dbType == "MSSQL") { $row = mssql_fetch_array($recordset); } return $row; } } return null; } // INPUTS: $row -------- A row as returned by dbRecordGetRow() // $fieldname -- The name of the field whos value is returned. 
// OUTPUTS: Value in the requested field // // EXAMPLE: $value = dbRowGetField($row, "id"); function dbRowGetField($row, $fieldname) { if (dbOK()) { return stripslashes($row[$fieldname]); } return null; } function dbGetLastInsertID() { global $dbType; if ($dbType == "MSSQL") { $sql="select SCOPE_IDENTITY() AS last_insert_id"; $parms = Array(); $ret = dbSQL($sql, $parms); $row = dbRecordGetRow($ret); return dbRowGetField($row, "last_insert_id"); } return -1; } // INPUTS: None // OUTPUTS: None // // EXAMPLE: dbDisconnect(); function dbDisconnect() { global $dbType; global $dbConnection; if (dbOK()) { if ($dbConnection) { if ($dbType == "MYSQL") { mysql_close($dbConnection); } elseif ($dbType == "MSSQL") { mssql_close($dbConnection); } $dbConnection = false; } } } // INPUTS: $string_to_escape -- This is the unsafe string to pass to the mssql database // OUTPUTS: A safe string that is ok to pass to a mssql database // // EXAMPLE: mssql_escape_string("Not 's'a'f'e' String"); function mssql_escape_string($string_to_escape) { $replaced_string = str_replace("'","''",$string_to_escape); $replaced_string = str_replace("%","[%]",$replaced_string); $replaced_string = str_replace("_","[_]",$replaced_string); return $replaced_string; } /* End Of File */ Answer: Some general OOP advice: use the factory pattern and move MySQL-related code to a MySqlDatabase class and MSSQL-related code to an MsSqlDatabase class. In this way you'll have two separate classes (one for MySQL and one for MSSQL) instead of the if-elseif statements in (almost) every method. You will also need a common interface which both classes implement. interface Database { public function dbOK(); public function dbRecordTotalRows($recordset); public function dbDisconnect(); ... } You could put your common methods (usually the ones which don't contain the if-elseif statements) in a common abstract base class: abstract class AbstractDatabase implements Database { public function dbGetLastInsertID(); ...
} Then the two concrete implementations: class MsSqlDatabase extends AbstractDatabase { ... } class MySqlDatabase extends AbstractDatabase { ... } And finally the factory method: function createDatabase($type) { if ($type == "MSSQL") { return new MsSqlDatabase(); } else if ($type == "MYSQL") { return new MySqlDatabase(); } else { throw new Exception('Invalid type: ' . $type); } } If you use this pattern, you can easily create new implementations (you only need, for example, a new PostgreSqlDatabase class and a new else if statement in the factory method) and you get rid of a lot of error-prone if-elseif statements. One small thing: function dbOK() { if($dbType == "MYSQL") { ... } elseif ($dbType == "MSSQL") { ... } return false; } In similar cases, instead of the last line (return false), log the error and throw an exception to the user, or call die(), since it's an internal error.
{ "domain": "codereview.stackexchange", "id": 872, "tags": "php, mysql" }
Test file word cloud in F#
Question: This code is meant to take in a command line argument and output a 'tag cloud'. It's more of an exercise in learning F# for me because this is my first non-tutorial code file. How could this code be improved? open System.IO open System [<EntryPoint>] let main args = let directory = args.[0] let fileNames = Directory.EnumerateFiles(directory, "*Tests.cs", SearchOption.AllDirectories) |> Seq.toList let allLines = List.collect(fun (x:string) -> System.IO.File.ReadLines(x) |> Seq.toList) fileNames let allWords = List.collect(fun (x:string) -> x.Split([|' ';'_';';';':';'(';')';'\\';'/';'>';'<';'{';'}';'0';'1';'2';'3';'4';'5';'6';'7';'8';'9';'.'|], StringSplitOptions.RemoveEmptyEntries) |> Seq.toList) allLines //printfn "%A" allWords let countOccurance (word:string) list = let count = List.filter (fun x -> word.Equals(x)) list (word, count.Length) let distinctWords = allWords |> Seq.distinct |> Seq.toList let print (tup:string*int) = match tup with | (a,b) -> printfn "%A: %A" a b let rec wordCloud distinct (all:string list) (acc:(string*int) list) = match distinct with | [] -> acc | head :: tail -> let accumSoFar = acc @ [(countOccurance head all)] wordCloud tail all accumSoFar let acc = [] let cloud = (wordCloud distinctWords allWords acc) let rec printTup (tupList:(string*int) list) = match tupList with | [] -> 0 | head :: tail -> printfn "%A" head printTup tail printTup cloud 0 Answer: inb4 Scott :-) Use functions more. Composability is not very important in this example, but it's good to get into that habit early on. In that spirit, turn the non-trivial value bindings into functions instead of closing over a previously defined value. Also try turning the non-trivial lambda expressions into functions; that gives them a name and makes the call site more readable. Also, pipe more. 
:-) Especially for the list/sequence functions that take a higher order function, it's more readable to pipe the list into them, because it makes it more obvious what you're starting from. Also, in my experience, defining "constant" values and functions first before actually doing any work makes the code much easier to follow, because you don't have to think about what values you already "have" at which point. Oh, and while tail recursion is a very cool thing to have, it's not very useful here and actually makes the code quite a bit less understandable. Did you put the type annotations in on purpose? In most cases, you don't need them because of F#'s powerful type inference. They can sometimes make code more easy to follow, but they might also constrain it where it could actually be more generic, and in general not having to specify types all the time is one of the really nice things about F#. :-) Edit: Reading this again made me think of another important point: The individual function declarations don't belong in the main function; moving them out of there makes the distinction between "parts" and the actual "running" program even clearer. 
With these things taken into consideration, your code could look like this: open System open System.IO // It's a tiny bit more maintainable to keep this list separate, and makes it more readable where it is used let separators = [|' ';'_';';';':';'(';')';'\\';'/';'>';'<';'{';'}';'0';'1';'2';'3';'4';'5';'6';'7';'8';'9';'.'|] let fileNamePattern = "*Tests.cs" // Function with the directory and pattern as a parameter let getFileNames fileNamePattern directory = Directory.EnumerateFiles(directory, fileNamePattern, SearchOption.AllDirectories) |> Seq.toList // Function with the list of filenames as a parameter let getAllLines fileNames = fileNames |> List.collect (System.IO.File.ReadLines >> Seq.toList) // Function with the list of lines as a parameter let getAllWords lines = lines |> List.collect (fun (line : string) -> line.Split(separators, StringSplitOptions.RemoveEmptyEntries) |> Seq.toList) // Reordered parameters for pipeability let countOccurrences allWords word = // The = operator can be used as a function that takes two arguments. let count = allWords |> List.filter ((=) word) |> List.length word, count // Function that takes a sequence of "anything" and returns a list of the distinct values let distinctWords = Seq.distinct >> Seq.toList // F# string formatting is strongly typed and compiler checked; take advantage of that, don't just use %A let printWordCount (word, count) = printfn "%s: %i" word count // Now that everything is "set up", we can "run" with the current data: [<EntryPoint>] let main args = // This binding isn't really necessary, but it's nice to give the value a name let directory = args.[0] // We now have a clear execution path that is very easy to follow let allWords = directory |> getFileNames fileNamePattern |> getAllLines |> getAllWords let distinct = distinctWords allWords distinct |> List.map (countOccurrences allWords) |> List.iter printWordCount 0
{ "domain": "codereview.stackexchange", "id": 13184, "tags": "recursion, f#" }
What's the difference between stabilizing selection and balancing selection?
Question: I came across these terms in Darwin's "Origin of Species" and I wasn't sure what the difference is. Answer: Usual meaning Usually, stabilizing selection is a concept that applies to a phenotypic trait, while balancing selection is a concept that applies to a given locus. Balancing selection can be due either to negative frequency-dependent selection or to overdominance (= heterozygote advantage at a single locus). What Darwin may have meant Because Darwin didn't know about genes, he was necessarily not using the term balancing selection as I am using it. I can think of several more or less related patterns of selection that might fall within the definitions for "stabilizing selection" and/or "balancing selection". Maybe he meant "frequency-dependent selection", "positive selection for an intermediate trait", "varying selection through time (temporally heterogeneous environment)" or "varying selection through space (spatially heterogeneous environment)". Then again, he might have meant "positively correlated traits that undergo opposite selection pressures (or the opposite)", but that would definitely be surprising.
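The single-locus flavour of balancing selection is easy to see numerically. Below is a deterministic one-locus model of overdominance (heterozygote advantage) with illustrative fitness values: the allele settles at the classic equilibrium p* = t/(s + t) instead of being lost or fixed, which is exactly what distinguishes balancing selection at a locus from stabilizing selection on a trait:

```python
def next_freq(p, s, t):
    """One generation of selection with fitnesses
    w_AA = 1 - s, w_Aa = 1, w_aa = 1 - t (heterozygote fittest)."""
    q = 1.0 - p
    w_AA, w_Aa, w_aa = 1.0 - s, 1.0, 1.0 - t
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return p * (p * w_AA + q * w_Aa) / w_bar

s, t = 0.2, 0.3
p = 0.01                    # start the A allele close to loss
for _ in range(500):
    p = next_freq(p, s, t)

equilibrium = t / (s + t)   # classic result: p* = 0.6 here
```

The polymorphism is actively maintained: start p anywhere in (0, 1) and it converges to the same interior equilibrium.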
{ "domain": "biology.stackexchange", "id": 3445, "tags": "natural-selection" }
Notifying view controller of changes to any of five types of models
Question: I have a Swift 3 project with code that looks very generic, but I don't know if I can reduce a method like this one: var pages: [Page] // List of page objects func RefreshList(){ guard let currentIndex = currentIndex else { return } if let listTableViewController: listTableViewController<ModelA> = pages[currentIndex].viewController as? listTableViewController<ModelA> { listTableViewController.didRefreshNodesList() } else if let listTableViewController: listTableViewController<ModelB> = pages[currentIndex].viewController as? listTableViewController<ModelB> { listTableViewController.didRefreshNodesList() } else if let listTableViewController: listTableViewController<ModelC> = pages[currentIndex].viewController as? listTableViewController<ModelC> { listTableViewController.didRefreshNodesList() } else if let listTableViewController: listTableViewController<ModelD> = pages[currentIndex].viewController as? listTableViewController<ModelD> { listTableViewController.didRefreshNodesList() } else if let listTableViewController: listTableViewController<ModelE> = pages[currentIndex].viewController as? listTableViewController<ModelE> { listTableViewController.didRefreshNodesList() } } The only thing that changes is the model type (ModelA through ModelE). Is there a way to reduce that? Answer: The type annotation in if let listTableViewController: listTableViewController<ModelA> = pages[currentIndex].viewController as? listTableViewController<ModelA> is not needed because the type is automatically inferred from the expression on the right-hand side: if let listTableViewController = pages[currentIndex].viewController as? listTableViewController<ModelA> According to the Swift API Design Guidelines: Names of types and protocols are UpperCamelCase. Everything else is lowerCamelCase. Therefore it should be func refreshList() and ListTableViewController. To simplify your code, define a protocol for the common methods: protocol RefreshController { func didRefreshNodesList() // ...
} and make ListTableViewController conform to that protocol. Then you can reduce it to a single test if let refreshController = pages[currentIndex].viewController as? RefreshController { refreshController.didRefreshNodesList() }
{ "domain": "codereview.stackexchange", "id": 27652, "tags": "mvc, swift, generics, swift3" }
Delta to Star/Y Conversions and vice versa in Electric Circuits
Question: We all know the basic rules for conversion of "Delta" circuits to "Star" circuits and vice versa. We also know that this is needed for simplification of circuits in complex cases. Can anyone please explain HOW the concept of such conversions came about? To be clearer, can anyone show the derivation of the conversions, for both capacitors and resistors? Answer: The concept is a special case of a more general topological notion of graph-theoretic duality: see the Wikipedia page for Dual Graph. Graph-theoretic duality is "compatible" with the Kirchhoff voltage law (voltages around a loop sum to nought) and charge conservation (currents into a node sum to nought) insofar as nodes in a graph map to loops in a graph-theoretic dual, so that we get a meaningful electric circuit for the dual if we swap the roles of voltage and current - the two laws (Kirchhoff voltage and charge conservation) also swap places. The impedances naturally transform too. So, with graph-theoretic duality and electrical duality combined, we get the procedure written up in the Dual Impedance Wiki Page. The relationships between the star and its topological dual delta are specifically worked through as an example on this page. As in Alfred's comment, which references: http://www.engineersblogsite.com/delta-to-wye-and-wye-to-delta-conversion.html he says the rules work for any impedance, not simply resistances. The topological reasons given above show why.
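For completeness, the resulting conversion formulas can be checked numerically with a round trip. The labelling below (delta branch R_a opposite star resistor R_1, and so on) is one common convention — texts differ — and for capacitors the same algebra applies to the impedances 1/(jωC) rather than to the capacitances directly:

```python
def delta_to_star(Ra, Rb, Rc):
    """Each star resistor is the product of its two adjacent delta
    branches over the sum of all three."""
    s = Ra + Rb + Rc
    return Rb * Rc / s, Ra * Rc / s, Ra * Rb / s

def star_to_delta(R1, R2, R3):
    """Inverse transform: sum of pairwise products over the opposite
    star resistor."""
    n = R1 * R2 + R2 * R3 + R3 * R1
    return n / R1, n / R2, n / R3

# Converting a delta to a star and back should recover the original values.
round_trip = star_to_delta(*delta_to_star(10.0, 20.0, 30.0))
```

Because the two maps are exact inverses, the round trip recovers (10, 20, 30) up to floating-point error.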
{ "domain": "physics.stackexchange", "id": 11286, "tags": "electric-circuits, electrical-resistance, capacitance, linear-systems" }
Using json in msg
Question: Is it good practice to use JSON in ROS 2 messages, i.e., to write a string with a JSON structure into a standard message like std_msgs/String? Originally posted by D0l0RES on ROS Answers with karma: 23 on 2019-02-07 Post score: 1 Answer: Is it good practice to use JSON in ROS 2 messages, i.e., to write a string with a JSON structure into a standard message like std_msgs/String? I would say "no". One of the main points of using standardised messages as we do in ROS is that they allow you to encode both syntax (ie: the exact form of data (layout, sizes of fields, etc)) and the semantics (ie: the meaning (so x is the first element of a vector that has its origin at frame_id)) in a single definition. This allows both consumers and producers of such messages to be very explicit about the information they are communicating, and allows things like decoupling in time (ie: a consumer that receives messages that were produced 2 years ago should still be able to interpret them). Your suggestion essentially comes down to using a field of type string and populating it with data that syntactically fits that field perfectly fine (a JSON string is still a string), but semantically seems like a bad fit: a JSON string is not just a string, it's (typically) actually a stringified (ie: serialised) representation of some higher order data structure (such as a list, map or even worse: arbitrary application-specific classes).
Interpretation of that string field now has two "layers" (in contrast to the 'normal' situation where we use only appropriate msg types): the middleware layer where the sequence of bytes coming in as part of a message is supposed to be a string the application layer where "special knowledge" is required to know that this string is not just a string, but actually requires interpretation again to be able to get the actual message content out of it We could say that "special knowledge" (ie: layer 2) is always required to be able to interpret a message, but the main difference here is that by using properly typed messages the special knowledge is (at least partly) embedded in the contract that exists between the producer and consumer: it's the semantics part of the message definition. That is very powerful, as it greatly reduces coupling between the producer and consumer: very little knowledge of the internals of the producer are imported into the consumer, making things like replacing components and mixing-and-matching components to create applications a lot easier, as neither will assume (too much) about the other. If producer A starts putting JSON (or anything that is not actually really a plain string) into a field, the correct interpretation of that field completely depends on consumer B knowing that field contains JSON (and not just an arbitrary plain string). This couples A and B, as they both must assume their communication channel works that way, or they can't function. To make it more explicit: even though the message definition tells me that consumer B accepts a string, if I send it anything but JSON, B will fail to process the message, even though both syntax and semantics have been adhered to (but note that string is a less-than-ideal type for this sort of communication as well, see below). 
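The "two layers" problem can be sketched without any ROS at all; the field contents below are hypothetical, chosen only to show that the consumer needs out-of-band knowledge to interpret the string:

```python
import json

# Layer 1 (middleware): the consumer receives a field typed simply as "string".
received = '{"temperature": 21.5, "unit": "C"}'

# Layer 2 (application): only a hidden, out-of-band contract tells the consumer
# that this particular string must be parsed again as JSON with these exact keys.
payload = json.loads(received)
print(payload["temperature"])  # 21.5

# A producer that sends a perfectly valid *string* still breaks the consumer:
try:
    json.loads("just a plain string")
except json.JSONDecodeError:
    print("syntactically a valid string, but the hidden contract was violated")
```

With a properly typed message, that second layer of interpretation is part of the message definition itself instead of implicit coupling between producer and consumer.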
Note that a similar argument can be made for all message types in the std_msgs package: a std_msgs/String doesn't convey much semantics, neither does publishing a temperature measurement as a std_msgs/Float64. That is also the reason that direct usage of messages in std_msgs is discouraged: it's always better to use a more semantically meaningful type for your topics, as that will allow consumers to reuse the contract (ie: knowledge) that comes with those message types. Originally posted by gvdhoorn with karma: 86574 on 2019-02-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gvdhoorn on 2019-02-07: Note: I'm not an expert, so this is my opinion, based on some years of ROS usage and my background. Comment by VictorLamoine on 2019-02-07: This question is similar to mine: https://answers.ros.org/question/302648/custom-message-with-variable-fields/. Comment by gvdhoorn on 2019-02-07: @VictorLamoine: I'm not sure actually. Using polymorphism (as you suggest in your question) would maintain syntax and semantics (to a certain degree), while throwing typing out of the window (which using JSON in a string field is akin to imo) does not.
{ "domain": "robotics.stackexchange", "id": 32426, "tags": "ros, ros2, msg" }
How to identify sudden changes in signals?
Question: There are instances when neural networks, after being trained continually on a subset of data, tend to drastically lose their performance. I've attached an image below depicting the same. I am looking for an algorithm to detect this sudden fall in performance. I'm quite new to this area and therefore don't have any idea where to start. My initial guess would be to first smooth the plot, so as to reduce the noise. Edit 1: I highly appreciate the good advice that I have received in the comments and answers. I have added some code that generates live time-series, that'll help better understand my problem. The code is designed to run in Jupyter Notebook cells. import matplotlib.pyplot as plt from IPython.display import clear_output import time import numpy as np def test(): accuracy = [] for _ in range(1000000000): time.sleep(1) accuracy.append(np.random.randint(30)) clear_output(wait=True) plt.figure(figsize=(20,5)) plt.plot(accuracy) plt.show() test() Answer: The change in any signal with respect to another variable is defined by its derivative with respect to that variable. In order to compute the "derivative" when you have a discrete signal (like in your case), assuming the samples are spaced by unity, you should use the finite difference in the following way s_change = s - s_old s_old = s "s" is the current sample; "s_old" is initialized to zero the first time, then updated to contain the value of the previous sample. If "s_change" is huge it means that the signal suddenly changed; if it's positive it means it's a sudden increase, if it's negative it means it's a sudden drop. The bigger its value, the bigger the sudden change. You can average each N samples and also compute the s_change each N averaged samples if you want to filter out some noise. I hope this is what you were looking for.
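A minimal sketch of the finite-difference idea, including the block averaging the answer suggests for noise filtering. The block size and threshold are arbitrary illustrative choices:

```python
def detect_sudden_drops(signal, block=5, threshold=-10.0):
    """Average each block of samples, difference consecutive block means,
    and report the block indices where the change falls below the threshold."""
    means = [sum(signal[i:i + block]) / block
             for i in range(0, len(signal) - block + 1, block)]
    drops = []
    prev = means[0]
    for i, m in enumerate(means[1:], start=1):
        if m - prev <= threshold:   # large negative finite difference
            drops.append(i)
        prev = m
    return drops

# Noisy plateau around 25, then a sudden collapse to around 5.
sig = [25, 26, 24, 25, 27] * 4 + [5, 6, 4, 5, 6] * 4
print(detect_sudden_drops(sig))  # [4]: the drop shows up at the 5th block
```

Averaging over blocks before differencing keeps single noisy samples from triggering false detections, at the cost of coarser localisation of the drop.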
{ "domain": "dsp.stackexchange", "id": 11306, "tags": "signal-analysis, signal-detection, machine-learning" }
What is the compound called when it is in solution?
Question: When a compound precipitates out of solution, for example Calcium Carbonate from a water solution, it is called the precipitate. What is the same compound called before it precipitates out - once it has dissolved? I found the word "electrolyte", but I'm unsure this applies to all solutions - only ionic, and I'm not sure a CaCO3 solution is ionic. Answer: I believe the term you are looking for is solute; a solute is a substance dissolved in another substance, known as a solvent.
{ "domain": "chemistry.stackexchange", "id": 5388, "tags": "aqueous-solution, terminology" }
Why is friction force negative in ice skater problem?
Question: A 68.5 kg skater moving initially at 2.40m/s on rough horizontal ice comes to rest uniformly in 3.52s due to friction from the ice. What force does friction exert on the skater? I am not really asking about the answer here, because I can calculate that, but more of an explanation. We find the acceleration with $\frac{V-V_0}{t}=a$, so the acceleration is: $-0.68$. $F=ma$, so the force is $-46.7$ newtons (N). First of all, why is it negative? Does the guy skating put an equal force on the ice? So the skater exerts a force of 46.7 N on the ice, and the ice exerts a force of -46.7 N back at the skater? Why does this sound like the normal force? I am getting really confused and I simply don't understand it, but I got the math right by just plugging in the numbers. Answer: It's very common to get mixed up about signs. The only recommendation I can give is to establish a clear sign convention and stick carefully to it. To show what I mean let's consider your skater: I'm going to use the convention that positive is to the right and negative is to the left. Remember that quantities like velocity and acceleration are vectors, so they have a direction as well as a magnitude. According to my convention a vector pointing to the right is positive while one pointing to the left is negative. The skater's velocity points to the right. We know the skater is slowing down, so the skater's acceleration points to the left. That means the acceleration must be negative. We know the force on the skater is related to the acceleration of the skater by: $$ \vec{F} = m \vec{a} $$ and since mass is positive that must mean that $\vec{F}$ is negative, just as you concluded.
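The arithmetic in the question can be checked directly, with signs following the right-is-positive convention used in the answer:

```python
m = 68.5           # kg
v0, v = 2.40, 0.0  # m/s, initial and final speed
t = 3.52           # s

a = (v - v0) / t   # uniform (constant) acceleration
F = m * a          # Newton's second law

print(round(a, 3))  # -0.682  (the question's -0.68, less rounded)
print(round(F, 1))  # -46.7   newtons: friction points against the motion
```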
{ "domain": "physics.stackexchange", "id": 23639, "tags": "homework-and-exercises, newtonian-mechanics, forces, friction, conventions" }
Which transformations *aren't* symmetries of a Lagrangian?
Question: As far as I understand, Noether's theorem for fields works, as explained in David Tong's QFT lecture notes (page 14) for example, by saying that a transformation $\phi(x) \mapsto \phi(x) + \delta \phi (x)$ is called a symmetry if it produces a change in the Lagrangian density which can be expressed as a four-divergence, $$\delta \mathcal{L} = \partial_{\mu} F^{\mu}\tag{1.35} $$ for some 4-vector field $F^{\mu}$. We then go on to show that the change in this Lagrangian density may also be expressed for an arbitrary transformation as $$\delta \mathcal{L} = \partial_{\mu}\bigg(\frac{\partial \mathcal{L}}{\partial(\partial_{\mu} \phi)}\delta \phi\bigg)\tag{1.37},$$ which is a 4-divergence. So how could we say any transformation is not a symmetry in the sense above? Answer: The point is that eq. (1.35) should hold off-shell to have a symmetry, while eq. (1.37) may only hold on-shell. [The term on-shell (in this context) means that the Euler-Lagrange equations are satisfied. See also this Phys.SE post.] In other words: On-shell, the action will only change with at most a boundary term for any infinitesimal variation, whether or not it is a symmetry. Phrased differently: By a symmetry is meant an off-shell symmetry. An on-shell symmetry is a vacuous notion.
{ "domain": "physics.stackexchange", "id": 16910, "tags": "lagrangian-formalism, symmetry, field-theory, noethers-theorem" }
What are gravitational waves made of?
Question: The following facts are what I think I know about gravitational waves: Distortion of space-time moving away from a source at light speed. Produced by very powerful events in the universe such as merging black holes. What I still don't know is what they are made of. Are they empty? Answer: A wave is a traveling distortion. This goes for any type of wave. An ocean wave is a distortion of the water surface. A sound wave is a distortion in air pressure. A light wave is a distortion in electromagnetic fields. A wave is made of the thing that is vibrating--ocean waves are made of water, etc. So, a gravitational wave is made of space and time, since gravity is the effect of space and time warping due to nearby masses.
{ "domain": "physics.stackexchange", "id": 29672, "tags": "gravitational-waves" }
Javascript Snake
Question: I have implemented a little snake game with Javascript and Html. This is my first game with Javascript. I use a Linkedlist to represent the snake. Both code snippets are in snake.js file. It would be really nice if someone could give me feedback. LinkedList: function Point(x, y) { this.x=x; this.y=y; this.toString = function toString() { return this.x +" "+this.y; } } function Node(point) { var next =null; this.point=point; this.getX = function getX() { return this.point.x; } this.getY = function getY() { return this.point.y; } this.setX = function setX(x) { this.point.x=x; } this.setY = function setY(y) { this.point.y=y; } this.toString = function toString() { return this.point.x +" "+this.point.y + " "+this.next; } } function LinkedList() { var first = null; var elements = 0; this.getLength = function getLength() { return elements; } this.addFirst = function addFirst(point) { elements++; var newNode = new Node(point); newNode.next = first; first = newNode; } this.addLast = function addLast(point) { elements++; var newNode = new Node(point); var currentNode = first; while(currentNode.next!=null) { currentNode = currentNode.next; } currentNode.next = newNode; newNode = currentNode; } this.getX = function getX(index) { var currentNode = first; var tmp =0; while(true) { if(index> elements - 1) { //console.log("Fehler: ungültiger Index"); return null; } if(tmp==index) { return currentNode.getX(); } tmp++; currentNode= currentNode.next; } } this.setX = function setX(index, value) { var currentNode = first; var tmp =0; while(true) { if(index> elements - 1) { //console.log("Fehler: ungültiger Index"); return null; } if(tmp==index) { currentNode.setX(value); return currentNode; } tmp++; currentNode= currentNode.next; } } this.setY = function setX(index, value) { var currentNode = first; var tmp =0; while(true) { if(index> elements - 1) { //console.log("Fehler: ungültiger Index"); return null; } if(tmp==index) { currentNode.setY(value); return currentNode; } 
tmp++; currentNode= currentNode.next; } } this.getY = function getY(index) { var currentNode = first; var tmp =0; while(true) { if(index> elements - 1) { //console.log("Fehler: ungültiger Index"); return null; } if(tmp==index) { return currentNode.getY(); } tmp++; currentNode= currentNode.next; } } } Snake: var boardWidth = 40; var boardHeight = 40; var canvas; var context; var snakeList = new LinkedList(); function Food() { var positionX; var positionY; this.randomFood = function randomFood() { this.positionX= Math.floor((Math.random() * (boardWidth-1))); this.positionY = Math.floor((Math.random() * (boardHeight-1))); } this.getX = function getX() { return this.positionX; } this.getY = function getY() { return this.positionY; } } function Snake() { var positionX = Math.floor((Math.random() * (boardWidth-5))); var positionY = Math.floor((Math.random() * (boardWidth-5))); var interval = setInterval(update, 100); var pressedKey = 0; // start game with arroa key var food = new Food(); var points =0; food.randomFood(); snakeList.addFirst(new Point(positionX,positionY)); snakeList.addFirst(new Point(positionX-1,positionY)); snakeList.addFirst(new Point(positionX-2,positionY)); function update() { if(pressedKey!=0) { move(); } eatApple(); changeDirection(); checkGameOver(); repaint(); if(pressedKey==0) { context.strokeText("Start Game with arrow key, new Game with F5",100,20,150); } } function move() { for(var i= snakeList.getLength()-1; i > 0; i--) { snakeList.setX(i,snakeList.getX(i-1)); snakeList.setY(i,snakeList.getY(i-1)); } } function eatApple() { if(snakeList.getX(0)==food.getX(0) && snakeList.getY(0)==food.getY(0)) { snakeList.addLast(new Point()); points++; document.getElementById("textareapoints").value = "Points: "+points; food.randomFood(); } } function changeDirection() { var xDirection= snakeList.getX(0); var yDirection= snakeList.getY(0); window.onkeydown = function(evt) { if (evt.keyCode == 37) { pressedKey=37;} //left if (evt.keyCode == 38) { 
pressedKey=38;} //up if (evt.keyCode == 39) { pressedKey=39;} //right if (evt.keyCode == 40) { pressedKey=40;} //down } switch(pressedKey) { case 37: snakeList.setX(0,xDirection-1); break; case 38: snakeList.setY(0,yDirection-1); break; case 39: snakeList.setX(0,xDirection+1); break; case 40: snakeList.setY(0,yDirection+1); break; default: break; } } function checkGameOver() { for(var i=1; i<snakeList.getLength(); i++) { // eat itself if(snakeList.getX(0)== snakeList.getX(i) && snakeList.getY(0)==snakeList.getY(i)) { clearInterval(interval); context.strokeText("new Game with F5",100,20,150); } } if(snakeList.getX(0) < 0 || snakeList.getX(0) > boardWidth-1 || // hit border snakeList.getY(0) < 0 || snakeList.getY(0) > boardHeight-1) { clearInterval(interval); context.strokeText("new Game with F5",100,20,150); } } function repaint() { context.fillStyle = "yellow"; // repaint whole canvas for(var i=0; i<boardWidth; i++) { for(var j=0; j<boardHeight; j++) { context.fillRect(i*10, j*10,10,10); } } context.fillStyle = "green"; // food context.fillRect(food.getX()*10, food.getY()*10, 10, 10) context.fillStyle = "red"; //snake head context.fillRect(snakeList.getX(0)*10, snakeList.getY(0)*10,10,10); context.fillStyle = "blue"; // snake body for(var i=1; i<snakeList.getLength(); i++) { context.fillRect(snakeList.getX(i)*10, snakeList.getY(i)*10,10,10); } } } function initCanvas() { canvas = document.getElementById("canvas"); context = canvas.getContext("2d"); canvas.width = 400; canvas.height = 400; context.fillStyle = "yellow"; context.fillRect(0,0,canvas.width, canvas.height); context.fillStyle = "black"; } window.onload = function() { initCanvas(); var s = new Snake(); } Html page: <!DOCTYPE html> <html lang="de"> <head> <meta charset="utf-8" /> <title>Snake</title> <link href="snakecss.css" rel="stylesheet"> <script src="snake.js"></script> </head> <body> <header> <h1>Snake</h1> </header> <canvas id="canvas"> </canvas> <textarea id="textareapoints" type="text" rows="1" 
cols="30"> Points: 0 </textarea> </body> </html> Answer: Worst use of linked list, ever! Linked lists are convenient for many reasons, one of them being fast splicing, much faster than array splicing and thus the best option when you are often inserting and deleting items at random positions in the list. Linked lists are very (VERY) poor at random access. In the worst case, to find the last item on the list you have to iterate over every item. Compare that to arrays, which are very very good at random access: any item can be accessed in the same fixed time. Linked lists and arrays are equally good at push/pop and shift/unshift (if you keep a head and tail on the linked list, and if the array can grow up and down). Looking at your code, the linked list has the functions addFirst, addLast which are the equivalents of array push / unshift. You do not do any splicing, and all the access is via an index (random access). You create a linked list and only use its worst attributes, and never even implement its best. Snake games use stacks. Modern variants of the game use stacks. Memory is cheap and plentiful so the stack is a practical solution. The stack is O(1) time and O(n) memory. The JS array is also a stack. The snake's head is at the bottom and the tail at the top (or you can do it the other way around). To move the snake one step forward, you pop the tail, set its x,y to the new head position and then unshift it to the bottom of the stack. To move and grow you don't pop the tail, but create a new head and just unshift it to the bottom of the stack. Moving forward using your linked list is O(n^2) (each indexed setX/setY walks the list from the head) while the array stack is O(1) in time complexity (where n is the length of the snake). An alternative is the display list. Snake game A well designed classic style snake game is O(1) in time and storage.
Almost all modern versions of this game that I have seen are O(n) in time and storage (where n is the max snake length). In the game there are only two points of change, the head and the tail; all the rest is static (and on occasion an apple). To determine if the snake eats itself you need only check the display list (a single lookup, O(1)) and not iterate each body segment. And the same applies to the apple. The display list also encodes the tail direction so it can be removed in the correct sequence. Property access Don't use functions to access public properties; it's slow due to call stack quaking. (quake as in rapidly oscillating) Each variable access requires a new function context that needs to be created and pushed to the stack. Because you do it over two levels of abstraction, LinkedList then Point, updating a single coordinate (x,y) requires 8 function calls and 4 new function contexts, while direct access requires 0 function calls and no stack or heap changes. Besides the negative performance it is also syntactically poor, as direct access or getters and setters would provide cleaner code. Minor points Only add the key event once in the onload event. You have window.onkeydown = function (evt) { in the function changeDirection and it should be in the window onload event. window is redundant and you do not need to use it. KeyboardEvent.keyCode has been deprecated; you should use KeyboardEvent.key or KeyboardEvent.code. Add events using addEventListener. Idiomatic name for CanvasRenderingContext2D is ctx, or use context2D, because context could be anything. Magic numbers and strings all over the place, many for the same abstraction. Put them in one place at the top of the code as named constants. For example, say you want to change the cell size from 10 to 20 pixels. In your code you would have to manually change 16 numbers, and you cannot just search-and-replace, as the 10 may have other meanings. You would have to check each and every instance of 10.
With const cellSize = 10; at the top of the code and ctx.fillRect(x * cellSize, y * cellSize, cellSize, cellSize); when you use it, making changes is easy and fast. The same applies to all numbers and all strings; even if used only once, keeping them all in one place makes it easy.
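To make the stack idea from the review concrete, here is a language-neutral sketch using Python's deque as the double-ended stack; the grid coordinates are illustrative:

```python
from collections import deque

# Snake body as a double-ended queue: head at the left, tail at the right.
snake = deque([(2, 0), (1, 0), (0, 0)])  # head first

def step(snake, dx, dy, grow=False):
    """Advance one cell: O(1) per move, as the review describes."""
    hx, hy = snake[0]
    snake.appendleft((hx + dx, hy + dy))  # push the new head
    if not grow:
        snake.pop()                       # drop the tail unless we just ate
    return snake

step(snake, 1, 0)             # move right
step(snake, 0, 1, grow=True)  # eat an apple: snake grows by one
print(list(snake))  # [(3, 1), (3, 0), (2, 0), (1, 0)]
```

Every body segment except the head and tail stays untouched on each move, which is exactly the "two points of change" observation in the review.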
{ "domain": "codereview.stackexchange", "id": 31356, "tags": "javascript, beginner, object-oriented, game, snake-game" }
2D map with stationary LIDAR
Question: I'm trying to make a 2D map of a room. This is a project for learning ROS in the first place, so there's no robot involved. I just want to be able to make a map from either my laserscan data or pc2 data. I'm using the Scanse Sweep LIDAR, and I'm able to view live data in rviz. Can I use gmapping, and input static odometry data, or is Hector mapping better in this case? Thanks in advance Originally posted by stianjoer on ROS Answers with karma: 3 on 2018-10-18 Post score: 0 Original comments Comment by abdullahsindhu on 2018-11-14: Hi I am also working on making a 2D map of a room kindly explain me some steps to follow they will be help me in my undergrad project Thanks Regards Abdullah Sindhu Answer: With gmapping just adding a static frame for odometry isn't enough because you have parameters like: ~linearUpdate (float, default: 1.0) Process a scan each time the robot translates this far ~angularUpdate (float, default: 0.5) Process a scan each time the robot rotates this far Your map would be updated only if you move, and I'm pretty sure there are other params like that. You can check #q63457, where gmapping without odom was already discussed. So yes, hector_mapping is exactly what you are looking for. Originally posted by Delb with karma: 3907 on 2018-10-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by stianjoer on 2018-10-18: Thank you!
{ "domain": "robotics.stackexchange", "id": 31927, "tags": "ros, ros-kinetic, 2dlidar" }
Observability of proper distance
Question: Hubble's law at the current time is as follows. $$v=H(t_0)r$$ But if you look at the explanation on Wikipedia: Strictly speaking, neither $v$ nor $D$ in the formula are directly observable, because they are properties now of a galaxy, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it. For relatively nearby galaxies (redshift $z$ much less than unity), $v$ and $D$ will not have changed much, and $v$ can be estimated using the formula $v=zc$ where $c$ is the speed of light. This gives the empirical relation found by Hubble. For distant galaxies, $v$ (or $D$) cannot be calculated from $z$ without specifying a detailed model for how $H$ changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: ($1+z$) is the factor by which the universe has expanded while the photon was travelling towards the observer. My question is as follows: the current proper distance indicates where the galaxy should be now, given the expansion, when photons emitted in the past reach us today. The expression of Hubble's law uses the current proper distance, so Hubble's law at the current time already carries information about photons from the past. It therefore seems strange to say that Hubble's law only holds at close distances because of the inability to observe current information. Why does $cz=Hd$ hold only at close distances? Answer: I'm not sure I understand the question, but I'll try to provide some context for the paragraph from Wikipedia. It's ambiguous what law Hubble should be credited with finding, because there are many different notions of distance that can be used in cosmology, all of which satisfy some equation of the form $H_0 D = cz + O(z^2)$; Hubble's original data points all had $z^2<0.000015$ and were noisy enough to be equally consistent with any of these notions of distance.
There is one version of "Hubble's law" that is exactly correct at all distances and times in any (exactly homogeneous) FLRW cosmology, regardless of the specific parameters of the model. That is $v=HD$ where $D$ is the metric distance measured at a constant cosmological time and $v$ is the derivative of $D$ with respect to cosmological time. However, the relationship between the quantities in that Hubble law and directly measurable quantities, such as $z$, luminosity, and angular size, does depend on the details of the model. There is no law as simple and universal as $v=HD$ that relates measurable quantities.
{ "domain": "astronomy.stackexchange", "id": 5924, "tags": "cosmology, hubble-constant" }
What happens if I move air at relativistic speeds
Question: What happens if I accelerate air to the speed of light? To be more specific, I'd like to know what will happen if I do the following: I take some volume of air (just regular atmospheric air) and accelerate it to speeds comparable to the speed of light ($v\in [10^{-3},1)c$). To be even more specific: imagine we have one milligram of air and we accelerate this volume to $v=0.2c$, spending $t=10^{-20}$ seconds of time. Let's imagine that we somehow were able to achieve this result under normal atmospheric pressure etc. The question is: what exactly will happen? I've calculated the approximate kinetic energy of air that we will get with this speed and it equals approximately $E_k = 1.852\,\mathrm{GJ}$, but I do not have enough knowledge to predict anything more than just 'explosion' or a similar thing happening after. Note: I'm looking for a more or less simple answer without too much insight, even though I will be really grateful if you could provide more explanation or a link to a book/article that could help me understand what will happen. Note: I've googled quite a bit before asking this question and found something that looked similar to what I wanted: hypersonic gas flows. But after I looked a bit more I understood that it is not what I want. I'm interested in a very quick acceleration of a small portion of the gas to near-light speeds, not a continuous flow of gas, even if it is also really fast. Answer: The moving air will collide with the static air in the atmosphere. The kinetic energy of the moving air will convert to both heat and the kinetic energy of the pushed molecules of air that were static. There will be high pressure in the front and vacuum behind (until everything settles down). The pressure differences will create a loud supersonic "boom" just like with fighter jets. If you move enough air, say, downward, the effect would be like that of a burning meteorite hitting the Earth.
Tree branches will break, but the trunks still stand in the epicenter: https://en.m.wikipedia.org/wiki/Tunguska_event The total energy of the explosion is defined by $E=(\gamma-1)mc^2$ where $m$ is the mass of the relativistic air, $c$ is the speed of light, and $\gamma=\dfrac{1}{\sqrt{1-\beta^2}}$ where in turn $\beta$ is the speed of the moving air expressed as a fraction of the speed of light. In your example, $m=1\,\mathrm{mg}$ and $\beta=0.2$, so $\gamma-1\approx 0.02$ and the mass converted to energy is $0.02\,\mathrm{mg}$. This is $34,000$ times less than in the Hiroshima bomb, so your explosion would be roughly equivalent to $0.44$ tons of TNT or approximately $2$ lightning strikes at once ($1.85\,\mathrm{GJ}$ of energy, as in your calculations).
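The numbers in the answer can be checked directly; the constants are standard values, and 1 ton of TNT is taken as 4.184 GJ:

```python
from math import sqrt

c = 299_792_458.0  # speed of light, m/s
m = 1e-6           # 1 milligram of air, kg
beta = 0.2         # v / c

gamma = 1.0 / sqrt(1.0 - beta**2)
E = (gamma - 1.0) * m * c**2   # relativistic kinetic energy, joules

print(f"{E / 1e9:.2f} GJ")         # 1.85 GJ, matching the question
print(f"{E / 4.184e9:.2f} t TNT")  # 0.44 tons of TNT, matching the answer
```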
{ "domain": "physics.stackexchange", "id": 49646, "tags": "special-relativity, energy, speed-of-light, air, gas" }
Why does ice melting not change the water level in a container?
Question: I have read the explanation for this in several textbooks, but I am struggling to understand it via Archimedes' principle. If someone could clarify it with a diagram or a clear equation-based explanation, that would be great. Answer: Good question. Assume we have one cube of ice in a glass of water. The ice displaces some of that water, raising the height of the water by an amount we will call $h$. Archimedes' principle states that the weight of water displaced will equal the upward buoyancy force provided by that water. In this case, $$\text{Weight of water displaced} = m_\text{water displaced}g = \rho Vg = \rho Ahg$$ where $V$ is the volume of water displaced, $\rho$ is the density of water, $A$ is the cross-sectional area of the glass (so that $Ah$ is the displaced volume) and $g$ is the acceleration due to gravity. Therefore the upward buoyancy force acting on the ice is $\rho Ahg$. Now the downward weight of ice is $m_\text{ice}g$. Because the ice is neither sinking nor rising, these must balance. That is: $$\rho Ahg = m_\text{ice}g$$ Therefore, $$h = \frac{m_\text{ice}}{\rho A}$$ Now when the ice melts, this height difference due to buoyancy goes to 0. But now an additional mass $m_\text{ice}$ of water has been added to the cup in the form of water. Since mass is conserved, the mass of ice that has melted has been turned into an equivalent mass of water. The volume of such water added to the cup is thus: $$V = \frac{m_\text{ice}}{\rho}$$ and therefore, $$Ah = \frac{m_\text{ice}}{\rho}$$ So, $$h = \frac{m_\text{ice}}{\rho A}$$ That is, the height the water has increased due to the melted ice is exactly the same as the height increase due to buoyancy before the ice had melted. Edit: For completion, since it is raised as a question in the comments Melting icebergs boost sea level rise, because the water they contain is not salty.
Although most of the contributions to sea-level rise come from water and ice moving from land into the ocean, it turns out that the melting of floating ice causes a small amount of sea-level rise, too. Fresh water, of which icebergs are made, is less dense than salty sea water. So while the amount of sea water displaced by the iceberg is equal to its weight, the melted fresh water will take up a slightly larger volume than the displaced salt water. This results in a small increase in the water level. Globally, it doesn’t sound like much – just 0.049 millimetres per year – but if all the sea ice currently bobbing on the oceans were to melt, it could raise sea level by 4 to 6 centimeters.
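The equality in the main derivation can be checked numerically; the glass area and ice mass below are arbitrary illustrative values:

```python
rho = 1000.0   # density of water, kg/m^3
A = 0.004      # cross-sectional area of the glass, m^2
m_ice = 0.05   # mass of the floating ice, kg

# Level rise while the ice floats: rho * A * h * g = m_ice * g
h_floating = m_ice / (rho * A)

# Level rise after melting: added volume (m_ice / rho) spread over area A
h_melted = (m_ice / rho) / A

print(h_floating, h_melted)  # both are approximately 0.0125 m
```

Any choice of mass, density, and area gives the same agreement, since both expressions reduce algebraically to m_ice / (rho * A).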
{ "domain": "physics.stackexchange", "id": 83276, "tags": "water, buoyancy, states-of-matter, ice" }
I can has(kell) cheezburger?
Question: Edit 13 June: I have made a new and improved version here. Original Question This is a very rudimentary lolcats translator (it only works on the phrase "Can I have a cheeseburger") in Haskell. This is my first ever attempt at Haskell (or any functional programming language). I am sure that my code is absolutely atrocious (the documentation was dismal) but it works. Teh Codez import Data.Strings main = do putStrLn "Enter your text to translate" inputText <- getLine let capsText = strToUpper inputText let a = strReplace "HAVE" "HAS" capsText let b = strReplace "CAN I" "I CAN" a let c = strReplace "CHEESEBURGER" "CHEEZBURGER" b let d = strReplace " A " " " c putStrLn (d) getLine Answer: It is good practice in Haskell to separate the functional code from the IO. In this case, you could (and therefore should) define a lolcat :: String -> String function. Be sure to put a type declaration on all functions — you didn't write one for your main. Defining variables a, b, c, and d is overkill. I would write this as a composition of functions. lolcat :: String -> String lolcat = strReplace " A " " " . strReplace "CHEESEBURGER" "CHEEZBURGER" . strReplace "CAN I" "I CAN" . strReplace "HAVE" "HAS" . strToUpper
{ "domain": "codereview.stackexchange", "id": 20372, "tags": "beginner, strings, haskell" }
The physics of hit & run
Question: It is usually said that you can minimize the damage caused by a car crash by increasing its duration. This way the impulse (F x dt) will be the same, but the F component will decrease and hence the acceleration for the passengers. For example, by manufacturing the cars of steel that is deformable, you make them stick to each other and thus travel together during more time, instead of bouncing off and accelerating sharply. Logically, the opposite is also true: by choosing an elastic material, you would reduce collision time and increase the damage. But I wonder if you can obtain this result without changing the actors, without altering the display, just by reducing the interaction time. For example, the bird pecks at the wood of the tree. Of course, thus the surface exposed to the force is reduced and this maximizes the pressure and the impact of the force. But the bird also uses quick repeated movements for some reason, doesn't it? I tend to think that the same rationale lies behind both cases. When the interaction time increases with a deformable car, it is because this material is like a coward army with little cohesion. If the soldiers (molecules) are hit and displaced, there is no courage (restoring force) making them strike back, so they are disbanded (potential energy is not re-converted into kinetic energy). Hence the non-bouncing off and the longer interaction time. When the interaction time decreases out of the sheer will of the bird, the effect is that its beak, even if it faces a brave army, just hits a couple of soldiers and retreats before they can obtain assistance from their colleagues... This would be a sort of hit & run strategy, guerrilla war... reducing the attack time and maximizing the force? Well, that is what I initially thought but have doubts... Answer: by manufacturing the cars of steel that is deformable, you make them stick to each other It is not so much a "sticking together" effect, but rather a "pillow" effect.
Even if they don't stick at all, the deformation still absorbs energy. This reduces the total kinetic energy. Logically, the opposite is also true: by choosing an elastic material, you would reduce collision time and increase the damage. "Elasticity" is not the opposite of "deformable". You can easily have a non-elastic surface that doesn't deform. A stone for example. Those two are not opposites but rather in the same category: first a material elastically compresses/stretches and then it permanently deforms. Not either one or the other. These are called elastic deformation and plastic deformation, respectively. Both the amounts of plastic deformation and of elastic deformation are positive factors increasing the duration of impact. Think of a deformable car and think of a trampoline; both increase impact time and save you from injury. When the interaction time increases with a deformable car, it is because this material is like a coward army with little cohesion. If the soldiers (molecules) are hit and displaced, there is no courage (restoring force) making them strike back, so they are disbanded (potential energy is not re-converted into kinetic energy). Hence the non-bouncing off and the shorter interaction time. (I believe you mean "longer interaction time" in that last sentence here.) This analogy is fine so far, as far as I can see. When the interaction time decreases out of sheer will of the bird, the effect is that its beak, even if it faces a brave army, just hits a couple of soldiers and retreats before they can obtain assistance from their colleagues. This analogy is a little bit odd. I'm pretty sure (without being a bird expert) that the quick head withdrawal happens because it simply bounces back. The backwards motion is not "out of sheer will" but simply the bounce. Then the impact is not independent of the motion afterwards, because the momentum change is larger, which means a larger force applied to the wood.
I do not believe the "retreating fast to reduce impact time" analogy works here, and I do not see the connection to car collisions.
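The impulse argument in the question can be made concrete with a toy calculation (the mass, speed change, and durations are made-up illustrative numbers): the momentum change F·Δt is fixed by the crash, so stretching Δt shrinks the average force.

```python
m = 1500.0         # kg, assumed car mass
dv = 15.0          # m/s, assumed speed change during the crash
impulse = m * dv   # kg*m/s, fixed by the momentum change regardless of duration

# Longer collision duration -> smaller average force on the occupants.
for dt in (0.05, 0.2, 0.5):  # rigid, crumpling, very soft (seconds, assumed)
    avg_force = impulse / dt
    print(f"dt = {dt:4.2f} s -> average force = {avg_force / 1000:6.1f} kN")
```

Tripling the duration cuts the average force by a factor of three, which is the whole point of crumple zones.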
{ "domain": "physics.stackexchange", "id": 35627, "tags": "newtonian-mechanics, forces, energy, acceleration, time" }
How to generate .deb of ROS package step by step outside ros buildfarm
Question: Hi there, Can anyone provide a tutorial to generate a .deb of a ROS package step by step so that others can use it to install the package without source code? Many thanks, Rickardo Originally posted by rikardo on ROS Answers with karma: 31 on 2016-08-13 Post score: 3 Original comments Comment by tanasis on 2018-01-03: Hey Rikardo, did you solve this? Answer: If you want to build a ROS package into a deb, you can do it manually just like any debian package. I highly recommend reading through the Debian documentation. There's a relatively steep learning curve, unfortunately. A few starting points are: https://wiki.debian.org/HowToPackageForDebian https://www.debian.org/doc/manuals/maint-guide/build.en.html https://debian-handbook.info/browse/stable/debian-packaging.html I don't know what your use case is, but if you're building ROS packages privately you can still use bloom to generate a lot of the boilerplate needed above. And you can use git-buildpackage to build the resultant debian packages locally in approximately one line. A simpler approach that can fill many use cases is to use the checkinstall tool. You can use checkinstall with the install target for a catkin workspace. Originally posted by tfoote with karma: 58457 on 2016-08-14 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 25515, "tags": "ros, deb" }
How does the echo of a radio wave from Venus depend on the rotation speed and direction of the planet?
Question: In this paper from around 1960, scientists used echoes of radio waves from Venus' surface to determine its rotational speed and direction. How can the rotation of Venus affect the echo of the radio wave? The brief explanation is that one side of Venus is coming towards us and one side is going away, and that is what causes the frequency of the echo to change. Yet, I do not understand why this is so. Answer: There is a Doppler signal. The frequency of the reflected wave is changed by the reflector moving, and that shift is measured to get the velocity, usually by mixing with a wave of known frequency or by measuring a beat or phase shift. Radio waves, like acoustic waves, shift in frequency when an object is moving. So, just as you hear a higher pitch as a train or an ambulance with a siren moves towards you, you hear a lower pitch as it moves away from you. Doppler radar: weather station radars use the effect to measure the speed and rotation of storms. Edit: In the paper they note that they subtract the Doppler shift due to the relative motion of Earth and Venus and then look at how the remaining frequency spectrum changes. Since the period of rotation of Venus is so long (250 days), they also had to play with some range gating and other tricks to understand the signal that they were getting with the experiment. The data was collected over about a 90 day time period.
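A rough order-of-magnitude sketch shows why this measurement is delicate (the radar frequency here is an assumed illustrative value, not the one used in the paper; the rotation period is the 250 days quoted above):

```python
import math

c = 3.0e8           # m/s, speed of light
R = 6.052e6         # m, Venus equatorial radius
T = 250 * 86400.0   # s, rotation period quoted in the answer
f = 1.0e9           # Hz, illustrative radar frequency (assumed)

v_limb = 2 * math.pi * R / T   # speed of the approaching/receding limb
# Round-trip Doppler shift of one limb is 2*v*f/c; the echo spectrum is
# spread between the +limb and -limb shifts, i.e. a width of 4*v*f/c.
spread = 4 * v_limb * f / c
print(f"limb speed ~ {v_limb:.2f} m/s, echo bandwidth ~ {spread:.1f} Hz")
```

A broadening of only tens of hertz on a ~GHz carrier is why careful spectral analysis, along with the range gating mentioned above, was needed.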
{ "domain": "physics.stackexchange", "id": 89994, "tags": "astronomy, doppler-effect, radio-frequency" }
How useful is the assumption that water doesn't auto-ionize in the following?
Question: Now in the first row of the ICE table, $[\ce{H3O+}]$ is said to be 0. But water autoionizes to some extent, so it shouldn't really be 0. I accept that, because the pH of this solution is 2.5, the autoionization effect is negligible, but are there instances where this can't be ignored (e.g. where the pH is close to 7)? Academic, industrial (or hobby) examples would all be fine. Answer: First, while it is useful to know approximate methods of solving aqueous solution equilibrium problems, you have to realize that in a professional context, these methods are not used other than for back-of-the-envelope calculations. There exists software that performs that kind of calculation without introducing approximations, and it is widely used. So, to answer the question in your title: such approximations are useful for simple calculations, and for those only. Now, let's try to evaluate the effect of neglecting auto-ionization of water on your problem. You can write the full system of equations describing the aqueous equilibrium: conservation of mass: $[\ce{HA}] + [\ce{A-}]=c_\ce{A}$, where $c_\ce{A}$ is the analytical concentration in chlorobenzoic acid (0.1 mol/L). electroneutrality: $[\ce{H+}]=[\ce{A-}]+[\ce{HO-}]$ water autoionization equilibrium: $[\ce{H+}][\ce{HO-}]=K_\text{e}$ acid-base equilibrium: $\displaystyle K_\text{a}=\frac{[\ce{H+}][\ce{A-}]}{[\ce{HA}]}$ You can solve this system of equations, as you have four equations and four unknowns ($K_\text{a}$, $[\ce{A-}]$, $[\ce{HA}]$ and $[\ce{HO-}]$). The answer gives $\mathrm{p}K_\text{a} = 9.19$ (given the precision given in experimental measurements). You don't say what figure you obtain, but I can take the same system above and simulate "neglecting auto-ionization" by setting $K_\text{e}=0$. Doing this, I obtain $\mathrm{p}K_\text{a} = 9.18$, which means the error involved in your approximation is 0.01 on the $\mathrm{p}K_\text{a}$. So, this is indeed a good approximation.
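The size of the approximation error is easy to probe numerically. Below is a sketch for a generic monoprotic acid (the Ka and concentration are hypothetical, not the chlorobenzoic acid numbers from the answer). Combining the four equations above into a single charge-balance relation gives [H+] = Ka·c/(Ka+[H+]) + Ke/[H+], solved here by bisection; setting Ke = 0 reproduces the "neglect autoionization" approximation:

```python
import math

def h_plus(Ka, c, Ke=1.0e-14):
    """[H+] from charge balance + mass balance for a monoprotic acid HA.
    Ke=0 corresponds to neglecting the autoionization of water."""
    def f(h):
        return h - Ka * c / (Ka + h) - (Ke / h if Ke else 0.0)
    lo, hi = 1e-14, 1.0  # f(lo) < 0 < f(hi); f is monotonically increasing
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

h_full = h_plus(1.0e-5, 0.1)            # with autoionization
h_apprx = h_plus(1.0e-5, 0.1, Ke=0.0)   # without
print(f"pH full: {-math.log10(h_full):.4f}, approximate: {-math.log10(h_apprx):.4f}")
```

For an acidic solution like this the two answers agree to far better than 0.01 pH units; the correction only starts to matter when the solution sits near pH 7 (very dilute or very weak acids).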
{ "domain": "chemistry.stackexchange", "id": 178, "tags": "acid-base, water, ph" }
What exactly is power and how does it work?
Question: Let's say that a car engine can output a maximum power of $P$. Initially, the car starts from rest and so has zero velocity. Now the tires start slipping and a force of $f_k = \mu mg$ will act on the car accelerating it forward where $\mu$ is the friction coefficient between the ground and the tires. Now the car starts accelerating with $a = \frac{f_k}{m}$ and the power output of the car engine slowly rises until it reaches $P$. $f_k \cdot v = P \implies v = \frac{P}{f_k}$ Now I couldn't understand what exactly would happen after the power output of the engine reaches $P$ but these were my thoughts: Since there is an external force $f_k$ acting, it feels like it would still accelerate, but that would increase its velocity and hence its power beyond $P$, which is not possible. The velocity($v$) would increase and the external force($F$) would be less than the maximum kinetic friction($F < f_k$) such that $F \cdot v = P$. But I couldn't think about how that would work because the tire can only either slip or roll on the surface which would leave room for only 2 values of $F$ which are $f_k$ and $0$. So please help me understand how exactly this situation proceeds after the engine has achieved power output $P$ Answer: Your question includes the key equation, $P=Fv$. Power is the force times the current velocity. Now in a real world setting, the power of an engine varies with speed (as many other answers have shown), but we'll use your idealized setting with an engine that outputs constant power at all speeds up to some maximum power limit, $P_{max}$. You've done the calculations for what happens up to that limit. Your question is what happens after that limit is achieved. In this regime, we have a constant power, and an obviously increasing velocity. This implies the force must be decreasing. Indeed, the force applied by this idealized car on the road must be $F=\frac{P}{v}$ or else the engine is not achieving these ideals.
Your concern was that the car must either be spinning its tires, such that we use the dynamic friction equation $F=\mu_d mg$ (assuming a normal force of $mg$, as you did), or the car must have tires that are now sticking, such that static friction applies. In your question, you indicate that this means $F=0$, but this is not the case. Were that to be the case, static friction could never apply a force and nothing could be held in place by friction. In reality, the equation for static friction is $F\le \mu_s mg$. The force is whatever force is required to keep the objects stationary with respect to each other, up to a max limit of $\mu_s mg$, at which point they start slipping. But any amount of force up to that point is valid. So what will happen is that your car will accelerate off the line. At the start, velocity is zero, so force is infinite... that can't be right! Your idealized car is too idealized! So let's recognize that the spinning of tires at the start permits some "bleeding" of power into heat. So the car accelerates along some velocity curve. That curve will be based on the $f_k$ equations you did. What matters is that the velocity grows, and thus the corresponding force decreases. At some velocity, the force is so low that the tires succeed in "sticking." At what force this occurs is a very complicated topic, especially with rubber tires which bend and flex and stick and do all sorts of complex physicsy things. But at some point this will occur, because your constant-power engine will continue to apply less and less force, thus less and less acceleration. Once the tires stick, we need to switch over to the static friction regime. In this regime, the force of friction is whatever is needed to satisfy the $F=ma$ equation for the car, so long as it doesn't exceed the maximum (which it won't, as we've already seen, because force only goes down in your scenario, never up).
The fact that this force does not need to be 0 (as initially stated in the question) is how our car can keep going faster and faster with no limit (although its acceleration gets smaller and smaller).
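The constant-power phase described above can be checked numerically. With $m\,dv/dt = P/v$, separating variables gives $v(t) = \sqrt{v_0^2 + 2Pt/m}$; the sketch below (made-up mass, power cap, and hand-off speed) integrates the equation of motion and compares it with that closed form:

```python
import math

m = 1000.0   # kg, assumed car mass
P = 100e3    # W, assumed maximum engine power
v0 = 20.0    # m/s, assumed speed at which the tires stop slipping

# Forward-Euler integration of m*dv/dt = F = P/v over 10 seconds
v, dt, t_end = v0, 1e-4, 10.0
for _ in range(int(t_end / dt)):
    v += (P / (m * v)) * dt   # a = F/m = P/(m*v), shrinking as v grows

v_closed = math.sqrt(v0**2 + 2 * P * t_end / m)
print(f"numeric v({t_end:.0f}s) = {v:.3f} m/s, closed form = {v_closed:.3f} m/s")
```

The force $P/v$ keeps dropping, so the car keeps accelerating — just ever more slowly, exactly as the answer describes.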
{ "domain": "physics.stackexchange", "id": 99424, "tags": "newtonian-mechanics, power" }
Where is pcd_viewer on ROS fuerte?
Question: In ROS Electric, I can run: rosrun pcl pcd_viewer office_chair_scene_360.pcd But I can't run it on ROS fuerte. sam@sam:/opt/ros/fuerte/stacks/perception_pcl/pcl_ros/bin$ rosrun pcl pcd_viewer office_chair_scene_360.pcd [rosrun] Couldn't find executable named pcd_viewer below /opt/ros/fuerte/share/pcl sam@sam:/opt/ros/fuerte/stacks/perception_pcl/pcl_ros/bin$ Is there something I'm missing? Where is it? Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2012-08-15 Post score: 1 Answer: $ which pcd_viewer /opt/ros/fuerte/bin/pcd_viewer Originally posted by dejanpan with karma: 1420 on 2012-08-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by sam on 2012-08-16: Why can't I mark this as the right answer? It doesn't work when I click it. Thank you~
{ "domain": "robotics.stackexchange", "id": 10622, "tags": "pcl" }
What are the physics principles behind magnetic strength and distances
Question: This is for my report and I want to know what the theory is behind why magnets repel, and why magnetic force decreases as distance increases. I wrote down one of the theories, the magnetic field, but I'm not sure what other theories are behind this. (I did two experiments; one of them is a magnetic spring.) Also, how does the length of a magnet affect magnetic strength and distance? Answer: For a simpler explanation, a magnet can be considered to resemble an electric dipole. Like an electric dipole, a magnet is considered to have positive and negative poles (that is, negative and positive magnetic charges at either end of the magnet). When two magnets interact with each other, these negative and positive charges attract and repel each other depending on their respective charges and the distance between them. The net attraction or repulsion is estimated by calculating the net force on the magnet. Here, Coulomb's law (used for determining the electric force between two electric charges) is valid, with the electric charges in the equation replaced by the magnetic charges. As with electric charges, like magnetic charges repel each other and unlike magnetic charges attract each other. This, I guess, will help you understand. As noted, this isn't an accurate theory and is just theoretical, not practical. This, I guess, is apt for you as you are not that familiar with electric and magnetic forces. The distance relation you asked about is derived experimentally, as you did, and the equations are developed based on those experiments.
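The "two poles as magnetic charges" picture in the answer can be turned into a toy calculation (all units and charge values are arbitrary, purely illustrative):

```python
def net_force(d, L=0.02, q=1.0, k=1.0):
    """Net axial force between two bar magnets modeled as +q/-q pole pairs.
    Magnet A occupies [-L, 0], magnet B occupies [d, d+L]; B's S pole faces
    A's N pole, so the nearest poles attract. Negative result = attraction."""
    total = 0.0
    for qa, xa in ((+q, 0.0), (-q, -L)):       # A's N pole at 0, S pole at -L
        for qb, xb in ((-q, d), (+q, d + L)):  # B's S pole at d, N pole at d+L
            r = xb - xa
            total += k * qa * qb / r**2        # Coulomb-like pole-pole force
    return total

for d in (0.01, 0.02, 0.04, 0.08):
    print(f"gap {d * 100:4.1f} cm -> net force {net_force(d):10.1f}")
```

Because the far poles partly cancel the near ones, the net force falls off much faster with the gap than any single pole-pole term — matching the observation that magnet strength drops quickly with distance. A longer magnet (larger L) pushes the cancelling far-pole terms further away, so it feels a stronger net force at the same gap.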
{ "domain": "physics.stackexchange", "id": 40952, "tags": "homework-and-exercises, magnetic-fields, magnetic-monopoles" }
What's the meaning of the $\sigma$'s of a particle physics measurement?
Question: In particle physics experiments, one often quotes the result of measurement of an observable with $1\sigma$, $2\sigma$, $3\sigma$ ranges. The experiments typically give a best-fit value with a $3\sigma$ range for that observable. Let us take an example. The best-fit value of the neutrino solar mixing angle $\theta_{12}$ is $33.62°$ with a $3\sigma$ range $31.42°$-$36.05°$. To get some ideas I was looking at the links How Many Sigma? and What does a 1-sigma, a 3-sigma or a 5-sigma detection mean?. But I failed to understand several points. Why is the range $\theta_{12}\in[31.42°,36.05°]$ called a $3\sigma$ range? Does it mean that the probability that $\theta_{12}$ lies between $31.42°$ and $36.05°$ is $0.9973$? Equivalently, the chance of getting a value between $31.42°$ and $36.05°$ is $99.73\%$? When one talks about $\sigma$'s, they must generate a Gaussian or Normal distribution with a mean $\mu$ and a standard deviation $\sigma$. How is this Gaussian distribution obtained? What does the best-fit value have to do with this distribution? What do we mean when we say that some value is disfavoured at $3\sigma$? Of course, we mean that it lies outside the $3\sigma$ range. However, in terms of probability, if a value is outside the $3\sigma$ range, do we mean that the probability of obtaining it is $\leq 0.27\%$? If you have a normal distribution, you can always calculate any $N\sigma$ range. How is it possible that the $3\sigma$-range is known but not the $5\sigma$? Can we expect the $3\sigma$ range of an observable to become narrower in the future? Answer: I am a bit rusty on my statistics so the following may not be the most precise. Why is the range $\theta_{12}\in[31.42°,36.05°]$ called a $3σ$ range? Does it mean that the probability that $\theta_{12}$ lies between $31.42^o$ and $36.05^o$ is 0.9973? Yes.
I have more commonly seen either a z (or t)-score $z=\dfrac{x-\mu}{\sigma}$ or an $x\%$ (in this case 99.73%) confidence interval, but the idea is the same. When one talks about $\sigma$'s, they must generate a Gaussian or Normal distribution with a mean $\mu$ and a standard deviation $\sigma$. No, not necessarily. The standard deviation $\sigma$ simply refers to the quantity $\sigma = \sqrt{\langle x^2\rangle-\langle x\rangle^2}$. However, when creating confidence intervals or doing more general hypothesis testing, one wants the sampling distribution as close to normal as possible, as this gives meaningful interpretations. This distribution is naturally obtained. (The number of samples being greater than $30$ is one criterion for test validity, since distributions tend towards normal; see the Central Limit Theorem.) ...in terms of probability, if a value is outside the $3\sigma$ range, do we mean that the probability of obtaining it is $\leq 0.27\%$? Yes. The reason we want (some high number)$\sigma$ is that we can never statistically prove anything (i.e. with 100% confidence), so in general we assume that some event happens due to random chance (null hypothesis), and if that chance is $<x\%$ then it is statistically significant and we may reasonably assume that it is not mere chance; that some other factor may explain the phenomena. How is it possible that the $3\sigma$-range is known but not the $5\sigma$? I do not know of a case in which this is true. In general, one may always construct any $\sigma$-range; however, if not many samples are taken it is more useful to take lower $\sigma$ values as the interval's spread may be too large at higher values for any meaningful analysis. Can we expect the $3\sigma$ range of an observable to become narrower in the future? Generally, yes. If one takes more samples the range (at a fixed $\sigma$ value) should become smaller, proportional to $n^{-1/2}$.
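The percentages attached to Nσ intervals come straight from the normal distribution and can be reproduced with the error function (this assumes a Gaussian sampling distribution, as discussed above):

```python
import math

def within_n_sigma(n):
    """P(|X - mu| <= n*sigma) for a normally distributed X."""
    return math.erf(n / math.sqrt(2.0))

for n in (1, 2, 3, 5):
    p = within_n_sigma(n)
    print(f"{n} sigma: inside {p:.7f}, outside {1.0 - p:.2e}")
```

This reproduces the familiar 68.3% / 95.4% / 99.73% figures, and shows why a 5σ exclusion (outside probability of roughly 6e-7) is such a strong statement.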
{ "domain": "physics.stackexchange", "id": 50174, "tags": "experimental-physics, measurements, probability, error-analysis, statistics" }
ROS2 plan a trajectory without mapping (without lidar)
Question: Hello, first of all, for a little bit of context, I'm using ros2 foxy with ubuntu 20.04 and gazebo 11. I'm still learning how everything works together so, if I'm saying something wrong, I would really appreciate being corrected! I have a robot defined within a urdf file, with plugins to work properly in gazebo. My robot spawns correctly, I can control it with teleop_twist_keyboard, and the /tf odom -> base_link and base_link -> [other parts of my robot] are published correctly. As odometry sources I am using a differential drive encoder and an absolute location plugin in gazebo (gazebo_ros_p3d), and I have an IMU. My robot spawns in a gazebo world that is just a planar ground with no obstacles. Now I just want to be able to say to my robot something like "go to x=10 and y=12" in an empty map; I don't need to map my environment (as I believe it will be hard with my available sensors anyway). The problem I am facing is that tutorials like https://navigation.ros.org/setup_guides/sensors/setup_sensors.html tend to use lidars, which I can't use. From what I understood, I have to publish the map -> odom transform in order to go further, but everything I found used lidars to publish it. Now if anyone could give me some hints on where to look, that would be wonderful! On a side note, my robot has a trailer (for now I'm doing as if it weren't there), but if you happen to have some documentation on path planning with a trailer I would greatly appreciate it! Thank you to anyone who took the time to read me; if you want me to add anything just tell me! Would you have a tutorial I could follow with Originally posted by Bastian2909 on ROS Answers with karma: 68 on 2022-05-17 Post score: 0 Answer: The map->odom transform is responsible for compensating for sensor drift, and also for locating the robot's starting location in a map.
If you have no drift and do not care about absolute coordinates, feel free to provide a static identity transform between map and odom (ros2 run tf2_ros static_transform_publisher 0.0 0.0 0.0 0.0 0.0 0.0 map odom). Originally posted by Per Edwardsson with karma: 501 on 2022-05-18 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Bastian2909 on 2022-05-18: This is exactly what i needed, thank you !
{ "domain": "robotics.stackexchange", "id": 37678, "tags": "navigation, odometry, mapping, planner" }
How to identify Dominant terms in big-O notation and understand Time Complexity of an algorithm?
Question: I understand that Time Complexity is the measure of how an algorithm scales with respect to the input size. Let us say an algorithm has a runtime of O($3^n$) + O($n^{20}$). In order to identify the non-dominant term, I followed my intuition for a sample of values: $3^n$ = {1, 3, 9, 27, ...} for n={0, 1, 2, ...} $n^{20}$ = {0, 1, 1048576, 3486784401, ...} for n={0, 1, 2, ...} I see that $n^{20}$ scales up more quickly than $3^n$ for all values of n>=2. Can I say in this regard that the non-dominant term is $3^n$? Answer: No, when we talk about dominant terms in big-O notation we talk about asymptotic domination, i.e. how terms scale as $n \to \infty$. To see this you can think about the limit: $$\lim_{n \to \infty}\frac{f(n)}{g(n)} = C \ .$$ If this value is $C \in \mathbb{R}$, then we have that $f(n)=O(g(n))$ and so the sum $O(f(n))+O(g(n))=O(g(n))$. If instead, $C=+\infty$, then it's the opposite, i.e. $g(n)=O(f(n))$ and so the sum will be $O(f(n))+O(g(n))=O(f(n))$. In your case, even if for small values of $n$ it seems that $n^{20}$ grows faster than $3^n$, this is not the case. Indeed, $\lim_{n\to \infty} \frac{3^n}{n^{20}} = + \infty $. You can see it in this way: $3^n=(3^{\log_3 n})^{n/\log_3 n}=n^{n/\log_3 n}$ which grows obviously bigger than $n^{20}$, in particular it surpasses $n^{20}$ as soon as the exponent $\frac{n}{\log_3 n}$ surpasses $20$. It is worth noting that asymptotic behavior isn't everything. In practice, polynomials with large exponents or large multiplicative constants can be bigger than exponentials for values of $n$ from real world applications.
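Exact integer arithmetic makes the crossover easy to locate; the sample values in the question simply stop too early:

```python
# n**20 wins for small n, but 3**n overtakes it once n*ln(3) exceeds
# 20*ln(n), and stays ahead from then on. Find the crossover by brute force.
crossover = next(n for n in range(2, 10_000) if 3**n > n**20)
print("3^n first exceeds n^20 at n =", crossover)
print("e.g. at n = 2:", 3**2, "<", 2**20)
```

So the question's intuition is correct for small inputs, but the exponential term dominates for every n from the crossover onward, which is what big-O notation cares about.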
{ "domain": "cs.stackexchange", "id": 21383, "tags": "algorithms, time-complexity" }
Variable declaration closer to usage Vs Declaring at the top of Method
Question: As part of our sprints, we peer review each other's code. Here is the code that I am reviewing. public void SendEmail() { string emailAddress = string.Empty; string managerEmailAddress = string.Empty; if (//condition) { //do something //Retrieve emailAddress and managerEmailAddress EmailMessage.To = emailAddress; EmailMessage.CC = managerEmailAddress; } } The suggestion from Resharper was to have the string declarations closer to the usage. However, my colleague suggests that it is not a good idea to have them closer to the usage and that they should be declared right at the beginning. And, I sort of agree with him, because for methods where the variables are used in multiple places with different values being assigned depending upon conditions...it would make sense to have them all grouped right at the top, but for smaller methods, where there is just one if condition, stick them along with the usage (for instance the above mentioned code). What do you think? Answer: I agree with Guffa and dreza; variables should only be declared at an outer scope if it is really necessary, as doing so is confusing and error prone. Also, if a method is long enough that having the variable declaration at the top means it isn't close to its use, then it could probably benefit from refactoring into separate methods. Doing this will also help readability because you'll have method names explaining what's happening. Some examples (quotes as this is only my opinion): 1: "Bad" example As an outside coder, if I saw this I would wonder why the variables are declared at the outer scope and would waste time worrying there was a special reason for it.
string emailAddress = string.Empty; string managerEmailAddress = string.Empty; if (//condition) { emailAddress = Foo(); EmailMessage.To = emailAddress; } else { emailAddress = Bar(); // Do something different with emailAddress; } // No more usages of emailAddress 2: "Improved" example: This is better because the variables are defined where they are used so there is no need to go looking around for other usages. if (//condition) { var emailAddress = Foo(); EmailMessage.To = emailAddress; } else { var emailAddress = Bar(); // Do something different with emailAddress; } 3: "Even better" example: I think this is the best it can get because the method names will help with understanding the code. if (//condition) { ExplanatoryName1(//pass anything in that is required); } else { ExplanatoryName2(//pass anything in that is required); } 4: If you need a return from the methods: var methodReturn = condition ? ExplanatoryName1(args) : ExplanatoryName2(args);
{ "domain": "codereview.stackexchange", "id": 5393, "tags": "c#" }
Why don't mitochondria have plasmids?
Question: According to the endosymbiotic theory, mitochondria are descended from specialised bacteria (probably purple nonsulfur bacteria) that somehow survived endocytosis by another species of prokaryote or some other cell type, and became incorporated into the cytoplasm [ref]. And plasmids naturally exist in bacterial cells, and they also occur in some eukaryotes [ref]. I was however taught that mitochondria have no plasmid and only have circular DNA. If the endosymbiotic theory is true, then how come mitochondria have no plasmid? Answer: The mitochondrial genome is highly reduced; many mitochondrial genes have been transferred to the nuclear genome (see endosymbiotic gene transfer) and therefore the mitochondria are fully dependent on the nucleus to function. Bacteria need not necessarily have a plasmid. Usually, all the important genes are present in the chromosomal DNA. Since the mitochondria have lost most of their genes and retain only a few genes that are highly essential for their function, the likelihood of retention of any plasmid DNA is very low. However, there are some reports of plasmid-like DNA in mitochondria (mostly in plants). Handa (2008): in Brassica Robison et al., (2005): in carrots Collins et al., (1981): in Neurospora (a fungus) Likewise, chloroplasts also harbour plasmid-like DNA (google-scholar hits).
{ "domain": "biology.stackexchange", "id": 9592, "tags": "evolution, dna, mitochondria, plasmids, prokaryotes" }
Why can't one see for a 2nd-order system that it is at its stability limit, neither in the Nyquist plot nor in the Bode plot?
Question: Consider $$\hat{G}(s) = \frac{1}{s^2+s}$$ and its Nyquist and Bode plots [plots omitted]. In both plots it seems that the closed-loop system is stable even when the eigenvalues are {0, -1}. For a higher-order closed-loop system one can see that a system like $$\hat{G}(s) = \frac{1}{s^3+s^2+s}$$ is at its stability limit, since the Nyquist plot hits the real axis at -1 and in the Bode plot there is no phase margin when the magnitude hits 0 dB. Why do the Nyquist and the Bode plot fail for a closed-loop system of 2nd order? Answer: I think you are mixing up the closed- and open-loop systems. The closed-loop system is $$\frac{\frac{1}{s^2+s}}{1+\frac{1}{s^2+s}}=\frac{1}{s^2+s+1}$$ This has poles at $-0.5\pm 0.866025 i$, which is stable.
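The pole computation in the answer is a one-liner to verify: apply the quadratic formula to the closed-loop denominator s² + s + 1.

```python
import cmath

# Closed loop of G(s) = 1/(s^2+s) with unity feedback: 1 + G = 0  =>  s^2 + s + 1 = 0
a, b, c = 1.0, 1.0, 1.0
disc = cmath.sqrt(b * b - 4.0 * a * c)
poles = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
print("closed-loop poles:", poles)
print("stable:", all(p.real < 0 for p in poles))
```

Both poles have negative real part, confirming that the closed-loop system is stable — the {0, -1} eigenvalues quoted in the question belong to the open-loop transfer function, not the closed loop.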
{ "domain": "engineering.stackexchange", "id": 1181, "tags": "control-engineering, control-theory" }
Directory Snapshot
Question: The following code creates a recursive backup of a directory in the form of interlinked HTML files, structured in the same form as the input directory. It does not take a backup of the contents of a directory, just stores the names and sizes of all the files and folders contained in the directory. (It can be thought of as a hyperlinked version of the dir /s or tree /f commands.) For more details (including an example), refer to this. As an example, this would be the output directory considering this as the input directory. I cannot use C++14.

    #include <iostream>
    #include <sstream>
    #include <fstream>
    #include <string>
    #include <cassert>
    #include <vector>
    #include <cstdlib>
    #include <boost/filesystem.hpp>

    using namespace std;
    using namespace boost::filesystem;

    const path LogFileName = "DirectorySnapshotLog.txt";
    ofstream Log;
    stringstream LogErrorStream; // Buffer soft errors to output them separately after the informational messages in the log file.

    // Convert any type to its string representation
    template<typename T>
    std::string ToString( const T obj )
    {
        std::stringstream ss;
        ss << obj;
        return ss.str();
    }

    // Convert the input size ( in bytes ) to its nearest units in the ratio of 1024.
    // ( Trying to do how Windows reports size of a file on right clicking and checking its properties )
    string RoundSize( const long long& size )
    {
        double ret = ( double )size;
        vector<string> units;
        units.push_back( "bytes" );
        units.push_back( "KB" );
        units.push_back( "MB" );
        units.push_back( "GB" );
        units.push_back( "TB" );
        const unsigned ratio = 1024;
        unsigned i = 0;
        while ( ret > ratio && i < units.size() - 1 )
        {
            ret /= ratio;
            i++;
        }
        return ToString( ret ) + " " + units[i];
    }

    // Iterate through a directory and store everything found ( regular files, directories or any other special files ) in the input container
    void DirectoryIterate( const path& dirPath, vector<path>& dirContents )
    {
        if ( exists( dirPath ) && is_directory( dirPath ) )
        {
            copy( directory_iterator( dirPath ), directory_iterator(), back_inserter( dirContents ) );
        }
    }

    // Create a set of HTML files containing information about source directory's contents and store it in the destination directory, in a directory structure similar to the source directory
    // Returns the total size of the source directory
    long long Snapshot( const path& sourcePath, const path& destinationPath )
    {
        Log << sourcePath << endl;
        long long sourcePathSize = 0; // Total size of the source directory
        vector<path> dirContents, files, directories;
        try
        {
            DirectoryIterate( sourcePath, dirContents );
        }
        catch ( const filesystem_error& ex )
        {
            LogErrorStream << ex.what() << endl;
            return 0;
        }
        sort( dirContents.begin(), dirContents.end() ); // sort, since directory iteration is not ordered on some file systems
        for ( const auto& item : dirContents )
        {
            if ( is_directory( item ) )
            {
                directories.push_back( item );
            }
            else
            {
                files.push_back( item );
            }
        }

        path pwd = destinationPath / sourcePath.filename(); // Present working directory
        try
        {
            create_directory( pwd );
        }
        catch ( const filesystem_error& ex )
        {
            LogErrorStream << ex.what() << endl;
            return 0;
        }

        // Write the HTML file header.
        const path outFilePath = ( pwd / sourcePath.filename() ).string() + ".html";
        ofstream outFile( outFilePath.string() );
        if ( !outFile )
        {
            LogErrorStream << "Error creating " << absolute( outFilePath ) << " : " << strerror( errno ) << endl;
            return 0;
        }
        outFile << "<!DOCTYPE html>\n";
        outFile << "<meta charset=\"UTF-8\">\n";
        outFile << "<html>\n";
        outFile << "<title>" << sourcePath.filename() << "</title>\n";
        outFile << "<body>\n";

        // Write information about the files
        outFile << "<h1> Files </h1>\n";
        for ( const auto& file : files )
        {
            auto size = file_size( file );
            outFile << file.filename() << "----" << RoundSize( size ) << "<br>\n";
            sourcePathSize += size;
        }

        // Write information about the directories
        outFile << "<h1> Directories </h1>\n";
        for ( const auto& directory : directories )
        {
            long long size = Snapshot( sourcePath / directory.filename(), pwd );
            sourcePathSize += size;
            outFile << "<a href=\"" << ( directory.filename() / directory.filename() ).generic_string() << ".html\">" << directory.filename() << "</a>----" << RoundSize( size ) << "<br>\n";
        }

        // Write the footer
        outFile << "<br>\n";
        outFile << "<h3>Total directory size = " << RoundSize( sourcePathSize ) << "</h3><br>\n";
        outFile << "</body>\n";
        outFile << "</html>\n";
        return sourcePathSize;
    }

    int main()
    {
        string sourcePath, destinationPath;
        cout << "Enter source directory path -:\n";
        getline( cin, sourcePath );
        if ( !is_directory( sourcePath ) )
        {
            cout << absolute( sourcePath ) << " is not a directory !\n";
            return -1;
        }
        cout << "Enter destination directory path -:\n";
        getline( cin, destinationPath );
        if ( !is_directory( destinationPath ) )
        {
            cout << absolute( destinationPath ) << " is not a directory !\n";
            return -1;
        }
        cout << "\n";
        Log.open( LogFileName.string() );
        if ( !Log )
        {
            cerr << "Error creating " << absolute( LogFileName ) << " : " << strerror( errno ) << endl;
        }
        Snapshot( sourcePath, destinationPath );
        if ( Log )
        {
            if ( LogErrorStream.str().empty() )
            {
                cout << "The program ran without any errors.\n";
            }
            else
            {
                Log << "\nERRORS -:\n\n" << LogErrorStream.str() << endl;
                cout << "There were some errors during the execution of this program !\n\nCheck " << absolute( LogFileName ) << " for details.\n";
            }
        }
    }

Answer: I see a number of things that may help you improve your code.

Don't hardcode file names

The LogFileName might be something that a user of this program wants to place elsewhere, and it will fail entirely if run from a read-only directory such as a CD or DVD.

Prefer command line parameters to runtime interaction

There is not an easy way to use this program in a script because it requires a response to a prompt rather than allowing command line parameters to be passed. You can provide both, if you wish, by looking for required command line parameters and then only prompting for missing ones.

Consider using standard binary prefixes

The code calculates units in a ratio of 1024 but then uses the abbreviation "MB", which means 1000*1000 rather than 1024*1024 — the latter should be "MiB". See the Wikipedia article on Mebibyte for more details.

Avoid dynamically creating const structures

Within your RoundSize routine, the units vector is created and destroyed every time the function is called, which is not at all necessary. Further, since you're using C++11, you can simply create the vector using a std::initializer_list:

    static const vector<string> units{ "bytes", "KiB", "MiB", "GiB", "TiB" };

It would be even better as a constexpr, but we can't do that as written because std::vector has a non-trivial destructor.

Use for instead of while where appropriate

In that same RoundSize routine, the while loop would be much more idiomatic C++ if it were instead a for loop.

    unsigned i;
    for ( i = 0; ret > ratio && i < units.size() - 1; ++i )
    {
        ret /= ratio;
    }

Consider the performance cost of creating objects

I don't know if you had performance goals for your program, but it's useful to understand the performance characteristics of code you write.
Consider this line, also from RoundSize:

    return ToString( ret ) + " " + units[i];

The ToString() routine creates a string, then the space character is converted to a string and those two strings are concatenated, and then the [] operator is called on the vector and that string concatenated. Whew! One simple way to avoid some of that would be to simply have the space as part of the units.

Similarly, your sourcePath and destinationPath variables are declared as string objects but are used as path objects in most uses. This means that a conversion is done from string to path almost every time you use those variables. Better would be to declare both as path and then use explicit strings sourcePathStr and destinationPathStr for the few places you actually need strings.

Pass const references where possible

The ToString templated function should take const T &obj as its argument rather than const T obj to avoid making an unnecessary copy of the passed object.

Declare file scope items as static

Unless you intend to share the variables with other code in other files, your variables and functions should be declared static.

Catch exceptions

In a number of places within the code the underlying boost library call can throw an exception, but it is not caught by your program. An example of that is in this loop within Snapshot:

    for ( const auto& file : files )
    {
        auto size = file_size( file );
        outFile << file.filename() << "----" << RoundSize( size ) << "<br>\n";
        sourcePathSize += size;
    }

The call to file_size can throw an error and did when I tried it on my Linux machine. The issue was a symlink that pointed to a nonexistent file. You could either catch the exception using try...catch or use the form of file_size that takes an error_code as a parameter.

Separate data manipulation from output

The Snapshot function really does two things. It creates the directories and files vectors and then it outputs those data structures as HTML. Better would be to split that one long function into the two logical halves. Or, even better...

Use object-oriented programming

Your files and directories are both objects. Why not instead create a filesystem object? That way, the files and directories would be parts of the filesystem, and your Snapshot function would be better expressed as a constructor and an output member function. This would be much cleaner and also have the advantage of making it easier to create alternative output formats.

Use constant string concatenation

The Snapshot routine currently has these lines:

    outFile << "<!DOCTYPE html>\n";
    outFile << "<meta charset=\"UTF-8\">\n";
    outFile << "<html>\n";
    outFile << "<title>" << sourcePathName << "</title>\n";
    outFile << "<body>\n";

But you don't really need to do it that way, which potentially calls the << operator seven times. Instead, you could express the same thing as this:

    outFile << "<!DOCTYPE html>\n"
        "<meta charset=\"UTF-8\">\n"
        "<html>\n"
        "<title>" << sourcePathName << "</title>\n"
        "<body>\n";

This only calls << three times. The compiler automatically concatenates the string literals together.

Use const where practical

In the Snapshot routine, pwd can and should be declared as const.

Fix relative paths

When I enter /home/Edward/test as the source path, which does indeed exist, the program reports:

    "/home/Edward/test" is not a directory !

That is simply not correct, but I'm not sufficiently familiar with that portion of the boost libraries to troubleshoot it at the moment.

Reduce coupling

The use of global variables and passing the same variable to multiple routines both suggest excessive coupling, which makes programs more difficult to maintain. Reduce the number of global variables and coupling in general by carefully considering how to break the task up into smaller pieces and by using objects.
Check and replace special HTML characters with entities

If I have a directory named this&that, the & character should be converted into the HTML entity &amp; for correct output.

Additional HTML notes

There are a number of changes I'd recommend to the output HTML. Among them are to fix it (the <html> and <meta> tags are out of order). Also, consider changing the outputs to <table> rather than just lists. This would potentially allow nicer formatting via stylesheets.

Re-read the boost pages

As noted on the boost Filesystem home page, using

    #define BOOST_FILESYSTEM_NO_DEPRECATED

above the line where you include the boost library is highly recommended for new code. Doing so now will help you preserve functionality as you expand this program and as the boost library evolves.
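Pulling several of the suggestions above together (a table built once, a for loop instead of while, and the separator space folded into each unit string), a revised RoundSize might look like the sketch below. This is one possible rewrite using only the standard library, not a drop-in replacement for the reviewed program:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Sketch of RoundSize with the review suggestions applied:
// - the unit table is constructed once (static const), not on every call
// - the while loop becomes an idiomatic for loop
// - the leading space is part of each unit string, saving one concatenation
std::string RoundSize( long long size )
{
    static const std::vector<std::string> units{ " bytes", " KiB", " MiB", " GiB", " TiB" };
    const unsigned ratio = 1024;
    double ret = static_cast<double>( size );
    unsigned i = 0;
    for ( ; ret > ratio && i < units.size() - 1; ++i )
    {
        ret /= ratio;
    }
    std::ostringstream ss;
    ss << ret << units[i];   // one stream, one concatenation-free append
    return ss.str();
}
```

With default stream formatting, RoundSize(512) yields "512 bytes" and RoundSize(1536) yields "1.5 KiB", matching the original function's behavior.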
{ "domain": "codereview.stackexchange", "id": 12827, "tags": "c++, recursion, file-system, logging, boost" }
Recursive a+b function
Question: I am practising recursion and have written the small code for summing a+b as below:

    #include <stdio.h>

    int b = 6;
    static int cnt;

    void main(void)
    {
        int a = 9, sum;
        sum = succ(a);
        printf("Sum returned : %d\n", sum);
    }

    int succ(x)
    {
        cnt++;
        return (cnt <= b ? succ(x + 1) : x++);
    }

I am getting the results fine using this. Can anyone suggest optimization or better code for this? I also want to understand how stacking happens in this case for the variables cnt and x. Are they stacking in pairs like 0-9, 1-10, 2-11, 3-12, 4-13, then at last when cnt is 5 it returns x from the post-increment return?

Answer: First of all, main should be declared int main. Second, I would advise passing the cnt (which I have called iteration_number) as a parameter as well. Also, you should avoid using a global (b). If you define a helper function, that won't change the signature of your function:

    int add_helper(int value, int max, int iteration_number)
    {
        if (iteration_number >= max)
            return value;
        return add_helper(value + 1, max, iteration_number + 1);
    }

    int add(int a, int b)
    {
        return add_helper(a, b, 0);
    }

Note that I have refactored your b to be a parameter, which I have called max. The function can be used like this:

    int main(void)
    {
        int start = 9;  // Your a
        int count = 6;  // Your b
        int sum = add(start, count);
    }

(If you are using C89 instead of C99 or C11, there should be a return 0; at the end of main()).

However, the "good" way to define addition recursively is by increasing one number and decreasing the other:

    int add(int a, int b)
    {
        if (b == 0)
            return a;
        return add(a + 1, b - 1);
    }

Modifying the function to work with negative numbers is left as an exercise to the reader. (Note: Normally you want to subtract from the lowest number and add to the largest. I left that out to keep the example compact.)

I also want to understand how stacking happens in this case for variables cnt and x.

By "stacking", I assume you mean how the variables are pushed onto the stack as the function is called.
Your cnt has static storage duration, and is not pushed onto the stack at all. In my example, value, max and iteration_number will be pushed onto the stack for each call. (Exactly how is, as far as I know, implementation defined and depends on the calling convention in use.) In other words, assuming no optimization, the arguments will take iteration_number * sizeof(int) bytes on the stack per parameter.

If you want to keep track of the values, simply print them in the recursive function like this:

    int succ_helper(int value, int max, int iteration_number)
    {
        if (iteration_number >= max)
            return value;
        printf("%d: %d - %d\n", iteration_number, value, max);
        return succ_helper(value + 1, max, iteration_number + 1);
    }

Can anyone suggest optimization or better code for this?

    int sum = 9 + 6;

:-)
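One possible solution to the negative-number exercise mentioned in the answer is to step b toward zero from whichever side it starts on, adjusting a in the same direction. This is just a sketch in the same recursive style, not the only way to do it:

```cpp
#include <cassert>

// Recursive addition that also handles negative b:
// transfer one unit at a time from b to a until b reaches zero.
int add(int a, int b)
{
    if (b == 0)
        return a;                 // base case: nothing left to transfer
    if (b > 0)
        return add(a + 1, b - 1); // positive b: count down toward zero
    return add(a - 1, b + 1);     // negative b: count up toward zero
}
```

The recursion depth is |b|, so as the answer notes, in practice you would want the smaller-magnitude operand in the b position.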
{ "domain": "codereview.stackexchange", "id": 4235, "tags": "c, recursion" }
Condensed matter physics for mathematicians
Question: What is a good way for me to learn the basics of condensed matter physics? I'd like to get a better understanding of the fundamentals behind recent technological developments like OLEDs and applications of graphene, or get a grip on what the fractional quantum Hall effect is about. I don't expect to do research in these areas, but I would like to be able to have meaningful conversations with people working in them. My background is this: I have a PhD in pure mathematics, and I even studied a little solid state physics (e.g. Bloch waves), QFT and a first course in string theory, but a long time ago now. My mathematics is much stronger than my physics intuition, so I'm looking for a more mathematical treatment. My lab experience is precisely zero. I'm more interested in the theoretical ideas than precise details. Are there any books or downloadable lecture notes that might be recommended for someone in my position?

Answer: Here are some online lecture notes by Chetan Nayak which look pretty good:

Introductory: http://www.physics.ucla.edu/~nayak/solid_state.pdf
More advanced: http://www.physics.ucla.edu/~nayak/many_body.pdf
{ "domain": "physics.stackexchange", "id": 1858, "tags": "condensed-matter, resource-recommendations" }
Relation between order and stability in IIR filter
Question: What is the effect of increasing the order of an IIR filter? Does it also affect stability? If we have an IIR filter of order 6 and we change its order to 7, will there be any change in its stability?

Answer: Stability depends on the pole locations, so one cannot generally say that increasing the filter order does or doesn't affect stability. It depends on how you increase the filter order. What are the additional coefficients of the denominator? (Because the denominator polynomial of the transfer function determines the pole locations.)

Another issue is finite word length effects. If you quantize the filter coefficients, e.g., by using fixed point arithmetic (but of course also by using floating point), then higher filter orders tend to be more problematic than lower filter orders, because the chance that poles get very close to (or even outside) the unit circle through quantization becomes larger. But again, you cannot say anything in general unless you specify the filter's transfer function, the filter structure, and the type of quantization.
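The pole-location criterion is easy to check directly for a single second-order section: the denominator 1 + a1 z^-1 + a2 z^-2 has poles at the roots of z^2 + a1 z + a2, and the section is stable exactly when both roots lie strictly inside the unit circle. A minimal sketch (not from the answer; the coefficient values below are made up for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// Stability test for one biquad H(z) = 1 / (1 + a1*z^-1 + a2*z^-2):
// compute both poles via the quadratic formula and check |p| < 1.
bool biquadStable(double a1, double a2)
{
    // Poles are the roots of z^2 + a1*z + a2 = 0; the discriminant may
    // be negative, so work in the complex plane throughout.
    std::complex<double> disc = std::sqrt(std::complex<double>(a1 * a1 - 4.0 * a2, 0.0));
    std::complex<double> p1 = (-a1 + disc) / 2.0;
    std::complex<double> p2 = (-a1 - disc) / 2.0;
    return std::abs(p1) < 1.0 && std::abs(p2) < 1.0;
}
```

For a higher-order filter realized as cascaded biquads, checking each section this way is also numerically safer than root-finding on the full high-order denominator, which connects to the quantization point above.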
{ "domain": "dsp.stackexchange", "id": 7097, "tags": "filters, infinite-impulse-response, stability" }
Discrete Fourier Transform in Signal Processing - Interpreting graphs of transformed signals
Question: Given above are the real parts of the signals I to IV. Which of the following statements are correct?

(i): Signal III is the result of the discrete Fourier transform of signal I. The associated imaginary part is 0.
(ii): Signal III is the frequency spectrum of an oscillation with a single frequency.
(iii): Signal IV is the frequency spectrum of an oscillation with 2 different frequencies.
(iv): Signal IV is the result of the discrete Fourier transform of Signal III. The associated imaginary part is 0.

Answer: (iii) seems to be correct: a spectrum of two frequencies. The other choices can be checked as follows.

(i) The Fourier transform of $f(t=0)=1$ is 1, so it cannot be graph (iii).

(ii) Signal (iii) is a sine wave in the time domain, not a frequency spectrum.

(iv) It is the Fourier transform of the sine wave (iii), which is $$f(t)=0.5\cdot \sin(2\pi Ft)$$ and its transform is $$\frac{\pi}{2i}\left[\delta\left(\omega-2\pi F\right)-\delta\left(\omega+2\pi F\right)\right]$$ The imaginary part is not zero.
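The point behind (iv) — that a pure sine transforms into purely imaginary spectral lines — can be checked numerically with a brute-force DFT. This sketch is not part of the original exercise; N = 8 and the one-cycle sine are arbitrary choices for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Naive O(N^2) DFT: X[m] = sum_k x[k] * exp(-i*2*pi*k*m/N).
std::vector< std::complex<double> > dft(const std::vector<double>& x)
{
    const double pi = 3.14159265358979323846;
    const std::size_t N = x.size();
    std::vector< std::complex<double> > X(N);
    for (std::size_t m = 0; m < N; ++m)
    {
        for (std::size_t k = 0; k < N; ++k)
        {
            const double angle = -2.0 * pi * static_cast<double>(k * m) / static_cast<double>(N);
            X[m] += x[k] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
    }
    return X;
}
```

For x[k] = sin(2*pi*k/N), the only nonzero bins are m = 1 and m = N-1, with X[1] = -i*N/2: the real part vanishes and the imaginary part does not, so a statement like "the associated imaginary part is 0" fails for a sine input.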
{ "domain": "dsp.stackexchange", "id": 8318, "tags": "signal-analysis, fourier-transform, homework, fourier" }