Do orbital resonances always form naturally?
Question: For example, if I throw two planets to orbit a star at random direction, would they form an orbital resonance? Answer: If the question is "if I throw two planets to orbit a star at random direction, would they form an orbital resonance?" -- then in general, no. A resonance is an integral ratio (1/1, 2/1, 3/5, etc.) between the periods of motion of objects -- i.e., the ratio of their periods forms a rational number. Formally speaking, the odds of getting an integral ratio (let alone a strong, low-order ratio, since those are the dynamically interesting ones) if you set the system up "randomly" should be infinitesimal, because irrational numbers are (infinitely) more abundant than rationals. However, if the orbits of one or both of the planets can change over time, then the ratio between their periods changes, and they can end up in a resonance. (Which is maybe answering the title question.) How often this happens depends on whether the planets happen to start near a strong resonance, and on how rapidly the orbits change. (If the orbit of a planet changes slowly, then it won't encounter new resonances very often; on the other hand, rapid orbital change can overwhelm the effect of weak resonances, so that the planet passes through the resonance without being caught.) For example, it's thought that Neptune and Pluto were originally not in resonance; but the gradual outward migration of Neptune (due to various gravitational encounters between planetesimals and the giant planets) changed its orbital period and meant that eventually it reached a 2/3 resonance with Pluto, and Pluto was "captured" by the resonance, after which it stayed in resonance with Neptune. The vast majority of objects in the Solar System are not in resonance with anything else, which is perhaps another way of answering your question. (I.e., in practice it doesn't happen very often.)
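The "rational ratio" idea above is easy to check numerically for the Neptune-Pluto pair mentioned in the answer. A quick sketch; the orbital periods are approximate textbook values, not taken from the question:

```python
from fractions import Fraction

# Approximate sidereal orbital periods in years (textbook values)
neptune_period = 164.8
pluto_period = 247.94

# The ratio of the periods is close to a low-order rational number,
# which is exactly what "being in resonance" means.
ratio = pluto_period / neptune_period            # ~1.504
nearest = Fraction(ratio).limit_denominator(10)  # nearest low-order fraction
```

Here `nearest` comes out as 3/2: Pluto completes two orbits for every three of Neptune's, which is the 2/3 resonance described in the answer.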
{ "domain": "astronomy.stackexchange", "id": 1171, "tags": "orbit, celestial-mechanics, orbital-resonance" }
Use of basis set in DFT (Density Functional Theory)
Question: Basis sets are used to guess the electronic wave functions for Hartree-Fock or similar methods, which is quite legitimate since these methods deal with the wave function of each and every electron. Density functional theory, on the other hand, uses the electron density at every point of space for optimization and the calculation of properties. This has led to two doubts which I want to clarify: Is the basis set used to estimate the initial electron density of the system? If so, how do the basis functions of an STO or GTF basis set change in order to represent the electron density? If not, what exactly is the role of the basis set in a DFT calculation? In HF SCF, the coefficients of the basis functions (for basis sets with GTFs) change along with the nuclear geometry while optimising the structure. Does optimisation with DFT follow a similar procedure? P.S.: Please do not suggest this question to me, since it is not the answer I am searching for. Answer: This answer only deals with the most common variety of Density Functional Theory, namely Kohn-Sham DFT. This is what most people mean by "DFT", but, as noted in the comments, things such as orbital-free DFT exist. Kohn-Sham DFT was created to solve a historical problem of DFT: the electronic kinetic energy term (as used in the Thomas-Fermi model) was not accurate enough. Kohn and Sham proposed to make use of what is typically called a "fictitious, non-interacting reference", that is, a wave function whose main purpose is to yield a density to be used in all other terms (nucleus-electron attraction, Coulomb, exchange, and correlation energy/potential) and whose secondary function is to provide the kinetic energy, which is accurate from wave-function theory. This wave function works just like a Slater determinant and is called the Kohn-Sham determinant. See this question for pointers on where to read more: Equivalent of Szabo and Ostlund book for DFT.
As often in DFT, practicality showed a way that was justified later. When implementing this idea, one learns that one can basically take a HF code and replace the exchange term by the exchange-correlation (XC) potential (which is projected back onto the AO basis set). The potential needs to be evaluated numerically on a grid, a procedure that also yields the XC energy for almost all choices of density functional. Thus one obtains the energy and the KS operator (the equivalent of the Fock operator) and can perform an SCF procedure in a given basis set. For the mathematical details, see e.g. J. A. Pople, P. M. W. Gill, B. G. Johnson, Chem. Phys. Lett. 199, 557 (1992). To answer your questions in this context: 1) The initial density depends on the guess, which is a whole different can of worms, but the same is true for HF calculations. (Procedures based on tabulated atomic densities exist, but the initial-guess business is a bit of a dark art, and I don't think this is what you meant by your question.) The basis set and its MO-like coefficients (defining the KS determinant) are used on every iteration to yield the density. 2) Naturally, yes. The MO coefficients of the basis functions change when the geometry changes for a given system, just like in HF. The procedure and workable algorithms are very similar to HF (because it all is).
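To make point 1 concrete (the basis functions and the MO-like coefficients together yield the density), here is a toy one-dimensional sketch. Everything in it, the two Gaussian "basis functions", the single doubly occupied orbital, and the grid, is an illustrative assumption, not part of any real DFT code:

```python
import math

# Two normalized 1D Gaussian "basis functions", centred far apart so they
# are numerically orthonormal.
def phi1(x):
    return (2 / math.pi) ** 0.25 * math.exp(-x * x)

def phi2(x):
    return (2 / math.pi) ** 0.25 * math.exp(-(x - 10.0) ** 2)

# One doubly occupied molecular orbital: psi = c1*phi1 + c2*phi2
c1 = c2 = 1 / math.sqrt(2)

def density(x):
    psi = c1 * phi1(x) + c2 * phi2(x)
    return 2.0 * psi * psi  # occupation number 2

# Integrating the density on a grid recovers the electron count, which is
# the same kind of quadrature used to evaluate the XC potential in practice.
dx = 0.01
n_electrons = sum(density(-10 + i * dx) for i in range(int(30 / dx))) * dx
```

The grid integral of the density comes out as 2, the number of electrons placed in the orbital, illustrating how the coefficients defining the KS determinant produce the density on every SCF iteration.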
{ "domain": "chemistry.stackexchange", "id": 9157, "tags": "quantum-chemistry, computational-chemistry, theoretical-chemistry, density-functional-theory, basis-set" }
Can I teleport a string of 0s and 1s?
Question: I have recently started with quantum computing and created a quantum teleportation circuit to transmit a qubit state from q_0 to q_2 using Qiskit. I understand that I can transmit any state information from q_0 to q_2. Is it then fair to expect that I can also transmit a Morse-code-like string of 0s and 1s (say 1001)? This is the circuit I built after watching/reading Qiskit tutorials. Answer: As you noticed, the first thing you do is to put $q_0$ into the state you want to teleport to $q_2$. For instance, if you want to teleport $|1\rangle$ to $q_2$ then you would first apply the $X$ gate to flip $q_0$ to the state $|1\rangle$. This is because a quantum computer usually starts in the state $|000\cdots0\rangle$. Thus, if you want to teleport $1$ then apply the $X$ gate to $q_0$ in the beginning; if you want to teleport $0$ then do nothing. So if you insist on designing a program in Qiskit to generate a quantum circuit to teleport a Morse code of some sort, you can do it as follows:

%matplotlib inline
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit import execute, BasicAer, IBMQ
from numpy import pi

def teleported_circuit(code):
    qreg_q = QuantumRegister(3, 'q')
    creg_c = ClassicalRegister(1, 'c')
    circuit = QuantumCircuit(qreg_q, creg_c)
    if code == 1:
        circuit.x(qreg_q[0])
    circuit.barrier(range(3))
    circuit.h(qreg_q[1])
    circuit.cx(qreg_q[1], qreg_q[2])
    circuit.cx(qreg_q[0], qreg_q[1])
    circuit.h(qreg_q[0])
    circuit.barrier(range(3))
    circuit.cx(qreg_q[1], qreg_q[2])
    circuit.cz(qreg_q[0], qreg_q[2])
    circuit.measure(qreg_q[2], creg_c[0])
    backend = BasicAer.get_backend('statevector_simulator')
    job = execute(circuit, backend, shots=1)
    return job.result().get_counts()

#### Example ####
code_string = [1, 0, 0, 1, 1, 1]
teleported_code = [teleported_circuit(code) for code in code_string]
print('Here is your teleported code:', teleported_code)

The output would be:

Here is your teleported code: [{'1': 1}, {'0': 1}, {'0': 1}, {'1': 1}, {'1': 1}, {'1': 1}]
{ "domain": "quantumcomputing.stackexchange", "id": 2175, "tags": "qiskit, programming, circuit-construction, teleportation" }
How to predict one equilibrium constant given two others?
Question: $$\begin{alignat}{2} \ce {CO2 (g) + 3H2 (g) \;& <=> CH3OH (g) + H2O (g)}\qquad&&{k_1=\;?} \\ \ce {CO (g) + H2O (g) \;& <=> CO2(g) + H2(g)}\qquad&&{k_2= 1.0\times10^5} \\ \ce {CO (g) + 2H2 (g) \;& <=> CH3OH(g)}\qquad&&{k_3= 1.4\times10^7} \end{alignat}$$ What is the value of $k_1$? I have tried reversing equation two and then multiplying $k_2$ and $k_3$ together, but I am not sure how to work this problem out. Answer: Remember that when you reverse the reaction the equilibrium constant changes. For the general gas-phase reaction $\ce{A(g) + B(g) <=> C(g) + D(g)}$ the equilibrium constant expression is $$K_\text{f} = \frac{p_\ce{C} p_\ce{D} }{p_\ce{A} p_\ce{B}}$$ (Strictly, it's activities, not partial pressures, but the principle is the same.) The reversed reaction $\ce{C(g) + D(g) <=> A(g) + B(g)}$ has the equilibrium constant expression $$K_\text{r} = \frac{p_\ce{A} p_\ce{B} }{p_\ce{C} p_\ce{D}} = \frac{1}{K_\text{f}}$$ You are correct in multiplying the equilibrium constants to get the equilibrium constant for the combined reaction.
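Carrying out the manipulation the asker describes numerically: reaction 1 is the reverse of reaction 2 added to reaction 3, so $k_1 = (1/k_2)\,k_3$. A one-line check with the values from the question:

```python
k2 = 1.0e5   # CO + H2O <=> CO2 + H2
k3 = 1.4e7   # CO + 2 H2 <=> CH3OH

# Reaction 1 = reverse(reaction 2) + reaction 3, so its constant is
# the product of the reversed constant (1/k2) and k3.
k1 = (1 / k2) * k3   # 140.0
```

Reversing a reaction inverts its equilibrium constant, and adding reactions multiplies their constants, which is exactly the combination described in the answer.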
{ "domain": "chemistry.stackexchange", "id": 4246, "tags": "physical-chemistry, equilibrium" }
How to extract the map matrix after subscribing to /map of map_server?
Question: I need to run Dijkstra's algorithm on this map. So till now, I've used map_server and got the /map and /map_metadata topics. I'm not quite sure how to proceed further. This is what I've done so far.

#include <ros/ros.h>
#include <nav_msgs/OccupancyGrid.h>

void dijkstra(const nav_msgs::OccupancyGrid::ConstPtr& map)
{
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "shortestpath");
    ros::NodeHandle n("~");
    ros::Subscriber sub = n.subscribe("/map", 1000, dijkstra);
    ros::spin();
    return 0;
}

/map gives a 1D array so I need to use it and convert it into a 2D matrix. Originally posted by Parth2851 on ROS Answers with karma: 63 on 2019-11-07 Post score: 0 Original comments Comment by Choco93 on 2019-11-07: Why don't you use amcl and move_base? What is it that you want to do exactly? Answer: Hi Parth2851, As you mentioned, the map message consists of a 1D array containing all the information. If you want to convert this to a 2D matrix, you can extract the positions using the map info metadata, where the height, width and resolution of the map are provided. A simple function to extract a given position (in pixels) of the map can look like this:

int getMapValue(int height, int width, const nav_msgs::OccupancyGrid& map)
{
    int position = map.info.width * height + width;  // Finds the position in the data vector.
    return map.data[position];                       // Gets the value from the position.
}

If you want to extract a position in meters, then you should take into account the resolution of the map, also available in the map info message. Regards, Originally posted by Mario Garzon with karma: 802 on 2019-11-07 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Parth2851 on 2019-11-07: Hi, thanks for the reply. How about making a new 2D array from the given array, since it would be easier to visualise that way? Is there a downside to that? int get2Dmap(int height, int width, const nav_msgs::OccupancyGrid& map) { int k = 0, a[height][width]; for(int i = 0; i Please let me know if I've made an error.
Comment by Mario Garzon on 2019-11-08: well, the code seems right, although I don't know if that's the most efficient way to visualize the map. Usually ROS maps are very large (the default resolution is about 5 cm), so the resulting 2D array will be very large. Depending on what your objective is, it may be more effective to work directly with the OccupancyGrids. Comment by Parth2851 on 2019-11-09: Yeah, makes sense. Thanks a lot!
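For reference, the row-major indexing in getMapValue and the commenter's 2D-array idea can be sketched in plain Python. The tiny 4x3 map below is a made-up stand-in; a real nav_msgs/OccupancyGrid works the same way with map.info.width, map.info.height and map.data:

```python
# Hypothetical toy map: 4 cells wide, 3 cells tall
width, height = 4, 3
data = list(range(width * height))  # stand-in for the 1D map.data array

# Same row-major indexing as the answer's getMapValue
def get_map_value(row, col):
    return data[width * row + col]

# The full 2D matrix the commenter asked about
grid = [[get_map_value(row, col) for col in range(width)]
        for row in range(height)]
```

As the answer notes, building the full matrix costs memory on large maps, so indexing into the flat array on demand is often preferable.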
{ "domain": "robotics.stackexchange", "id": 33982, "tags": "navigation, mapping, ros-melodic, map-server" }
Survival on a rogue planet
Question: Are there any planets not orbiting a celestial body which can support life despite the temperature? Also, what is the absolute minimum temperature that life can survive in? Answer: IMHO, I think it is possible on any planet with a hot core. That means that if there is life on some of these planets, it would be in the subterranean part of the planet: underneath deep-frozen oceans, in underground seas and caves. It also means that the life we would find on such planets would likely be luminescent, probably looking much like the animals we find in the deep sea here on Earth.
{ "domain": "astronomy.stackexchange", "id": 1646, "tags": "rogue-planet" }
Speed to throw something into space
Question: I'm new here because I think I need some help. Our teacher gave us the task to find out how fast you would have to throw something upwards (friction and mass are not considered) to get it to leave the atmosphere. So my first approach (I'm not really good at physics, don't judge me) would be just to use the formula for vertical throws that calculates height dependent on ground speed and then fill in 100 km, about the distance between Earth's surface and the end of the atmosphere. Leaving out mass and friction would make it really easy then, but I know that Earth's gravitational force gets weaker the farther something moves away from Earth. So I would just use a gravity formula and a standard vertical-throw formula, but is there anything else I should consider, or am I thinking completely wrong? Answer: If you are going to ignore friction (atmospheric drag) but not how gravitational acceleration varies with distance from the Earth, use Conservation of Energy. The potential energy $U$ of an object with mass $m$ in the central gravitational field of an object with mass $M$ is given by: $$U(r)=-\frac{mMG}{r},$$ with $r$ the distance from the centre of the field. By throwing up an object, kinetic energy $K$ will be converted to potential energy: $$K=\frac12 mv^2,$$ where $v$ is the speed of the object. Conservation of Energy tells us that the kinetic energy lost equals the potential energy gained: $$\Delta K+\Delta U=0\tag{1}$$ Say we throw the object from $r_0$ (surface of the Earth) to $r_1$, with starting velocity $v_0$ and end velocity $v$; equation $(1)$ is then used to calculate $v_0$ as a function of $v$, $r_1$ and $r_0$. Note that neglecting air drag will underestimate $v_0$, because work (kinetic energy) needs to be done against the drag force.
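Plugging numbers into the answer's energy balance, with end velocity $v=0$ at 100 km, gives a rough figure. The values of GM and Earth's radius below are standard reference numbers I've assumed; they are not given in the question:

```python
import math

GM = 3.986e14      # m^3/s^2, Earth's standard gravitational parameter
r0 = 6.371e6       # m, Earth's mean radius (launch point)
r1 = r0 + 100e3    # m, 100 km up, the "end of atmosphere" in the question

# (1/2) m v0^2 = G M m (1/r0 - 1/r1), solved for v0
v0 = math.sqrt(2 * GM * (1 / r0 - 1 / r1))   # roughly 1.4 km/s
```

This is well below Earth's escape velocity (about 11.2 km/s), since the object only needs to coast up 100 km, and, as the answer notes, real air drag would push the required speed higher still.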
{ "domain": "physics.stackexchange", "id": 36823, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, escape-velocity" }
Single-photon Mach-Zehnder interferometer with complete destructive interference
Question: This article describes an experiment where you can set up a Mach-Zehnder interferometer, send one photon through at a time, and see complete destructive interference at one of the detectors (one of the detectors detects no photons, the other detects all the photons). I'm well aware that it is not a scientific article, but the implications of that experiment have informed my ideas about quantum physics for a long time. However, I've looked and never seen any actual paper in a trusted physics journal that unambiguously describes this experiment (along with the one in figure 3, preferably, where he blocks a path and the interference disappears) and confirms that this does indeed work even with one photon at a time. I'd like to have confirmation and a link to such a published piece if possible. I'd REALLY like to know if I could affordably recreate this experiment myself somehow. But that's less important than just a published piece in a respectable physics journal. Answer: Not sure if this is what you are looking for, but check it out: Stable single-photon interference in a 1 km fiber-optic Mach–Zehnder interferometer with continuous phase adjustment https://arxiv.org/abs/1104.2866
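For intuition about the single-photon statistics being asked about: in an ideal balanced Mach-Zehnder interferometer, the detection probability at each output port depends only on the relative phase between the arms. A minimal sketch (which port is the dark one at zero phase depends on the beam-splitter sign conventions, so take the labels as an assumption):

```python
import math

# Detection probabilities at the two output ports of an ideal, balanced
# Mach-Zehnder interferometer, for a single photon, as a function of the
# relative phase phi between the two arms.
def detector_probs(phi):
    p_bright = math.cos(phi / 2) ** 2
    return p_bright, 1.0 - p_bright

# At zero relative phase, interference is completely destructive at one
# detector: every photon, sent one at a time, arrives at the other.
p_bright, p_dark = detector_probs(0.0)
```

Blocking one arm removes the interference entirely, so the photons that do reach the final beam splitter split evenly between the two detectors, matching the figure-3 behaviour the question describes.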
{ "domain": "physics.stackexchange", "id": 90973, "tags": "quantum-mechanics, interference, interferometry" }
Loading data from a text file into the database as fast as possible
Question: I'm building a Django (1.8.12) application that parses a .bc3 file (Standard Interchange Format for Databases of Construction and Real Estate) and loads all the data into the database (PostgreSQL 9.3.9). A .bc3 file looks like this, and a common one has more than 2000 concepts (those records that start with ~C). To sum up, the user uploads the file and the webapp in a short period of time is able to insert the data into the database to start working on it.

Models

class Concept(models.Model):
    code = models.CharField(_('code'), max_length=20, primary_key=True)
    root = models.BooleanField(_('is it root'), default=False)
    chapter = models.BooleanField(_('is it chapter'), default=False)
    parent = models.BooleanField(_('is it parent'), default=False)
    unit = models.CharField(_('unit'), blank=True, max_length=3)
    summary = models.CharField(_('summary'), blank=True, max_length=100)
    price = models.DecimalField(_('price'), max_digits=12, decimal_places=3, null=True, blank=True)
    date = models.DateField(_('creation date'), null=True, blank=True)
    concept_type = models.CharField(_('concept type'), max_length=3, blank=True)

    def __str__(self):
        return '%s: %s' % (self.code, self.summary)

class Deco(models.Model):
    parent_concept = models.ForeignKey(Concept, null=True, blank=True, related_name='decos')
    concept = models.ForeignKey(Concept, null=True, blank=True)
    factor = models.DecimalField(max_digits=12, decimal_places=3, default=Decimal('0.000'))
    efficiency = models.DecimalField(max_digits=12, decimal_places=3, default=Decimal('0.000'))

    def __str__(self):
        return '%s: %s' % (self.parent_concept, self.concept)

bc3parser.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Parses bc3 files and insert all the data into the database."""
import re

from enkendas.models import Version, Concept, Deco, Text

from .utils import optional_codes, parse_dates

# regex stuff
# parsers stuff

concepts = {}
decos = {}
# decos = {'PER02': [('Qexcav', '1', '231.13'), ('Qzanj', '1', '34.5'),
#                    ('Qexcav2', '1', '19.07'), ('Qrelltras', '1', '19.07')],
#          ...
#          'Qexcav': [('MMMT.3c', '1', '0.045'), ('O01OA070', '1', '0.054'),
#                     ('M07CB030', '1', '0.036'), ('%0300', '1', '0.03')]}

def dispatch_record(record):
    """
    Dispatch every record.

    Check the first character of the record and send it to the proper function.
    """
    if record.startswith('D'):
        parse_decomp(record)
    elif record.startswith('V'):
        parse_version(record)
    elif record.startswith('C'):
        parse_concept(record)
    elif record.startswith('T'):
        parse_text(record)
    else:
        pass

def parse_file(file):
    """
    Parse the whole file.

    file is a generator returned by file.chunks(chunk_size=80000) in views.py.
    """
    while True:
        try:
            record = ''
            incomplete_record = ''
            # Iterates over the file sent by the user.
            byte_string = next(file)
            byte_stripped_string = byte_string.strip()
            string = byte_stripped_string.decode(encoding='ISO-8859-1')
            # List of records.
            durty_strings_list = string.split('~')
            # Check if one chunk in chunks is complete.
            if durty_strings_list[-1] != '' and incomplete_record != '':
                incomplete_record = incomplete_record + durty_strings_list.pop(-1)
                dispatch_record(incomplete_record)
                incomplete_record = ''
            elif durty_strings_list[-1] != '' and incomplete_record == '':
                incomplete_record = durty_strings_list.pop(-1)
            for durty_string in durty_strings_list:
                stripped_string = durty_string.strip()
                if durty_string == '':
                    record = record + ''
                # TODO: I didn't create a regex for 'M' and 'E' records yet.
                elif durty_string[0] == 'M' or durty_string[0] == 'E':
                    continue
                if record != '':
                    # Dispatch the previous record.
                    dispatch_record(record)
                    # Reset the used record.
                    record = ''
                    # Assign the current record.
                    record = stripped_string
                else:
                    record = record + stripped_string
        except StopIteration as e:
            dispatch_record(record)
            break

    concept_instances = []
    for key_code, data in concepts.items():
        code = key_code
        root = chapter = parent = False
        if len(key_code) > 2 and key_code[-2:] == '##':
            root = True
            code = key_code[:-2]
        elif len(key_code) > 1 and key_code[-1:] == '#':
            chapter = True
            code = key_code[:-1]
        if code in decos:
            parent = True
        concept = Concept(code=code, root=root, chapter=chapter, parent=parent,
                          unit=data['unit'], summary=data['summary'],
                          price=data['price'], date=data['date'],
                          concept_type=data['concept_type'])
        concept_instances.append(concept)
    Concept.objects.bulk_create(concept_instances)

    deco_instances = []
    cobjs_storage = {}
    for concept in Concept.objects.all():
        if concept.parent is False:
            continue
        dec = decos[concept.code]
        for child, factor, efficiency in dec:
            if child == '':
                continue
            if factor == '':
                factor = '0.000'
            if efficiency == '':
                efficiency = '0.000'
            # To avoid extra queries.
            if child in cobjs_storage:
                cobj = cobjs_storage[child]
            else:
                cobj = Concept.objects.get(code=child)
                cobjs_storage.update({child: cobj})
            deco = Deco(parent_concept=concept, concept=cobj,
                        factor=float(factor), efficiency=float(efficiency))
            deco_instances.append(deco)
        decos.pop(concept.code, None)
    Deco.objects.bulk_create(deco_instances)

Process

1. Parsing the .bc3 file uploaded by the user. Everything is working as expected.
2. Instantiating the Concept model. I save the instances in concept_instances = [c1, c2, c3... cn].
3. Inserting Concept instances into the database. In order to speed up the load I use the bulk_create(concept_instances) method.
4. Instantiating the Deco model. I save the instances in deco_instances = [d1, d2, d3... dn]. But, to do that I need to retrieve each Concept object from the database because of the parent_concept and concept fields.
5. Inserting Deco instances into the database. As before, to speed up the load I use the bulk_create(deco_instances) method.
Bottleneck

The whole process on the .bc3 file mentioned earlier is taking too long (95230 ms) because I'm doing 1278 SQL queries, but inserting 1276 Concept objects just takes 693 ms and 2826 Deco objects 289 ms.

Research

I read some Stack Overflow questions and the Django official documentation about Database access optimization, but I didn't find any useful improvement for this case.

My Assumption

I think this line of code is the main problem, but in my opinion it is absolutely necessary.

Questions

Is it possible to create Deco objects without getting every Concept object? Is running tasks in the background the only approach to follow? Am I missing something?

Answer: An important aspect when doing optimisations is profiling. You should really start with that instead of asking random strangers on the internet. Anyhow, let me take a quick look.

Filtering

for concept in Concept.objects.all():
    if concept.parent is False:
        continue
    ...

This seems a bit redundant, why not just

for concept in Concept.objects.filter(parent=True):
    ...

Many queries

I took a close look at the line you indicated might be troublesome. You have not profiled (I assume), but it looks suspicious because it performs a query in a loop. So basically the code looks like this:

for concept in Concept.objects.all():
    ...
    for child, factor, efficiency in dec:
        ...
        if child in cobjs_storage:
            cobj = cobjs_storage[child]
        else:
            cobj = Concept.objects.get(code=child)
            cobjs_storage.update({child: cobj})
        ...

So, ideally, you'd want to make sure that cobjs_storage contains as much as possible. One way to do that would be to add the following before the first for loop above:

# Pre-fetch required objects.
needs_prefetch = set(child for dec in decos.values() for child, __, __ in dec)
for cobj in Concept.objects.filter(code__in=needs_prefetch):
    cobjs_storage[cobj.code] = cobj

It's a bit hacky, perhaps, but it should lower the number of queries, and as such improve results.
[edit: I just found a better way] Using in_bulk (https://docs.djangoproject.com/en/1.9/ref/models/querysets/#django.db.models.query.QuerySet.in_bulk) you can rewrite it a bit:

# Pre-fetch required objects.
needs_prefetch = set(child for dec in decos.values() for child, __, __ in dec)
cobjs_storage.update(Concept.objects.in_bulk(needs_prefetch))

Also, make sure to add any created Concept object to cobjs_storage after creating it, so that you don't incur a database hit for that.

Dispatch

def dispatch_record(record):
    """
    Dispatch every record.

    Check the first character of the record and send it to the proper function.
    """
    if record.startswith('D'):
        parse_decomp(record)
    elif record.startswith('V'):
        parse_version(record)
    elif record.startswith('C'):
        parse_concept(record)
    elif record.startswith('T'):
        parse_text(record)
    else:
        pass

This is not as expensive as a database hit, but it's still someplace that might need some optimisation, or at least a bit of refactoring to make it cleaner.

def dispatch_record(record):
    dispatch_table = {
        'D': parse_decomp,
        'V': parse_version,
        'C': parse_concept,
        'T': parse_text,
    }
    try:
        parser = dispatch_table[record[0]]
    except (IndexError, KeyError):
        return
    parser(record)

This makes it easier to add extra parsers, and .startswith() is no longer called multiple times.

Parsing files

The following piece of code is quite suspect.

while True:
    try:
        record = ''
        incomplete_record = ''
        # Iterates over the file sent by the user.
        byte_string = next(file)
        byte_stripped_string = byte_string.strip()
        string = byte_stripped_string.decode(encoding='ISO-8859-1')
        # List of records.
        durty_strings_list = string.split('~')
        # Check if one chunk in chunks is complete.
        if durty_strings_list[-1] != '' and incomplete_record != '':
            incomplete_record = incomplete_record + durty_strings_list.pop(-1)
            dispatch_record(incomplete_record)
            incomplete_record = ''
        elif durty_strings_list[-1] != '' and incomplete_record == '':
            incomplete_record = durty_strings_list.pop(-1)
        for durty_string in durty_strings_list:
            stripped_string = durty_string.strip()
            if durty_string == '':
                record = record + ''
            # TODO: I didn't create a regex for 'M' and 'E' records yet.
            elif durty_string[0] == 'M' or durty_string[0] == 'E':
                continue
            if record != '':
                # Dispatch the previous record.
                dispatch_record(record)
                # Reset the used record.
                record = ''
                # Assign the current record.
                record = stripped_string
            else:
                record = record + stripped_string
    except StopIteration as e:
        dispatch_record(record)
        break

First of all, it's quite long, but there is one thing I would very much like to comment on. If possible, do not use while loops when a for loop suffices. But there is actually a lot more going on. Let me walk you through a few refactorings I'd like to suggest. First, the code just before the except StopIteration:

if record != '':
    # Dispatch the previous record.
    dispatch_record(record)
    # Reset the used record.
    record = ''
    # Assign the current record.
    record = stripped_string
else:
    record = record + stripped_string

In the else, you know record == '', and '' + stripped_string is always the same as stripped_string.

if record != '':
    # Dispatch the previous record.
    dispatch_record(record)
    # Reset the used record.
    record = ''
    # Assign the current record.
    record = stripped_string
else:
    record = stripped_string

In both branches, the last line is the same, so we can move it out, and drop the else which is now empty.

if record != '':
    # Dispatch the previous record.
    dispatch_record(record)
    # Reset the used record.
    record = ''
# Assign the current record.
record = stripped_string

This makes the record = '' in the if redundant.

if record != '':
    # Dispatch the previous record.
    dispatch_record(record)
# Assign the current record.
record = stripped_string

Already so much cleaner.

for durty_string in durty_strings_list:
    stripped_string = durty_string.strip()
    if durty_string == '':
        record = record + ''
    # TODO: I didn't create a regex for 'M' and 'E' records yet.
    elif durty_string[0] == 'M' or durty_string[0] == 'E':
        continue
    if record != '':
        # Dispatch the previous record.
        dispatch_record(record)
    # Assign the current record.
    record = stripped_string

The record = record + '' is a bit useless. Because we already know it's a string, we can modify the elif a bit.

for durty_string in durty_strings_list:
    stripped_string = durty_string.strip()
    if durty_string and durty_string[0] == 'M' or durty_string[0] == 'E':
        continue
    if record != '':
        # Dispatch the previous record.
        dispatch_record(record)
    # Assign the current record.
    record = stripped_string

(I broke PEP8 here, but I'm going to fix that now.)

for durty_string in durty_strings_list:
    stripped_string = durty_string.strip()
    if durty_string and durty_string[0] in ('M', 'E'):
        continue
    if record != '':
        # Dispatch the previous record.
        dispatch_record(record)
    # Assign the current record.
    record = stripped_string

Marginally better. I have a bit more overview now, and I'd really like to get rid of the try/except, so let me see what's necessary for that.

while True:
    try:
        ...1
        byte_string = next(file)
        ...2
    except StopIteration as e:
        dispatch_record(record)
        break

I did the hard work of looking over the rest of the code (the ...1 and ...2), and I feel confident that those parts won't throw a StopIteration. So let's factor those out.

while True:
    ...1
    try:
        byte_string = next(file)
    except StopIteration as e:
        dispatch_record(record)
        break
    ...2

Now, to continue I need to elaborate on ...1 a bit, filling it in again:

while True:
    record = ''
    incomplete_record = ''
    try:
        # Iterates over the file sent by the user.
        byte_string = next(file)
    except StopIteration as e:
        dispatch_record(record)
        break
    ...2

We can move incomplete_record to after the try/except.

while True:
    record = ''
    try:
        # Iterates over the file sent by the user.
        byte_string = next(file)
    except StopIteration as e:
        dispatch_record(record)
        break
    incomplete_record = ''
    ...2

I'd like to do the same for the record, but it's used in the except clause. But, it's still '' at that point, so let's cheat a bit and substitute that by hand.

while True:
    try:
        # Iterates over the file sent by the user.
        byte_string = next(file)
    except StopIteration as e:
        dispatch_record('')
        break
    record = ''
    incomplete_record = ''
    ...2

Looking at dispatch_record we see that '' is handled as pass. So it does nothing. Let's remove that call.

while True:
    try:
        # Iterates over the file sent by the user.
        byte_string = next(file)
    except StopIteration as e:
        break
    record = ''
    incomplete_record = ''
    ...2

And this is a fairly common pattern, so common in fact that this is the basis of the for loop.

for byte_string in file:
    record = ''
    incomplete_record = ''
    ...2

Let me zoom out again.

for byte_string in file:
    record = ''
    incomplete_record = ''
    byte_stripped_string = byte_string.strip()
    string = byte_stripped_string.decode(encoding='ISO-8859-1')
    # List of records.
    durty_strings_list = string.split('~')
    # Check if one chunk in chunks is complete.
    if durty_strings_list[-1] != '' and incomplete_record != '':
        incomplete_record = incomplete_record + durty_strings_list.pop(-1)
        dispatch_record(incomplete_record)
        incomplete_record = ''
    elif durty_strings_list[-1] != '' and incomplete_record == '':
        incomplete_record = durty_strings_list.pop(-1)
    for durty_string in durty_strings_list:
        stripped_string = durty_string.strip()
        if durty_string and durty_string[0] in ('M', 'E'):
            continue
        if record != '':
            # Dispatch the previous record.
            dispatch_record(record)
        # Assign the current record.
        record = stripped_string

Because incomplete_record = '' is inside the loop, it always gets reset. Are you sure you have tried the algorithm with larger files? (And tested that it is correct?) There are more reasons why I think your code is broken, for instance the handling of dispatch_record, and where the assignments take place.

Rewriting parse_file

What parse_file should do is the following: iterate over all the records in file (separated by ~), and call dispatch_record on all of them. Assuming memory was infinite (or just 'large enough'), you could just do

for record in file.read().split('~'):
    dispatch_record(record)

But from your code, I assume it's not 'large enough', and we get it in chunks.

def parse_file(chunks):
    partial_record = ''
    for chunk in chunks:
        stripped_chunk = chunk.strip()
        string = stripped_chunk.decode(encoding='ISO-8859-1')
        records = string.split('~')
        # Prepend the partial record to the first record.
        records[0] = partial_record + records[0]
        # Keep the last, possibly incomplete, record for the next chunk.
        partial_record = records.pop(-1)
        for record in records:
            dispatch_record(record)
    # If we still have data left, it's a full record, but just at
    # the end of the file.
    if partial_record != '':
        dispatch_record(partial_record)

Ideally, you'd split out the parsing of the ~-chunked blocks from the iteration, but this is good enough for now, I think.
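The rewritten parse_file at the end of the answer can be exercised without Django at all. Here is a lightly adapted, self-contained version (a stub dispatch function stands in for the real parsers, and blank fragments are skipped) showing that records survive chunk boundaries:

```python
def parse_chunks(chunks, dispatch):
    """Chunked record parser, following the answer's final rewrite."""
    partial_record = ''
    for chunk in chunks:
        string = chunk.decode(encoding='ISO-8859-1')
        records = string.split('~')
        # Prepend the partial record to the first record.
        records[0] = partial_record + records[0]
        # Keep the last, possibly incomplete, record for the next chunk.
        partial_record = records.pop(-1)
        for record in records:
            if record.strip():
                dispatch(record.strip())
    # Data left at the end of the file is a complete record.
    if partial_record.strip():
        dispatch(partial_record.strip())

# Stub dispatch: just collect the records.
seen = []
data = b'~Vversion~Cconcept-one~Cconcept-two~'
chunks = [data[i:i + 7] for i in range(0, len(data), 7)]
parse_chunks(chunks, seen.append)
# 'Cconcept-one' spans a chunk boundary but is still dispatched whole.
```

Splitting the file into arbitrary 7-byte chunks mimics file.chunks() cutting records in half; the partial_record carry-over is what the original while-loop version was trying (and failing) to do with incomplete_record.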
{ "domain": "codereview.stackexchange", "id": 19377, "tags": "python, parsing, file, database, django" }
Finding big O notation of function with two parameters
Question: I'm looking to work out the big-O notation for the following: $$\frac{n^{s + 1} - 1}{n - 1} - 1$$ I have a feeling the result is $O\left( n^s \right)$ but I'm not sure how to prove it. Any help greatly appreciated! :) Answer: Some transformations of O described in Concrete Mathematics: A Foundation for Computer Science: $ \qquad O(f(n)) + O(g(n)) = O(\mid f(n)\mid + \mid g(n) \mid) \qquad (9.22)\\ \qquad O(f(n))O(g(n)) = O(f(n)g(n)) \qquad \qquad \qquad (9.26) $ And using some basic knowledge of O notation and functions: $ \qquad O(f(n) +c) = O(f(n)) \\ \qquad \forall_{n\in \mathbf N} \forall_{k>0} n^k > 0 $ Using those transformations you can come up with something like this: $$\frac{n^{s+1}-1}{n-1}-1 = O\left(\frac{n^{s+1}-1}{n-1}-1\right) = O\left(\frac{n^{s+1}-1}{n-1}\right) = O\left(\frac{1}{n-1}\right)O\left(n^{s+1}-1\right) = $$ $$ O\left(\frac{1}{n}\right)O\left(n^{s+1}\right) = O\left(\frac{n^{s+1}}{n}\right) = O(n^s) $$ So your intuition was right.
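A more elementary route to the same bound (not in the answer above, just the geometric series): for integer $s \ge 1$, $\frac{n^{s+1}-1}{n-1} = \sum_{k=0}^{s} n^k$, so the whole expression equals $n + n^2 + \dots + n^s \le s\,n^s = O(n^s)$ for fixed $s$. A quick numeric sanity check of both the identity and the asymptotics:

```python
def f(n, s):
    """The expression from the question, in exact integer arithmetic
    (n - 1 always divides n**(s+1) - 1, so // is exact)."""
    return (n ** (s + 1) - 1) // (n - 1) - 1

# Geometric series identity: f(n, s) == n + n^2 + ... + n^s
n, s = 1000, 3
identity_holds = f(n, s) == sum(n ** k for k in range(1, s + 1))

# For fixed s, f(n, s) / n^s tends to 1, so f is Theta(n^s), hence O(n^s).
ratio = f(10 ** 6, 3) / (10 ** 6) ** 3
```

The ratio is $1 + 1/n + 1/n^2$, which is why it hugs 1 from above for large $n$.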
{ "domain": "cs.stackexchange", "id": 570, "tags": "time-complexity" }
Fluid mechanics assumptions
Question: I have just started studying fluid mechanics, and on the very first page two assumptions are made about liquids. I understood the first one but am not able to understand the second one: Parts of the liquid in contact do not exert any tangential force on each other. The force by any part of the liquid on the other part is perpendicular to the surface of contact. Thus there is no friction between adjacent layers of liquids. Why are such weird assumptions being made here? What do they want to signify? Like, if there is friction between adjacent layers of liquids, so what? Why assume there is no friction? Answer: Suppose there is an imaginary boundary plane between two parcels of fluid, and parcel A is standing still while parcel B is moving with some speed. Fluids have temperature and pressure, meaning they are made up of molecules in motion, constantly bouncing off each other. Suppose A and B have the same temperature and pressure, just to keep things simple. At the boundary, some of B's molecules are going to cross into A's territory, and vice-versa. This transfers momentum, which has the effect of reducing, and eventually eliminating, the average speed difference, at the boundary, between A and B. So the speed difference cannot be a step discontinuity at the boundary, but is spread out through both fluids. That's why fluids cannot slip past each other.
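The momentum-exchange argument can be illustrated with a toy 1-D diffusion model. To be clear, this is a hypothetical grid of fluid layers with an arbitrary diffusion number, not a real viscosity calculation; it only shows qualitatively that an initial velocity step between parcel A and parcel B gets smeared out, as the answer describes.

```python
def diffuse(u, d=0.1, steps=200):
    """One explicit diffusion step per iteration: each interior layer relaxes
    toward the average of its neighbours, modelling molecular momentum
    exchange. d is an arbitrary diffusion number (<= 0.5 for stability);
    the two end layers are held fixed."""
    u = list(u)
    for _ in range(steps):
        u = ([u[0]]
             + [u[i] + d * (u[i - 1] - 2 * u[i] + u[i + 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

profile = [0.0] * 10 + [1.0] * 10   # parcel A at rest, parcel B moving
smoothed = diffuse(profile)
max_jump = max(abs(smoothed[i + 1] - smoothed[i])
               for i in range(len(smoothed) - 1))
# The initial step of size 1.0 has been spread over many layers,
# so no single layer-to-layer jump remains large.
```

Because each update is a convex combination of neighbouring values (for d ≤ 0.5), the profile stays monotone while the discontinuity disappears, which is exactly the "no step discontinuity at the boundary" conclusion.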
{ "domain": "physics.stackexchange", "id": 25693, "tags": "fluid-dynamics, friction" }
OOP UserAuthenticator Class
Question: I've been a PHP procedural programmer for several years but I'm trying to learn OOP and I'm getting a bit confused with some patterns and principles. I would appreciate it if you could give me some tips and advice. interface LoginAuthenticator { public function authenticate(UserMapper $userMapper); } class UserAuthenticator implements LoginAuthenticator { private $user; private $session; public function __construct(User $user, Session $session) { $this->user = $user; $this->session = $session; } public function authenticate(UserMapper $userMapper) { if (!$user = $userMapper->findByUsernameAndPassword($this->user->getUsername(), $this->user->getPassword())) { throw new InvalidCredentialsException('Invalid username or password!'); } $this->logUserIn($user); } private function logUserIn(User $user) { $this->session->setValue('user', $user); } public function logUserOut() { $this->session->unsetValue('user'); $this->session->destroy(); } } try { $user = new User(); $user->setUsername($_POST['username']); $user->setPassword($_POST['password'], new MD5()); $pdo = new PDO('mysql:host=localhost;dbname=database', 'root', ''); $userAuth = new UserAuthenticator($user, new Session()); $userAuth->authenticate(new PdoUserMapper($pdo)); header('Location: index.php'); } catch (InvalidArgumentException $e) { echo $e->getMessage(); } catch (PDOException $e) { echo $e->getMessage(); } catch (InvalidCredentialsException $e) { echo $e->getMessage(); } Well, here is my first concern, the SRP: I don't really know if I should inject a Mapper into my UserAuthenticator::authenticate method or if I should create a UserFinder class and inject it instead. I don't know if it's a Mapper responsibility to find. What do you think ? Furthermore, I'm also confused about the $user property: my findByUsernameAndPassword method returns a User object, so I have two Users instances in the same class: one injected and another returned by the Mapper. 
Should I inject just $username and $password instead of a User object in order to authenticate ? I have also some wrappers classes like Session and MD5 but they are not needed to understand how my classes works. Edit My classes after changes and user related classes: interface Authenticator { public function authenticate(UserCredentials $userCredentials); } class LoginAuthenticator implements Authenticator { private $userMapper; public function __construct(UserMapper $userMapper) { $this->userMapper = $userMapper; } public function authenticate(UserCredentials $userCredentials) { if (!$user = $this->userMapper->findByUsernameAndPassword($userCredentials->getUsername(), $userCredentials->getPassword())) { throw new InvalidCredentialsException('Invalid username or password!'); } return $user; } } class UserCredentials { private $username; private $password; public function getUsername() { return $this->username; } public function setUsername($username) { if (!is_string($username) || strlen($username) < 3) { throw new InvalidArgumentException('Invalid username.'); } $this->username = $username; } public function getPassword() { return $this->password; } public function setPassword($password, Encryptor $encryptor) { if (!is_string($password) || strlen($password) < 8) { throw new InvalidArgumentException('Invalid password.'); } $this->password = $encryptor->encrypt($password); } } class User { private $id; private $firstName; private $lastName; private $email; private $username; private $password; public function getPassword() { return $this->password; } public function setPassword($password, Encryptor $encryptor) { $this->password = $encryptor->encrypt($password); } //more getters and setters } interface UserMapper { public function insert(User $user); public function update(User $user); public function delete($id); public function findByUsernameAndPassword($username, $password); public function findAll(); } class PdoUserMapper implements UserMapper { private $pdo; 
private $table = 'users'; public function __construct(PDO $pdo) { $this->pdo = $pdo; } public function insert(User $user) { $statement = $this->pdo->prepare("INSERT INTO {$this->table} VALUES(null, ?, ?, ?, ?)"); $userValues = array( $user->getFirstName(), $user->getLastName(), $user->getEmail(), $user->getUsername(), $user->getPassword() ); $statement->execute($userValues); return $this->pdo->lastInsertId(); } public function update(User $user) { $statement = $this->pdo->prepare("UPDATE {$this->table} SET name = ?, last_name = ?, email = ?, password = ? WHERE id = ?"); $userValues = array( $user->getFirstName(), $user->getLastName(), $user->getEmail(), $user->getPassword(), $user->getId() ); $statement->execute($userValues); } public function delete($id) { $statement = $this->pdo->prepare("DELETE FROM {$this->table} WHERE id = ?"); $statement->bindValue(1, $id); $statement->execute(); } public function findById($id) { $statement = $this->pdo->prepare("SELECT * FROM {$this->table} WHERE id = ?"); $statement->bindValue(1, $id); if (!$result = $statement->execute()) { return null; } $user = new User(); $user->setId($result['id']); $user->setFirstName($result['name']); $user->setLastName($result['last_name']); $user->setUsername($result['username']); $user->setEmail($result['email']); return $user; } public function findByUsernameAndPassword($username, $password) { $statement = $this->pdo->prepare("SELECT * FROM {$this->table} WHERE username = ? 
AND password = ?"); $statement->bindValue(1, $username); $statement->bindValue(2, $password); $statement->execute(); if (!$result = $statement->fetch()) { return null; } $user = new User(); $user->setId($result['id']); $user->setFirstName($result['name']); $user->setLastName($result['last_name']); $user->setEmail($result['email']); $user->setUsername($result['username']); $user->setPassword($result['password'], new MD5()); return $user; } public function findAll() { $statement = $this->pdo->query("SELECT * FROM {$this->table}"); while ($result = $statement->fetch(PDO::FETCH_ASSOC)) { $user = new User(); $user->setId($result['id']); $user->setFirstName($result['name']); $user->setLastName($result['last_name']); $user->setUsername($result['username']); $user->setEmail($result['email']); $userCollection[] = $user; } return $userCollection; } } try { $userCredentials = new UserCredentials(); $userCredentials->setUsername($_POST['username']); $userCredentials->setPassword($_POST['password'], new MD5()); $pdo = new PDO('mysql:host=localhost;dbname=database', 'root', ''); $loginAuth = new LoginAuthenticator(new PdoUserMapper($pdo)); $user = $loginAuth->authenticate($userCredentials); $session = new Session(); $session->setValue('user', $user); header('Location: index.php'); } catch (InvalidArgumentException $e) { echo $e->getMessage(); } catch (PDOException $e) { echo $e->getMessage(); } catch (InvalidCredentialsException $e) { echo $e->getMessage(); } One thing that is bothering me is the need to pass an Encryptor two times: to the User::setPassword method and to UserCredentials::setPassword method. If I need to change my encryption algorithm, I'll have to change it in more than one place, what leads me to think that I'm still making something wrong. Answer: You are right about your concerns regarding the mapper. A mappers job is to map, not to find. In this case its the job of a repository. 
The repository finds an entry in a database, uses the mapper to translate between the database fields and the model, and returns the model. I had some more detailed explanation about this here. The method findByUsernameAndPassword most likely would be a method of this repository, returning an authenticated user on success. I find the arguments in your UserAuthenticator a bit weird (not wrong) though. Currently this class reads as follows: The UserAuthenticator allows authenticating a given set of credentials against different mappers. The authenticate method authenticates the UserAuthenticator's credentials against the passed mapper. Simplified, it is this: $authenticator = new Authenticator($login, $password); $mapper = new PDOMapper(); $authenticator->authenticate($mapper); For me the last line really reads like Authenticator authenticates mapper using $login and $password. Here, Authenticator does not actually resemble a class capable of authenticating users; it is missing a vital part. It represents Credentials which we later authenticate against a $mapper. Normally I would expect it the other way around: The UserAuthenticator allows authenticating different credentials against a given mapper. The authenticate method authenticates the passed credentials against the previously set mapper. Simplified, this reads like this: $mapper = new PDOMapper(); $authenticator = new Authenticator($mapper); $authenticator->authenticate($login, $password); Which reads like: authenticator, authenticate $login and $password (using the $mapper). I feel this reads better and follows a more logical mental image of how we think this works. An analogy would be a gatekeeper who demands your key card to enter a building. You usually pass the key card to the gatekeeper, not the gatekeeper to the key card. Our mental image here serves a gatekeeper (your UserAuthenticator) who receives a key card reader (your mapper) at construction time (when he starts his shift).
When someone arrives, he gives his key card (your login and password) to the gatekeeper. Neither is wrong or right; it depends on your application requirements. I know popular frameworks do it otherwise too. There are pros and cons for either approach. I prefer the second approach though - it feels more natural in most cases. I haven't seen many use-cases where credentials are tried against different mappers or repositories. I could elaborate more here if this really is your intended case. Your second concern comes from using a model for two different purposes: representing an invalid, unauthenticated user with pending authentication, and later on representing an existing, authenticated user. At this point I think this is more of a modeling issue: I'd either pass password and login name directly to UserAuthenticator::authenticate or create a new class Credentials for this. Your current approach poses one huge problem: you get two objects representing the same entity. Really really really avoid this at all costs. As another remark: your UserAuthenticator currently has two responsibilities: authentication and storing in a session. I'd move the latter to the service layer calling the authenticator. Of course this could be abstracted in another layer (e.g. an authentication storage adapter). This also relieves the UserAuthenticator from performing logouts and so on. One last thing: your naming indicates you are logging in other entities than users, e.g. animals. The name UserAuthenticator says "I'm the only class responsible for logging in users". (Or your 'admins' have to log in via another adapter.) While this might be true of course, I suppose your intention is rather to provide different log-in facilities for users (and only users). Commonly this would result in a slightly different naming like: interface LoginAdapterInterface, DatabaseLoginAdapter, and so on. Update: I have added further explanation about the two modeling approaches inline.
About using different mappers for different databases: that's the wrong place here. Databases should be hidden behind a database abstraction layer (DBAL), as PDO is for example. The key problem is that you're using mappers to access the database. Mappers really only map between the database fields and your model. A repository queries the database (or any other source) and uses the mapper to map the result to the model. It completely hides all information about how and where data is stored. The remainder of the application should never know how / where the data is stored; it interacts with the repository only. This could look like:

                                    /-- Mysql Mapper
Authenticator <-- User Repository <---- PDO Mapper
                                    \-- Mongo Mapper

This keeps the responsibilities clean: the mapper doesn't need to know how to query a database (actually in most cases it's completely database agnostic), the repository knows how to query a particular data source (e.g. a database), and the authenticator looks for the provided user without knowing where it actually comes from. In my opinion it is particularly important to name things right. If I read Mapper I know exactly what to expect. Mapper is a 'reserved' term in regard to patterns: In software engineering, a design pattern is a general reusable solution to a commonly occurring problem. If your code doesn't follow this I do become really confused. I have to do a lot of reading up, and so on. Your code becomes a surprise, which generally is really bad in terms of maintainability. So if your mapper is actually a repository, please please name it so. But then keep in mind that a repository suggests behavior too. On the "should the authenticator store to the session" thing: the authenticator directly: no. Maybe you want to re-use it without storing users to a session directly (e.g. API calls, ...). Commonly it is the job of a Service to provide concrete login / logout functionality.
This usually is a two-step process: authenticate the user at an authenticator, and on success, store the result in a session. This keeps your authenticators reusable for situations where you don't want to start a session.
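The layering the answer recommends (authenticator → repository → mapper → data source) is language-independent, so here is a minimal sketch of it in Python rather than PHP. All names are illustrative, an in-memory list of rows stands in for the database, and the plain-text password comparison is only to keep the example short.

```python
class User:
    def __init__(self, user_id, username):
        self.id, self.username = user_id, username

class UserMapper:
    """Only translates between raw rows and the model; it never queries."""
    def to_model(self, row):
        return User(row["id"], row["username"])

class UserRepository:
    """Knows how/where the data lives; hides that behind find_* methods."""
    def __init__(self, rows, mapper):
        self._rows, self._mapper = rows, mapper
    def find_by_username_and_password(self, username, password):
        for row in self._rows:
            if row["username"] == username and row["password"] == password:
                return self._mapper.to_model(row)
        return None

class InvalidCredentialsError(Exception):
    pass

class Authenticator:
    """The gatekeeper: receives the 'card reader' (repository) up front,
    and is handed credentials at each authentication attempt."""
    def __init__(self, repository):
        self._repository = repository
    def authenticate(self, username, password):
        user = self._repository.find_by_username_and_password(username, password)
        if user is None:
            raise InvalidCredentialsError("Invalid username or password!")
        return user

rows = [{"id": 1, "username": "alice", "password": "s3cret-hash"}]
auth = Authenticator(UserRepository(rows, UserMapper()))
user = auth.authenticate("alice", "s3cret-hash")   # returns a User model
```

Swapping the storage backend then only means handing a different repository to the Authenticator; nothing above the repository changes.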
{ "domain": "codereview.stackexchange", "id": 6391, "tags": "php, object-oriented" }
Indian tropical fruit trees and fruit bearing
Question: Most Indian tropical fruit trees produce fruits in April-May. The best possible explanation for this is:
1. optimum water availability for fruit production.
2. the heat allows quicker ripening of fruit.
3. animals have no other source of food in summer.
4. the impending monsoon provides optimum conditions for propagation.
This was asked in a competitive examination, and being a mathematician with an interest in plants, this caught my attention. Please note that this was asked in the aptitude section, and I feel it more aptly belongs on this SE; correct me if I am wrong. My thoughts: April-May is summer in (tropical) India (where I live, Kerala), hence optimum water availability looks out of the question. Considering fruit production as part of the process of propagation, though this is a favourable factor, I doubt if it is a determining factor. Animals help disperse the fruits, and lower availability of food may be a favouring factor attracting animals to fruits, but it is not a determining factor. which for me looks like the factor influencing increased fruit production more than the other two. Sorry for the layman language; I would be grateful if someone could correct me if my reasoning is wrong and explain it more precisely (preferably in layman language, though I can understand some botanical terms). Answer: Trees fruit before the monsoon season to maximize seed germination and seedling recruitment. Previous studies have shown that in many species, fruiting occurs just before the wet season, such that seeds germinate and establish during the wet season when conditions are most favourable. Smythe N. 1970 Relationships between fruiting seasons and seed dispersal methods in a Neotropical forest. Am. Nat. 104, 25-35. (doi:10.1086/282638) So the answer is 4.
{ "domain": "biology.stackexchange", "id": 5027, "tags": "botany, fruit" }
Polyalphabetic cipher
Question: I am trying to write the simplest to undestand code possible, each function has documentation and examples and I tried to use the best style possible. """ This programme implements a polyalphabetic cipher. """ import string ALPHABET = string.ascii_lowercase CHARACTERS_THAT_MUST_REMAIN_THE_SAME = string.digits + string.punctuation + string.whitespace def cycle_get(lst,index): """ If the list ends go back to the start. >>> cycle_get(["lorem","ipsum","dolor","sit"],8) "lorem" """ new_index = index % len(lst) return(lst[new_index]) def cycle_increment_index(index,lst): """ If at the end: go back to the start else: increment. >>> cycle_increment_index(0,["a","b","c"]) 1 >>> cycle_increment_index(2,["a","b","c"]) 0 """ if index == len(lst) - 1: index = 0 else: index += 1 return(index) def shift(letter,value): """ Shifts a letter in the alphabet by the value, if the alphabet ends go back to the start. >>> shift('a',5) f >>> "".join([shift(i,20) for i in "hello"]) 'byffi' """ current_letter_value = ALPHABET.find(letter) end_value = current_letter_value + value return(cycle_get(ALPHABET,end_value)) def convert_key_to_numbers(key): """ Uses the alphabetic value of letters to convert a word to a list of numbers. >>> convert_key_to_numbers("abcde") [0,1,2,3,4] >>> convert_key_to_numbers("example") [4, 23, 0, 12, 15, 11, 4] """ return([ALPHABET.find(i) for i in key]) def encrypt(text,key,reverse_operation=False): """ Encrypts the text with a polyalphabetic cipher. 
>>> encrypt("lorem ipsum dolor sit amet, consectetur adipiscing elit","latine") 'wokmz masnu qswok avx lmxb, psysxkgieuk iqmailkvrr eeqg' >>> encrypt("the quick brown fox jumps over the lazy dog","gvufigfwiufw") 'zcy vcohg jltst aic rarla iaax obj tgeu lil' """ text = text.lower() key = convert_key_to_numbers(key) index_of_key = 0 result = "" for char in text: if char in CHARACTERS_THAT_MUST_REMAIN_THE_SAME: result += char else: if not reverse_operation: result += shift(char,key[index_of_key]) else: result += shift(char,- key[index_of_key]) index_of_key = cycle_increment_index(index_of_key,key) return(result) def decrypt(text,key): """ Decrypts the text previously encrypted with a polyalphabetic cipher. >>> decript('wokmz masnu qswok avx lmxb, psysxkgieuk iqmailkvrr eeqg',"latine") 'lorem ipsum dolor sit amet, consectetur adipiscing elit' >>> decrypt("zcy vcohg jltst aic rarla iaax obj tgeu lil","gvufigfwiufw") 'the quick brown fox jumps over the lazy dog' """ return(encrypt(text,key,reverse_operation=True)) Answer: I am trying to write the simplest to understand code possible, each function has documentation and examples and I tried to use the best style possible. The individual methods are simple and easy to understand. The docstrings are especially great, they are fantastic help for the reader. To improve further, try to zoom out. Take a step back and look at the outline of the code, like some advanced editors fold the function bodies and show only the method signatures: def cycle_get(lst,index): ... def cycle_increment_index(index,lst): ... def shift(letter,value): ... def convert_key_to_numbers(key): ... def encrypt(text,key,reverse_operation=False): ... def decrypt(text,key): ... I don't know if you see it, but things look less clear at this level: It's hard to guess what cycle_get(lst,index) and cycle_increment_index(index,lst) would do. Also, curiously, although both seem to take a list and a number parameter, the order of parameters is reversed. 
This will be difficult to remember for users of the class. shift(letter,value) could be clearer: the method shifts letter by some number, but value is too generic, so it doesn't help guessing that. convert_key_to_numbers(key) comes quite close, but it would help to be more specific and say "indexes" instead of "numbers". encrypt(text,key,reverse_operation=False) is clear, but the reverse_operation parameter is a bit too long. But the biggest problem is that the parameter doesn't really make sense: what is a reverse operation of encrypt? The reverse of "encrypt" should be "decrypt", which shouldn't belong in this method. decrypt(text,key) is clear :-) I would recommend this alternative outline: def get_circular_index(lst, index): return index % len(lst) def get_circular_item(lst, index): return lst[get_circular_index(lst, index)] def get_next_index(lst, index): return get_circular_index(lst, index + 1) def shift_letter_by(letter, num): # ... def convert_key_to_indexes(key): return [ALPHABET.find(i) for i in key] def cycle_text(text, key, reverse=False): # ... def encrypt(text, key): return cycle_text(text, key) def decrypt(text, key): return cycle_text(text, key, reverse=True) For the shorter ones I included the implementation too. I made some other changes too: The original cycle_get and cycle_increment_index were sharing the "index cycling logic". I moved that part of the logic into one place, a new get_circular_index function. encrypt is just as clear now as decrypt, and these methods shouldn't call each other, but rather the common-purpose method cycle_text. As suggested above, the methods and method parameters are renamed to what I hope would be more natural, and the method parameters are consistently ordered. The CHARACTERS_THAT_MUST_REMAIN_THE_SAME variable is long. I tried to find a better name for it, but it was too hard. When it's too hard to name something, it's often a sign that there's a better way. In this case, is this variable really necessary?
It seems the intention is to put characters in it that you don't want to encrypt / decrypt. Its purpose seems sort of the opposite of the ALPHABET variable. But it's not really the opposite of that. And it could be, and it would be simpler. So instead of the condition if char in CHARACTERS_THAT_MUST_REMAIN_THE_SAME: ..., how about using if char not in ALPHABET: ... ? With this in mind, I would recommend this implementation for cycle_text: def cycle_text(text, key, reverse=False): text = text.lower() indexes = convert_key_to_indexes(key) index_of_key = 0 result = "" for char in text: if char not in ALPHABET: result += char else: if not reverse: result += shift_letter_by(char, indexes[index_of_key]) else: result += shift_letter_by(char, - indexes[index_of_key]) index_of_key = get_next_index(indexes, index_of_key) return result Another improvement I slipped in here is not reassigning to the key parameter inside the method body, as it was in the original code. Why all the redundant parentheses in the return statements? return(lst[new_index]) return(index) # ... All these could be simply like this without parentheses: return lst[new_index] return index # ... You should follow PEP8, the official coding style guide of Python. For example, instead of this: return encrypt(text,key,reverse_operation=True) You should put a space after every comma separating a list of parameters, like this: return encrypt(text, key, reverse_operation=True) And put 2 empty lines before function declarations (before every def). This rule is only for functions in the global namespace, for functions inside classes 1 empty line is enough.
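Putting the recommended outline together, a complete implementation might look like the sketch below. The bodies of shift_letter_by and cycle_text are my filling-in of the "# ..." stubs from the answer, not the answer's own code; the expected ciphertext in the check comes from the question's doctests.

```python
import string

ALPHABET = string.ascii_lowercase

def get_circular_index(lst, index):
    return index % len(lst)

def shift_letter_by(letter, num):
    """Shift a lowercase letter by num places, wrapping around the alphabet."""
    return ALPHABET[(ALPHABET.index(letter) + num) % len(ALPHABET)]

def convert_key_to_indexes(key):
    return [ALPHABET.index(c) for c in key]

def cycle_text(text, key, reverse=False):
    """Shift letters of text by the key letters' alphabet indexes; reverse
    undoes it. Non-alphabet characters pass through unchanged and do not
    consume a key letter."""
    indexes = convert_key_to_indexes(key)
    sign = -1 if reverse else 1
    result, i = "", 0
    for char in text.lower():
        if char not in ALPHABET:
            result += char
        else:
            result += shift_letter_by(char, sign * indexes[i])
            i = get_circular_index(indexes, i + 1)
    return result

def encrypt(text, key):
    return cycle_text(text, key)

def decrypt(text, key):
    return cycle_text(text, key, reverse=True)
```

Since Python's % always returns a non-negative result, the same shift helper serves both directions; decrypt is just encrypt with negated offsets.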
{ "domain": "codereview.stackexchange", "id": 10059, "tags": "python, functional-programming, cryptography" }
Can someone explain LO-TO Splitting?
Question: LO-TO splitting occurs in an ionic (i.e. polar) solid such as GaAs or NaCl. What happens is that the degeneracy of the transverse optical (TO) and longitudinal optical (LO) phonons at $k=0$ is broken and the LO phonon has a greater energy. From a physical point of view, in the limit that the wavelength is infinitely long (i.e. $k\rightarrow0$ or $\Gamma$-point), how is one supposed to tell the difference between a longitudinal and transverse excitation (i.e. from a fundamental physics point of view how is it possible that the LO and TO are non-degenerate)? My other question concerning this problem is that if the bonds were not ionic but instead covalent, then this splitting would not occur. However, the symmetry of the lattice has not changed. How is this possible? For reference, GaAs has a phonon dispersion spectrum that looks like so: while Ge has the following phonon dispersion: Answer: LO-TO splitting is caused by the long-ranged nature of the Coulomb interaction (i.e. because the Fourier Transform of the Coulomb interaction, $4\pi e^2/q^2$, is not well-defined at $q=0$). Also, it occurs near the Brillouin zone center, but not at the exact Brillouin zone center because of retardation effects (i.e. the finite speed of light). At $q=0$, the discrepancy between longitudinal and transverse modes is ill-defined, as stated in the question. It is impossible to tell the difference. Indeed, splitting only starts to occur in a very narrow wavelength window close to $q=0$ and persists to larger wavevectors. This is shown in a nice PRL from 1965: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.15.964. The relevant image is below (the solid black lines are the relevant ones here): As one dopes GaAs with electrons, the LO-TO splitting disappears. This is shown in another nice PRL: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.16.999. The relevant image is below: This is because the long-ranged nature of the Coulomb interaction is being screened.
Therefore this effect (LO-TO splitting) is not due to symmetry, but in fact due to the long-ranged character of the Coulomb interaction.
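For completeness (standard textbook material, not part of the answer above): the size of the splitting at small $q$ is governed by the Lyddane-Sachs-Teller relation, which ties it directly to the static and high-frequency dielectric constants:

```latex
% Lyddane-Sachs-Teller relation: LO/TO frequency ratio as q -> 0
\frac{\omega_{\mathrm{LO}}^{2}}{\omega_{\mathrm{TO}}^{2}}
  = \frac{\varepsilon(0)}{\varepsilon(\infty)}
```

In a purely covalent crystal like Ge there is no infrared-active charge separation, so $\varepsilon(0) = \varepsilon(\infty)$, giving $\omega_{\mathrm{LO}} = \omega_{\mathrm{TO}}$ and no splitting, consistent with the two dispersion plots referenced in the question.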
{ "domain": "physics.stackexchange", "id": 23305, "tags": "condensed-matter, solid-state-physics, crystals, dispersion, phonons" }
Project Euler # 36 Double-base palindromes in Python
Question: The decimal number, 585 = 1001001001 (binary), is palindromic in both bases. Find the sum of all numbers, less than one million, which are palindromic in base 10 and base 2. (Please note that the palindromic number, in either base, may not include leading zeros.) Awaiting feedback. def is_palindrome(n): """returns True if palindrome, False otherwise.""" to_str = str(n) if to_str == to_str[::-1]: return True return False def find_palindromes(n): """generates all numbers if palindrome in both bases 2, 10 in range n. """ decimal_binary = {decimal: bin(decimal)[2:] for decimal in range(1, n) if is_palindrome(decimal)} for decimal, binary in decimal_binary.items(): if is_palindrome(binary) and not binary[0] == 0: yield decimal if __name__ == '__main__': print(sum(list(find_palindromes(1000000)))) Answer: Code review The sequence of if condition: return True return False is a long way to say return condition Consider instead def is_palindrome(n): to_str = str(n) return to_str == to_str[::-1] Generator vs list. A list takes space. The entire point of a generator is to not take space. Your find_palindromes does yield, that is, it produces one palindrome at a time. Very well suited to summing them as they are produced. Your code collects them all in a list for no reason. Even more curious is that your code builds a dictionary, then yields each entry to build the list which is sent to sum to traverse it. I see at least 4 traversals over the same data. Seems excessive. Efficiency Thou shalt not brute force. There are fewer than 2000 decimal palindromes below 1000000, and the six-digit ones all have the form abccba. In fact, we are not interested in all of them: if a is even, the binary representation would have a trailing 0, and to be a palindrome it would have a leading 0 as well. We may immediately disqualify such numbers. What remains is just 500 six-digit candidates (plus a handful of shorter ones). So, we only need to iterate over roughly 500 numbers, instead of the 1000000 your code does. A 2000-fold speedup, immediately.
In fact, a bit more, because there is no need to test whether a decimal representation is a palindrome anymore, and such a test is quite expensive. There is also no need to test for the parity, but it is peanuts. The fun part is to design a test that the binary representation is palindromic. The usually recommended binary = bin(n)[2:] return binary == binary[::-1] works well in general. In this particular setting you know a lot about the numbers and their binary representation (at the very least you know how many bits the number takes), and there are a few more performant solutions. Rant Please keep in mind that solving Project Euler problems will not make you a better programmer. Project Euler is designed for programmers striving to be better mathematicians. And no matter what, do not brute force.
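As a reference point for the discussion, here is a simple direct scan, which is admittedly close to the brute force the answer argues against, but it is handy for validating a cleverer palindrome generator against. It does use the parity observation: an even number ends in binary 0, so only odd n can be a binary palindrome without leading zeros.

```python
def is_double_palindrome(n):
    """True if n is palindromic in both base 10 and base 2 (no leading zeros)."""
    dec, binary = str(n), bin(n)[2:]
    return dec == dec[::-1] and binary == binary[::-1]

# Even numbers end in a binary 0, so only odd n can qualify:
# stepping by 2 halves the work before any string test runs.
total = sum(n for n in range(1, 1000000, 2) if is_double_palindrome(n))
```

A generator that only produces abccba-style decimal palindromes should yield the same total, which is the point of keeping this slow version around as a check.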
{ "domain": "codereview.stackexchange", "id": 35350, "tags": "python, beginner, python-3.x, programming-challenge" }
OpenGL shader abstraction class
Question: I have implemented a class to abstract the process of building a shader program in OpenGL (for now it does not deal with uniforms). I would like some feedback on the coding style, and more specifically the structure. I have implemented a bunch of small and private methods in the class to handle various aspects of compilation - but I am uncertain of when/where to create small functions, and when to just do it all in one place.

Shader.hpp

#pragma once

#include <GL/glew.h>

#include <iostream>
#include <fstream>
#include <string>
#include <vector>

class Shader
{
public:
    enum Type : unsigned int
    {
        Vertex = GL_VERTEX_SHADER,
        Fragment = GL_FRAGMENT_SHADER
    };

public:
    Shader(std::string vertex_shader_path, std::string fragment_shader_path);
    ~Shader();

    void use();

private:
    unsigned int program;
    unsigned int vertex_shader;
    unsigned int fragment_shader;

private:
    std::string get_contents(std::string file_path);
    unsigned int create_shader(const char* shader_code, Type shader_type);
    unsigned int create_program();
    bool check_shader_compilation_status(unsigned int shader);
    bool check_program_linking_status(unsigned int program);
    void print_error(std::vector<char> error_message, std::string info);
};

Shader.cpp

#include "Shader.hpp"

Shader::Shader(std::string vertex_shader_path, std::string fragment_shader_path)
{
    vertex_shader = create_shader(get_contents(vertex_shader_path).c_str(), Type::Vertex);
    fragment_shader = create_shader(get_contents(fragment_shader_path).c_str(), Type::Fragment);
    program = create_program();
}

Shader::~Shader()
{
    glDeleteShader(vertex_shader);
    glDeleteShader(fragment_shader);
    glDeleteProgram(program);
}

void Shader::use()
{
    glUseProgram(program);
}

std::string Shader::get_contents(std::string file_path)
{
    std::ifstream file(file_path);
    return std::string(std::istreambuf_iterator<char>(file),
                       std::istreambuf_iterator<char>());
}

unsigned int Shader::create_shader(const char* shader_code, Type shader_type)
{
    unsigned int shader = glCreateShader(shader_type);
    glShaderSource(shader, 1, &shader_code, nullptr);
    glCompileShader(shader);

    if (!check_shader_compilation_status(shader))
    {
        glDeleteShader(shader);
        return 0;
    }
    else
    {
        return shader;
    }
}

unsigned int Shader::create_program()
{
    unsigned int program = glCreateProgram();
    glAttachShader(program, vertex_shader);
    glAttachShader(program, fragment_shader);
    glLinkProgram(program);

    if (!check_program_linking_status(program))
    {
        glDeleteProgram(program);
        return 0;
    }
    else
    {
        return program;
    }
}

bool Shader::check_shader_compilation_status(unsigned int shader)
{
    int is_compiled = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &is_compiled);

    if (!is_compiled)
    {
        int max_length = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &max_length);
        std::vector<char> error_log(max_length);
        glGetShaderInfoLog(shader, max_length, &max_length, &error_log[0]);
        print_error(error_log, "Shader compilation failed");
        return false;
    }
    else
    {
        return true;
    }
}

bool Shader::check_program_linking_status(unsigned int program)
{
    int is_compiled = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &is_compiled);

    if (!is_compiled)
    {
        int max_length = 0;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &max_length);
        std::vector<char> error_log(max_length);
        glGetProgramInfoLog(program, max_length, &max_length, &error_log[0]);
        print_error(error_log, "Program linking failed");
        return false;
    }
    else
    {
        return true;
    }
}

void Shader::print_error(std::vector<char> error_message, std::string info)
{
    std::cout << info << ": ";
    for (char letter : error_message)
    {
        std::cout << letter;
    }
}

I'm primarily concerned with simply bettering my coding skills - so any suggestions are appreciated.
Answer: std::strings can be passed by const& (or as a std::string_view) rather than by value if we don't need to make a copy, e.g.:

Shader(std::string const& vertex_shader_path, std::string const& fragment_shader_path);
std::string get_contents(std::string const& file_path);
void Shader::print_error(std::vector<char> const& error_message, std::string const& info);

Incidentally, using a std::string instead of a std::vector<char> for the error message would make printing easier.

Member functions that don't change member data should be const:

void use() const;

Member functions that don't require access to member data should be static:

static std::string get_contents(std::string const& file_path);

... and all of the others!

There are quite a few error cases we have to handle:

- File opening fails.
- Reading from the file fails.
- Shader compilation fails.
- Program linking fails.

Currently the code continues attempting to create the shader program when an earlier step fails, even though it won't succeed. This adds complexity, since we have to check that everything will "work" (in this case fail gracefully) with our invalid state. While it might be helpful to show the compilation errors for every shader object, we probably don't want to try linking the program - we'll just be generating an OpenGL error, as well as extra noise from the linking failure in our logs. Similarly, if we fail to read from a file, we should output an appropriate error message, and not try to compile a shader object or link the program.

I'd suggest using the specified OpenGL types for interacting with OpenGL, e.g. GLuint for shader object / program ids, GLint for compile status, etc. This is safer and more portable, and also makes the purpose of each variable more obvious.

We don't need to immediately delete shaders that fail to compile (or programs that fail to link). The Shader class destructor will still do that for us (it might even be useful to keep the ID around for debugging).
So we can simplify a bit:

unsigned int Shader::create_shader(const char* shader_code, Type shader_type)
{
    GLuint shader = glCreateShader(shader_type);
    glShaderSource(shader, 1, &shader_code, nullptr);
    glCompileShader(shader);
    check_shader_compilation_status(shader);
    return shader;
}

This looks fine for a simple shader class, but you might run into a few issues in future:

- A shader object can be composed from multiple shader source strings / files (which is very useful to avoid unnecessary duplication of shader code).
- There are other types of shader object (tessellation control / evaluation, geometry, compute) that may or may not be present in the shader program.

It would be more flexible to load the shader sources from their files outside of the Shader class and pass them in. Or even to add a separate ShaderObject class, and create the Shader from a std::vector<ShaderObject>. But maybe that's more than you need right now.

Technically, we should write error messages to stderr, not stdout, which means using std::cerr or std::clog instead of std::cout. It doesn't really matter for a graphical program though.

if (!is_compiled)
{
    // ...
    return false;
}
else
{
    return true;
}

We don't need the else statement here, since we return from the other branch. We can just return true and avoid the brackets and the extra indent. (This is more of an issue when there's more code in the else branch.)
{ "domain": "codereview.stackexchange", "id": 36217, "tags": "c++, opengl" }
How to convert benzene to 1-bromo-3-iodobenzene?
Question: Today, my chemistry teacher gave the following organic conversion as homework: benzene to 1-bromo-3-iodobenzene. I tried something like: I am not sure about it, can someone review my conversion (as I have an exam tomorrow)? Answer: It looks pretty good overall. The order of substitution is correct to achieve the required pattern, but there are two points I would pick up on. The final substitution of the diazonium salt is best done with $\ce{CuI}$ rather than $\ce{KI}$, as the copper(I) ions catalyse the reaction. Also, heating is not required; the reaction can be run at room temperature (or possibly slightly above, but not much). The diazonium salt is drawn incorrectly since it is ionic in nature. There is a triple bond between the nitrogens and a formal positive charge on the middle nitrogen, although in reality the charge is delocalised over both atoms:
{ "domain": "chemistry.stackexchange", "id": 3946, "tags": "organic-chemistry, aromatic-compounds, synthesis" }
Time for critically damped oscillator to reach equilibrium?
Question: The title says it all. With my limited knowledge of physics and math, I have no idea where to begin, as the position function I have for a critically damped oscillator, $x=e^{-\omega_0t}[x_0+(v_0+\omega_0x_0)t]$ where $\omega_0$ is the undamped frequency of the oscillator, does not have an analytical solution for $t$. By equilibrium, I mean within a few decimal places of equilibrium, as the oscillator only approaches $0$ as $t$ goes to infinity. Answer: I think you could use the Newton-Raphson method to solve this for $x=a$, where $a$ is some constant close to zero that you choose (e.g. 0.01). Let's make a function $f(t)$ from your function: $$f(t) = \mathrm{e}^{-\omega_0 t}\left[x_0 + (v_0+\omega_0 x_0)t\right] - a$$ We would like this to be equal to zero, since that's the case in your original equation, with your choice of $x=a$. Calculate the derivative of $f(t)$ with respect to $t$. After some messing about I think (you must check!) that you get $$f'(t) = \mathrm{e}^{-\omega_0 t}\left[v_0-\omega_0(v_0+\omega_0 x_0)t\right]$$ Start with some initial guess for $t$, which we will call $t_0$. Then calculate a new guess $t_1$ as follows $$t_1 = t_0 - \frac{f(t_0)}{f'(t_0)}$$ Then another guess: $$t_2 = t_1 - \frac{f(t_1)}{f'(t_1)}$$ and keep going like that until your guesses get close enough together that you don't care about the difference. A problem with this method in general is that the guesses do not always converge. I think it should work in your case, but you'll need to be a bit careful about your first guess. If you want to be more rigorous, you can calculate the second derivative (check!): $$f''(t) = \omega_0 \mathrm{e}^{-\omega_0 t}\left[-(2v_0+\omega_0 x_0) + \omega_0(v_0 + \omega_0 x_0)t\right]$$ then choose $t_0$ so that: $f'(t_0) < 0$ and $f''(t_0) > 0$ if $f(t_0) > 0$; $f'(t_0) > 0$ and $f''(t_0) < 0$ if $f(t_0) < 0$. In that case I suspect your guesses should go nicely to the right place.
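The iteration described above is easy to run numerically. Here is a minimal Python sketch; the function name, parameter values and starting guess are my own assumptions, chosen just for illustration:

```python
import math

def settling_time(w0, x0, v0, a, t0, tol=1e-12, max_iter=100):
    """Newton-Raphson solution of e^{-w0 t} [x0 + (v0 + w0 x0) t] = a."""
    c = v0 + w0 * x0
    f = lambda t: math.exp(-w0 * t) * (x0 + c * t) - a
    fp = lambda t: math.exp(-w0 * t) * (v0 - w0 * c * t)  # the derivative f'(t) from the answer
    t = t0
    for _ in range(max_iter):
        step = f(t) / fp(t)
        t -= step
        if abs(step) < tol:
            return t
    raise RuntimeError("Newton-Raphson did not converge")

# Oscillator released from rest at x0 = 1 with w0 = 1; "equilibrium" taken as x = 0.01
t_settle = settling_time(w0=1.0, x0=1.0, v0=0.0, a=0.01, t0=5.0)
```

With these numbers the oscillator takes roughly six to seven time constants ($1/\omega_0$) to decay to 1% of its initial displacement.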
{ "domain": "physics.stackexchange", "id": 76108, "tags": "classical-mechanics, harmonic-oscillator, oscillators" }
Does "lifetime of up quark" have a physical meaning?
Question: I saw this question about the lifetime of an up quark. As far as I know, free quarks are never observed in experiments. Then what is the significance of a statement like "the lifetime of an up quark is X units"? I am looking for a physical explanation without involving much mathematics. I am not very familiar with the mathematical formulation of QCD, but I know about Feynman diagrams. Answer: No, "lifetime of an up quark" is utterly meaningless (at least here, but I'd be hard pressed to find legitimate contexts for it...). The lifetime discussed is that of a neutral pion, decaying by the F diagram (sorry) In words, the pion "resolves" to virtual states of its valence quarks, u or d, which then couple to two real photons, to which the pion thus decays with a given width (/lifetime) thus computed. The lifetime discussion never applied to the quarks, but only to the size of the amplitude represented by this diagram/process. This size eventually determines the probability of decay per unit of time, related to the lifetime.
{ "domain": "physics.stackexchange", "id": 83299, "tags": "standard-model, quantum-chromodynamics, quarks, elementary-particles" }
How to launch a file from computer B to execute on computer A
Question: Hi, I am running some navigation tests with a mobile robot (A) with Ubuntu 12.04.1 and ROS Hydro. I also have ROS on my laptop (B) and want the two computers to communicate so I can launch the different nodes from my laptop. Both computers are on the same network and, using ROS_MASTER_URI and ROS_IP, I am able to see the robot's nodes on my computer. roscore is running on the robot, and I have a launch file on the robot that launches different nodes. What I want is to call that launch file from my laptop (the computations and everything will execute on the robot, but I want to call it from the laptop). Is it necessary to create a package on the laptop just to call the launch file, or is there another way? Sorry for my English, and thanks Originally posted by lfvm0001 on ROS Answers with karma: 3 on 2020-10-06 Post score: 0 Original comments Comment by mgruhler on 2020-10-06: Is SSH not enough? You cannot directly call a launch file on another machine, but you could use the machine tag to launch the nodes on the other computer: http://wiki.ros.org/roslaunch/XML/machine Comment by lfvm0001 on 2020-10-07: Thanks... if I use the machine tag, and I have some nodes that need some .yaml files, do those files need to be on the computer where I'm launching the file? Even if the node is running on the other machine? Answer: Thanks to mgruhler for the response! As he commented, what I wanted (to directly call a launch file from another computer) is not possible. However, as recommended, I used the machine tag in my launch file to tell all the nodes to run on computer A, and then ran the file from computer B. And it's working as intended. Thanks Originally posted by lfvm0001 with karma: 3 on 2020-10-07 This answer was ACCEPTED on the original site Post score: 0
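For illustration, a minimal launch file using the machine tag might look like the sketch below. The hostname, user, package and node names are placeholders I made up, not details from the question; the env-loader script path depends on your installation:

```xml
<launch>
  <!-- Computer A (the robot); address, user and env-loader are placeholder values -->
  <machine name="robot" address="robot-hostname" user="robot-user"
           env-loader="/opt/ros/hydro/env.sh" default="true" />

  <!-- Because the machine above is the default, this (hypothetical) node starts
       on the robot even though roslaunch is invoked on the laptop (computer B) -->
  <node pkg="my_nav_pkg" type="navigation_node" name="navigation" />
</launch>
```

Regarding the .yaml question in the comments: as far as I understand, roslaunch reads parameter files on the machine where it is invoked, so in this setup they would need to be present on the laptop.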
{ "domain": "robotics.stackexchange", "id": 35603, "tags": "ros, roslaunch, ros-hydro, multiple-machines" }
Dark Energy / Accelerating universe: naive question
Question: Folks, I have a naive question regarding the subject of dark energy and an accelerating universe: From what I understand/read, it seems that the further we look out into deep space, the faster the objects are moving away from us - in all directions. Is this basically what is meant by "accelerating universe"? Because it seems that this situation is exactly what we should expect to see (according to Big Bang theory). The further out we peer into space, the further back in time we are "seeing", so wouldn't we expect to see higher rates of acceleration/expansion the further back in time we peer? Answer: We indeed expect that the universe expands on the basis of the Big Bang theory. Hence by looking at higher redshift, i.e. further in time, you should expect objects to recede faster. This is reflected in the linear relationshift that first measured by Hubble $$v=H_0D, $$ where $v$ is the recession speed, $D$ the distance to the object and $H_0$ the Hubble constant. However from the observation of SNIa, that completes the plot for higher redshifts, the linear relationshift deviates! It suggest that the relation between velocity $v$ and the distance $D$ is no longer linear for higher $z$. The expansion is accelerating. This acceleration depends on the geometry and energy content of the universe. Take a look at the graph below: in a empty universe, as you noted, the recession velocities to deviate from the linear Hubble relation and it is indicated by a straight line. The measurement from the SN Ia however show that the universe deviates from this relation due to the presence of a repulsive energy component that result in an accelerated expansion of the universe to account for the observed recession velocities. This repulsive energy content is called dark energy and is still highly mysterious.
{ "domain": "physics.stackexchange", "id": 14136, "tags": "cosmology, acceleration, universe, dark-energy" }
Authentication and session creation
Question: My session controller has a method for creating a new user session. According to Rubocop's output, the 'Assignment Branch Condition' metric is too high [15.17/15].

def create
  agent = Agent.find_by(login: params[:session][:login])
  if agent && agent.authenticate(params[:session][:password])
    log_in agent
    redirect_to dashboard_url
  else
    flash.now[:danger] = 'Invalid login or password'
    render 'new'
  end
end

log_in method:

def log_in(agent)
  session[:agent_id] = agent.id
end

I've extracted a method for processing unsuccessful logins to reduce the ABC size:

def create
  agent = Agent.find_by(login: params[:session][:login])
  if agent && agent.authenticate(params[:session][:password])
    log_in agent
    redirect_to dashboard_url
  else
    unsuccessful_login 'Invalid login or password'
  end
end

private

def unsuccessful_login(message)
  flash.now[:danger] = message
  render 'new'
end

Is it appropriate to make a new method I'm going to use only once in this case? Is there any guideline for when it's suitable to extract a method?
{ "domain": "codereview.stackexchange", "id": 16444, "tags": "ruby, ruby-on-rails, comparative-review, authentication, session" }
Where can I find worldwide standards of manganese mining?
Question: Hi everybody who sees this post, I need urgent help finding all official worldwide standards for manganese mining. The problem is that many houses have been damaged in my village by manganese mining, mainly because of underground explosions. The company which runs this type of operation does not make any information about standards public. So my questions are: Where can I find an OFFICIAL document which describes a calculation formula for how far a house should be from the mine for it to be considered officially damaged, so that one can ask for compensation from the company? Is it possible for a company to have its own standard when it comes to damaging properties, I mean their own custom formulas or something like that? Are there any websites which list the countries which should follow worldwide standards of mining? Answer: Your issue is an issue for all types of mining near residential or built-up areas, not just manganese mining. The distance that mines should be from built-up areas depends on:

- the competency of the ground - whether the ground is hard or soft
- the type of explosive used - more particularly, the energy released by the explosives used. This influences the blast vibrations produced. The current Australian and New Zealand recommendation is a maximum blast vibration of 10 mm/s in residential areas, preferably less than 2.5 mm/s.
- the amount of explosives detonated at a time. Many smaller blasts are usually better than one very large blast.
- the presence of geological structures that may preferentially carry and appear to amplify blast energy

The trouble with laws and standards for such things is they vary between mining jurisdictions. Even within countries, such as the US, Canada and Australia, they can vary between states or provinces. Such countries do not have national legislation for the operation of mines. Each state or province is responsible for such laws within its jurisdiction.
In the 1980s, in the State of New South Wales in Australia, to protect lakes and dams from unexpected drainage, and also to protect underground coal mines and the people that work in them from water inundation from lakes, dams or other such large stores of surface water, a 45 degree angle rule was used. From the edge of the lake draw a 45 degree line downwards. Where that line intersects the horizontal, or near horizontal, coal seam defines the closest proximity the mine was allowed to get to the surface lake or dam. Because of the geometry of a 45 degree triangle, both of the shorter sides are the same length. In the case of this rule, the distance from the edge of the lake or dam is equal to the depth of mining. One of the problems with this rule is that coal mines in Australia do not use explosives. Coal is mined using cutting machines via longwall mining. In the City of Ballarat, Australia, gold mining resumed in the 1990s, after the closure of mines during the 1914 to 1918 world war because of a labor shortage. The mine operates underneath the city of 105 000 people. Under the licence to operate, from the State Government of Victoria: "Our licence conditions state that the vibration limit for blasting is 10 mm/sec and that 95% of all blasts must be below 5 mm/sec." The mine has a self-imposed limit of 2.5 mm/s. Existing underground mine development, in blue, as of July 2020. Under a proposed expansion to the mine, the newer region will be approximately 50 m below the surface. See the sectional diagram at the end of the webpage. The surface is at elevation RL 1205 and the top of the Nick O' Time Shoot is at elevation RL 1150. I suspect this region will be under fields or forest. Generally, for mines under urban environments, 100 m is the closest that mines come to the foundations of buildings.
This is not primarily due to blasting waves and seismic activity from blasting, but the requirement to maintain a thickness of competent rock beneath buildings to minimize the potential for future subsidence. Factors that are considered in this thickness are:

- The strength and competency of the ground below the buildings
- The largest size of opening that will be developed underground
- Whether the stopes, the chambers from where ore is mined, are backfilled once the ore has been mined
- If the stopes are backfilled, with what they will be backfilled: loose waste rock, cemented waste rock, loose sand, cemented sand or paste fill (larger grained tailings from the processing plant that is mixed with cement so it resembles toothpaste and is pumped into the mined stopes)
- The degree to which the stopes will be backfilled. Except for the placement of paste fill, stopes cannot be fully backfilled because of operational restrictions when other methods are used. There is usually an air gap of between 2 and 5 m in the top portion of the stope that cannot be filled.

Backfill prevents the walls of the stopes from collapsing and it minimizes the amount of subsidence that can occur above the stopes. In your situation, something else to be wary of is exposure to manganese dust from the mine or processing plant. The human body requires small amounts of manganese, but too much can be toxic. Excessive exposure can lead to health problems with the respiratory tract and/or the brain. Manganese effects occur mainly in the respiratory tract and in the brain. Symptoms of manganese poisoning are hallucinations, forgetfulness and nerve damage. Manganese can also cause Parkinson's disease, lung embolism and bronchitis. When men are exposed to manganese for a longer period of time they may become impotent. A syndrome that is caused by manganese has symptoms such as schizophrenia, dullness, weak muscles, headaches and insomnia.
Additional references concerning exposure to excessive amounts of manganese: National Institutes of Health; Impact of open manganese mines on the health of children dwelling in the surrounding area; Centers for Disease Control and Prevention (USA); World Health Organization

Edit 5 September 2020

There is no formula, simple or complex, that will let you calculate how close an active underground mining region can be to buildings on the surface. The reasons for this are:

- Geology is complex: different rock types, strength of rock masses laterally and at depth, geological structures such as discontinuities, faults and folds.
- How the ground propagates blast energy.
- The magnitude of the blast energy produced during mining.
- Quality of construction of surface buildings: flexibility and rigidity.

Unlike steel, rock is not uniform in its properties everywhere. Various types of steel are made according to a recipe: so much iron, so much carbon, so much nickel or chromium. Steel is also given different types of treatments when produced, such as hot or cold quenching, or forging. All this affects the strength and other properties of various types of steel. When constantly made to the same recipe, each type of steel can be tested to determine its properties. With this knowledge, structural and civil engineers can design a building, or any other structure, anywhere in the world with confidence, knowing the steel will always behave the same way. Likewise for mechanical engineers when they design parts for machines. This cannot be said of geological material, such as rock. Limestone behaves differently to sandstone, which behaves differently to basalt, which behaves differently to komatiite or granite. Even the same type of rock can behave differently in different locations. Discontinuities within rock, oxidation, weathering, the effect of water over prolonged periods of time, and the effect of ground stresses can all change how a type of rock will behave in different locations.
Unlike structural and mechanical engineers, mining engineers cannot have the same level of confidence in the properties of the materials (different rock types) they use. With experience they know that a certain type of rock will behave in a certain way, but that may not be totally applicable elsewhere. Because of this, it is not possible to create a formula that can be used everywhere that will state how far an active underground mining region must be from surface buildings. The other factor which would need to be considered is the manner of construction of the buildings near the mine. Buildings that are rigid, made of rock or brick, will generally experience more damage, if only just cracked walls, than flexible buildings made of timber or bamboo. Flexible buildings can move to a certain degree, through swaying, when subjected to forces such as blasting energy, seismic shocks from natural earthquakes and very strong winds. This movement can absorb some of the disruptive energy and the building remains intact. Rigid buildings have less opportunity to move when subjected to disruptive forces, so they have to absorb more of the disruptive energies, and in doing so they are more likely to crack and collapse. This is why buildings in earthquake prone regions (such as Italy), or cyclone/hurricane prone regions (such as northern Australia), are now built according to an earthquake or cyclone/hurricane code where reinforcing steel is utilized to increase the flexibility of the completed building. From personal experience, I have seen newer, rigidly made houses, such as the one pictured below, experience cracked walls and other damage from underground mine blasting where the house was 2.5 km laterally from a mine and the blast was 500 m below the surface. The active mining zone was 1.5 km laterally from the house. An older, more flexible house, shown below, was only 1 km from the mine and it experienced no damage.
Edit 8 January 2024 Additional information is available in my answer to the question, How far should a Manganese processing plant be built from a city?
{ "domain": "earthscience.stackexchange", "id": 2108, "tags": "geology, atmosphere, air-pollution, mining" }
Difference between a digital lock-in amplifier and a FFT when extracting phase of a signal?
Question: I am trying to measure the relative phase of a sine wave fixed at a particular frequency in a noisy environment. My initial approach is to simply collect $N$ samples, take an FFT, and then just extract the phase at the operating frequency (which I know a priori). When reading about methods to extract signals from noisy environments, I came across the lock-in amplifier. However, I am confused about whether I should have any reason to expect it to perform better than the FFT (ignoring run time, just looking at the ability to extract my phase). Specifically, imagine that I implement the lock-in amplifier digitally. I would then numerically calculate the following two integrals (source): \begin{align} X_{LI} &= \frac{1}{T}\int\limits^{T}_{0} U_{\rm in}(s) \cos(2\pi f s) \,ds\\ Y_{LI} &= \frac{1}{T}\int\limits^{T}_{0} U_{\rm in}(s) \sin(2\pi f s) \,ds \end{align} and then I would get my phase using: $$ \theta_{LI} = \tan^{-1}\left(\frac{Y_{LI}}{X_{LI}}\right) $$ However, if I were to use an FFT, the FFT would evaluate \begin{align} X_{FFT}(f) + iY_{FFT}(f) &= \int\limits_0^T U_{\rm in}(s)e^{-2\pi i s f}\,ds\\ &=\int\limits_0^T U_{\rm in}(s)\cos(2\pi f s)\,ds + i \int\limits_0^T U_{\rm in}(s)\sin(2\pi f s)\,ds \end{align} Which seems to imply that, just as with the lock-in amplifier, I would get \begin{align} X_{FFT} &= \int^{T}_{0} U_{\rm in}(s) \cos(2\pi f s)\,ds = T\,X_{LI}\\ Y_{FFT} &= \int^{T}_{0} U_{\rm in}(s) \sin(2\pi f s)\,ds = T\,Y_{LI}\\ \theta_{FFT} &= \tan^{-1}\left(\frac{Y_{FFT}}{X_{FFT}}\right) = \tan^{-1}\left(\frac{Y_{LI}}{X_{LI}}\right) = \theta_{LI} \end{align} So does this mean that I would always get the same result whether I use a digital lock-in amplifier, or a digital FFT on the same data set? What is the benefit of using a lock-in amplifier? Is there no advantage to the lock-in amp in my application? What is then an example of an application that the lock-in amplifier is best suited for?
Answer: Is there a difference between the FFT and a lock-in amp? Yes, two of them:

1. The FFT assumes the signal at its input to be periodic. What this means about your FFT integrals is that they are missing a phase variable (let's call it $\phi_r$) which will be random, because it depends on the ratio of your signal's period (or its frequency) to the window of the FFT, and on the initial phase at which the FFT will "catch" the signal on the first frame (and any subsequent phase changes, whether desired or not). By the way, phase is integrated frequency. Consequently, your $\theta_{LI}$ will depend on that $\phi_r$ (which is all over the place). Intuitively now, the FFT assumes the signal to be periodic. That is, it repeats in the same way both to the left and to the right of the observation window. This is "alright" if the window happens to be an integer multiple of the signal's period, but this is extremely unlikely because of noise and possible changes in the phase of the signal. It will therefore sound like a whistle with regular "pops" because of the discontinuities, and at each "pop" the phase estimation will be disturbed. There are workarounds to this (and this one), and this brings us to the second difference.

2. There is a concept in the FFT called Spectral Resolution. Spectral resolution relates the physical frequencies (in Hz) with each of the distinct harmonics that the discrete FT evaluates its integrals at. Therefore, you would have to accurately calculate the length of the window of the FFT with respect to the sampling frequency and the frequency of the input signal so that it lands exactly on one of the bins. And after doing this, of course, it would seem a waste to be evaluating the FT for all harmonics when all you are interested in is just one of them.

Having said this, let's try to tackle the rest of the questions: So does this mean that I would always get the same result whether I use a digital lock-in amplifier, or a digital FFT on the same data set?
Provided that the above details are taken care of, the result will be almost identical. I say almost because there is another consequence from #1 above that is not exactly obvious. Even if you were to use overlap-add or overlap-save, you would still have no control over the initial phase at which the FFT would "catch" the incoming signal. And once the FFT "starts", its own local oscillators are not going to adapt to the incoming signal. Therefore, both techniques would provide information about phase, but the FFT's estimate would suffer a systematic error proportional to the initial phase of the incoming signal. The lock-in amp takes care of this with its Phase-Locked Loop (PLL). For (much) more information, please see this link. What is the benefit of using a lock-in amplifier? Should be becoming clearer by now. A lock-in amp can return amplitude, power and phase information at one specific frequency, with the accuracy of its estimate depending on the clarity of the reference signal (the local oscillator). Is there no advantage to the lock-in amp in my application? On the contrary, there is a massive advantage. And this advantage comes from the use of the PLL. The PLL is an automatic control system that adjusts the frequency (and therefore phase) of a local oscillator to match the frequency and phase of the incoming signal. It therefore can "adjust its frame of reference" and it returns much more accurate phase information. What is then an example of an application that the lock-in amplifier is best suited for? Please see the PDF linked above. Hope this helps.
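The "almost identical" claim is easy to check numerically. Below is a Python sketch with assumed parameters (sample rate, frequency, noise level and seed are my own choices); N is deliberately chosen so that the window holds an integer number of signal periods, i.e. f lands exactly on bin k = f·N/fs, which is the spectral-resolution condition from point 2:

```python
import numpy as np

fs = 1000.0        # sample rate in Hz (assumed)
f = 50.0           # known signal frequency (assumed)
true_phase = 0.7   # radians, what we are trying to recover
N = 1000           # 1 s window = exactly 50 periods, so f sits on a bin

t = np.arange(N) / fs
rng = np.random.default_rng(0)
u = np.cos(2 * np.pi * f * t + true_phase) + 0.1 * rng.standard_normal(N)

# Digital lock-in: correlate against quadrature references at f
X = np.mean(u * np.cos(2 * np.pi * f * t))
Y = np.mean(u * np.sin(2 * np.pi * f * t))
phase_lockin = np.arctan2(-Y, X)  # sign chosen so the result is the cosine's phase

# FFT: read the phase off bin k. The bin value is just N * (X - 1j * Y),
# so on this data the two estimates are algebraically the same number.
k = int(round(f * N / fs))
phase_fft = np.angle(np.fft.fft(u)[k])
```

Both estimates land within a few milliradians of true_phase here. If the window did not hold an integer number of periods, reading the nearest FFT bin would pick up the leakage error described in point 1 above.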
{ "domain": "dsp.stackexchange", "id": 3886, "tags": "fft, noise" }
Counting number of messages for a mailbox over several periods
Question: The idea is to return the number of messages received today, this week, and this month.

def index
  @mailboxes = current_user.mailboxes
  @today, @month, @week, @all_time = 0, 0, 0, 0
  @mailboxes.each do |mailbox|
    @today += mailbox.messages.today.length
    @month += mailbox.messages.week.length
    @week += mailbox.messages.month.length
    @all_time += mailbox.messages.length
  end
end

The week and month scopes are similar to the following one:

scope :today, -> { where(created_at: ((Time.zone.now - 24.hours)..Time.zone.now)) }
{ "domain": "codereview.stackexchange", "id": 27053, "tags": "ruby, ruby-on-rails, active-record" }
How can dilation of a wavelet function lead to its sign reversal?
Question: I am studying wavelets and it has been given that $$ \psi_{a,b} = \frac{1}{\sqrt{|a|}} \psi \left(\frac{t-b}{a}\right) $$ Now the function $$ \psi(t)= \begin{cases} 1,& \text{if } 0\leq t<\frac 12\\ -1, & \text{if } \frac 12\leq t<1\\ 0& \text{otherwise} \end{cases} $$ is given, in terms of the previous equation, as $$ \psi_{a,b}= \frac {1} {\sqrt{a}}\left[u(t-b)-2u\left(t-b-\frac a2\right)+u(t-b-a)\right] $$ when $a>0$ and $$ \psi_{a,b}=- \frac {1} {\sqrt{-a}}\left[u(t-b)-2u\left(t-b-\frac a2\right)+u(t-b-a)\right] $$ when $a<0$. My issue is, how can $a$ which is a dilation parameter lead to something like a negative function when $a<0$? Answer: Simply put, $\frac{t}{-a} =\frac{-t}{a}$.
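To spell out the answer's point (this expansion is my own, using the Haar definition above): for $a<0$, write $a=-|a|$, so that

$$\frac{t-b}{a}=-\frac{t-b}{|a|}.$$

As $t$ increases, the argument of $\psi$ therefore decreases, i.e. the wavelet is traversed in reverse. Because the Haar wavelet is antisymmetric about its midpoint,

$$\psi(1-s)=-\psi(s)\quad\text{for almost every }s,$$

a time reversal amounts (up to a translation of the support) to an overall sign change, which is exactly the leading minus sign in the $a<0$ formula.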
{ "domain": "dsp.stackexchange", "id": 4510, "tags": "signal-analysis, wavelet" }
Why aren't animals diverse in phenotype?
Question: I am not comparing a cat with a leopard. I am just saying that we humans are all one type of creature and we are diverse (I am not saying we are class of mammals and phylum of etc and kingdom etc, because my religion doesn't believe in it). So consider the class of cats: they are one type of species, so why aren't they diverse in phenotype like us? Why do other animals, plants, unicellular organisms not have diversity in their phenotype, and how can they recognize each other, like a bird always brings food to his offspring and it can't make a mistake by giving it to other offspring of its own species? So can I say they aren't diverse because in their meiosis division their chromosomes don't cross over and randomly assort (alignment)? Answer: In your question, your assumption that animal species are less diverse phenotypically than humans is wrong. I am sure you will appreciate @terdon's answer to this post and @rg255's answer to this post. Don't forget that we are good at detecting differences among humans (because we evolved for this purpose). We do much worse at telling apart animals of other species simply because we have not evolved for this purpose. This is the reason why we tend to see human faces when looking at clouds but we rarely see sheep faces! Several studies (here and here) showed that sheep are able to recognize each other (and we even know the number of neurones needed to remember one face). They are probably better at telling two sheep apart than at telling two humans apart. Another interesting fact is the so-called cross-race effect. We humans are better at recognizing faces of people from our own ethnic group than faces of people from other ethnic groups. For example, a Japanese person is very good at recognizing Japanese faces but not as good at recognizing European faces. The same is true the other way around. As @user568459 said in the comments: some people are not able to recognize faces.
This is due to a cognitive condition called prosopagnosia (also called face blindness). Those suffering from this condition are not better at recognizing sheep faces than human faces. So consider the class of cats they are one type of species so why aren't they diverse in phenotype like us? There is, I think, no good definition of phenotypic diversity (no accurate and objective index to measure it), but at first sight I would tend to think that cats are more diverse than humans. One of the main features one would probably raise when talking about human diversity is skin color. And in terms of color, cats are much more diverse than humans. You may think of Norwegians, who are on average taller than Indonesians by several centimeters (I may not have chosen the two extremes), as an example of extraordinary diversity, but think about cats! The average cat weighs 4 to 5 kg, but some cats weigh less than 2 kg and some others (like the coon cat) weigh more than 10 kg (World Record: 21.3 kg). Imagine a human ethnic group that would on average weigh 5 times more than another ethnic group! And think also about cats' hair length or tail shape! Humans vary in terms of facial features (lip size, nose shape, etc.); so do cats. Some look like their face was smashed against a wall while others have a long muzzle. Again I welcome you to have a look at this post. how they can recognize each other, like a bird always brings food to his offspring and it can't make a mistake by giving it to other offspring of its own species? As I said above, humans evolved to recognize their own. Many species also evolved in order to recognize their own. In some species individuals use smell rather than visual features in order to recognize each other (odor is also a kind of phenotypic variation). But still, some species are poor at recognizing each other. For a bird, it seems rather easy to not feed the wrong individual, as all their offspring are usually together in the same nest.
However, you might be interested in the lifestyle of the cuckoo, which parasitizes the nests of other bird species. Cuckoos' babies, and particularly the inner beak, resemble the babies of the species they parasitize, and often the parents (often only the mother is involved in feeding the young) get fooled and feed the cuckoo. So can I say they aren't diverse because in their meiosis division their chromosomes don't cross over and randomly assort (alignment)? No, you can't say that! Because they are diverse, and because for many of the species you may think about, crossing over does occur. Their genetic diversity as well as their phenotypic diversity is as high as in humans. There is nothing extraordinary about humans (except their brain and the related fact that we predigest our food by cooking it) compared to other lineages. And there is nothing extraordinary about having one remarkable feature (such as a big brain) that you can't find in other lineages! Many lineages are extraordinary in some sense.
{ "domain": "biology.stackexchange", "id": 2415, "tags": "biodiversity" }
Checking image size in C++
Question: This is a follow-up question for 3D Inverse Discrete Cosine Transformation Implementation in C++. After checking G. Sliepen's answer, I am trying to update the width and height checking part of the Image class. Instead of using macros, several template functions (is_width_same, is_height_same, is_size_same, assert_width_same, assert_height_same, assert_size_same and check_size_same) are proposed in this post. The experimental implementation: is_width_same template functions implementation: template<typename ElementT> constexpr bool is_width_same(const Image<ElementT>& x, const Image<ElementT>& y) { return x.getWidth() == y.getWidth(); } template<typename ElementT> constexpr bool is_width_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { return is_width_same(x, y) && is_width_same(y, z) && is_width_same(x, z); } is_height_same template functions implementation: template<typename ElementT> constexpr bool is_height_same(const Image<ElementT>& x, const Image<ElementT>& y) { return x.getHeight() == y.getHeight(); } template<typename ElementT> constexpr bool is_height_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { return is_height_same(x, y) && is_height_same(y, z) && is_height_same(x, z); } is_size_same template functions implementation: template<typename ElementT> constexpr bool is_size_same(const Image<ElementT>& x, const Image<ElementT>& y) { return is_width_same(x, y) && is_height_same(x, y); } template<typename ElementT> constexpr bool is_size_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { return is_size_same(x, y) && is_size_same(y, z) && is_size_same(x, z); } assert_width_same template functions implementation: wrap the is_width_same function with assert.
template<typename ElementT> constexpr void assert_width_same(const Image<ElementT>& x, const Image<ElementT>& y) { assert(is_width_same(x, y)); } template<typename ElementT> constexpr void assert_width_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { assert(is_width_same(x, y, z)); } assert_height_same template function implementation: wrap the is_height_same function with assert. template<typename ElementT> constexpr void assert_height_same(const Image<ElementT>& x, const Image<ElementT>& y) { assert(is_height_same(x, y)); } template<typename ElementT> constexpr void assert_height_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { assert(is_height_same(x, y, z)); } assert_size_same template function implementation: template<typename ElementT> constexpr void assert_size_same(const Image<ElementT>& x, const Image<ElementT>& y) { assert_width_same(x, y); assert_height_same(x, y); } template<typename ElementT> constexpr void assert_size_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { assert_size_same(x, y); assert_size_same(y, z); assert_size_same(x, z); } check_size_same template function implementation: template<typename ElementT> constexpr void check_size_same(const Image<ElementT>& x, const Image<ElementT>& y) { if (!is_width_same(x, y)) throw std::runtime_error("Width mismatched!"); if (!is_height_same(x, y)) throw std::runtime_error("Height mismatched!"); } The updated version of the Image class: operator<< overloading updated. checkBoundary function implementation updated. I am trying to add a constructor with an rvalue reference Image(std::vector<ElementT>&& input, std::size_t newWidth, std::size_t newHeight). Please also take a look at this part.
template <typename ElementT> class Image { public: Image() = default; Image(const std::size_t width, const std::size_t height): width(width), height(height), image_data(width * height) { } Image(const std::size_t width, const std::size_t height, const ElementT initVal): width(width), height(height), image_data(width * height, initVal) {} Image(const std::vector<ElementT>& input, std::size_t newWidth, std::size_t newHeight): width(newWidth), height(newHeight) { if (input.size() != newWidth * newHeight) { throw std::runtime_error("Image data input and the given size are mismatched!"); } image_data = input; } Image(std::vector<ElementT>&& input, std::size_t newWidth, std::size_t newHeight): width(newWidth), height(newHeight) { if (input.size() != newWidth * newHeight) { throw std::runtime_error("Image data input and the given size are mismatched!"); } image_data = std::move(input); } constexpr ElementT& at(const unsigned int x, const unsigned int y) { checkBoundary(x, y); return image_data[y * width + x]; } constexpr ElementT const& at(const unsigned int x, const unsigned int y) const { checkBoundary(x, y); return image_data[y * width + x]; } constexpr std::size_t getWidth() const { return width; } constexpr std::size_t getHeight() const noexcept { return height; } constexpr auto getSize() noexcept { return std::make_tuple(width, height); } std::vector<ElementT> const& getImageData() const noexcept { return image_data; } // expose the internal data void print(std::string separator = "\t", std::ostream& os = std::cout) const { for (std::size_t y = 0; y < height; ++y) { for (std::size_t x = 0; x < width; ++x) { // Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number os << +at(x, y) << separator; } os << "\n"; } os << "\n"; return; } // Enable this function if ElementT = RGB void print(std::string separator = "\t", std::ostream& os = std::cout) const requires(std::same_as<ElementT, RGB>) { for (std::size_t y = 0; y < height; ++y) { for (std::size_t x 
= 0; x < width; ++x) { os << "( "; for (std::size_t channel_index = 0; channel_index < 3; ++channel_index) { // Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number os << +at(x, y).channels[channel_index] << separator; } os << ")" << separator; } os << "\n"; } os << "\n"; return; } friend std::ostream& operator<<(std::ostream& os, const Image<ElementT>& rhs) { const std::string separator = "\t"; rhs.print(separator, os); return os; } Image<ElementT>& operator+=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::plus<>{}); return *this; } Image<ElementT>& operator-=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::minus<>{}); return *this; } Image<ElementT>& operator*=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::multiplies<>{}); return *this; } Image<ElementT>& operator/=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::divides<>{}); return *this; } friend bool operator==(Image<ElementT> const&, Image<ElementT> const&) = default; friend bool operator!=(Image<ElementT> const&, Image<ElementT> const&) = default; friend Image<ElementT> operator+(Image<ElementT> input1, const Image<ElementT>& input2) { return input1 += input2; } friend 
Image<ElementT> operator-(Image<ElementT> input1, const Image<ElementT>& input2) { return input1 -= input2; } Image<ElementT>& operator=(Image<ElementT> const& input) = default; // Copy Assign Image<ElementT>& operator=(Image<ElementT>&& other) = default; // Move Assign Image(const Image<ElementT> &input) = default; // Copy Constructor Image(Image<ElementT> &&input) = default; // Move Constructor private: std::size_t width; std::size_t height; std::vector<ElementT> image_data; void checkBoundary(const size_t x, const size_t y) const { if (x >= width) throw std::out_of_range("Given x out of range!"); if (y >= height) throw std::out_of_range("Given y out of range!"); } }; Full Testing Code #include <algorithm> #include <cassert> #include <chrono> #include <cmath> #include <concepts> #include <cstdint> #include <exception> #include <fstream> #include <functional> #include <iostream> #include <iterator> #include <numbers> #include <numeric> #include <ranges> #include <string> #include <type_traits> #include <utility> #include <vector> struct RGB { std::uint8_t channels[3]; }; using GrayScale = std::uint8_t; namespace TinyDIP { template <typename ElementT> class Image { public: Image() = default; Image(const std::size_t width, const std::size_t height): width(width), height(height), image_data(width * height) { } Image(const std::size_t width, const std::size_t height, const ElementT initVal): width(width), height(height), image_data(width * height, initVal) {} Image(const std::vector<ElementT>& input, std::size_t newWidth, std::size_t newHeight): width(newWidth), height(newHeight) { if (input.size() != newWidth * newHeight) { throw std::runtime_error("Image data input and the given size are mismatched!"); } image_data = input; } Image(std::vector<ElementT>&& input, std::size_t newWidth, std::size_t newHeight): width(newWidth), height(newHeight) { if (input.size() != newWidth * newHeight) { throw std::runtime_error("Image data input and the given size are mismatched!"); 
} image_data = std::move(input); } constexpr ElementT& at(const unsigned int x, const unsigned int y) { checkBoundary(x, y); return image_data[y * width + x]; } constexpr ElementT const& at(const unsigned int x, const unsigned int y) const { checkBoundary(x, y); return image_data[y * width + x]; } constexpr std::size_t getWidth() const { return width; } constexpr std::size_t getHeight() const noexcept { return height; } constexpr auto getSize() noexcept { return std::make_tuple(width, height); } std::vector<ElementT> const& getImageData() const noexcept { return image_data; } // expose the internal data void print(std::string separator = "\t", std::ostream& os = std::cout) const { for (std::size_t y = 0; y < height; ++y) { for (std::size_t x = 0; x < width; ++x) { // Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number os << +at(x, y) << separator; } os << "\n"; } os << "\n"; return; } // Enable this function if ElementT = RGB void print(std::string separator = "\t", std::ostream& os = std::cout) const requires(std::same_as<ElementT, RGB>) { for (std::size_t y = 0; y < height; ++y) { for (std::size_t x = 0; x < width; ++x) { os << "( "; for (std::size_t channel_index = 0; channel_index < 3; ++channel_index) { // Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number os << +at(x, y).channels[channel_index] << separator; } os << ")" << separator; } os << "\n"; } os << "\n"; return; } friend std::ostream& operator<<(std::ostream& os, const Image<ElementT>& rhs) { const std::string separator = "\t"; rhs.print(separator, os); return os; } Image<ElementT>& operator+=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::plus<>{}); return *this; } Image<ElementT>& operator-=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); 
assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::minus<>{}); return *this; } Image<ElementT>& operator*=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::multiplies<>{}); return *this; } Image<ElementT>& operator/=(const Image<ElementT>& rhs) { assert(rhs.width == this->width); assert(rhs.height == this->height); std::transform(std::ranges::cbegin(image_data), std::ranges::cend(image_data), std::ranges::cbegin(rhs.image_data), std::ranges::begin(image_data), std::divides<>{}); return *this; } friend bool operator==(Image<ElementT> const&, Image<ElementT> const&) = default; friend bool operator!=(Image<ElementT> const&, Image<ElementT> const&) = default; friend Image<ElementT> operator+(Image<ElementT> input1, const Image<ElementT>& input2) { return input1 += input2; } friend Image<ElementT> operator-(Image<ElementT> input1, const Image<ElementT>& input2) { return input1 -= input2; } Image<ElementT>& operator=(Image<ElementT> const& input) = default; // Copy Assign Image<ElementT>& operator=(Image<ElementT>&& other) = default; // Move Assign Image(const Image<ElementT> &input) = default; // Copy Constructor Image(Image<ElementT> &&input) = default; // Move Constructor private: std::size_t width; std::size_t height; std::vector<ElementT> image_data; void checkBoundary(const size_t x, const size_t y) const { if (x >= width) throw std::out_of_range("Given x out of range!"); if (y >= height) throw std::out_of_range("Given y out of range!"); } }; template<typename ElementT> constexpr bool is_width_same(const Image<ElementT>& x, const Image<ElementT>& y) { return x.getWidth() == y.getWidth(); } template<typename ElementT> constexpr 
bool is_width_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { return is_width_same(x, y) && is_width_same(y, z) && is_width_same(x, z); } template<typename ElementT> constexpr bool is_height_same(const Image<ElementT>& x, const Image<ElementT>& y) { return x.getHeight() == y.getHeight(); } template<typename ElementT> constexpr bool is_height_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { return is_height_same(x, y) && is_height_same(y, z) && is_height_same(x, z); } template<typename ElementT> constexpr bool is_size_same(const Image<ElementT>& x, const Image<ElementT>& y) { return is_width_same(x, y) && is_height_same(x, y); } template<typename ElementT> constexpr bool is_size_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { return is_size_same(x, y) && is_size_same(y, z) && is_size_same(x, z); } template<typename ElementT> constexpr void assert_width_same(const Image<ElementT>& x, const Image<ElementT>& y) { assert(is_width_same(x, y)); } template<typename ElementT> constexpr void assert_width_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { assert(is_width_same(x, y, z)); } template<typename ElementT> constexpr void assert_height_same(const Image<ElementT>& x, const Image<ElementT>& y) { assert(is_height_same(x, y)); } template<typename ElementT> constexpr void assert_height_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { assert(is_height_same(x, y, z)); } template<typename ElementT> constexpr void assert_size_same(const Image<ElementT>& x, const Image<ElementT>& y) { assert_width_same(x, y); assert_height_same(x, y); } template<typename ElementT> constexpr void assert_size_same(const Image<ElementT>& x, const Image<ElementT>& y, const Image<ElementT>& z) { assert_size_same(x, y); assert_size_same(y, z); assert_size_same(x, z); } template<typename ElementT> constexpr void 
check_size_same(const Image<ElementT>& x, const Image<ElementT>& y) { if (!is_width_same(x, y)) throw std::runtime_error("Width mismatched!"); if (!is_height_same(x, y)) throw std::runtime_error("Height mismatched!"); } } void checkSizeTest(const std::size_t xsize, const std::size_t ysize) { auto image1 = TinyDIP::Image<std::uint8_t>(xsize, ysize); auto image2 = TinyDIP::Image<std::uint8_t>(xsize + 1, ysize); auto image3 = TinyDIP::Image<std::uint8_t>(xsize, ysize + 1); auto image4 = TinyDIP::Image<std::uint8_t>(xsize + 1, ysize + 1); check_size_same(image1, image1); return; } int main() { auto start = std::chrono::system_clock::now(); checkSizeTest(8, 8); auto end = std::chrono::system_clock::now(); std::chrono::duration<double> elapsed_seconds = end - start; std::time_t end_time = std::chrono::system_clock::to_time_t(end); std::cout << "Computation finished at " << std::ctime(&end_time) << "elapsed time: " << elapsed_seconds.count() << '\n'; return 0; } A Godbolt link is here. TinyDIP on GitHub. All suggestions are welcome. The summary information: Which question is it a follow-up to? 3D Inverse Discrete Cosine Transformation Implementation in C++ What changes have been made in the code since the last question? Several template functions (is_width_same, is_height_same, is_size_same, assert_width_same, assert_height_same, assert_size_same and check_size_same) are proposed in this post. Why is a new review being asked for? If there is any possible improvement, please let me know. Answer: Make it work for any number of images I see you have overloads for comparing two and for comparing three images. But that immediately makes me think: why not compare four? Or more? You can make a variadic function that uses a fold expression to check the dimensions of an arbitrary number of images with each other: template<typename T, typename... Ts> constexpr bool is_width_same(const Image<T>& x, const Image<Ts>&...
y) { return ((x.getWidth() == y.getWidth()) && ...); } As a bonus, this will also allow you to check that the dimensions of two images using a different value type are the same. If you really don't want that, you can add a requires clause to force them to all be the same: requires (std::same_as<T, Ts> && ...) Consider removing the assert_*_same() helpers You are not saving much typing with these helper functions, compare: assert(is_width_same(x, y)); assert_width_same(x, y); It saves only 4 characters, but the drawback is that when the assert triggers, the second one will show a line number inside assert_width_same() instead of the line number of the call site. It is different for check_size_same(); at least on Linux, if an unhandled exception is thrown, it prints the what() but no line number, so nothing is lost by putting the throw statements in a function.
{ "domain": "codereview.stackexchange", "id": 42776, "tags": "c++, image, template, classes, c++20" }
Evaluating functional derivatives
Question: I am new to evaluating functional derivatives and I am having difficulty evaluating the following derivative: $$I = \frac{\delta}{\delta x(t)}\frac{\delta}{\delta x(t')}\int_{u_i}^{u_f}\frac{du}{2}\left(\frac{dx}{du}\right)^2~.$$ I have tried to do it and I obtain the following: $$I = -\frac{\delta}{\delta x(t)}\frac{d^2x}{du^2}\delta(t'-u)~,$$ but I am not sure how to take the second derivative and am actually not sure if what I've done so far is right either. How can this derivative be evaluated? Answer: I am having difficulty evaluating the following derivative: $$I = \frac{\delta}{\delta x(t)}\frac{\delta}{\delta x(t')}\int_{u_i}^{u_f}\frac{du}{2}\left(\frac{dx}{du}\right)^2~.$$ ... How can this derivative be evaluated? I will define a functional called $F$: $$ F[x] = \int_{u_i}^{u_f}\frac{du}{2}\left(\frac{dx}{du}\right)^2 \equiv \int_{u_i}^{u_f}\frac{du}{2}\left(\dot x(u)\right)^2\;. $$ Consider this functional $F$ evaluated at $x(u)+\delta x(u)$, where we are planning to expand in a power series in $\delta x$. We have: $$ F[x+\delta x] = F[x] + \int du \dot x(u)\dot {\delta x(u)} + \frac{1}{2}\int du \dot {\delta x(u)}\dot {\delta x(u)}\tag{A}\;. $$ Here, the power series expansion is exact after three terms since the functional is quadratic. Also, by definition of the first and second functional derivatives we have: $$ F[x+\delta x] = F[x] + \int du {\delta x(u)}\frac{\delta F}{\delta x(u)} +\frac{1}{2!} \int du du' {\delta x(u)}\delta x(u') \frac{\delta^2 F}{\delta x(u)\delta x(u')}+\ldots\;.\tag{B} $$ For example, in order to identify the first functional derivative by comparing Eq. (A) to Eq. (B), we need to put the linear term in Eq. (A) into the correct form by integrating by parts. Similarly, in order to identify the second functional derivative by comparing Eq. (A) to Eq.
(B), we need to insert a delta function and integrate by parts twice $$ \frac{1}{2}\int du \dot {\delta x(u)}\dot{\delta x(u)} = \frac{1}{2}\int du du' \delta(u-u')\dot {\delta x(u)}\dot{\delta x(u')} = -\frac{1}{2}\int du du' \ddot{\delta}(u-u')\delta x(u)\delta x(u') $$ to see that $$ \frac{\delta^2 F}{\delta x(u)\delta x(u')} = -\ddot{\delta}(u-u') $$ This can also be arrived at by first taking the first functional derivative of $F$ and then taking another functional derivative of the result: $$ \frac{\delta F}{\delta x(u)}[x] = -\ddot x(u) = -\int du' \ddot x(u')\delta(u-u') $$ $$ \frac{\delta F}{\delta x(u)}[x+\delta x] = \frac{\delta F}{\delta x(u)}[x]-\int du' \delta(u-u')\ddot {\delta x(u')} $$ $$ =\frac{\delta F}{\delta x(u)}[x] - \int du' \ddot{\delta}(u-u')\delta x(u') $$ thus $$ \frac{\delta }{\delta x(u')}\frac{\delta F}{\delta x(u)} = -\ddot{\delta}(u-u') $$
{ "domain": "physics.stackexchange", "id": 100369, "tags": "homework-and-exercises, lagrangian-formalism, variational-calculus, functional-derivatives" }
Partitioning a set based on binary predicate
Question: Given a collection of objects $X = (x_0,x_1,...,x_{N-1})$ and a binary predicate $F$ which takes as parameters elements of the collection, find a better than $\mathcal{O}(N^2)$ algorithm which partitions the set as: $X= \bigcup X_j$ such that for any $j$ it holds that $F(x_k,x_l) = 1$ if $x_k,x_l \in X_j$ and $F(x_k,x_s) = 0$ if $x_k \in X_j \land x_s \notin X_j$ (boolean values are being used as True/False). My initial guess was to use a divide and conquer algorithm, but I was not able to find a merging algorithm which would make the divide-and-conquer strategy worth it in terms of complexity gain. I imagine this is a standard problem but I am not sure where to look for a solution or if such a solution exists for an arbitrary binary predicate. Answer: There is no deterministic algorithm whose worst-case running time is asymptotically better than $O(N^2)$. One can prove this with an adversarial argument. Consider running the algorithm on the following input: Input #1: $F(x_i,x_i)=1$, and $F(x_i,x_j)=0$ if $i \ne j$. Keep track of the sequence of pairs $(x_i,x_j)$ of objects that $F$ is evaluated on before it terminates. If the running time of the algorithm is $o(N^2)$, then for some sufficiently large $N$ the algorithm must make strictly fewer than $N^2-N$ queries to $F$, so there must be some pair of objects that it doesn't evaluate $F$ on, say $F(x_3,x_7)$. But then consider running the algorithm on the following input: Input #2: Same as input #1, except that $F(x_3,x_7)=1$. Since the algorithm does not evaluate $F(x_3,x_7)$, the algorithm cannot distinguish these two inputs, and must produce the same output on Input #1 and Input #2. However, the correct answer is different for these two inputs. This means that the algorithm's output will be incorrect for at least one of these two inputs. Therefore, any deterministic algorithm whose worst-case running time is asymptotically better than $O(N^2)$ is not correct on all inputs.
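For contrast, here is the straightforward quadratic algorithm that, by the argument above, cannot be beaten in the worst case. This is a hypothetical Python sketch (the names are mine); it assumes $F$ really is consistent with some partition, so comparing each element against one representative per existing block suffices:

```python
def partition(xs, F):
    """Group elements into blocks, assuming F(a, b) == 1 exactly when a and b
    share a block.  One representative comparison per block is enough, which
    is Theta(N^2) comparisons in the worst case (all blocks singletons)."""
    blocks = []
    for x in xs:
        for block in blocks:
            if F(block[0], x):   # compare against the block's representative
                block.append(x)
                break
        else:                    # no block matched: x starts a new block
            blocks.append([x])
    return blocks

# Example predicate: "same value mod 3" partitions the integers 0..7.
same_mod3 = lambda a, b: a % 3 == b % 3
print(partition(range(8), same_mod3))  # → [[0, 3, 6], [1, 4, 7], [2, 5]]
```

The worst case in the lower-bound proof (Input #1, all singletons) is exactly the case where this loop must query every block representative for every element.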
{ "domain": "cs.stackexchange", "id": 15299, "tags": "algorithms, complexity-theory, algorithm-analysis, time-complexity" }
How do electrons ever receive the amount of energy needed to move up energy levels?
Question: Suppose there is a (blackbody) electromagnetic radiation source. It should emit a finite number of photons every second with an intensity against frequency graph looking similar to a Maxwell Boltzmann distribution curve. Every photon has a specific amount of energy. Now, the source is opposite a collection of atoms of an element, for instance neon. Some of the photons have the precise amount of energy required to excite an electron and so move it up an energy level. I have been taught that the energy needed to move it up this level is exact or discrete, any more or less and the electron would not move up to the level. Frequency - or energy - of a photon can take on any value and thus is a continuous variable. Therefore in the distribution of frequencies/energies for the photons from the source described, surely the probability that any photon has the exact amount of energy required to move an electron up an energy level falls to 0. Despite this, clearly what I have suggested is not the case, because electrons clearly absorb the exact amount of energy needed to move them up energy levels all the time, as evident from absorption spectra. My question is therefore, how is it that we see all this absorption if the probability that a photon has a precise energy on a continuous scale is 0? Is there some leeway on how much energy would move an electron up an energy level? Answer: I have been taught that the energy needed to move it up this level is exact or discrete, any more or less and the electron would not move up to the level. This is an ideal statement, mostly true for an isolated atom. But even when true, it doesn't mean that other interactions are absent (such as scattering or ionization). First of all, the atom "sees" the frequency of the incoming radiation differently depending on its speed. Interactions with other atoms can affect the process as well.
So at high temperatures and high pressures, the range of frequencies that can be absorbed by a single atom increases. More important for everyday experiences is that molecular electron configurations (especially as the molecular size increases) are significantly more complex than those of isolated atoms. The interactions of the electron shells mean that the discrete, easily-detectable energy levels disappear, with wide ranges of absorption possible. The nitrogen, oxygen, and argon in our atmosphere are a good example of simple molecules that have trouble absorbing a wide range of light. But once you create bulk matter, the possible absorption goes up and it becomes harder to find materials that will pass large frequency ranges.
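A toy numeric illustration of this point: once a line is broadened, the absorption strength is spread over a finite band, so a photon only needs to fall within the line's width rather than hit a single exact frequency. The Lorentzian profile and all numbers below are illustrative assumptions (natural/pressure broadening gives a Lorentzian; Doppler broadening alone would give a Gaussian):

```python
import math

def lorentzian(f, f0, gamma):
    """Normalized Lorentzian line profile: the absorption probability
    density around the transition frequency f0, with half-width gamma."""
    return (gamma / math.pi) / ((f - f0) ** 2 + gamma ** 2)

f0, gamma = 1000.0, 2.0  # arbitrary units
# Fraction of the line's strength lying within +/- 5 half-widths of line
# center, by simple trapezoidal integration:
n, lo, hi = 10_000, f0 - 5 * gamma, f0 + 5 * gamma
h = (hi - lo) / n
area = sum(lorentzian(lo + i * h, f0, gamma) for i in range(1, n)) * h
area += 0.5 * h * (lorentzian(lo, f0, gamma) + lorentzian(hi, f0, gamma))
print(round(area, 3))  # → 0.874
```

Because the profile is a probability density, the chance of absorbing a photon in any finite band is a finite integral, which resolves the "measure zero" worry in the question: no single exact frequency is ever required.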
{ "domain": "physics.stackexchange", "id": 58378, "tags": "electromagnetic-radiation, photons, absorption" }
Why do particles have spins such as $1/2$, $3/2$, $5/2$?
Question: What does it mean to have 'half' spin? I have looked on Wikipedia and a few YouTube videos on spin but they don't explain what it means to have $1/2$ spin. I am 18 and only started learning about quantum mechanics not so long ago, so please keep the vocabulary to a minimum. Answer: Quantum mechanics (QM; also known as quantum physics, or quantum theory) is a fundamental branch of physics which deals with physical phenomena at nanoscopic scales, where the action is on the order of the Planck constant. The Planck constant is a very small number, 6.6*10^-34 Joule-seconds. Quantum mechanics was invented because the data showed that at these small dimensions measurable variables were often not continuous, but came in packets eventually called quanta. The necessity for this solution came from the photoelectric effect, black-body radiation, and the discrete spectra of excited atoms, and it has been experimentally established that quantum mechanics is the underlying level of nature. For every measurable observable there corresponds a quantum mechanical operator which, operating on the quantum mechanical state, gives the probability of measuring the specific measurement. In the case of the operator corresponding to the angular momentum, the values are quantized. This theory developed because of the observation of quantization in orbital angular momentum in the solutions describing atoms. It was then found experimentally that there exists an intrinsic angular momentum (named spin) characterizing particles like protons, neutrons, and electrons, which make up atoms and molecules. Spin 1/2 is the smallest quantum of angular momentum, conceptually in the same way that charge +/- 1/3 is the smallest quantum of charge assignable to elementary particles. The spin of the electron is 1/2 times h_bar. As elementary particles make up all matter, by simple addition the only allowed values for spin are multiples of 1/2, and for orbital angular momentum multiples of 1, in units of h_bar.
The smallness of the constant ensures that at macroscopic values angular momentum is to all intents and purposes continuous. Thus the real answer is "because that is what we have observed to be the case in the microscopic interactions of particles".
{ "domain": "physics.stackexchange", "id": 19356, "tags": "quantum-mechanics, angular-momentum, quantum-spin, representation-theory" }
Why isn't the ovum reabsorbed into the body (like sperms) if it is not fertilised?
Question: I have read that when sperm are not ejaculated out of the body, they are broken down and reabsorbed. Why can't the ovum be reabsorbed into the body instead of being shed during menstruation? Answer: Apparently they are. Technically speaking, the ovum is not shed because it does not exist unless and until fertilization occurs. Development leading up to the ovum otherwise stops before the final division of meiosis. Now, you can ask why the body doesn't reabsorb oocytes, but then the answer is that it does, in a process called follicular atresia, with one or a few exceptions per month. Now what about that secondary oocyte that isn't fertilized? Well, it's time to pull out a lovely article from 1917 (Harry Carleton). Why are we so unable to match such prose today? The paper describes a mouse study, giving quite recognizable descriptions of membrane blebbing and nuclear fragmentation characteristic of apoptosis, in a decade during which it is usually said to have been forgotten. (A 2005 work reports this for unfertilized human oocytes.) Carleton likens the apoptotic changes he observed to atresia in the ovary, and says that the oocytes are reabsorbed by phagocytic cells.
{ "domain": "biology.stackexchange", "id": 11307, "tags": "reproduction, sexual-reproduction" }
TISE solutions should be combinations of eigenstates. Why this is not the case?
Question: I would really appreciate some help with a question I have about the TISE (the time-independent Schrödinger equation). This is a linear equation, and a linear combination of solutions should be a solution too. The problem is that for the free particle, whose solutions can be written like exp[-ikx], a linear combination using Gaussian coefficients is no longer a solution (we should get a wave packet this way). Of course, taking a combination that includes the temporal dependence gives a solution to the TDSE. My question is why that does not apply in the TISE case. Answer: The time independent Schrodinger equation is not a single differential equation, rather it is a family of differential equations parameterized by the energy $E$. $$\frac{-\hbar^2}{2m}\nabla^2\psi + V\psi = E\psi$$ A solution to the TISE for one value of $E$ won't be a solution to the TISE for a different $E$, and so an arbitrary sum of solutions for different energies won't be a solution for any energy.
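A quick symbolic check of this point (a sketch using sympy, with the free particle, V = 0, assumed): each plane wave solves the TISE for its own energy, but their superposition fails the TISE at either energy unless the wavenumbers coincide.

```python
import sympy as sp

x = sp.symbols('x', real=True)
k1, k2, hbar, m = sp.symbols('k1 k2 hbar m', positive=True)

def H(psi):
    # Free-particle Hamiltonian applied to psi: -(hbar^2 / 2m) * psi''
    return -hbar**2 / (2 * m) * sp.diff(psi, x, 2)

psi1 = sp.exp(sp.I * k1 * x)
psi2 = sp.exp(sp.I * k2 * x)

E1 = sp.simplify(H(psi1) / psi1)  # hbar**2*k1**2/(2*m): psi1 is an eigenstate

# Residual of the TISE at energy E1 for the superposition:
residual = sp.simplify(H(psi1 + psi2) - E1 * (psi1 + psi2))
print(sp.simplify(residual.subs(k2, k1)))  # 0 only when the wavenumbers coincide
```

The residual is proportional to (k2² − k1²), so the sum is an energy eigenstate only in the degenerate case k1 = k2.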
{ "domain": "physics.stackexchange", "id": 99656, "tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation, eigenvalue" }
Narrow-bandwidth laser and its beam size on uncertainty principle
Question: I read that a single frequency laser can have a bandwidth as low as a few kHz, but according to the uncertainty principle, $\Delta x \Delta p = \Delta x \Delta f h/c >=\hbar $, so $\Delta x \sim c/\Delta f$, how come the laser beam can be so narrow spatially? Answer: The $\Delta x$ relevant for your calculation is the longitudinal length of the wave. If you have a narrow bandwidth, then you need a lot of wave cycles to define it, and so the wave is long. The transverse width of a laser beam is limited by diffraction.
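The same Δx ~ c/Δf estimate can be evaluated directly; for a hypothetical 1 kHz linewidth the longitudinal coherence length comes out to hundreds of kilometres:

```python
c = 2.998e8             # speed of light, m/s
delta_f = 1e3           # assumed linewidth: 1 kHz
delta_x = c / delta_f   # longitudinal coherence length, m
print(f"{delta_x / 1e3:.0f} km")  # ~300 km along the beam, not across it
```

That enormous length lives along the propagation direction; the transverse beam width is set by diffraction, not by the uncertainty relation for the linewidth.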
{ "domain": "physics.stackexchange", "id": 86709, "tags": "quantum-mechanics, experimental-physics, laser, heisenberg-uncertainty-principle" }
What is the pythonic way to update my nested dictionary?
Question: So I need to update my nested dictionary where the key is "compensationsDeltaEmployee". I created a string cost_perhour to hold the value of my conditional statements. Now that I am done with the string. What is the best or pythonic way to update my nested dictionary with a new key "costPerHour" with the value of my string cost_perhour? What I did was I created an empty dictionary cost_per_hour_dic then add the string then ran update. Is this okay or can I clean it up more? def add_cost_per_hour(json_dic): """ Add new dictionary value costPerHour to our data """ cost_perhour = "" cost_per_hour_dic = {} try: # Find key compensationsDeltaEmployee. for keys, values in json_dic.items(): if str(keys) == "compensationsDeltaEmployee": if "payBasis" in values: # If payBasis equal 9, 0, P, cost_per_hour field should be blank. if str(values["payBasis"]) in ("9", "0", "P"): cost_perhour = "" # If payBasis equal 1, A, B, C, D, H, J, 3, cost_per_hour equals salaryPayable divide by 2080. elif str(values["payBasis"]) in ("1", "A", "B", "C", "D", "H", "J", "3"): # Check if our value for salaryPayable is empty or None if values["salaryPayable"] == "" or values["salaryPayable"] is None: raise Exception("salaryPayable field is empty") else: cost_perhour = round(float(values["salaryPayable"]) / 2080, 2) # If payBasis equal 2, 4, 5, 7, E, F, X, cost_per_hour should match the salaryPayable field. elif str(values["payBasis"]) in ("2", "4", "5", "7", "E", "F", "X"): if values["salaryPayable"] == "" or values["salaryPayable"] is None: raise Exception("salaryPayable field is empty") else: cost_perhour = round(float(values["salaryPayable"]), 2) # If there are any unexpected values, the cost_per_hour field should be blank. 
else: cost_perhour = "" else: raise Exception("Could not find key payBasis") if cost_perhour is "": raise Exception("cost_per_hour is empty") cost_per_hour_dic["costPerHour"] = str(cost_perhour) json_dic["compensationsDeltaEmployee"].update(cost_per_hour_dic) return json_dic except Exception as e: print("Exception in add_cost_per_hour: ", e) The json_dict "compensationsDeltaEmployee": { "interPersonnelAgree": "N", "payRateDeterminant": "0", "payPlan": "1", "properPayPlan": null, "retainedGrade": null, "payBasis": "2", "gradeCode": "06", "step": "03", "basePayChangeYypp": null, "physicalCompAllowance": 0, "withinGradeEligibilityCode": "1", "salaryPayable": 21 }, Answer: I might be a bit too harsh, so feel free to read this review in small chunks. What is the number one thing that stands out when I read this code? It feels like it has been written in the spur of the moment; little thought has been given to the overall structure. Features seem to have been added as needed, instead of taking a step back to look if anything is redundant. I cannot recommend the following strongly enough: Trace out the program structure on paper before you begin It can be a very rough sketch, but you need to imagine the flow of your program before you start. Imagine if Frodo and Bilbo had just started walking to Mordor without a plan, or if the people building rockets at NASA just said YOLO? Be conscious of your own code. JSON should not be used as an internal Python data structure I am lazy, but it is much clearer storing objects in Python as classes. Dicts and especially JSON are great for reading data into and out of Python, but internally I recommend sticking to classes in this case. Avoid falling into the anti-arrow pattern Reading deeply nested code is difficult and hard to maintain; it should give you a signal that you need to refactor the code. Avoid bare except The first rule of thumb is to absolutely avoid using a bare except, as it doesn't give us any exception object to inspect.
Furthermore, using a bare except also catches all exceptions, including exceptions that we generally don't want, such as SystemExit or KeyboardInterrupt. Catching every exception could cause our application to fail without us really knowing why. This is a horrible idea when it comes to debugging. Stop using raise Exception Secondly, we should avoid raising a generic Exception in Python because it tends to hide bugs. Replace nested conditionals with guard clauses See for instance here for a longer explanation for keys, values in json_dic.items(): if str(keys) == "compensationsDeltaEmployee": # More code here Is better expressed as compensation = json_dic.get("compensationsDeltaEmployee") if compensation is None: break / return / raise specific error # More code here Don't repeat yourself # If payBasis equal 9, 0, P, cost_per_hour field should be blank. if str(values["payBasis"]) in ("9", "0", "P"): cost_perhour = "" This breaks the DRY principle several times. Do not comment the obvious Do not comment what the code is doing; comment why you are doing it. Secondly, cost_perhour is already set to "" so this entire block is redundant. The following block is repeated twice when it is not needed: if values["salaryPayable"] == "" or values["salaryPayable"] is None: raise Exception("salaryPayable field is empty") else: cost_perhour = round(float(values["salaryPayable"]), 2) What you are doing is first checking if it should be x / 2080, then in the next clause you are checking if it should be x. Why not just check if it should be x, and if it should be x, then check if we should divide? A more sensible name for the dict could be EMPLOYEES, but really it should be a descriptive name.
Tell me what it is a dictionary of, instead of telling me it is a generic dictionary EMPLOYEES = { "compensationsDeltaEmployee": { "interPersonnelAgree": "N", "payRateDeterminant": "0", "payPlan": "1", "properPayPlan": None, "retainedGrade": None, "payBasis": "2", "gradeCode": "06", "step": "03", "basePayChangeYypp": None, "physicalCompAllowance": 0, "withinGradeEligibilityCode": "1", "salaryPayable": 21, }, } Using all the tricks in the book, the code winds down into this EMPLOYEE_COMPENSATION = "compensationsDeltaEmployee" COST_PER_HOUR_DIVIDE_BY_CONSTANT = {"1", "A", "B", "C", "D", "H", "J", "3"} DIVIDE_COST_BY_HOUR_CONSTANT = 2080 COST_PER_HOUR_EQUALS_SALARY_PAYABLE = COST_PER_HOUR_DIVIDE_BY_CONSTANT.union( {"2", "4", "5", "7", "E", "F", "X"} ) def cost_per_hour(compensation): """Calculates the cost per hour from the compensationsDeltaEmployee field""" pay_basis = str(compensation["payBasis"]) if not pay_basis in COST_PER_HOUR_EQUALS_SALARY_PAYABLE: return "" cost_perhour = float(compensation["salaryPayable"]) if pay_basis in COST_PER_HOUR_DIVIDE_BY_CONSTANT: cost_perhour /= DIVIDE_COST_BY_HOUR_CONSTANT return round(cost_perhour, 2) def update_cost_per_hour(employees): compensation = employees.get(EMPLOYEE_COMPENSATION) if compensation is None: raise KeyError("Could not find", EMPLOYEE_COMPENSATION) cost = cost_per_hour(compensation) if not cost: raise ValueError("cost_per_hour is empty") employees[EMPLOYEE_COMPENSATION]["costPerHour"] = cost return employees if __name__ == "__main__": employees = update_cost_per_hour(EMPLOYEES) print(employees)
{ "domain": "codereview.stackexchange", "id": 42851, "tags": "python, python-2.x" }
Please clarify a doubt concerning potential energy
Question: Let $W$ between two point be defined as: $$W=\int_a^b \vec{F}.\vec{dr}$$ Here $W$ is the work done between two fixed points $a$ and $b$. Let $U$ at a point be defined as: $$U_{\text{at } b}=\int \vec{F}.\vec{dr}+ \text{constant}$$ Here is it proper to say that potential energy $(U)$ is the work done between a fixed point $b$ and another arbitrary point. That is, can we say potential energy is work done having many degrees of freedom. Answer: Potential at a point is indeed defined relative to an arbitrary reference. Once you define this reference, you should use it for all other points in the system. Work between two points is the difference between potentials and is not dependent on reference choice.
{ "domain": "physics.stackexchange", "id": 45593, "tags": "newtonian-mechanics, forces, work, potential-energy, integration" }
Is a non-linear activation function needed if we perform max-pooling after the convolution layer?
Question: Is there any need to use a non-linear activation function (ReLU, LeakyReLU, Sigmoid, etc.) if the result of the convolution layer is passed through the sliding window max function, like max-pooling, which is non-linear itself? What about average pooling? Answer: Let's first recapitulate why the function that calculates the maximum between two or more numbers, $z=\operatorname{max}(x_1, x_2)$, is not a linear function. A linear function is defined as $y=f(x) = ax + b$, so $y$ linearly increases with $x$. Visually, $f$ corresponds to a straight line (or hyperplane, in the case of 2 or more input variables). If $z$ does not correspond to such a straight line (or hyperplane), then it cannot be a linear function (by definition). Let $x_1 = 1$ and let $x_2 \in [0, 2]$. Then $z=\operatorname{max}(x_1, x_2) = x_1$ for all $x_2 \in [0, 1]$. In other words, for the sub-range $x_2 \in [0, 1]$, the maximum between $x_1$ and $x_2$ is a constant function (a horizontal line at $x_1=1$). However, for the sub-range $x_2 \in [1, 2]$, $z$ corresponds to $x_2$, that is, $z$ linearly increases with $x_2$. Given that max fails to be linear even in this special case, it cannot be a linear function in general. Here's a plot (computed with Wolfram Alpha) of the maximum between two numbers (so it is clearly a function of two variables, hence the plot is 3D). Note that, in this plot, both variables, $x$ and $y$, can linearly increase, as opposed to having one of the variables fixed (which I used only to give you a simple and hopefully intuitive example that the maximum is not a linear function). In the case of convolution networks, although max-pooling is a non-linear operation, it is primarily used to reduce the dimensionality of the input, so as to reduce overfitting and computation. In any case, max-pooling doesn't non-linearly transform the input element-wise. The average function is a linear function because it linearly increases with the inputs.
Here's a plot of the average between two numbers, which is clearly a hyperplane. In the case of convolution networks, the average pooling is also used to reduce the dimensionality. To answer your question more directly, the non-linearity is usually applied element-wise, but neither max-pooling nor average pooling can do that (even if you downsample with a $1 \times 1$ window, i.e. you do not downsample at all). Nevertheless, you don't necessarily need a non-linear activation function after the convolution operation (if you use max-pooling), but the performance will be worse than if you use a non-linear activation, as reported in the paper Systematic evaluation of CNN advances on the ImageNet (figure 2).
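The failure of linearity for max, and its presence for the mean, is easy to exhibit numerically (a small sketch with made-up vectors):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# A linear f must satisfy f(a + b) == f(a) + f(b).
print(np.max(a + b), np.max(a) + np.max(b))     # 1.0 vs 2.0: max is not linear
print(np.mean(a + b), np.mean(a) + np.mean(b))  # 1.0 vs 1.0: mean is linear
```

This is the same piecewise behaviour described above: max agrees with a linear function on each region, but not globally.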
{ "domain": "ai.stackexchange", "id": 1708, "tags": "deep-learning, convolutional-neural-networks, activation-functions, pooling, max-pooling" }
Does a universe experiencing "heat death" have a temperature?
Question: As defined by Wikipedia: The heat death of the universe is a suggested ultimate fate of the universe in which the universe has diminished to a state of no thermodynamic free energy and therefore can no longer sustain processes that consume energy (including computation and life). Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. Does it even make sense to describe temperature in the system described? If so, would it be a very cold system or a very hot system? Answer: Yes, and it would be very cold. The paper "Finite temperature in a de Sitter universe" explains that the cosmological constant (if it is really a constant) creates a "horizon" that acts somewhat like an inside-out event horizon: objects that get too far away from you are unreachable. This horizon will radiate Hawking radiation at an extremely low temperature of 10^(-30) K; the wavelength range of light this corresponds to is of the same order of magnitude as the horizon's radius (as is also the case for black holes). It's conceivable that some relatively compact system could have an excited quantum state so close to its ground state that it is thermally accessible even at this extremely cold temperature. Thus, there is still some form of heat swimming around. However, there is no reservoir colder than this temperature to dump heat into, so this heat can't be converted into useful energy.
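That 10^(-30) K figure follows from the Gibbons-Hawking temperature of a de Sitter horizon, T = ħH/(2πk_B). Plugging in a Hubble rate of roughly today's value (an assumption for illustration; the far-future de Sitter rate is of a similar order) reproduces it:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
k_B = 1.3807e-23    # Boltzmann constant, J/K
H = 2.2e-18         # Hubble rate, 1/s (roughly 68 km/s/Mpc, assumed)

T = hbar * H / (2 * math.pi * k_B)
print(f"{T:.1e} K")  # of order 1e-30 K
```

Any residual heat bath in a heat-death universe sits at this temperature, which is why nothing colder exists to dump entropy into.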
{ "domain": "physics.stackexchange", "id": 24680, "tags": "thermodynamics, entropy" }
Measuring inductance and resistance of a coil
Question: If you had a resistive coil with an inductance and a resistance, a sinusoidal voltage source with a variable frequency, and a meter that can measure rms voltages and currents, how would you go about determining the inductance and resistance of the coil? I know that the complex impedance of the coil is $i\omega L + R$, so the current through the coil will be $I = \frac{V}{i\omega L + R}$, but I am not sure what to do from this. Any help would be appreciated! Answer: Since $\left (\dfrac{V_{\rm rms}}{I_{\rm rms}}\right )^2=L^2\, \omega^2+R^2 = (4\,\pi^2 L^2)\, f^2+R^2$ is of the form of the general equation of a straight line, $y = m\,x +c$, then taking a series of $V_{\rm rms}$ and $I_{\rm rms}$ readings at different frequencies, $f$, and drawing a graph of $\left (\dfrac{V_{\rm rms}}{I_{\rm rms}}\right )^2$ against $f^2$ should give a straight line of gradient $4\,\pi^2 L^2$ and intercept on the $\left (\dfrac{V_{\rm rms}}{I_{\rm rms}}\right )^2$ axis of $R^2$.
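A sketch of the procedure with simulated readings (the coil values here are made up): generate (V_rms/I_rms) at several frequencies, fit the straight line in f², and read L and R off the slope and intercept.

```python
import numpy as np

L_true, R_true = 0.05, 10.0          # hypothetical coil: 50 mH, 10 ohm
f = np.linspace(50.0, 1000.0, 20)    # drive frequencies, Hz
ratio_sq = (2 * np.pi * f * L_true)**2 + R_true**2   # simulated (V_rms/I_rms)^2

# (V/I)^2 = (4 pi^2 L^2) f^2 + R^2 is a straight line in f^2:
slope, intercept = np.polyfit(f**2, ratio_sq, 1)
L_est = np.sqrt(slope) / (2 * np.pi)  # gradient gives L
R_est = np.sqrt(intercept)            # intercept gives R
print(L_est, R_est)                   # recovers ~0.05 and ~10.0
```

With real meter readings you would replace `ratio_sq` by the measured (V_rms/I_rms)² values; the graphical method is the same.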
{ "domain": "physics.stackexchange", "id": 81437, "tags": "electric-circuits, electrical-resistance, inductance" }
how to work with ros time
Question: Hi, I have saved some gps data using the gps fix message type. In this message, the gps data has a field called %time. Except, this %time data is kind of weird. The format the %time values are in is of the following: 1355859745049430000 1355859746522960000 1355859747052170000 ... How do I convert this time data to something that I can work with, for example, 1, 2, 3, 4, 5 seconds, etc. Thanks for the help. Originally posted by mte2010 on ROS Answers with karma: 80 on 2013-02-07 Post score: 0 Answer: I don't see any ROS functions for doing this, but here's an algorithm in PHP that you could use: http://www.andrews.edu/~tzs/timeconv/timealgorithm.html Originally posted by Jeffrey Kane Johnson with karma: 452 on 2013-02-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mte2010 on 2013-02-10: Hmm, this is talking about gps time. I looked into my data a bit more, and I found that: -this time data is actually rostime and not gps time, so basically I capture this gps data using play [rosbag_file_name] and then I just listen to my gps topic. So this time data is the same time data as rosbag Comment by mte2010 on 2013-02-10: So, how would I convert this rosbag time to something in seconds that makes sense? Once again, this rosbag time looks like this: 1355885782739630000, 1355885782763280000, and my loop rate is at 50 hz or 0.02 seconds Comment by Jeffrey Kane Johnson on 2013-02-10: The time field of the GPSFix message is supposed to be GPS time... how is that field getting filled? The ROS time should be in GPSFix.header.stamp
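Those stamps are just nanoseconds since the Unix epoch (ROS time serialized as a single integer). Subtracting the first stamp and dividing by 1e9 turns them into seconds since the start of the bag, e.g.:

```python
# Stamps taken from the question: nanoseconds since the Unix epoch
stamps_ns = [1355859745049430000, 1355859746522960000, 1355859747052170000]

t0 = stamps_ns[0]
rel_seconds = [(t - t0) / 1e9 for t in stamps_ns]
print(rel_seconds)  # [0.0, ~1.47, ~2.00]
```

This gives the "1, 2, 3 seconds" style timeline asked for; converting to GPS time proper (leap seconds, GPS epoch offset) is a separate step, as the comments note.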
{ "domain": "robotics.stackexchange", "id": 12803, "tags": "ros" }
Is the 4th dimension an infinite set of 3rd dimensions?
Question: My understanding goes like this: an x-y plane (or flat 2-dimensional surface) is an infinite set of lines, each one dimensional. The x-y-z space (or 3d volume) is an infinite set of 2-dimensional planes. So the 4th dimension must be an infinite set of 3-dimensional volumes, right? My question is, if we see the world like this then is time not an arrow but a collection of well-defined events that is predetermined? Answer: Yes, you can think of it this way. You could also have started with 1 dimension and stated that it is an infinite set of points. With regard to the flow of time, this is indeed one of the challenges of General Relativity (GR). One of the interpretations within GR is that the flow of time is simply a collection of events which the human mind organizes/interprets as the flow of time.
{ "domain": "physics.stackexchange", "id": 92317, "tags": "special-relativity, spacetime, spacetime-dimensions, arrow-of-time" }
The Chemistry behind creating Polylactic Acid (PLA)
Question: I've seen a few videos of persons creating plastics from starch by adding an acid, glycerol, and water, however no explanation of the chemistry. Does anyone know what $\ce{(C6H10O5)_{n} + C3H8O3 + C2H4O2 + H2O -> (C3H4O2)_{n} + (what)}$? or does it? I only have a basic understanding of organic chemistry, so sorry if the equations don't make sense. Answer: Polylactic acid polymers are creating a lot of interest these days because they're biodegradable. On the downside, corn starch is often one of the starting materials...and corn is also used as a food source, so there's some controversy over the wisdom of this approach. I think I've found a site that takes the approach you're basing your question on...but there are a couple of issues with the equation you've written. So, I'll answer this question in two parts. Plastic from starch is what you saw in the video; PLA from starch is what you asked about. Plastic from starch. The recipe for the plastics-from-starch uses starch, glycerol, acetic acid (the $\ce{C2H4O2}$ in your equation) and water as a solvent. (They do not add lactic acid, the $\ce{C3H8O3}$ in your equation, but lactic acid can be produced by hydrolysis of the starch.) With the starch-acetic acid-glycerol recipe, you do not actually make a PLA polymer, but a starch polymer that is somewhat like it instead. (Ref) So, starch is a polymer that looks like this: In water, it tends to ball up into globular clusters due to hydrogen bonding. By adding acetic acid and heating the starch in water, you "denature" the starch and turn it into "disordered chains" that stretch out. When you let these disordered chains dry, they become entangled and form a flexible film, the plastic. In this recipe, the glycerol is included as a plasticizer, so it is unchanged chemically in the reaction. The glycerol also helps to keep the starch stretched out by stabilizing it with hydrogen bonding. 
Plasticizers (as the name would indicate) are additives that increase the plasticity of a plastic and make it more durable or flexible. Changing the amount of glycerol in the recipe can have a big effect on the strength of the plastic film you make. This is the process you're seeing in the videos, I think. Making PLA from starch. This is going to be a bit of a challenge to explain depending on your understanding of organic chemistry, but hopefully the pictures will help. When you make PLA commercially, you digest starch, usually with bacteria, to turn it into lactic acid (see picture below for the structure of lactic acid). This lactic acid must be purified to make a high quality product. If you just combine a bunch of lactic acid molecules to make PLA, one water molecule would be lost each time they combine...sort of like this. (This is an acid and alcohol combining, not two acids, but the idea is the same.) The water in the polymerization reaction leads to a poor quality plastic. So, in commercial synthetic schemes, two purified lactic acid molecules are reacted to form lactide...they come together in a controlled condensation reaction, with the loss of water. (Ref for picture.) Now you can use the lactide, i.e. "lactic acids with the water removed", to do the polymerization to PLA. You add an initiator to open the lactide ring by breaking the bond between the $\ce{O-C}$ in $\ce{O-C=O}$, and this gives you 2 lactic acid molecules hooked together...which react with another lactide molecule to add two more lactic acid molecules to the growing chain, and so on. And that's how the PLA forms. So the reaction sequence is starch -> lactic acid -> lactide + $\ce{H2O}$ -> PLA. Probably more than you wanted to know!
{ "domain": "chemistry.stackexchange", "id": 88, "tags": "organic-chemistry, polymers" }
Why is no bromine liberated in the reaction of potassium bromide and concentrated phosphoric acid?
Question: $\ce{KBr}$ reacts with concentrated $\ce{H3PO4}$ to give $\ce{HBr}$ and ($\ce{KH2PO4}$ or $\ce{K3PO4}$) (not sure which one, if someone knows it, please tell). Why isn't bromine gas liberated? Answer: Firstly, $\ce{H3PO4}$ is not a strong enough oxidizing agent to remove the electrons from the $\ce{Br-}$ ions in order for them to then form $\ce{Br2}$, which exists as a liquid at room temperature. Assuming we are working at room temperature, using a strong enough oxidizing agent such as $\ce{H2SO4}$ or a more reactive halogen like $\ce{Cl2_{(g)}}$ or $\ce{F2_{(g)}}$ would be enough to oxidize the $\ce{Br-}$ to $\ce{Br2_{(l)}}$. If we desired to evolve $\ce{Br2_{(g)}}$, we would have to heat $\ce{Br2_{(l)}}$ to its boiling point of $58.8\mathrm{^oC}$ and then continue to supply it with enough heat to vaporize it. As docscience stated, however, some oxidizing reactions might be exothermic enough to vaporize $\ce{Br2_{(l)}}$ itself. In solution this would depend on $[\ce{HBr}]$ as to whether or not the heat evolved is enough to vaporize the $\ce{Br2_{(l)}}$, but in the solid form $\ce{Br2_{(l)}}$ is vaporized instantly in the extremely exothermic reaction of $\ce{KBr}$ with either $\ce{F2_{(g)}}$ or $\ce{Cl2_{(g)}}$.
{ "domain": "chemistry.stackexchange", "id": 3358, "tags": "inorganic-chemistry, halides" }
Building Intuition for Relative Von Neumann Entropy
Question: This is how I think about classical relative entropy: There is a variable that has distribution P, that is, outcome $i$ has probability $p_i$ of occurring, but someone mistakes it to be of a distribution Q instead, so when outcome $i$ occurs, instead of being $-log(p_i)$ surprised, they are $-log(q_i)$ surprised (or gain said amount of information). Now someone who knows both the distributions is calculating the relative Shannon entropy: the expectation value of their own surprise is $-\Sigma p_i log(p_i)$, and they know that the mistaken person's probability of being $-log(q_i)$ surprised is $p_i$, so the expectation value of the mistaken person's surprise is $-\Sigma p_i log({q_i})$, and the difference is $\Sigma p_i log(p_i) - \Sigma p_i log(q_i)$, which is the classical relative entropy. For a given state, the Von Neumann entropy is the Shannon entropy minimised over all possible bases. Since in the measurement basis the eigenvalues are the probabilities, and both eigenvalues and trace are basis invariant, we can write this as $\Sigma \lambda _i log(\lambda_i)$, which is also equal to $Tr(\rho log( \rho ))$. Relative Von Neumann entropy is defined as follows: $$ Tr(\rho log(\rho)) - Tr(\rho log (\sigma))$$ The first term is understandable, but by analogy to the classical relative entropy, assuming that person Q is measuring in the sigma basis, let's call it ${\{| \sigma_i \rangle \}}$, the second term should reduce to $p^{'}_1 log (q_1) + p^{'}_2 log(q_2) ... $, where $p^{'}_i$ is the actual probability of the state $\rho$ landing on $| \sigma_i \rangle $. The log part is taken care of, but I'm not sure how multiplying and tracing out will give this result. If there's a better way to understand relative Von Neumann entropy, that's welcome too.
Answer: Posting an answer because I realised what my issue was: What I didn't realise then: When a density matrix is written in any basis, the diagonal elements correspond to the probabilities of the density matrix landing on the basis states of that basis. So, if in some basis formed by vectors $|x_1\rangle, |x_2\rangle, |x_3\rangle, |x_4 \rangle$, my density matrix is: $$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \\ \end{bmatrix}$$ Then, the probability of this state showing up as $|x_1\rangle \langle x_1|$ when measured is $a_{11}$, as seen by the trace rule: Probability that $\rho$ gives $|x_1\rangle \langle x_1|$ = $Tr(|x_1\rangle \langle x_1| \rho)$ which is $a_{11}$. Therefore, in the above question, since the matrix is written in the $\{\sigma_i\}$ basis, $p_i'$ is the probability of $\rho$ landing on $|\sigma_i\rangle$, and when multiplied and trace is taken, it results in the corresponding expression. Therefore, the intuition holds. Note that the '$\sigma$ basis' is one in which the log matrix is diagonal.
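A small numerical sketch of this (the two qubit states are arbitrary examples): compute $Tr(\rho log \rho) - Tr(\rho log \sigma)$ via eigendecompositions, and confirm that the second term equals $\Sigma_i p'_i log(q_i)$ with $p'_i$ the diagonal of $\rho$ in the basis where $log \sigma$ is diagonal.

```python
import numpy as np

def logm_psd(M):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.conj().T

rho = np.array([[0.75, 0.25], [0.25, 0.25]])  # example density matrix (trace 1, positive)
sigma = np.diag([0.7, 0.3])                   # diagonal, so its eigenbasis is the computational basis

D = np.trace(rho @ (logm_psd(rho) - logm_psd(sigma))).real
print(D)  # >= 0, with equality iff rho == sigma (Klein's inequality)

# Second term equals sum_i p'_i log q_i, with p'_i the diagonal of rho in sigma's eigenbasis:
second = np.trace(rho @ logm_psd(sigma)).real
check = sum(rho[i, i] * np.log(sigma[i, i]) for i in (0, 1))
print(np.isclose(second, check))  # True
```

Since `sigma` is already diagonal here, its eigenbasis is the computational basis and the diagonal entries of `rho` are exactly the $p'_i$ of the question.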
{ "domain": "quantumcomputing.stackexchange", "id": 858, "tags": "information-theory, entropy" }
Difference between sine and cosine driven oscillators
Question: For a driven damped oscillator, my book only shows the solution for the driving force being a term of cos(t). However, in Fourier Series, the force may have terms of sin(t). How do I convert the cosine solution for the position x(t) into a solution for sines? If needed, the book is "Classical Mechanics" by John R. Taylor. Answer: Sine and cosine are the same curve, only shifted, so a phase difference transforms one into the other. For instance, wherever you have $\cos (\omega t +\psi)$ you replace it with $\sin (\omega t +\psi+\pi/2)$
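The identity is easy to spot-check numerically (a trivial sketch):

```python
import math

# cos(w t + psi) == sin(w t + psi + pi/2) at every point
for x in [0.0, 0.3, 1.0, 2.5, -4.0]:
    assert math.isclose(math.cos(x), math.sin(x + math.pi / 2))
print("a phase shift of pi/2 turns sine into cosine")
```

So a sine-driven term in the Fourier series is just the cosine solution with its phase shifted by π/2.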
{ "domain": "physics.stackexchange", "id": 72775, "tags": "classical-mechanics, oscillators" }
Understanding a proof for the existence of a non-computable function
Question: For school, we have a proof that some functions are not Turing computable. The example is: $$ G(k) = \begin{cases} f_k(k) + 1 & \text{ if $f_k(k)$ is defined}, \\ 1 & \text{ otherwise}.\end{cases} $$ Claim: $G$ is non-computable. Proof: In view of obtaining a contradiction, let's say $G$ is computable, say by the $k$th Turing machine. Give the encoding of this $k$th Turing machine as an argument for $G$. This leads to a contradiction: if $f_k(k)$ is defined, then $f_k(k)$ is not equal to $G(k) = f_k(k) + 1$. Otherwise $f_k(k)$ is undefined and so not equal to $G(k) = 1$. I don't understand the contradiction, help please... Answer: The contradiction reached is that $0 = 1$, which violates one of Peano's axioms. Assume $G$ is computed by the $j$-th Turing machine. Observe that $G$ is everywhere defined, so $f_j$ must be too. Then $$f_j(j) = G(j) = f_j(j) + 1$$ and by canceling $f_j(j)$ on both sides we get $$0 = 1.$$
{ "domain": "cs.stackexchange", "id": 2259, "tags": "computability, turing-machines, undecidability" }
The sign of $d\mathbf{r}$ when integrating the universal gravitational force in order to define gravitational potential energy
Question: I am trying to find gravitational potential energy by integrating the gravitational force: $$\mathbf{F}(\mathbf{r}) = - \ G \frac{Mm}{|\mathbf{r}|^2} \ \hat{\mathbf{r}}$$ where $\mathbf{r}$ is the vector from the centre of the Earth with mass $M$ towards the point of a mass $m$, and $\hat{\mathbf{r}}$ is the unit vector pointing in the direction of $\mathbf{r}$. The potential energy at a point at distance $|\mathbf{r}|$ can be found from $$U(\mathbf{r}) = -\int_{\infty}^{r} - \ G \frac{Mm}{|\mathbf{r}|^2} \hat{\mathbf{r}} \ \cdot \ d\mathbf{r}$$ Then, cancelling the minus signs, the expression becomes $$U(\mathbf{r}) = \int_{\infty}^{r} \ G \frac{Mm}{|\mathbf{r}|^2} \hat{\mathbf{r}} \ \cdot \ d\mathbf{r}$$ However, look at the picture below that I drew. Since I am integrating from infinity to a point r, I thought $d\mathbf{r}$ should point in the direction opposite to $\hat{\mathbf{r}}$, but if I actually calculate that way I get $$U(\mathbf{r}) = \frac{GMm}{r},$$ in which the minus sign is missing. From a mathematical perspective, $\hat{\mathbf{r}}$ and $\mathbf{r}$ both point in the same direction, hence the scalar product should be a positive value. But I still do not understand why $\hat{\mathbf{r}}$ and $d\mathbf{r}$ should be treated as pointing in the same direction whenever I calculate. Perhaps this does not account for the upper and lower bounds. Can anybody explain please? Answer: First let's do the math. For the path considered in this integral, one has $$ \hat{\bf r} \cdot {\rm d}{\bf r} = {\rm d}r $$ (pay careful attention to the use of bold font here: the right-hand side is a scalar, not a vector). 
Using this we get $$ U = \int_{\infty}^r GM m \frac{1}{r^2} {\rm d}r = GMm \left[ -\frac{1}{r} \right]^r_{\infty} = - \frac{GMm}{r} $$ I think your worry is understandable, but ${\rm d}{\bf r}$ means the change in $\bf r$, and if this is in the opposite direction to $\bf r$ then this will be taken care of correctly---it will result in ${\rm d}r$ being itself negative, but it is still ${\rm d}r$ not $-{\rm d}r$. That is: $$ \mbox{if } \hat{\bf r} \cdot {\rm d}{\bf r} < 0 \;\mbox{ then } \; {\rm d}r < 0 $$ but this does not change the fact that, for the path under discussion, $$ \hat{\bf r} \cdot {\rm d}{\bf r} = {\rm d}r . $$ (For some other path this relationship would not be true; in general you would have to include the effect of an angle between $\hat{\bf r}$ and ${\rm d}{\bf r}$ that might not be either 0 or 180 degrees.)
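The definite integral can be checked symbolically; a SymPy sketch (symbol names are mine):

```python
import sympy as sp

G, M, m = sp.symbols('G M m', positive=True)
r = sp.symbols('r', positive=True)
rp = sp.symbols("r'", positive=True)   # integration variable r'

# U(r) = integral from infinity to r of G M m / r'^2 dr',
# using r_hat . dr = dr along this radial path
U = sp.integrate(G * M * m / rp**2, (rp, sp.oo, r))
print(U)  # -G*M*m/r
assert sp.simplify(U + G * M * m / r) == 0
```

The lower limit at infinity contributes zero, so the whole result comes from the upper limit, giving the expected $-GMm/r$.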
{ "domain": "physics.stackexchange", "id": 59913, "tags": "newtonian-mechanics, classical-mechanics, gravity, potential-energy, vectors" }
Why am I getting empty expression data from GEO?
Question: I am trying to analyze the scRNAseq data from this study. In their Method section they write: The accession number for the RNA and DNaseq data reported in this paper is GEO: GSE116237. When I go ahead and try to pull this data in R using GEOquery, I am faced with a data frame with no features (i.e. no genes). The following code library("GEOquery") gse <- getGEO("GSE116237") eset1 <- gse[[1]] eset2 <- gse[[2]] print(dim(eset1)) print(dim(eset2)) yields Features Samples 0 184 Features Samples 0 681 I also didn't manage to download the data manually; I get an error message when trying to unzip the .zip files or get redirected to SRA Run Selector from which I also didn't manage to figure out how to download stuff. Am I missing something or are these files simply corrupted? The paper is published in Cell, so I would assume this data is, to some extend, legit... Answer: I am sure the data is legit, you are just approaching it incorrectly. getGEO is an application for microarray data, not for digital count data such as (sc)RNA-seq, therefore what you aim to do is simply not possible by design. Unless you want to start from the raw reads why not taking the at the file named GSE116237_scRNAseq_expressionMatrix.txt.gz provided at the bottom of https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE116237?
{ "domain": "bioinformatics.stackexchange", "id": 1787, "tags": "scrnaseq, gene-expression, sratoolkit, geoquery, geo" }
Why does the charge conjugation of the spinor transform as a spinor?
Question: I have come across (in QFT Nutshell, A. Zee) how the charge conjugation of the spinor, $\psi_c \equiv \gamma^2 \psi^*$, transforms (where $\gamma^2=\sigma^2\otimes i\tau^2$ is the component of the gamma matrices). Under a Lorentz transformation, the spinor transforms as $$\psi \rightarrow e^{-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu}}\psi$$ where $$\sigma^{\mu\nu} = \frac{i}{2}[\gamma^\mu,\gamma^\nu]$$ and $\omega_{\mu\nu}$ is antisymmetric. Complex conjugating, we have $$\psi^* \rightarrow e^{+\frac{i}{4}\omega_{\mu\nu}(\sigma^{\mu\nu})^*}\psi^* = e^{-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu}}\psi^*$$ Hence, $$\psi_c\rightarrow\gamma^2 e^{-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu}}\psi^* = e^{-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu}}\psi_c$$ The last equation was not obvious to me at all. I think that in order to make the last equation correct, we must have $\gamma^2$ commute with the exponential. Or equivalently, $$\omega_{\mu\nu}[\sigma^{\mu\nu},\gamma^\lambda] = 0$$ with $\lambda=2$. But this apparently is not correct to me since $$[\sigma^{\mu\nu},\gamma^\lambda]=2i(\gamma^\mu\eta^{\nu\lambda}-\gamma^\nu\eta^{\mu\lambda})$$ Using the fact that $\omega_{\mu\nu}$ is antisymmetric, we arrive at $$\omega_{\mu\nu}[\sigma^{\mu\nu},\gamma^\lambda] = 4i \gamma^\mu\omega_\mu^{\;\;\lambda}\neq0$$ Answer: First, note user Chiral Anomaly's comment that ${\sigma^{\mu\nu}}^* \neq -\sigma^{\mu\nu}$. $\gamma^2$ is purely imaginary in the Dirac basis, so it's the only gamma matrix that switches sign under complex conjugation. This means that $(\gamma^\mu)^* = \gamma^\mu(1 - 2\delta^{\mu2})$. But we won't actually need this, since we can talk about it more abstractly. Let $C$ be the charge conjugation matrix, which is unitary. We first note the definition that $C^{-1} \gamma^{\mu} C = -({\gamma^{\mu}})^*$. 
This means that $C \sigma^{\mu \nu} = \frac{i}{2} [(\gamma^\mu)^*,(\gamma^\nu)^*] C = - (\sigma^{\mu\nu})^* C$, where the minus sign comes from the need to conjugate $i$. This means that $C(-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu})^* = (-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu})C$. And thus that $C (e^{-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu}})^* = e^{-\frac{i}{4}\omega_{\mu\nu}\sigma^{\mu\nu}} C$, which is the necessary property you want. See Tong's notes for more details. This is discussed there around EQ(4.85)
{ "domain": "physics.stackexchange", "id": 60381, "tags": "special-relativity, lorentz-symmetry, antimatter, dirac-equation, charge-conjugation" }
Does there theoretically exist a computer that can never be affected by a computer virus?
Question: Does there theoretically exist a computer that can never be affected by a computer virus? I just wonder if there could be a revolution in computing. Answer: Do you mean "that cannot be affected" or "that was never affected"? Then, under what conditions? Do you mean malware in general or a more restrictive notion of virus? And then, what is a virus or malware? Is code for automatic update of your OS considered malware or not? Maybe you do not want to update your OS. Maybe someone has taken control of the updating service to introduce undesirable features in your system. The main problem with your question is probably the word "theoretically". What is a theoretical computer? If it is a Turing Machine, the answer is clearly yes: they do what their finite control says they should do. Period. Is there a fully precise theoretical model of a real computer? That is doubtful, if only because it is a complex physical object and there is a limit to our ability to fully model such an object, since our knowledge of physics is not complete ... though I doubt that is the main issue. Assuming we could completely model the physical computer with a formal system, do we fare better? Now we have (or shall have) all these great tools for proving theorems about computers and programs. But this has limitations too. Gödel says that not all true facts are provable, especially in a theory that can model that much computation. But even without falling into this black hole, do we fare better? No, because before proving anything about the computer we need to model precisely what a computer is, and also what it means to be affected by a virus. Even if we have a perfect and total proof system (which we do not), how do we prove that our theoretical specification of the problem is accurate with respect to the physical problem with real computers? Bugs are not the exclusive privilege of programs and proofs; you can also have them in specifications, and some may be hard to eradicate. 
If your child specifies he wants a train as a gift when it is actually a car that he wants, what are the formal tools that will identify the specification error - short of implementing and observing that things do not behave as expected? Coming back to a more practical view: some machines are hardware-protected with encryption keys that allow only trusted software. First, many people consider this as malware by itself, as it is often used against the owner of the computer. Then, encryption can be broken by mathematical or social means. And the protection may possibly also be circumvented by other means. A computer that stays off-line and does not access foreign data is unlikely to be affected. Then, there are different ways of accessing foreign data that may be more or less risky. Using foreign code is likely to be riskier. So the question cannot be independent of defining the conditions in which the computer is used. However, even off-line, there is always the risk that the computer was born affected by malware (to avoid the restrictive concept of virus). You should read Ken Thompson's 1984 Turing Award lecture: Reflections on Trusting Trust. Either you trust others to some extent, or you rebuild most of the technology from scratch by yourself, with no interference from others. Maybe not starting at the stone age, but far enough ... assuming you do not have bugs in your own code (but if not intentionally malicious, they are less likely to be as much of a problem).
{ "domain": "cs.stackexchange", "id": 2647, "tags": "terminology, security" }
How do atoms scatter X-rays?
Question: I am learning the theory behind X-ray diffraction but I have a question. According to the textbook I am using, X-ray tubes (in diffractometers) produce near monochromatic X-rays. In other words, they produce X-rays of nearly the same wavelength. Then, these X-rays interact with the electrons of atoms in a crystal lattice. The atom, in return, spherically emanates X-rays of the same wavelength as the X-rays from the diffractometer tube (elastic scattering). How exactly does elastic scattering work? Why do atoms accept incoming X-rays and then produce X-rays of the same wavelength in a spherical pattern? I am guessing that this is a somewhat advanced physics question. Answer: Perhaps the simplest way to think about this is that the electric field of the incoming photon "wiggles" the electron, and that this wiggling electron then produces an emission of electromagnetic radiation. This is called Thomson scatter. Note that the emission is not isotropic - some directions will see greater intensity than others. All that is explained in detail at the link given.
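The anisotropy mentioned in the answer is explicit in the classical Thomson differential cross-section for an unpolarized beam, $\frac{d\sigma}{d\Omega} = \frac{r_e^2}{2}(1+\cos^2\theta)$; a small illustration:

```python
import numpy as np

r_e = 2.818e-15                        # classical electron radius, in metres

def dsigma_dOmega(theta):
    """Classical Thomson differential cross-section, unpolarized beam."""
    return 0.5 * r_e**2 * (1 + np.cos(theta)**2)

for deg in (0, 45, 90, 135, 180):
    th = np.radians(deg)
    print(f"{deg:3d} deg : {dsigma_dOmega(th):.3e} m^2/sr")
```

Forward and backward scattering (0 and 180 degrees) are twice as intense as scattering at 90 degrees, so the "spherical" re-emission is really a dipole-like pattern rather than an isotropic one.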
{ "domain": "physics.stackexchange", "id": 27183, "tags": "diffraction, crystals, x-rays, x-ray-crystallography" }
Where does the kinetic energy of an electric motor driven in reverse go?
Question: When you drive an electric motor forward with voltage +V, part of the electrical energy is dissipated as heat in the motor winding and the rest of it (I am overlooking other kinds of losses) goes into the kinetic energy of the rotor. However, when you drive this previously-forward-driven motor in the reverse direction (applying -V this time across the windings using an H-bridge; the supply is still providing electrical power), some of the electrical energy is also dissipated in the motor windings, but what happens to the remaining mechanical power/energy provided? The kinetic energy of the rotor is decreasing, so what are both of them transformed into? Answer: When you switch the motor polarity while the armature is spinning, the motor becomes a generator and it produces a significant surge of current that tries to run backwards out of the motor and into the power source. That current surge, times the voltage at each instant of the process, yields power that is leaving the system and flowing towards the source. When the armature finally comes to a stop, so does the current leaving the motor, which is replaced by current flowing into the motor from the source again, and the armature begins spinning in the opposite direction. This effect is used in electric cars to slow them down and recapture the kinetic energy not only of the armature but of the entire car; in this context the concept is known as dynamic or regenerative braking.
{ "domain": "physics.stackexchange", "id": 67240, "tags": "electromagnetism, classical-mechanics, energy, electricity" }
Problem with scaling of objects in Gazebo gui
Question: Hi there, I am trying to scale the object's visual and collision size (geometry => size tags) in the Gazebo gui. Upon changing the value the following exception gets thrown: Qt has caught an exception thrown from an event handler. Throwing exceptions from an event handler is not supported in Qt. You must reimplement QApplication::notify() and catch all exceptions there. terminate called after throwing an instance of 'gazebo::common::Exception' Aborted (core dumped) Version 2: I also got the following backtrace: #0 0x00007ffff3a6d425 in raise () from /lib/x86_64-linux-gnu/libc.so.6 #1 0x00007ffff3a70b8b in abort () from /lib/x86_64-linux-gnu/libc.so.6 #2 0x00007ffff43bf69d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #3 0x00007ffff43bd846 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #4 0x00007ffff43bd873 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #5 0x00007ffff43bd9b6 in __cxa_rethrow () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #6 0x00007ffff598816c in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4 #7 0x00007ffff598cf67 in QCoreApplication::exec() () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4 #8 0x000000000059d273 in gazebo::gui::run (_argc=3, _argv=0x7fffffffdbd8) at /home/lawnmower/work/simulator_gazebo/gazebo/build/gazebo-hg/gazebo/gui/Gui.cc:195 #9 0x000000000059567b in main (_argc=3, _argv=0x7fffffffdbd8) at /home/lawnmower/work/simulator_gazebo/gazebo/build/gazebo-hg/gazebo/gui/main.cc:2 This has been noted in https://code.ros.org/svn/ros-pkg/stacks/simulator_gazebo/trunk in ROS fuerte, which pulls in the deprecated_parser_sdf_1.2_support branch of Gazebo. OS: Ubuntu 12.04. Originally posted by dejanpan on Gazebo Answers with karma: 60 on 2013-03-08 Post score: 0 Answer: The scaling feature was never implemented, and resulted in a segfault. 
We have resolved this problem in Gazebo 1.5 by removing the GUI-based scaling feature. We'll spend more time on this in the near future to implement it properly. Originally posted by nkoenig with karma: 7676 on 2013-03-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3100, "tags": "gazebo" }
What data size is sent to and read from physical RAM?
Question: When you have a cache miss, you need to fetch a block from RAM. If said block is 64 bytes big, do you need buses that are 512 bits (= 64 bytes) wide to transfer data from the RAM to the cache? And when writing to RAM, do you write the whole 64-byte block? Answer: There is no requirement that a transfer between memory and cache use one line per bit. In fact, DDR3 DRAM uses a burst length of eight, meaning that eight bits are transferred per line in four cycles at double data rate. (DDR3 supports burst chop of only four transfers, but this has the same timing constraints for a read following a read and a write following a write. Burst chop can reduce energy use, and other DRAM ranks can use the memory channel during the inactive time.) By requiring burst transfers, memory chips can be made more cheaply while still providing high bandwidth. The write interface to DRAM is likewise constrained. In addition, the modified nature of portions of cache smaller than a cache block is usually not tracked, so the cache would not know that it actually only needs to write back part of the cache block. (Such constraints are not inherent in memory systems. Caches could track validity or cleanness at any granularity, at the cost of additional storage and complexity. Interfaces could be defined to support arbitrary-sized transfers, including the use of mask bits to tell the memory chips to ignore specific data, but the complexity is generally not considered worthwhile.) One advantage of a narrower interface is that synchronization of all the lines is easier, making it easier to produce an interface with a fast clock providing higher bandwidth with the same number of lines. Note also that more than one memory channel can be implemented in a CPU. It is also possible to gang two or more memory channels together to form a single interface.
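The burst arithmetic for a common DDR3 configuration shows why no 512-bit bus is needed (the numbers are the typical setup, not a universal requirement):

```python
# A 64-bit (8-byte) DDR3 channel with burst length 8 delivers exactly one
# 64-byte cache line per burst -- in 4 clock cycles at double data rate.
bus_width_bytes = 8        # 64 data lines
burst_length = 8           # transfers per burst
clock_cycles = burst_length // 2   # two transfers per cycle (DDR)

cache_line = bus_width_bytes * burst_length
assert cache_line == 64    # bytes per burst == one cache line
assert clock_cycles == 4
print(f"{cache_line} bytes per burst in {clock_cycles} cycles")
```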
{ "domain": "cs.stackexchange", "id": 4290, "tags": "computer-architecture" }
Why should phospholipid non-polar tails be "protected" in the membrane bilayer?
Question: Lipids are arranged within the membrane with the polar head towards the outer side and the non-polar tails towards the inner side; this ensures that the non-polar tail is protected from the aqueous environment. My question is: why should we protect the non-polar part? Will it be destroyed in contact with the polar part? What is the correct reason for the bilayer arrangement? Answer: What is the correct reason for the bilayer arrangement? I'll answer your second question first, but there is an almost identical question on this site already: Why do cells have a bilayer? There is water on the extracellular and intracellular side of the membrane. What's actually happening at a molecular dynamics level is the self-association of the hydrophobic lipid tail groups, driven entropically by water. In other words, the polar (hydrophilic) head-groups "prefer" interacting with the water (called the interfacial region) and the hydrophobic tail groups "prefer" not interacting with the water. With those two preferences in play, the lipid bilayer formation we know and love emerges. Why should we protect the non-polar part? Will it be destroyed in contact with the polar part? To directly address the first part of the question: no, nothing would be destroyed. The word "protect" isn't appropriate (it's a bit too anthropomorphic for my taste!). Here is a video showing the bilayer spontaneously assemble in a molecular dynamics simulation. Read the more thorough 2003 journal article for an idea of early MD simulations of bilayer formation. As you can see, nothing "bad" happens when the water collides with the lipid tails and the lipids aren't destroyed. Interesting read: MEMBRANE LIPIDS OF THE PAST AND PRESENT. Good animations and explanations of different membrane formations. For an academic perspective, I'd recommend a couple of reviews: Cournia et al., 2015 and Gerit et al., 2008.
{ "domain": "biology.stackexchange", "id": 5637, "tags": "biochemistry, molecular-biology, cell-biology, cell-membrane" }
Why does CW imaging not provide the same data as Pulsed imaging?
Question: "Unlike pulsed THz imaging, the CW imaging (...) only yields intensity data and does not provide any depth, frequency-domain or time-domain information (...)." [1] Why does a pulsed signal provide more information than the CW signal? How is this achieved? [1] Comparison between pulsed terahertz time-domain imaging and continuous wave terahertz imaging, Nicholas Karpowicz et al. Semicond. Sci. Technol. 20, S293, (2005) Answer: With pulses you can measure the time it takes for the pulse to fly to the target and come back after reflection. This allows one to calculate the depth of the target. In practice you may get multiple reflections for the same pulse, and this will result in various features in the image, and for each feature you have the depth relative to the transducer surface. Atomic excitation is not involved here. The same principle applies to ultrasound imaging and pulsed Doppler.
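Concretely, the depth comes from the round-trip time of flight of the pulse; a minimal sketch (vacuum propagation assumed; in a material you would additionally divide by its refractive index):

```python
c = 299_792_458.0                    # speed of light in vacuum, m/s

def depth_from_delay(dt, n=1.0):
    """Depth of a reflecting layer from the round-trip echo delay dt (seconds).

    n is the (assumed) refractive index of the medium.
    """
    return c * dt / (2.0 * n)        # divide by 2: the pulse travels there and back

# a 1 ps delay between echoes corresponds to roughly 150 micrometres in vacuum
print(depth_from_delay(1e-12))       # ~1.5e-4 m
```

A CW source has no such timing reference: with only a steady intensity at the detector, the arrival time of individual reflections cannot be resolved, which is why only intensity data remain.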
{ "domain": "physics.stackexchange", "id": 78865, "tags": "optics, laser, signal-processing, imaging, photonics" }
Axiomatically, what characterizes “recursion”?
Question: My question is admittedly simple, but the desire is to have an insightful view on it behind a conventional definition. In different foundational or axiomatic systems, I have come to consider “compositionality” as possibly the single most general concept to use as a structuring principle, in any system that gives rise to increasingly rich amounts of information. “Composition” is a highly general, even cognitive, idea of “put two things together, and call it a new thing”. The way I see it, whatever concepts or rules you start a minimal system out with, where that system is meant to define more complex informational structures, there is this fundamental “impasse”, which I think is like the Church-Turing thesis, where it’s actually hard or nigh impossible to derive a rule, which was not already implicitly there as a rule of interpretation, to begin with. For example, the simplest possible “compositional” / “combinatorial” system is something like the free infinite commutative(/symmetric) monoid or semigroup, I think, in abstract algebra. You have some things, you can put them together to make another thing. That’s it. Mathematically, you don’t have more rules of interpretation about whether any of those elements generated are themselves “interesting” or useful in some way - they’re just “words” (compositions of elements). I think it’s common to define functions in terms of sets - like a set of tuples, where the first element is the domain, and the second element is the codomain. Thus, a “function” is a set of ordered pairs. But that requires a human interpreter to understand that way of writing down functional associations. What if you wanted to have a computer do it instead? I’m not sure, but it doesn’t sound elegant - maybe you need a program that’s checking every set generated in the cumulative hierarchy (of composition) - if it is a set of ordered 2-sets, then it recognizes that as a function. 
Perhaps you can also “use” the functions - you can specify a function (perhaps each object has a hash to identify it) and call it on an object in its domain: the program knows to return the corresponding value. So, you generated / constructed functions from nothing but the composition of elements (basically, power sets, subsets of sets). But you didn’t really - your computer program already had a more robust program to run it, that had an “if” conditional, a match / equality operator, a true/false object, a “for all” quantifier, a concept of set size (cardinality), and even somehow the idea that a function paired with an element in its domain “is associated with the corresponding element in its codomain”. You can’t construct a mathematically rich system unless you already have a system of equivalent mathematical richness. Right? I think we might be able to build up the idea of a universal Turing machine by considering very simple games or procedures, and adding in some fundamental capability, one at a time. Imagine dropping colored pebbles in a row, perhaps at random. Ok. Not much to say. Not too much change or order, as time goes on. Now: if we want to try to enhance how the system behaves by minimally importing rules or concepts from somewhere else (like, “every third stone shall be blue”), the first thing I am (only prematurely) confident we need is the ability to read our own sequence, and act based on a condition of that sequence - so, you have a ‘read’ and an ‘if’ command, and a concept of equality, let’s say. (Maybe, any of those three things alone would not be useful, so somehow we can bundle them together into a single thing?) I think right now we have a ‘context-sensitive grammar’ (Chomsky type-1, I think?). If we assume infinite colors for the marbles, we have Chomsky type-0, Turing complete, yes? If we have finite colors, we require infinite loops in our instructions? Expressions that say things like “do while”? 
(I guess that was implied by the idea that we were dropping stones in a line continuously, with no intention of stopping or rules for the game to ‘end’ / halt). I guess I am trying to see more clearly what might be a cleaner reduction on where “recursion” or infinite loops could come in - as there are different ways of presenting it. Is it a function which can call itself? Is it implied by any formal system with infinite starting elements, like my pebble game? Someone told me “recursion” can even be expressed as a “fixed point”? “In untyped lambda calculus,” a fixed-point combinator such as the Y-combinator “is used to define recursion” (1). Can someone help me understand: how can you minimally specify what recursion really is, from an axiomatic perspective? Is the real point of recursion “infinitude”, in which case, you don’t need the concept if you already assume infinite elements? Is recursion a condition on finite sets for them to be able to imply/construct/determine infinite ones? But is that just a mirror image, since a function which calls itself has to actually be called an “infinite” number of times, to actually produce “infinite” elements... Can we say that there is no such “thing” as infinity, from a computability perspective - only halting, if something will stop on its own, or else, outer forces will have to stop it (someone has to turn off the machine / stop playing the game)? If we try to reduce these systems to the fewest possible elements, I am considering choosing “rules” as the only existing objects in the system. A rule calls another rule: Rule 1: If Rule 2, Rule 3. This could be either that if Rule 2 has been acted on, next you just act on Rule 3; or it can be a swapping rule, where you can swap out Rule 2 if it is present. 
This similarly I think reduces to the Universal Turing Machine - you can “read” in a way (you know what elements are present) - you can “write” (you can act, like a function: do something to something, change it in some way) - and you have conditions (if-statements). If we abandon a need to identify “recursion” and just focus on “halting” instead, then I think that makes “recursion” clearer. Any collection of rules that has some circularity, where one rule can lead back to itself, may not halt, and is “recursive” in the sense of a function calling itself (even if indirectly / via a chain of functions). What interests me is that you can make all these different systems like anti-foundational set theory and study the resulting behavior of that system. The Church-Turing thesis tells us that the Universal Turing Machine will be the maximum possible, of any such game or collection of rules. Or does it? Could there be all kinds of games we just haven’t explored yet, based on subtle changes in this or that simple scenario? It feels like a huge avenue for creative tinkering and poking around, navigating to uncharted territories. But it makes me think that the Turing Machine maybe can be expressed in fewer than four “things”. Maybe that’s what type theory is for - the modern attempt to establish the minimum necessary, in any constructive system?
The point of the Church-Turing thesis is to formalize the intuitive notion of "effectively calculable" that was common parlance among mathematicians by the early 1900s; at the time, it basically meant "computable (by a human) by following routine algorithm" but no one had a precise definition. The Church-Turing thesis is, no more and no less, the proposal to identify "effectively calculable" with the formal definitions "computable by a Turing machine" or "computable by a lambda calculus expression". See the Stanford Encyclopedia of Philosophy's article on the topic for a good overview. For example, the simplest possible “compositional” / “combinatorial” system is something like, the free infinite commutative(/symmetric) monoid or semi group you don’t have more rules of interpretation on if any of those elements generated are themselves “interesting” or useful in some way Actually, this structure actually already requires recursion to construct. The rule of interpretation here is that we know that every element of the free monoid was generated by some finite application of concatenation of individual elements. And this is also the core of what recursion is about. Axiomatically, what characterizes “recursion”? In short: the core of recursion is inductive constructions. It's the idea that every element of a set is constructed via some finite number of applications of some list of allowed rules. Mathematicians working on formal languages for proofs have found that this is sufficient -- you might be interested in the Calculus of inductive constructions which is used by the proof assistant Coq; it's been found that this very simple system is enough to formalize most properties of modern mathematics. 
The core idea of the calculus of inductive constructions is the idea that you can give an inductive definition, like "a string is either an empty string, or a character followed by a string" and interpret all ways of finitely constructing an object from that inductive definition.
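The answer's string example can be written out directly; a Python sketch of the inductive definition and a structurally recursive function over it:

```python
from dataclasses import dataclass

# "a string is either an empty string, or a character followed by a string"
@dataclass
class Empty:
    pass

@dataclass
class Cons:
    head: str
    tail: object          # another Empty or Cons

def length(s):
    # Defined by cases on the two constructors; the recursive call is on a
    # strictly smaller piece, so it terminates on every finitely built string.
    if isinstance(s, Empty):
        return 0
    return 1 + length(s.tail)

abc = Cons("a", Cons("b", Cons("c", Empty())))
print(length(abc))        # 3
```

This is the core of the point: because every value of the type is built by finitely many applications of the constructors, any function defined by cases on those constructors is automatically total - recursion comes for free from the inductive definition.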
{ "domain": "cs.stackexchange", "id": 21280, "tags": "recursion" }
Effect of self loops on mixing time?
Question: Consider 2 graphs G1 and G2. G1: Any non-regular graph. G2: Same graph but with added self-loops such that the degree of each node is the same (either some $\Delta$, or maximum '$n$', where $n$ is the number of nodes in the graph). We run a lazy random walk on G1, such that the random walk at a node $v$ either stays at $v$ with probability $1/2$ or moves out to a neighbor with probability $1/(2d_{v})$, where $d_{v}$ is the degree of the node $v$. Consider $t_{mix}$, the mixing time of this random walk. Additionally, we maintain a queue where the starting node of the random walk is added to a queue, and thereafter each time the random walk jumps to a neighbor, the node it jumps to is added to the queue. Let the queue contain $k$ nodes when the random walk has mixed ($k \le t_{mix}$). Note that since the graph is non-regular, the stationary distribution that it converges to is non-uniform. Now, consider another random walk on G2, which chooses each edge with probability $1/d_{v}$, where $d_v$ is the degree of node $v$ including the self-loops. Assume that this random walk exactly follows the previous random walk (on G1) in the sense that each time there is a jump to the neighbor, it jumps to the same neighbor as the previous walk, i.e., it will push the exact same nodes to its maintained queue. When we consider time, of course, this would be slower as, at low-degree nodes, the random walk would have to undertake many self-loops before being able to jump to a neighbor. What I want to show is that, when the queue contains $k$ (or $O(k)$) nodes, this random walk has also mixed. Note that, as opposed to before, the stationary distribution here with respect to which we want the mixing is uniform (due to the added self-loops). Is there any existing work that shows this? Or if not, what can be a good coupling argument to prove this? Any reference or help would be greatly appreciated. 
(Observe that the random walk on G2 can also be considered as another (biased) random walk on G1, where the random walk stays at the current node $v$ with probability $1-(deg(v)/n)$ and, with probability $1/n$, moves to a uniformly at random chosen neighbor.) Answer: Your question is essentially covered by Cor 9.5 in [1], which implies that as long as the ratio of self-loops to the original degree is bounded above and below, the mixing time of this modified walk is equivalent (up to constants) to the mixing time of the lazy walk on $G_1$. [1] Peres, Yuval, and Perla Sousi. "Mixing times are hitting times of large sets." Journal of Theoretical Probability 28, no. 2 (2015): 488-519. https://www.dpmms.cam.ac.uk/~ps422/mix-hit.pdf
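The two walks in the question can be sanity-checked numerically. A minimal sketch (using a hypothetical 3-node path graph, not taken from the question) confirming that the lazy walk on G1 has the degree-proportional stationary distribution while the self-loop-padded walk on G2 has the uniform one:

```python
import numpy as np

# Toy non-regular graph: a 3-node path (hypothetical example)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
deg = A.sum(axis=1)        # degrees 1, 2, 1 -> non-regular
n = len(A)

# Lazy walk on G1: stay w.p. 1/2, else move to a given neighbor w.p. 1/(2 d_v)
P1 = 0.5 * np.eye(n) + 0.5 * A / deg[:, None]

# Walk on G2: self-loops pad each degree up to n, every incident edge w.p. 1/n
# (equivalently: stay w.p. 1 - deg(v)/n, move to a given neighbor w.p. 1/n)
P2 = A / n + np.diag(1 - deg / n)

pi1 = deg / deg.sum()      # degree-proportional: stationary for P1 (non-uniform)
pi2 = np.full(n, 1 / n)    # uniform: stationary for P2 (P2 is symmetric)
print(np.allclose(pi1 @ P1, pi1), np.allclose(pi2 @ P2, pi2))  # True True
```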
{ "domain": "cstheory.stackexchange", "id": 5202, "tags": "graph-theory, pr.probability, markov-chains, random-walks" }
Is ghost-number a physical reality/observable?
Question: One perspective is to say that one introduced the ghost fields into the Lagrangian to be able to write the gauge transformation determinant as a path-integral. Hence I was tempted to think of them as just some auxiliary variables introduced into the theory to make things manageable. But then one observes that having introduced them there is now an extra global $U(1)$ symmetry - the "ghost number" Hence hasn't one now basically added a new factor of $U(1)$ to the symmetry group of the theory? How can the symmetry of the theory depend on introduction of some auxiliary fields? Now if one takes the point of view that the global symmetry has been enhanced then the particles should also lie in the irreducible representations of this new factor. Hence ghost number should be like a new quantum number for the particles and which has to be conserved! But one sees that ghost field excitations are BRST exact and hence unphysical since they are $0$ in the BRST cohomology. I am unable to conceptually reconcile the above three ideas - the first two seem to tell me that the ghost-number is a very physical thing but the last one seems to tell me that it is unphysical. At the risk of sounding more naive - if the particles are now charged under the ghost number symmetry then shouldn't one be able to measure that in the laboratory? Lastly this ghost number symmetry is a global/rigid $U(1)$ symmetry - can't there be a case where it is local and needs to be gauged? Answer: The mystery here should disappear once one realizes that the BRST complex -- being a dg-algebra -- is the formal dual to a space , namely to the "homotopically reduced" phase space. For ordinary algebras this is more familiar: the algebra of functions $\mathcal{O}(X)$ on some space $X$ is the "formal dual" to $X$, in that maps $f : X \to Y$ correspond to morphisms of algebras the other way around $f^* : \mathcal{O}(Y) \to \mathcal{O}(X)$. 
Now, if $X$ is some phase space, then an observable is simply a map $A : X \to \mathbb{A}$. Dually this is a morphism of algebras $A^* : \mathcal{O}(\mathbb{A}) \to \mathcal{O}(X)$. Since $\mathcal{O}(\mathbb{A})$ is the algebra free on one generator, one finds again that an observable is just an element of $\mathcal{O}(X)$. (All this is true in smooth geometry with the symbols interpreted suitably.) The only difference is now that the BRST complex is not just an algebra, but a dg-algebra. It is therefore the formal dual to a space in "higher geometry" (specifically: in dg-geometry). Concretely, the BRST complex is the algebra of functions on the Lie algebroid which is the infinitesimal approximation to the Lie groupoid whose objects are field configurations, and whose morphisms are gauge transformations. This Lie groupoid is a "weak" quotient of fields by symmetries, hence is model for the reduced phase space. So this means that an observable on the space formally dual to a BRST complex $V^\bullet$ is a dg-algebra homomorphism $A^* : \mathcal{O}(\mathbb{A}) \to V^\bullet$. Here on the left we have now the dg-algebra which as an algebra is free on a single generator which is a) in degree 0 and b) whose differential is 0. Therefore such dg-morphisms $A^*$ precisely pick an element of the BRST complex which is a) in degree 0 and b) which is BRST closed. This way one recovers the definition of observables as BRST-closed elements in degree 0. In other words, the elements of higher ghost degree are not observables.
{ "domain": "physics.stackexchange", "id": 3299, "tags": "quantum-field-theory, gauge-theory, brst, ghosts" }
What's the cause of the scars on these cherries?
Question: These cherries have some scars. Are they coming from a disease? Is it correct to call them "Cherry scabs"? Here's the photo: Answer: Are they coming from a disease? No, this appears to be a combination of rain damage and pests. If a disease were present, much more of the cherry would suffer, there would be signs of rotting, and the damage wouldn't solely be so superficial, as is the case in your posted images. Consider the next few images which demonstrate the effects that various diseases have on a cherry, and notice how much more detrimental [and different] the impact is. Bitter Rot (Colletotrichum gloeosporioides & C. acutatum) Brown Rot (Monilinia fructicola) Alternaria Rot (Alternaria sp.) There are other [equally common] diseases of cherry trees (Prunus sp.), including cherry leaf spot, black knot, silver leaf, and PNRSV, however, in pretty much all of these cases too, there's no modification to the cherry fruit (excluding extreme circumstances, as is the case with leaf spot, I believe). Instead, as mentioned, one major cause of the damage is rain. Just before harvest, when the cherry is at its greatest [and possibly maximum] volume, rain that's absorbed through the surface of the cherry causes the cherry to further enlarge, which ultimately splits the cuticle (skin). The result is the following: For more information on cherry splitting/cracking: Fruit Splitting (Cracking) – What causes it and can we do anything about it? Fruit Split In Cherries: Learn Why Cherry Fruits Split Open A Review of Cherry Fruit Cracking Story: Stone fruit and the summerfruit industry Cherry Weather Worries Given the irregularities of scarring on some of the cherries, I also suggest that insects and/or birds scratched/ate away at them. Consider the following images that illustrate the kind of damage that birds & wasps cause when feeding on cherries, and notice the similarities to those in your images.
bird damage wasp damage This, in combination with the rain, is most likely what caused the damage to your cherries. Is it correct to call them "Cherry scabs"? I just call it scarring, and I believe many other people do too. I'm not sure if there's a specific term for fruit damage that's then been "scabbed" over. If someone knows of a term for this, please provide it in a comment and I will supplement my response with it. Thank you. While continuing to research this topic, specifically when attempting to find alternative reasons that could explain this kind of damage, I did run across this article which talks about wind damage, and provides an image that slightly resembles the markings on your cherries. I'm still heavily convinced though that your cherries suffered their damage from rain & pests.
{ "domain": "biology.stackexchange", "id": 7787, "tags": "botany, infectious-diseases" }
How can I get all currently running nodes from the master in rosjava?
Question: Is there a simple way to retrieve all currently running nodes from the ros master in rosjava? In order to obtain all topics in the system the following works: MasterClient masterClient = new MasterClient(this.node.getMasterUri()); Response<SystemState> systemState = masterClient.getSystemState(defaultNodeName); List<TopicSystemState> topicSystemStateList = systemState.getResult().getTopics(); Is there something similar to obtain all running nodes? The only way I found is to indirectly retrieve them via TopicSystemState.getPublishers() and TopicSystemState.getSubscribers() - but this would ignore nodes which do not subscribe or publish anything. Regards scr Originally posted by sacrif on ROS Answers with karma: 11 on 2012-09-03 Post score: 0 Answer: While the underlying information is available, much of the master API is not yet exposed as a public API. Feel free to file a feature request. http://code.google.com/p/rosjava/issues/list Originally posted by damonkohler with karma: 3838 on 2012-09-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10866, "tags": "ros, nodes, rosjava, rosmaster" }
Is hydrogen peroxide safe for cleaning porcelain?
Question: From my understanding, hydrogen peroxide only reacts with bacteria in a way that more or less burns them. If that is true, can I then safely use hydrogen peroxide to clean my bathroom sink, which is made of porcelain? Answer: While hydrogen peroxide is not normally used for cleaning surfaces, there is nothing in normal glazed porcelain that it would damage. Porcelain is about as unreactive to most things as glass, which is unreactive enough to be used for the bottles that store hydrogen peroxide. But you should probably be careful with it as it can be dangerous if the solution is a strong one. Normally available hydrogen peroxide should not be as strong as the solutions used in the lab, though, which are often 30% solutions in water and should only be handled if you know what you are doing (see the safety data sheet). Consumer hydrogen peroxide is usually a 3-6% solution and isn't any more dangerous than hypochlorite bleach (and is a safer way to bleach hair). Standard household (hypochlorite) bleach is usually better for cleaning, though.
{ "domain": "chemistry.stackexchange", "id": 840, "tags": "reaction-mechanism, safety, cleaning" }
Action of $M_{\mu \nu}$ on local operators $\mathcal{O}(x)$
Question: I'm following the TASI Lectures on the Conformal Bootstrap by David Simmons-Duffin. Let $M_{\mu \nu}$ be the conserved charge operator associated with rotations. The action of said operator on local operators follows a set of irreducible representations of the rotation group $SO(d)$, $$[M_{\mu \nu}, \mathcal{O}(0)^a] = (\mathcal{S}_{\mu \nu})_{b}^{\,a}\mathcal{O}(0)^b\tag{39}$$ where $a$ and $b$ are indices for the $SO(d)$ rep. of $\mathcal{O}$. To study the action of $M_{\mu \nu}$ on local operators far from the origin, one does (using Euclidean signature and ignoring spin indices) \begin{split} [M_{\mu \nu}, \mathcal{O}(x)] & = [M_{\mu \nu}, e^{x\cdot P}\mathcal{O}(0)e^{-x\cdot P}] \\ & = e^{x\cdot P} [M_{\mu \nu}, \mathcal{O}(0)e^{-x\cdot P}]+[M_{\mu \nu}, e^{x\cdot P}]\mathcal{O}(0)e^{-x\cdot P}\\ & = \mathcal{S}_{\mu \nu}\mathcal{O}(x)+e^{x\cdot P}\mathcal{O}(0) [M_{\mu \nu},e^{-x\cdot P}]+[M_{\mu \nu}, e^{x\cdot P}]\mathcal{O}(0)e^{-x\cdot P}\\ & = \mathcal{S}_{\mu \nu}\mathcal{O}(x)+\mathcal{O}(x) e^{x\cdot P}M_{\mu \nu}e^{-x\cdot P}+ e^{x\cdot P}M_{\mu \nu}e^{-x\cdot P}\mathcal{O}(x)-\left[M_{\mu \nu},\mathcal{O}(x)\right]. \end{split} I can now use the Hausdorff formula, but this gives me an infinite set of commutators that I cannot simplify, namely $$\left[P \cdot x,M_{\mu \nu}\right]$$ What am I doing wrong? Is it possible to determine this commutator? Answer: Recall that $$[M_{\mu\nu}, P_{\eta}] = \delta_{\nu\eta} P_{\mu} - \delta_{\mu\eta} P_{\nu}$$ and $$e^X Y e^{-X} = \sum_{n=0}^{\infty} \frac{1}{n!} \underbrace{[X,[X,\ldots,[X}_{\text{n times}},Y]\ldots] \equiv \sum_{n=0}^{\infty} \frac{1}{n!} [(X)^n,Y].$$ So essentially your task boils down to computing the $n$-times commutator of $P$ and $M$. But since $[P,M] \sim P$, computing $[(P)^n,M]$ shouldn't be a problem, should it?
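To fill in the step the answer leaves to the reader (a sketch, using the commutator recalled above): the first commutator is already linear in $P$, so the Hausdorff series terminates after a single term.

```latex
% First commutator: contract [M_{mu nu}, P_eta] with x^eta
[x\cdot P,\, M_{\mu\nu}]
  = x^{\eta}\,[P_{\eta}, M_{\mu\nu}]
  = -x^{\eta}\left(\delta_{\nu\eta} P_{\mu} - \delta_{\mu\eta} P_{\nu}\right)
  = x_{\mu} P_{\nu} - x_{\nu} P_{\mu}
% This is linear in P, and [P_\mu, P_\nu] = 0, so all higher nested
% commutators in the Hausdorff series vanish:
e^{x\cdot P}\, M_{\mu\nu}\, e^{-x\cdot P}
  = M_{\mu\nu} + [x\cdot P,\, M_{\mu\nu}]
  = M_{\mu\nu} + x_{\mu} P_{\nu} - x_{\nu} P_{\mu}
```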
{ "domain": "physics.stackexchange", "id": 85329, "tags": "homework-and-exercises, conformal-field-theory, commutator" }
Optical rotation and chirality
Question: I had the same question as in Molecular chirality and optical rotation In the answer, it says that there can't be such a mirror position where the effect of one molecule would be cancelled by the other. But I'm so confused. Don't the first and the third image in the original answer show molecules such that both in combination could cancel each other's effects? Answer: Let's just look at the first situation. What you're asking is, in effect, what happens when I run the situation in reverse because for every orientation that looks like the first scenario, there is one that matches the reverse. But look at the effect down the axis from both sides. In both situations, the plane of polarization has shifted about 45° counterclockwise. So no matter the direction of light in this situation, the sense of rotation is the same, so there is no canceling of the net rotation due to symmetry; the symmetry reinforces the sense of rotation. This makes perfect sense if you were to look at a screw. Regardless of which end you look at it from, for a regular right-handed screw, you always turn to the right (clockwise) to drive it in (away from you).
{ "domain": "chemistry.stackexchange", "id": 17912, "tags": "organic-chemistry, stereochemistry, chirality, symmetry" }
Why can't virtual images form on a screen?
Question: Part of the definition of a virtual image is that it cannot be formed on a screen. I understand this is the case when the screen is right next to the image, since there are no physical rays that can hit the screen. But what I don't understand is why an image can't form on the screen if the screen is located sufficiently far away from the image and/or lens so that rays do physically hit the screen? The 'explanation' usually given is that real rays converge while virtual rays don't, but how is the screen supposed to know if the rays it's seeing actually converged at some point or not? The only apparent difference compared to real rays I can see is that rays for virtual images would have greater angular divergence, which would create an image on the screen, just blurry. Answer: There seems to be some fundamental confusion here. An image is formed on a screen when light rays emanating from an object converge there. If there is no convergence of rays, then there is no image on a screen. Think about a portrait positioned on the left side of the lens. The light emanating from a point on the tip of the nose focuses to the (single) corresponding point on the image. The same is true of all the neighboring points, so there is a one-to-one correspondence between points on the image and points on the object, and the image is clear. On the other hand, if you position a screen at a different location, then the light emanating from the tip of the portrait's nose will be spread over a whole region of the screen. The light from the neighboring points on the object will overlap, and the result will be a blurred image. The conclusion is that the calculated image distance is where you will get a clear image; if you put your screen anywhere else, then an image will not form. Now consider what you'd get with a diverging lens. The blue dotted lines are obtained by tracing the rays on the right hand side backward and pretending the lens wasn't there.
The virtual image is the location from which the rays appear to be emanating from the perspective of somebody on the right-hand side of the lens. However, there are no actual light rays which converge there. If you place a screen at the location of the virtual image, can you see why you don't get a nice picture?
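One way to see this quantitatively (a sketch with hypothetical numbers, using the Gaussian thin-lens equation and the usual sign convention where a negative image distance means a virtual image):

```python
# Gaussian thin-lens equation: 1/do + 1/di = 1/f  =>  di = 1/(1/f - 1/do)
f_div, f_conv = -10.0, 10.0   # focal lengths in cm (hypothetical values)
do = 20.0                     # object distance in cm

di_virtual = 1 / (1 / f_div - 1 / do)   # negative: virtual image, no rays converge
di_real = 1 / (1 / f_conv - 1 / do)     # positive: real image, forms on a screen
print(di_virtual, di_real)
```

For the diverging lens the image distance comes out negative: the rays only appear to emanate from that point, so no screen placement catches a converged image.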
{ "domain": "physics.stackexchange", "id": 71255, "tags": "optics" }
$v^2 = 2ax$ or $v^2 = ax$?
Question: As far as I am aware, $v^2 = 2ax$ is the formula to find the velocity in various questions. If kinetic energy = work, $$\frac{1}{2}mv^2=Fx$$ $$mv^2=2max$$ $$v^2=2ax$$ We use this formula to solve some questions in school. But when I just fiddle around with basic formulas I get this. $$x/v = t$$ $$v/a = t$$ $$t = t$$ $$x/v = v/a$$ $$v^2 = ax$$ And this just confuses me. I assume that $x/v=t$ and $v/a=t$ is actually kind of simplified, or else I cannot see why $v^2$ equals $ax$ in one and $2ax$ in the other. Can someone explain to me what I am doing wrong? Answer: In your second derivation, the correct formulas are $$\begin{align} \frac{\Delta x}{v} &\approx \Delta t & \frac{\Delta v}{a} &\approx \Delta t \end{align}$$ I'm sure you can easily find some examples to show you why $x/v = t$ and $v/a = t$ don't make any sense. Anyway, when you put these together, you get $v\Delta v \approx a\Delta x$, with the approximation becoming more accurate the smaller the $\Delta$s are. Note that if you take the limit as $\Delta v$ and $\Delta x$ go to zero, then integrate, you get $$\begin{align} \int_{v_i}^{v_f} v\,\mathrm{d}v &= \int_{x_i}^{x_f} a\,\mathrm{d}x \\ \frac{v_f^2 - v_i^2}{2} &= a(x_f - x_i) \\ v_f^2 &= v_i^2 + 2a\Delta x \end{align}$$ which is exactly the correct formula.
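The limit can also be checked numerically (a sketch with arbitrary values): step a constant-acceleration motion from rest in small increments and compare $v^2$ with $2a\Delta x$ and with $a\Delta x$.

```python
# Constant acceleration from rest; values are arbitrary, for illustration only
a, v, x, dt = 2.0, 0.0, 0.0, 1e-5
for _ in range(100_000):   # simulate 1 second in 100,000 small steps
    x += v * dt            # dx = v dt
    v += a * dt            # dv = a dt
print(v**2, 2 * a * x)     # nearly equal; a*x alone is off by a factor of 2
```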
{ "domain": "physics.stackexchange", "id": 19330, "tags": "kinematics, integration, calculus" }
Moonrise and moonset roughly at the same hour for few days
Question: I generated an ephemeris for my town (15.10, 17.03) and what caught my attention is that for some days in a row, the difference between moonrises is like 15 minutes (the difference between moonsets is ~1h at a time): However a couple days later the situation is mirrored for moonset, namely the difference between moonsets is like ~25 min and the difference between moonrises is more than an hour. Could someone explain this to me? Thank you! Link to ephemeris. Answer: The Moon’s orbit is slightly tilted (about 5.15°) with respect to the ecliptic, and the direction of this tilt is variable over a period of about 18.61 years. This means its tilt with respect to the equator is variable. But even if it weren’t, when the Moon is near the node (the crossing of its orbit with the equator), its “vertical” movement becomes faster, and it’s slower around 90° from the nodes. This explains why sometimes there is more or less time between two successive moonrises or moonsets, as the declination of the Moon “compensates” for its difference in right ascension which would make its risings and settings (almost) regularly spaced from each other.
{ "domain": "astronomy.stackexchange", "id": 6964, "tags": "the-moon" }
Is the metric expansion of space relatively uniform on different length scales?
Question: Is the Metric expansion of space relatively uniform in space? In other words, loosely speaking, does expansion happen everywhere, and over a wide range of length scales? For example, the Hubble constant (say 70 km/sec per megaparsec) would be about 2.3E-05 m/s at 10 billion km. Neglecting numerous profound experimental difficulties, if it were possible to make some kind of measurement with a controlled experiment over such a short distance, would we expect to see expansion locally consistent with the cosmological rate? Assume the experiment is in a relatively empty area in space, where one is not distracted by large scale structure, so that one tries to put all the expansion between those structures and not within those structures. Note: the question is about the expansion rate itself, not about how difficult it would be to measure. The question is also not about how expansion has been historically inferred from earth-bound observations of complex structures like galaxies. It's about the space. Answer: Such measurements have been done, using lasers reflecting off mirrors on the moon. See e.g. the paper Progress in Lunar Laser Ranging Tests of Relativistic Gravity (Williams et al. 2008), which established an effective limit on the expansion at AU scales that is about 80 times smaller than what would be expected if cosmological expansion applied within our solar system. As John Rennie explained in an answer to this question, the expansion is a property of the FLRW metric, but the local distribution of matter doesn't match the assumptions for that metric (which hold well enough on cosmological scales). That doesn't prove by itself that a metric that describes our solar system doesn't have expansion, but the experimental evidence is that if it does, it is much smaller than you'd expect from a simple extrapolation of Hubble's law down to AU scales.
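As a quick arithmetic check of the figure quoted in the question (a sketch; the megaparsec value is rounded):

```python
Mpc = 3.086e22          # metres per megaparsec
H0 = 70e3 / Mpc         # 70 km/s/Mpc expressed in s^-1
d = 1e13                # 10 billion km in metres
v = H0 * d              # naive Hubble-flow recession speed at that distance
print(f"{v:.1e} m/s")   # ~2.3e-05 m/s, matching the number in the question
```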
{ "domain": "physics.stackexchange", "id": 30209, "tags": "cosmology" }
Convolution between two vectors. Length and normalization
Question: I have an RIR vector $h[n]$ with $N$ samples and an audio source $x[n]$ with $M$ samples. I wish to simulate a 5-second audio segment with $x[n]$ randomly located within it (timewise). Using MATLAB's conv(x,h) I am getting a result with values in the range $[-0.3852,0.3242]$. Using Python's np.convolve(x,h) I get a result with values in the range $[-12621.9,10624.08]$, which also sounds bad on the headset (I am assuming due to cutoffs). I do not know where the difference is coming from, as both $h$ and $x$ are the same before the convolution. Normalizing the output of the Python version by: output=np.convolve(x,h) output=output/len(output) fixes the values. This is true for normalizing by either len(output), len(x) or len(h). Now I am confused about the best method of action. For a 5-second segment recording, do I have to generate a 5-second-length $h$? Is it best to first pad both $h$ and $x$ with zeros and then convolve, or should I convolve and then allocate randomly within a 5-second zeros vector? Is it at all reasonable to normalize here? With respect to the former 3 questions, do I normalize by len(output), len(x), len(h) or the number of samples within a 5-second segment? I am aware that there may be more than one correct answer. I am looking for the pros and cons of each course of action and what is the best way to achieve my target. Answer: The scaling difference between your two outputs is exactly $32767 = 2^{15}-1$, which is exactly the maximum amplitude of a 16-bit signed integer. I'm guessing the difference is how you import the audio into the program: MATLAB's audioread() typically normalizes the data to $[-1,1]$, i.e. divides by 32767. It would seem that whatever Python method you are using doesn't do this. Normalization has nothing to do with the length of the filters and it isn't affected by any zero padding. It's all about the scaling conventions of your inputs and outputs, i.e. how you interface with drivers and/or files.
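The question doesn't say which Python reader was used; assuming something like scipy.io.wavfile.read, which returns raw int16 samples for 16-bit files, the fix can be sketched as (sample values here are hypothetical):

```python
import numpy as np

# Hypothetical 16-bit PCM samples, as e.g. scipy.io.wavfile.read returns them
x_int16 = np.array([16384, -32768, 32767, 0], dtype=np.int16)

# MATLAB's audioread maps 16-bit data into roughly [-1, 1]; dividing by 2**15
# (some references use 32767 instead) reproduces that scaling in Python
x_float = x_int16.astype(np.float64) / 32768.0
print(x_float.min(), x_float.max())   # now within [-1, 1]
```

After this rescaling, np.convolve(x, h) matches MATLAB's conv(x, h) up to floating-point error, with no length-dependent normalization needed.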
{ "domain": "dsp.stackexchange", "id": 7839, "tags": "convolution, impulse-response, audio-processing" }
Reducible representation of planar molecule N2H2 with bond lengths as a basis
Question: I'm studying molecular symmetry and its representations. Today I got a little confused about the planar $\ce{N2H2}$ molecule. It looks like this The basis in the example was chosen as $\Delta r_1, \Delta r_2$, i.e. the lengths of the N-H bonds. Then it was stated that the molecule belongs to the $\mathrm C_{2h}$ point group, which I agree with. The problem is that the example stated that the reducible representation is \begin{array}{|l|l|l|l|l|} \hline {} & \mathrm E & \mathrm C_2 & i & \sigma_h\\ \hline \Gamma & 1+1=2 & 0+0=0 & 0+0=0 & 1+1=2\\ \hline \end{array} And here I don't understand what's going on. I understand that $\mathrm E$ and $\sigma_h$ are 2, as both $\Delta r_1$ and $\Delta r_2$ stay the same after those operations, i.e. the bond length doesn't change. But bond lengths don't change after $\mathrm C_2$ and $i$ either, so why are there zeros in the table? I'd understand it if we used vectors of Cartesian coordinates on every atom as a basis, but in this case, I'm quite lost. Could you, please, explain it to me? The example is taken from this presentation (slide 18). Answer: Instead of using the $\Delta r$ as in your figure, redraw it with a vector shown as an arrow parallel to each NH bond and label them 1 and 2. You know from the point group that the NH bonds are the same length and at the same angle, so these vectors are similar but point in different directions. Operate on the whole diagram including arrows (with labels) according to the operations in the point group. If, after an operation, the figure is indistinguishable from your starting point, then count $1$ for each vector; otherwise count $0$. In your figure the point in the point group is at the centre of the NN bond. The $C_2$ axis points out of the page centred at this point. The $\sigma_h$ plane is the plane of the figure. The identity, which is an 'I exist' or 'do nothing' operation, counts 2.
The $C_2$ operation is rotation by 180 degrees and so interchanges the vectors 1 and 2 so is not indistinguishable and so each vector counts 0. Similarly for the inversion $i$ through the centre. Reflection in the mirror plane (the plane of the image) leaves the molecule indistinguishable so counts 2. Thus you have the reducible representation you give in your question.
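The bookkeeping in this answer can be written out explicitly (a small sketch; the matrices simply encode "each vector kept on its own bond" versus "the two bonds swapped", and the character is the trace):

```python
import numpy as np

# Basis: the two N-H stretch vectors (dr1, dr2). Each C2h operation either
# leaves a vector on its own bond (diagonal 1, contributes to the trace) or
# swaps the two bonds (off-diagonal, contributes 0 to the trace).
keep = np.eye(2)                 # E and sigma_h leave both vectors in place
swap = np.array([[0., 1.],
                 [1., 0.]])      # C2 and i interchange the two bonds
ops = {"E": keep, "C2": swap, "i": swap, "sigma_h": keep}

chi = {name: int(np.trace(m)) for name, m in ops.items()}
print(chi)  # {'E': 2, 'C2': 0, 'i': 0, 'sigma_h': 2}
```

The traces reproduce the reducible representation 2, 0, 0, 2 from the question's table.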
{ "domain": "chemistry.stackexchange", "id": 10767, "tags": "symmetry, group-theory" }
move_base vs geometry_msgs/PoseStamped md5 error, Electric, deb-install
Question: I've just been trying to run (for the first time, so maybe I just set something wrong) move_base together with gmapping on ROS Electric (on Ubuntu 10.04). As soon as move_base goes up, an error appears: [ERROR] [1323959697.184014051]: Client [/move_base] wants topic /move_base/goal to have datatype/md5sum [move_base_msgs/MoveBaseActionGoal/660d6895a1b9a16dce51fbdd9a64a56b], but our version has [geometry_msgs/PoseStamped/d3812c3cbc69362b77dc0b19b345f8f5]. Dropping connection. I've installed all ROS packages via apt-get. There has been some update today, so maybe some of the packages weren't rebuilt properly. Has anyone else noticed this problem? Originally posted by tom on ROS Answers with karma: 1079 on 2011-12-15 Post score: 1 Answer: It seems like you're somewhere mixing the actionlib interface of move_base with the move_base_simple topic, which expects to receive a geometry_msgs/PoseStamped. Are you running rviz and setting a goal-pose for the robot with it? If so, open the Tool Properties (View -> Tool Properties) and set 2D Nav Goal to "move_base_simple". Originally posted by michikarg with karma: 2108 on 2011-12-15 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by tom on 2011-12-15: OK, thanks a lot, seems to have been it. So it's my setup, I have to read a bit more about using gmapping with move_base and rviz, I guess. Comment by chbloca on 2019-04-09: Have you found a solution for this?
{ "domain": "robotics.stackexchange", "id": 7650, "tags": "navigation, move-base" }
Extension on Python Lambda capability
Question: I asked for assistance creating a Python Lambda function capable of assignments and multi-line lambdas in this post. Following @l0b0's suggestions, I realized that it was better constructed as a class. There is a security risk associated with eval or exec and input strings as well as an increased difficulty debugging code but I wanted a more capable lambda for functional experiments. This is probably unsuitable for production code. Print will provide a code string that can be executed if you add the arguments. Any suggestions for further improvements would be appreciated. class lambda_(): """ Creates an executable anonymous function supporting multiple line and assignments. For one line the form is λ("parameters : code incl assignment")(*arguments) and the first exepression is assigned to rtrn which is returned For multiline, use triple quotes with the parameters on the first line, the : is followed directly by a line feed and then the code Input: code in the form "x, y,...: return_value_expression; other code" Output is assigned to variable rtrn """ def __init__(self, code): """ self.parameters parses parameters (preceding the colon) self.code is the code after the colon with extra leading spaces removed from multiline lambdas """ self.parameters = [] val_idx = 0 val_name = [] i = 0 if ":" in code: for i, c in enumerate(code): # loop until the : that ends parameters if c in ",:": if val_name: self.parameters.append(''.join(val_name)) val_name = [] val_idx += 1 if c == ":": break elif c != " ": val_name.append(c) # self.code sets rtrn = None for multiline or sets rtrn = first expression for # single line unless :: (used if first expression can't be evaluated) if code[i+1] != "\n": # single line, if code[i+1] != ":": if code[i] != ":": self.code = ''.join(("rtrn = ", code)) else: self.code = ''.join(("rtrn = ", code[i+1:])) else: self.code = ''.join(("rtrn = None; ", code[i+2:])) else: # multiline # wont run if excess leading spaces so remove them lead = 0 
code_ = code[i+1:] while code_[lead+1] == " ": # how many on 1st line? lead += 1 for i in range(1, len(code_)): # remove that on all lines if code_[i-1: i+1] == "\n ": code_ = code_[: i] + code_[i+lead:] self.code = "rtrn = None" + "\n" + code_ # executable code def __call__(self, *args): """ All arguments are declared global as listcomps etc don't create closure when called using exec, so first declare parameters & rtrn as globals, then collect parameter=value pairs in assignments. Multiline will return None unless you assign rtrn a value. """ define_globals = ', '.join(["global rtrn"] + self.parameters) assign = '; '.join([' = '.join((p, str(args[i]))) for i, p in enumerate(self.parameters)]) exec('; '.join((define_globals, assign if assign else "pass", self.code))) return rtrn def __str__(self): args = ["??" for _ in self.parameters] define_globals = ', '.join(["global rtrn"] + self.parameters) assign = '; '.join([' = '.join((p, str(args[i]))) for i, p in enumerate(self.parameters)]) return '; '.join((define_globals, assign if assign else "pass", self.code)) λ = lambda_ # rebind to linux ctrl+shift+u 03BB Examples: print( λ("""var: for i in range(var): print(i, end = ",") print(' ', end='') if i == var - 1: print(var) print(var) rtrn = [i for i in range(var, 0, -1)] """)(5) ) print(λ("x: [x*i for i in range(x)]; print('Final:', rtrn[-1])")(3)) print(λ("x, y:: from math import cos, sin; rtrn = cos(x) + sin(y)")(3, 5)) def applyToEach(L, f): for i in range(len(L)): L[i] = f(L[i]) testList = [1, -4, 8, -9] apply_to_each(testList, λ("x: x if x >= 0 else -x")) print(testList, "\n") multiline = λ("""var: for i in range(var): print(i, end = ",") print(' ', end='') if i == var - 1: print(var) print(var) rtrn = [i for i in range(var, 0, -1)] """) print("Multiline lambda") ; print(multiline) p_lambda = λ('print("value")') print("parameterless lambda") ; print(p_lambda) Answer: I highly recommend that you adopt some sort of testing framework for your tests instead of 
having a standalone script. There are quite a few options out there. For instance, pytest. I would read up on how to write tests for pytest and in particular have a look at how to capture IO, since a lot of your tests depend on it. I would also recommend you give more descriptive names to your tests: instead of, for example, test_print_1 and test_print_2, I would try to explain what the particular test does. So in the case of test_print_1 and test_print_2, what particular case is each one trying to break? Finally, you should look at the PEP 8 Style Guide: functions should have snake_case instead of camelCase. So, for example, applyToEach becomes apply_to_each. Also, classes typically use the CapWords convention; however, because you are trying to emulate the lambda keyword, using lambda_ is probably a reasonable choice.
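On capturing IO: even without pytest's capsys fixture, the standard library can divert stdout, which is what makes print-heavy code like this testable. A minimal sketch with a toy function standing in for one of the print-heavy lambda_ examples:

```python
import io
from contextlib import redirect_stdout

def print_countdown(n):
    """Toy stand-in for one of the print-heavy lambda_ examples."""
    for i in range(n, 0, -1):
        print(i, end=",")

buf = io.StringIO()
with redirect_stdout(buf):      # divert print output into the buffer
    print_countdown(3)
captured = buf.getvalue()
print(repr(captured))           # '3,2,1,' -- now assertable in a test
```

Under pytest, the capsys fixture does the same job without the explicit buffer management.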
{ "domain": "codereview.stackexchange", "id": 30696, "tags": "python, python-3.x, lambda" }
Electrochemistry (Spontaneous Reactions)
Question: Determine which of the following pairs of reactants will result in a spontaneous reaction at 25 °C? Pb2+(aq) + Cu(s) Ag+(aq) + Br-(aq) Li+(aq) + Cr(s) Fe3+(aq) + Cd(s) None of the Above. I've tried using tables like this http://ch302.cm.utexas.edu/images302/Electrochemistry_Reduction_Potentials.jpg to use the equation Ecell = Ecath - Eanode. I'm hoping anyone could answer the question, and explain why each of them could be, or could not be, spontaneous. (I know that an Ecell being positive results in spontaneity). Here's my work: Pb2+(aq) + Cu(s) .34-(-.13) = .47 Ag+(aq) + Br-(aq) .80 - (.76) = .04 Li+(aq) + Cr(s) -.90-(-3.05) = 2.15 Fe3+(aq) + Cd(s) -.04 - (-.81) = .77 None of the Above. As you can see, they're all spontaneous. I'm wondering if I grabbed the wrong values from the above table. The question asks about ONE spontaneous reaction; from my calculations, they're all spontaneous. @MichaelD.M.Dryden
If you're having trouble keeping the order straight, it may help to think of it this way: $$E_{\mathrm{cell}}=E_{\mathrm{reduction}} + E_{\mathrm{oxidation}}$$ where $E_{\mathrm{oxidation}}=-E_{\mathrm{reduction}}$. It's the same equation, but instead of keeping track of anode and cathode, the half-cell where reduction is happening will match what the table says and you use the potential as-is (since it's a table of reduction potentials), and the half-cell where oxidation is happening will be backwards from that in the table, so you negate the potential.
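To make that bookkeeping concrete, here is a small Python sketch of the corrected calculation. The reduction potentials are typical textbook values and are an assumption here — check them against your own table (in particular Fe3+/Fe = -0.04 V as in the question's table, Cd2+/Cd = -0.40 V, Br2/Br- = +1.07 V, Cr3+/Cr = -0.74 V, Li+/Li = -3.04 V):

```python
# Standard reduction potentials in volts at 25 C (assumed textbook values).
E_RED = {
    "Pb2+/Pb": -0.13, "Cu2+/Cu": +0.34,
    "Ag+/Ag":  +0.80, "Br2/Br-": +1.07,
    "Li+/Li":  -3.04, "Cr3+/Cr": -0.74,
    "Fe3+/Fe": -0.04, "Cd2+/Cd": -0.40,
}

def e_cell(cathode, anode):
    # E_cell = E_red(cathode) - E_red(anode); reduction happens at the
    # cathode, so its potential is used as-is and the anode's is subtracted.
    return E_RED[cathode] - E_RED[anode]

# The species being reduced is the cathode half-cell in each pair:
cells = {
    "Pb2+ + Cu": e_cell("Pb2+/Pb", "Cu2+/Cu"),
    "Ag+ + Br-": e_cell("Ag+/Ag", "Br2/Br-"),
    "Li+ + Cr":  e_cell("Li+/Li", "Cr3+/Cr"),
    "Fe3+ + Cd": e_cell("Fe3+/Fe", "Cd2+/Cd"),
}

for name, e in cells.items():
    print(f"{name:>10}: E_cell = {e:+.2f} V -> spontaneous: {e > 0}")
```

With these assumed values only the Fe3+/Cd pair comes out positive (+0.36 V), consistent with the hints above.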
{ "domain": "chemistry.stackexchange", "id": 2309, "tags": "physical-chemistry, everyday-chemistry, electrochemistry" }
Impulse Response to Frequency Response in Octave
Question: Having an impulse response of an audio system recorded as a wav file, how to calculate the frequency response of the system with Octave? Answer: I recommend using freqz in Octave as this computes samples of the DTFT (Discrete Time Fourier Transform) instead of the DFT. The DTFT is a continuous function of frequency, which is more likely what you would want to see if you are looking for the frequency response (freqz([time domain vector])). To see this clearly, consider the simplest FIR filter specified by the impulse response [1 1]. This is a two-tap FIR filter with unity-gain coefficients, and the frequency response is a continuous function given by the following equation, describing the expected low-pass filter result: $$F(\omega) = 1 + e^{-j\omega}$$ where $\omega$ is the normalized radian frequency, with the sampling rate $f_s$ corresponding to $2\pi$. This result is the DTFT of the sampled impulse response, not the DFT (which the FFT computes). The FFT (fft([1 1])) would return just two samples of this frequency response, but freqz would provide 512 samples (the default) of the true frequency response as described in the equation above. You could also simply do fft([1 1], 512) to zero-pad the FFT, as this will also return samples of the DTFT (512 in this case to match the default number of samples used in freqz). Result for freqz([1 1]): Note that the frequency resolution of your answer will be 1/T where T is the length of the audio file. Adding zeros to the time domain sequence does NOT increase frequency resolution. For more detailed explanations of the difference between the DTFT and DFT please see: For 2D signals can it be said that the frequency response is the same as the Fourier transform?
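As a sanity check of the zero-padding point (sketched in Python here rather than Octave, purely as an illustration), the 512-point DFT of the zero-padded impulse response [1 1] lands exactly on the closed-form DTFT $F(\omega) = 1 + e^{-j\omega}$:

```python
import cmath
import math

h = [1, 1]   # two-tap FIR impulse response from the example above
N = 512      # zero-pad length, matching freqz's default grid

def dft_bin(x, k, n_points):
    # k-th bin of the n_points-point DFT; the implicit trailing zeros
    # of the zero-padded signal contribute nothing to the sum.
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_points)
               for n in range(len(x)))

for k in range(N):
    w = 2 * math.pi * k / N            # normalized radian frequency
    X = dft_bin(h, k, N)               # zero-padded DFT sample
    F = 1 + cmath.exp(-1j * w)         # closed-form DTFT F(w)
    assert abs(X - F) < 1e-9           # identical at every grid point

# |F(w)| = 2|cos(w/2)|: 2 at DC, 0 at w = pi -- the expected low-pass shape.
```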
{ "domain": "dsp.stackexchange", "id": 4839, "tags": "fft, audio, frequency-domain, octave" }
Why do we need complex representations in Grand Unified Theories?
Question: EDIT4: I think I was now able to track down where this dogma originally came from. Howard Georgi wrote in TOWARDS A GRAND UNIFIED THEORY OF FLAVOR There is a deeper reason to require the fermion representation to be complex with respect to SU(3) × SU(2) × U(1). I am assuming that the grand unifying symmetry is broken all the way down to SU(3) × SU(2) × U(1) at a momentum scale of $10^{15}$ GeV. I would therefore expect any subset of the LH fermion representation which is real with respect to SU(3) × SU(2) × U(1) to get a mass of the order of $10^{15}$ GeV from the interactions which cause the spontaneous breakdown. As a trivial example of this, consider an SU(5) theory in which the LH fermions are a 10, a 5 and two $\bar 5$'s. In this theory there will be SU(3) × SU(2) × U(1) invariant mass terms connecting the 5 to some linear combination of the two $\bar 5$'s. These ten (chiral) states will therefore correspond to 5 four-component fermions with masses of order $10^{15}$ GeV. The 10 and the orthogonal linear combination of the two $\bar 5$'s will be left over as ordinary mass particles because they carry chiral SU(2) × U(1). Unfortunately I'm not able to put this argument in mathematical terms. What exactly does the new, invariant mass term combining the $5$ and the $\bar 5$ look like? EDIT3: My current experience with this topic is summarized in chapter 5.1 of this thesis: Furthermore the group should have complex representations necessary to accommodate the SU(3) complex triplet and the complex doublet fermion representation. [...] the next five do not have complex representations, and so, are ruled out as candidates for the GUT group. [...] It should be pointed out that it is possible to construct GUTs with fermions in the real representation provided we allow extra mirror fermions in the theory. What? Groups without complex representations are ruled out. 
And a few sentences later everything seems okay with such groups, as long as we allow some extra particles called mirror fermions. In almost every document about GUTs it is claimed that we need complex representations (= chiral representations) in order to be able to reproduce the standard model. Unfortunately almost everyone seems to have a different reason for this and none seems fully satisfactory to me. For example: Witten says: Of the five exceptional Lie groups, four ($G_2$, $F_4$, $E_7$, and $E_8$) only have real or pseudoreal representations. A four-dimensional GUT model based on such a group will not give the observed chiral structure of weak interactions. The one exceptional group that does have complex or chiral representations is $E_6$. This author writes: Since they do not have complex representations. That we must have complex representations for fermions, because in the S.M. the fermions are not equivalent to their complex conjugates. Another author writes: Secondly, the representations must allow for the correct reproduction of the particle content of the observed fermion spectrum, at least for one generation of fermions. This requirement implies that $G_{gut}$ must possess complex representations as well as it must be free from anomalies in order not to spoil the renormalizability of the grand unified theory by an incompatibility of regularization and gauge invariance. The requirement of complex fermion representations is based on the fact that embedding the known fermions in real representations leads to difficulties: Mirror fermions must be added which must be very heavy. But then the conventional fermions would in general get masses of order $M_{gut}$. Hence all light fermions should be components of a complex representation of $G_{gut}$. And Lubos has an answer that does not make any sense to me: However, there is a key condition here. 
The groups must admit complex representations - representations in which the generic elements of the group cannot be written as real matrices. Why? It's because the 2-component spinors of the Lorentz group are a complex representation, too. If we tensor-multiply it by a real representation of the Yang-Mills group, we would still obtain a complex representation but the number of its components would be doubled. Because of the real factor, such multiplets would always automatically include the left-handed and right-handed fermions with the same Yang-Mills charges! So... what is the problem with real representations? Unobserved mirror fermions? The difference of particles and antiparticles? Or the chiral structure of the standard model? EDIT: I just learned that there are serious GUT models that use groups that do not have complex representations. For example, this review by Langacker mentions several models based on $E_8$. This confuses me even more. On the one hand, almost everyone seems to agree that we need complex representations and on the other hand there are models that work with real representations. If there is a really good reason why we need complex representations, wouldn't an expert like Langacker regard models that start with some real representation as nonsense? EDIT2: Here Stech presents another argument: The groups $E_7$ and $E_8$ also give rise to vector-like models with $\sin^2 \theta = 3/4$. The mathematical reason is that these groups have, like $G_2$ and $F_4$, only real (pseudoreal) representations. The only exceptional group with complex... [...] Since $E_7$ and $E_8$ give rise to vector-like theories, as was mentioned above, at least half of the corresponding states must be removed or shifted to very high energies by some unknown mechanism. Answer: Charge conjugation is extremely slippery because there are two different versions of it; there have been many questions on this site mixing them up (1, 2, 3, 4, 5, 6, 7, 8, 9), several asked by myself a few years ago. 
In particular there are a couple arguments in comments above where people are talking past each other for precisely this reason. I believe the current answer falls into one of the common misconceptions. I'll give as explicit of an example as possible, attempting to make a 'Rosetta stone' for issues about chirality, helicity, and $\hat{C}$. Other discrete symmetries are addressed here. A hypercharge example For simplicity, let's consider hypercharge in the Standard Model, and only look at the neutrino, which we suppose has a sterile partner. For a given momentum there are four neutrino states: $$|\nu, +\rangle \text{ has positive helicity and hypercharge } Y=0$$ $$|\nu, -\rangle \text{ has negative helicity and hypercharge } Y=-1/2$$ $$|\bar{\nu}, +\rangle \text{ has positive helicity and hypercharge } Y=1/2$$ $$|\bar{\nu}, -\rangle \text{ has negative helicity and hypercharge } Y=0$$ There are two neutrino fields: $$\nu_L \text{ is left chiral, has hypercharge } -1/2, \text{annihilates } |\nu, -\rangle \text{ and creates } |\bar{\nu}, + \rangle$$ $$\nu_R \text{ is right chiral, has hypercharge } 0, \text{annihilates } |\nu, +\rangle \text{ and creates } |\bar{\nu}, - \rangle$$ The logic here is the following: suppose a classical field transforms under a representation $R$ of an internal symmetry group. Then upon quantization, it will annihilate particles transforming under $R$ and create particles transforming under the conjugate representation $R^*$. The spacetime symmetries are more complicated because particles transform under the Poincare group and hence have helicity, while fields transform under the Lorentz group and hence have chirality. In general, a quantized right-chiral field annihilates a positive-helicity particle. Sometimes, the two notions "right-chiral" and "positive-helicity" are both called "right-handed", so a right-handed field annihilates a right-handed particle. I'll avoid this terminology to avoid mixing up chirality and helicity. 
Two definitions of charge conjugation Note that both the particle states and the fields transform in representations of $U(1)_Y$. So there are two distinct notions of charge conjugation, one which acts on particles, and one which acts on fields. Acting on particles, there is a charge conjugation operator $\hat{C} $ satisfying $$\hat{C} |\nu, \pm \rangle = |\bar{\nu}, \pm \rangle.$$ This operator keeps all spacetime quantum numbers the same; it does not change the spin or the momentum and hence doesn't change the helicity. It is important to note that particle charge conjugation does not always conjugate internal quantum numbers, as one can see in this simple example. This is only true when $\hat{C}$ is a symmetry of the theory, $[\hat{C}, \hat{H}] = 0$. Furthermore, if we didn't have the sterile partner, we would have only the degrees of freedom created or destroyed by the $\nu_L$ field, and there would be no way of defining $\hat{C}$ consistent with the definition above. In other words, particle charge conjugation is not always even defined, though it is with the sterile partner. There is another notion of charge conjugation, which on classical fields is simply complex conjugation, $\nu_L \to \nu_L^*$. By the definition of a conjugate representation, this conjugates all of the representations the field transforms under, i.e. it flips $Y$ to $-Y$ and flips the chirality. This is true whether the theory is $\hat{C}$-symmetric or not. For convenience we usually define $$\nu_L^c = C \nu_L^*$$ where $C$ is a matrix which just puts the components of $\nu_L^*$ into the standard order, purely for convenience. (Sometimes this matrix is called charge conjugation as well.) 
In any case, this means $\nu_L^c$ is right-chiral and has hypercharge $1/2$, so $$\nu_L^c \text{ is right chiral, has hypercharge } 1/2, \text{annihilates } |\bar{\nu}, +\rangle \text{ and creates } |\nu, - \rangle.$$ The importance of this result is that charge conjugation of fields does not give additional particles. It only swaps what the field creates and what it annihilates. This is why, for instance, a Majorana particle theory can have a Lagrangian written in terms of left-chiral fields, or in terms of right-chiral fields. Both give the same particles; it is just a trivial change of notation. (For completeness, we note that there's also a third possible definition of charge conjugation: you could modify the particle charge conjugation above, imposing the additional demand that all internal quantum numbers be flipped. Indeed, many quantum field theory courses start with a definition like this. But this stringent definition of particle charge conjugation means that it cannot be defined even with a sterile neutrino, which means that the rest of the discussion below is moot. This is a common issue with symmetries: often the intuitive properties you want just can't be simultaneously satisfied. Your choices are either to just give up defining the symmetry or give up on some of the properties.) Inconsistencies between the definitions The existing answer has mixed up these two notions of charge conjugation, because it assumes that charge conjugation gives new particles (true only for particle charge conjugation) while reversing all quantum numbers (true only for field charge conjugation). If you consistently use one or the other, the argument doesn't work. A confusing point is that the particle $\hat{C}$ operator, in words, simply maps particles to antiparticles. If you think antimatter is defined by having the opposite (internal) quantum numbers to matter, then $\hat{C}$ must reverse these quantum numbers. 
However, this naive definition only works for $\hat{C}$-symmetric theories, and we're explicitly dealing with theories that aren't $\hat{C}$-symmetric. One way of thinking about the difference is that, in terms of the representation content alone, and for a $\hat{C}$-symmetric theory only, the particle charge conjugation is the same as field charge conjugation followed by a parity transformation. This leads to a lot of disputes where people say "no, your $\hat{C}$ has an extra parity transformation in it!" For completeness, note that one can define both these notions of charge conjugation in first quantization, where we think of the field as a wavefunction for a single particle. This causes a great deal of confusion because it makes people mix up particle and field notions, when they should be strongly conceptually separated. There is also a confusing sign issue because some of these first-quantized solutions correspond to holes in second quantization, reversing most quantum numbers (see my answer here for more details). In general I don't think one should speak of the "chirality of a particle" or the "helicity of a field" at all; the first-quantized picture is worse than useless. Why two definitions? Now one might wonder why we want two different notions of charge conjugation. Charge conjugation on particles only turns particles into antiparticles. This is sensible because we don't want to change what's going on in spacetime; we just turn particles into antiparticles while keeping them moving the same way. On the other hand, charge conjugation on fields conjugates all representations, including the Lorentz representation. Why is this useful? When we work with fields we typically want to write a Lagrangian, and Lagrangians must be scalars under Lorentz transformations, $U(1)_Y$ transformations, and absolutely everything else. Thus it's useful to conjugate everything because, e.g. 
we know for sure that $\nu_L^c \nu_L$ could be an acceptable Lagrangian term, as long as we contract all the implicit indices appropriately. This is, of course, the Majorana mass term. Answering the question Now let me answer the actual question. By the Coleman-Mandula theorem, internal and spacetime symmetries are independent. In particular, when we talk about, say, a set of fields transforming as a $10$ in the $SU(5)$ GUT, these fields must all have the same Lorentz transformation properties. Thus it is customary to write all matter fields in terms of left-chiral Weyl spinors. As stated above, this does nothing to the particles, it's just a useful way to organize the fields. Therefore, we should build our GUT using fields like $\nu_L$ and $\nu_R^c$ where $$\nu_R^c \text{ is left chiral, has hypercharge } 0.$$ What would it have looked like if our theory were not chiral? Then $|\nu, + \rangle$ should have the same hypercharge as $|\nu, -\rangle$, which implies that $\nu_R$ should have hypercharge $-1/2$ like $\nu_L$. Then our ingredients would be $$\nu_L \text{ has hypercharge } -1/2, \quad \nu_R^c \text{ has hypercharge } 1/2.$$ In particular, note that the hypercharges come in an opposite pair. Now let's suppose that our matter fields form a real representation $R$ of the GUT gauge group $G$. Spontaneous symmetry breaking takes place, reducing the gauge group to that of the Standard Model $G'$. Hence the representation $R$ decomposes, $$R = R_1 \oplus R_2 \oplus \ldots$$ where the $R_i$ are representations of $G'$. Since $R$ is real, if $R_i$ is present in the decomposition, then its conjugate $R_i^*$ must also be present. That's the crucial step. Specifically, for every left-chiral field with hypercharge $Y$, there is another left-chiral field with hypercharge $-Y$, which is equivalent to a right-chiral field with hypercharge $Y$. Thus left-chiral and right-chiral fields come in pairs, with the exact same transformations under $G'$. 
Equivalently, every particle has an opposite-helicity partner with the same transformation under $G'$. That is what we mean when we say the theory is not chiral. To fix this, we can hypothesize all of the unwanted "mirror fermions" are very heavy. As stated in the other answer, there's no reason for this to be the case. If it were, we run into a naturalness problem just as for the Higgs: since there is nothing distinguishing fermions from mirror fermions, from the standpoint of symmetries, there is nothing preventing matter from acquiring the same huge mass. This is regarded as very strong evidence against such theories; some say that for this reason, theories with mirror fermions are outright ruled out. For example, the $E_8$ theory heavily promoted in the press has exactly this problem; the theory can't be chiral.
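As a concrete (and necessarily convention-dependent) sketch of the Georgi quote in the question's EDIT4: with all fermions written as left-chiral two-component Weyl spinors, the SU(3) × SU(2) × U(1)-invariant — in fact SU(5)-invariant — mass term pairing the $5$ with a linear combination of the two $\bar 5$'s reads $$\mathcal{L}_{\rm mass} = M_i\,\epsilon^{\alpha\beta}\,(\psi_{5})_{\alpha}^{\;a}\,(\psi_{\bar 5_i})_{\beta\,a} + \text{h.c.}, \qquad i = 1,2,$$ where $\alpha,\beta$ are Lorentz (Weyl) indices contracted with the invariant $\epsilon$, and $a = 1,\dots,5$ is the SU(5) index contracted between the fundamental and antifundamental. Because this is a gauge singlet, no symmetry forbids $M_i \sim 10^{15}\,\mathrm{GeV}$; the combination $M_i \psi_{\bar 5_i}$ becomes heavy together with the $5$, while the $10$ and the orthogonal combination of the $\bar 5$'s stay light because their masses are protected by chiral SU(2) × U(1), exactly as in the quote.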
{ "domain": "physics.stackexchange", "id": 94197, "tags": "standard-model, group-theory, lie-algebra, beyond-the-standard-model, grand-unification" }
Is an electron attracted to one of the magnetic poles in this scenario?
Question: Do magnets attract electrons? I don't think so, but maybe in certain cases they can be? I guess it would depend on what direction the velocity and magnetic field is in so that the force acting on the charge is towards one of the poles, but I haven't tried that yet. If an electron is shot between a north pole and a south pole, is it attracted to either of them? Well, the resulting force would either be into or out of the page (right?) so it wouldn't be attracted to either. Can anyone confirm/explain this if I'm wrong? Answer: The magnetic part of Lorentz' force law, which describes the force electric and magnetic fields exert on a point charge, is given by $$\vec{F}=q\,\vec{v}\times\vec{B},$$ where $q$ is charge, $\vec{v}$ is the velocity of the moving particle and $\vec{B}$ is the magnetic field. As you can see, the particle has to move in order to be affected by the latter. Furthermore, we can see that the force is given by a cross product between velocity and magnetic field. This means that the resulting force points in a perpendicular direction with respect to the plane spanned by the vectors which are multiplied. Applying this logic to your example of an electron and two poles, the answer is that it will not be directed towards either of the two, as you have correctly assumed.
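A minimal numerical sketch of the scenario in the question (the specific vectors and units are illustrative assumptions): put $\vec{B}$ along the axis joining the poles and shoot the electron perpendicular to it; the resulting force has no component along $\vec{B}$, i.e. no pull toward either pole.

```python
q = -1.0  # electron charge in units of e (illustrative)

def cross(a, b):
    # standard 3D cross product
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = (1.0, 0.0, 0.0)                   # electron shot along +x
B = (0.0, 1.0, 0.0)                   # field from north to south pole, along +y
F = tuple(q * c for c in cross(v, B)) # Lorentz force q v x B, along -z

assert dot(F, B) == 0.0  # no component toward either pole
assert dot(F, v) == 0.0  # perpendicular to v too: magnetic force does no work
```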
{ "domain": "physics.stackexchange", "id": 13468, "tags": "electromagnetism, magnetic-fields" }
Arm navigation with a virtual robot
Question: I have read the tutorial on the new arm_navigation architecture, and I am really excited to try it out. As most tutorials have not been updated yet, do you have any code example that shows how to plan a movement using a robot position which is different from the current one? Let's say the robot is at (x,y,th), and I want to see if I can reach a place by moving the robot to (x', y', th'). How can I do it using the planning scene? Originally posted by Lorenzo Riano on ROS Answers with karma: 1342 on 2011-10-13 Post score: 1 Answer: Lorenzo, It should be pretty easy. Basically, you are going to put a different transform between the world frame (in this case 'odom_combined') and the frame associated with the base of the robot ('base_footprint'). If operating with a running system you'll take the current state, apply whatever multi-dof and joint states are in the planning_scene_diff, and push the result to all components, who will then respond accordingly. You'll want to interact with the components directly instead of using MoveArm, as you probably don't want to actually act based on state that's not the current one, though you are given the option. 
Here's some code for generating IK solutions given another base configuration for your robot:

#include <ros/ros.h>
#include <kinematics_msgs/GetKinematicSolverInfo.h>
#include <kinematics_msgs/GetConstraintAwarePositionIK.h>
#include <arm_navigation_msgs/SetPlanningSceneDiff.h>

static const std::string ARM_COLLISION_IK_NAME = "/pr2_right_arm_kinematics/get_constraint_aware_ik";
static const std::string ARM_QUERY_NAME = "/pr2_right_arm_kinematics/get_ik_solver_info";
static const std::string SET_PLANNING_SCENE_DIFF_NAME = "/environment_server/set_planning_scene_diff";

int main(int argc, char **argv){
  ros::init(argc, argv, "get_fk");
  ros::NodeHandle rh;

  ros::service::waitForService(ARM_QUERY_NAME);
  ros::service::waitForService(ARM_COLLISION_IK_NAME);
  ros::service::waitForService(SET_PLANNING_SCENE_DIFF_NAME);

  ros::ServiceClient set_planning_scene_diff_client =
    rh.serviceClient<arm_navigation_msgs::SetPlanningSceneDiff>(SET_PLANNING_SCENE_DIFF_NAME);
  ros::ServiceClient query_client =
    rh.serviceClient<kinematics_msgs::GetKinematicSolverInfo>(ARM_QUERY_NAME);
  ros::ServiceClient ik_with_collision_client =
    rh.serviceClient<kinematics_msgs::GetConstraintAwarePositionIK>(ARM_COLLISION_IK_NAME);

  arm_navigation_msgs::SetPlanningSceneDiff::Request planning_scene_req;
  arm_navigation_msgs::SetPlanningSceneDiff::Response planning_scene_res;

  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.stamp = ros::Time::now();
  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.joint_names.push_back("world_joint");
  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.frame_ids.push_back("odom_combined");
  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.child_frame_ids.push_back("base_footprint");
  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.poses.resize(1);
  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.poses[0].position.x = 4.0;
  planning_scene_req.planning_scene_diff.robot_state.multi_dof_joint_state.poses[0].orientation.w = 1.0;

  if(!set_planning_scene_diff_client.call(planning_scene_req, planning_scene_res)) {
    ROS_WARN("Can't get planning scene");
    return -1;
  }

  // define the service messages
  kinematics_msgs::GetKinematicSolverInfo::Request request;
  kinematics_msgs::GetKinematicSolverInfo::Response response;

  if(query_client.call(request, response)) {
    for(unsigned int i=0; i < response.kinematic_solver_info.joint_names.size(); i++) {
      ROS_DEBUG("Joint: %d %s", i, response.kinematic_solver_info.joint_names[i].c_str());
    }
  } else {
    ROS_ERROR("Could not call query service");
    ros::shutdown();
    exit(-1);
  }

  // define the service messages
  kinematics_msgs::GetConstraintAwarePositionIK::Request gpik_req;
  kinematics_msgs::GetConstraintAwarePositionIK::Response gpik_res;

  gpik_req.timeout = ros::Duration(5.0);
  gpik_req.ik_request.ik_link_name = "r_wrist_roll_link";
  gpik_req.ik_request.pose_stamped.header.frame_id = "odom_combined";
  gpik_req.ik_request.pose_stamped.pose.position.x = 4.75;
  gpik_req.ik_request.pose_stamped.pose.position.y = -0.188;
  gpik_req.ik_request.pose_stamped.pose.position.z = .94;
  gpik_req.ik_request.pose_stamped.pose.orientation.x = 0.0;
  gpik_req.ik_request.pose_stamped.pose.orientation.y = 0.0;
  gpik_req.ik_request.pose_stamped.pose.orientation.z = 0.0;
  gpik_req.ik_request.pose_stamped.pose.orientation.w = 1.0;
  gpik_req.ik_request.ik_seed_state.joint_state.position.resize(response.kinematic_solver_info.joint_names.size());
  gpik_req.ik_request.ik_seed_state.joint_state.name = response.kinematic_solver_info.joint_names;
  for(unsigned int i=0; i < response.kinematic_solver_info.joint_names.size(); i++) {
    gpik_req.ik_request.ik_seed_state.joint_state.position[i] =
      (response.kinematic_solver_info.limits[i].min_position +
       response.kinematic_solver_info.limits[i].max_position)/2.0;
  }

  if(ik_with_collision_client.call(gpik_req, gpik_res)) {
    if(gpik_res.error_code.val == gpik_res.error_code.SUCCESS) {
      for(unsigned int i=0; i < gpik_res.solution.joint_state.name.size(); i++) {
        ROS_INFO("Joint: %s %f", gpik_res.solution.joint_state.name[i].c_str(), gpik_res.solution.joint_state.position[i]);
      }
    } else {
      ROS_ERROR("Inverse kinematics failed");
    }
  } else {
    ROS_ERROR("Inverse kinematics service call failed");
  }
  ros::shutdown();
}

Originally posted by egiljones with karma: 2031 on 2011-10-19 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 6960, "tags": "ros, planning-scene, arm-navigation" }
When should you disable all interrupts in a program?
Question: Consider a machine that has nested interrupts (a higher priority interrupt can interrupt a lower priority one; the current instruction's address is saved and later restored). Why would a programmer want to disable all interrupts using instructions like x86's CLI, when you know that even if your code gets interrupted it will always return to what it was doing? To me it looks like disabling interrupts is bad for performance, because a very high-priority and thus very important interrupt has to be dealt with as quickly as possible, and you are stalling that by disabling interrupts. Answer: To implement an atomic routine, like a semaphore, for example. When a thread calls a routine to gain access to some critical section, the routine needs to change the values of the semaphore's variables, and that change must be atomic. It cannot be interrupted, otherwise another thread or process could use or change values that are supposed to be protected by the critical section. You can find more info in Andrew Tanenbaum's or William Stallings' books.
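A user-space sketch of that idea in Python (all names are illustrative; the mutex inside the Condition plays the role that disabling interrupts plays in a uniprocessor kernel — it makes the check-and-update of the count one uninterruptible step):

```python
import threading

class Semaphore:
    def __init__(self, count):
        self._count = count
        self._cond = threading.Condition()

    def wait(self):           # "P" / down: may block
        with self._cond:      # kernel analogue: disable interrupts
            while self._count == 0:
                self._cond.wait()
            self._count -= 1  # nothing can interleave with the check above
                              # kernel analogue: re-enable interrupts

    def signal(self):         # "V" / up
        with self._cond:
            self._count += 1
            self._cond.notify()

# Used as a mutex (count = 1) around a read-modify-write critical section:
sem, total = Semaphore(1), [0]

def worker():
    for _ in range(10_000):
        sem.wait()
        total[0] += 1         # the protected critical section
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert total[0] == 40_000     # no updates were lost
```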
{ "domain": "cs.stackexchange", "id": 3921, "tags": "operating-systems, concurrency" }
How are annihilation/creation operators used to reach an external state of $|0 \rangle$ in an $S$-matrix?
Question: I'm trying to understand how to compute the $S$-matrix element for $\phi \phi \to \phi \phi$ in Peskin and Schroeder, Ch. 4.6. I'm led to believe that, in $\phi^4$ theory, $$ S = \langle p_3 p_4 | N (-\frac{i\lambda}{4!})\int d^4 x \phi^4 (x)| p_1, p_2\rangle, \tag{4.92}$$ It is then stated: "Since the external states are $|0\rangle$ (...) we can use an annihilation operator from $\phi(x)$ to annihilate an initial-state particle, or a creation operator from $\phi(x)$ to produce a final-state particle. For example:" $$\phi(x)|p\rangle = e^{-ip\cdot x}| 0 \rangle , \hspace{5mm}\langle p| \phi(x) = \langle 0| e^{ip\cdot x}\tag{4.94}$$ Intuitively, to simplify $(4.92)$ and reach an external state of $|0 \rangle$, I've substituted each of the 4 $\phi(x)$'s according to $(4.94)$, but I'm not certain of this, as I have two initial and two final states instead of the single state presented in $(4.94)$. Another reason why I believe my answer is incorrect is the need for commutations in the $\phi^4$ case, as shown in (4.95) in the link above. My answer would therefore be: $$ S = \langle 0| N (-\frac{i\lambda}{4!})\int d^4 x\ e^{i(p_3 +p_4)\cdot x} e^{-i(p_1 +p_2)\cdot x}| 0\rangle$$ Is my take on this correct? Answer: To be 100% correct, you should really use the LSZ formula to connect asymptotic in/out states to $n$-point vacuum Greens functions of time-ordered fully interacting operators. You should then apply the Gell-Mann--Low theorem to connect the $n$-point vacuum Greens function of the fully interacting fields to a ratio of time-ordered vacuum Greens functions of the interaction picture fields. Finally, one expands the result in a Dyson series and evaluates using Wick contractions. As a simple hack that'll work to leading order, note that at leading order $|\vec p\rangle_{in} = |\vec p\rangle = a^\dagger(\vec p)|0\rangle$ and similarly for ${}_{out}\langle \vec q|$. 
Then simply apply the commutation relations for raising and lowering operators; depending on your conventions, something like $[a(\vec p),a^\dagger(\vec q)] = (2\pi)^m2E_{\vec p}\delta^{m}(\vec p - \vec q)$ for a field theory in $m+1$ spacetime dimensions. Remember to use the facts that $a(\vec p)|0\rangle=0=\langle0|a^\dagger(\vec q)$.
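Spelled out for the leading-order hack, in 3+1 dimensions with the convention above ($m = 3$): using $a(\vec p)|0\rangle = 0$ and moving the annihilation operator to the right, $$a(\vec k)\, a^\dagger(\vec p_1)\, a^\dagger(\vec p_2)|0\rangle = (2\pi)^3\, 2E_{\vec p_1}\,\delta^3(\vec k - \vec p_1)\, a^\dagger(\vec p_2)|0\rangle + (2\pi)^3\, 2E_{\vec p_2}\,\delta^3(\vec k - \vec p_2)\, a^\dagger(\vec p_1)|0\rangle .$$ Doing this for the annihilation/creation parts of all four normal-ordered fields, the $4!$ ways of attaching the fields to the four external momenta cancel the $1/4!$, and the $\int d^4x$ over the leftover exponentials produces $(2\pi)^4\,\delta^4(p_1 + p_2 - p_3 - p_4)$, leaving $i\mathcal{M} = -i\lambda$ at leading order.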
{ "domain": "physics.stackexchange", "id": 86472, "tags": "quantum-field-theory, hilbert-space, interactions, s-matrix-theory" }
Free-Fall time of a collapsing Star (Spherical Symmetry/No Rotation/Classical Mechanics)
Question: I have been trying to prove the free-fall time $(\tau_\text{ff})$ of a collapsing star, which is the time it would take a star to collapse due to gravity, in the absence of pressure or other supporting forces. Supposing the density to be constant in a homogeneous spherical non-rotating star, the following well-known result arises: $$\tau_\text{ff} = \sqrt{\frac{3\pi}{32 \langle\rho\rangle G}}$$ I have tried to derive this relationship, however a factor of $1/\sqrt{2}$ is missing from my result. My attempt is as follows: consider a particle of mass $m$ at a radius $r$ inside the star; then, its equation of motion, where $M_r$ stands for the mass contained inside the spherical shell of radius $r$, would be: $$m \frac{\mathrm d^2r}{\mathrm dt^2} = - \frac{G M_r}{r^2} m$$ Which is to say: $$\frac{\mathrm d^2r}{\mathrm dt^2} + \frac{G M_r}{r^2} = 0$$ Furthermore, since: $$\langle\rho\rangle = \frac{M}{\frac{4\pi}{3} R^3}$$ Then: $$\langle M_r\rangle = 4 \pi \int_{0}^{r} \mathrm dr \> r^2 \langle\rho_r\rangle = \frac{4\pi}{3} \langle\rho\rangle r^3$$ Thus, an order of magnitude approach to the equations of motion would be: $$\frac{\mathrm d^2r}{\mathrm dt^2} + \frac{G \langle M_r\rangle}{r^2} = 0$$ Ergo, the differential equation to solve becomes: $$\frac{\mathrm d^2r}{\mathrm dt^2} + \frac{4\pi}{3} \langle\rho\rangle G \> r = 0 \quad \Big| \quad r(0) = R \> \land \> \frac{\mathrm d}{\mathrm dt}r(0) = 0$$ Such differential equation has a well-known solution: $$r(t) = C_1 \> \cos(\alpha t) + C_2 \> \sin(\alpha t) \quad \Big| \quad \alpha = \sqrt{\frac{4\pi}{3} \langle\rho\rangle G}$$ Therefore, when imposing the initial conditions, the solution obtained is: $$r(t) = R \> \cos(\alpha t)$$ Then: $$r(\tau_\text{ff}) = 0 \Longrightarrow \alpha \tau_\text{ff} = \frac{\pi}{2} \Longrightarrow \tau_\text{ff} = \frac{\pi}{2\alpha} = \sqrt{\frac{3\pi}{16 \langle\rho\rangle G}}$$ I would greatly appreciate if anyone could point out any mistakes in the development. 
I realize that most proofs derive the result from energy conservation, however I would like to be able to plot the solution to the equations of motion. Answer: There is a problem with what you have done. In your solution you have made $M_r$ (the sum of mass interior to the radial coordinate of your test mass) a variable that depends on $r$ and hence depends on $t$. But that isn't right; $M_r$ is a constant that should be set equal to the mass interior to an object at the surface of the collapsing star - i.e. the mass of the star - and it is a constant. Thus you need to solve this equation $$\frac{d^2r}{dt^2} + \frac{G M}{r^2} = 0$$ This is quite messy; for a body with $m\ll M$, falling from radius $R$ to radius $r$, the solution is given here as $$ t = R^{3/2}\left(\frac{ \arccos{(r/R)^{1/2}} + [(r/R)(1 - r/R)]^{1/2}}{\sqrt{2GM}}\right)$$ For the freefall time, set $r=0$ to get $$ t_{ff} = \frac{\pi R^{3/2}}{2\sqrt{2GM}}$$ Then replace $M$ using $M = 4\pi R^3 \bar{\rho}/3$ and this simplifies to $$t_{ff} = \sqrt{ \frac{3\pi}{32 G \bar{\rho}}}$$ Note that the freefall time is independent of radius for a homogeneous sphere of gas, so infalling particles do not "overtake" particles that are initially interior to them. This justifies treating $M_r$ as a constant.
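Since the original goal was to plot the solution of the equations of motion, here is a self-contained numerical sketch in Python (the units $G = M = R = 1$ are an illustrative choice) that integrates $\ddot{r} = -GM/r^2$ with RK4 and checks it against the closed-form $t(r)$ above:

```python
import math

def t_analytic(r, R=1.0, GM=1.0):
    # closed-form infall time from rest at R down to radius r (quoted above)
    x = r / R
    return R ** 1.5 * (math.acos(math.sqrt(x)) + math.sqrt(x * (1 - x))) \
        / math.sqrt(2 * GM)

def fall_time_numeric(r_stop, dt=1e-5):
    # RK4 integration of r' = v, v' = -1/r^2 from rest at r = 1
    r, v, t = 1.0, 0.0, 0.0
    acc = lambda rr: -1.0 / rr ** 2
    while r > r_stop:
        k1r, k1v = v, acc(r)
        k2r, k2v = v + 0.5 * dt * k1v, acc(r + 0.5 * dt * k1r)
        k3r, k3v = v + 0.5 * dt * k2v, acc(r + 0.5 * dt * k2r)
        k4r, k4v = v + dt * k3v, acc(r + dt * k3r)
        r += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
    return t

t_num = fall_time_numeric(0.1)
assert abs(t_num - t_analytic(0.1)) < 1e-3   # numerics match the formula

# The full free-fall time pi/(2 sqrt(2)) equals sqrt(3 pi / (32 G rho))
# once rho = M / ((4/3) pi R^3) = 3 / (4 pi) is substituted:
rho = 3 / (4 * math.pi)
assert math.isclose(math.pi / (2 * math.sqrt(2)),
                    math.sqrt(3 * math.pi / (32 * rho)))
```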
{ "domain": "physics.stackexchange", "id": 52014, "tags": "homework-and-exercises, newtonian-gravity, stars, free-fall" }
Hoare Logic for Factorial
Question: I came across this Hoare logic for factorials but I don't quite understand it. We multiply F and X but we're not adding up all values of F so how do we get the sum/factorial at the end? Precondition: $\{ X > 0 \land X = x \}$ $F := 1$ while $X > 0$ do $\quad F := F \cdot X$ $\quad X := X - 1$ od Postcondition: $\{F = x!\}$ Answer: What the Hoare triple states is the following: if you run the code with $X$ equal to some value $x > 0$, then at the end, $F$ will have the value $x!$ ($x$ factorial). Note that a factorial is a product, not a sum, so nothing needs to be added up. To see why the postcondition holds, use the loop invariant $F \cdot X! = x!$. It holds on entry ($F = 1$, $X = x$), and each iteration preserves it: multiplying $F$ by $X$ exactly compensates for $X!$ shrinking to $(X-1)!$. When the loop exits we have $X = 0$, so the invariant gives $F = F \cdot 0! = x!$. You can check that this is indeed the case.
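The invariant argument can be checked mechanically. Below is a small Python sketch (mine, not part of the original answer) that runs the program and asserts $F \cdot X! = x!$ before every iteration:

```python
import math

def factorial_loop(x):
    """The program from the question, with the loop invariant
    F * X! == x! checked at every iteration."""
    assert x > 0
    F, X = 1, x
    while X > 0:
        # the invariant holds before each iteration of the loop body
        assert F * math.factorial(X) == math.factorial(x)
        F = F * X
        X = X - 1
    # on exit X == 0, so the invariant forces F == x!
    assert F == math.factorial(x)
    return F

print([factorial_loop(n) for n in range(1, 6)])  # [1, 2, 6, 24, 120]
```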
{ "domain": "cs.stackexchange", "id": 15807, "tags": "logic, discrete-mathematics, hoare-logic, loop-invariants" }
The ugly Christmas tree, Haskell style
Question: Inspired by a few inverse tree ascii art F# questions, I wanted to give it a shot in Haskell. As seen in the linked questions, the resulting program reads an Int from stdin (\$0 \leq n \leq 5\$), and displays a tree of dimensions 100 * 63, consisting of \$n\$ Y-formed "trunks-and-branches" of which the three arms each have a height of \$16/2^{i-1}\$ (that is, the branch is \$16/2^{i-1}\$ high, and the branches are \$16/2^{i-1}\$ high). After each branch, the next Y-formed iterations start at the tops of the last Ys, until \$i\$ reaches \$n\$. The Ys are drawn in a 100 * 63 field of _ characters, and drawn with 1 characters. An example for \$n = 0\$ would be (halving all given dimensions to save space): __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ 
__________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ \$n = 1\$ would give: __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ __________________________________________________ ________________1_______________1_________________ _________________1_____________1__________________ __________________1___________1___________________ ___________________1_________1____________________ ____________________1_______1_____________________ _____________________1_____1______________________ ______________________1___1_______________________ _______________________1_1________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ And \$n = 4\$ would show: __________________________________________________ 
_________1_1_1_1_1_1_1_1_1_1_1_1_1_1_1_1__________ __________1___1___1___1___1___1___1___1___________ __________1___1___1___1___1___1___1___1___________ ___________1_1_____1_1_____1_1_____1_1____________ ____________1_______1_______1_______1_____________ ____________1_______1_______1_______1_____________ ____________1_______1_______1_______1_____________ _____________1_____1_________1_____1______________ ______________1___1___________1___1_______________ _______________1_1_____________1_1________________ ________________1_______________1_________________ ________________1_______________1_________________ ________________1_______________1_________________ ________________1_______________1_________________ ________________1_______________1_________________ _________________1_____________1__________________ __________________1___________1___________________ ___________________1_________1____________________ ____________________1_______1_____________________ _____________________1_____1______________________ ______________________1___1_______________________ _______________________1_1________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ ________________________1_________________________ The code is not as clean as I'd like it to be, and I'm sure it's not idiomatic Haskell (I don't see any Arrows, Functors or more than two types), but I'd like to learn and improve my skills. 
Please have at it :) module Main where import Data.List (groupBy, sortOn) data Point = Point Int Int deriving Show type Tree = [Point] trunk :: Point -> Int -> Tree trunk (Point x y) size = [Point x (y + d) | d <- [1..size]] split :: Point -> [Point] split (Point x y) = [Point (x + 1) (y + 1), Point (x - 1) (y + 1)] branch :: Point -> Int -> Tree branch start = branch' [start] where branch' _ 0 = [] branch' [single] size = split single ++ branch' (split single) (size - 1) branch' points size = widen points ++ branch' (widen points) (size - 1) where widen [Point leftx lefty, Point rightx righty] = [Point (leftx + 1) (lefty + 1), Point (rightx - 1) (righty + 1)] tree :: Point -> Int -> Int -> Tree tree _ _ 0 = [] tree start size splits = let trunks = trunk start size branches = branch (last trunks) size in trunks ++ branches ++ concat [tree st (size `div` 2) (splits - 1) | st <- take 2 $ reverse branches] formatTree :: Int -> Int -> Tree -> [String] formatTree width height = take height . flip (++) (repeat (replicate width '_')) . map (\points -> map (\x -> if x `elem` map (\(Point x _) -> x) points then '1' else '_') [1..width]) . groupBy (\(Point _ y1) (Point _ y2) -> y1 == y2) . sortOn (\(Point _ y) -> y) main :: IO () main = do sizeStr <- getLine let splits = read sizeStr mapM_ putStrLn $ reverse $ formatTree 100 63 $ tree (Point 50 0) 16 splits Answer: You can change your Point definition to some type that implements Bifunctor. (Earlier, Bifunctor was part of the bifunctors package.) A Bifunctor is a functor of two arguments. Here you can find more info. In case you wouldn't like to change the definition of Point, you can define a bimap-like function for your type. Your formatTree function is inefficient, since you are sorting, grouping and doing several other passes over the list. I've used a 2D array to represent the canvas state; that approach is asymptotically better, since we loop over the points only once.
During initialization array is mutable in ST monad and then converted to immutable via runSTArray. Also, I added error handling. Here's my attempt: module Main where import Control.Monad (forM_, mapM_) import Control.Applicative ((<$>)) import Data.Bifunctor (bimap, first, second) import Data.Array.MArray (newArray, writeArray) import Data.Array.ST (runSTArray) import Data.Array (Array, bounds, (!)) import Text.Read (readMaybe) type Point = (Int, Int) type Tree = [Point] type CordSum = (Int -> Int -> Int) type Canvas = Array (Int, Int) Bool makePoint :: Int -> Int -> Point makePoint = (,) line :: CordSum -> CordSum -> Point -> Int -> Tree line fX fY p height | height > 0 = [bimap (fX h) (fY h) p | h <- [0..(height -1)]] | otherwise = [] -- draw vertical line verticalLine :: Point -> Int -> Tree verticalLine = line (flip const) (+) -- draw diagonal line to right diagonalLineR :: Point -> Int -> Tree diagonalLineR = line (+) (+) -- draw diagonal line to left diagonalLineL :: Point -> Int -> Tree diagonalLineL = line subtract (+) -- draw subtree subtree :: Point -> Int -> Tree subtree p height = verticalLine p height ++ diagonalLineL pl height ++ diagonalLineR pr height where pl = bimap (subtract 1) (+height) p pr = bimap (+1) (+height) p -- calc cords of next subtree subtreeNext :: Point -> Int -> (Point, Point) subtreeNext p h = (pel, per) where next_h = 2 * h pel = bimap (subtract h) (+ next_h) p per = bimap (+ h) (+ next_h) p tree :: Point -> Int -> Int -> Tree tree _ _ 0 = [] tree _ 0 _ = [] tree start height splits = subtree start height ++ left_tree ++ right_tree where height' = height `div` 2 splits' = splits - 1 (pl, pr) = subtreeNext start height left_tree = tree pl height' splits' right_tree = tree pr height' splits' toCanvas :: Int -> Int -> Tree -> Maybe Canvas toCanvas width height tree | width > 0 && height > 0 = Just canvasArr | otherwise = Nothing where pointToIndex = id canvasArr = runSTArray $ do arr <- newArray ((0, 0), (width - 1, height - 1)) 
False forM_ tree $ \p -> writeArray arr (pointToIndex p) True return arr canvasToStrings :: Char -> Char -> Canvas -> [String] canvasToStrings f t can = strLine <$> yCords where (_, (maxX, maxY)) = bounds can xCords = enumFromThenTo maxX (maxX - 1) 0 yCords = enumFromThenTo maxY (maxY - 1) 0 toChar False = f toChar True = t strLine y = (\x -> toChar $ can ! (x, y)) <$> xCords drawCanvas :: Canvas -> IO () drawCanvas can = mapM_ putStrLn $ canvasToStrings '_' '1' can main :: IO () main = do mSize <- readMaybe <$> getLine case mSize of Just sp -> case toCanvas 100 63 (tree (makePoint 50 0) 16 sp) of Just canv -> drawCanvas canv Nothing -> putStrLn "N is to small" Nothing -> putStrLn "Please type integer" Also, you can view the code here. Some notes: You pattern match on your Point type too often; you could define some accessor functions or use record syntax. For example, you can define getX and getY functions to access the x and y coordinates. groupBy (\(Point _ y1) (Point _ y2) -> y1 == y2) can then be rewritten as groupBy ((==) `on` getY) once you define getY. Here you can find more about the on function.
{ "domain": "codereview.stackexchange", "id": 17443, "tags": "haskell, ascii-art, fractals" }
Analysing the results of various search engines and determining a winner
Question: This is a programming challenge I submitted as part of a job interview, which I failed because it lacked "maintainability" and "patterns and best industry practices", so I guess we could all learn something from my mistakes. The challenge was to write a program that, given a list of programming languages, it'd return a list of result counts in various search engines, a winner on each engine, and a total winner: C:\> searchfight.exe .net java .net: Google: 4450000000 MSN Search: 12354420 java: Google: 966000000 MSN Search: 94381485 Google winner: .net MSN Search winner: java Total winner: .net I was not allowed to use any external libraries, so no HtmlAgilityPack or stuff like that. I will only post what I consider the most relevant sections of the code to keep it short, but I also just uploaded the whole project to GitHub. So here is my Main function: static void Main(string[] args) { try { Run(args); } catch (Exception ex) { Console.WriteLine(); Console.WriteLine("An unexpected exception has occurred: " + Environment.NewLine + ex.ToString()); } } Here is Run: private static void Run(string[] args) { try { if (args.Length == 0) throw new ConfigurationException("Expected at least one argument."); var runners = ReadConfiguration().SearchRunners.Where(runner => !runner.Disabled).ToList(); var results = CollectResults(args, runners).Result; Console.WriteLine(); ConsoleHelpers.PrintAsTable(results.Languages, results.Runners, results.Counts, "{0:n0}"); // Using 'ConsoleHelpers.PrintAsList' will print as a list instead. 
Console.WriteLine(); ConsoleHelpers.PrintAsTable( new[] { "Winner" }, results.Winners.Select(winner => winner.Key).ToList(), new[] { results.Winners.Select(w => w.Value).ToList() }.ToRectangularArray() ); Console.WriteLine(); Console.WriteLine("Total winner: {0}", results.Winner); } catch (ConfigurationException ex) { Console.WriteLine(); Console.WriteLine(ex.Message); } catch (AggregateException ex) { ex.Handle(e => { var searchException = e as SearchException; if (searchException != null) { Console.WriteLine(); Console.WriteLine(string.Format("Runner '{0}' failed. {1}", searchException.Runner, searchException.Message)); return true; } else return false; }); } } PrintAsTable, which has this signature: public static void PrintAsTable<T>(IReadOnlyList<string> rowHeaders, IReadOnlyList<string> colHeaders, T[,] values, string formatString = "{0}") just prints the table to the console. Here is how the output would look like: | bing | google | stackoverflow | bingScrape | googleScrape .net | 50,200,000 | 867,000,000 | 221,181 | 50,700,000 | 7,180,000,000 java | 41,800,000 | 47,600,000 | 963,553 | 41,700,000 | 407,000,000 | bing | google | stackoverflow | bingScrape | googleScrape Winner | .net | .net | java | .net | .net Total winner: .net I did give them an option to print exactly as the original output by using PrintAsList instead of PrintAsTable. The ReadConfiguration static method deserializes a Configuration class from an XML which is included with the project. Here is the Configuration class: public class Configuration { [XmlArrayItem("SearchRunner")] public List<SerializableSearchRunner> SearchRunners { get; set; } } The point of this class is to be serializable, so that an end user could add, remove or modify search engines by editing the XML, which is read at run time. 
Here is SerializableSearchRunner: [XmlInclude(typeof(WebClientSearchRunner))] public abstract class SerializableSearchRunner : ISearchRunner { [XmlAttribute] public string Name { get; set; } [XmlAttribute] [DefaultValue(false)] public bool Disabled { get; set; } public abstract Task<long> Run(string query); } And here is ISearchRunner: public interface ISearchRunner { string Name { get; } bool Disabled { get; } Task<long> Run(string query); } This represents a search engine. Run is the function that takes a query string (a programming language name) and returns a long that contains the result count. So far, the only actual search runner is WebClientSearchRunner: public class WebClientSearchRunner : SerializableSearchRunner { [XmlAttribute] public string Address { get; set; } [XmlAttribute] public string QueryName { get; set; } public StringDictionary Headers { get; set; } public StringDictionary Parameters { get; set; } public ResultFinder Finder { get; set; } public ResultParser Parser { get; set; } public QueryFormatter QueryFormatter { get; set; } public TextClient Client { get; set; } public WebClientSearchRunner() { } public override async Task<long> Run(string query) { if (Finder == null) throw new ConfigurationException("Finder cannot be null."); if (Address == null) throw new ConfigurationException("Address cannot be null."); if (string.IsNullOrWhiteSpace(QueryName)) throw new ConfigurationException("QueryName cannot be empty."); var uriBuilder = BuildUri(query); var responseText = await (Client ?? TextClient.Default).GetResponseText(uriBuilder.Uri, Headers); var resultText = Finder.Find(responseText); return (Parser ?? ResultParser.Default).Parse(resultText); } private UriBuilder BuildUri(string query) { var parameters = HttpUtility.ParseQueryString(String.Empty); if (Parameters != null) { foreach (var param in Parameters) parameters[param.Key] = param.Value; } parameters[QueryName] = (QueryFormatter ?? 
QueryFormatter.Default).FormatQuery(query); try { var uriBuilder = new UriBuilder(Address); uriBuilder.Query = parameters.ToString(); return uriBuilder; } catch (UriFormatException ex) { throw new ConfigurationException("The given Address is not a valid URL.", ex); } } } StringDictionary is a serializable dictionary of strings (both key and values). ResultFinder, ResultParser, QueryFormatter and TextClient are all single method abstract classes (I removed a few things to make it shorter): public abstract class ResultFinder { public abstract string Find(string responseText); } public abstract class ResultParser { public abstract long Parse(string result); } public abstract class QueryFormatter { public abstract string FormatQuery(string query); } public abstract class TextClient { public abstract Task<string> GetResponseText(Uri uri, StringDictionary headers); } I'm not posting the classes that implement these but they're all pretty straightforward. Here is a sample XML with a couple of search runners. 
This should give you an idea of what some implementations do: <?xml version="1.0" encoding="utf-16"?> <Configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <SearchRunners> <SearchRunner xsi:type="WebClientSearchRunner" Name="stackoverflow" Address="https://api.stackexchange.com/2.2/questions" QueryName="tagged" Disabled="false"> <Parameters> <Item Name="site" Value="stackoverflow" /> <Item Name="filter" Value="!bCzphOiWu)Q3g)" /> </Parameters> <Finder xsi:type="JSONResultFinder"> <Path>["total"]</Path> </Finder> </SearchRunner> <SearchRunner xsi:type="WebClientSearchRunner" Name="bingScrape" Address="https://www.bing.com/search" QueryName="q" Disabled="false"> <Finder xsi:type="RegexResultFinder" GroupIndex="1"> <Pattern>\&lt;span[^\&gt;]+class="sb_count"[^\&gt;]*\&gt;([\d\.\,]+)</Pattern> </Finder> </SearchRunner> </SearchRunners> </Configuration> The RegexResultFinder is a nasty Finder that uses regex to search the result count in an HTML page. But then the JSONResultFinder can read JSON in case you'd rather use official APIs. I figured they may want me to read HTML since they're using MSN Search in the sample output, so I added both options. Finally, the CollectResults static method (called from the static method Run on the very top of this post) is a simple wrap around the static method Collect in the Results class. 
Here is the Results class: public class Results { private readonly Lazy<IReadOnlyList<KeyValuePair<string, string>>> _Winners; private readonly Lazy<string> _Winner; public readonly IReadOnlyList<string> Languages; public readonly IReadOnlyList<string> Runners; public readonly long[,] Counts; public IReadOnlyList<KeyValuePair<string, string>> Winners { get { return _Winners.Value; } } public string Winner { get { return _Winner.Value; } } private Results(IReadOnlyList<string> languages, IReadOnlyList<string> runners, long[,] counts) { if (languages == null) throw new ArgumentNullException("languages"); if (runners == null) throw new ArgumentNullException("runners"); if (counts == null) throw new ArgumentNullException("results"); if (languages.Count != counts.GetLength(0)) throw new InvalidOperationException("Counts first length must equal languages length."); if (runners.Count != counts.GetLength(1)) throw new InvalidOperationException("Counts second length must equal ruunners length."); Languages = languages; Runners = runners; Counts = counts; _Winners = new Lazy<IReadOnlyList<KeyValuePair<string, string>>>(GetWinners); _Winner = new Lazy<string>(GetWinner); } private IReadOnlyList<KeyValuePair<string, string>> GetWinners() { var result = new KeyValuePair<string, string>[Runners.Count]; for (var ri = 0; ri < Runners.Count; ri++) { var winnerIndex = Languages.Indexes().Select(li => Counts[li, ri]).MaxIndex(); var winner = Languages[winnerIndex]; result[ri] = new KeyValuePair<string, string>(Runners[ri], Languages[winnerIndex]); } return result; } private string GetWinner() { var winnerIndex = Languages.Indexes().Select(li => Runners.Indexes().Sum(ri => Counts[li, ri]) ) .MaxIndex(); return Languages[winnerIndex]; } public static async Task<Results> Collect(IReadOnlyList<string> languages, IReadOnlyList<ISearchRunner> runners, IProgressReporter progressReporter = null) { if (languages == null) throw new ArgumentNullException("languages"); if (runners == null) throw 
new ArgumentNullException("runners"); var results = new long[languages.Count, runners.Count]; if (progressReporter != null) { progressReporter.Initialize(languages.Count * runners.Count); } List<Task> tasks = new List<Task>(); for (var li = 0; li < languages.Count; li++) { for (var ri = 0; ri < runners.Count; ri++) { tasks.Add(StartTask(languages, runners, progressReporter, results, li, ri)); } } await Task.WhenAll(tasks.ToArray()); return new Results(languages, runners.Select(r => r.Name).ToList(), results); } private static async Task StartTask(IReadOnlyList<string> languages, IReadOnlyList<ISearchRunner> runners, IProgressReporter progressReporter, long[,] results, int li, int ri) { var arg = languages[li]; var runner = runners[ri]; try { var result = await runner.Run(arg); results[li, ri] = result; } catch (ConfigurationException ex) { throw new SearchException(arg, runner.Name, string.Format(ex.Message, arg, runner.Name), ex); } catch (WebRequestException ex) { throw new SearchException(arg, runner.Name, string.Format(ex.Message, arg, runner.Name), ex); } catch (ParsingException ex) { throw new SearchException(arg, runner.Name, string.Format(ex.Message, arg, runner.Name), ex); } if (progressReporter != null) progressReporter.Advance(); } } Indexes is an IReadOnlyCollection extension that returns Enumerable.Range(0, collection.Count), and MaxIndex returns the index of the max element given an enumerable. That's it. Hope it's not too much code. Guess there's no need to analyze every single line to get an idea of what I did wrong. Answer: Your code is really hard to follow. I think it is because the API you used is just way too complex for such a straightforward task. It's also weird. Like, why would the result class... collect itself? I had trouble wrapping my head around this stuff.
I think you should keep it simple: interface ISearchEngineFactory { ISearchEngine[] CreateEngines(); } interface ISearchEngine { string Name { get; } Response Send(string query); } public class Response { public int HitCount { get; set; } public string Query { get; set; } public string SourceName { get; set; } } public class Result { //make sure to synchronize this method public void Aggregate(Response response) { ... } //override it to return required output public override string ToString() { ... } } Your Main function will then look like this (you can wrap it in try catch if you feel like it): //move your initialization logic to factory ISearchEngineFactory factory = new ConfigurationFactory(); var engines = factory.CreateEngines(); var result = new Result(); Parallel.ForEach(engines, engine => { foreach(var query in args) { var response = engine.Send(query); result.Aggregate(response); } }); Console.WriteLine(result); Some things can still be improved. You could add a timeout parameter to Send method, for example. But hopefully you get the idea. As for your WebClient implementation - I can't really comment on that, since I am not experienced enough in web development. Maybe someone else will. :)
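To make the reviewer's proposed shape concrete, here is a rough Python sketch of the same architecture (factory-made engines, parallel queries, one aggregated result). The stub engines and their hard-coded hit counts are purely illustrative, and the Result class's aggregation is folded into plain dicts here:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Stub "engines": in the real program each would issue an HTTP query.
def make_engine(name, counts):
    return lambda query: (name, query, counts[query])

def collect(engines, queries):
    """Run every (engine, query) pair concurrently, then compute
    per-engine winners and the total winner."""
    results = defaultdict(dict)  # engine name -> {query: hit count}
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(engine, q) for engine in engines for q in queries]
        for fut in futures:
            name, query, count = fut.result()
            results[name][query] = count
    winners = {name: max(hits, key=hits.get) for name, hits in results.items()}
    totals = {q: sum(hits[q] for hits in results.values()) for q in queries}
    return winners, max(totals, key=totals.get)

engines = [
    make_engine("Google", {".net": 4_450_000_000, "java": 966_000_000}),
    make_engine("MSN Search", {".net": 12_354_420, "java": 94_381_485}),
]
winners, total_winner = collect(engines, [".net", "java"])
print(winners)        # per-engine winners
print(total_winner)   # '.net'
```

The point of the sketch is the separation: engines only know how to answer one query, aggregation lives in one place, and concurrency is a detail of `collect` rather than something every class has to participate in.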
{ "domain": "codereview.stackexchange", "id": 16874, "tags": "c#, interview-questions" }
What causes liquid helium to climb walls?
Question: It is a phenomenon which can be observed if you search on the web. Apparently liquid helium can crawl up walls. Does every superfluid do this crawling action, or is it special to liquid helium? How is the motion of liquid helium described and modeled in quantum mechanics? Answer: When helium, which turns liquid at about 4.2 Kelvin, is cooled further to below approximately 2 Kelvin, it undergoes a phase change. This phase of helium, referred to as Helium II, is a superfluid. What this means is that the liquid's viscosity becomes nearly zero. At the same time, its thermal conductivity becomes infinite. Because the viscosity is almost zero, the fluid flows very easily as a result of the smallest pressure or change in temperature. The response is so strong that even the smallest forces will help the light-weight liquid climb against the force of gravity. If you have liquid helium inside and outside your system, the liquid inside will flow till it matches the level of the liquid outside and the temperatures equalize.
{ "domain": "physics.stackexchange", "id": 38531, "tags": "quantum-mechanics, superfluidity" }
How do I convert tangential speed to angular speed in an elliptic orbit?
Question: I am running an animation of a satellite in an elliptic orbit (defined by a parametric equation for $x$ and $y$ as a function of $t$) and want to make sure the spacecraft is traveling at the right speeds at different points in its orbit. That is, it should go slower at its apoapsis and much faster at its periapsis. I can easily calculate the tangential speed of the satellite using this equation: $v=\sqrt{GM(\cfrac{2}{r}-\cfrac{1}{a})}$ How do I convert this to the angular speed of the satellite at this point? I've done extensive research (hours and hours) but haven't found anything of value. The closest thing was this expression of Kepler's Second Law: $\cfrac{dA}{dt}=\cfrac{1}{2}r^2\omega$ Since $\cfrac{dA}{dt}$ is a rate (area swept out per second) I rewrote this equation as $\cfrac{A}{P}=\cfrac{1}{2}r^2\omega$ where $A$ is the area of the elliptic orbit (given by $A=\pi ab$ where $a$ and $b$ are the semi-major and semi-minor axes of the ellipse, respectively), and $P$ is the period of the elliptic orbit (given by $P=2 \pi \sqrt{\cfrac{a^3}{GM}}$). Solving this for $\omega$ yields: $\omega=\cfrac{2A}{Pr^2}$ For each time step in my simulation I use the satellite's current position in this equation to compute $\omega$ and then use the result to update the current $\theta$. This updated $\theta$ is then plugged into the parametric equation mentioned above to get the satellite's $x$ and $y$ position. I can't find my mistake anywhere and would really appreciate it if someone could point it out to me.
In other words, the above formula gives you $\omega(r)$, but not $\omega(t)$ and $r(t)$. Also, note that $\theta$ is the true anomaly, which is the angle between the direction of periapsis and the current position of the body, as seen from the main focal point (where the attracting body is). And $r$ is the distance between the current position and the focal point. If you want to use cartesian coordinates $(x,y)$, it is better to parametrize them using the eccentric anomaly $E$: $$ x = a\cos E,\qquad y = b\sin E. $$ So how to find $E(t)$? For this, we need to introduce another parameter, called the mean anomaly $M$. The mean anomaly increases linearly with time: $$ M(t) = \frac{2\pi}{P}t = \sqrt{\frac{GM}{a^3}}t. $$ From $M(t)$, we can calculate the eccentric anomaly $E(t)$ using Kepler's equation $$ M = E - e\sin E, $$ which you have to solve numerically. Once $E(t)$ is known, $(x(t),y(t))$ follow immediately. For completeness, the true anomaly can be calculated from the eccentric anomaly: $$ \tan\frac{\theta}{2} = \sqrt{\frac{1+e}{1-e}}\tan\frac{E}{2}, $$ and the distance $r$ to the attracting body at the focal point is $$ r = \frac{a(1-e^2)}{1+e\cos\theta}. $$
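The recipe in this answer (mean anomaly, then Kepler's equation, then eccentric anomaly, then cartesian position) is easy to turn into simulation code. In this Python sketch the Newton starting guess and the shift by $ae$ (so the attracting body sits at the origin rather than at the ellipse's centre) are my additions:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position(t, a, e, GM):
    """(x, y) in the orbital plane at time t, with periapsis passage at t = 0
    and the focus (attracting body) at the origin."""
    b = a * math.sqrt(1.0 - e * e)
    n = math.sqrt(GM / a**3)        # mean motion, 2*pi / P
    E = solve_kepler(n * t, e)
    # x = a*cos(E) - a*e shifts from ellipse centre to the occupied focus
    return a * (math.cos(E) - e), b * math.sin(E)

x, y = position(0.0, 1.0, 0.5, 1.0)
print(x, y)  # periapsis: (a*(1 - e), 0) = (0.5, 0.0)
```

For the animation loop, just evaluate `position` at each frame's $t$; $\omega$ never needs to be computed explicitly, because the correct speed variation is built into Kepler's equation.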
{ "domain": "physics.stackexchange", "id": 11486, "tags": "newtonian-mechanics, orbital-motion, space, angular-velocity, satellites" }
How many photons does my remote control garage opener emit?
Question: Every time I drive up to my house I imagine all the photons spitting out of the remote control garage opener when I press the button. And I imagine the door opener in the garage receiving them. There must be a ton of these particles going in all directions if the door always opens, right? I'm just curious: How many photons does the remote control send out, roughly? And how many photons must my garage door opener receive to know it is time to open the door? Just one? Answer: Most likely, your garage door opener operates at a frequency of 315 MHz. Multiplying by Planck's constant, that means each photon has energy of about $2\times 10^{-25}$ joules. Most likely, your garage door opener operates at about $1/10$ of a watt (or less, per comments below). So each second, it emits 1/10 of a joule of energy. That's $(1/10)/(2\times 10^{-25})$ photons per second, or (very roughly) $5\times 10^{23}$. In other words, $500,000,000,000,000,000,000,000$ photons per second. To see how many of those hit the receiver, let $r$ be the distance from the transmitter to the receiver. The surface area of a sphere of radius $r$ is $4\pi r^2$. So if the receiver has area $A$, a fraction $A/(4 \pi r^2)$ of the photons hit the receiver. That's going to be a pretty small fraction, but still a whole lot of photons.
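Plugging the answer's numbers into a few lines of Python (the distance to the receiver and the antenna area below are my own illustrative guesses, not values from the answer):

```python
import math

H = 6.626e-34          # Planck's constant, J*s
FREQ = 315e6           # Hz, typical garage-door remote frequency
POWER = 0.1            # W, assumed transmitter power
R = 30.0               # m, assumed distance to the receiver
AREA = 0.01            # m^2, assumed receiver antenna area

photon_energy = H * FREQ                       # about 2e-25 J per photon
photons_per_second = POWER / photon_energy     # a few times 1e23
fraction = AREA / (4 * math.pi * R**2)         # inverse-square dilution
print(f"{photons_per_second:.2e} photons/s emitted")
print(f"{photons_per_second * fraction:.2e} photons/s at the receiver")
```

Even after the inverse-square dilution, the receiver still sees an astronomically large photon rate, which is why the answer concludes it is "still a whole lot of photons".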
{ "domain": "physics.stackexchange", "id": 26285, "tags": "photons, electrical-engineering" }
Can Jupiter's rings be seen with the naked eye by an astronaut nearby? How difficult would it be?
Question: Maybe a basic question, apologies. I understand that there would be different answers depending on how far is "nearby", depending on how inclined our orbit is, and what is our alignment with the planet and the sun (forward scattering and back scattering). Let's say we are beyond the radiation belts, on Callisto or Themisto, or a spacecraft with an orbit between them, (1.8 to 7 million km) The glare from the planet can make it very difficult, how much so? Voyager's, Galileo's, NH's and Juno's(navcam) pics of the rings all had high exposures, the Pioneers couldn't detect them at all. There is a dense band where Adrastea and Metis orbit , then "gossamer rings", and then a halo... here I ask about the first of these, the densest. We all know and agree that we would easily be able to see Saturn's rings from every angle, distance and orientation. How visible are Jupiter's rings from nearby, under what circumstances? If you can also comment on Uranus' and Neptune's cases I would very much appreciate it. Answer: Here is an answer based on photometric arguments: The astronaut probably would not be able to see the rings, but it would be worth a try. I would recommend the astronaut float somewhere at a distance of ~1.5 $R_J$ from Jupiter's center, where $R_J$ is the planetary radius of 74,000 km (at back of envelope precision). That way, they can look in a direction where Jupiter and the Sun are not in the field of view. You are correct that glare from the planet makes it difficult. This is especially a problem for the existing spacecraft and telescope observations of the rings, because the rings and Jupiter are typically close together in the sky. For an astronaut, that would be a huge problem. The astronaut would want to use some kind of a baffle to block bright objects (like a handmaid's bonnet, but in vantablack). Being at a location of 1.5 $R_J$ puts the astronaut inside the rings, so they can look toward the rings – and away from Jupiter – at the same time. 
First we need the solar illuminance at Jupiter, ~128,000/25 lux = 5000 lux (wiki:Lux), because Jupiter is 5x farther from the Sun than Earth is. Then we need the effective visible area of the rings. I will cheat a bit on the geometry. At 1.5 $R_J$, the astronaut is 0.3 $R_J$ from the highest-opacity part of the ring, near 1.8 $R_J$ (throop++2004, fig. 7). The equivalent thickness of the rings if they were perfectly reflective, called the VIF, is about 4-15 cm (throop++2004, fig. 14). The rings are going to stretch fully across the sky from the astronaut's viewpoint, so I will cheat and say the length is half the diameter of a 0.3-$R_J$ circle. This works out to an equivalent visible area (for fully reflective rings) of 7 km$^2$. Now we can get the luminous flux of the part of the rings visible to the astronaut, by multiplying the illuminance by the equivalent visible area: 5000 lux $\times$ 7 km$^2$ = $3.5\times10^{10}$ lumens. The illuminance from the rings, at the astronaut, can be found if we assume each part of the rings is isotropically scattering light back into a half-sphere which has a radius of 0.3 $R_J$. This gives an illuminance of $1.1\times10^{-5}$ lux, or 0.011 millilux (0.011 mlx). Would rings with an illuminance of 0.011 mlx be visible to the astronaut? The Milky Way gives an illuminance of 13 mlx (NPS), so the rings would be ~1000x fainter than the Milky Way. This would be pretty hard to see. For comparison, a 6th magnitude star is barely visible and gives an illuminance of 8 nlx (wiki:Apparent_magnitude). The rings at 0.011 mlx are giving more than 1000x more light than this faintest star, but unlike the star, the ring light is spread out in a line across the whole sky from the astronaut's viewpoint. So I think it would be pretty difficult to see the rings, but worth a try.
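The arithmetic above can be replayed in a short Python sketch. All numbers are the answer's back-of-envelope values, and the half-sphere scattering geometry is the answer's stated assumption:

```python
import math

# Back-of-envelope values from the answer above
R_J = 74_000e3                        # Jupiter's radius, metres
sun_at_jupiter = 128_000.0 / 5**2     # solar illuminance at ~5 AU, lux (~5000)

ring_area = 7e6                       # equivalent fully reflective visible area, m^2
flux = sun_at_jupiter * ring_area     # lumens scattered by the visible ring segment

# Assume each ring element scatters isotropically into a half-sphere
# of radius 0.3 R_J reaching the astronaut
r = 0.3 * R_J
illuminance = flux / (2 * math.pi * r**2)   # lux at the astronaut

milky_way = 13e-3                     # Milky Way illuminance, lux (NPS)
print(f"rings: {illuminance:.1e} lux, "
      f"{milky_way / illuminance:.0f}x fainter than the Milky Way")
```

Running this lands at roughly $10^{-5}$ lux, about three orders of magnitude below the Milky Way's illuminance, which is the quantitative basis for "pretty hard to see".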
{ "domain": "astronomy.stackexchange", "id": 6073, "tags": "jupiter, planetary-ring, naked-eye, visible-light" }
Visualizing vector fields lines in Kerr
Question: I want to visualise a test EM field in Kerr spacetime (I want to plot the integral field lines). The field is a test field in the sense that it doesn't change the background (metric). Could anyone point me to some more technical literature or some Mathematica/python code that would help me? What I thought I would do: I have the field $F^{\alpha \beta}$ in Boyer-Lindquist coordinates. I then took the ZAMO tetrad (observer family) $e^\alpha_a$, where $e^\mu_0 = u^\mu$ is the four-velocity normalized to $$u_\alpha u^\alpha = -1.$$ I then projected the EM tensor onto the tetrad $$ F^{\alpha \beta} e_\alpha^a e_\beta^b = F^{ab} $$ and from this I took the standard definition of the electric field $$ E^\alpha = -\frac{1}{2} F^{\alpha \beta }u_\beta \implies E^a = -\frac{1}{2} F^{a0} $$ So now I have the electric field with respect to the ZAMO tetrad. But how do I visualize this? The tetrad is still expressed in Boyer-Lindquist coordinates (which are like spheroidal coordinates). Do I have to transform into Kerr-Schild coordinates, which reduce to Cartesian (Minkowski) coordinates in the flat-spacetime limit? I hope it's obvious from the context, but I used Greek letters $\alpha \beta$ to denote the indices with respect to coordinates and Latin letters $ab$ to denote indices with respect to the tetrad. For example I would like to plot something like figure 2 and figure 3 in this article. Edit: The question can also be reduced to: do I always need to transform to Cartesian-like coordinates when plotting fields on a curved background? Answer: Nataa asked: "I want to visualise a test EM field in Kerr spacetime [...] But how do I visualize this? [...] Could anyone point me to some more technical literature or some Mathematica/python code that would help me?" 
The Mathematica code for the $\rm \{\lambda,z\}$-plot is at kerr.newman.yukterez.net at 17) streamplot, also see doi:10.1103/physreva.36.5118 For the vertical and horizontal magnetic and electric field lines we have $\rm M_z=Q \ Im[(z-ia)/\sigma] \ \ \ \ \ \ \ \ \ M_{\lambda}=Q \ Im[\lambda/\sigma]$ $\rm E_z=Q \ Re[(z-ia)/\sigma] \ \ \ \ \ \ \ \ \ \ E_{\lambda}=Q \ Re[\lambda/\sigma]$ $\rm \sigma=[\lambda^2+(z-ia)^2]^{3/2} \ \ \ \ \ \ \ \ \ \ \lambda=\sqrt{x^2+y^2}$ In a streamplot you'd plot $\rm \{M_z, M_{\lambda}\}$ as vector, and in a contourplot its magnitude.
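As a sketch in Python/NumPy rather than the linked Mathematica (the values of Q and a here are arbitrary test numbers), the field components above can be evaluated directly from the complex potential, and sanity-checked against the Coulomb field in the $a \to 0$ limit:

```python
import numpy as np

def kerr_newman_field(lam, z, Q=1.0, a=0.6):
    # sigma = [lambda^2 + (z - i a)^2]^(3/2); per the formulas above,
    # the electric components are the real parts of the complex
    # potential and the magnetic components are the imaginary parts
    sigma = (lam**2 + (z - 1j * a)**2)**1.5
    E_lam = Q * np.real(lam / sigma)
    E_z   = Q * np.real((z - 1j * a) / sigma)
    M_lam = Q * np.imag(lam / sigma)
    M_z   = Q * np.imag((z - 1j * a) / sigma)
    return E_lam, E_z, M_lam, M_z

# a = 0 must reduce to a point charge, E = Q r / r^3:
# at (lam, z) = (3, 4), r = 5, so E_lam = 3/125 and E_z = 4/125
E_lam, E_z, M_lam, M_z = kerr_newman_field(3.0, 4.0, Q=1.0, a=0.0)
print(E_lam, E_z)
```

For a figure like the linked article's, evaluating these components on a $(\lambda, z)$ grid and passing `(M_lam, M_z)` to `matplotlib.pyplot.streamplot` plays the role of the Mathematica StreamPlot.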
{ "domain": "physics.stackexchange", "id": 94596, "tags": "electromagnetism, general-relativity, magnetic-fields, electric-fields" }
How Is Universal Energy Conserved Here? (Cosmology)
Question: There are three scenarios I would like to discuss here. 1- Jupiter and other "failed star" planets. These are gas giant planets which are much much bigger than Earth (100 times or more, by mass) but don't have enough gravitational crunch in the core to initiate fusion like a star. However, these massive planets still generate heat energy in their core due to outrageous pressures that exist there and the friction that takes place between the atoms. So. How do these planets generate heat energy without compensating for it in terms of mass etc? 2- Same question about neutron stars. Their core is composed entirely of neutrons which do not disintegrate into electron-proton pairs. Yet their core temperatures reach millions of degrees. How come these beasts maintain such horrific temperatures without converting any matter into energy (like a "normal" star does)? 3- They say there are active volcanoes on some of Jupiter's moons (Io to be precise). Now at such distances from the Sun, one would expect dead, activity-less moons. They also state that Jupiter's massive pull causes a tidal effect on the moon and initiates colossal friction in its core which results in volcanic action. So ... can we simply "generate" energy out of gravitational force? I mean ... it would make sense if Io was gradually spiralling in towards gravitational doom, decreasing its distance toward Jupiter with each orbit. But they say the distance of Io from Jupiter is constant. Oh well. Answer: The Earth also radiates more energy than it receives from the Sun. There are many mechanisms by which an astronomical body can generate heat, and nuclear fusion is just one. In the case of the Earth the extra heat is generated by radioactive decay in the Earth's core. There is an excellent discussion of the source of Jupiter's excess heat in this article from the Nanjing University web site. 
The main source of Jupiter's heat is probably simply the energy from the gravitational collapse that is still working its way out. At first glance it might seem improbable that Jupiter is still losing its original collapse energy, but Jupiter is big and heat transport from the core is extremely slow. Radioactive heating will be happening in Jupiter's core just as in Earth's core, but the outer planets contain relatively lower percentages of the heavy elements and therefore lower levels of radionuclides. There isn't enough radioactive decay in Jupiter's core to make much difference. The same applies to neutron stars. They aren't generating much in the way of heat, but they will take a long time to cool. Finally, tidal losses cause a satellite to move outwards, not inwards. This is a result of the conservation of angular momentum. The Moon is moving away from the Earth for precisely this reason. You say the Jupiter-Io distance is not changing, but I'm not sure we can measure it accurately enough to say that. Although the volcanoes may look spectacular, the energy involved in tidal heating is relatively small and any changes in the Jupiter-Io distance will also be small.
{ "domain": "physics.stackexchange", "id": 24736, "tags": "cosmology, energy-conservation" }
Calculating potential Chess moves
Question: I started writing a chess game in Python for fun. I learned Python in my free time, but I've been at it for a while. The code works but I just want to know if there's anything non-pythonic in my code. The functions calculate all potentially possible moves (ignoring game state). from itertools import product from itertools import chain from functools import wraps from math import atan2 _angle_to_direction = {90: "up", -90: "down", 180: "right", 0: "left", -135: "right_down", 135: "right_up", 45: "left_up", -45: "left_down"} def move(f): @wraps(f) def wrapper(*args, **kwargs): x, y = args moves = f(x, y) if (x, y) in moves: moves.remove((x, y)) return moves return wrapper def check_range(x:tuple) -> bool: return x[0] >= 0 and x[1] >= 0 and x[0] < 8 and x[1] < 8 @move def _knight(x:int, y:int) -> set: moves = chain(product([x - 1, x + 1], [y - 2, y + 2]), product([x - 2, x + 2], [y - 1, y + 1])) moves = {(x, y) for x, y in moves if check_range((x, y))} return moves @move def _rook(x:int, y:int) -> set: return {(x, i) for i in range(0, 8)}.union({(i, y) for i in range(0, 8)}) @move def _bishop(x:int, y:int) -> set: possible = lambda k: [(x + k, y + k), (x + k, y - k), (x - k, y + k), (x - k, y - k)] return {j for i in range(1, 8) for j in possible(i) if check_range(j)} def direction(start:tuple, end:tuple) -> str: delta_a = start[1] - end[1] delta_b = start[0] - end[0] return _angle_to_direction[int(atan2(delta_a, delta_b) * 180 / 3.14)] Answer: The implementation of your move decorator is a little schizophrenic. On the one hand, it carefully allows any and all arguments to be passed to it by declaring itself def wrapper(*args, **kwargs). However, it then requires exactly two positional arguments which it then passes on to the wrapped function. Keyword arguments are accepted but ignored and dropped. I would suggest changing this to be all at one end or all at the other end of this spectrum. 
Given the annotations elsewhere, I would expect you to prefer changing the function to def wrapper(x, y) (or maybe to def wrapper(x, y, **kwargs) but then also passing **kwargs to f). It might make sense to incorporate check_range into the wrapper as well, simplifying most of the move generators. It might also make sense to skip the decorator and instead call the filter in your move generators: return filter_move((x, y), moves). As a side note, it would be interesting to see if there are any performance advantages to either the current approach of checking for (x,y) and conditionally removing it or to unconditionally doing a set difference: return f(x, y) - {(x, y)}. The rest of your code is fairly straightforward, if perhaps a bit too clever. Clever is typically bad for other readers, or for debugging misbehavior. The two things that stand out to me are the name of your lambda possible (which seems more like a diagonal_moves_of_distance or simply diagonal), and the approach used for direction. Given the limited number of possible move deltas, bringing in trigonometry to handle it seems like overkill; I'd almost expect a simple if/else tree instead. As is, it will have problems classifying a knight's move.
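A sketch of how those suggestions might combine (the names `rook`, `sign`, and the delta lookup table are illustrative choices, not from the original; the direction labels follow the original's convention of taking deltas as start minus end):

```python
from functools import wraps

def check_range(pos):
    x, y = pos
    return 0 <= x < 8 and 0 <= y < 8

def move(f):
    # explicit (x, y) signature, range filter folded into the wrapper,
    # and an unconditional set difference instead of test-and-remove
    @wraps(f)
    def wrapper(x, y):
        return {m for m in f(x, y) if check_range(m)} - {(x, y)}
    return wrapper

@move
def rook(x, y):
    return {(x, i) for i in range(8)} | {(i, y) for i in range(8)}

def sign(v):
    return (v > 0) - (v < 0)

_DELTA_TO_DIRECTION = {
    (1, 0): "up", (-1, 0): "down", (0, -1): "right", (0, 1): "left",
    (-1, -1): "right_down", (1, -1): "right_up",
    (1, 1): "left_up", (-1, 1): "left_down",
}

def direction(start, end):
    # same deltas as the original (start - end), but a lookup on their
    # signs instead of trigonometry; covers the eight queen-line directions
    da, db = start[1] - end[1], start[0] - end[0]
    return _DELTA_TO_DIRECTION[(sign(da), sign(db))]
```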
{ "domain": "codereview.stackexchange", "id": 5308, "tags": "python, python-3.x, chess" }
Why doesn't the conservation of linear momentum hold in this case?
Question: So the little box in the image shows my isolated system consisting of two masses $m$ and $M$ of which $M$ is fixed to the floor of the system–it cannot be moved. The system is evacuated and in space (hence free from the influence of gravity). Furthermore, the collision of $m$ and $M$ is elastic. My doubt is the following: Before the collision (the box on the left), the linear momentum of the system is $m$ $\vec{v}$. After the collision, the linear momentum of the system would be -$m$ $\vec{v}$. Doesn't this contradict the conservation of linear momentum in an isolated system? Answer: You are right that there is a contradiction in the scenario you set up. In fact, the contradiction comes in your very first sentence: my isolated system ... of which $M$ is fixed to the floor of the system. You have to pick. Either the system is isolated, or the mass $M$ is interacting with the "floor". You can't have both. Therefore you should not be surprised that starting from this contradictory premise, you have derived other seemingly impossible results. I think it would be helpful to discuss three correct ways of setting up the problem. The first way will be where an external agent is holding the mass $M$ fixed. The second case will be where the masses are truly isolated. For the third case, we will take again the case where the masses are truly isolated, but we will look at the motion in a frame fixed to the mass $M$. In the first case, where the mass $M$ is fixed in place by an external agent, the momentum of the two mass system does indeed change from $m\vec{v}$ to $-m\vec{v}$, and so the momentum of this system is not conserved, but that is allowed since it is not an isolated system. Momentum is transferred to the external agent through the force used to hold the mass $M$ in place. Therefore total momentum is still conserved. 
Now in the second case where we assume the masses are isolated, the mass $M$ recoils when impacted by the mass $m$, and the amount that it recoils is exactly determined by conservation of momentum, so momentum is again conserved in this case. The third and final case is where the masses are again isolated, but we view the motion in the frame of the mass $M$. Here there does seem to be a contradiction because the masses truly are isolated, and the mass $M$ truly does appear to be fixed. The resolution to this contradiction is that the frame is non-inertial, and Newton's laws are asserted to be true in inertial frames. The frame fixed to the mass $M$ accelerates at the instant of the collision, which is exactly when the total momentum seems to change. So any way you analyze the situation, as long as you are consistent, you get a reasonable result.
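For the second case, the recoil that conservation of momentum demands can be made concrete with the standard 1-D elastic-collision formulas (a sketch; the masses and speed are arbitrary test values):

```python
def elastic_collision(m, M, v):
    # 1-D elastic collision: mass m arrives with speed v,
    # mass M is initially at rest, and the pair is isolated
    v_m = (m - M) / (m + M) * v   # final velocity of m
    v_M = 2 * m / (m + M) * v     # recoil velocity of M
    return v_m, v_M

m, M, v = 1.0, 1000.0, 2.0
v_m, v_M = elastic_collision(m, M, v)
# momentum and kinetic energy are both conserved, and for M >> m the
# light mass very nearly reverses (v_m -> -v) while M barely moves
print(v_m, v_M, m * v_m + M * v_M)
```

In the limit $M \to \infty$ the recoil velocity goes to zero but the momentum transferred to $M$ tends to $2m\vec{v}$, which is exactly the momentum the "fixed wall" picture silently discards.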
{ "domain": "physics.stackexchange", "id": 39177, "tags": "newtonian-mechanics, momentum, conservation-laws, collision" }
Simple jQuery Slider Plugin
Question: I'm writing a very simple fading slider plugin for my own use. I would like to know what I could do to improve upon what I've already done. I think I've done well, but I am certain there are things that I can do to improve. (function($){ $.fn.slider = function(options){ var settings = $.extend({ selector:this, speed:3000 }, options); var $children = settings.selector.children();//Check the children from the element calling this function var $currentChildIndex = 0;//Set the current Index to 0 since we've not started yet. var $current = $($children[$currentChildIndex]);//Get the current child (Redundant?) setInterval(function(){ $current = $($children[$currentChildIndex]);//Assign current from list of children using currentChildIndex $current.addClass('active');//Assign active to current slider $children.each(function(key,obj){ if(!$current.is($(obj))){//Remove all active elements that aren't the current slider $(obj).removeClass('active'); } }); if($currentChildIndex==($children.length-1)){//Check if Index has reached the end, if so reset to start $currentChildIndex=0; }else{ $currentChildIndex++; } },settings.speed);//Wait every X seconds before changing slide } }(jQuery)); Answer: I think you can remove all of the comments you have here, as I think you are already using mostly meaningful variable names (a few minor comments on this below) and the code is clear in intent. The comments here mostly seem like clutter. This is not to mention that comments at the end of a line of code are generally hard to read. Comments should typically be before the line/section of code to which they apply on their own line(s). Typically jQuery plug-ins are applied like $([selector]).slider(), such that inside the plugin code this refers to a jQuery collection. You are not treating this as a collection but rather as an individual element. Think of this as a collection. Also, I don't think the selector setting is relevant, for this reason. 
Typically element selection has already been applied by the point you are in the plug-in code. Keeping this philosophy of potentially working with a collection of elements, would also allow you to invoke multiple sliders with one pass. For example: // every element with slider class will be turned into a slider $('.slider').slider(); Typically jQuery plug-ins are designed to maintain chaining so a return statement should be applied to allow the collection that the plug-in is acting against (or some filtered subset of it depending on the logic of the plug-in) to be returned for further method chaining. Should $children be named $slides to indicate the selected children represent the slides to be displayed? Regarding var $currentChildIndex, why not just currentChildIndex? The styling approach of prepending $ to variables is typically to help identify jQuery collections. This is not a jQuery collection. This variable name is probably also unnecessarily long. Perhaps just currentIndex which still conveys clear meaning. I don't think you need $current at all. You can directly reference members of the $children collection directly using $children.eq(currentIndex) No need to iterate $children to remove class from inactive children. You can simplify this. For example: $children.removeClass('active'); $children.eq(currentIndex).addClass('active'); You can simplify your if-else (to avoid unnecessary else code paths). For example: currentChildIndex++; if(currentChildIndex === $children.length) { currentChildIndex = 0; } I would recommend using strict comparisons (===, !==) as your default means to compare rather than loose comparisons. Loose comparisons tend to introduce bugs around truthy/falsey behavior (though in this case there should not be concern as you are clearly only comparing integer values). IMO, this is just a good habit to get into with one's coding. Only use loose comparisons where there is a specific reason to do so. 
Putting it all together you might have something like: (function($){ $.fn.slider = function(options){ var settings = $.extend({ speed:3000 }, options); return this.each(function() { var $slides = $(this).children(); var currentIndex = 0; setInterval(function(){ $slides.removeClass('active'); $slides.eq(currentIndex).addClass('active'); currentIndex++; if (currentIndex === $slides.length) { currentIndex = 0; } },settings.speed); }); } }(jQuery));
{ "domain": "codereview.stackexchange", "id": 24642, "tags": "javascript, jquery" }
What does 'true concurrency' mean?
Question: I often hear phrases like 'true concurrency semantics' and 'true concurrency equivalences' without any references. What do those terms mean and why are they important? What are some examples of true concurrency equivalences and what is the need for them? E.g. in which cases are they more applicable than more standard equivalences (bisimulation, trace equivalence, etc)? Answer: The term "true concurrency" arises in the theoretical study of concurrent and parallel computation. It is in contrast to interleaving concurrency. True concurrency is concurrency that cannot be reduced to interleaving. Concurrency is interleaved if at each step in the computation, only one atomic computing action (e.g. an exchange of messages between sender and receiver) can take place. Concurrency is true if more than one such atomic action takes place in a step. The simplest way of distinguishing both is to look at the rule for parallel composition. In an interleaving based setting, it would look something like this: $$\frac{P \rightarrow P'}{P|Q \rightarrow P'|Q}$$ This rule enforces that only one process in a parallel composition can execute an atomic action. For true concurrency, a rule like the following would be more appropriate. $$\frac{P \rightarrow P'\quad Q \rightarrow Q'}{P|Q \rightarrow P'|Q'}$$ This rule allows both participants in a parallel composition to execute atomic actions. Why would one be interested in interleaved concurrency, when concurrency theory is really the study of systems that execute computation steps in parallel? The answer is, and that's a great insight, that for simple forms of message passing concurrency, true concurrency and interleaving based concurrency are not contextually distinguishable. In other words, interleaved concurrency behaves like true concurrency as far as observers can see. Interleaving is a good decomposition of true concurrency. 
Since interleaving is easier to handle in proofs, people often only study the simpler interleaving based concurrency (e.g. CCS and $\pi$-calculi). However, this simplicity disappears for concurrent computation with richer forms of observation (e.g. timed computation): the difference between true concurrency and interleaved concurrency becomes observable. Standard equivalences like bisimulations and traces have the same definitions for true and interleaving based concurrency. But they may or may not equate different processes, depending on the underlying calculus. Let me give an informal explanation of why interleaving and truly concurrent interaction are indistinguishable in simple process calculi. The setting is a CCS or $\pi$-like calculus. Say we have a program $$ P \quad=\quad \overline{x} \ |\ \overline{y} \ |\ x.y.\overline{a} \ |\ y.\overline{b} $$ Then we have the following truly concurrent reduction: \begin{eqnarray*} P &\rightarrow& y.\overline{a} \ |\ \overline{b} \end{eqnarray*} This reduction step can be matched by the following interleaved steps: \begin{eqnarray*} P &\rightarrow & \overline{x} \ |\ x.y.\overline{a} \ |\ \overline{b} \\ &\rightarrow & y.\overline{a} \ |\ \overline{b} \end{eqnarray*} The only difference between both is that the former takes one step, while the latter two. But simple calculi cannot detect the number of steps used to reach a process. At the same time, $P$ has the following second interleaved reduction sequence: \begin{eqnarray*} P &\rightarrow & \overline{y} \ |\ y.\overline{a} \ |\ y.\overline{b} \\ &\rightarrow & \overline{a} \ |\ y.\overline{b} \end{eqnarray*} But this is also a reduction sequence in a truly concurrent setting, as long as true concurrency is not forced (i.e. interleaved executions are allowed even when there is potential for more than one interaction at a time).
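The reduction sequences above can be mimicked with a toy interpreter (a sketch; the encoding of a process as a tuple of action names, with '~x' standing for the output action, is an illustrative choice):

```python
def sync(procs, sender, receiver):
    # one atomic communication: process `sender` emits '~c' and process
    # `receiver` consumes 'c'; exhausted processes become the inert ('0',)
    assert procs[sender][0] == '~' + procs[receiver][0]
    return tuple(
        (p[1:] or ('0',)) if i in (sender, receiver) else p
        for i, p in enumerate(procs)
    )

# P = ~x | ~y | x.y.~a | y.~b
P = (('~x',), ('~y',), ('x', 'y', '~a'), ('y', '~b'))

# truly concurrent: both synchronizations happen within a single step
true_step = sync(sync(P, 0, 2), 1, 3)

# interleaving: the same two synchronizations, one atomic action per step
s = sync(P, 0, 2)   # first atomic step
s = sync(s, 1, 3)   # second atomic step

# either way the residual is y.~a | ~b; only the step count differs,
# and a simple calculus cannot observe the number of steps
print(true_step)
```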
{ "domain": "cs.stackexchange", "id": 13222, "tags": "terminology, reference-request, concurrency" }
Print pair representing objects from sequence of nonnegative integer pairs
Question: There are some things I am not sure of from a professional standpoint about the "correct" way to write C++ code. I have been looking through source code produced by various opensource projects, as well as other code posted here and on Stack Overflow. So let's just leave it at this. Let's say I am interviewing for company A over the phone. Not white board yet. They ask me to write the code below and turn it in an hour. This is a hypothetical situation; however, it parallels how most "big" companies are interviewing nowadays. Would this code "get me the job"? If not, what would you change, or what can you coach me with so I can get a job programming? #include <iostream> #include <queue> #include <algorithm> #include <iterator> /* * This program reads a sequence of pairs of nonnegative integers * less than N from standard input (interpreting the pair p q to * mean "connect object p to object q") and prints out pairs * representing objects that are not yet connected. It maintains an * array id that has an entry for each object, with the property * that id[p] and id[q] are equal if and only if p and q are * connected. */ static const int N = 10; int main( int argc, char *argv[ ] ) { /* * Ease the type of long types */ typedef std::ostream_iterator<int> output_data; typedef typename std::vector<int>::iterator v_it; /* * Generate Test Data */ std::pair<int,int> pairs[12] = { std::pair<int,int>( 3,4 ), std::pair<int,int>( 4,9 ), std::pair<int,int>( 8,0 ), std::pair<int,int>( 2,3 ), std::pair<int,int>( 5,6 ), std::pair<int,int>( 2,9 ), std::pair<int,int>( 5,9 ), std::pair<int,int>( 7,3 ), std::pair<int,int>( 4,8 ), std::pair<int,int>( 5,6 ), std::pair<int,int>( 0,2 ), std::pair<int,int>( 6,1 ) }; /* * Load Test Data onto Queue */ std::queue<std::pair<int,int> > queue; for( int x=0;x<12;x++ ) { queue.push( pairs[x] ); } /* * Data structure to represent nodes in graph. * An index in the vector is an id for a node. 
*/ std::vector<int> id; /* * Start the nodes as not being connected */ id.reserve( N ); for( int i = 0;i<N;i++ ) id.push_back( i ); std::cout << "p q\t"; /* * Output the data */ copy( id.begin( ), id.end( ), output_data( std::cout, " " ) ); std::cout << std::endl; std::cout << "------------------------------------"; std::cout << std::endl; /* * Algorithm to find out whether or not any * given pair p-q is connected. It does not show * how they are connected. */ v_it start = id.begin( ); while( !queue.empty( ) ) { std::pair<int,int> &pair = queue.front( ); int p = pair.first; int q = pair.second; int t = *(start+p); if( t == *(start+q) ) { } else { for( v_it i = id.begin( ); i < id.end( ); i++ ) { if( *(i) == t ) { *(i) = *(start+q); } } } std::cout << p << " " << q << "\t"; copy( &id[0], &id[N], output_data( std::cout, " " ) ); std::cout << std::endl; queue.pop( ); } } The output and arguments to gcc would look like this: [mehoggan@desktop robert_sedgewick]$ g++ -o qf -Wall ./quick_find.cpp [mehoggan@desktop robert_sedgewick]$ ./qf p q 0 1 2 3 4 5 6 7 8 9 ------------------------------------ 3 4 0 1 2 4 4 5 6 7 8 9 4 9 0 1 2 9 9 5 6 7 8 9 8 0 0 1 2 9 9 5 6 7 0 9 2 3 0 1 9 9 9 5 6 7 0 9 5 6 0 1 9 9 9 6 6 7 0 9 2 9 0 1 9 9 9 6 6 7 0 9 5 9 0 1 9 9 9 9 9 7 0 9 7 3 0 1 9 9 9 9 9 9 0 9 4 8 0 1 0 0 0 0 0 0 0 0 5 6 0 1 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 0 0 0 6 1 1 1 1 1 1 1 1 1 1 1 Answer: I have a few more comments. I think you had a good idea adding typedefs to keep some of the names shorter and more meaningful, but I think I'd add even more than that. typedef std::ostream_iterator<int> output_data; typedef std::vector<int> graph; // As Martin said, `typename` isn't required (or really even allowed) here. 
typedef graph::iterator v_it; typedef std::pair<int, int> dt; Since you have code to show p, q and id in a couple of different places, and it's important that they match up with each other, I'd move that into a little function named display or something on that order. In one place, you need to display the names p and q, and in others their values, so we want to make this a template parameterized on the types of p and q: template <class T> void display(T p, T q, graph const &id) { std::cout << p << " " << q << "\t"; std::copy( id.begin( ), id.end( ), output_data( std::cout, " " ) ); std::cout << '\n'; } For example, when we use this from main to display the initial state, we use code something like this: display('p', 'q', id); std::cout << std::string(N*3, '-') << "\n"; As a general rule, I prefer to create a vector already initialized, rather than creating, then filling it with data. Since filling a vector (or whatever) with a sequence of values is something that comes up all the time, I have a sequence class and gen_seq front end for exactly that purpose. It would probably be overkill to write gen_seq just for this purpose, but I think it's worth having around anyway, and when you do, you might as well use it. The header looks like this: #ifndef GEN_SEQ_INCLUDED_ #define GEN_SEQ_INCLUDED_ template <class T> class sequence : public std::iterator<std::forward_iterator_tag, T> { T val; public: sequence(T init) : val(init) {} T operator *() { return val; } sequence &operator++() { ++val; return *this; } bool operator!=(sequence const &other) { return val != other.val; } }; template <class T> sequence<T> gen_seq(T const &val) { return sequence<T>(val); } #endif With this, you can create and initialize id like this: #include "gen_seq" graph id(gen_seq(0), gen_seq(N)); I also keep a couple of macros around that are handy when you're initializing a vector from an array. 
There are templates that are (at least theoretically) better, but I've never quite gotten around to switching over to them. The macros look like this: #define ELEMENTS_(x) (sizeof(x)/sizeof(x[0])) #define END(array) ((array)+ELEMENTS_(array)) With these, the typedefs above, and (as sort of suggested by @Nim) using a vector instead of a queue, initializing your test data looks like this: dt pairs[] = { dt( 3,4 ), dt( 4,9 ), dt( 8,0 ), dt( 2,3 ), dt( 5,6 ), dt( 2,9 ), dt( 5,9 ), dt( 7,3 ), dt( 4,8 ), dt( 5,6 ), dt( 0,2 ), dt( 6,1 ) }; std::vector<dt> data((pairs), END(pairs)); Instead of putting all the code to process the nodes into main, I'd move most of it out to a process_node function object. I'd also note that this: for( v_it i = id.begin( ); i < id.end( ); i++ ) { if( *(i) == t ) { *(i) = *(start+q); } } ...is essentially the same as std::replace(id.begin(), id.end(), t, *(start+q));. Taking that into account, process_node comes out looking something like this: class process_node { graph &id; public: process_node(graph &id) : id(id) {} void operator()(dt const &pair) { int p = pair.first; int q = pair.second; int o = id[p]; int n = id[q]; if(o != n) std::replace(id.begin(), id.end(), o, n); display(p, q, id); } }; Since we now have a vector to iterate through, and a function object to process each item, we can switch from an explicit loop to a standard algorithm: std::for_each(data.begin(), data.end(), process_node(id)); [...and yes, for those who find that unusual, this is a noteworthy day: I actually used std::for_each -- truly a rarity. ] That, however, brings up another point: there are still a few things about this code that don't excite me much. One, in particular, is the fact that it combines processing a node with displaying a row of results. We'd have to do a fair amount of extra work to avoid that, so I haven't bothered, but if we fixed that, std::for_each probably wouldn't be the right choice any more.
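The algorithm under review is Sedgewick's quick-find. As a language-neutral reference, the whole trace boils down to a few lines of Python (a sketch replaying the same test pairs):

```python
def quick_find(n, pairs):
    # ids[p] == ids[q] iff p and q are connected; each union rewrites
    # every entry equal to ids[p] with ids[q]
    ids = list(range(n))
    for p, q in pairs:
        old, new = ids[p], ids[q]
        if old != new:
            ids = [new if i == old else i for i in ids]
        print(p, q, ids)   # mirrors the row-by-row output above
    return ids

pairs = [(3, 4), (4, 9), (8, 0), (2, 3), (5, 6), (2, 9),
         (5, 9), (7, 3), (4, 8), (5, 6), (0, 2), (6, 1)]
result = quick_find(10, pairs)
```

On this input the final state is all 1s, matching the last row of the C++ program's output.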
{ "domain": "codereview.stackexchange", "id": 598, "tags": "c++, interview-questions, mathematics, graph" }
Tank discharge paradox?
Question: I'm studying fluid dynamics and I came up with an example that I find profoundly counterintuitive. I'd like someone more used to this kind of problem to confirm my guess. We have two identical tanks of water, $A$ and $B$. The only difference is $A$ has a hole, while $B$ has a long tube of the same width as the hole in $A$. I attach a diagram. The problem I find is that if one applies Bernoulli, the velocity is different. Therefore, tank $B$ empties much faster than $A$. This is very counterintuitive to me. How does having a pipe facilitate the discharge? I would expect the opposite (although I acknowledge that this guess is motivated by the presence of friction in the tube, which I'm explicitly neglecting in this problem). Is tank $B$ discharging faster in reality? Why is that so? How does water "know" that it must flow faster because there is a pipe somewhere below? Answer: It's not the pipe that's affecting the discharge speed. The only thing that matters in these equations* is the height of the water column above the hole. The water at the opening in B has more water pushing down on it from directly above, so it makes sense that it should exit faster. But the fact that there's a pipe surrounding the column doesn't actually matter. To show this, suppose we construct a tank C, which looks the same as tank A (i.e. no pipe) but has a height $h_1+h_2$. If you use those same equations, you'll find that the water exiting the hole in the bottom of tank C (with no pipe) has the same velocity as the water exiting the hole in the bottom of tank B (with a pipe). *Note that these equations are themselves a simplified model of the way actual fluids work. In particular, they're only reliable for completely inviscid liquids (i.e. those that flow without any internal resistance).
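Numerically, Bernoulli reduces here to Torricelli's law, so only the head of water above the opening matters (a sketch; the heights are arbitrary example values):

```python
import math

g = 9.81  # m/s^2

def exit_speed(head):
    # Torricelli's law: v = sqrt(2 g h), where h is the height of
    # the free surface above the opening
    return math.sqrt(2 * g * head)

h1, h2 = 2.0, 1.0           # assumed tank depth and pipe length, metres
v_A = exit_speed(h1)        # tank A: hole at the bottom, head h1
v_B = exit_speed(h1 + h2)   # tank B: pipe outlet, head h1 + h2
v_C = exit_speed(h1 + h2)   # tank C: taller tank with no pipe, same head as B
print(v_A, v_B, v_C)
```

B and C give identical speeds, which is the answer's point: the pipe contributes nothing beyond the extra head.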
{ "domain": "physics.stackexchange", "id": 60971, "tags": "fluid-dynamics, bernoulli-equation" }
variable oxidation states
Question: Which of the following statements is true? 1. Looking at the electronic configuration of an element, the possible oxidation states/oxidation numbers of the element can be predicted. For example, the valence electronic configuration of nitrogen is 2s^2 2p^3; from this, can we know all the possible oxidation states exhibited by nitrogen? (OR) 2. We should first know all the compounds nitrogen can form, then find the oxidation number of nitrogen in each compound, and then give all the possible oxidation states nitrogen can exhibit. Answer: The oxidation numbers cannot be predicted from the structure of the atom. They are experimentally determined. There is not much chance you could predict that while nitrogen can have all integer oxidation numbers from $−3$ to $+5$, its neighbor oxygen can only have $−2$, $−1$, $0$, $+1$, $+2$ in known compounds. And then come fractional oxidation states...
{ "domain": "chemistry.stackexchange", "id": 15284, "tags": "inorganic-chemistry, oxidation-state" }
How to convert between different point cloud types using PCL?
Question: I wonder whether there is an elegant way to convert instances of pcl::PointCloud<pcl::PointXYZ> to pcl::PointCloud<pcl::PointXYZRGB> and vice versa? I am trying to load point cloud data from a .pcd file that only includes XYZ data. Finally, I want to feed this point cloud into a module that requires PointXYZRGB data (but does not actually use the RGB data). My current solution is to call pcl::io::loadPCDFile to populate a pcl::PointCloud<pcl::PointXYZ> and then just copy the points over one by one to an XYZRGB point cloud. Note that I am relatively new to C++ programming and might have overlooked an obvious solution. (I am using diamondback)

Originally posted by Julius on ROS Answers with karma: 960 on 2011-03-21
Post score: 11

Answer: In the meantime, I found an effective (and simple) solution:

    PointCloud<PointXYZ> cloud_xyz;
    // [...]
    PointCloud<PointXYZRGB> cloud_xyzrgb;
    copyPointCloud(cloud_xyz, cloud_xyzrgb);

A quick diff on serialized files showed this to be equivalent to:

    cloud_xyzrgb.points.resize(cloud_xyz.size());
    for (size_t i = 0; i < cloud_xyz.points.size(); i++) {
        cloud_xyzrgb.points[i].x = cloud_xyz.points[i].x;
        cloud_xyzrgb.points[i].y = cloud_xyz.points[i].y;
        cloud_xyzrgb.points[i].z = cloud_xyz.points[i].z;
    }

See the API documentation.

Originally posted by Julius with karma: 960 on 2011-06-05
This answer was ACCEPTED on the original site
Post score: 19
{ "domain": "robotics.stackexchange", "id": 5162, "tags": "ros, pcl, perception-pcl" }
What's the complexity of Median-SAT?
Question: Let $\varphi$ be a CNF formula with $n$ variables and $m$ clauses. Let $t \in \{ 0,1 \}^n$ represent a variable assignment and let $f_{\varphi}(t) \in \{ 0, \ldots , m \}$ count the number of clauses of $\varphi$ satisfied by the assignment $t$. Then define Median-SAT as the problem of computing the median value of $f_{\varphi}(t)$ over all $t \in \{ 0,1 \}^n$. For example, if $\varphi$ is a tautology then the solution to Median-SAT will be $m$, since every clause is satisfied regardless of the assignment. However, in the case of $\overline{SAT}$ the solution to Median-SAT could be anywhere between $0$ and $m-1$. This question arose when I was pondering two natural extensions of SAT, MAX-SAT and #SAT, and what the difficulty of the resulting problem would be if they were put together. For MAX-SAT we have to find a particular variable assignment that maximizes the number of clauses of $\varphi$ satisfied. For #SAT we have to count how many assignments satisfy all $m$ clauses of $\varphi$. This variant winds up mainly as an extension of #SAT (and in fact of #WSAT), but retains some of the flavor of MAX-SAT in that we count the number of satisfied clauses rather than just deciding whether they're all satisfied or not. This problem seems harder than #SAT or #WSAT. For each variable assignment, #SAT decides the Boolean question of whether that assignment satisfies $\varphi$ or not, whereas Median-SAT determines "to what extent" $\varphi$ is satisfied, in terms of the number of clauses that an assignment satisfies. I realize that this problem is somewhat arbitrary; computing the average or mode of the number of clauses satisfied over all variable assignments seems to capture the same quality. Probably many other problems do too. Has this problem been studied, perhaps under a different guise? How hard is it compared to #SAT? It's not clear to me a priori that Median-SAT is even contained in FPSPACE, although it does seem to be contained in FEXPTIME.
Answer: Given an instance of SAT, an integer $k$, and a variable assignment, we can decide in polynomial time whether exactly $k$ clauses are satisfied, simply by counting the number of clauses that are satisfied and testing whether that number equals $k$. Hence we can calculate the total number of variable assignments satisfying exactly $k$ clauses using a #P oracle. So, like Max-SAT, Median-SAT can be computed in polynomial time using a $\#P$ oracle. This shows that the problem is in $FP^{\#P} \subseteq FPSPACE$.
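The definition can be made concrete with a brute-force sketch (exponential in $n$, so only for toy instances). Since $2^n$ is even, some convention is needed for the median of an even-length list; `median_low` (the lower of the two middle values) is used below as one arbitrary choice. The example formula is mine, not from the question.

```python
from itertools import product
from statistics import median_low


def satisfied_clauses(cnf, assignment):
    """f(t): number of clauses satisfied by `assignment`, where a CNF is a
    list of clauses and each literal is +v or -v for the 1-based variable v."""
    return sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in cnf
    )


def median_sat(cnf, n):
    """Brute-force Median-SAT: median of f(t) over all 2^n assignments."""
    values = [satisfied_clauses(cnf, t)
              for t in product([False, True], repeat=n)]
    return median_low(values)


# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
print(median_sat(cnf, 3))  # -> 2
```

A tautological clause like $(x_1 \vee \neg x_1)$ is satisfied by every assignment, so for a formula of such clauses `median_sat` returns $m$, matching the tautology example in the question.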
{ "domain": "cstheory.stackexchange", "id": 2240, "tags": "cc.complexity-theory, sat, counting-complexity" }