| anchor | positive | source |
|---|---|---|
How to change the homing order | Question:
Hello,
I am trying to home 3 drives in a specific order. Here akosodry asked the same question: Link to another question
So what I did was specify the nodes as a list:
nodes: # The order here is important because it defines the homing order. welp it does not
  - name: joint_z
    id: 4 # node id
    dcf_overlay:
      "3202": "77" # Motor configuration
      "6099sub1": "40000" # Homing speed when searching for switch
      "6099sub2": "5000" # Homing speed when searching for zero
      "3240sub1": "7"
      "6098": "14" # Homing method
      "6081": "50000" # Profile velocity
      "3700": "1" # Reaction to following error
    pos_to_device: "rint(pos*1000)" # rad -> mdeg
    pos_from_device: "obj6064/1000" # actual position [mdeg] -> rad
    vel_to_device: "rint(vel*1000)" # rad/s -> mdeg/s
    vel_from_device: "obj606C/1000" # actual velocity [mdeg/s] -> rad/s
    eff_to_device: "rint(eff)" # just round to integer
    eff_from_device: "0" # unset
  - name: joint_y
    id: 3 # node id
    dcf_overlay:
      "6099sub1": "40000" # Homing speed when searching for switch
      "6099sub2": "5000" # Homing speed when searching for zero
      "3240sub1": "7"
      "6098": "10" # Homing method
      "6081": "50000" # Profile velocity
      "3700": "1" # Reaction to following error
      "6065": "4294967295" # Following error surveillance deactivated
    pos_to_device: "rint(-pos*1000)" # rad -> mdeg
    pos_from_device: "-obj6064/1000" # actual position [mdeg] -> rad
    vel_to_device: "rint(-vel*1000)" # rad/s -> mdeg/s
    vel_from_device: "-obj606C/1000" # actual velocity [mdeg/s] -> rad/s
    eff_to_device: "rint(eff)" # just round to integer
    eff_from_device: "0" # unset
joint_z should be homed before joint_y, but no matter in which order I put them, joint_y always gets homed first. What am I doing wrong or is there a different solution?
Originally posted by TLZ on ROS Answers with karma: 3 on 2021-07-07
Post score: 0
Answer:
What am I doing wrong
Nothing!
Even with the list style, the nodes will be sorted by name.
I just discovered this issue recently.
Please try https://github.com/ros-industrial/ros_canopen/pull/438
is there a different solution?
If you want to use the release version:
You can name the node alphabetically, but set different joint names:
nodes:
  - name: motor_1
    joint: joint_z
    id: 4 # node id
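Extending that snippet to both drives might look like this (a sketch: the node ids and joint names are taken from the question's config, the `motor_*` names are the workaround):

```yaml
nodes:
  - name: motor_1   # alphabetical order: homed first
    joint: joint_z
    id: 4           # node id
  - name: motor_2   # homed second
    joint: joint_y
    id: 3           # node id
```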
Originally posted by Mathias Lüdtke with karma: 1596 on 2021-07-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by TLZ on 2021-10-21:
Sorry for the super late reply. Thank you for the fast commit. It now works as expected. | {
"domain": "robotics.stackexchange",
"id": 36663,
"tags": "ros, ros-canopen"
} |
Linked List simple implementation | Question: I made a simple linked list in C with the following functionalities:
Create (creates the list); Find (searches for an element in the list); Insert (inserts a value at the beginning of the list); Destroy (destroys the entire list). It only uses integers (for simplicity), and I want to know about possible errors and how to improve it in general.
The node structure:
typedef struct sllist
{
    int val;
    struct sllist* next;
}
sllnode;
The functions:
#include <stdio.h>
#include <stdlib.h>
#include "node.c"

/*
 * Creates the linked list with a single value
 */
sllnode *create(int value)
{
    sllnode *node;
    node = (sllnode *) malloc(sizeof(sllnode));
    if (node == NULL)
    {
        puts("Memory error");
        return NULL;
    }
    node->val = value;
    node->next = NULL;
    return node;
}

/*
 * Searches for an element in the list.
 * Returns 1 if it's found or 0 if not
 */
int find(sllnode *head, int value)
{
    sllnode *trav = head;
    do
    {
        if (trav->val == value)
        {
            return 1;
        }
        else
        {
            trav = trav->next;
        }
    } while (trav != NULL);
    return 0;
}

/*
 * Inserts a value at the beginning of the list
 */
sllnode *insert(sllnode *head, int value)
{
    sllnode *new_node = (sllnode *) malloc(sizeof(sllnode));
    if (new_node == NULL)
    {
        puts("Memory error");
        return NULL;
    }
    new_node->val = value;
    new_node->next = head;
    head = new_node;
    return head;
}

/*
 * Destroys (i.e frees) the whole list
 */
void *destroy(sllnode *head)
{
    sllnode *temp = head;
    while (temp != NULL)
    {
        free(temp);
        temp = temp->next;
    }
    free(head);
}

int main(void)
{
    return 0;
}
Answer: Including a C file
Convention is that all shared stuff is declared in .h files.
Minor 1
In destroy(), you have the return type void*. Remove the pointer star and it's fine.
Minor 2
sllnode *create(int value)
{
    sllnode *node;
    node = (sllnode *) malloc(sizeof(sllnode));
    if (node == NULL)
    {
        puts("Memory error");
        return NULL;
    }
    ...
It's a bad idea to write to standard output from within algorithms/data structures. After all, the only thing that can go wrong here is that there is not enough memory for the new node; returning NULL indicates that situation well enough.
Minor 3
node = (sllnode *) malloc(sizeof(sllnode));
More idiomatic C is
node = malloc(sizeof *node);
Minor 4
You "append" the new nodes in reversed direction. Basically this is a (linked) stack and not a list.
Minor 5
while (temp != NULL)
{
    free(temp);
    temp = temp->next;
}
Don't do this. Instead have something like
while (temp)
{
    next_node = temp->next;
    free(temp);
    temp = next_node;
}
Summa summarum
For a starter, I had this in mind:
#include <stdio.h>
#include <stdlib.h>

typedef struct linked_list_node {
    int value;
    struct linked_list_node* next;
} linked_list_node;

typedef struct {
    linked_list_node* head;
    linked_list_node* tail;
} linked_list;

void linked_list_init(linked_list* list)
{
    if (!list)
    {
        return;
    }
    list->head = NULL;
    list->tail = NULL;
}

void linked_list_free(linked_list* list)
{
    linked_list_node* current_node;
    linked_list_node* next_node;
    if (!list)
    {
        return;
    }
    current_node = list->head;
    while (current_node)
    {
        next_node = current_node->next;
        free(current_node);
        current_node = next_node;
    }
}

int linked_list_append(linked_list* list, int value)
{
    linked_list_node* new_node;
    if (!list)
    {
        return 1;
    }
    new_node = malloc(sizeof *new_node);
    if (!new_node)
    {
        return 1;
    }
    new_node->value = value;
    new_node->next = NULL; /* terminate the list at the new tail */
    if (list->head)
    {
        list->tail->next = new_node;
    }
    else
    {
        list->head = new_node;
    }
    list->tail = new_node;
    return 0;
}

int linked_list_index_of(linked_list* list, int value)
{
    int index;
    linked_list_node* node;
    if (!list)
    {
        return -2;
    }
    for (node = list->head, index = 0; node; node = node->next, index++)
    {
        if (node->value == value)
        {
            return index;
        }
    }
    return -1;
}

int main() {
    int i;
    linked_list my_list;
    linked_list_init(&my_list);
    for (i = 0; i < 10; ++i)
    {
        linked_list_append(&my_list, i);
    }
    printf("%d %d\n",
           linked_list_index_of(&my_list, 4),
           linked_list_index_of(&my_list, 100));
    linked_list_free(&my_list);
    return 0;
}
Hope that helps. | {
"domain": "codereview.stackexchange",
"id": 22250,
"tags": "beginner, c, linked-list, pointers"
} |
Can't interpret the text information and ratings matrix imported to NN | Question: I have a Recommender system which uses a Collaborative bayesian approach using pSDAE for recommending scientific articles from the Citeulike Dataset
The text information (as input to pSDAE) is in the file mult.dat and the rating matrix (as input for the MF part) is in the file cf-train-1-users.dat and is loaded using the following code:
def get_mult():
    X = read_mult('mult.dat', 8000).astype(np.float32)
    return X

def read_user(f_in='cf-train-1-users.dat', num_u=5551, num_v=16980):
    fp = open(f_in)
    R = np.mat(np.zeros((num_u, num_v)))
    for i, line in enumerate(fp):
        segs = line.strip().split(' ')[1:]
        for seg in segs:
            R[i, int(seg)] = 1
    return R
The raw data is in proper Excel format with citations as doc-id, title, citeulike-id, raw-title, raw-abstract.
The mult.dat file containing the text information looks like:
63 1:2 1666:1 132:1 901:1 1537:2 8:1 9:1 912:1
The trainusers.dat file looks like:
10 1631 3591 10272 14851 4662 13172 12684 5324 3595 3404
Here is the link to the ipynb for the whole Recommender system:
https://github.com/js05212/MXNet-for-CDL/blob/master/collaborative-dl.ipynb
Answer: I am the author of the CDL paper.
For the mult.data file, in
63 1:2 1666:1 132:1 901:1 1537:2 8:1 9:1 912:1
63 is the number of words for this document, 1:2 means word 1 appears twice in the document, 1666:1 means word 1666 appears once in the document, etc.
For the trainuser.dat, in
10 1631 3591 10272 14851 4662 13172 12684 5324 3595 3404
10 is the number of positive samples for this user, the rest is a list of 10 items that are related to (liked by) this user.
You can check the README file in the Datasets collection for more details on the datasets. | {
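A minimal sketch of a parser for this bag-of-words format (the function name and the choice to return a dict are mine, not part of the CDL code):

```python
def parse_mult_line(line):
    """Parse one mult.dat line, '<count> id:freq id:freq ...' -> {word_id: freq}."""
    parts = line.strip().split()
    # parts[0] is the word count for the document; the rest are id:freq pairs
    return {int(w): int(c) for w, c in (p.split(':') for p in parts[1:])}
```

For the rating file, the read_user function above already does the analogous job: the first token is the number of positive items and the remaining tokens are item indices set to 1 in the rating matrix.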
"domain": "datascience.stackexchange",
"id": 2466,
"tags": "machine-learning, neural-network, deep-learning, dataset, autoencoder"
} |
ReLu layer in CNN (RGB Image) | Question: I am able to get convoluted values from RGB Image lets say for each channel.
So I have the red channel with values: -100, 8, 96, 1056, -632, 2, 3, ...
Now what I do is normalize these values to the range 0-255, because that is basically the range of RGB values, with the code Math.Max(minValue, Math.Min(maxValue, currentValue)).
But I don't understand how ReLU works in a CNN. From my point of view I am already doing it in my normalization, right? Because the ReLU layer is from 0 to x, where the max value of x is 255.
So my question is: where should ReLU be applied to an RGB image after convolution? (After convolution and normalization the values are already in the range 0-255.)
Answer: You should put ReLU as the activation of the convolution layers. ReLU is not applied to the RGB values, but to the matrix obtained by convolving the image with a filter (the feature map).
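To make the distinction concrete, here is a small NumPy sketch (values taken from the question) contrasting ReLU with clamping to 0-255; ReLU zeroes the negatives but leaves large positives untouched, while the clamp also caps the top end:

```python
import numpy as np

conv_out = np.array([-100., 8., 96., 1056., -632., 2., 3.])  # convolved values

relu = np.maximum(0.0, conv_out)       # ReLU: max(0, x) -> 0, 8, 96, 1056, 0, 2, 3
clamp = np.clip(conv_out, 0.0, 255.0)  # 0-255 clamp     -> 0, 8, 96, 255, 0, 2, 3
```

So the 0-255 normalization is not the same operation as ReLU: ReLU has no upper bound and is applied to the feature map, not to the raw RGB channels.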
"domain": "datascience.stackexchange",
"id": 11591,
"tags": "machine-learning, deep-learning, cnn, image-preprocessing"
} |
improve the design of class “accuracy” in Python | Question: I am learning about the class and methods in Python.
The class Accuracy holds several (13 in total) statistical values computed between a reference polygon and one or more segmented polygons, based on the shapely module.
from numpy import average

# some stats
def ra_or(ref, seg):
    return average([(ref.intersection(s).area / ref.area) for s in seg])

def ra_os(ref, seg):
    return average([(ref.intersection(s).area / s.area) for s in seg])

def sim_size(ref, seg):
    return average([(min(ref.area, s.area) / max(ref.area, s.area)) for s in seg])

def AFI(ref, seg):
    return (ref.area - max([s.area for s in seg])) / ref.area
where ref.intersection(s).area is the area of intersection between reference and segmented polygon-i
my class design (really basic and probably to improve) is:
class Accuracy(object):
    def __init__(self, ref, seg=None, noData=-9999):
        if seg == None:
            self.area = ref.area
            self.perimeter = ref.length
            self.centroidX = ref.centroid.x
            self.centroidY = ref.centroid.y
            self.data = [self.centroidX,
                         self.centroidY,
                         self.area,
                         self.perimeter,
                         noData,
                         noData,
                         noData,
                         noData,
                         noData]
        else:
            self.area = ref.area
            self.perimeter = ref.length
            self.centroidX = ref.centroid.x
            self.centroidY = ref.centroid.y
            self.segments = len(seg)
            self.RAor = ra_or(ref, seg)
            self.RAos = ra_os(ref, seg)
            self.SimSize = sim_size(ref, seg)
            self.AFI = AFI(ref, seg)
            self.data = [self.centroidX,
                         self.centroidY,
                         self.area,
                         self.perimeter,
                         self.segments,
                         self.RAor,
                         self.RAos,
                         self.SimSize,
                         self.AFI]
from shapely.geometry import Polygon
p1 = Polygon([(2, 4), (4, 4), (4, 2), (2, 2), (2, 4)])
p2 = Polygon([(0, 3), (3, 3), (3, 0), (0, 0), (0, 3)])
accp1 = Accuracy(p1,[p2])
accp1.data
[3.0, 3.0, 4.0, 8.0, 1, 0.25, 0.1111111111111111, 0.44444444444444442, -1.25]
accp1 = Accuracy(p1)
accp1.data
[3.0, 3.0, 4.0, 8.0, -9999, -9999, -9999, -9999, -9999]
Answer: If you plan on calling the four functions ra_or, ra_os, sim_size and AFI outside of Accuracy then it is good to keep them as functions. If they never get called except through Accuracy, then they should be made methods.
Classes can help organize complex code, but they generally do not make your code faster. Do not use a class unless there is a clear advantage to be had -- through inheritance, or polymorphism, etc.
If you want faster code which uses less memory, avoid using a class here. Just define functions for each attribute.
If you want "luxurious" syntax -- the ability to reference each statistic via an attribute, then a class is fine.
If you plan on instantiating instances of Accuracy but not always accessing all the attributes, you don't need to compute them all in __init__. You can delay their computation by using properties.
@property
def area(self):
    return self.ref.area
Note that when you write accp1.area, the area method above will be called. Notice there are no parentheses after accp1.area.
To be clear, the advantage of using properties is that each instance of Accuracy will not compute all its statistical attributes until they are needed. The downside of using a property is that it is recomputed every time the attribute is accessed. That may not be a downside if self.ref or self.seg ever change.
Moreover, you can cache the result using Denis Otkidach's CachedAttribute decorator. Then the attribute is only computed once, and simply looked up every time thereafter.
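In modern Python (3.8+), the standard library offers the same caching behavior via functools.cached_property. A minimal sketch (the DummyRef class is mine, just to count how often the underlying value is evaluated):

```python
from functools import cached_property

class DummyRef:
    """Stand-in for a shapely geometry that counts area lookups."""
    calls = 0

    @property
    def area(self):
        DummyRef.calls += 1
        return 4.0

class Accuracy:
    def __init__(self, ref, seg=None):
        self.ref = ref
        self.seg = seg

    @cached_property
    def area(self):
        # computed on first access, then stored on the instance
        return self.ref.area

acc = Accuracy(DummyRef())
_ = (acc.area, acc.area, acc.area)  # three reads, one computation
```

Unlike a plain property, the cached value will not track later changes to self.ref, so use it only when the inputs are immutable.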
Don't use an arbitrary value for noData like noData = -9999. Use noData = np.nan, or simply skip noData and use np.nan directly.
import numpy as np
from shapely.geometry import Polygon
nan = np.nan
class Accuracy(object):
    def __init__(self, ref, seg=None):
        self.ref = ref
        self.seg = seg

    @property
    def area(self):
        return self.ref.area

    @property
    def perimeter(self):
        return self.ref.length

    @property
    def centroidX(self):
        return self.ref.centroid.x

    @property
    def centroidY(self):
        return self.ref.centroid.y

    @property
    def data(self):
        return [self.centroidX,
                self.centroidY,
                self.area,
                self.perimeter,
                self.segments,
                self.RAor,
                self.RAos,
                self.SimSize,
                self.AFI]

    @property
    def segments(self):
        if self.seg:
            return len(self.seg)
        else:
            return nan

    @property
    def RAor(self):
        if self.seg:
            return np.average(
                [(self.ref.intersection(s).area / self.ref.area) for s in self.seg])
        else:
            return nan

    @property
    def RAos(self):
        if self.seg:
            return np.average(
                [(self.ref.intersection(s).area / s.area) for s in self.seg])
        else:
            return nan

    @property
    def SimSize(self):
        if self.seg:
            return np.average(
                [(min(self.ref.area, s.area) / max(self.ref.area, s.area))
                 for s in self.seg])
        else:
            return nan

    @property
    def AFI(self):
        if self.seg:
            return (self.ref.area - max([s.area for s in self.seg])) / self.ref.area
        else:
            return nan
p1 = Polygon([(2, 4), (4, 4), (4, 2), (2, 2), (2, 4)])
p2 = Polygon([(0, 3), (3, 3), (3, 0), (0, 0), (0, 3)])
accp1 = Accuracy(p1, [p2])
print(accp1.data)
# [3.0, 3.0, 4.0, 8.0, 1, 0.25, 0.1111111111111111, 0.44444444444444442, -1.25]
accp1 = Accuracy(p1)
print(accp1.data)
# [3.0, 3.0, 4.0, 8.0, nan, nan, nan, nan, nan]
Here is how you could save your data (as a numpy array) to a CSV file:
np.savetxt('/tmp/mytest.txt', np.atleast_2d(accp1.data), delimiter=',')
And here is how you could read it back:
data = np.genfromtxt('/tmp/mytest.txt', delimiter=',', dtype=None)
print(data)
# [ 3. 3. 4. 8. nan nan nan nan nan] | {
"domain": "codereview.stackexchange",
"id": 3459,
"tags": "python, classes, extension-methods"
} |
What makes water-droplet/dew stick to spider's web and what keeps them there? | Question: How do water droplets/dew stick to a spider's web? What keeps them there?
Answer: The water droplets on a spider's web are an example of dew. Cold air holds less water than warm air, so if the air has absorbed a lot of water during the day, as it cools overnight the water condenses on anything that can provide a nucleus.
There is an energy barrier to the formation of water droplets from water vapour, so the droplets generally form where there is something to act as a nucleus. This Wikipedia article describes the role of nuclei in formation of clouds, but the same principle applies to condensation onto a spider's web. There's nothing special about a spiders web: dew forms on just about anything. It's just prettier on a spider's web so we notice it more.
The reason you get droplets is due to a phenomenon called Rayleigh Instability. This causes a smooth film of water on a fibre to break up into droplets.
The droplets stick to the fibres of the web due to capillary forces. | {
"domain": "physics.stackexchange",
"id": 4687,
"tags": "water"
} |
Why is there no dynamics in 3D general relativity? | Question: I heard a couple of times that there is no dynamics in 3D (2+1) GR, that it's something like a topological theory. I got the argument in the 2D case (the metric is conformally flat, Einstein equations trivially satisfied and the action is just a topological number) but I don't get how it is still true or partially similar with one more space dimension.
Answer: The absence of physical excitations in 3 dimensions has a simple reason: the Riemann tensor may be fully expressed via the Ricci tensor. Because the Ricci tensor vanishes in the vacuum due to Einstein's equations, the Riemann tensor vanishes (whenever the equations of motion are imposed), too: the vacuum has to be flat (no nontrivial Schwarzschild-like curved vacuum solutions). So there can't be any gravitational waves, there are no gravitons (quanta of gravitational waves). In other words, Ricci flatness implies flatness.
Counting components of tensors
The reason why the Riemann tensor is fully determined by the Ricci tensor is not hard to see. The Riemann tensor is $R_{abcd}$ but it is antisymmetric in $ab$ and in $cd$ and symmetric under exchange of the index pairs $ab$, $cd$. In 3 dimensions, one may dualize the antisymmetric index pairs $ab$ and $cd$ to simple indices $e,f$ using the antisymmetric $\epsilon_{abe}$ tensor and the Riemann tensor is symmetric in these new consolidated $e,f$ indices so it has 6 components, just like the Ricci tensor $R_{gh}$.
Because the Riemann tensor may always be written in terms of the Ricci tensor and because they have the same number of components at each point in $D=3$, it must be true that the opposite relationship has to exist, too. It is
$$ R_{abcd} = \alpha(R_{ac}g_{bd} - R_{bc}g_{ad} - R_{ad}g_{bc} + R_{bd}g_{ac} )+\beta R(g_{ac}g_{bd}-g_{ad}g_{bc}) $$
I leave it as a homework to calculate the right values of $\alpha,\beta$ from the condition that the $ac$-contraction of the object above produces $R_{bd}$, as expected from the Ricci tensor's definition.
Counting polarizations of gravitons (or linearized gravitational waves)
An alternative way to prove that there are no physical polarizations in $D=3$ is to count them using the usual formula. The physical polarizations in $D$ dimensions are those of a traceless symmetric tensor in $(D-2)$ dimensions. For $D=3$, you have $D-2=1$, so the symmetric tensor has only a single component, e.g. $h_{22}$, and the traceless condition eliminates this last component, too. So just like you have 2 physical graviton polarizations in $D=4$ and 44 polarizations in $D=11$, to mention two examples, there are 0 of them in $D=3$. The general number is $(D-2)(D-1)/2-1$.
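The counting formula is easy to check against the examples in the text; a small sketch (the function name is mine):

```python
def graviton_polarizations(D):
    """Physical graviton polarizations in D spacetime dimensions:
    components of a traceless symmetric tensor in (D-2) transverse dimensions,
    i.e. (D-2)(D-1)/2 - 1."""
    return (D - 2) * (D - 1) // 2 - 1

# D=3 -> 0 (no gravitons), D=4 -> 2, D=11 -> 44, matching the text
```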
In 2 dimensions, the whole Riemann tensor may be expressed in terms of the Ricci scalar curvature $R$ (the Ricci tensor itself is $R_{ab}=Rg_{ab}/2$), which is directly imprinted into the component $R_{1212}$ etc.: Einstein's equations become vacuous in 2D. The number of components of the gravitational field is formally $(-1)$ in $D=2$; the local dynamics of the gravitational sector is not only vacuous but imposes constraints on the remaining matter, too.
Other effects of gravity in 3D
While there are no gravitational waves in 3 dimensions, it doesn't mean that there are absolutely no gravitational effects. One may create point masses. Their gravitational field creates a vacuum that is Riemann-flat almost everywhere but creates a deficit angle.
Approximately equivalent theories
Due to the absence of local excitations, this is formally a topological theory and there are maps to other topological theories in 3D, especially the Chern-Simons theory with a gauge group. However, this equivalence only holds in some perturbative approximations and under extra assumptions, and for most purposes, it is a vacuous relationship, anyway. | {
"domain": "physics.stackexchange",
"id": 3575,
"tags": "general-relativity, topological-field-theory"
} |
Determining the number of qubits to represent the eigenvalues in HHL algorithm? | Question: I am trying to understand how well the HHL algorithm would scale. Therefore my first inquiry is how the number of qubits scale with the size of the problem ie the size of the Matrix A in the linear system Ax = b.
I have found an article that gives in a table how many qubits are required based on the parameters of the problem for an application in physics:
https://doi.org/10.48550/arXiv.2209.07964
The first number of qubits can easily be obtained by taking a number of state qubits equal to $\left\lceil \log_2{(matrix\;rank)} \right\rceil +1$ $\\\\$
(the plus one is because we use $\begin{pmatrix}
0 & A \\
A^{\dagger} & 0\\
\end{pmatrix}$ as the matrix to ensure it is hermitian). And the last number is just the ancilla qubit. However, I don't know how to compute the second number, which represents the number of qubits used to store the eigenvalues. It seems to be
$\simeq $$\left\lceil \log_2{(\frac{1}{|\lambda_{min}|})} \right\rceil$ but it doesn't fit for the last line of the table.
Question
How should I compute the number of qubits needed to represent the eigenvalues of the matrix ?
Answer: It appears that you wish to know how many ancillary qubits $n_c$ are needed or used in Figure 5 of the paper by Lapworth, which implements the HHL algorithm and is reproduced below for convenience:
In general, as $n_c$ increases more precision is obtained with respect to the eigenvalues (eigenphases) stored therein. You are correct that the quantum phase estimation (QPE) algorithm uses an inverse quantum Fourier transform (iQFT) on these $n_c$ qubits; and yes, this iQFT uses $\frac{n_c(n_c+1)}{2}$ or so gates.
Before the iQFT however, the quantum phase estimation performs multiple rounds of Hamiltonian simulation to simulate a unitary $U=\exp(-i H t)$, where $H$ is the hermitian version of your matrix $A$.
Referring to the circuit from the English-language Wikipedia article on the QPE algorithm, you can see that there's a plurality of controlled-$U$ gates that store the phases into the ancilla register. Furthermore there is a controlled gate $U^{2^{n-1}}$ in the most significant qubit of the ancilla register - this is usually performed with $2^{n-1}$ different controlled-Hamiltonian simulations, controlled with the most significant ancilla qubit. These ancilla qubits are the registers upon which the $\mathcal{QFT}^{-1}_{2^n}$ is performed.
Indeed, we can compare Wikipedia's registers to Lapworth - $n$ in Wikipedia corresponds to $n_c$ in Lapworth, while $m$ in Wikipedia corresponds to $n_i$ in Lapworth. That is, the QPE in Figure 5 of Lapworth includes $n_c$ clock qubits and the $n_i$ input qubits, which is the $n$ clock qubits and the $m$ input qubits in Wikipedia, respectively.
Thus, to answer your question you don't need (or really want to have) $n_c$ be much bigger than polylogarithmic in $n_i$ for the HHL algorithm, because if $n_c$ were that much bigger, you would need to do an exponentially longer number of controlled unitaries to simulate $H$ or $A$. | {
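As a sketch of the rule of thumb the question itself proposes (a hypothetical helper, not from the Lapworth paper, and it need not reproduce every row of the paper's table): once the eigenvalues are rescaled into $[0,1)$, resolving the smallest one in the clock register requires roughly

```python
import math

def clock_qubits(lambda_min, extra_precision_bits=0):
    """Rough n_c estimate: enough clock qubits that 1/2**n_c <= |lambda_min|,
    plus optional extra bits for QPE precision. An assumption, not a quoted formula."""
    return math.ceil(math.log2(1.0 / abs(lambda_min))) + extra_precision_bits
```

Keeping this small relative to the Hamiltonian-simulation cost is exactly the trade-off described above.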
"domain": "quantumcomputing.stackexchange",
"id": 4992,
"tags": "quantum-algorithms, hhl-algorithm, linear-algebra"
} |
Move two discs freely and quickly on the underside of a surface? | Question: The discs must remain in flat contact to the surface at all times but you can assume no friction while pushing them about.
No using magnets from the other side of the surface; for all intents and purposes assume it is infinitely thick on the other side.
You must be able to move the discs "freely" meaning that normal robotic arms won't cut it due to getting tangled after moving each disc in a circle around the centre maybe once at most.
I imagine the answer could be very useful if there is one. I couldn't find one after scouring the internet for a few hours, but a solution would open a few doors, especially when it comes to "omnidirectional treadmills": if you replace each disc with a vertical cylinder you could crudely mimic any terrain.
I hope this isn't too theoretical, apologies if it is, I do really think there are numerous practical applications though and for that reason any working answer at all would be useful.
Edit: Since it was apparently ambiguous, let me clarify, no using magnets, obviously if a solution uses a machine that has a magnet internally for some reason required for that machine to function then that's fine.
The surface is arbitrary, it could be any material within reason and cannot be changed, assume the area above the surface cannot be altered in any way.
Edit 2: My mistake, I completely forgot to mention the surface has a finite size, although the shape could be practically anything as long as no corridors thinner than the sum of both discs' radii are within it, assume it is convex if you cannot find a solution for concave, or just assume it is a rectangle!
Answer: Use a single robotic arm or other mechanism providing 2 degrees of freedom, to move one disc. Said disc is attached to the arm through an axis of a motor. The stator/chassis of the motor would attach to another arm with a linear actuator moving the other disc closer/farther (and the motor turning would change the angle relative to the first) - turning and extending/retracting in a plane that lies below the plane of movement of the robotic arm. You'll need a rotary slip ring connector to provide control/power to the stator and actuator.
Essentially, attach both one disc and the base of a second robotic arm (moving the second disk) to the end of the first robotic arm. | {
"domain": "engineering.stackexchange",
"id": 2160,
"tags": "mechanical-engineering"
} |
Acid-catalyzed hydrolysis of diphenyl malonate | Question:
What are the complete, acid-catalyzed hydrolysis products (with application of heat) of diphenyl malonate?
I believe they are phenol, carbon dioxide, and acetic acid.
The aqueous acid and heat conditions indicate to me that we are returning from the carboxylic acid derivative (ester) to the carboxylic acid. The mechanism involves protonation of the carbonyl oxygen and nucleophilic attack by water. This ejects alcohol (well, in this case, phenol).
The heat induces the malonic acid intermediate to decarboxylate and this results in the production of carbon dioxide and acetic acid.
Is my analysis correct?
Answer: If the question is "What are the products of hydrolysis of diphenyl malonate?", then my answer would be 2 equivalents of phenol and one equivalent of malonic acid.
If the question is "What are the products of diphenyl malonate when heated in an acid aqueous solution?", then my answer would be 2 equivalents of phenol, one equivalent of acetic acid, and one equivalent of carbon dioxide.
I see it sort of as a semantic question, since the hydrolysis of an ester gives an alcohol and a carboxylic acid, as you suggest. I don't consider decarboxylation to be a hydrolysis reaction, literally defined as cleavage of bonds by addition of water. Water isn't incorporated into the products of the decarboxylation step. | {
"domain": "chemistry.stackexchange",
"id": 2209,
"tags": "organic-chemistry, hydrolysis"
} |
How do i determine the direction of components of electric fields? | Question:
Here are images of some of the pages inside my textbook that show a strategy problem: it gives me 2 charges and a point at which we would detect the electric field, but I have a problem with solving it.
I understand that the direction of the field lines depends on the positivity or negativity of a charge, but what I do not understand is how to figure out the directions from the components of a field line (the x and y components for E2, for instance).
According to the second image, for E2, the x component is positive? But why? Isn't it approaching the negative charge, just as the y component of E2 is?
Please guide me through this. I really need help; I have a finals test tomorrow.
Answer: Since $q_2$ is negative, the direction of $E_2$ always points toward the charge $q_2$. Obviously the $x$ component of $E_2$ is positive.
"domain": "physics.stackexchange",
"id": 36066,
"tags": "electrostatics, electric-fields"
} |
Runtime topics remapping? | Question:
Is it possible to remap a topic at runtime? For example, using strings passed in by other topics.
If it is, how?
Originally posted by mateo_7_7 on ROS Answers with karma: 90 on 2013-06-10
Post score: 0
Answer:
Not directly as remapping usually means connecting to another topic. But you can just use normal ROS functionality for publishing/subscribing.
Basically: Bring down the old publisher/subscriber and create a new one that you pass another topic string to.
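The pattern is language-agnostic; here is a minimal pure-Python sketch with a toy dispatcher standing in for the middleware (none of this is the rospy API itself — in rospy you would call sub.unregister() on the old handle and construct a new rospy.Subscriber with the new topic string):

```python
class Bus:
    """Toy pub/sub dispatcher standing in for the middleware (not ROS)."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)
        return (topic, callback)  # handle used for later teardown

    def unsubscribe(self, handle):
        topic, callback = handle
        self.subs[topic].remove(callback)

    def publish(self, topic, msg):
        for callback in self.subs.get(topic, []):
            callback(msg)

class Node:
    def __init__(self, bus, topic):
        self.bus = bus
        self.received = []
        self.handle = bus.subscribe(topic, self.received.append)

    def retarget(self, new_topic):
        # the runtime "remap": tear down the old subscription, open a new one
        self.bus.unsubscribe(self.handle)
        self.handle = self.bus.subscribe(new_topic, self.received.append)
```

With one such retargetable subscription per robot, switching which peer a node listens to is a single retarget call, independent of the total number of robots.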
Originally posted by dornhege with karma: 31395 on 2013-06-10
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by mateo_7_7 on 2013-06-10:
so, if i had 100 robots (nodes) and each of those had to communicate with just ONE (among all robots) and this "one" changes over time, i should write 100 subscriber/publisher declarations!?!?!
Comment by dornhege on 2013-06-10:
Well, you will have 100 total. For each robot, however, you would only have one declaration. If the ONE to connect to changes, you will just change what this single declaration points to in each robot/node. In the simplest form, you'd move the subscribe/advertise call from the init code.
Comment by mateo_7_7 on 2013-06-10:
thanks, i know this solution but in this way my system would be extremely inefficient in terms of scalability...(i should modify the code every time the number of robots changes).
Comment by dornhege on 2013-06-10:
No, that should not be necessary. Can you maybe update your original post and give some more explicit use cases what you'd like to happen, e.g. something like: robots/nodes and topic connections before - something changes - robots/nodes + topics after. This can probably be done without code changes.
Comment by dornhege on 2013-06-10:
As to scalability: If you want to connect 100 robots to one and then change to connect to another one, you'd have to change those 100 connections physically. Besides that there should be no overhead. | {
"domain": "robotics.stackexchange",
"id": 14489,
"tags": "ros, topics"
} |
Does the time to reach the highest point $P$ equal the time to reach the ground from $P$ in a trajectory, taking drag into account? | Question: Assume a very simplified model without the Coriolis effect, the falloff of the local gravitational field, and the like. My answer is no. It is sufficient to look at the vertical velocity of the projectile, because only that determines the time. The two times are not equal, because it is possible to vertically accelerate the projectile to a velocity greater than the terminal velocity $v_{term.}$ of the projectile in the medium. Therefore it may take longer for the projectile to come back down, because it can only approach $v_{term.}$. I am not sure, though, how it behaves if the initial velocity is lower than $v_{term.}$. My gut says it behaves the same, but a different explanation comes into effect. The projectile cannot reach the height which it would if there were no drag. Hence, the projectile cannot reach the initial velocity on the way back to the ground either, and it takes longer.
Answer: The quadratic case is much more involved, but for a simple linear drag term the height of a projectile shot vertically satisfies
$\ddot{y} + \frac{g}{v_{T}}\dot{y}+g=0$
for gravitational acceleration $g$ and terminal velocity $v_T$ (the mass cancels once the drag coefficient is written in terms of $v_T$). Subject to the boundary conditions $y(t=0)=0$ and $\dot{y}(t=0)=v_0$, this differential equation has the solution
$y = \frac{v_T}{g}(v_0 + v_T)(1-\exp({-\frac{g t}{v_T}}))-v_T t$
The particle reaches the apex where $\dot{y}=0$, which after solving for $t$ gives
$t_{max}= \frac{v_T}{g} \log(1+\frac{v_0}{v_T})$
Thus we find at time $t=2 t_{max}$ that
$y(t=2 t_{max}) = \frac{v_T}{g(v_0+v_T)}\left[v_0(v_0+2 v_T)-2 v_T(v_0+v_T) \log\left(1+\frac{v_0}{v_T}\right)\right]$
If we find that $y$ is negative at this time, we conclude the particle fell to the ground from the apex quicker than it climbed to it. In terms of the ratio $r=\frac{v_0}{v_T}$ this condition is
$\log(1+r) > \frac{r(r+2)}{2(r+1)}$
This is never satisfied, and so we conclude that your intuition was correct. In fact these two quantities reach equality only for $r=0$, which is the pathological case of no motion.
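For the skeptical reader, here is a quick numerical spot-check (my own addition, not part of the original answer; the class and method names are invented): the gap between the right- and left-hand sides of the condition stays positive for every sampled $r>0$, so the fall indeed always takes at least as long as the rise.

```java
// Spot-check: descent slower than ascent <=> log(1+r) < r(r+2)/(2(r+1)) for r > 0.
public final class DragCheck {

    // difference between the right- and left-hand sides of the condition
    static double gap(double r) {
        return r * (r + 2) / (2 * (r + 1)) - Math.log(1 + r);
    }

    public static void main(String[] args) {
        for (double r : new double[] {0.01, 0.1, 1.0, 10.0, 1000.0}) {
            if (gap(r) <= 0) {
                throw new AssertionError("condition violated at r = " + r);
            }
        }
        System.out.println("inequality never satisfied: the fall always takes longer");
    }
}
```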
I hope this walkthrough is able to offer you insight on how to tackle the more general problem. | {
"domain": "physics.stackexchange",
"id": 9304,
"tags": "newtonian-mechanics, projectile, drag"
} |
Are there any full worked examples of DFT calculations? | Question: I just started learning DFT and now I am totally confused.
Assuming I want to use B3LYP:
\begin{align}
v_s\left(\textbf{r}\right) &= v_\text{ext}\left(\textbf{r}\right) + \int d^3r^\prime\frac{n\left( \textbf{r}^\prime\right) }{\left|\textbf{r} - \textbf{r}^\prime\right|} + v_{\text{XC}}\left[ n \right]\left(\textbf{r}\right)\\
v_{\text{XC}}\left( r \right) &= \frac{\delta E_\text{XC}}{\delta n \left(\textbf{r}\right) }\\
\end{align}
I know that we use B3LYP for approximation of $v_{\text{XC}}$ - but I have no idea about how - and then we use some basis sets to run an SCF calculation to minimize the energy.
I downloaded a bunch of articles and bought some books and read them, but all of them give a very detailed discussion of the basics (e.g. what a functional is, or the Kohn-Sham theorem, etc.), while none of them explains how to actually use them.
I am looking for a reading material or video that explains all the calculation, step by step, for example, some material that shows all the calculations for a CO molecule, from the beginning to the end of the SCF calculation.
Answer: The XC potential in DFT actually consists of two terms: $V_x$ and $V_c$. Depending on which XC functional you choose, the exchange part is either computed exactly (using Hartree-Fock), or from fitted parameters, or as a combination of both. In the case of B3LYP the exchange is 20% Hartree-Fock + 8% Slater + 72% Becke88. The correlation functional consists of three different terms (LYP) in a different form. If you take a different functional, say M06-2X, it has a different functional form (54% Hartree-Fock exchange, 200+ fitted parameters!).
Coming to the next part: You use a basis set only to compute the kinetic energy and exchange terms in the system. Let me walk you through a HF routine first:
The Hamiltonian consists of four terms (the nuclear kinetic energy is zero because of the Born-Oppenheimer approximation): $T_{elec} + V_{nn} + V_{en} + V_{ee}$. The nuclear-nuclear potential energy is constant (because the nuclei are 'clamped'). The first and third terms are fairly trivial, and are computed with relative ease. The electron-electron potential term consists of Pauli's exchange term and the Coulomb repulsion term. They are the computationally expensive part of the HF calculation, and the eigenvalues (energies) are solved for self-consistently.
So, when you obtain the HF energy of a system, it is obtained variationally with respect to all the energy components in an interacting system of electrons and nuclei. Coming to DFT:
The first term you mentioned, $V_{ext}$, is the external potential, which is usually the nuclear potential for an unperturbed system. The second term is equivalent to the Coulomb repulsion. The third term is evaluated using the XC potential of your choice (every DFT method differs only in this part). You evaluate the energies for each and every orbital using the effective potential $V_s$ which you mentioned. Note: you evaluate the energies for every orbital separately.
This is because of the KS theorem, which equates the density of a system of non-interacting particles to that of the interacting system. So, we solve the Kohn-Sham equations instead of the Schrödinger equation, under the constraint that the sum of the squares of all the orbitals gives the density:
$$\rho(\mathbf r)=\sum_i^N |\phi_{i}(\mathbf r)|^2$$
This means that the energy computed using DFT is minimum with respect to a non-interacting system of electrons. This is the biggest assumption of DFT, which is quite valid too. As a result, DFT provides a good account of correlation energy at a very minimal computational expense (HF accounts for ZERO correlation)!!!
I took a computational chemistry course, where DFT was a small part, so I am not able to explain in more detail. I used Prof. Kieron Burke's ABC of DFT book, which he has uploaded for free use on his website. It is a good place to start. There is also a video lecture by Prof. David Sherrill on YouTube: https://www.youtube.com/watch?v=5orzn-XA29M .
I missed the last question: For any molecule:
Compute the kinetic energy (using the KE functional)
Compute the external potential ($V_{ext}$)
Compute the Coulomb repulsion using an initial guess density ($n(r)$)
Compute the XC potential from the XC functional
Use the Kohn-Sham equations to find the orbital energies (under the constraint that the density $n(r)$ is equal to the density of the interacting particles (electrons); in particular, the density must integrate to the total number of electrons).
Using the newly obtained density, recompute the effective potential ($V_s$)
Iterate until self-consistency is achieved. The total energy is then the sum of the occupied orbital energies times their occupation numbers.
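To make that cycle concrete, here is a deliberately tiny sketch of a self-consistency loop (my own invented two-basis-function model with a made-up density-dependent term, NOT a real DFT program): the effective Hamiltonian is rebuilt from the current density on every iteration until the density stops changing.

```java
// Toy two-basis "SCF" loop (invented numbers, not a real DFT code):
// the effective Hamiltonian depends on the density, so the last steps
// above are repeated until the density stops changing.
public final class ToyScf {

    public static double[] run() {
        double[][] hCore = {{-2.0, -1.0}, {-1.0, -1.5}}; // fixed "T + V_ext" part
        double[] n = {0.0, 0.0};                          // initial guess density
        for (int iter = 0; iter < 200; iter++) {
            // crude density-dependent stand-in for the Hartree + XC terms
            double a = hCore[0][0] + 0.5 * n[0];
            double d = hCore[1][1] + 0.5 * n[1];
            double b = hCore[0][1];
            // closed-form lowest eigenpair of the symmetric 2x2 matrix
            double lambda = 0.5 * (a + d) - Math.hypot(0.5 * (a - d), b);
            double norm = Math.hypot(b, lambda - a);
            double c0 = b / norm, c1 = (lambda - a) / norm;
            double[] nNew = {2 * c0 * c0, 2 * c1 * c1};   // 2 electrons, 1 orbital
            if (Math.hypot(nNew[0] - n[0], nNew[1] - n[1]) < 1e-10) {
                return nNew; // self-consistent density
            }
            n = nNew;
        }
        throw new IllegalStateException("SCF did not converge");
    }
}
```

Note how the converged density automatically integrates (here: sums) to the electron count, which is the constraint mentioned in step 5.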
Hope this helps! | {
"domain": "chemistry.stackexchange",
"id": 8725,
"tags": "computational-chemistry, theoretical-chemistry, quantum-chemistry, density-functional-theory"
} |
Determination of a chemical compound in a non-homogeneous sample | Question: I want to know if it is possible to determine a chemical compound in a non-homogeneous sample. I am asking this because we are aiming to create a device that will detect histamine level in fish without homogenizing the samples. In all the journals we've read (which are a lot), fish samples are homogenized (blended + chemically-treated) before histamine is determined.
I am a computer engineering student with basic knowledge on biochemistry. I'm researching on this because this is our thesis topic (which our adviser has forced recommended to us).
Any help/recommendation regarding this is appreciated. Thank you!
Answer: If you analyze non-homogeneous samples, the results can't be interpreted as being representative of the whole organism (or population). So, for example, if histamine is unevenly distributed within the body of a fish, your results would vary widely depending on what part of the fish you analyzed. | {
"domain": "chemistry.stackexchange",
"id": 2844,
"tags": "biochemistry"
} |
What does the magnetic field of the (quantum-mechanical) electron look like? | Question: While a treatment of electron spin can be found in any introductory textbook, I've noticed that the electron's magnetic field seems to be treated classically. Presumably this is because a quantum treatment of the electromagnetic field would venture into the much more difficult topic of quantum electrodynamics. However, treating the magnetic field classically also seems to create conceptual difficulties. How can we write something like
$$\mathbf{\mu} = \frac{g_e \mu_b}{\hbar} \mathbf{S}$$
and treat the left-hand side as a vector, while treating the right-hand side as a vector-valued operator?
So, what does really happen when we measure the magnetic field around an electron? For simplicity imagine that the electron is in the ground state of the hydrogen atom, where it has zero orbital angular momentum. It seems to me that we can't observe what looks like a classical dipole field, because such a field would have a definite direction for the electron's magnetic moment, which would appear to contradict the quantum-mechanical properties of spin.
My guess is that measuring any one component of the magnetic field at a point near an electron would collapse the spin part of the electron wave function, and in general the three components of the magnetic field will fail to commute so we cannot indeed obtain a definite direction for the electron magnetic dipole moment. However, I'm not even sure how to begin approaching this problem in a rigorous fashion without breaking out the full machinery of QED. For an electron in a magnetic field we have the Dirac equation. For the magnetic field of the electron I wasn't able to find an answer online or in the textbooks I have at hand.
Answer: The magnetic moment of the electron is a magnetic dipole moment, so the right magnetic field around it is the dipole field
$$ \mathbf{B}({\mathbf{r}})=\nabla\times{\mathbf{A}}=\frac{\mu_{0}}{4\pi}\left(\frac{3\mathbf{r}(\mathbf{\mu}\cdot\mathbf{r})}{r^{5}}-\frac{{\mathbf{\mu}}}{r^{3}}\right). $$
The world is quantum mechanical – and so is any viable description of the spin – so we have to respect the postulates of quantum mechanics. In particular, the magnetic field above corresponds to a "state" (e.g. spin up and spin down) and one may construct complex linear superpositions of such states. It is important to realize that the quantum mechanical superposition of states in the Hilbert space in no way implies that the corresponding magnetic fields are being added according to the classical electromagnetic field's superposition principle.
Indeed, they're linear superpositions of states that contain different profiles of the magnetic field.
The magnetic field around the electron is so weak that indeed, it can in no way be thought of as a classical magnetic field, in the sense that the classical fields are "large". But the classical formula for the magnetic field is still right! This formula defines the magnetic moment. The quantum effects are always important, however. Also, if you try to measure this very weak magnetic field, it will unavoidably influence the state of the measured system, including the electron's spin itself.
It is of course completely wrong to imagine that we could measure such a weak magnetic field by a big macroscopic apparatus, like a fridge magnet. The effect of one electron's magnetic field on such a big object would be nearly zero, of course. In fact, quantum mechanics guarantees quantization of many "phenomena", so instead of predicting a very tiny effect on the fridge magnet, it predicts a finite effect on the fridge magnet that occurs with a tiny probability.
You may "measure" the electron's magnetic field by creating a bound state with another magnet in the form of an elementary particle. For example, the electron and the proton in a hydrogen atom are exerting the same kind of force that you would expect from the usual "classical" formulae – but it's important to realize that all the quantities in the equations are operators with hats.
Let me show you an example of a simple consistency check implying that there is no contradiction. Calculate the expectation value of the magnetic field (an operator) $\vec B(\vec r)$ at some point for the state $c_{up}|up\rangle + c_{down} |down\rangle$. The column vector of the amplitudes $c$ is normalized. The check is that you get the same expectation value of $\vec B(\vec r)$ for each $\vec r$ if you first compute it for the "up" component and "down" component separately, and then you add the terms, or if you first realize that it's a spin state "up" with respect to a new axis $\vec n$, and compute $\vec B$ from that.
It's a nice exercise. The point is that the expectation value of $\vec B(\vec r)$ is a bilinear expression in the bra-vector and ket-vector $|\psi\rangle$, much like the direction $\vec n$. And indeed, $\vec B$ is linear in the direction $\vec n$, according to the formula above, so things will agree. You may insert anything else to the expectation value, in fact, so the check works for all linear expressions in $\vec B$. The higher powers of $\vec B$ also have expectation values but they will behave differently than in classical physics because there will be extra contributions from the "uncertainty principle", analogous to zero-point energies of the harmonic oscillator in quantum mechanics.
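A tiny version of that exercise, restricted to real amplitudes (my own simplification; the class and method names are invented): the state $\cos(\theta/2)|up\rangle+\sin(\theta/2)|down\rangle$ is spin-up along $\vec n=(\sin\theta,0,\cos\theta)$, and the Pauli-matrix expectation values reproduce exactly that vector, so any quantity linear in $\vec B$ agrees whichever way you compute it.

```java
// Real-amplitude spin-1/2 consistency check: the expectation values of the
// Pauli matrices in the superposition equal the polarization axis n.
public final class SpinCheck {

    public static double[] pauliExpectation(double theta) {
        double cu = Math.cos(theta / 2);   // amplitude on |up>
        double cd = Math.sin(theta / 2);   // amplitude on |down>
        double sx = 2 * cu * cd;           // <sigma_x> = 2 * cu * cd
        double sz = cu * cu - cd * cd;     // <sigma_z> = cu^2 - cd^2
        return new double[] {sx, 0.0, sz}; // <sigma_y> = 0 for real amplitudes
    }
}
```

The off-diagonal elements of $\sigma_x$ are precisely what produces the nonzero $\langle\sigma_x\rangle$ here, which is the bilinearity the check relies on.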
It's extremely important to realize that the field $\vec B(\vec r)$ is also an operator – so it has nonzero off-diagonal matrix elements with respect to the "up" and "down" spin states of the electron. In fact, if you just write $\vec \mu$ in the formula for the magnetic field I started with as a multiple of the Pauli matrices (electron's spin), you will see exactly how the "key" term in the magnetic field behaves with respect to the up- and down- spin states. The off-diagonal elements do not contribute to the expectation value in the up state or in the down state, but they do impact the "mixed" matrix elements between up and down, and those affect the expectation values in spin states polarized along non-vertical axes.
BTW I added a semi-popular blog version of my answer here
http://motls.blogspot.com/2014/07/does-electrons-magnetic-field-look-like.html?m=1 | {
"domain": "physics.stackexchange",
"id": 15312,
"tags": "quantum-mechanics, magnetic-fields, quantum-spin, quantum-electrodynamics"
} |
Karp-Rabin with bitwise hash | Question: As a Rags-to-Riches re-implementation of the Karp-Rabin question here, from @tsleyson, I thought it would be interesting to understand the algorithm better, and to use bitwise-shifting instead of prime multiplication, to compute the rolling hash.
Karp-Rabin is a text-search algorithm that computes a rolling hash of the current text, and only if the rolling hash matches the search term's hash, does it double-check that the search actually matches. Because the hash is computed in \$O(1)\$ time for each advance through the text, it is essentially an \$O(n)\$ search algorithm with an additional \$O(m)\$ confirmation check for each potential match that is found. The best case is \$O(n)\$ for no matches, and worst case is \$O(nm)\$ if everything matches (for example, searching for "aa" in "aaaaaaaaaaaaa").
The description for the algorithm suggests using a 'base' prime number as the foundation for a hashing algorithm that allows you to 'rotate' an out-of-scope character off of the hash, and then add a new member to it. As you 'roll' the hash down the search text, if the hash matches the hash of your pattern, you can then double-check to see if it is just a collision, or a real match.
Instead of using a system requiring repeated multiplications by prime numbers, I decided to use a computationally simpler bit-shifting algorithm (though the logic may be more complicated, the computations should be simpler). In essence, my algorithm uses a long value (64-bit) as the hash, rotates characters onto the hash, and uses XOR bitwise logic to 'add' or 'remove' characters.
I am looking for a review of the usability, and any performance suggestions.
import java.util.Arrays;
public class KarpRabin {
private static final int[] EMPTY = new int[0];
// Use a prime number as the shift, which means it takes a while to reuse
// bit positions in the hash. Creates a good hash distribution in the
// long bit mask.
private static final int SHIFT = 31;
// bits in the long hash value
private static final int BITS = 64;
private final char[] patternChars, textChars;
private final long patternHash;
private final int size;
private final int endPosition;
private final int tailLeftShift, tailRightShift;
private long textHash = 0L;
private int pos = 0;
/*
* Private constructor, only accessible from public static entry methods.
*/
private KarpRabin(String pattern, String text) {
// inputs pre-validated.
// inputs not null, not empty, and pattern is shorter than text.
patternChars = pattern.toCharArray();
textChars = text.toCharArray();
size = this.patternChars.length;
endPosition = this.textChars.length - size;
// calculate the bit-shifts needed to remove the tail of the search
// since the hash values are 'rotated' each time, we can count the
// rotations that will be needed (size - 1) before a char is at the
// tail, and that will be the relative left/right shifts needed.
// note that left and right shifts perform a 'rotation' when combined,
// essentially rotating the character by tailLeft bits to the left.
tailLeftShift = ((size - 1) * SHIFT) % BITS;
tailRightShift = BITS - tailLeftShift;
// Compute the hash of the pattern
// use a temp variable ph because patternHash is final
long ph = 0L;
for (char c : this.patternChars) {
ph = addHash(ph, c);
}
patternHash = ph;
// seed the hash for the first X chars in the text to search.
textHash = 0L;
for (int i = 0; i < size; i++) {
textHash = addHash(textHash, this.textChars[i]);
}
// advance the search to the first 'hit' (if there is one).
// check to make sure we are not already at a match first.
if (textHash != patternHash || !confirmed()) {
advance();
}
}
/*
* Shift the existing hash, and XOR in the new char.
*/
private final long addHash(long base, char c) {
return (base << SHIFT | base >>> (BITS - SHIFT)) ^ c;
}
/*
* Shift the char to remove to the place it would be at the tail, and XOR it
* off.
*/
private final long removeHash(long base, char c) {
long ch = c;
// this is essentially a rotation in a 64-bit space.
ch = ch << tailLeftShift | ch >>> tailRightShift;
return base ^ ch;
}
/*
* Return the next match, and advance the search if needed.
*/
private int next() {
if (pos > endPosition) {
return -1;
}
int ret = pos;
advance();
return ret;
}
/*
* move the position to the next 'hit', or the end of the search text.
*/
private void advance() {
while (++pos <= endPosition) {
// remove the tail char from the hash
textHash = removeHash(textHash, textChars[pos - 1]);
// add in the new head char
textHash = addHash(textHash, textChars[pos + size - 1]);
// check to see if we have a match.
if (textHash == patternHash && confirmed()) {
return;
}
}
}
/*
* Double-check a hash-hit. Assumes the hash computes are equal.
*/
private boolean confirmed() {
for (int i = 0; i < size; i++) {
if (patternChars[i] != textChars[pos + i]) {
return false;
}
}
return true;
}
/**
* Identify whether the pattern appears in the given text.
* <p>
* Null input values will return false, and an empty-string pattern will
* return true (the empty string is a match in any other non-null string)
*
* @param pattern
* the pattern to search for
* @param text
* the text to search in
* @return true if the pattern exists in the text.
*/
public static boolean match(final String pattern, final String text) {
if (pattern == null || text == null || pattern.length() > text.length()) {
return false;
}
if (pattern.isEmpty()) {
return true;
}
KarpRabin kr = new KarpRabin(pattern, text);
return kr.next() >= 0;
}
/**
* Identify all locations where the pattern exists in the search text.
* <p>
* Null input values will result in no matches, and the empty search pattern
* will be located at all positions in the search text.
*
* @param pattern
* the pattern to search for
* @param text
* the text to search in
* @return the integer positions of each found occurrence. An empty array
* will be returned if there are no matches.
*/
public static int[] allMatches(String pattern, String text) {
if (pattern == null || text == null || pattern.length() > text.length()) {
return EMPTY;
}
if (pattern.isEmpty()) {
// empty string is found everywhere
int[] ret = new int[text.length()];
for (int i = 0; i < ret.length; i++) {
ret[i] = i;
}
return ret;
}
KarpRabin kr = new KarpRabin(pattern, text);
// guess the size first, and expand/trim it as needed.
int[] possibles = new int[text.length() / pattern.length()];
int count = 0;
int pos = 0;
while ((pos = kr.next()) >= 0) {
if (count >= possibles.length) {
// expand the matches (a lot). This indicates overlapping results.
// which is unusual.
possibles = Arrays.copyOf(possibles, possibles.length * 2 + 2);
}
possibles[count] = pos;
count++;
}
// trim the matches.
return Arrays.copyOf(possibles, count);
}
}
I have been testing it against some input data similar to @tsleyson's, and then added a performance test that checks the scalability of a non-matching case.
private static final void scalabilityTests() {
String input = "abcdefghijklmnopquestuvwxyz";
long[] times = new long[9];
for (int i = 0; i < times.length; i++) {
int sum = 0;
long nanos = System.nanoTime();
for (int j = 0; j < 5000; j++) {
sum += allMatches("abcdefghx", input).length;
}
times[i] = sum + System.nanoTime() - nanos;
// double the times....
input += input;
}
System.out.println("Scalability (should double each time): " + Arrays.toString(times));
}
public static void main(String[] args) {
String[] inputs = { "", null, "abr", "abra", "abrabrabra",
"abracadabrabracadabra", "cabrac", "cabracadabrabracadabrac",
"abbbbrrraaabbbccrrabbba" };
for (String inp : inputs) {
int[] matches = allMatches("abra", inp);
System.out.printf("Matches %s %s at %s%n", inp, match("abra", inp), Arrays.toString(matches));
}
// run three times for warmups....
scalabilityTests();
scalabilityTests();
scalabilityTests();
}
Answer:
to use bitwise-shifting instead of prime multiplication, to compute the rolling hash.
This is an optimization, I'm not sure about. It's a bit faster (1 instead of 3 or 4 cycles latency, but probably the same throughput on modern Intel), but the hash quality is worse.
This assumes the use of rotations optimized by the JIT, which is not the case for the reviewed code. It's quite possible that a hand-made rotation is worse than multiplication.
You need no prime, any odd number would do.
// Use a prime number as the shift, which means it takes a while to reuse
// bit positions in the hash. Creates a good hash distribution in the
// long bit mask.
private static final int SHIFT = 31;
For int, the rotation distance of 31 would be equivalent to -1, which is not good as the next char uses about the same bits. For long, it's not bad.
You're speaking about primes, but the shift is modulo 64 and there's no such thing as primes in the ring Z/64. Again, what you need is a number co-prime to 64, i.e., odd. This assures that it takes 64 operations to land in the same position.
I don't think 31 is very good, as after two operations, you get 62, i.e., -2. So using three ASCII chars, a collision can be easily produced as their bits overlap.
When sticking with ASCII, 7 should be fine, but for general strings, I'd suggest at least 16. FWIW, with 17, you're guaranteed that there'll be no collisions for up to 3 steps. After 4 steps, it totals 68, i.e., 4, which is not perfect, so I'd go for 19 instead. A measurement with real data could tell us more, though I guess the answer is that it doesn't matter much.
(base << SHIFT | base >>> (BITS - SHIFT))
Wrong! Use Long.rotateLeft. It does exactly the same, but is clearer and much faster as it's an intrinsic. Whenever the JIT sees Long.rotateLeft, it uses the rotation instruction; when it sees an equivalent code, it doesn't care.
Actually, you need no BITS as the distance is taken modulo 64 (guaranteed in Java, undefined in C).
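A sketch of how the two hash helpers could look with that change (this assumes the structure of the reviewed class; `SHIFT = 19` follows the suggestion above, and the `(size - 1)` factor replaces the precomputed tail shifts, relying on `rotateLeft` reducing the distance modulo 64 by itself):

```java
// Rolling-hash helpers using the Long.rotateLeft intrinsic instead of the
// hand-made two-shift rotation (sketch, not a drop-in patch).
public final class RotateHash {

    private static final int SHIFT = 19; // odd, and spreads ASCII bits well

    // rotate the running hash and XOR in the new head character
    static long addHash(long base, char c) {
        return Long.rotateLeft(base, SHIFT) ^ c;
    }

    // the tail character was rotated (size - 1) more times since it was
    // XORed in; the rotation distance is taken modulo 64 automatically
    static long removeHash(long base, char c, int size) {
        return base ^ Long.rotateLeft((long) c, (size - 1) * SHIFT);
    }
}
```

Rolling then works exactly as in the reviewed code: `removeHash` of the tail followed by `addHash` of the new head yields the same value as hashing the shifted window from scratch.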
public static boolean match(final String pattern, final String text)
See my comment on the linked CR concerning indexOf and contains.
Concerning naming, you're looking for a pattern in a text, which is fine, but maybe searching for a needle in a haystack would be even clearer.
if (pattern == null || text == null || pattern.length() > text.length()) {
return false;
}
Don't allow nulls. Just throw as every null means a problem and the earlier you fail, the better. See Guava's arguments.
I'd probably go for
if (pattern.length() >= text.length()) {
return pattern.equals(text);
}
and simply let it blow on any null.
Someone less lazy would add explicit
import static com.google.common.base.Preconditions.checkNotNull;
checkNotNull(pattern);
checkNotNull(text);
int[] possibles =
I'd call it result.
int pos = 0;
while ((pos = kr.next()) >= 0) {
This should be a for loop.
added in a performance test
Benchmarking Java is hard, have a look at JMH. | {
"domain": "codereview.stackexchange",
"id": 10667,
"tags": "java, algorithm, strings, search, rags-to-riches"
} |
In CNN, why do we increase the number of filters in deeper Convolution layers for complex images? | Question: I have been doing this online course Introduction to TensorFlow for AI, ML and DL. In one part, they showed a CNN model for classifying humans and horses. In this model, the first Conv2D layer had 16 filters, followed by two more Conv2D layers with 32 and 64 filters respectively. I am not sure how the number of filters correlates with the depth of the convolution layers.
Answer: For this you need to understand what filters actually do.
Every layer of filters is there to capture patterns. For example, the first layer of filters captures patterns like edges, corners, dots etc. Subsequent layers combine those patterns to make bigger patterns (like combining edges to make squares, circles, etc.).
Now as we move forward through the layers, the patterns get more complex; hence there are larger combinations of patterns to capture. That's why we increase the number of filters in subsequent layers, to capture as many combinations as possible. | {
"domain": "datascience.stackexchange",
"id": 5609,
"tags": "neural-network, keras, tensorflow, cnn, convolution"
} |
Where do the terms microcanonical, canonical and grand canonical (ensemble) come from? | Question: Where do the terms microcanonical, canonical and grand canonical (ensemble) come from?
When were they coined and by whom? Is there any reason for the names or are they historical accidents?
Answer: I'm not completely sure, but I think they were introduced by Gibbs, and that book (available for download) is of historic importance.
The word ensemble really just means "set" in French: you consider the set of microstates (points in the space of canonical coordinates of the underlying detailed mechanics) and you impose statistics by the fundamental postulate. | {
"domain": "physics.stackexchange",
"id": 4072,
"tags": "statistical-mechanics, terminology, history"
} |
Simple harmonic motion | Question: A uniform straight rod of length $L$ is hinged at one end. It is free to oscillate in a vertical plane. What is the time period of oscillation with small angular amplitude when a point mass, of mass equal to that of the rod, is fixed to its lower end?
Hey, please could you help me solve this question.
Actually, this seems to be like a torsional pendulum, and the time period in such a case is $2\pi\sqrt{I\over C}$, where $C$ is the restoring torque per unit angular displacement. Now, my doubt is: what should the restoring term be? Should it be $2mg$ (the mass of the rod as well as the suspended one) times the length, or something else? That is not giving me the answer.
High school student,so please keep it simple.
Answer: Note that this is a compound pendulum.
You can take the centre of mass of the whole system (rod + particle), which has mass $2m$, to provide the restoring torque. Hence, the general formula is:
$$2\pi\sqrt{I\over mgy}$$
where $y$ is the distance of the centre of mass from the pivot. Keep in mind that $m$ here is the total mass. | {
"domain": "physics.stackexchange",
"id": 13785,
"tags": "homework-and-exercises, harmonic-oscillator"
} |
How to calculate second-order variations of an action? | Question: How does one correctly calculate the second-order variation of an action? I have started an attempt at the calculation (restricting the scalar fields for simplicity), but I'm unsure how to proceed.
Starting with the action for a free scalar field $$S[\phi]=\int\;d^{4}x\mathcal{L}=\frac{1}{2}\int\;d^{4}x\left(\partial_{\mu}\phi(x)\partial^{\mu}\phi(x)-m^{2}\phi^{2}(x)\right)$$
with Minkowski sign convention $(+,-,-,-)$. Naively, if I expand this to second-order, I get $$S[\phi+\delta\phi]=S[\phi]+\int\;d^{4}x\frac{\delta S[\phi(x)]}{\delta\phi(x)}\delta\phi(x)+\int\;d^{4}x d^{4}y\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}\delta\phi(x)\delta\phi(y)$$ Now, assuming that $\phi(x)$ satisfies the equations of motion (EOM), then the first-order term vanishes, however, I'm unsure how to calculate the second-order variation. So far, my attempt is $$\delta^{2}S=\int\;d^{4}x d^{4}y\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}\delta\phi(x)\delta\phi(y)=\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\\=\int\;d^{4}x d^{4}y\left(\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)\partial\phi(y)}\delta\phi(x)\delta\phi(y)+2\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)\partial(\partial_{\mu}\phi(y)}\delta\phi(x)\delta(\partial_{\mu}\phi(y))\\+\frac{\partial^{2}\mathcal{L}}{\partial(\partial_{\mu}\phi(x))\partial(\partial_{\mu}\phi(y))}\delta(\partial_{\mu}\phi(x))\delta(\partial_{\mu}\phi(y))\right)$$ However, I am unsure how to progress (integration by parts doesn't seem to work as nicely in this case), as naively it seems as though the only term that would survive is $\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)\partial\phi(y)}$, but I've seen references stating that $\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}$ is of the form $\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}\sim\Box +m^{2}$.
Any help would be much appreciated.
Update
I have though about it a bit more and have come up with a general formula (for a Lagrangian with up to first-order derivatives in the fields) that I hope is correct: $$\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}=\frac{\delta^{2}}{\delta\phi(x)\delta\phi(y)}\int\;d^{4}z\,\mathcal{L}\left(\phi(z),\partial_{\mu}\phi(z)\right)\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\\=\frac{\delta}{\delta\phi(x)}\int\;d^{4}z\left[\frac{\partial\mathcal{L}}{\partial\phi(z)}\delta^{4}(z-y)+\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\phi(z))}\partial_{\mu}\left(\delta^{4}(z-y)\right)\right]\qquad\qquad\qquad\qquad\\=\int\;d^{4}z\left[\frac{\partial^{2}\mathcal{L}}{\partial\phi(z)^{2}}\delta^{4}(z-x)\delta^{4}(z-y)+2\frac{\partial^{2}\mathcal{L}}{\partial\phi(z)\partial(\partial_{\mu}\phi(z))}\partial_{\mu}\left(\delta^{4}(z-x)\right)\delta^{4}(z-y)\\+\frac{\partial^{2}\mathcal{L}}{\partial(\partial_{\mu}\phi(z))\partial(\partial_{\nu}\phi(z))}\partial_{\mu}\left(\delta^{4}(z-x)\right)\partial_{\nu}\left(\delta^{4}(z-x)\right)\right]\\ =\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)^{2}}\delta^{4}(x-y)+2\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)\partial(\partial_{\mu}\phi(x))}\partial_{\mu}\left(\delta^{4}(x-y)\right)\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\frac{\partial^{2}\mathcal{L}}{\partial(\partial_{\mu}\phi(x))\partial(\partial_{\nu}\phi(x))}\partial_{\mu}\partial_{\nu}\left(\delta^{4}(x-y)\right)$$ which, upon further integrations by parts (neglecting boundary terms), gives $$\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}=\delta^{4}(x-y)\left[\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)^{2}}-2\partial_{\mu}\left(\frac{\partial^{2}\mathcal{L}}{\partial\phi(x)\partial(\partial_{\mu}\phi(x))}\partial_{\mu}\right)\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\partial_{\mu}\partial_{\nu}\left(\frac{\partial^{2}\mathcal{L}}{\partial(\partial_{\mu}\phi(x))\partial(\partial_{\nu}\phi(x))}\right)\right]$$ 
This seems to give the expected answer in the case of a free scalar field. Indeed, $$\frac{\partial^{2}\mathcal{L}}{\partial\phi^{2}}=-m^{2}\, ,\qquad \frac{\partial^{2}\mathcal{L}}{\partial(\partial_{\mu}\phi)\partial(\partial_{\nu}\phi)}=\eta^{\mu\nu}\, , \qquad\frac{\partial^{2}\mathcal{L}}{\partial\phi\partial(\partial_{\mu}\phi)}=0$$ and hence, $$\frac{\delta^{2} S[\phi(x)]}{\delta\phi(x)\delta\phi(y)}=-\delta^{4}(x-y)\left[\Box+m^{2}\right]$$ Any feedback on whether this is correct or not would be much appreciated.
Answer: A proper treatment (and how you should usually go about these things if you forget) is to remember the definition of the functional derivative. It is linear, defined to obey a chain rule, a product rule, and has the fundamental feature
$$\frac{\delta\phi(y)}{\delta\phi(x)}=\delta(x-y)$$
Thus, in painstaking detail, we have
$$\frac{\delta S[\phi]}{\delta\phi(x)}=\frac{1}{2}\int\mathrm{d}^dy\left[\frac{\delta}{\delta\phi(x)}\left(\partial\phi(y)\cdot\partial\phi(y)\right)-m^2\frac{\delta}{\delta\phi(x)}\phi(y)^2\right]\\
=\int\mathrm{d}^dy\left[\partial_{\mu}\delta(x-y)\partial^{\mu}\phi(y)-m^2\delta(x-y)\phi(y)\right]\\
=-(\square+m^2)\phi(x)$$
Thus, we can simply differentiate again to obtain
$$\frac{\delta^2S[\phi]}{\delta\phi(x)\delta\phi(y)}=-\frac{\delta}{\delta\phi(y)}\left[(\square_x+m^2)\phi(x)\right]=-(\square_x+m^2)\delta(x-y)$$
Which is the desired result (note that $\square_x$ simply means that the derivative is only with respect to $x$ -- sometimes this matters)! Note that the delta function comes after the Klein-Gordon operator.
And that's it! No need to expand to second order or pull your hair out deciding whether you have to integrate by parts and when you can.
I hope this helps!
B-B-B-BONUS ROUND
This type of manipulation is actually extremely useful! For instance, in the path integral formulation, we have
$$\langle\mathcal{F}[\phi](x)\rangle=\int\mathcal{D}\phi\,\mathcal{F}[\phi](x)\,e^{iS[\phi]}$$
With this, we can use the above manipulations to find correlation functions! The key is to note that the path integral of a total functional derivative is zero. Thus, we have
$$\int\mathcal{D}\phi\,\frac{\delta^2}{\delta\phi(x)\delta\phi(y)}e^{iS[\phi]}=i\int\mathcal{D}\phi\left[\frac{\delta^2S}{\delta\phi(x)\delta\phi(y)}+i\frac{\delta S}{\delta\phi(x)}\frac{\delta S}{\delta\phi(y)}\right]e^{iS[\phi]}\\
=i\bigg\langle\frac{\delta^2S}{\delta\phi(x)\delta\phi(y)}+i\frac{\delta S}{\delta\phi(x)}\frac{\delta S}{\delta\phi(y)}\bigg\rangle=0$$
This holds for any action $S[\phi]$. In particular, in your free theory, this gives us
$$\left(\square_y+m^2\right)\left(\square_x+m^2\right)\langle\phi(x)\phi(y)\rangle=-i\left(\square_y+m^2\right)\delta(x-y)$$
Eliminating $\square_y+m^2$ from each side tells you that the two point function for a free theory is the Green's function of the Klein-Gordon operator. No need for generating functionals or all that messy second quantization. | {
"domain": "physics.stackexchange",
"id": 40313,
"tags": "homework-and-exercises, lagrangian-formalism, field-theory, action, variational-calculus"
} |
statistics or robust statistics for identifying multivariate outliers | Question: For the single variate data sets, we can use some straightforward methods, such as box plot or [5%, 95%] quantile to identify outliers. For multivariate data sets, are there any statistics that can be used to identify outliers?
Answer: Multivariate outlier detection can be quite tricky and even 2D data can be difficult to visually decipher at times. You are spot-on in looking for robust statistical treatments analogous to 95% quantiles.
Since the squared Mahalanobis distances of normally distributed data follow a chi-square distribution, the gold standard for robust statistics in n dimensions would be to use Mahalanobis distances and then eliminate data beyond the 95% or 99% quantiles in Mahalanobis space.
Plug and play capabilities are available in scikit-learn and in R.
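To make the recipe concrete, here is a minimal sketch in Python using numpy/scipy directly rather than the scikit-learn wrappers (my own illustration; note it uses the classical mean/covariance — for a genuinely robust version you would swap in a robust scatter estimate such as scikit-learn's `MinCovDet`):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # well-behaved 3D data
X = np.vstack([X, [[8.0, 8.0, 8.0]]])    # plant one obvious outlier at index 500

# classical (non-robust) location and scatter estimates
mu = X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))

# squared Mahalanobis distance of every point from the center
diff = X - mu
d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

# for roughly normal data, d2 ~ chi-square with p degrees of freedom,
# so flag everything beyond the 97.5% quantile of chi2(p)
cutoff = chi2.ppf(0.975, df=X.shape[1])
outliers = np.where(d2 > cutoff)[0]
print(outliers)
```

The planted point shows up in `outliers`, along with the expected handful of chance exceedances from the tail of the chi-square.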
Here is an excellent theoretical and practical treatment of the methodology:
And here is a big picture viewpoint with some heuristics.
Additionally there is a very sophisticated treatment called PCOUT for outlier detection that instead relies on principal component decomposition. There is a corresponding R package, but the theoretical treatment is behind a paywall:
P. Filzmoser, R. Maronna, M. Werner. Outlier identification in high dimensions, Computational Statistics and Data Analysis, 52, 1694-1711, 2008
Hope this helps! | {
"domain": "datascience.stackexchange",
"id": 997,
"tags": "machine-learning, data-mining, statistics, anomaly-detection, unsupervised-learning"
} |
How to prove the orthogonality of spherical tensor operators? | Question: $\def\mybra#1{\langle{#1}|}$ $\def\myket#1{|{#1}\rangle}$
$\def\Tr{\mbox{Tr}}$ $\def\mybraket#1{\langle{#1}\rangle}$ $\def\B#1#2{B^{#1}_{#2}}$
$\def\C#1#2{C^{#1}_{#2}}$
$\def\r#1#2{\hat\rho^{#1}_{#2}}$
I want to expand my density operator
$$
\hat \rho = \sum_{jm j'm'} \myket{jm}\mybraket{jm|\hat \rho|j'm'} \mybra{j'm'}\tag1
$$
with spherical tensor operators,
$$
\hat\rho = \sum_{k}\sum_{q=-k}^k c^k_q\r kq,\tag2
$$
where $\myket{jm}$ is eigen vector of angular momentum $\hat J_z$, and
$$
\hat\rho^k_q = \sum_{jmj'm'} \B{jm}{kq j'm'} \myket{jm}\mybra{j'm'}\tag3
$$
is spherical operator that satisfies
$$
[\hat J_z,\r kq]=q\hbar \r kq,\quad
[\hat J_\pm ,\r kq] = \hbar\sqrt{k(k+1)-q(q\pm 1)}\r k{q\pm1}. \tag4
$$
Using recursion relations of C-G coefficients, $\B{jm}{kqj'm'}$ can be determined to be C-G coefficient except a scale factor,
$$
\B{jm}{kqj'm'}=f_{{kjj'}} \C{jm}{kqj'm'},\quad \C{jm}{kqj'm'} = \mybraket{kqj'm'|(kj')jm}.\tag5
$$
The factor should be independent of $m,m'$ because the recursion relation changes $m,m',q$.
To determine $f_{kjj'}$ and $c^k_q$, I should make use of the orthogonality and completeness. For orthogonality,
$$
\Tr((\r {k_1}{q_1})^\dagger \r {k_2}{q_2})
= \sum
f_{k_1j_1j_1'}^* f_{k_2j_2j_2'} \C{j_1m_1}{k_1q_1 j_1'm_1'}\C{j_2m_2}{k_2q_2 j_2'm_2'} \delta_{jj_1'}\delta_{mm_1'}\delta_{jj_2'}\delta_{j_1j_2}\delta_{m_1m_2} \delta_{mm_2'}.\tag6
$$
With the restriction $m_1=m_1'+q_1, m_2=m_2'+q_2$, the deltas in the sum indicate that the L.H.S. equals 0 unless $q_1=q_2$. And only three summation variables $j_1,j,m$ are left,
$$
\Tr((\r {k_1}{q_1})^\dagger \r {k_2}{q_2})
=\sum_{j_1,j,m} f_{k_1j_1j}^* f_{k_2j_1j} \C{j_1(m+q_1)}{k_1q_1 jm}\C{j_1(m+q_1)}{k_2q_1 jm}.\tag7
$$
Using symmetry property of C-G coefficients
$$
\C{jm}{j_1m_1 j_2m_2}=\sqrt{\frac{2j+1}{2j_2+1}} (-)^{j_1-m_1} \C{j_2(-m_2)}{j_1m_1j(-m)},\tag8
$$
I get
$$
\begin{aligned}
\Tr((\r {k_1}{q_1})^\dagger \r {k_2}{q_2})
&=
\sum_{j_1,j,m} f_{k_1j_1j}^* f_{k_2j_1j} \C{j_1(m+q_1)}{k_1q_1 jm}\C{j_1(m+q_1)}{k_2q_1 jm}
\\
&=
\sum_{j_1,j,m} f_{k_1j_1j}^* f_{k_2j_1j}{\frac{2j_1+1 }{2j+1}} \C{j(-m)}{k_1q_1 j_1(-m-q_1)}\C{j(-m)}{k_2q_1 j_1(-m-q_1)}\\
&=
\sum_{j_1,j,m} f_{k_1j_1j}^* f_{k_2j_1j}{\frac{2j_1+1 }{2j+1}} \C{jm}{k_1q_1 j_1(m+q_1)}\C{jm}{k_2q_1 j_1(m+q_1)}.
\end{aligned}\tag9
$$
The third equality holds because summation over $m$ is symmetric about 0. Let $f_{kj'j}=f_k\sqrt{(2j+1)/(2j'+1)}$ where $f_k$ is another factor to be determined, so
$$
\Tr((\r {k_1}{q_1})^\dagger \r {k_2}{q_2}) =
f_{k_1}^* f_{k_2} \sum_{j_1jm} \C{jm}{k_1q_1 j_1(m+q_1)}\C{jm}{k_2q_1 j_1(m+q_1)}. \tag{10}
$$
Now, if $k_1=k_2$ I can tell the summation is $1$ using the orthogonality of C-G coefficients. But I can't tell if it's 0 when $k_1\neq k_2$. What's the next step?
Answer: Actually this is baked into your sum over $j$. Write $m_1=m+q_1$ and use
$$
\sum_{jm} C^{jm}_{k_1q_1,j_1m_1}C^{jm}_{k_2q_1,j_1m_1}=\delta_{k_1k_2}
$$
To see this, write your CG as an overlap:
$$
C^{jm}_{k_1q_1,j_1m_1}=\langle jm \vert k_1q_1,j_1m_1\rangle=\langle k_1q_1,j_1m_1\vert jm\rangle
$$
since CG's are real.
Then
$$
\sum_{jm} C^{jm}_{k_1q_1,j_1m_1}C^{jm}_{k_2q_1,j_1m_1}=
\sum _{jm}\langle k_1q_1,j_1m_1\vert jm\rangle\langle jm \vert k_2q_1,j_1m_1\rangle\\
=\langle k_1q_1,j_1m_1\vert\left[\sum_{jm}\vert jm\rangle\langle jm \vert \right]\vert k_2q_1,j_1m_1\rangle
$$
and the term in bracket is just the expansion of unity so you're left with
$$
\sum_{jm} C^{jm}_{k_1q_1,j_1m_1}C^{jm}_{k_2q_1,j_1m_1}=
\langle k_1q_1,j_1m_1\vert k_2q_1,j_1m_1\rangle=\delta_{k_1k_2}\, .
$$ | {
"domain": "physics.stackexchange",
"id": 98070,
"tags": "homework-and-exercises, hilbert-space, angular-momentum, linear-algebra, density-operator"
} |
Why do a swamp cooler and dehumidifier effectively cool a room when paired together in an enclosed space? | Question: I've read in more than one place (e.g., here and here) that a swamp cooler (a.k.a., evaporative cooler) can be effectively paired with a dehumidifier to cool an enclosed space, but this makes no sense to me. It seems like the evaporative cooling would be cancelled out by the heat resulting from condensing the water vapor. No energy is leaving the system as a result of these processes, and waste heat is generated by each device, so it seems the only effect would be to make the room hotter.
The only exception I can imagine is if the water resulting from dehumidification holds the bulk of the heat from that process and is quickly drained, while the evaporative cooler has its supply replenished with cooler tap water. But the articles don't specify that.
Answer: Please note the following fact about dehumidifiers:
A dehumidifier is basically a room air conditioner which works by cooling the inlet air down just to the point where the water dissolved in it begins to condense out of solution. This temperature is called the dew point temperature. It does this by blowing the air over a set of cold tubes with refrigerant in them; the water vapor condenses on the tubes as liquid water which drips off the tubes into a catchment tank, and the refrigerant carries away the heat of vaporization. This heat is then dumped back into the room by blowing more air over a second set of heat transfer tubes through which the warmed refrigerant is circulated.
A room air conditioner does exactly the same thing except that in this case, the heat from the warm refrigerant is exhausted outside the room and the temperature of the cold coils where the water is condensed is low enough to actually freeze the water solid if it does not promptly drip off those coils. Both these features are there to maximize the cooling effect of the device on the room; the dehumidification effect is an additional bonus.
This means that although you can improve the effectiveness of a swamp cooler by dehumidifying its output, you can't get the room air temperature below the dew point of the air in this way- whereas you can with a conventional room AC unit that exhausts its heat outside the room.
So, dehumidifying the output of a swamp cooler makes little sense compared to just replacing both of them with a conventional room AC unit. | {
"domain": "engineering.stackexchange",
"id": 4026,
"tags": "hvac, cooling, evaporation"
} |
How do materials absorb light? | Question: I've seen a lot of different different answers online so I just want a clarification.
Electrons can absorb photons in 2 ways. The first way involves the electron cloud oscillating with the photon but emitting it again without permanently absorbing it. The other way involves the electron cloud oscillating at around its resonant frequency, which causes the absorption of photons to excite the electron cloud to higher energy states. But from my understanding of quantum mechanics, energy levels should be discrete, so why would a range of photons be able to cause the electron clouds to resonate and then excite them to different degrees? Also, are emission lines only produced by ions, and would they be irrelevant here?
Answer: Let us first try to understand why we must get peaks in our spectra. Consider, for simplicity, transitions between $3$ energy levels. Now if the energy levels are sharply defined, then we expect to see three peaks in our spectra, not because there are $3$ levels but because there are $3$ unique pairs that can be formed ($^3C_2=3$). The height of the peaks depends on how strongly the energy levels in question couple with the electromagnetic field.
In gases, the energy levels are usually sharply defined. But still the spectrum is not completely discrete. One of the main reasons is the Doppler effect. Due to the motion of the atoms, they see a Doppler-shifted frequency of the incoming light. This means they absorb light of the “wrong” frequency. And since the atoms in general have a velocity distribution, this translates to a distribution in the resonant frequency. This causes a broadening of the spectrum (which can be reduced by lowering the temperature).
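As a rough numerical illustration of the size of this effect (the numbers below — hydrogen at room temperature — are an assumed example of mine, not from the answer):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8      # speed of light, m/s
m_H = 1.6735575e-27   # mass of a hydrogen atom, kg

def doppler_fwhm_ratio(T, m):
    """Fractional Doppler FWHM of a line: dnu/nu = sqrt(8 ln2 kT / (m c^2))."""
    return math.sqrt(8 * math.log(2) * k_B * T / (m * c ** 2))

ratio = doppler_fwhm_ratio(300.0, m_H)
print(ratio)   # ~1.2e-5: each line is smeared by roughly 10 parts per million
```

So even a perfectly sharp atomic transition absorbs over a small band of "wrong" frequencies around resonance, and since the width scales as the square root of the temperature, cooling the gas narrows the lines.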
Coming to solids, the energy levels are not sharp to begin with. They are broad in general. Thus there is a continuous range of transition that can be made. | {
"domain": "physics.stackexchange",
"id": 65265,
"tags": "quantum-mechanics, electromagnetic-radiation, photons, resonance"
} |
RSA wrapper python | Question: #!/usr/bin/env python3
'''
Asymmetric criptography of chat messages.
'''
#TODO: implement perfect foreward secrecy
#Because of forward secrecy an attacker would need to have access to the internal SSH state of either the client or server at the time the SSH connection still exists.
#TODO: protect aainst traffic analysis
# Everything is peer to peer, which is cool, but the IP fetching needs to be anonymysed as well.
import os
from pathlib import Path
import random
import rsa
Priv = rsa.key.PrivateKey
Pub = rsa.key.PublicKey
Keypair = (Priv, Pub)
# Location to first look for a private key.
DEFAULT_PRIV = Path(Path.home() / '.ssh/id_rsa')
# 'MD5', 'SHA-1', 'SHA-224', 'SHA-256', 'SHA-384' or 'SHA-512'
HASH = 'SHA-256'
def generate_keypair(bits: int=1024) -> Keypair:
pub, priv = rsa.newkeys(bits)
return priv, pub
def write_keypair(priv: rsa.key.PrivateKey
, pub: rsa.key.PublicKey=None
, p: Path=DEFAULT_PRIV):
'''Obviously this function violates the RAM-only constraint.'''
if p.exists():
raise BaseException('Refusing to ovewrite an existing private key: '
+ str(p))
with open(p, 'wb') as f:
f.write(priv.save_pkcs1())
if pub:
with open(p.with_suffix('.pub'), 'wb') as f:
f.write(pub.save_pkcs1())
def regenerate_pub(path_priv: Path=DEFAULT_PRIV):
os.run('ssh-keygen -y -f ' + path_priv
+ ' > ' + path_priv + '.pub')
def read_keypair(p: Path=DEFAULT_PRIV) -> Keypair:
with open(p, mode='rb') as priv_file:
key_data = priv_file.read()
priv = rsa.PrivateKey.load_pkcs1(key_data)
pub = None
p = Path(p)
if not Path(p.with_suffix('.pub')).is_file():
regenerate_pub()
with open(p.with_suffix('.pub'), 'rb') as f:
key_data = f.read()
pub = rsa.PublicKey.load_pkcs1(key_data)
assert(pub is not None)
return priv, pub
def encrypt(text: str, pub: Pub) -> bytes:
'''Encrypt a message so that only the owner of the private key can read it.'''
bytes = text.encode('utf8')
encrypted = rsa.encrypt(bytes, pub)
return encrypted
def decrypt(encrypted: bytes, priv: Priv) -> str:
try:
bytes = rsa.decrypt(encrypted, priv)
string = bytes.decode('utf8')
except rsa.pkcs1.DecryptionError:
# Printing a stack trace leaks information about the key.
print('ERROR: DecryptionError!')
string = ''
return string
def sign(msg: str, priv: Priv) -> bytes:
'''Prove you wrote the message.
It is debatable should signing be performed on the plaintext
or on the encrypted bytes.
The former has been chosen because it is not vulnerable to the following.
Tim sends an encrypted and then sign packet to a server containing a password.
Joe intercepts the packet, strips the signature, signs it with his own key
and gets access on the server ever though he doesn't know Tim's password.
Furthermore it increases privacy.
Only the recepient can validatet the sender instead of anyone intercepting.
'''
signature = rsa.sign(msg.encode('utf8'), priv, HASH)
return signature
def verify(msg: str, signature: bytes, pub: Pub):
'''VerificationError - when the signature doesn't match the message.'''
rsa.verify(msg.encode('utf8'), signature, pub)
def test():
priv, pub = generate_keypair()
p = Path('/tmp/whatever' + str(random.randint(0, 1e6)))
write_keypair(priv, pub, p)
newpriv, newpub = read_keypair(p)
assert(priv == newpriv)
assert(pub == newpub)
msg = "We come in peace!"
bytes = encrypt(msg, pub)
newmsg = decrypt(bytes, priv)
assert(msg == newmsg)
signature = sign(msg, priv)
verify(msg, signature, pub)
if __name__ == '__main__':
test()
This is my RSA wrapper (GitHub). I am posting this in anticipation of learning about modern python best practices. Some of the concerns are:
commented with # just under the module string
Path and IP instead of str - which should be used?
not all functions have docstrings - is that bad?
types have been annotated but import typing is nowhere in sight
the test ... I should probably ask on QA.SE
Answer: Class imports
Replace
import rsa
Priv = rsa.key.PrivateKey
Pub = rsa.key.PublicKey
with
from rsa.key import PrivateKey as Priv, PublicKey as Pub
Though I'm not convinced that the short forms are entirely necessary, this would be the more standard way to do it.
Keypair = (Priv, Pub)
is styled like a class name but is actually a global tuple, so should be KEY_PAIR. However, if you're using it as a type, as in
def generate_keypair(bits: int=1024) -> Keypair:
then you should instead
KeyPair = Tuple[Priv, Pub]
Function shims
If you don't care too much about the default and you're mainly doing this to annotate types, then
def generate_keypair(bits: int=1024) -> Keypair:
pub, priv = rsa.newkeys(bits)
return priv, pub
can become
generate_keypair: Callable[[int], KeyPair] = rsa.newkeys
(note, though, that rsa.newkeys returns (pub, priv), so with this alias the pair type should really be Tuple[Pub, Priv]; the original shim exists precisely to swap the order to (priv, pub))
Goofy commas
def write_keypair(priv: rsa.key.PrivateKey
, pub: rsa.key.PublicKey=None
, p: Path=DEFAULT_PRIV):
neither helps legibility nor is it PEP8-compliant; just write it like a human would:
def write_keypair(priv: rsa.key.PrivateKey,
pub: Optional[rsa.key.PublicKey]=None,
p: Path=DEFAULT_PRIV):
Path.open instead of open(Path)
with open(p, 'wb') as f:
should be
with p.open('wb') as f:
subprocess instead of os.run
Replace your os.run calls with calls to the appropriate subprocess methods. You probably also should not bake in a Bash-style redirect; instead, open the file handle yourself and use it as the stdout argument in subprocess.
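A sketch of what that looks like (the helper name is mine, and a portable command stands in for `ssh-keygen` so the demo runs anywhere):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_to_file(cmd, out_path: Path) -> None:
    """Equivalent of the shell's `cmd > out_path`: open the target file
    ourselves and hand the handle to subprocess as stdout."""
    with out_path.open('wb') as out:
        subprocess.run(cmd, stdout=out, check=True)

# demo with a portable stand-in for `ssh-keygen -y -f key > key.pub`
with tempfile.TemporaryDirectory() as d:
    target = Path(d) / 'id_rsa.pub'
    run_to_file([sys.executable, '-c', 'print("ssh-rsa AAAA...")'], target)
    print(target.read_text().strip())
```

Passing the command as a list also avoids the shell entirely, which sidesteps quoting and injection issues the string-concatenation version had.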
Asserts
load_pkcs1 does not indicate that it might return None. Is that actually possible? If not, your assert is redundant.
Top-level exceptions
decrypt should not bake in an except. Catch that at the outer scope instead.
If you're confident that this is true (which I'm not):
# Printing a stack trace leaks information about the key.
then decrypt can try/except/raise a new exception that includes all information about the failure you consider to be informative but non-compromising.
Don't roll your own temp handling
Particularly since you care about security, don't do this:
'/tmp/whatever' + str(random.randint(0, 1e6))
Instead, read https://docs.python.org/3/library/tempfile.html . | {
"domain": "codereview.stackexchange",
"id": 40323,
"tags": "python, security, type-safety"
} |
Saint-Exupery describes visit to plateau where he easily finds several meteorites. Is this realistic? | Question: In "Wind, Sand and Stars" Antoine de Saint-Exupéry describes a visit to a plateau where it is extremely easy to discern meteorites from stones, since there are no stones.
I picked up one and then a second and then a third of these stones, finding them at about the rate of one stone to the acre. And here is where my adventure became magical, for in a striking foreshortening of time that embraced thousands of years, I had become the witness of this miserly rain from the stars. The marvel of marvels was that there on the rounded back of the planet, between this magnetic sheet and those stars, a human consciousness was present in which as in a mirror that rain could be reflected.
Is this realistic? Are there places on Earth where you can find meteorite on 5 minute walk?
edit: Judging by this NASA article looking for meteorites in Antarctica is relatively hard.
Answer: This is only a partial answer; meteor falls are pretty common, so their density will be high in flat areas without a lot of flowing water, but they're hard to identify without a good eye or at least a sensitive metal detector.
Hopefully a more quantitative source for the areal density of meteorite finds can be found. The numbers are there, and if all the ordinary rocks in the area that Saint-Exupéry describes are light in color, then picking up every dark rock and checking whether it has indications of burning or melting might make this possible. Of course, if it were this easy to collect them there, then it seems others would have gone and collected a lot more. But those people have no incentive to document it, since meteorites can apparently (often) be sold for cash.
Note that the meteors shown in the image from the PRI article linked below are not dark but the dearth of dark meteorites found in Antarctica is a different story (1, 2) though there's been a lot of progress since 1986 when the document you link to was published.
According to Arizona State University's Meteorites page Meteorite Locations:
Since there is an estimated one meteorite fall per square kilometer per year, geologically stable desert regions can show significant accumulations of meteorites. Some desert regions have dozens of different meteorites per square kilometer, though they can be difficult to distinguish from normal terrestrial rocks.
PRI's Want to find a meteorite? Antarctica might be the best place to look says:
This winter, Lanza has been a rookie member of the ANSMET (the Antarctic Search for Meteorites) field team. For 40 years, the project has sent teams of scientists to the bottom of the globe to recover meteorites from all over the solar system, including chunks of the moon, comets, even Mars.
On this trip? Lanza and her colleagues recovered a total of 569 meteorites.
“They are just waiting for us to come and find them,” Lanza says. “You can imagine that blue ice ... and then you just look and there's some dark spots on them. And those are rocks and frequently those rocks are from space.”
Meteorites fall to Earth all the time, but Antarctica is a particularly good place to find them.
“We're lucky with Antarctica,” Lanza says. “It's this big white sheet covered in glaciers. And so these meteorites just become embedded in the ice and start flowing with the glacier. And in some places the glacier will run into something — like the mountain range that we were in — the Miller range. It will slow down and then the winds ... will start to remove some of that ice and that acts to concentrate the meteorites in these locations. So we can actually go there and find many more meteorites than you might imagine would fall in a single location.”
Nina Lanza knows space rocks. In her day job as a staff scientist at Los Alamos National Laboratory, she operates the Curiosity Rover’s ChemCam, using a rock-vaporizing laser to analyze the Martian surface. | {
"domain": "astronomy.stackexchange",
"id": 4678,
"tags": "meteorite"
} |
Defining bound on input signal to test accumulator for BIBO stability | Question: For an accumulator, defined as shown in the image below, why would I define $B_x=1$? $u[n]$ is defined at zero so my (possibly misguided intuition) is telling me that I'd choose $B_x = 0$ to not eliminate that point.
Answer: $B_x$ is a bound on the magnitude of the input signal. So if $x[n]=u[n]$ you have $|x[n]|\le1$ and, consequently, $B_x=1$ is a valid bound on $|x[n]|$. $B_x=0$ would mean that the input signal is zero. | {
"domain": "dsp.stackexchange",
"id": 7295,
"tags": "discrete-signals, stability"
} |
Behavior of derivative on Lorentz transformations of spinors | Question: I'm currently working through Supersymmetry Demystified by Patrick Labelle and one passage in particular confuses me.
Specifically, if $\eta$ and $\chi$ are right and left Weyl spinors respectively, the Weyl equation:
$$
i\bar{\sigma}^\mu \partial_\mu \chi = m\eta
$$
with $\bar{\sigma}^\mu \equiv \left(1,-\vec{\sigma}\right)$, shows that $i\bar{\sigma}^\mu \partial_\mu \chi$ transforms as a right chiral spinor under Lorentz transformations. The author then states that therefore the expression:
$$
\left( \partial_\mu \phi \right) \bar{\sigma}^\mu \chi
$$
transforms in the same (right chiral) representation of the
Lorentz group, regardless of the fact that the derivative acts on a complex scalar field $\phi$ rather then the spinor $\chi$. I can't find an argument anywhere justifying that statement. Why does the fact that the derivative acts on a scalar field instead of the spinor not influence its behavior under Lorentz transformations?
Answer: For a complex scalar $\phi$ and left-chiral Weyl spinor $\chi$, the quantity $\partial_\mu (\phi \bar{\sigma}^\mu \chi)$ transforms in some representation of the Lorentz group. Expanding out the product,
$$(\partial_\mu \phi) \bar{\sigma}^\mu \chi \text{ and } \phi\, \bar{\sigma}^\mu \partial_\mu \chi$$
transform in the same representation of the Lorentz group. Since a scalar transforms trivially, the latter transforms in the same representation as $\bar{\sigma}^\mu \partial_\mu \chi$, which we established transforms like a right-chiral spinor.
"domain": "physics.stackexchange",
"id": 64445,
"tags": "special-relativity, supersymmetry, representation-theory, spinors"
} |
Can stress-induced cortisol reduce diabetes type I? | Question: Let us consider a patient with type 1 diabetes.
I think it would be safe to say that diabetes is an autoimmune disease. Now let us suppose this patient comes under stress. As far as I know, cortisol is a stress hormone and is released when a human is under stress. Now cortisol, as far as I know, increases the amount of blood sugar and at the same time is an immunosuppressant.
I would like to know whether this stress-induced cortisol can suppress the immune system, which has gone haywire, and help against diabetes - thereby rendering a suitable amount of stress useful.
Answer: Short answer
Stress is not a good candidate to treat autoimmune disease like diabetes type I.
Background
Stress indeed weakens the immune system, and diabetes type I is indeed caused by an auto-immune response against the insulin-producing beta cells in the islets of Langerhans. However, autoimmune diseases like diabetes type I need chronic treatment (i.e., insulin treatment and dietary changes).
Even if stress would help reduce diabetes type I (admittedly, theoretically it could), chronic stress would also reduce the immune response against viral and bacterial infections, it would reduce wound healing, and it would perhaps even increase the risk of contracting cancer (Glaser & Kiecolt-Glaser, 2005), as well as increase the risk of depression and cardiovascular disease (Cohen et al., 2007).
Hence, it is safe to say that deploying chronic stress as a treatment against auto-immune disease effectively means substituting one life-threatening condition with another.
References
- Cohen et al., JAMA (2007); 298(14): 1685-7
- Glaser & Kiecolt-Glaser, Nature Rev Immunol (2005); 5: 243-51 | {
"domain": "biology.stackexchange",
"id": 4372,
"tags": "autoimmune, diabetes-mellitus"
} |
Why sample is placed after the interferometer in FTIR? | Question: In all image of schematics of FTIR I found (Wikipedia for example), the sample is placed after the interferometer:
Wouldn't this miss some emergent effects, such as photo-induced absorption, since the light coming out of the interferometer is missing some wavelengths due to interference?
Consider a hypothetical system with its first two excited states at 2 eV and 3 eV. When measuring absorbance with 620 nm (2 eV) and 1240 nm (1 eV) light, if both wavelengths are measured at the same time, it would show absorbance at both wavelengths, due to the ground-state excitation (0 → 2 eV) and photo-induced absorption (2 eV → 3 eV). However, if they are measured separately, it would only show absorbance at 620 nm, because the ground state wouldn't absorb 1240 nm.
Answer: Photo-induced absorption is an example of non-linear optical phenomenon: the response of the medium to two simultaneously present harmonic waves is not simply a sum of responses this medium would have to those waves if present individually. When both are present, one wave interacts, via the material medium, with another and a completely different state of medium and its behaviour is obtained.
This phenomenon is not considered to be present in ordinary absorption or reflection spectra, which is the main use of FTIR, either because it simplifies the analysis of the experiment (even if not entirely true), or due to the fact that the light wave is known to be not strong enough to make this phenomenon substantial.
In the first approximation, optical media usually behave linearly, i.e. different harmonic waves do not interact with each other. This is, of course, an idealization that is sometimes true, sometimes not. Especially if the waves are strong and close to allowed transitions in the medium, the idealization may cease to be valid. | {
"domain": "physics.stackexchange",
"id": 55145,
"tags": "spectroscopy, interferometry"
} |
cm^-1 and ns^-1 relationship | Question: In my last Phys.SE question Emilio Pisanty mentioned this relationship $1 cm^{-1} = 10^n c^{-1} ns^{-1}$. I was wondering where this relationship came from. Does anybody know? Second of all, I am not sure how from this relationship, it can be derived that in natural units, $1 cm^{-1} = 30 ns^{-1}$. Can I have a clear derivation of this? I understand in natural units, the c vanishes, so is should be $1 cm^{-1} = 10^n ns^{-1}$, not $30 ns^{-1}$. So how is this derived?
Before downvoting, please tell me what you think is wrong with the question.
Answer: Let's do natural units the other way around. Suppose that we've always worked with natural units, we measure time and distances in the same units and then some crazy physicist comes along who puts in factors of c in equations, e.g.
$$ds^2 = dt^2 - dx^2 - dy^2 - dz^2 \longrightarrow c^2 dt^2 - dx^2 - dy^2 - dz^2$$
He then defines a meter and a second such that:
$c = 1 = 299792458 \text{ meters/second} $
This then means that:
1 meter = 1/299792458 seconds.
Or:
cm = 1/29979245800 seconds
Therefore:
$$\text{cm}^{-1} = 29979245800 \text{ second}^{-1} = 29.9792458 \text{ ns}^{-1} $$ | {
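The same arithmetic, spelled out numerically (a trivial check of the conversion above):

```python
c_cm_per_s = 2.99792458e10   # speed of light in cm/s (exact by definition)

# with c = 1, lengths are times: 1 cm = 1/29979245800 s,
# so an inverse centimetre is an inverse time
one_inv_cm_in_inv_s = c_cm_per_s           # s^-1
one_inv_cm_in_inv_ns = c_cm_per_s * 1e-9   # convert s^-1 -> ns^-1
print(one_inv_cm_in_inv_ns)                # 29.9792458, i.e. ~30 ns^-1
```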
"domain": "physics.stackexchange",
"id": 25302,
"tags": "homework-and-exercises, units, unit-conversion"
} |
How is the GT field in a VCF file defined? | Question: As my question in SO was closed and asked to be posted in this forum, I am posting it here.
I am not from the bioinformatics domain. However, for the sake of analysis, I am trying to pick up certain basics related to the GT field in the VCF file.
I know we have a field called Alleles. May I know under what circumstances GT takes a certain value and how they are called? Can you confirm my understanding?
Ref Alt GT Name
A A 0/0 Homozygous
A G 0/1 Heterozygous (does 1/0 also mean the same?) What's this 0 and 1 actually?
A [C,CA] ?? ??
?? ?? 1/1 HOM_ALT? How and why?
Can the experts here help me to fill the question marks and also help me understand with the Ref and Alt combinations when a genotype can take a value of 0/0 or 1/0 or 1/1 or 1/2 etc. and what are the names for those values? Like when it is called HOM_ALT etc.?
Any simple explanation for beginner like me (with no background in bioinformatics/biology) can be helpful.
Answer: You can get most of the info from this paper. See Fig. 1 and the surrounding text. Quoting from there, "GT, genotype, encodes alleles as numbers: 0 for the reference allele, 1 for the first allele listed in ALT column, 2 for the second allele listed in ALT and so on."
In your case, the reference allele, here a single nucleotide, is A. When the alternate allele is also A, the genotype GT is reference, or 0. There are 2 copies of each allele in the human genome in non-sex chromosomes chr1-chr22, hence 0/0, or homozygous reference or HOM_REF.
When ALT=G, and the GT column is 0/1, this means that you have 1 reference allele (0), and 1 alternate allele (1). This means that you have A on one copy of this locus, and G on another. The convention is to write the GT field in ascending order, so 0/1 rather than 1/0. This is called heterozygous, or HET.
When ALT=C,CA, the GT is probably 1/2, because there are 2 alternate alleles, and I assume we continue with the same chromosome present in 2 copies. This means there are no reference alleles here at all, only alternate alleles. It is a heterozygous genotype composed of two different ALT alleles, or HET_ALT. Note that it is not enclosed in square brackets in the vcf file format: A <tab> C,CA <tab> 1/2 ....
Finally, these are some examples of HOM_ALT:
A C 1/1
A G 1/1
A CA 1/1
This means that the same ALT allele (either C, or G, or CA) is present in 2 copies. There is no reference allele present. This is called homozygous alternate genotype.
In general, the name homo means the same, and hetero means different, in the context of genotypes.
REFERENCES:
Danecek P, Auton A, Abecasis G, et al. The variant call format and VCFtools. Bioinformatics. 2011;27(15):2156-2158. doi:10.1093/bioinformatics/btr330 : https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3137218/
SEE ALSO:
VCF - Variant Call Format
What Does Genotype ("0/0", "0/1" Or "1/1") In *.Vcf File Represent?
Difference between 0/0 and ./. for genotype in VCF
1/2 in VCF genotype field? | {
"domain": "bioinformatics.stackexchange",
"id": 1557,
"tags": "vcf, file-formats"
} |
Is the Quantum Annealing model universal? | Question: I understand that the D-Wave Quantum Annealer they have today is not a universal quantum computer.
Is the reason that it's not universal because of the lack of error correction and lack of all-to-all qubit connectivity?
Or is the reason it's not universal is that the Quantum Annealing model is not universal to begin with?
Answer: The reason is that quantum annealing is intended for solving quadratic unconstrained binary optimization (QUBO) problems. As there are tasks which cannot be converted to QUBO (e.g. Monte Carlo simulations), quantum annealers are not universal quantum processors.
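To make "QUBO" concrete, here is a hypothetical two-variable toy instance (a two-node max-cut), solved by brute force; an annealer physically searches this same kind of energy landscape:

```python
from itertools import product

# Toy QUBO: minimize sum of Q[i, j] * x_i * x_j over binary x.
# This instance rewards x0 != x1 (a two-node max-cut).
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}

def energy(x):
    return sum(coef * x[i] * x[j] for (i, j), coef in Q.items())

best = min(product((0, 1), repeat=2), key=energy)
print(best, energy(best))  # → (0, 1) -1
```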
The converse is true: any universal quantum processor can simulate a quantum annealer. | {
"domain": "quantumcomputing.stackexchange",
"id": 3585,
"tags": "annealing"
} |
Nose-Hoover thermostat: real-time average problem? | Question: Many articles as well as the following presentation
MD Ensembles and Thermostats, page 23,
claim that the principle of the Nosé-Hoover thermostat is removing the "real-time average" problem.
I can see that for a canonical ensemble we need to extend the classical equations of motion according to Newton's 2nd law with some kind of a heat-bath. Further, the extended system must ensure ergodicity to be useful for MD simulations.
But what is meant by the "problem of real-time average"?
How does the Sundman time-transformation help here?
Any ideas?
Answer: This point arises because of the history associated with the derivation of this method.
The original paper of Nosé involved a transformation of variables, including a scaling of the time, according to $\mathrm{d}t'=s \mathrm{d}t$ where $s$ is a dynamical variable. Therefore, solving the equations (in $t'$) and sampling the trajectories at regular intervals of $t'$ does not correspond to sampling at regular intervals of "real" time $t$: the interval depends on the value of $s$. To sample from a canonical ensemble, you need regular intervals of $t$, independent of the value of $s$, and therefore should sample the scaled-variable trajectories at varying intervals of $t'$, which is somewhat inconvenient. This is the problem being referred to.
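For reference (quoted from memory, so check it against Nosé's original paper), the extended Hamiltonian whose dynamics in the scaled variables generates the canonical ensemble is

$$
H_{\mathrm{Nos\acute{e}}} = \sum_i \frac{\mathbf{p}_i'^2}{2 m_i s^2} + V(\mathbf{q}) + \frac{p_s^2}{2Q} + g k_B T \ln s ,
$$

where $Q$ is the thermostat "mass", $p_s$ is the momentum conjugate to $s$, and $g$ is fixed by the number of degrees of freedom.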
The subsequent reformulation by Hoover eliminated this problem: there is no longer a scaled time variable. Solution of the resulting equations as a function of time has been shown to generate states sampled from the correct ensemble (assuming no ergodicity problems, which may be tackled in many cases by Nosé-Hoover chains, but this is peripheral to the question). There is a slight drawback, in that the resultant dynamics is non-Hamiltonian, but our understanding of non-Hamiltonian statistical mechanics has progressed a lot since these algorithms were formulated, and it is not a major problem.
As I understand it, the Sundman time transformation is the one I mentioned above, so I guess it is part of the problem, not part of the solution. However, an alternative method of solution to the problem, the Nosé-Poincaré algorithm, was proposed: you can read up on it here, and the original paper is by SD Bond, BJ Leimkuhler and BB Laird, J Comput Phys, 151, 114 (1999). | {
"domain": "physics.stackexchange",
"id": 57334,
"tags": "statistical-mechanics, molecular-dynamics"
} |
prove: $[p^2,f] = 2 \frac{\hbar}{i}\frac{df}{dx}p - \hbar^2 \frac{d^2f}{dx^2}$ | Question: I need to prove the commutation relation,
$$[p^2,f] = 2 \frac{\hbar}{i}\frac{\partial f}{\partial x} p - \hbar^2 \frac{\partial^2 f}{\partial x^2}$$
where $f \equiv f(\vec{r})$
and $\vec{p} = p_x \vec{i}$
I know
$$[AB,C] = A[B,C] + [A,C]B$$
Applying this, I get
$$[p^2,f] = p[p,f] + [p,f]p$$
where $p = \frac{\hbar}{i}\frac{\partial}{\partial x}$, and $[p,f] \equiv pf - fp$
using a trial function, $g(x)$, I get
$$[p^2,f] = p[p,f]g + [p,f]pg$$
$$= \frac{\hbar}{i}\frac{\partial}{\partial x}\left[\frac{\hbar}{i}\frac{\partial fg}{\partial x} - f\frac{\hbar}{i}\frac{\partial g}{\partial x}\right] + \left[\frac{\hbar}{i}\frac{\partial fg}{\partial x} - f\frac{\hbar}{i}\frac{\partial g}{\partial x}\right] \frac{\hbar}{i}\frac{\partial}{\partial x}$$
using the product rule
$$ = -\hbar^2 \frac{\partial}{\partial x}\left[g \frac{\partial f}{\partial x} + f \frac{\partial g}{\partial x} - f \frac{\partial g}{\partial x} \right] + \left[g \frac{\partial f}{\partial x} + f \frac{\partial g}{\partial x} - f \frac{\partial g}{\partial x} \right] \left(\frac{\hbar}{i}\right)^2 \frac{\partial}{\partial x}$$
cancelling the like terms in the brackets gives
$$= -\hbar^2 \frac{\partial}{\partial x} \left[g \frac{\partial f}{\partial x}\right] - \left[g \frac{\partial f}{\partial x}\right]\hbar^2 \frac{\partial}{\partial x}$$
using the product rule again gives
$$ = -\hbar^2 \left[\frac{\partial g}{\partial x} \frac{\partial f}{\partial x} + g \frac{\partial^2 f}{\partial x^2} \right] - \left[g \frac{\partial f}{\partial x}\right] \hbar^2 \frac{\partial}{\partial x}$$
$\frac{\partial g}{\partial x} = 0$, so
$$ = -\hbar^2 g \frac{\partial^2}{\partial x^2} - \hbar^2 g \frac{\partial f}{\partial x} \frac{\partial}{\partial x}$$
Substituting the momentum operator back in gives
$$ = -\hbar^2 g \frac{\partial^2}{\partial x^2} - \frac{\hbar}{i} g \frac{\partial f}{\partial x} p$$
The trial function, $g$, can now be dropped,
$$ = -\hbar^2 \frac{\partial^2}{\partial x^2} - \frac{\hbar}{i}\frac{\partial f}{\partial x}p$$
But this is not what I was supposed to arrive at. Where did I go wrong?
Answer: I didn't read your answer, but let's think about just computing the operator $\partial_x^2 f$. First we need to compute the operator $\partial_x f$. Now I am saying "the operator" because we are viewing $\partial_x f$ as a composition of first multiplying by $f$ and then taking the derivative. By the product rule, we know $\partial_x f = (\partial_x f) + f \partial_x$, where by $(\partial_x f)$, I really do just mean multiplication by the derivative of $f$.
Now let's try to compute $\partial_x^2 f$. It is $\partial_x [(\partial_x f) + f \partial_x] = (\partial_x^2 f) + (\partial_x f) \partial_x + (\partial_x f) \partial_x + f \partial_x^2 = (\partial_x^2 f) +2(\partial_x f) \partial_x + f \partial_x^2$.
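As a quick sanity check of that expansion (an optional verification, not part of the original answer), sympy confirms the operator identity when applied to a trial function $g$:

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")(x)
g = sp.Function("g")(x)

# Apply the operator d^2/dx^2 . f to a trial function g ...
lhs = sp.diff(f * g, x, 2)
# ... and compare with (f'') g + 2 f' g' + f g''.
rhs = sp.diff(f, x, 2) * g + 2 * sp.diff(f, x) * sp.diff(g, x) + f * sp.diff(g, x, 2)
assert sp.expand(lhs - rhs) == 0
```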
Then $[\partial_x^2,f] = \partial_x^2f-f \partial_x^2 = (\partial_x^2 f) +2(\partial_x f) \partial_x$. If you understand this then you should get the right answer. You just need to put in the appropriate $i$'s and $\hbar$'s. | {
"domain": "physics.stackexchange",
"id": 12271,
"tags": "quantum-mechanics, homework-and-exercises, commutator"
} |
Equivalent of mass*force constant in LC oscillations | Question: Comparing the L-C oscillations with the oscillations of a spring-block system (force constant of spring = k and mass of block = m), to what electrical expression is the physical expression mk equal to?
Answer: Method 1 (analysis):
In a spring block system, we have the following quantities:
$m =$ source of inertia, resistance to change of velocity
$k =$ spring constant, spring stores the kinetic energy as potential energy
$x =$ displacement
$v =$ velocity, the rate of change of displacement
$$$$
In an LC oscillator, we have the following quantities:
$L =$ inductance, resistance to change in current
$C =$ capacitance, stores the electrical energy as potential energy
$q =$ charge
$i =$ current, the rate of flow of charge
$$$$
If you try to draw an analogy, you can obviously see that the mass is analogous to inductance. It isn't hard to draw an analogy between displacement and charge either. Even the analogy between velocity and current is obvious.
The tricky part is the relation between the spring constant and capacitance.
A higher spring constant indicates that the spring is more stiff (does not allow too much displacement for given energy). A higher capacitance indicates that it isn't that stiff and can store more charge for a given potential difference.
Therefore, $k$ is analogous to $\frac{1}{C}$.
$$$$
Method 2 (equations):
In a spring block system, we have the following equations:
$\text{Kinetic Energy} = \frac{1}{2}mv^2$
$\text{Potential Energy} = \frac{1}{2}kx^2$
$$$$
In an LC oscillator, we have the following equations:
$\text{Magnetic Energy} = \frac{1}{2}Li^2$
$\text{Electrical Energy} = \frac{1}{2}\frac{q^2}{C}$
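With these correspondences ($m \leftrightarrow L$, $k \leftrightarrow 1/C$), the combination asked about in the question maps as

$$
mk \;\longleftrightarrow\; \frac{L}{C}, \qquad \omega = \sqrt{\frac{k}{m}} \;\longleftrightarrow\; \omega = \frac{1}{\sqrt{LC}}.
$$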
You can clearly identify the pairs of equivalent/similar quantities by comparing variables in the two sets of equations. | {
"domain": "physics.stackexchange",
"id": 37741,
"tags": "homework-and-exercises, electric-current, harmonic-oscillator, inductance"
} |
What does it mean "less than identity" in the operator sum representation? | Question: In Quantum Computation and Quantum Information by Michael Nielsen and Isaac Chuang, section 8.2.3, $\mathcal{E}(\rho)=\sum_{k}E_k\rho E_k^{\dagger}$ gives the operator-sum representation. In general, it requires $\sum_k E_k^{\dagger} E_k\leq I$. But, what does the inequality mean here? Does it mean every entry of the matrix is a nonnegative real value up to 1?
Thanks
Answer: Matrix inequalities of the form $A\ge B$ should be read as
$$
A-B\ge 0\ ,
$$
which in turn means that all eigenvalues of $A-B$ are greater than or equal to zero.
In the given case, $M\le I$ means that all eigenvalues of $M$ are less than or equal to one.
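A quick numerical illustration, using the amplitude-damping Kraus operators as an assumed concrete example:

```python
import numpy as np

# Amplitude-damping Kraus operators, a standard trace-preserving example;
# gamma is the decay probability.
gamma = 0.3
E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

M = E0.conj().T @ E0 + E1.conj().T @ E1  # sum_k E_k^dagger E_k

# "M <= I" means every eigenvalue of I - M is nonnegative.
eigvals = np.linalg.eigvalsh(np.eye(2) - M)
print(np.all(eigvals >= -1e-12))  # → True (here M equals I exactly)
```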
(Note that this convention for $\ge$ used on matrices depends on the field. In other fields, "$\ge0$" might refer to a component-wise property, and $\succeq0$ might be used for positivity of the eigenvalues.) | {
"domain": "quantumcomputing.stackexchange",
"id": 1225,
"tags": "quantum-operation, kraus-representation"
} |
Home experiments to measure the RPM of a pedestal fan without special equipment? | Question: Is it possible to determine, to an approximate degree, the revolutions per minute of a fan, for example a pedestal fan pictured below, without using some electronic/mechanical measuring device?
One thing that comes to mind is the markings on an old record player's turntable - can that concept somehow be used to make marks on a fan blade to determine its RPM? Or can something called a "strobe tuner" help:
Can I maybe make markings on a fan with a marker and figure out the RPM using nothing else but a stopwatch? Or maybe some other DIY technique that does not require the purchase of any measuring device?
P.S. I'm not entirely sure if this is a physics or engineering question, so please feel free to move it to the appropriate site (I checked all 58 stackexchange sites and only the Physics site seemed to fit the question)
Answer: Let me first list all of the possibilities I considered that I later rejected. This is far from exhaustive, and I'm looking forward to seeing other people's creativity.
Bad Ideas
Sit on a tire swing with the fan pointing to the side. Point the fan up, measure speed of rotation of the system on the tire swing.
Get a laser or collimated flashlight. Point it into the fan at a single point. On a wall on the other side of the fan, you will have a dot of light that is flashing, measuring the rate of the flashing could be an easier problem.
Attach springs to the tip of all the blades. Observe how far out the acceleration deforms them, apply Hooke's Law to find acceleration, convert to angular speed.
Now, I am almost sure that the experimenter doesn't have the tools to execute any of these methods very well. Not even one. I'll try to break this down as to why.
Firstly, do we know the moment of inertia of the fan? No. Do we know the moment of inertia of the tire swing with a person and a fan on it? No. Is it even constant? No. I'm not saying we can't figure these things out, but it's an absurdly inferior method that will get terrible data.
On to the laser method. How are we going to measure the flash rate of the laser? I thought endlessly about this problem. Generally, a reference would be good, or if you could use electronics you could nail down the speed almost exactly and very easily. But I don't think anything is available that will work.
Now, the spring idea... where to begin? The measurement of the deformation length is error-prone. The weight of the springs themselves will affect the speed of the fan. What spring do you have with appropriate characteristics anyway?
My best proposal
I'm hoping you can take off the cover. If you can't take off the cover, I hope you can take off some part of it, so that you can get a protruding shaft. I think the best way to make this measurement with household stuff would be to:
Get a part of the shaft isolated
Tape or attach the end of a thin string to it
Turn the fan on with a stopwatch
Time how long it takes for the fan to completely wrap the entire string around the shaft with a stopwatch
Count the number of times the string has wrapped around the shaft
There you go, you have a number of turns per some amount of time. Ideally you would use a very light string that offered little to no resistance when pulled, as in, have it loosely laid out on the floor. Fishing wire could possibly be very good. You will want the acceleration time to be small compared to the entire measured time.
Some other (not terrible) possibilities
It occurred to me that acoustic methods might have some merit here. Get something that the blades can smack against, like when you take a pencil and stick it in the fan. Open up Windows sound recorder (accessories -> entertainment -> sound recorder), or a program like Audacity. Use some sound editing program and zoom in really tightly on the sound. See if you can identify a periodic shape that corresponds to a single hit. Count the peaks over a given time frame. Once again, you have number of rotations (or 1/3rd rotations) per unit time. If you already have an educated guess as to the frequency, then identifying the acoustic pattern from individual hits might not be very bad, not to mention, there is a lot of design flexibility in this experiment and computers should have a sufficiently high sample rate.
I think the ideal would be some kind of visual timing mechanism like the OP suggests. I'd imagine that a mechanical reference could be of use. Like if you had another fan that you knew the speed of, you could place it in front of the unknown fan and adjust its speed until you saw some patterns that indicated they were in sync. Yes, I'm lacking a lot of what's required to do this effectively, but maybe someone else can offer better advice.
The Experiment
Half of the papers on my desk are blown away. I'm getting complaints about the wretched sound of pen on fanblades, and people in my office are not too happy with me right now. But this is all in the name of physics! I am editing to present my experimental results. I used the acoustic method to determine the speed of my fan.
Firstly, my experimental apparatus is the Galaxy 20 inch model 4733 fan. It has 5 blades. I can't find any shopping results for you, but maybe someone else can. Here is a pretty good quality demo of the Galaxy 20" fan on youtube. And this video specifically states they have the 4733 model that I'm using. Why do people upload youtube videos of these things?! Do you have to "unbox" every single thing you buy??
Ok, moving on. I'm using the Audacity program and the microphone from a Microsoft Lifechat headset.
The fan has 3 settings, plus 0 for off. Setting 1 is the slowest and setting 3 is the highest. It produces quite a good breeze and has served me well. To start off, I'll share a waveform I recorded with it on setting 3 and setting 1 with me doing nothing else to it.
As you can see, this is not too useful. It makes a sound, but there's no way to distinguish peaks. Maybe it has a frequency that reflects the speed of the fan, but I can't be sure (and I haven't had much luck with the spectrum visualizations). You can see how the sound it makes is different between the two, and the 3rd setting is obviously louder, and the frequency is obviously different, but we want actual numbers.
So I put a plastic pen in it (the butt of the pen). Now, you might not want to try this at home (like I just did), but I kind of had to play with the angle to get it to not miss blades. It's very easy for it to jump and miss one, which would mess up the count. I had to press kind of hard and it was rather loud. But I got results. Here are the waveforms for 0.5 seconds, and my markup in order to count the "hits". I also provide the actual count in the image.
You can check my work for the count itself. I'm also happy to upload some mp3 files, but I'm not sure where I'll host them right now. The above image was made with the high-tech research software MS Paint. I'll give answer denoted $rpm_i$, where $i$ is the number of the setting, and the number of hits above will be denoted $hits$. I'll take the error in each hit count to be $\pm 1$ hit. The formula and reported results are as follows. Remember, it has 5 blades.
$$rpm_i = \left( hits_i \pm 1 \right) \times \frac{1\ \text{turn}}{5\ \text{hits}} \times \frac{1}{0.5\ \text{s}} \times \frac{60\ \text{s}}{1\ \text{min}}$$
$$rpm_1 = 456 \pm 24\ \text{rpm}$$
$$rpm_2 = 624 \pm 24\ \text{rpm}$$
$$rpm_3 = 864 \pm 24\ \text{rpm}$$
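The unit conversion above is simple enough to script; this sketch reproduces the three reported values from the hit counts (19, 26 and 36):

```python
def rpm_from_hits(hits, blades=5, window_s=0.5, hit_err=1):
    """Convert a blade-hit count over a time window to RPM and its error."""
    to_rpm = 60 / (blades * window_s)  # hits -> revolutions per minute
    return hits * to_rpm, hit_err * to_rpm

for hits in (19, 26, 36):
    rpm, err = rpm_from_hits(hits)
    print(f"{rpm:.0f} +/- {err:.0f} rpm")  # → 456, 624, 864, each +/- 24
```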
Power Consumption (addendum)
I used my KILL A WATT device to record power consumption for all the different speed settings. I'll denote this with $P_i$ but I need to explain a little about the difficulty in making this measurement. I believe my KILL A WATT to be fairly reliable and it gives stable power measurements for devices with constant power consumption. This is not quite true for the fan. When I first turn on the fan it consumes more power than after I leave it running for some time. The largest swing I observed was $50.5 W$ at max and $45.3 W$ at minimum for setting 1. This gives you another possible source of error. Since the experiment was performed with the fan on for a good while I'll report the lowest readings I have.
$$P_0 = 0\ \text{W}$$
$$P_1 = 45.3\ \text{W}$$
$$P_2 = 65.1\ \text{W}$$
$$P_3 = 97.1\ \text{W}$$
Now, I want to take just one quick second to apply the physics concepts of friction here. Dynamic friction between two solid bodies is often taken as a constant, and fluid friction is often taken as a power law, as in, $v^n$ where $n$ is most commonly from 1 (fully laminar) to 2. We can apply that here! I converted the previous speeds to rad/s and plotted the speed versus power consumption for all 3 "on" settings. Then, I guessed a certain offset and subtracted this value from the power consumption and applied a power fit to what was left. I realized after I did this that a constant power consumption does not correlate to a constant force (that would be linear), so I'm really just assuming some base power consumption for the device, and this yields a more perfect power fit for what's left just due to the mechanics of the motor. I found that I could get a better power fit by making this subtraction. The power fit had $R^2=1.0000$ for a constant offset from $8 W$ all the way to $13 W$ so I took the middle ground of $10.5 W$, which is to say, I made an educated guess that a constant power loss accounts for $10.5 W$ out of the consumed power. The power fit follows a satisfying $1.5$ power law, which is about what I expect for fluid friction.
$$P_i = 10.5\ \text{W} + 0.1343\,\omega_i^{1.4374}\ \text{W}$$
Lastly, I want to report the general intensity of the sound in my mp3 files. I need to put a disclaimer with this that it might not be accurate in any physical sense. I would want to ask an audio engineer about this issue - I don't know if the dB of an audio file represents the physical dB of the sound at the point the measurement was made. My guess is that this will depend on the recording device. Anyway, I want to give a dB measurement (I'll denote $A$) for the average peak for the pen hits on the fan blade, as per the audio file.
$$A_1 = -8\ \text{dB}$$
$$A_2 = 0\ \text{dB}$$
$$A_3 = -3\ \text{dB}$$
Did I have the microphone in different locations when I took these measurements? Yes, I did. If I had to guess, I would say that I had it about 3-4 inches away from the pen contact point when I did setting 1 & 2 and closer to 7-8 inches when I did the 3rd setting. Obviously the 3rd setting was louder to my ear, and I had to set the headset down when taking that measurement because holding both was getting difficult.
I offer this data because with my guesstimates you could potentially calculate the energy released in the sound wave on a hit (assuming the dB measure is a 'real' measurement). Then with a conversion efficiency (from mechanical to sound), estimate the energy dissipated in a hit. You could also take some generic values for fan motor efficiency to relate power consumption to friction forces. You could then use a tailored mathematical form for power consumption (like above) for the friction losses and apply the energy dissipation rate from the pen hits and estimate the speed loss due to the pen. It's just something good to keep in mind for future experiments, so that you can show that the process of measuring isn't affecting what is being measured too much. With my 20" fan I don't think it matters too much, but repeating the experiment on a smaller fan could benefit from these calculations, and in order to do so you should have the microphone located in a fixed position for all measurements (unlike what I did).
Comments
It's possible that the pen contact was slowing down the fan some. In fact, this is almost surely the case, but this is a rather large fan. It is also an old fan. I would expect these speeds to be less than someone with a newer one. I've taken power measurements that could be used for some other investigation if desired. One use would be to guesstimate the impact the application of the pen on the blades has on the fan speed. I have already taken a shot at putting together a picture of where the friction comes from and developed a formula for power consumption as a function of speed based on a breakdown between static and fluid friction. | {
"domain": "physics.stackexchange",
"id": 29323,
"tags": "measurements, home-experiment, angular-velocity"
} |
why rosdep instead of others? | Question:
Hi there,
Couldn't seem to find a lot of historical information on this. What is the rationale/genesis of catkin and rosdep? I'm not entirely sure yet what catkin is used for, but as far as I can tell, rosdep is a package manager. Why was it written, as opposed to using all the other package managers that already exist? Note that I'm not even really sure how old ROS is. Was it written before other notable package managers stabilized themselves? Just for my own edification.
Cheers.
Originally posted by nnnnnnils on ROS Answers with karma: 31 on 2015-08-01
Post score: 3
Answer:
tl;dr: because synaptic doesn't work on OSX (for instance), and dependency foo may be named foo-dev-1.2.3 on one platform, but libfoo-bar on another. rosdep hides those differences. See also the introductory note in the documentation.
[..] as far as i can tell, rosdep is a package manager
Almost: it is a meta package manager. Instead of working with packages directly, it asks the package manager of the platform it is run on to install packages for it. So instead of working with packages, it works with package managers (that work with packages).
Why was it written, as opposed to using all the other package managers that already exist?
Because it allows you to hide the differences between the various package managers on the various platforms you want to support. This abstraction does not only hide the fact that different platforms use different package managers (with different names, different options, different behaviour, etc), but also the differences in package naming that exist between platforms.
Instead of having to specify all dependencies by name for each and every platform that you want your package to work on (as a developer), you can now use a single key from the rosdep rule db, which is translated into the name of that dependency on a specific platform. The platform's package manager is then asked to check for / install that name.
Without this abstraction, package developers would need to lookup the name of system dependencies for every platform they'd wish to support, and package manifests would have to contain that information. Additionally, extending ROS support to a new platform would require updating all package manifests to add the names of all the system dependencies existing packages use. Even for relatively small numbers of packages, this is impractical (if not infeasible). With rosdep, we 'only' need to add new mappings to the db, and the system takes care of the rest.
As a random example, the following are the mappings for apr (from rosdep/base.yaml):
apr:
arch: [apr, apr-util]
cygwin: [libapr1, libaprutil1]
debian: [libapr1-dev, libaprutil1-dev]
fedora: [apr-devel, apr-util]
freebsd: [builtin]
gentoo: [dev-libs/apr, dev-libs/apr-util]
macports: [apr, apr-util]
opensuse: [libapr1, libapr-util1]
rhel: [apr-devel, apr-util]
ubuntu: [libapr1-dev, libaprutil1-dev]
I picked this randomly from the list, but I think it illustrates nicely how many different names there can be for a single dependency.
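The mapping step itself is conceptually simple; here is a hypothetical Python sketch of the lookup rosdep performs with the rule db above (real rosdep also handles OS versions and different installers):

```python
# A minimal, hypothetical sketch of the rosdep key lookup, using the
# "apr" rules quoted above from rosdep/base.yaml.
rules = {
    "apr": {
        "ubuntu": ["libapr1-dev", "libaprutil1-dev"],
        "fedora": ["apr-devel", "apr-util"],
        "macports": ["apr", "apr-util"],
    },
}

def resolve(key, platform):
    """Map a platform-independent rosdep key to platform package names."""
    return rules[key][platform]

print(resolve("apr", "fedora"))  # → ['apr-devel', 'apr-util']
```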
Edit: as @130s mentions, even on the same platform / OS, dependencies can have multiple different names, which is again something that rosdep can take care of for us. The Boost libraries are an example of this:
boost:
[..]
ubuntu:
[..]
natty:
apt:
packages: [libboost1.42-all-dev]
oneiric:
apt:
packages: [libboost1.46-all-dev]
precise:
apt:
packages: [libboost-all-dev]
[..]
These are all for Ubuntu, but depending on the actual version of the OS installed, require the boost key to be mapped to different system dependencies.
Originally posted by gvdhoorn with karma: 86574 on 2015-08-02
This answer was ACCEPTED on the original site
Post score: 14
Original comments
Comment by 130s on 2015-08-02:
+1. We might want to show that there can be variation under each platform (e.g. Ubuntu versions). boost has an example.
Comment by asherikov on 2020-12-17:
Why a meta package manager though, not a portable package manager, e.g., pkgsrc? Do you know of any resources where such architectural choices are described?
Comment by gvdhoorn on 2020-12-17:
I was not part of the design discussions, so I cannot answer your question.
Some things to keep in mind however:
it had to work on all targeted platforms: Linux, Windows and OSX
the decision was made in the early days of ROS (we're talking 2007/8-ish here)
rosdep does not build dependencies, it only asks something else to install them
WillowGarage, and later O(S)R(F) was (and still is) not in the packaging business, so reuse of something else is preferred over having to package things again | {
"domain": "robotics.stackexchange",
"id": 22353,
"tags": "rosdep"
} |
Does the rate of heat dissipation slow as the temperature differential between the heated object and the surrounding environment decreases? | Question: Does the rate of heat dissipation slow as the temperature differential between the heated object and the surrounding environment decreases, or is it constant?
To put this into context, picture the following scenario:
A spherical piece of copper heated to 100C is suspended in air at 20C.
Answer: In the vast majority of cases, the rate of heat transfer will increase with the temperature difference. This is generally true in conduction, convection and radiation. Sometimes it's linear, sometimes not, but it's usually monotonic (always increasing with $\Delta T$).
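For the copper-sphere scenario in the question, the lumped-capacitance (Newton cooling) model makes the slowdown explicit; the time constant below is an assumed, illustrative value, not a property of any particular sphere:

```python
import math

T_env, T0, tau = 20.0, 100.0, 600.0  # degC, degC, assumed time constant in s

def temperature(t):
    # Newton cooling: T(t) relaxes exponentially toward T_env.
    return T_env + (T0 - T_env) * math.exp(-t / tau)

# The instantaneous cooling rate -dT/dt = (T - T_env) / tau shrinks with time:
rate_start = (temperature(0) - T_env) / tau      # ~0.133 degC/s
rate_later = (temperature(1200) - T_env) / tau   # ~0.018 degC/s after 20 min
print(rate_later < rate_start)  # → True
```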
You can find a few specialized contradictions.
For example: the Leidenfrost effect. In that case the cooling rate of a surface by the boiling of a liquid will increase with temperature to a point, before dropping. The boiling process becomes so intense that a layer of pure vapor forms between the solid and the liquid, acting as an insulator and actually reducing the total heat transfer rate.
I don't know of any other contradictions, but would be keen to hear from others on the subject! | {
"domain": "physics.stackexchange",
"id": 20889,
"tags": "thermodynamics"
} |
What should be written to fixed frame, rviz | Question:
I read rviz/UserGuide but I still don't understand what the fixed frame means. In my rviz, map is implicitly there. In some tutorials they add /camera_depth_optical_frame. Now I am playing some bag file which publishes topic /base_scan [sensor_msgs/LaserScan] and rviz gives me the global status error: Fixed Frame [map] does not exist. So what should be written in the fixed frame?
Originally posted by sykatch on ROS Answers with karma: 109 on 2015-12-17
Post score: 0
Answer:
The fixed frame is what you consider not moving in the world, i.e., fixed. It is what rviz projects all data into. For your example, you might just choose the frame that the laser is published in. However, if it is attached to a moving base, this should be a frame in the world, e.g., odom or map (if these are available).
Originally posted by dornhege with karma: 31395 on 2015-12-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23243,
"tags": "ros, rviz, fixed-frame"
} |
Can one derive the Schrödinger equation from probability density arguments? | Question: My interest is regarding a probability density $\psi^\dagger \psi: \mathbb{R}^2 \to [0,\infty)$, where $\psi:\mathbb{R}^2 \to \mathbb{C}$. The average position is
$$
\bar{x}=\int_{-\infty}^\infty x (\psi (x,t))^\dagger \psi(x,t)\,\mathrm{d}x \tag{1}
$$
and the second time derivative of the average position (the average acceleration) is:
$$
\frac{\mathrm{d}^2}{\mathrm{d}t^2} \bar{x} = \frac{\mathrm{d}^2}{\mathrm{d}t^2}\int_{-\infty}^\infty x (\psi (x,t))^\dagger \psi(x,t)\,\mathrm{d}x \tag{2}
$$
yielding an equation of motion.
Since the Schrödinger equation is the equation of motion of quantum mechanics, how can I get to it form (2)? Is it even possible?
Answer: The equations you have written down are general definitions for the average position of a quantum particle and its derivatives. They apply to any single particle system and don't draw any kind of distinction between them.
The Schrödinger equation, however contains a potential term, which contains information about the forces acting on a particle in that specific situation. This information is not present in anything you have written down, so the Schrödinger equation cannot be derived without additional information.
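Concretely, for a particle in a potential $V(x)$ the averages obey

$$
\frac{\mathrm{d}}{\mathrm{d}t}\langle x\rangle = \frac{\langle p\rangle}{m}, \qquad
m\frac{\mathrm{d}^2}{\mathrm{d}t^2}\langle x\rangle = -\left\langle \frac{\partial V}{\partial x}\right\rangle,
$$

whose right-hand side contains exactly the potential term that expressions (1) and (2) alone do not determine.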
This additional information can be provided in the form of Ehrenfest's theorem, which is equivalent to the Schrödinger equation. | {
"domain": "physics.stackexchange",
"id": 75023,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, probability"
} |
Safe and simple strtol() | Question: Until today, in the few cases where I needed something like this, it had been in simple programs which only I used, and where I didn't care about security, so I used the simple atoi().
However, today I needed to do that for a somewhat more serious program, and I researched the many different ways to go from a string to a number: atoi vs atol vs strtol vs strtoul vs sscanf
None of those pleased me. strtol() (and its family) is the safest standard one and also a very fast one, but it is incredibly difficult to use, so I decided to write a safe and simple interface to it. strtoi() (libbsd) is easier to use than strtol(), but still a bit complicated. I decided to use fixed-width integers, as I do in all my code. I also wrote an interface for strtof() and company.
Requisites:
libbsd (The following code can be written in terms of strtol() instead of strtoi() if libbsd is not available, but it is more complex, and has a problem with errno which strtoi() doesn't have).
GNU C11 (not actually needed, but I use it for added safety/optimizations).
Signed integers:
strtoi_s.h:
#pragma once /* libalx/base/stdlib/strto/strtoi_s.h */
#include <errno.h>
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
__attribute__((nonnull, warn_unused_result))
inline
int strtoi8_s (int8_t *restrict num, const char *restrict str,
int base);
__attribute__((nonnull, warn_unused_result))
inline
int strtoi16_s (int16_t *restrict num, const char *restrict str,
int base);
__attribute__((nonnull, warn_unused_result))
inline
int strtoi32_s (int32_t *restrict num, const char *restrict str,
int base);
__attribute__((nonnull, warn_unused_result))
inline
int strtoi64_s (int64_t *restrict num, const char *restrict str,
int base);
inline
int strtoi8_s (int8_t *restrict num, const char *restrict str,
int base)
{
int rstatus;
*num = strtoi(str, NULL, base, INT8_MIN, INT8_MAX, &rstatus);
switch (rstatus) {
case 0:
return 0;
case ENOTSUP:
return rstatus;
case ECANCELED:
case EINVAL:
case ERANGE:
default:
return -rstatus;
}
}
inline
int strtoi16_s (int16_t *restrict num, const char *restrict str,
int base)
{
int rstatus;
*num = strtoi(str, NULL, base, INT16_MIN, INT16_MAX, &rstatus);
switch (rstatus) {
case 0:
return 0;
case ENOTSUP:
return rstatus;
case ECANCELED:
case EINVAL:
case ERANGE:
default:
return -rstatus;
}
}
inline
int strtoi32_s (int32_t *restrict num, const char *restrict str,
int base)
{
int rstatus;
*num = strtoi(str, NULL, base, INT32_MIN, INT32_MAX, &rstatus);
switch (rstatus) {
case 0:
return 0;
case ENOTSUP:
return rstatus;
case ECANCELED:
case EINVAL:
case ERANGE:
default:
return -rstatus;
}
}
inline
int strtoi64_s (int64_t *restrict num, const char *restrict str,
int base)
{
int rstatus;
*num = strtoi(str, NULL, base, INT64_MIN, INT64_MAX, &rstatus);
switch (rstatus) {
case 0:
return 0;
case ENOTSUP:
return rstatus;
case ECANCELED:
case EINVAL:
case ERANGE:
default:
return -rstatus;
}
}
Unsigned integers:
It's mostly the same as the previous one, so I'll post only a function
strtou_s.h:
inline
int strtou8_s (uint8_t *restrict num, const char *restrict str,
int base)
{
int rstatus;
*num = strtou(str, NULL, base, 0, UINT8_MAX, &rstatus);
switch (rstatus) {
case 0:
return 0;
case ENOTSUP:
return rstatus;
case ECANCELED:
case EINVAL:
case ERANGE:
default:
return -rstatus;
}
}
Floating-point:
strtof_s.h:
#pragma once /* libalx/base/stdlib/strto/strtof_s.h */
#include <errno.h>
#include <stdlib.h>
/*
* `errno` needs to be cleared before calling these functions. If not, false
 * negatives could happen (the function succeeds, but it reports an error).
*/
__attribute__((nonnull, warn_unused_result))
inline
int strtod_s (double *restrict num, const char *restrict str);
__attribute__((nonnull, warn_unused_result))
inline
int strtof_s (float *restrict num, const char *restrict str);
__attribute__((nonnull, warn_unused_result))
inline
int strtold_s (long double *restrict num, const char *restrict str);
inline
int strtod_s (double *restrict num, const char *restrict str)
{
char *endptr;
*num = strtod(str, &endptr);
if (*endptr != '\0')
return ENOTSUP;
if (errno == ERANGE)
return ERANGE;
if (str == endptr)
return -ECANCELED;
return 0;
}
inline
int strtof_s (float *restrict num, const char *restrict str)
{
char *endptr;
*num = strtof(str, &endptr);
if (*endptr != '\0')
return ENOTSUP;
if (errno == ERANGE)
return ERANGE;
if (str == endptr)
return -ECANCELED;
return 0;
}
inline
int strtold_s (long double *restrict num, const char *restrict str)
{
char *endptr;
*num = strtold(str, &endptr);
if (*endptr != '\0')
return ENOTSUP;
if (errno == ERANGE)
return ERANGE;
if (str == endptr)
return -ECANCELED;
return 0;
}
The functions take two pointers: the first one to the variable where the number has to be stored; and the second one to the string to be read. The integer functions also require the base, which follows the same rules as in strtol().
The return value is simply an error code:
0 is OK as always,
> 0 means a valid conversion with some error (partial conversion, 0 or inf in floating-point, ...).
< 0 means an invalid conversion, or no conversion at all.
Example:
char buf[BUFSIZ];
int64_t num;
if (!fgets(buf, ARRAY_SIZE(buf), stdin))
goto err;
if (strtoi64_s(&num, buf, 0))
goto err;
/* num is safe to be used now*/
Do you think the interface can be improved in any way?
Answer: Portability
To be clear, the strtoi() and strtou() that OP's code relies on are not in the standard C library. OP's code is limited to platforms that satisfy the requisites.
strtol() may be more complex, yet it is portable throughout all compliant C implementations.
Bug - failure to clear errno
When strtod() succeeds, it does not change errno, so the tests on errno are testing the prior state. Add errno = 0; before calling strtod(), strtof(), strtold().
errno = 0; // add
*num = strtod(str, &endptr);
if (*endptr != '\0') return ENOTSUP;
if (errno == ERANGE) return ERANGE;
...
Questionable error
With floating-point conversions, for input like "z" the functions indicate ENOTSUP. I'd expect ECANCELED.
Rather than
if (*endptr != '\0') return ENOTSUP;
if (errno == ERANGE) return ERANGE;
if (str == endptr) return -ECANCELED;
Consider
if (str == endptr) return -ECANCELED;
if (*endptr != '\0') return ENOTSUP;
if (errno == ERANGE) return ERANGE;
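Putting the errno fix and this reordering together, a corrected strtod_s might look as follows (a sketch; the restrict qualifiers are omitted for brevity):

```c
#include <errno.h>
#include <stdlib.h>

/* Sketch of strtod_s with both fixes applied: errno cleared before the
 * call, and the no-conversion check done before the trailing-junk check. */
int strtod_s(double *num, const char *str)
{
	char *endptr;

	errno = 0;                  /* clear stale errno first */
	*num = strtod(str, &endptr);
	if (str == endptr)
		return -ECANCELED;  /* nothing converted at all, e.g. "z" */
	if (*endptr != '\0')
		return ENOTSUP;     /* partial conversion, e.g. "1.5x" */
	if (errno == ERANGE)
		return ERANGE;      /* overflow or underflow */
	return 0;
}
```

This keeps OP's return-value convention: 0 for success, a positive code for a partial conversion, a negative code for no conversion at all.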
Questionable cases
With "1.0e100000"? A floating-point value of infinity with an ERANGE error?
With "INF"? A floating-point value of infinity with no error?
Careful about ERANGE on the small side
When the string indicates a small value like 1e-100000, this may or may not set errno = ERANGE.
C allows that. C also allows errno to not be set on underflow.
Linux man has "If the correct value would cause underflow, zero is returned and ERANGE is stored in errno."
It is unclear to me what libbsd or OP wants in this case.
There are additional issues any time the string would convert to a value smaller in magnitude than DBL_MIN. This lack of crispness in the strtod() specification renders strings whose converted value falls between DBL_TRUE_MIN and DBL_MIN troublesome.
String to number design
Most string-to-number functions tolerate leading white-space. I find it curious that most such functions do not tolerate trailing white-space well.
IMO, such functions should - very convenient for reading and converting a line of input like "123\n". Perhaps as:
number = strto*(string, &endptr);
if (string == endptr) return fail_no_conversion;
while (isspace((unsigned char) *endptr)) {
endptr++;
}
// Now test for null character
if (*endptr) return fail_junk_at_the_end;
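As a concrete (hypothetical) version of that sketch, built on portable strtol() — the name strtol_line and the -1 failure code are made up for illustration:

```c
#include <ctype.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical helper implementing the sketch above: convert a whole
 * line such as "123\n" to a long, tolerating trailing white-space but
 * rejecting any other junk after the number. */
int strtol_line(const char *string, int base, long *out)
{
	char *endptr;
	long number;

	errno = 0;
	number = strtol(string, &endptr, base);
	if (string == endptr)
		return -1;              /* fail_no_conversion */
	while (isspace((unsigned char) *endptr))
		endptr++;               /* skip trailing white-space */
	if (*endptr)
		return -1;              /* fail_junk_at_the_end */
	if (errno == ERANGE)
		return -1;              /* out of range for long */
	*out = number;
	return 0;
}
```

A line read with fgets(), e.g. "123\n", then converts cleanly without stripping the newline first.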
... | {
"domain": "codereview.stackexchange",
"id": 35859,
"tags": "c, formatting, api"
} |
How did Richard Feynman arrive at the hamiltonian equation for an electron propagating in a crystal lattice? | Question: In Richard Feynman's lectures on Physics Vol III, Chapter 13-1, he described the state of an electron in a crystal lattice to be approximated by the linear combination of the orthogonal atom orbitals:
$$|\phi\rangle\hspace{1mm} = \sum_{n} |n\rangle\langle n|\phi\rangle\hspace{1mm} = \sum_{n} C_{n} |n\rangle$$
Where $|n\rangle$ is the state of the electron being at the $n^{\text{th}}$ atom.
By treating the electron as a wave with amplitude $iA/\hbar$, he proposes that the partial hamiltonian equation be:
$$i\hbar \frac{dC_{n}(t)}{dt} = E_{0}C_{n}(t) - AC_{n+1}(t) - AC_{n-1}(t)$$
where $E_{0}$ is the energy the electron would have if it couldn't leak away from one of the atoms.
How did Feynman arrive at this proposal?
Answer: This is essentially tight binding model, our hamiltonian elements are
$$\langle n|H|n\rangle=E_0$$
this is the energy to create an electron sitting at $n$.
$$\langle n+1|H|n\rangle=-\Delta$$
this is amplitude to hop next site.
$$\langle n-1|H|n\rangle=-\Delta$$
this is amplitude to hop to previous site.
Thus our partial Hamiltonian would look like
$$H=-\Delta|n+1\rangle \langle n|+E_0 |n\rangle \langle n|-\Delta|n-1\rangle \langle n|$$
Now we have
$$H|\phi,t\rangle= \sum_{n}[E_{0}C_{n}(t) - \Delta C_{n+1}(t) - \Delta C_{n-1}(t)]|n\rangle$$
and LHS would be
$$i\hbar\frac{\partial}{\partial t}|\phi,t\rangle=i\hbar\sum_{n} \dot{C}_{n}(t)|n\rangle$$
Thus if we equate
$$\langle n|\frac{i\hbar\partial}{\partial t}|\phi,t\rangle=\langle n|H|\phi,t\rangle$$
we arrive at
$$i\hbar \frac{dC_{n}(t)}{dt} = E_{0}C_{n}(t) - \Delta C_{n+1}(t) - \Delta C_{n-1}(t)$$ | {
"domain": "physics.stackexchange",
"id": 42979,
"tags": "quantum-mechanics, solid-state-physics"
} |
bash script for constructing RNA pipeline | Question: I have written a bash script that consists of multiple commands and Python scripts. The goal is to make a pipeline for detecting long non-coding RNA from a certain input. Ultimately I would like to turn this into an app and host it on some bioinformatics website. One problem I am facing is using getopt tools in bash. I couldn't find a good tutorial that I understand clearly. In addition, any other comments related to improving the code are much appreciated.
#!/bin/bash
if [ "$1" == "-h" ]
then
echo "Usage: sh $0 cuffcompare_output reference_genome blast_file"
exit
else
wget https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz && tar xvf 2.0.1 && rm -r 2.0.1
makeblastdb -in $3 -dbtype nucl -out $3.blast.out
grep '"u"' $1 | \
gffread -w transcripts_u.fa -g $2 - && \
python2.7 get_gene_length_filter.py transcripts_u.fa transcripts_u_filter.fa && \
TransDecoder-2.0.1/TransDecoder.LongOrfs -t transcripts_u_filter.fa
sed 's/ .*//' transcripts_u_filter.fa | grep ">" | sed 's/>//' > transcripts_u_filter.fa.genes
cd transcripts_u_filter.fa.transdecoder_dir
sed 's/|.*//' longest_orfs.cds | grep ">" | sed 's/>//' | uniq > longest_orfs.cds.genes
grep -v -f longest_orfs.cds.genes ../transcripts_u_filter.fa.genes > longest_orfs.cds.genes.not.genes
sed 's/^/>/' longest_orfs.cds.genes.not.genes > temp && mv temp longest_orfs.cds.genes.not.genes
python ../extract_sequences.py longest_orfs.cds.genes.not.genes ../transcripts_u_filter.fa longest_orfs.cds.genes.not.genes.fa
blastn -query longest_orfs.cds.genes.not.genes.fa -db ../$3.blast.out -out longest_orfs.cds.genes.not.genes.fa.blast.out -outfmt 6
python ../filter_sequences.py longest_orfs.cds.genes.not.genes.fa.blast.out longest_orfs.cds.genes.not.genes.fa.blast.out.filtered
grep -v -f longest_orfs.cds.genes.not.genes.fa.blast.out.filtered longest_orfs.cds.genes.not.genes.fa > lincRNA_final.fa
fi
Here is how I run it:
sh test.sh cuffcompare_out_annot_no_annot.combined.gtf /mydata/db/Brapa_sequence_v1.2.fa TE_RNA_transcripts.fa
Answer: This script looks like a wall of text and is barely readable.
Let's break it down from top to bottom.
Simplify the handling of -h
Since the if at the beginning to check if the first parameter is -h will exit if true,
it would be simpler to drop the else part entirely:
if [ "$1" == -h ]
then
echo "Usage: sh $0 cuffcompare_output reference_genome blast_file"
exit
fi
I also dropped the quotes around -h; they are not necessary.
Validate the command line arguments
The script expects 3 parameters, but doesn't validate if it really received them.
I suggest to rewrite the top part like this:
help() {
exitcode=$1
echo "Usage: sh $0 cuffcompare_output reference_genome blast_file"
exit $exitcode
}
[ "$1" == -h ] && help 0
[ $# -lt 3 ] && help 1
I added a help function to avoid duplication, because I wanted to reuse the same help message when exiting the program in both cases:
When the first param is -h, print help and exit with 0, meaning success
When there are less than 3 params, print help and exit with 1, meaning failure
Instead of if statements I used && expressions to make it shorter and simpler. The result is equivalent.
Give the parameters meaningful names
The parameters $1, $2 and $3 are scattered here and there in the wall of text,
and they are not exactly easy to remember.
Give them proper names right after validation:
cuffcompare_output=$1
reference_genome=$2
blast_file=$3
And use these names throughout the script.
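Since the question also asked about getopt: here is a minimal sketch using the POSIX getopts builtin (the -o option and the variable names are made up for illustration):

```shell
# Hypothetical sketch: getopts handles option flags, then the remaining
# positional arguments are validated and named as before.
parse_args() {
    OPTIND=1
    outprefix=lincRNA                  # made-up default for the -o option
    while getopts "ho:" opt; do
        case $opt in
            h) echo "Usage: sh $0 [-o prefix] cuffcompare_output reference_genome blast_file"; return 1 ;;
            o) outprefix=$OPTARG ;;
            *) return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    [ $# -ge 3 ] || return 1           # still require the 3 positional args
    cuffcompare_output=$1
    reference_genome=$2
    blast_file=$3
}

parse_args -o results in.gtf genome.fa te.fa
```

Calling sh test.sh -h then prints the usage, and sh test.sh -o results in.gtf genome.fa te.fa sets all four variables for the rest of the script.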
Taming the wall of text...
Add some line breaks.
Group together lines that are most tightly related.
For example,
for the lines that are the continuation of a pipeline, put
a line break before and after would make this easier to read.
And when a pipeline gets too long,
add some more line breaks (escaped with \) to make the important ending part more visible,
for example when redirecting output to a file.
For example, I think this is more readable:
wget https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz && tar xvf 2.0.1 && rm -r 2.0.1
makeblastdb -in $3 -dbtype nucl -out $3.blast.out
grep '"u"' $1 | \
gffread -w transcripts_u.fa -g $2 - && \
python2.7 get_gene_length_filter.py transcripts_u.fa transcripts_u_filter.fa && \
TransDecoder-2.0.1/TransDecoder.LongOrfs -t transcripts_u_filter.fa
sed 's/ .*//' transcripts_u_filter.fa | grep ">" | sed 's/>//' \
> transcripts_u_filter.fa.genes
cd transcripts_u_filter.fa.transdecoder_dir
sed 's/|.*//' longest_orfs.cds | grep ">" | sed 's/>//' | uniq \
> longest_orfs.cds.genes
grep -v -f longest_orfs.cds.genes ../transcripts_u_filter.fa.genes \
> longest_orfs.cds.genes.not.genes
sed 's/^/>/' longest_orfs.cds.genes.not.genes \
> temp && mv temp longest_orfs.cds.genes.not.genes
python ../extract_sequences.py longest_orfs.cds.genes.not.genes ../transcripts_u_filter.fa longest_orfs.cds.genes.not.genes.fa
blastn -query longest_orfs.cds.genes.not.genes.fa -db ../$3.blast.out -out longest_orfs.cds.genes.not.genes.fa.blast.out -outfmt 6
python ../filter_sequences.py longest_orfs.cds.genes.not.genes.fa.blast.out longest_orfs.cds.genes.not.genes.fa.blast.out.filtered
grep -v -f longest_orfs.cds.genes.not.genes.fa.blast.out.filtered longest_orfs.cds.genes.not.genes.fa > lincRNA_final.fa
As a small tip, a shorter equivalent of grep ">" | sed 's/>//' is sed -ne 's/>//p'. | {
"domain": "codereview.stackexchange",
"id": 12385,
"tags": "bash, bioinformatics"
} |
Why is the Plane progressive wave equation $y= a\sin (kx-wt)$ for positive direction of x-axis? | Question: Likewise, why is $y= a\sin(kx+\omega t)$ for negative direction? What is the basis/derivation for this?
Answer: Let's take the argument of the function, i.e. $kx-\omega t$. The argument of the function (equivalently, the phase) must remain constant for a particular section of the wave.
\begin{equation}
kx-\omega t=\lambda
\end{equation}
where $\lambda$ is a constant.
Differentiating both sides we get,
\begin{equation}
k\frac{dx}{dt}=\omega
\end{equation}
which is positive; thus this represents a wave travelling in the positive direction.
Similarly for the wave travelling in negative direction.
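A quick numerical check (an added illustration, not part of the original answer): track the first crest of $y=\sin(kx-\omega t)$ at two times and watch it move toward larger $x$, at exactly the speed $\omega/k$ derived above.

```python
import numpy as np

k, w = 2.0, 3.0                        # wavenumber and angular frequency
x = np.linspace(0, 2, 2001)            # range chosen to contain one crest

# position of the crest of sin(k*x - w*t) at t = 0 and t = 0.5
crest_t0 = x[np.argmax(np.sin(k * x - w * 0.0))]
crest_t1 = x[np.argmax(np.sin(k * x - w * 0.5))]

# the crest moves to larger x (positive direction) at speed dx/dt = w/k
speed = (crest_t1 - crest_t0) / 0.5
```

The crest at $kx=\pi/2$ simply shifts so that $kx-\omega t=\pi/2$ stays constant, which is the phase-constancy argument in numerical form.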
This can be illustrated by plotting also, the blue wave represents wave at $t=0$ and the orange wave represents wave at a later time, As you can see, the wave has moved in left direction for the first figure for the equation $y=a\sin(kx+\omega t)$ and in the right for the second i.e for $y=a\sin(kx-\omega t)$ case.Consider the blue wave below at some $x$, now if you want to make the wave move, then you need the same $y$ at some other $x$ and $t$, thus for the first case,it happens when $x$ becomes less positive, or moves to the left, implying that the wave has traveled in negative direction. | {
"domain": "physics.stackexchange",
"id": 30316,
"tags": "newtonian-mechanics, waves"
} |
More German overengineering™ - Class mappings and factories | Question: Goals:
So the plan was simple: Provide a factory to instantiate implementations of a certain interface (ModelConverter<T>), depending on what model-class you want to convert.
The approach is relatively straightforward:
statically map model-classes to ModelConverter implementations
use that mapping to obtain the implementation class
use reflection to get a constructor and invoke it
return the instance
If any of these steps fail, fail hard, because the application relies on the ModelConverter instances.
My code currently looks as follows (note that I am tied to java 6 so please don't recommend multi-catch):
public final class ModelConverterFactory {
private enum ClassMapping {
DEFAULT(Object.class, AbstractConverter.class), RIGHT(Right.class,
RightConverter.class);
private Class<?> modelClass;
private Class<?> converterClass;
private ClassMapping(final Class<?> modelClass, final Class<?> clazz) {
if (modelClass == null || clazz == null) {
throw new BrokenCodeException(
"Cannot create a mapping without modelClass and converterClass");
}
this.modelClass = modelClass;
this.converterClass = clazz;
}
public static ClassMapping fromModelClass(final Class<?> modelClass) {
for (ClassMapping mapping : ClassMapping.values()) {
if (mapping.modelClass.equals(modelClass)) {
return mapping;
}
}
return DEFAULT;
}
public Class<?> getConverterClass() {
return this.converterClass;
}
public Class<?> getModelClass() {
return this.modelClass;
}
}
public static Collection<Class<?>> supportedModelClasses() {
Collection<Class<?>> supported = new HashSet<Class<?>>();
for (ClassMapping mapping : ClassMapping.values()) {
if (mapping != ClassMapping.DEFAULT) {
supported.add(mapping.getModelClass());
}
}
return supported;
}
public static <T> ModelConverter<T> createConverterFor(final Class<?> clazz) {
ClassMapping mapping = ClassMapping.fromModelClass(clazz);
Class<?> converterClass = mapping.getConverterClass();
@SuppressWarnings("unchecked")
Constructor<ModelConverter<T>> converterConstructor = ((Constructor<ModelConverter<T>>) converterClass
.getDeclaredConstructors()[0]);
ModelConverter<T> instance = null;
try {
instance = converterConstructor.newInstance((Object[]) null);
} catch (IllegalArgumentException e) {
Log.doLogError(e.toString());
e.printStackTrace();
throw new BrokenCodeException(
"Could not instantiate ModelConverter, because the Arguments didn't match",
e);
} catch (InstantiationException e) {
Log.doLogError(e.toString());
e.printStackTrace();
throw new BrokenCodeException(
"Something happened, when we tried to instantiate ModelConverter:\n",
e);
} catch (IllegalAccessException e) {
Log.doLogError(e.toString());
e.printStackTrace();
throw new BrokenCodeException(
"Could not access Constructor to for specific ModelConverter Implementation:",
e);
} catch (InvocationTargetException e) {
Log.doLogError(e.toString());
e.printStackTrace();
throw new BrokenCodeException(
"For some reason, the Constructor of ModelConverter threw an exception:",
e);
}
return instance;
}
}
Further Notes:
I am aware that the current approach requires implementations to declare only a single parameterless constructor. For the current state of the project this is used for, that is actually quite fine.
Again I want to mention that I am required to use java 6
BrokenCodeException is a simple RuntimeException with no additional info.
Concerns:
Unchecked casting: Do I really need it? It feels somewhat dirty to me :(
Again, am I overly complicating a simple task? This feels like going the long way around to get there! Note that I didn't want to use CDI, to keep this approach usable for non-JEE applications.
Answer: Things would be greatly simplified if ClassMapping didn't map a class to a class, but a class to an instance.
public final class ModelConverterFactory {
private enum ClassMapping {
DEFAULT(Object.class, new AbstractConverter()), RIGHT(Right.class, new RightConverter());
private Class<?> modelClass;
private ModelConverter<?> converter;
private <T> ClassMapping(final Class<T> modelClass, final ModelConverter<T> modelConverter) {
this.modelClass = modelClass; // got rid of null checks, we control all invocations, none pass null.
this.converter = modelConverter;
}
public static ClassMapping fromModelClass(final Class<?> modelClass) {
for (ClassMapping mapping : ClassMapping.values()) {
if (mapping.modelClass.equals(modelClass)) {
return mapping;
}
}
return DEFAULT;
}
public ModelConverter<?> getConverter() {
return this.converter;
}
public Class<?> getModelClass() {
return this.modelClass;
}
}
public static Collection<Class<?>> supportedModelClasses() {
Collection<Class<?>> supported = new HashSet<Class<?>>();
for (ClassMapping mapping : ClassMapping.values()) {
if (mapping != ClassMapping.DEFAULT) {
supported.add(mapping.getModelClass());
}
}
return supported;
}
public static <T> ModelConverter<T> getConverterFor(final Class<T> clazz) {
return (ModelConverter<T>) ClassMapping.fromModelClass(clazz).getConverter(); // safe cast : we know all entries will match
}
}
This gets rid of that nasty introspection bit. Of course this means ModelConverter implementations ought to be reentrant.
I would also be inclined to replace the enum with a Map. You say you use an enum to disallow runtime changing of mappings, but with proper encapsulation that can be perfectly achieved with a Map as well. | {
"domain": "codereview.stackexchange",
"id": 8692,
"tags": "java, design-patterns, interface"
} |
Simple Vigenere Cipher In Python | Question: I'm a relatively new programmer. I've made a simple Vigenere cipher program. It takes three arguments and acts on a file. I have made some of the steps more "explict" for myself by using more lists than I need to, rather than applying multiple "transformations" at a time. I would appreciate feedback on how this code would be written differently by people who know more than I do.
#!/usr/bin/env python3
# vigenere.py - This program has two modes, encrypt and decrypt. It takes
# three arguments: the mode('encrypt' or 'decrypt'), a keyword, and a
# filename to act upon. It is designed to work with lowercase letters.
from sys import argv
from itertools import cycle
# User specifies a mode, a key, and a file with argv arguments
def start():
if len(argv) > 1:
mode = argv[1]
key = argv[2]
plaintextFilename = argv[3]
else:
print('Please supply mode, key, and file as arguments.')
exit()
# Start the mode selected
if mode == 'encrypt':
encryptMode()
elif mode == 'decrypt':
decryptMode()
else:
print('Please supply \'encrypt\' or \'decrypt\' mode.')
exit()
# Encryption Mode
def encryptMode():
# Open the alpha plaintext file as an object
alphaPlaintextFileObj = open(argv[3])
# Create the ordinal plaintext data structure
ordinalPlaintext = []
# Populate the ordinal plaintext data structure
for c in alphaPlaintextFileObj.read():
if c == ' ':
ordinalPlaintext.append(' ')
else:
o = ord(c) - 65
ordinalPlaintext.append(o)
# Create an ordinal ciphertext data structure
ordinalCiphertext = []
# Turn the key into an ordinal key where a = 1, etc.
ordinalKey = []
key = argv[2]
for c in key:
n = ord(c) - 96
ordinalKey.append(n)
# Populate the ordinalCiphertext structure with numbers shifted using the
# ordinal key.
for k, p in zip(cycle(ordinalKey), ordinalPlaintext):
if p == ' ':
ordinalCiphertext.append(' ')
else:
c = (k + p) % 25
ordinalCiphertext.append(c)
# Create the alpha ciphertext file
alphaCiphertextFilename = argv[3] + '_encrypted'
alphaCiphertextFileObj = open(alphaCiphertextFilename, 'w')
# Populate the alpha ciphertext file
for c in ordinalCiphertext:
if c == ' ':
alphaCiphertextFileObj.write(' ')
else:
l = chr(int(c) + 65)
alphaCiphertextFileObj.write(l)
# Save and close the plaintext and ciphertext files.
alphaPlaintextFileObj.close()
alphaCiphertextFileObj.close()
# Print a message telling the user the operation is complete.
print(f'{argv[3]} encrypted as {alphaCiphertextFilename}')
# Decryption Mode
def decryptMode():
# Open the alpha ciphertext file as an object
alphaCiphertextFileObj = open(argv[3])
# Create the ordinal ciphertext data structure
ordinalCiphertext = []
# Populate the ordinal ciphertext data structure
for c in alphaCiphertextFileObj.read():
if c == ' ':
ordinalCiphertext.append(' ')
else:
o = ord(c) - 97
ordinalCiphertext.append(o)
# Create the ordinal key
ordinalKey = []
key = argv[2]
for c in key:
n = ord(c) - 96
ordinalKey.append(n)
#Create the ordinal plaintext data structure
ordinalPlaintext = []
# Populate the ordinal plaintext data structure with the modular
# difference of the ordinal ciphertext and the ordinal key
for k, c in zip(cycle(ordinalKey), ordinalCiphertext):
if c == ' ':
ordinalPlaintext.append(' ')
else:
p = (c - k) % 25
ordinalPlaintext.append(p)
# Create the alpha plaintext file
alphaPlaintextFilename = argv[3] + '_decrypted'
alphaPlaintextFileObj = open(alphaPlaintextFilename, 'w')
# Convert the ordinal plaintext to an alpha plaintext file,
# 'filename_decrypted'
for p in ordinalPlaintext:
if p == ' ':
alphaPlaintextFileObj.write(' ')
else:
l = chr(int(p) + 97)
alphaPlaintextFileObj.write(l)
# Save and close the ciphertext and plaintext files
alphaCiphertextFileObj.close()
alphaPlaintextFileObj.close()
# Print a message telling the user the operation is complete
print(f'{argv[3]} decrypted as {alphaPlaintextFilename}')
start()
Answer: def start():
I'd call this function main as that's what it is generally called.
if mode == 'encrypt':
encryptMode()
elif mode == 'decrypt':
decryptMode()
Why not call these encrypt and decrypt? The methods actually perform the encryption / decryption, after all; you're not setting a mode.
alphaPlaintextFileObj = open(argv[3])
It seems to me that file handling can be perfectly split from the encrypt function, especially if you read in all the data before encryption happens anyway.
ordinalPlaintext = []
Why would you first convert the entire plaintext / ciphertext to ordinals? This can be done on a per-character basis, preferably using a separate method. Then it also becomes easier to skip spaces and such, which you now have to handle twice.
Conversion to ordinals - or more precisely, indices within the Vigenere alphabet - is of course exactly what is needed, so that's OK.
o = ord(c) - 65
65 is an unexplained magic number, why not use ord('A') instead or use a named constant with that value?
n = ord(c) - 96
Why is a mapped to 1 instead of 0? What about z in that case? And why do we suddenly use the uppercase character set?
for k, p in zip(cycle(ordinalKey), ordinalPlaintext):
Now this I like, it is very clear what is done here, and it is good use of Python specific functionality.
c = (k + p) % 25
Wrong! You should always perform the modular calculation with the size of the alphabet, i.e. % 26. This might happen to work as well (if you forget about the Z) but it's not Vigenere as it was written down a long time ago.
alphaPlaintextFileObj.close()
Always close files as soon as they are not necessary any more. You already read all of the plaintext, no need to keep that file handle around.
What I'm missing is validation that the contents of the plaintext consist of characters that are out of range, and a way of handling those. The same thing goes for the key, which should consist of all uppercase characters, but lowercase characters are used without issue.
Furthermore, if you take a good look, then decryption is the same as encryption, except for p = (c - k) % 25 and - of course - the file handling. Now the file reading and writing should not be in either method, so let's exclude that. That leaves us with that single assignment / expression. Of that, only the - sign is really different.
This is why most people will write a single "private" _crypt method that simply takes an integer of 1 for encryption and -1 for decryption. Then the expression becomes (charIndex + direction * keyIndex) % alphabetSize.
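A minimal sketch of that refactoring (assuming, unlike the original, the standard modulus of 26 and lowercase letters throughout; spaces pass through but still consume key positions, matching the OP's zip-based loop):

```python
from itertools import cycle

ALPHABET_SIZE = 26  # the modulus is the alphabet size, not 25

def _crypt(text, key, direction):
    """Shift each letter by direction * key-letter; +1 encrypts, -1 decrypts."""
    out = []
    for k, c in zip(cycle(key), text):
        if c == ' ':
            out.append(' ')
        else:
            shift = direction * (ord(k) - ord('a'))
            out.append(chr((ord(c) - ord('a') + shift) % ALPHABET_SIZE + ord('a')))
    return ''.join(out)

def encrypt(text, key):
    return _crypt(text, key, 1)

def decrypt(text, key):
    return _crypt(text, key, -1)
```

With the classic test vector, encrypt('attackatdawn', 'lemon') gives 'lxfopvefrnhr', and decrypt round-trips it.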
Currently you are violating the "DRY principle": don't repeat yourself. | {
"domain": "codereview.stackexchange",
"id": 38197,
"tags": "python, vigenere-cipher"
} |
Various questions on renormalization in lattice systems | Question: Forgive the long, multi questioned-question. The setting of this question is inspired by this answer.
Consider some theory on a lattice, for example the 2D $0$-field Ising model
$$H=-K\sum_{\langle i,j\rangle} \sigma_i \sigma_j$$
the lattice is $\mathbb{Z}^2$ in this case. We can define the space of all theories $\mathcal T$, i.e. the space of all probability measures of real valued fields defined on $\mathbb{Z}^2$, and define a renormalization map as $R:\mathcal{T}\rightarrow \mathcal{T}$.
First question: how can we make the idea that renormalization "scales" the system clear? The image of the renormalization map is still on $\mathbb{Z}^2$. How to formalize the idea that the new model has a "larger lattice spacing"? I don't see any notion or measure of the lattice spacing in $H$.
If we accept that there is some notion that the renormalization increases the scale of the system, then the interesting stuff comes when we look at the correlation length $\xi$: if the lattice spacing increases by a factor $b>1$, and the length is measured in units of $b$, then it must be that $\xi$ is mapped to $\xi'=\xi/b$. If we start from a model that has $\xi=\infty$, the image will still have $\xi=\infty$.
Suppose this transformation has a fixed point $V_*$ such that $R(V_*)=V_*$, then by the above argument $V_*$ should have infinite correlation length. Also, since the map reduces the correlation length, the stable manifold of the fixed point, defined as
$$ W^s=\{V\in \mathcal{T}: \lim_{n\rightarrow \infty}R^n(V)=V_*\}$$
must be composed exclusively of points with $\xi=\infty$. Minor question: is the converse true? Is any point with $\xi=\infty$ on the stable manifold?
Second question: I've always only seen definitions of the correlation length that are loosely based on an ansatz for the shape of the correlation function of the Ising model
$$ \Gamma(r)\sim e^{-r/\xi}$$
in principle the renormalization could map our simple Ising model to a ridiculously complicated distribution, for which the correlation function doesn't have such a simple form. More generally, even if we chose a nice renormalization map, it would still have a fixed point with a stable manifold, and for the argument above to make any sense, all the points on the stable manifold should have a well defined correlation length. What is it?
This all means that really, fixed points and critical points are two separate beasts, and that the critical point of a given model does not correspond to the fixed point of a given renormalization transformation. A model with Hamiltonian $H$ corresponds to a curve in $\mathcal T$
$$ K\rightarrow \mathrm{Ising}(K)$$
and the critical point is the intersection of this curve with the stable manifold of a renormalization procedure, which leads me to the following third question
Third question: why do we care about fixed points of renormalization? Unless by some miracle the intersection point happens to be the fixed point, finding the fixed point looks useless to me, as finding the stable manifold and where it intersects the curve looks like a daunting problem in general. Is it correct to say that renormalization is not helpful to find the critical temperature of a model? If I understand correctly, it can still be used to derive critical exponents.
Answer: Let $\mathbb{Z}^d$ denote the unit square lattice in $d$ dimensions.
Let $\Omega=\mathbb{R}^{\mathbb{Z}^d}$ be the Cartesian product of one copy of $\mathbb{R}$ for each lattice site $\mathbf{x}\in\mathbb{Z}^d$. An element $\sigma$ of $\Omega$ is thus a spin configuration $(\sigma_{\mathbf{x}})_{\mathbf{x}\in\mathbb{Z}^d}$. We equip $\Omega$ with the product topology (of a countable product of copies of $\mathbb{R}$) and also with the Borel $\sigma$-algebra $\mathcal{F}$ resulting from this topology. We can now define $\mathcal{T}$ as the set of all probability measures $\mu$ on the measurable space $(\Omega,\mathcal{F})$.
Pick some fixed integer $L>1$. For any site $\mathbf{x}\in\mathbb{Z}^d$, define the block
$$
B_{\mathbf{x}}=\{\mathbf{y}\in\mathbb{Z}^d\ |\ \mathbf{y}\in L\mathbf{x}+[0,L)^d\}
$$
of linear size $L$ near the point $L\mathbf{x}$ (I chose "bottom right corner" but one could also have it at the center). Note that the point $L\mathbf{x}$ belongs to the coarser lattice $(L\mathbb{Z})^d$. We now pick some constant $[\phi]$ and define a map $\Gamma:\Omega\rightarrow\Omega$ as follows. We send the spin configuration $\sigma$ to the new configuration $\Gamma(\sigma)=\tau$ where, for all $\mathbf{x}\in\mathbb{Z}^d$,
$$
\tau_{\mathbf{x}}=L^{[\phi]-d}\sum_{\mathbf{y}\in B_{\mathbf{x}}} \sigma_{\mathbf{y}}\ .
$$
The map $\Gamma$ is continuous and therefore $(\mathcal{F},\mathcal{F})$-measurable.
If $\mu$ is a probability measure on $\Omega$, then one can define direct image or push-forward measure $\mu'=\Gamma_{\ast}\mu$. It is the probability distribution of the spin configuration $\Gamma(\sigma)$ if $\sigma$ is sampled according to the probability distribution $\mu$. We thus have a map $R:\mathcal{T}\rightarrow\mathcal{T}, \mu\mapsto\mu'$.
This map $R$ is the renormalization group map, in the block spin approach. There are other ways of doing that (decimation, splitting of Gaussian measures as a sum of high and low momentum fields, etc.)
Now suppose that the original measure is such that the two-point function satisfies
$$
\langle\sigma_{\mathbf{x}_1}\sigma_{\mathbf{x}_2}\rangle_{\mu}
\sim e^{- \frac{|\mathbf{x}_1-\mathbf{x}_2|}{\xi}}
$$
at large distance. Note that, to avoid confusion, I put as a subscript the probability measure
with respect to which the expectation $\langle\cdot\rangle$ is taken.
Also note that the $\sim$ is rather vague. It could mean the LHS is roughly equal to the RHS times a constant, or even times a power-law factor in the distance $|\mathbf{x}_1-\mathbf{x}_2|$.
Let us do the computation for the new measure $\mu'=R(\mu)$.
Pretty much by definition of the direct image measure,
$$
\langle\sigma_{\mathbf{x}_1}\sigma_{\mathbf{x}_2}\rangle_{\mu'}=
\langle(\Gamma(\sigma))_{\mathbf{x}_1}(\Gamma(\sigma))_{\mathbf{x}_2}\rangle_{\mu}
$$
$$
=L^{2[\phi]-2d}\sum_{\mathbf{y}_1\in B_{\mathbf{x}_1},\mathbf{y}_2\in B_{\mathbf{x}_2}} \langle\sigma_{\mathbf{y}_1}\sigma_{\mathbf{y}_2}\rangle_{\mu}
$$
$$
\simeq L^{2[\phi]} \langle\sigma_{L\mathbf{x}_1}\sigma_{L\mathbf{x}_2}\rangle_{\mu}
$$
from the approximation that the two-point function of $\mu$ does not change much if the points roam around the $L$ blocks near $L\mathbf{x}_1$ and $L\mathbf{x}_2$. So the result is
$$
\sim e^{-\frac{|L\mathbf{x}_1-L\mathbf{x}_2|}{\xi}}=e^{-\frac{|\mathbf{x}_1-\mathbf{x}_2|}{\xi'}}
$$
with $\xi'=\frac{\xi}{L}$. So you see that the correlation length has shrunk by a factor of $L$.
The above regime is for noncritical measures. In that case the best choice of $[\phi]$ is $\frac{d}{2}$, in order to converge to a well defined fixed point.
Indeed, take $\mu_{\rm triv}$ to be the measure where all the $\sigma_{\mathbf{x}}$ are iid $N(0,1)$ random variables. One then has $R(\mu_{\rm triv})=\mu_{\rm triv}$ just from undergraduate probability.
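That fixed-point check can also be sketched numerically. The following Python snippet is an illustration, not part of the original answer (the lattice size, random seed, and function names are assumptions of the example); it applies one step of the block map $\Gamma$ and verifies that iid $N(0,1)$ spins with $[\phi]=d/2$ again have unit variance per block spin.

```python
import numpy as np

def block_spin(sigma, L=2, phi_dim=1.0):
    """One block-spin step: sum each L x L block, rescale by L**(phi_dim - d)."""
    d = 2
    n = sigma.shape[0]
    assert n % L == 0, "lattice size must be divisible by L"
    # group the n x n lattice into (n/L) x (n/L) blocks and sum each block
    blocks = sigma.reshape(n // L, L, n // L, L).sum(axis=(1, 3))
    return L ** (phi_dim - d) * blocks

rng = np.random.default_rng(0)
sigma = rng.normal(size=(64, 64))          # iid N(0,1) spins: mu_triv
tau = block_spin(sigma, L=2, phi_dim=1.0)  # [phi] = d/2 = 1
print(tau.shape)   # (32, 32)
print(tau.var())   # close to 1: mu_triv is (approximately) reproduced
```

Each block sums $L^d=4$ iid $N(0,1)$ variables (variance 4) and multiplies by $L^{-d/2}=1/2$, so the block spins are again $N(0,1)$, exactly as in the undergraduate-probability remark above.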
For 2D Ising and for the critical measure $\mu$, one expects that the choice $[\phi]=\frac{1}{8}$ in the definition of $R$ entails convergence to a fixed point which is nontrivial. You can redo a similar two-point function calculation as above in this case; then you will see that, because $\langle\sigma_{\mathbf{x}_1}\sigma_{\mathbf{x}_2}\rangle_{\mu}$ decays like $1/|\mathbf{x}_1-\mathbf{x}_2|^{1/4}$, the fixed point condition $\mu=\mu'$ is only consistent with $1/8$ as a pick for the scaling dimension $[\phi]$. | {
"domain": "physics.stackexchange",
"id": 64695,
"tags": "quantum-field-theory, statistical-mechanics, renormalization, ising-model, critical-phenomena"
} |
Why is the formation of a fog observed when triethylamine is added? | Question: In the procedure for the synthesis of N-anisoyl-2-pyrrolidinone that was given to me, it is written that when triethylamine is added, a "fog" is observed inside the flask.
Triethylamine is added to neutralize the hydrochloric acid that is generated during the reaction, forming triethylammonium chloride.
I searched for similar procedures involving an acyl chloride and an amine, but the formation of a "fog" is never mentioned.
A quick search on the internet gave me no clue.
In my experience I have never witnessed the formation of a "fog" with this kind of reaction.
Since I cannot enter the lab due to the COVID-19 restrictions, I cannot perform the reaction now, can someone help me to understand if and possibly why in this case a "fog" is formed?
Answer: You are adding Et3N to a solution that contains HCl as a reaction product. That means there will be HCl vapour above the solution. The Et3N is fairly volatile so there will be Et3N vapour around the addition stream/droplets and this will react with the HCl vapour as it is added to give triethylammonium chloride. This gives you the fog. | {
"domain": "chemistry.stackexchange",
"id": 14456,
"tags": "organic-chemistry"
} |
Optimizing system calls and nested loops | Question: I'm writing a Perl program to take a set of clauses and a conclusion literal and produce a resolution-refutation proof (if possible) using a breadth-first set of support (SOS) search algorithm.
The actual searching part of the program runs extremely slow, because I have many nested loops. I imagine it may also have to do with the system calls for the I/O taking place, but I'm not sure.
Here is the code for the searching part of the program.
@clauses and @SOS are both 2D arrays. The @clauses contain all of the clauses including the negated conclusion. In the beginning of the algorithm you see @SOS gets initialized with the negated conclusion as its only value. It then grows with clauses as resolutions are found.
#Begin breadth-first/SOS search/add algorithm
$SOS[0][0]=$conclusion2;
my $cSize=@clauses;
say "\nworking......";
my $dots=0;
SOSROW:
for(my $a; $a<@SOS; $a++)
{
if((($dots % 7) ==0))
{
print "\n";
}
if($dots==14)
{
print "You might want to get some coffee.\n";
}
if($dots==35)
{
print "I'm being VERY Thorough.\n";
}
if($dots==63 || $dots==140)
{
print "Hows that coffee?\n";
}
if($dots==105)
{
print "I think it might be time for a second cup of coffee\n"
}
print ".";
$dots++;
#Iterate through each clause on tier i
CLAUSEROW:
for(my $i=0; $i<@clauses; $i++)
{
SOSCOL:
for(my $b; $b<=$#{@SOS[$a]};$b++)
{
CLAUSECOL:
for(my $j=0; $j<=$#{@clauses[$i]}; $j++)
{
if($SOS[$a][$b] eq "~$clauses[$i][$j]"
|| $clauses[$i][$j] eq "~$SOS[$a][$b]")
{
my @tmp;
#Found a resolution, so add all other literals from
#both clauses to each set as a single clause
##*Algorith improvement**##
# First add them to a temporary array, then add them to the actual lists,
# only if the clause does not already appear.
#Start with the SOS literals (use a hash to keep track of duplicates)
my %seen;
for(my $c=0; $c<$#{@SOS[$a]}+1; $c++)
{
if($c != $b)
{
$seen{$SOS[$a][$c]}=1;
push @tmp, "$SOS[$a][$c]";
}
}
#Now add the literals from the non-SOS clause
for(my $k=0; $k<$#{@clauses[$i]}+1; $k++)
{
if($k != $j)
{
if(!$seen{$clauses[$i][$k]})
{
push @tmp,"$clauses[$i][$k]";
}
}
}
#Check to see if the clause is already listed
my $dupl='not';
my @a1=Unicode::Collate->new->sort(@tmp);
my $s1= join(undef, @a1);
for(my $i=0; $i<@clauses; $i++)
{
my @a2= Unicode::Collate->new->sort(@{@clauses[$i]});
my $s2= join(undef,@a2);
if($s1 eq $s2 )
{
$dupl ='did';
}
}
if($dupl eq 'not')
{
my $s=$cSize+$cAdd;
$res++;
$sAdd++;
$cAdd++;
push @{$SOS[$sAdd]}, @tmp;
push @{$clauses[$s]}, @tmp;
#Print out the new clauses.
print RESULTS"clause $s: ";
my $clause = $cSize+$a-1;
if($SOS[$sAdd][0])
{
print RESULTS "{";
for(my $j=0; $j<$#{@clauses[$s]}+1; $j++)
{
if($clauses[$s][$j])
{
print RESULTS "$clauses[$s][$j]";
}
if($j!=$#{@clauses[$s]})
{
print RESULTS ",";
}
}
print RESULTS "} ($i,$clause)\n";
}
#If you found a new res, but there was nothing to push, you found
# the contradiction, add {} as a clause, signal that you're done and break.
else
{
print RESULTS "{} ($i, $clause)\n";
$flag=1;
last SOSROW;
}
}
}
}
}
}
}
close(RESULTS);
I am interested in ways to possibly improve this code without changing the searching method (that is, breadth-first SOS).
As it stands, it works okay for small sets, but with sets with a lot of clauses, or rather, a lot of literals in the clauses, it takes a really long time to complete. For example, I just ran it on a file containing 16 clauses. The largest clause had 16 literals. It took about 24 hours to complete.
I also welcome any and all criticism of my code, no matter how harsh (as long as it's constructive).
EDIT: I feel that multi-threading would be a good solution, but now I'm trying to figure out the best place to use threads. I've decided to use two threads because I'm not sure what machine will be running this program, but I can be fairly certain it will have at least two cores.
Actually, is there an environment variable that could tell me how many cores the CPU has, or that I could parse the number from? That way I could dynamically determine the thread amount.
Either way, I have two ideas so far
I am thinking I could break up the second loop into multiple concurrent threads so each time the outer-most loop finished, the second loop (checking the SOS clause against all other clauses) could be broken up into x groups that are executed concurrently.
Or, because on each pass of the outer loop, multiple clauses could be added to the SOS set, I could break that group up and check them against the other clauses concurrently, instead of sequentially.
Is there any reason one solution would be better than the other?
Answer: I found the general style of your program hard to read. You use 8-space indentation but have so many levels that most of your code isn't visible without scrolling. Other than indentation, you use very few spaces in your code, for example around operators. This line:
for(my $j=0; $j<=$#{@clauses[$i]}; $j++) {
Could be made more readable as
for(my $j = 0; $j <= $#{ @clauses[$i] }; $j++) {
I used the automatic Perl code formatter “Perl::Tidy” with a configuration I use, which produced the more readable output below. All I did manually was to reflow the comments and remove empty lines.
# begin breadthfirst/sos search/add algorithm
$SOS[0][0] = $conclusion2;
my $cSize = @clauses;
say "\nworking......";
my $dots = 0;
SOSROW:
for (my $a ; $a < @SOS ; $a++) {
if ((($dots % 7) == 0)) {
print "\n";
}
if ($dots == 14) {
print "You might want to get some coffee.\n";
}
if ($dots == 35) {
print "I'm being VERY Thorough.\n";
}
if ($dots == 63 || $dots == 140) {
print "Hows that coffee?\n";
}
if ($dots == 105) {
print "I think it might be time for a second cup of coffee\n";
}
print ".";
$dots++;
# iterate through each clause on tier i
CLAUSEROW:
for (my $i = 0 ; $i < @clauses ; $i++) {
SOSCOL:
for (my $b ; $b <= $#{ @SOS[$a] } ; $b++) {
CLAUSECOL:
for (my $j = 0 ; $j <= $#{ @clauses[$i] } ; $j++) {
if ( $SOS[$a][$b] eq "~$clauses[$i][$j]"
|| $clauses[$i][$j] eq "~$SOS[$a][$b]")
{
my @tmp;
# found a resolution, so add all other literals from
# both clauses to each set as a single clause
# Algorith improvement:
# first add them to a tmp array, then add them to the actual lists
# only if the clause does not already appear.
#start with the SOS literals(use a hash to keep track of duplicates)
my %seen;
for (my $c = 0 ; $c < $#{ @SOS[$a] } + 1 ; $c++) {
if ($c != $b) {
$seen{ $SOS[$a][$c] } = 1;
push @tmp, "$SOS[$a][$c]";
}
}
# now add the literals from the non-SOS clause
for (my $k = 0 ; $k < $#{ @clauses[$i] } + 1 ; $k++) {
if ($k != $j) {
if (!$seen{ $clauses[$i][$k] }) {
push @tmp, "$clauses[$i][$k]";
}
}
}
# check to see if the clause is already listed
my $dupl = 'not';
my @a1 = Unicode::Collate->new->sort(@tmp);
my $s1 = join(undef, @a1);
for (my $i = 0 ; $i < @clauses ; $i++) {
my @a2 =
Unicode::Collate->new->sort(@{ @clauses[$i] });
my $s2 = join(undef, @a2);
if ($s1 eq $s2) {
$dupl = 'did';
}
}
if ($dupl eq 'not') {
my $s = $cSize + $cAdd;
$res++;
$sAdd++;
$cAdd++;
push @{ $SOS[$sAdd] }, @tmp;
push @{ $clauses[$s] }, @tmp;
# print out the new clauses.
print RESULTS"clause $s: ";
my $clause = $cSize + $a - 1;
if ($SOS[$sAdd][0]) {
print RESULTS "{";
for (
my $j = 0 ;
$j < $#{ @clauses[$s] } + 1 ;
$j++
)
{
if ($clauses[$s][$j]) {
print RESULTS "$clauses[$s][$j]";
}
if ($j != $#{ @clauses[$s] }) {
print RESULTS ",";
}
}
print RESULTS "} ($i,$clause)\n";
}
# if you found a new res, but there was nothing to
# push, you found the contradiction, add {} as a
# clause, signal that you're done and break.
else {
print RESULTS "{} ($i, $clause)\n";
$flag = 1;
last SOSROW;
}
}
}
}
}
}
}
close(RESULTS);
The most crucial issues with this code besides formatting are mentioned by Borodin in his answer.
Beyond those issues, I made the following observations:
You did join(undef, @a1). This makes very little sense; if you want to join strings without a delimiter, use the empty string: join('', @a1).
You perform a Unicode sort Unicode::Collate->new->sort(@tmp), and you do so multiple times. Unless there is a real need for this, I'd recommend using the faster builtin sort: sort @tmp. Here there is no need, and you just need the sorting to be consistent.
You use the variable $dupl as a boolean flag with the values 'not' or 'did'. Instead, choose a proper variable name and use values that are boolean on their own:
my $found_duplicates = 0;
...
$found_duplicates = 1;
...
if (not $found_duplicates) {
...
The provided code snippet refers to many variables that are declared outside of this snippet (if they are declared at all). These have very broad scopes, which is a bad sign.
You regularly output some status (usually a period, every seven periods a line, and sometimes a quip). I'd suggest moving this code into a separate subroutine so the actual algorithm is less cluttered. Your loop would then only contain a simple display_status($dots++).
my %quips = (
14 => "You might want to get some coffee",
35 => "I'm being VERY thorough",
...
);
sub display_status {
my ($round) = @_;
print "\n" if $round % 7 == 0;
print "$quips{$round}\n" if exists $quips{$round};
print ".";
}
Initialize all your variables. All of them, without excuse. use warnings if you need a friendly reminder.
Some of your search loops can be terminated early when you've found something. E.g.
# check to see if the clause is already listed
my $dupl = 'not';
my @a1 = Unicode::Collate->new->sort(@tmp);
my $s1 = join(undef, @a1);
for (my $i = 0 ; $i < @clauses ; $i++) {
my @a2 =
Unicode::Collate->new->sort(@{ @clauses[$i] });
my $s2 = join(undef, @a2);
if ($s1 eq $s2) {
$dupl = 'did';
}
}
could be improved to
# check to see if the clause is already listed
my $found_duplicates = 0;
my $s1 = join '', sort @tmp;
CLAUSE:
for my $clause (@clauses) {
my $s2 = join '', sort @$clause;
if ($s1 eq $s2) {
$found_duplicates = 1;
last CLAUSE;
}
}
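The duplicate check above (sort the literals, join them into a key, compare keys) can be made constant-time per lookup by storing the keys in a hash or set instead of re-sorting every stored clause. Here is the same technique sketched in Python purely for illustration (the function names are mine, not from the review):

```python
def canonical_key(clause):
    """Order-independent key for a clause given as a list of literal strings."""
    return "".join(sorted(clause))

def is_duplicate(clause, known_keys):
    # set lookup is O(1) amortized, versus re-scanning all stored clauses
    return canonical_key(clause) in known_keys

known = {canonical_key(c) for c in (["a", "~b"], ["c"])}
print(is_duplicate(["~b", "a"], known))  # True: same clause, different order
print(is_duplicate(["a", "b"], known))   # False
```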
Use foreach loops wherever you can, e.g. for my $clause (@clauses) rather than using indices.
This code is basically replicating join:
print RESULTS "{";
for (
my $j = 0 ;
$j < $#{ @clauses[$s] } + 1 ;
$j++
)
{
if ($clauses[$s][$j]) {
print RESULTS "$clauses[$s][$j]";
}
if ($j != $#{ @clauses[$s] }) {
print RESULTS ",";
}
}
print RESULTS "} ($i,$clause)\n";
It can be simplified to
my $list_of_clauses = join ',', map { $_ || '' } @{ $clauses[$s] };
say RESULTS "{$list_of_clauses} ($i, $clause)";
This needs a bit of explanation. map is a function that takes a block and a list. It sets $_ to each element of the input list in turn and then evaluates the block. The result of the block is added to the output list:
my @output;
for (@input) {
push @output, the_block($_);
}
So for each element map { $_ || '' } @list returns the list element if it's true-ish, or the empty string otherwise.
Edit: comprehensive refactoring
I went through the code and gradually improved it. There are many optimizations that can be employed, such as calculating data structures in the outermost possible loop, or using hashes for linear-time lookup. I also tried to rename most variables to something sensible. This should have lowered the algorithmic complexity of the outer loops from \$O(n^2 \cdot m^2)\$ to \$O(n^2 \cdot m)\$ (which isn't much in the grand scheme of things, but might still be a 16× speedup under some input). More importantly, in many parts my refactoring should have a far lower constant factor.
I could have employed further optimizations if I'd known the exact initial makeup of @SOS and @clauses. For example, certain checks have to be added when elements can be undef, or when the input clauses might contain duplicate elements. Furthermore the input is currently treated as strings. If they are actually numbers or can be mapped to numbers, certain simplifications could be employed.
After having spent a lot of time with this code, I've come to the conclusion that this problem is not easily parallelizable. Certainly, it would be possible to run the MATCH loop inside worker threads which get passed a pair of rows and return a bunch of new clauses. A main thread would then aggregate the new clauses and extend your data structures, then dispatch new jobs. However, the speedup obtainable in this way is likely to be small compared with other optimizations you could take (such as rewriting part of the program in C, or doing more caching of data structures that get recalculated).
# begin breadthfirst/sos search/add algorithm
$SOS[0][0] = $conclusion2;
my $initial_clause_offset = $#clauses;
my %known_clauses;
for my $clause (@clauses) {
my $key = join '', sort @$clause;
$known_clauses{$key} = 1;
}
say "";
say "working......";
SOS_ROW:
for (my $sos_row = 0 ; $sos_row < @SOS ; $sos_row++) {
my $the_sos_row = $SOS[$sos_row];
display_status($sos_row);
# build the hash of seen elements for this row
my %seen;
$seen{$_}++ for @$the_sos_row;
CLAUSE_ROW:
for (my $clause_row = 0 ; $clause_row < @clauses ; $clause_row++) {
my $the_clause_row = $clauses[$clause_row];
MATCH:
for my $match (find_matches($the_sos_row, $the_clause_row)) {
my ($sos_col, $clause_col) = @$match;
# We found a resolution, so we combine all other literals from
# both clauses to each a single new clause.
# We add the new clause only if it isn't already known
# make a copy of the SOS clause
# and remove the $sos_col-th element
my @sos_literals = @$the_sos_row;
my $removed = splice @sos_literals, $sos_col, 1;
# shadow-delete the $removed element from the hash
# but only if it was seen once. It will be there again in the next
# loop iteration. This is admittedly a bit arcane.
local $seen{$removed} = $seen{$removed};
delete $seen{$removed} if $seen{$removed} == 1;
# copy the literals from the non-SOS clause
# and remove $clause_col-th element
# and remove seen elements
my @non_sos_literals = @$the_clause_row;
splice @non_sos_literals, $clause_col, 1;
@non_sos_literals = grep { not $seen{$_} } @non_sos_literals;
my @new_clause = sort(@sos_literals, @non_sos_literals);
my $new_clause_key = join '', @new_clause;
# skip this new clause if the clause is already known
next MATCH if $known_clauses{$new_clause_key};
# else add this clause to the known clauses etc.
push @SOS, \@new_clause;
push @clauses, \@new_clause;
$known_clauses{$new_clause_key} = 1;
$res++;
# print out the new clauses.
my $clause = $initial_clause_offset + $sos_row;
my $list_of_clauses = join ',', map { $_ || '' } @new_clause;
say RESULTS
"clause $#clauses: {$list_of_clauses} ($clause_row, $clause)";
# if you found a new res, but there was nothing to
# push, you found the contradiction, add {} as a
# clause, signal that you're done and break.
if (not @new_clause) {
$flag = 1;
last SOS_ROW;
}
}
}
}
close(RESULTS);
# Return pairs of indices for all matching elements between the two arrays.
# Only uses O(n + m) complexity!
sub find_matches {
my ($sos_row, $clause_row) = @_;
# build a hash of clause items that map to their columns
# in principle, this could be cached.
my %clause_col;
for my $i (0 .. $#$clause_row) {
$clause_col{ $clause_row->[$i] } = $i;
$clause_col{ '~' . $clause_row->[$i] } = $i;
}
# we now look up possible matching items,
# and add the indices to the matches
my @matches;
for my $sos_col (0 .. $#$sos_row) {
my $item = $sos_row->[$sos_col];
my $clause_col = $clause_col{$item} // $clause_col{"~$item"};
push @matches, [ $sos_col, $clause_col ] if defined $clause_col;
}
return @matches;
}
my %quips;
BEGIN {
%quips = (
14 => "You might want to get some coffee",
35 => "I'm being VERY thorough",
63 => "How's that coffee?",
105 => "I think it might be time for a second cup of coffee",
140 => "How's that coffee?",
);
}
sub display_status {
my ($round) = @_;
print "\n" if $round % 7 == 0;
print "$quips{$round}\n" if exists $quips{$round};
print ".";
} | {
"domain": "codereview.stackexchange",
"id": 7348,
"tags": "perl, optimization, io"
} |
Energy-Momentum tensor of Polyakov action vanishes | Question: In the lecture notes of David Tong on String Theory he defines the energy momentum tensor of the polyakov action as
\begin{align*}
T_{\alpha\beta}=-\frac{2}{T}\frac{1}{\sqrt{-g}}\frac{\delta S}{\delta g^{\alpha\beta}}.\tag{1}
\end{align*}
Since the equation of motion for the dynamical metric $g_{\alpha\beta}$ can be obtained by $$\frac{\delta S}{\delta g^{\alpha\beta}}=0\tag{2}$$ of course this energy momentum tensor vanishes on-shell. I'm a bit confused about this statement. In GR we define the energy momentum tensor exactly the same way, but there of course the EMT does not vanish, but its derivative $\nabla_\mu T^{\mu\nu}$ does. So why don't we conclude in GR that the EMT also vanishes on-shell? What am I getting wrong here?
Answer: There are at least 2 differences:
The metric $g_{\alpha\beta}$ in the Polyakov action $S$ of string theory is a worldsheet metric, not the metric of spacetime (=target space).
The action $S$ in OP's eq. (1) in the context of GR is a matter action (and the $T_{\alpha\beta}$ is a matter SEM tensor); it does not include the gravitational sector. In other words, the corresponding Euler-Lagrange (EL) equations $\frac{\delta S}{\delta g_{\alpha\beta}}=0$ are incomplete; they are not the full equations of motion for $g_{\alpha\beta}$. We need to include the gravitational Einstein tensor to form EFE. | {
"domain": "physics.stackexchange",
"id": 95800,
"tags": "general-relativity, lagrangian-formalism, string-theory, stress-energy-momentum-tensor, variational-principle"
} |
What does uniprot consider "unambiguous" evidence for the subcellular domain of a protein? | Question: Uniprot has annotation for subcellular location of protein domains. This topological domain information of proteins is under the TOPO_DOM flag.
In most cases the subcellular location is assigned by rigorous sequence analysis both for the experimentally verified and predicted subcellular locations. Sometimes more reliable information is available...
... when the experimental technique used allows the unambiguous
assignment of the transmembrane boundary to a particular position
(X-ray crystallography, etc.), the ‘Sequence analysis’ qualifier is
not added in the topological domain annotation. In this case, the
positions of the topological domains can be propagated ‘By
similarity’.
What evidence is available that constitutes "unambiguous assignment" so that the by similarity qualifier can be used?
Answer: Following your question, we have updated the documentation for topological domains in UniProtKB:
http://www.uniprot.org/help/topo_dom
I hope you will find it more informative now. Please don't hesitate to contact the UniProt helpdesk if you have any additional questions (or, if you prefer to discuss here, a short note to the helpdesk would help us spot your posting more reliably, since we cannot closely monitor all forums as you may understand). | {
"domain": "biology.stackexchange",
"id": 4893,
"tags": "cell-biology, bioinformatics, cell-membrane, database"
} |
Does GR put a theoretical lower limit on the radius of a black hole event horizon? | Question: Within GR theory, without going to the extreme r/0 as a radius, (but approaching that as an asymptotic case), is there any theoretical limit as to how small the event horizon of a rotating and/or charged black hole can be?
I appreciate that the Hawking radiation hypothesis postulates that micro black holes, especially primordial ones, should be hard to find, due to evaporation although, as far as I know, this is still a speculative idea, with unfortunately no definitive data to confirm or falsify it.
I am also aware of the possible basis of Planck length effects, but again, as far as I know, these are speculative ideas, without observational proof.
To sum up, I want to ask this question regarding GR within the observational effects we have already confirmed.
EDIT apart from CuriousOne's comment, which is true of course, I need 1 assumption for this particular question! END EDIT
Answer: General Relativity is a purely geometrical theory of gravitation. Quantum effects have no place within GR, and more generally there is no scale to GR itself.
For example, if you look at the Schwarzschild solution, you can set the mass $M$ to be whatever you want. But if you change $M$, you can also scale the time coordinate $t$ and the radial coordinate $r$ so that this has no effect. To be specific, just define new quantities
\begin{align}
M' &= 2M, \\
r' &= 2r, \\
t' &= 2t.
\end{align}
This also gives you a Schwarzschild solution. Plug in any (nonzero) number instead of $2$, and you've got another solution. This is why we can't derive the length of a second from GR (we arbitrarily choose it related to a frequency found in quantum effects). Instead, you can only derive scale-invariant ratios.
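This rescaling freedom is easy to check on the Schwarzschild metric function $f(r)=1-2M/r$ (in units $G=c=1$); the snippet below is purely illustrative and not part of the original answer.

```python
# Schwarzschild metric function in units G = c = 1.
# Rescaling M' = kM, r' = kr (together with t' = kt) leaves f unchanged,
# illustrating that GR itself has no intrinsic scale.
def f(M, r):
    return 1.0 - 2.0 * M / r

M, r, k = 1.0, 10.0, 2.0
print(f(M, r) == f(k * M, k * r))  # True
```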
Of course, GR is only an approximation to what scientists believe is the correct theory of the universe. And there are regions of parameter space where that approximation is believed to be wrong. One of these is at the very small scales where quantum effects are strong, which is why the Planck length might come into it.
So within GR itself, there are no limits. But GR isn't the end of the story, and we don't know the end of the story yet. | {
"domain": "physics.stackexchange",
"id": 22573,
"tags": "general-relativity, black-holes"
} |
Is $d U = 0$ at thermodynamic equilibrium? | Question: At thermodynamic equilibrium, there is thermal, mechanical, and diffusive equilibrium. Does this imply:
$$d\mu = dT = dV = 0$$
$$dU = TdS - PdV \implies dU = TdS$$
Here, I know entropy is maximum, so perhaps $dS=0$ and hence $dU = 0$? I also don't think I can write $d\mu = 0$, as this may be an abuse of notation for the chemical potential.
Answer: A system is in thermodynamic equilibrium when there are no changes in the macroscopic properties (internal energy, entropy, temperature, etc.) of the system.
So yes, at thermodynamic equilibrium $\Delta U=0$.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 58003,
"tags": "thermodynamics, equilibrium"
} |
Clarification of the physics involved of an orbiting body in a circular orbit | Question: I am writing a physics simulation program to simulate a body orbiting a planet but I am a little confused about the physics involved and would like some clarification.
To simulate gravity I am using the following formula
$F = G \frac{m_1 m_2}{r^2}$
and to find the circular velocity of the orbiting body
$v = \sqrt{G (m_1 + m_2) \over r}$
Would I be correct in saying if I initialise the orbiting body's circular velocity using the above formula perpendicular to the radial gravitational force the body will stay in a circular orbit or is there something else going on here that I have missed?
If so how can I calculate the x and y components of the circular velocity as I am working with a cartesian coordinate system?
Answer: If you want to actually simulate the behavior of the planet as it experiences the (vector) force as it moves around, then you need to find a stepping method and write your velocity vectors and position vector in terms of coordinates. I recommend a Verlet velocity method. Others at this site have their favorites, too. Euler's method is not good enough for orbital motion.
Euler's method uses only a first-order expansion to approximate integrals. It is easy to program, but isn't designed to conserve important physics quantities like energy and angular momentum, plus the error accumulation after about 1 complete orbit will be too large at reasonable time steps. Euler will be either too slow (extremely small steps) or too sloppy, and will eventually violate conservation laws.
Verlet methods and other symplectic methods are designed to solve Hamiltonian mechanics systems and are physics friendly. Verlet is also a second order method which means that it has smaller inherent error for a given time step $\Delta t$. For example, in Euler's method, cutting the time step in half (and doubling the number of steps) only cuts the error to 1/2 of what it was (roughly). In Verlet (and other 2nd order methods), cutting the time step in half cuts the error to (1/2)$^2$ = 1/4 of what it was.
Put the large body at the origin, and put the smaller body at ($r_o$,0) at $t=0$. You will need to change the acceleration vector at each step, too, because it will change direction. Write all your vectors in $\hat{i}, \hat{j}$ notation.
And like @Gert says in the comments, the initial speed for a circular orbit will be $v=\sqrt{\frac{Gm_2}{r}}.$
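A minimal velocity Verlet sketch in Python (an illustration under assumed units $Gm_2=1$, with the central mass fixed at the origin and a test particle of negligible mass; not a definitive implementation) shows the circular orbit staying at its initial radius:

```python
import math

GM = 1.0  # assumed units: G * m2 = 1, central mass fixed at the origin

def accel(x, y):
    """Newtonian gravitational acceleration of the test particle."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

x, y = 1.0, 0.0                    # start at (r0, 0) with r0 = 1
vx, vy = 0.0, math.sqrt(GM / 1.0)  # circular speed, perpendicular to radius
dt, steps = 1e-3, 10_000           # about 1.6 orbits (period 2*pi)

ax, ay = accel(x, y)
for _ in range(steps):
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax_new, ay_new = accel(x, y)
    vx += 0.5 * (ax + ax_new) * dt
    vy += 0.5 * (ay + ay_new) * dt
    ax, ay = ax_new, ay_new

print(math.hypot(x, y))  # stays very close to the initial radius 1.0
```

Note that the acceleration vector is recomputed at every step, as the answer describes, and the half-step velocity update is what keeps the energy error bounded compared to Euler.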
If you have the position vector of a particle and you want to force the velocity vector to be perpendicular then you have to find the instantaneous angle of the position vector in your well-defined coordinate system.
$$ \theta_r = \arctan\left(\frac{y}{x}\right).$$
Then increase that angle by 90$^o$ to find the direction of the velocity vector. Use the magnitude of the velocity and the angle to find the components of the vector. | {
"domain": "physics.stackexchange",
"id": 24519,
"tags": "newtonian-gravity, orbital-motion, velocity, simulations"
} |
Why $2R\sigma\sqrt{T+\log T+1}=\tilde{O}(\sigma\sqrt{T})$? | Question: On page 17 of the paper Online Learning with Predictable Sequences, we find the regret of an algorithm equal to
$$
\text{Reg}_T=\frac{R^2}{\eta}+\frac{\eta}{2}\sigma^2(T+\log T+1)
$$
where $T$ is the time horizon, $\eta$ is the learning rate, $R$ is a constant, and $\sigma^2$ is the variance of the data. By the arithmetic mean-geometric mean inequality, the above can be lower bounded by
$$
\sqrt{2}R\sigma\sqrt{T+\log T+1}\leq \frac{R^2}{\eta}+\frac{\eta}{2}\sigma^2(T+\log T+1)
$$
Why is the left-hand side $\tilde{O}(\sigma\sqrt{T})$?
According to $\tilde{O}$ vs $O$, $\tilde{O}$ ignores logarithmic factors; that means when $f(n) \in \tilde{O}(g(n))$ there exists a $k$ such that $f(n) \in O(g(n)\log^k g(n))$. Now I have $f(T)=\sqrt{2}R\sigma\sqrt{T+\log T+1}$; why can I write it as $O(\sigma\sqrt{T}\log^kT)=\tilde{O}(\sigma\sqrt{T})$, and what is my $k$?
Answer: What is $\tilde O$? There are two different definitions.
According to exercise 3.5 of Introduction to Algorithms by Cormen et al., $\tilde O$ can mean $O$ with logarithmic factors ignored: $$\tilde O(g(n))= \{f(n): \text{there exist a nonnegative constant } k \\ \text{ and positive constants } c \text{ and } n_0
\text{ such that }\\ 0\le f(n)\le cg(n)(\log n)^k\text{ for all }n\ge n_0\}$$
According to Wikipedia entry, $f(n) = \tilde O(g(n))$ is shorthand for $f(n) = O(g(n)\left(\log g(n)\right)^k)$ for some $k$.
For any value $x>0$, $x^0=1$. If we take $k=0$ simply, we can see that for either definition,
$$f(T)=\sqrt{2}R\sigma\sqrt{T+\log T+1}=O(\sigma \sqrt{T}) =O(\sigma \sqrt{T}\,1)=\tilde{O}(\sigma\sqrt{T})$$
Here is the explanation for the second equality in the above equation.
$\log T=o(T) \Rightarrow
T+\log T+1=\Theta(T) \Rightarrow \sqrt{T+\log T+1}=\Theta(\sqrt T)$
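One can also check numerically that the ratio $\sqrt{T+\log T+1}/\sqrt{T}$ tends to $1$; the snippet below is a quick illustration, not part of the original answer.

```python
import math

# the ratio decreases monotonically toward 1 as T grows
ratios = [math.sqrt(T + math.log(T) + 1) / math.sqrt(T)
          for T in (10.0, 1e3, 1e6, 1e12)]
print(ratios)
```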
We can also proceed more rigorously, using L'Hôpital's rule once.
$$\lim_{T\to\infty}\frac{\sqrt{T+\log T+1}}{\sqrt T} = \sqrt{\lim_{T\to\infty}\frac{T+\log T+1}{T}} = \sqrt{\lim_{T\to\infty}\frac{1+\frac1T}{1}}=1$$ | {
"domain": "cs.stackexchange",
"id": 12904,
"tags": "asymptotics"
} |
What is known about $CFL \cap coCFL$? | Question: CFL is the class of context-free languages; co-CFL the languages whose complements are context-free. So CFL $\neq$ co-CFL.
Are there any nice characterizations or other basic facts about CFL $\cap$ co-CFL?
Two easy examples of the kind of fact I have in mind (but one would like more precise information):
DCFL $\subseteq$ CFL $\cap$ co-CFL.
pCFL $\subseteq$ CFL $\cap$ co-CFL, where pCFL is the class of CFLs that are permutation-closed. It's closed under complement by Parikh's theorem and the fact that semilinear sets are definable in Presburger arithmetic.
Another related question: given a CFL $L$, it's undecidable whether $L \in$ DCFL. Is the same true for pCFL?
Answer: About the last question, the usual undecidability proof for universality could be adapted.
Recall that in this proof, one considers an instance $\langle \Sigma,\Delta,u,v\rangle$ of Post's correspondence problem, where $\Sigma$ and $\Delta$ are two disjoint alphabets, and $u$ and $v$ are two homomorphisms from $\Sigma^\ast$ to $\Delta^\ast$. Then $$L_u=\{a_1\cdots a_n(u(a_1\cdots a_n))^R\mid n>0\wedge\forall 0<i\leq n.a_i\in\Sigma\}$$ and $$L_v=\{a_1\cdots a_n(v(a_1\cdots a_n))^R\mid n>0\wedge\forall 0<i\leq n.a_i\in\Sigma\}$$—where $w^R$ denotes the reversal of word $w$—are two DCFLs s.t. $L_u\cap L_v=\emptyset$ iff the original PCP instance was negative. Letting $$L=\overline{L_u}\cup\overline{L_v}\;,$$one thus defines a CFL (since DCFLs are effectively closed under complement and CFLs under union), which is universal, i.e. equal to $(\Sigma\cup\Delta)^\ast$, iff the original PCP instance was negative.
Now, if $L$ is universal, i.e. if $L=(\Sigma\cup\Delta)^\ast$, then $L$ is closed under permutations. Conversely, if $L$ is not universal, i.e. if $L_u\cap L_v\neq\emptyset$, there is at least one word $x$ of form $x=w(u(w))^R=w(v(w))^R$ for some $w$ in $\Sigma^+$. Then $x$ does not belong to $L$, but it's easy to find a permutation of $x$ that belongs to $L$: for instance, permute the last letter of $w$ (which is in $\Sigma$) with the first of $(u(w))^R$ (which is in $\Delta$) to obtain a word in $\Sigma^\ast\Delta\Sigma\Delta^\ast\subseteq L$.
Hence $L$ is closed under permutation iff it is universal iff the original PCP instance was negative. | {
"domain": "cstheory.stackexchange",
"id": 2477,
"tags": "context-free, decidability"
} |
Why is the function $\sin(10\pi t) + \sin(31 t)$ not periodic while its graph looks periodic? | Question: I was going through the book "Signals-and-Systems-Continuous-and-Discrete" (4th edition). The question was to check whether $\sin(10\pi t)+\sin(31t)$ is periodic or not. The answer presented was short and simple: "THE SUM OF TWO SINUSOIDS IS PERIODIC IF THE RATIO OF THEIR RESPECTIVE PERIODS IS RATIONAL". However, when the same function is plotted in Desmos, the graph does not seem to agree.
Answer: The graph ought to look periodic. After all, it should be very hard to see that the ratio of the two periods is irrational (which it is, contrary to "ZerotheHero's" comment on your question), since there are rational numbers arbitrarily close to every irrational.
One could argue that it should even conceptually be impossible to see the effect by simply plotting something on the computer, as numerics typically only uses rational numbers internally.
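To see how small the deviation actually is, one can compute a "near-period" numerically. In the sketch below, the ratio of the two fundamental periods, $T_1/T_2 = (1/5)/(2\pi/31) = 31/(10\pi)$, and its rational approximation $149/151$ are worked out here (they are not from the book); the code measures how far the function is from repeating after $T = 151\cdot T_1 = 30.2$:

```python
import math

# f(t) = sin(10*pi*t) + sin(31*t); periods T1 = 1/5 and T2 = 2*pi/31.
# Their ratio 31/(10*pi) is irrational, but 149/151 approximates it well,
# so T = 151 * T1 = 30.2 is a "near-period": the graph almost repeats.
def f(t):
    return math.sin(10 * math.pi * t) + math.sin(31 * t)

T = 151 / 5  # 30.2
ts = [i / 1000 for i in range(10000)]
residual = max(abs(f(t + T) - f(t)) for t in ts)
# small enough to fool the eye on a plot, but not exactly zero
assert 1e-4 < residual < 0.02
```

The residual is roughly of order $5\cdot10^{-3}$: invisible on a typical plot, yet nonzero, which is exactly the point.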
Conclusion: You will have a hard time observing aperiodicity like this by plotting something on the computer. But this doesn't mean the function that you encounter is periodic. | {
"domain": "physics.stackexchange",
"id": 82741,
"tags": "waves, mathematics"
} |
Redshift - How can $z$ be greater than 1? | Question: I'm having trouble understanding the equation for redshift:
$z = \Delta\lambda/\lambda \approx \Delta f/f \approx v/c$.
If $z = v/c$ and $c =$ speed of light,
how can $z>1$ (as nothing can exceed the speed of light)?
Answer: The formula $z \simeq v/c$ is only approximately true when $v \ll c$. Redshifts greater than 1 are possible if the redshift is caused by relativistic motion or by cosmological expansion.
The cosmological redshift is not a Doppler shift and should not be interpreted as such except perhaps at very small redshifts. It is caused by the expansion of space between the time when the light is emitted and when it is received by an observer. This expansion could be interpreted as a recession speed at small redshifts, but as you have surmised, that interpretation runs into trouble when redshifts become greater than 1. It is the expansion of space that allows things to apparently recede at greater than the speed of light. Your statement that "nothing can exceed the speed of light" is more nuanced in General Relativity and has received many questions and answers in these pages.
A redshift larger than 1 is also possible when relativistic motion is applied to a Doppler shift. The correct formula is
$$ z = \sqrt{\frac{c+v}{c-v}} -1,$$
which can become arbitrarily large as $v \rightarrow c$.
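As a quick numeric illustration of the relativistic formula (the velocity values below are arbitrary choices):

```python
import math

def doppler_z(beta):
    # relativistic Doppler redshift; beta = v/c
    return math.sqrt((1 + beta) / (1 - beta)) - 1.0

# z exceeds 1 well before v reaches c:
assert abs(doppler_z(0.8) - 2.0) < 1e-9   # sqrt(1.8/0.2) - 1 = 3 - 1 = 2
# and the approximation z ~ v/c only holds for small v:
assert abs(doppler_z(0.01) - 0.01) < 1e-4
```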
If $v \ll c$, then the above expression can be approximated by
$$ z = (1 + v/2c +...)(1 + v/2c -...) -1 \simeq v/c$$ | {
"domain": "physics.stackexchange",
"id": 81114,
"tags": "special-relativity, space-expansion, doppler-effect, redshift"
} |
Projection of bosonic and fermionic creation and annihilation operators | Question: For the creation and annihilation operator $a(\varphi)$ and $a^\dagger(\varphi)$ and the orthogonal projections i would like to understand why the following holds
\begin{equation}
P_\pm a(\varphi) P_\pm = a(\varphi) P_\pm \,\neq \,P_\pm a(\varphi)
\end{equation}
and
\begin{equation}
P_\pm a^\dagger(\varphi) P_\pm = P_\pm a^\dagger(\varphi) \neq \,a^\dagger(\varphi) P_\pm
\end{equation}
I know how $a(\varphi)$ and $a^\dagger(\varphi)$ are defined, but I am not sure how the projection operators act on them.
Edit: We defined the projection operators in the following way:
\begin{equation}
P_+ := \frac{1}{N!} \sum\limits_{\sigma \in S_N} U_\sigma \quad\text{and}\quad P_- := \frac{1}{N!} \sum\limits_{\sigma \in S_N} \text{sgn}(\sigma) U_\sigma
\end{equation}
and annihilation and creation operator as
\begin{equation}
(a^\dagger(\varphi)\Psi)_N := \sqrt{N}\varphi\otimes \psi_{N-1}
\end{equation}
\begin{equation}
(a(\varphi)\Psi)_N := \sqrt{N+1}\langle \varphi, \psi_{N+1}\rangle_1
\end{equation}
Answer: First of all, the range of $P_\pm a^\dagger(\varphi)$ is a subspace of the (anti)-symmetric states (who are stable under $P_\pm$), while this is not true of $a^\dagger(\varphi)P_\pm $. For example, if $\psi\neq \varphi$ is another $1$-particle state, then :
$$a^\dagger(\varphi)P_\pm \psi =a^\dagger(\varphi) \psi = \sqrt{2} \varphi \otimes \psi$$
is not symmetric.
To prove the other part, let $\psi_0,\psi_1,\ldots, \psi_n$ be $1$-particle states. Then :
\begin{align}
P_\pm a^\dagger(\psi_0) P_\pm \psi_1 \otimes \ldots \psi_n &= \frac{\sqrt{n+1}}{(n+1)!n!}\sum_{\sigma \in \mathfrak S_{n+1}} \sum_{\sigma' \in \mathfrak S_n}\varepsilon^\pm(\sigma)\varepsilon^\pm(\sigma')U_\sigma (\psi_0 \otimes \psi_{\sigma'(1)} \otimes \ldots \psi_{\sigma'(n)}) \\
&= \frac{\sqrt{n+1}}{(n+1)!n!}\sum_{\sigma \in \mathfrak S_{n+1}} \sum_{\sigma' \in \mathfrak S_n} \varepsilon^\pm(\sigma\circ\sigma') (\psi_{\sigma(0)} \otimes \psi_{\sigma\circ\sigma'(1)} \otimes \ldots \psi_{\sigma\circ\sigma'(n)}) \\
&= \frac{\sqrt{n+1}}{(n+1)!}\sum_{\sigma \in \mathfrak S_{n+1}} \varepsilon^\pm(\sigma) (\psi_{\sigma(0)} \otimes \psi_{\sigma(1)} \otimes \ldots \psi_{\sigma(n)}) \\
&= P_\pm a^\dagger(\psi_0) \psi_1 \otimes \ldots \psi_n
\end{align}
where we consider $\sigma' \in \mathfrak S_n \subset \mathfrak S_{n+1}$ by setting $\sigma'(0)=0$.
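The identity can also be checked numerically on a small example. The sketch below builds $P_\pm$ as a (signed) average over tensor-index permutations and $a^\dagger(\varphi)$ as $\sqrt{n+1}\,\varphi\otimes\psi$, mapping 2-particle to 3-particle states; the dimension, seed, and test vectors are arbitrary choices, not from the original post:

```python
import itertools
from math import factorial

import numpy as np

d = 3                               # single-particle dimension (arbitrary)
rng = np.random.default_rng(1)

def parity(perm):
    # sign of a permutation, via its inversion count
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def project(psi, n, sign):
    # P+ (sign=+1) or P- (sign=-1) on the n-particle space
    t = psi.reshape((d,) * n)
    acc = np.zeros_like(t)
    for perm in itertools.permutations(range(n)):
        s = parity(perm) if sign < 0 else 1
        acc = acc + s * t.transpose(perm)
    return (acc / factorial(n)).reshape(-1)

def a_dagger(phi, psi, n):
    # (a†(φ)Ψ)_{n+1} = sqrt(n+1) φ ⊗ ψ_n
    return np.sqrt(n + 1) * np.kron(phi, psi)

phi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)  # 2-particle state

for sign in (+1, -1):
    lhs = project(a_dagger(phi, project(psi, 2, sign), 2), 3, sign)
    rhs = project(a_dagger(phi, psi, 2), 3, sign)
    assert np.allclose(lhs, rhs)    # P± a†(φ) P± = P± a†(φ)
```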
The results for $a(\varphi)$ are just the adjoint of the ones for $a^\dagger(\varphi)$. | {
"domain": "physics.stackexchange",
"id": 79078,
"tags": "quantum-mechanics, second-quantization"
} |
Twist Message Example and /cmd_vel | Question:
Hi there, I have been trying to find an example of a Twist message on the /cmd_vel topic, but I can't find one. Is there any example of a subscriber that listens to Twist messages on the /cmd_vel topic? I am trying to drive a motor using an Arduino.
Thank you.
Originally posted by MyloXyloto on ROS Answers with karma: 3 on 2012-03-14
Post score: 3
Original comments
Comment by MyloXyloto on 2012-03-25:
I'm not very familiar with Python. Can you give it to me in C++ so I can directly compile it in the Arduino IDE? Or do you have a servo motor subscriber example which is controlled by using an ultrasonic sensor as a publisher? Thanks in advance
Comment by mjcarroll on 2012-03-25:
From the tutorials, Servo Example: http://www.ros.org/wiki/rosserial_arduino/Tutorials/Servo%20Controller
Comment by mjcarroll on 2012-03-25:
From the tutorials: Rangefinder example: http://www.ros.org/wiki/rosserial_arduino/Tutorials/SRF08%20Ultrasonic%20Range%20Finder
Comment by MyloXyloto on 2012-03-26:
Thanks, but i already done this rosserial tutorials and it works perfectly. The main problem is to mix the Servo, Rangefinder, arduino and my main processor in C language. My main goal is to build an normal arduino obstacle avoidance robot which is implement by ROS.
Comment by MyloXyloto on 2012-03-26:
I believe, many people out there is not clear about this 'mixing' up method even though its a simple robot project. Sorry about that :)
Comment by mjcarroll on 2012-03-26:
So are you trying to do all of this on an Arduino? Or is there some interface to another computer?
Comment by MobileWill on 2013-10-25:
Have you taken a look at this https://github.com/hbrobotics/ros_arduino_bridge/
All the base code is done, so it makes it much easier to get started, and you can easily add support for different hardware. The readme gives a twist example, and it's just enough to start playing with twist messages.
Answer:
So, the topic /cmd_vel topic should have the message type Twist
Looking at the message description, we can see that each incoming message should have a linear component, for the (x,y,z) velocities, and an angular component for the angular rate about the (x,y,z) axes.
I'm going to give you some example code in Python, but this could just as easily be done in C++.
#!/usr/bin/env python
import roslib; roslib.load_manifest('YOUR_PACKAGE_NAME_HERE')
import rospy
import tf.transformations
from geometry_msgs.msg import Twist
def callback(msg):
rospy.loginfo("Received a /cmd_vel message!")
rospy.loginfo("Linear Components: [%f, %f, %f]"%(msg.linear.x, msg.linear.y, msg.linear.z))
rospy.loginfo("Angular Components: [%f, %f, %f]"%(msg.angular.x, msg.angular.y, msg.angular.z))
# Do velocity processing here:
# Use the kinematics of your robot to map linear and angular velocities into motor commands
v_l = ...
v_r = ...
# Then set your wheel speeds (using wheel_left and wheel_right as examples)
wheel_left.set_speed(v_l)
wheel_right.set_speed(v_r)
def listener():
rospy.init_node('cmd_vel_listener')
rospy.Subscriber("/cmd_vel", Twist, callback)
rospy.spin()
if __name__ == '__main__':
listener()
In this example, we create a node called cmd_vel_listener that subscribes to the /cmd_vel topic. We can then create a callback function (called callback), which accepts the message as a parameter msg.
From here, it is as simple as reading the named fields from the message, and using that data in whatever way that you want in your application.
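For example, for a differential-drive robot the "velocity processing" step could map a Twist into wheel rates like the sketch below; note that `twist_to_wheel_speeds`, `wheel_separation`, and `wheel_radius` are hypothetical names and values introduced here for illustration, not part of the original answer:

```python
def twist_to_wheel_speeds(linear_x, angular_z,
                          wheel_separation=0.3, wheel_radius=0.05):
    # differential-drive kinematics: each wheel's linear speed,
    # converted to an angular rate by dividing by the wheel radius
    v_l = (linear_x - angular_z * wheel_separation / 2.0) / wheel_radius
    v_r = (linear_x + angular_z * wheel_separation / 2.0) / wheel_radius
    return v_l, v_r

# driving straight: both wheels turn at the same rate
v_l, v_r = twist_to_wheel_speeds(0.5, 0.0)
assert abs(v_l - 10.0) < 1e-9 and abs(v_r - 10.0) < 1e-9
# turning in place: wheels turn in opposite directions
v_l, v_r = twist_to_wheel_speeds(0.0, 1.0)
assert abs(v_l + v_r) < 1e-9
```

Real parameter values would of course come from your robot's geometry.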
Hope this helps.
Originally posted by mjcarroll with karma: 6414 on 2012-03-14
This answer was ACCEPTED on the original site
Post score: 17
Original comments
Comment by Dhagash Desai on 2018-09-02:
Where is the function set_speed and how are you writing it in arduino can you give me example of that? | {
"domain": "robotics.stackexchange",
"id": 8585,
"tags": "ros, navigation, beginner"
} |
Could you explain these 2 steps of the derivation of the Bellman equation as a recursive equation in Sutton & Barto? | Question: I am reading the Sutton & Barto (2018) RL textbook.
On page 59, it derives the recursive property of the state-value function as below.
Could you explain the steps of third and fourth equality?
Here is what I have.
\begin{aligned}
E_\pi[R_{t+1}+\gamma G_{t+1}|S_t=s] &= E_\pi[R_{t+1}|S_t=s]+E_\pi[\gamma G_{t+1}|S_t=s] \\
&= \sum_a \pi(a|s) \sum_{s',r} p(s',r|s,a) \times r + ???
\end{aligned}
Answer: To expand $\mathbb{E}_\pi[\gamma G_{t+1}|S_t=s]$, you can take the same expectation over next state and reward as for $R_{t+1}$ (in fact this is normally shown without separating the two terms as you have done, but as it is the expansion of this part where you want help, we can do it separately).
The key thing is to move forward one time step - choosing the action using the policy, and the reward and next state using the state transition function - and express the expectation as a sum of the probabilities for the next step (also noticing that this changes the condition from $S_t=s$ to $S_{t+1}=s'$):
$$\mathbb{E}_\pi[\gamma G_{t+1}|S_t=s] = \gamma \sum_{a}\pi(a|s) \sum_{r,s'} p(r,s'|s,a) \mathbb{E}_\pi[G_{t+1}|S_{t+1}=s']$$
Then, we can notice that $\mathbb{E}_\pi[G_{t+1}|S_{t+1}=s']$ is $v_{\pi}(s')$ and get
$$\mathbb{E}_\pi[\gamma G_{t+1}|S_t=s] = \gamma \sum_{a}\pi(a|s)\sum_{r,s'} p(r,s'|s,a) v_{\pi}(s')$$
Now we can recombine this with the other expression for expected immediate reward that you have already resolved, because the summation is the same in both parts, and get result 3.14 from the book.
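The recursion can also be verified numerically on a toy MDP. In the sketch below, all transition probabilities, rewards, and policy weights are arbitrary test values; $v_\pi$ is obtained by solving the linear system $v = r_\pi + \gamma P_\pi v$ and then checked against the one-step Bellman expansion:

```python
import numpy as np

gamma = 0.9
# p[s, a, s'] transition probabilities, r[s, a] expected immediate reward,
# pi[s, a] policy probabilities -- all arbitrary test values
p = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0], [2.0, -1.0]])
pi = np.array([[0.6, 0.4], [0.5, 0.5]])

P_pi = np.einsum('sa,sat->st', pi, p)      # policy-averaged transitions
r_pi = np.einsum('sa,sa->s', pi, r)        # policy-averaged rewards
v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Bellman: v(s) = sum_a pi(a|s) sum_s' p(s'|s,a) [r(s,a) + gamma v(s')]
rhs = np.einsum('sa,sa->s', pi, r + gamma * np.einsum('sat,t->sa', p, v))
assert np.allclose(v, rhs)
```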
It is more common to resolve both parts of the expectation the same way though, and not split them up to later recombine. The tricky part is perhaps realising that you cannot resolve the expected return fully when looking ahead one time step, only express it as a sum of expected returns from all possible next states. | {
"domain": "ai.stackexchange",
"id": 3806,
"tags": "reinforcement-learning, markov-decision-process, value-functions, sutton-barto, bellman-equations"
} |
Integration to find general solution of free particle | Question: I was attempting problems in Griffiths Intro to QM when I came across the following:
A free particle has the initial wavefunction:
$$\Psi(x, 0) = Ae^{-ax^2} \, .$$
Find $\Psi(x, t)$.
I normalised the initial wavefunction and found $\phi(k)$, and I managed to get the following equation for $\Psi(x, t)$:
$$\Psi(x, t)
= \frac{1}{(2\pi a)^\frac{1}{4}} \int^{\infty}_{-\infty} \exp\left[ikx - k^2 \left( \frac{1}{2a} + \frac{\hbar t}{2m} \right) \right] \frac{dk}{2\pi} \, .$$
I got stuck after that so I looked at the answer booklet provided and my luck failed me. The answer went from the integral I provided immediately to the final result with no intermediate steps.
Any help would be much appreciated. It looks like it requires changing the integral to be with respect to x.
Answer: The integral can be computed independent of $x$, in fact, it must, because $x$ must remain after the integration, otherwise, $\Psi(x,t)$ would no longer be a function of $x$. Instead, you are integrating out the variable $k$.
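The kind of Gaussian integral that results — for a real width $\alpha>0$, $\int_{-\infty}^{\infty} e^{ikx-\alpha k^2}\,dk = \sqrt{\pi/\alpha}\,e^{-x^2/4\alpha}$, which follows from completing the square — can be sanity-checked numerically ($\alpha$ and $x$ below are arbitrary test values):

```python
import numpy as np

alpha, x = 0.7, 1.3                 # arbitrary test values (alpha > 0)
k = np.linspace(-50.0, 50.0, 400_001)
integrand = np.exp(1j * k * x - alpha * k**2)
numeric = np.sum(integrand) * (k[1] - k[0])   # simple Riemann sum
closed_form = np.sqrt(np.pi / alpha) * np.exp(-x**2 / (4 * alpha))
assert abs(numeric - closed_form) < 1e-6
```

(In the actual problem the width parameter is complex, but the same closed form applies as long as its real part is positive.)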
Anyway, you can try writing down the argument for the exponential, and then completing the square. Then you'll have something like $$e^{-C(k+A)^2 + B}$$in which case, $e^B$ can be factored out, and you can do a substitution $$u = k+A$$ $$du = dk$$ to make it a simple Gaussian integral. | {
"domain": "physics.stackexchange",
"id": 26904,
"tags": "quantum-mechanics, integration"
} |
Connection between isometries and projectors in QM | Question: I realize this question is technically a mathematical one but I think it is seen often enough in quantum information so I ask it here. The following is the definition of an isometry in Mark Wilde's book
Let $\mathcal{H}$ and $\mathcal{H}^{\prime}$ be Hilbert spaces such
that $\operatorname{dim}(\mathcal{H}) \leq$
$\operatorname{dim}\left(\mathcal{H}^{\prime}\right)$ An isometry $V$
is a linear map from $\mathcal{H}$ to $\mathcal{H}^{\prime}$ such that
$V^{\dagger} V=I_{\mathcal{H}}$. Equivalently, an isometry $V$ is a
linear, norm-preserving operator, in the sense that
$\| |\psi\rangle \|_{2} = \| V |\psi\rangle \|_{2}$ for all
$|\psi\rangle \in \mathcal{H}$.
He also points out that $V V^{\dagger}=\Pi_{\mathcal{H}^{\prime}}$ which is a projection onto $\mathcal{H'}$.
My questions are about $V^\dagger$.
By the definition, it is not an isometry but it is a linear map from $\mathcal{H'}$ to $\mathcal{H}$. Is $V^\dagger$ itself a projector from $\mathcal{H'}$ to a subspace of $\mathcal{H'}$ of dimension $\text{dim}(\mathcal{H})$ followed by a unitary from this subspace to $\mathcal{H}$?
Does every projector have a corresponding isometry? That is, suppose I am given a projector $\Pi_{\mathcal{H}}$ onto a subspace of $\mathcal{H}$ called $\mathcal{K}$. Then does every isometry $V$ from $\mathcal{K}$ to $\mathcal{H}$ satisfy $VV^\dagger = \Pi_{\mathcal{H}}$?
Answer: You can characterise isometries as those linear maps that can be written in the form
$$V = \sum_{k=1}^d |u_k'\rangle\!\langle u_k| \in \operatorname{Lin}(\mathcal H,\mathcal H'),$$
where $\{|u_k\rangle\}_k$ is an orthonormal basis for $\mathcal H$, $\{|u_k'\rangle\}_k$ is an orthonormal set in $\mathcal H'$ (but not a basis if $\operatorname{dim}(\mathcal H)<\operatorname{dim}(\mathcal H')$), and $d\equiv\operatorname{dim}(\mathcal H)$.
In this notation, $V^\dagger$ is obtained by simply switching $|u_k\rangle$ and $|u_k'\rangle$:
$$V^\dagger = \sum_{k=1}^d |u_k\rangle\!\langle u_k'| \in \operatorname{Lin}(\mathcal H',\mathcal H).$$
Now to address your questions:
Indeed, $V^\dagger$ is not an isometry if $\operatorname{dim}\mathcal H<\operatorname{dim}\mathcal H'$. You can write it as
$$V^\dagger = \left( \sum_{j=1}^d |u_j\rangle\!\langle u_j'| \right) \left( \sum_{k=1}^d |u_k'\rangle\!\langle u_k'| \right)=V^\dagger \left( \sum_{k=1}^d |u_k'\rangle\!\langle u_k'| \right).$$
This amounts to simply multiplying to the right with a projector onto the support of $V^\dagger$, which you can always do freely.
This clearly is not a very insightful statement.
However, you could think of $V^\dagger$ as a unitary operation when restricting its domain to its support. In other words, $V^\dagger|_{\operatorname{supp}(V^\dagger)}$ is unitary. That's probably how close you can get to your statement.
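These properties are easy to confirm numerically. A sketch using a random isometry built from a reduced QR decomposition (the dimensions and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                        # dim(H') = 5, dim(H) = 3
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
V, _ = np.linalg.qr(A)             # reduced QR: V is m x n, orthonormal columns

# V is an isometry: V† V = I on H
assert np.allclose(V.conj().T @ V, np.eye(n))
# V V† is a projector onto an n-dimensional subspace of H'
P = V @ V.conj().T
assert np.allclose(P @ P, P)                # idempotent
assert np.allclose(P, P.conj().T)           # Hermitian
assert abs(np.trace(P).real - n) < 1e-12    # rank n
```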
Let $W:\mathcal K\to\mathcal H$ be an isometry, with $d'\equiv \operatorname{dim}\mathcal K\le d$. Then we can write it as
$$ W = \sum_{k=1}^{d'} |v_k\rangle\!\langle v_k'|,$$
with $|v_k'\rangle$ orthonormal basis for $\mathcal K$ and $|v_k\rangle$ orthonormal set in $\mathcal H$. Then,
$$ W W^\dagger = \sum_{k=1}^{d'} |v_k\rangle\!\langle v_k|.$$
This is therefore a projector onto a subset of $\mathcal H$ of dimension $d'$ (though not necessarily a projector onto $\mathcal K$). | {
"domain": "physics.stackexchange",
"id": 67503,
"tags": "quantum-mechanics, quantum-information, unitarity"
} |
Could Carbon 14 have been created before the Big Bang? | Question: Here is my interest in learning about the source of Carbon 14(The Stuff I think they use for Carbon Dating). I might even be asking the question wrong, but here goes.
Source of the Elements?
Does not science say that matter, through fusion, has brought to us the chemicals of the periodic table? And with each level of fusion the temperature grows, raising the diameter of the inferno? Logically, from this perspective, could carbon elements have been created from the original black star of the universe? For if all matter explodes from a single location, does not logic say that as the original element grew hotter and hotter, more of the elements were given to us?
Carbon Made Before the Big Bang?
Now if carbon molecules survived within this inferno of nuclear activity while the higher elements were in process, could not those same carbon elements therefore have gathered some forms of Carbon 14? For truly, from that perspective, a carbon molecule would have received much more intense radiation, considering that all the stars of the universe pounded a specific molecule, yet after the expansion of the sky those elements would then receive less radiation? Would not a hundred thousand million extra stars change calculations figured for the rate of the single star used in the calculation for Carbon 14 dating?
Could Carbon 14 have been created before the Big Bang, or did it only get created after the making of the world? Or something else altogether?
Just curious, and want to learn. Thanks for helping me understand.
Answer: No. Near the big bang, temperatures were high enough that nuclei were not important. The binding energies of nuclei were trivial compared to the temperature. There was a sea of (as currently understood) quarks and gluons; nuclei may have formed for a short time, but would break apart immediately. The best understanding is that what came out was protons and neutrons, which fused into some helium, deuterium, and lithium, but little of anything else. This question looks like it is trying to justify a universe only a few thousand years old, which is not appropriate here. The scientific evidence shows little Carbon-14 at that stage.
"domain": "physics.stackexchange",
"id": 14598,
"tags": "elements"
} |
Two normalization constants for Dirac plane wave function | Question: I stumbled across two different expressions for a Dirac plane wave function, namely
$$\psi=\sqrt{\frac{m}{EV}}ue^{-ip\cdot x}$$
and
$$\psi=\frac{1}{\sqrt{2EV}}ue^{-ip\cdot x}$$
where $u$ is the Dirac spinor.
What is the difference?
Answer: Different authors may choose different normalisation schemes for the same thing. They will just have slightly different constants elsewhere to compensate. These two seem to be just a constant factor away from each other. | {
"domain": "physics.stackexchange",
"id": 95770,
"tags": "quantum-mechanics, wavefunction, dirac-equation, normalization"
} |
Number Division Theory | Question: I have a JavaScript function that takes in a user number (total amount placed on a betting table in denominations of 50) and returns an Array of the optimum chips (lowest # required) to form each stack.
My function works: the correct array is output for each value tested. But since this should run in the client browser with a fast response time for a smooth UI, I feel the code is not optimised.
Is there a better, more efficient (+ mathematically solid) way of computing the same output?
Any suggestions would be appreciated.
//number will be validated before running through function to make sure it has a factor of 50.
function chipStack(number) {
var k = 0; //1k chip
var f = 0; //500 chip
var t = 0; //300 chip
var o = 0; //100 chip
var fi = 0; // 50 chip
k = Math.floor(number / 1000);
var number2 = number % 1000
if (number2 === 0) {
return [k, f, t, o, fi];
} else {
f = Math.floor(number2 / 500)
var number3 = number2 % 500
}
if (number3 === 0) {
return [k, f, t, o, fi];
} else {
t = Math.floor(number3 / 300)
var number4 = number3 % 300
}
if (number4 === 0) {
return [k, f, t, o, fi];
} else {
o = Math.floor(number4 / 100)
var number5 = number4 % 100
}
if (number5 === 0) {
return [k, f, t, o, fi];
} else {
fi = Math.floor(number5 / 50)
var number6 = number5 % 50
}
if (number6 === 0) {
return [k, f, t, o, fi];
} else {
return "Something isn't right " + number6;
}
}
Answer: You could compress the code as follows. It doesn't check for early exit conditions, but since it's running on the front-end anyway, i.e. not at high volumes, the performance won't be noticeably degraded.
function chipStack(n){
var chipValues = [1000,500,300,100,50];
var stack = [];
chipValues.forEach(function(e,i){
stack.push(Math.floor(n / e));
n -= (stack[stack.length-1] * e);
})
return stack;
} | {
"domain": "codereview.stackexchange",
"id": 6686,
"tags": "javascript, optimization"
} |
Why is cosmic background radiation not ether? | Question: Why is the cosmic background radiation not the ether? I mean, it's everywhere and it's radiation, so we can measure a Doppler effect by moving with some velocity.
Answer: The luminiferous aether was, by definition, a hypothesized medium that was needed for electromagnetic waves to propagate through space. The cosmic microwave background isn't needed for photons to move; indeed, they move through space even if one removes (shields) the cosmic microwave radiation. So that's why the CMB isn't the luminiferous aether.
On the other hand, the aether also made a particular prediction, the existence of a preferred reference frame. In this sense, the CMB plays the same role as the aether. Cosmologists use the reference frame associated with the CMB as the preferred coordinate system. However, this ability to pick a "preferred" coordinate system depends on the environment – the CMB is just some property of the environment that could possibly be different as well (e.g. if you shield it). In this respect, it still differs from the luminiferous aether that couldn't have been shielded and that guaranteed the existence of a preferred coordinate system in any situation, regardless of details of the environment. | {
"domain": "physics.stackexchange",
"id": 9587,
"tags": "cosmology, speed-of-light, space-expansion, cosmic-microwave-background, aether"
} |
Surface energy of water | Question: If $10^3$ small drops of water, each of radius $10^{-7}\,\mathrm{m}$, combine to form one single large drop, what is the energy released? The surface tension is $0.07\,\mathrm{N/m}$. I think the surface area should decrease, so energy is released, but how do we find the energy? Or how can we find the radius of the large drop to find $\Delta A$ ($A$ = surface area)?
Answer: I will only tell you how to proceed with the problem.
Volume is conserved. Equate the volume of the small spherical drops with that of the large drop and find the radius of the large drop.
Now you can find the surface area of the new large drop. You can also find the total surface area occupied by the many small drops before.
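Carrying these steps out for the numbers in the question (a sketch; the final value is computed here, not quoted from a source):

```python
import math

n, r, sigma = 1000, 1e-7, 0.07   # number of drops, radius [m], tension [N/m]

# volume conservation: n * (4/3) pi r^3 = (4/3) pi R^3  =>  R = n^(1/3) * r
R = n ** (1 / 3) * r
assert abs(R - 1e-6) < 1e-15     # R = 10^-6 m

# change in surface area (it decreases, so energy is released)
dA = n * 4 * math.pi * r**2 - 4 * math.pi * R**2
E = sigma * dA                   # about 7.9e-12 J released
assert abs(E - 7.917e-12) < 1e-14
```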
Hence calculate the change in surface area and input the proper formula/formulae to obtain the answer. | {
"domain": "physics.stackexchange",
"id": 27126,
"tags": "homework-and-exercises, fluid-dynamics, surface-tension, fluid-statics"
} |
What are the prerequisites to learn ROS and build applications on ROS? | Question:
What are the prerequisites to learn ROS and build applications on ROS?
Originally posted by akshaykumar on ROS Answers with karma: 1 on 2014-03-27
Post score: 0
Answer:
That's a pretty broad question. A good idea for beginners certainly is to use one of the Ubuntu versions stated on the installation page and follow the install instructions there. Depending on what you know better, it would then make sense to start with either the Python or the C++ tutorials.
It generally helps to have some basic knowledge of Linux and programming. The speed of learning will depend on previous knowledge, but following the tutorials and trying things out yourself is generally the best way to make progress.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-03-27
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 17447,
"tags": "ros, beginner"
} |
scoped timer class (3rd revision) | Question: Again, here is the previous question. I have (re)revised my custom scoped timer using the valuable feedback I got from a helpful member of this community. However I feel the need to ask this question so that my new solution is reviewed.
Here is the code (Compiler Explorer):
#include <chrono>
#include <type_traits>
#include <functional>
#include <exception>
#include <utility>
// these are only required by client code
#include <thread>
#include <cstdio>
#include <fmt/core.h>
#include <fmt/chrono.h>
#define NODISCARD_WARNING_MSG "ignoring returned value of type 'ScopedTimer' might " \
"change the program's behavior since it has side effects"
template < class Duration = std::chrono::microseconds,
class Clock = std::chrono::steady_clock >
requires ( std::chrono::is_clock_v<Clock> &&
requires { std::chrono::time_point<Clock, Duration>{ }; } )
class [[ nodiscard( NODISCARD_WARNING_MSG ) ]] ScopedTimer
{
public:
using clock = Clock;
using duration = Duration;
using rep = Duration::rep;
using period = Duration::period;
using time_point = std::chrono::time_point<Clock, Duration>;
using callback_type = std::conditional_t< noexcept( Clock::now( ) ),
std::move_only_function<void ( Duration ) noexcept>,
std::move_only_function<void ( Duration,
std::exception_ptr ) noexcept> >;
[[ nodiscard( "implicit destruction of temporary object" ) ]] explicit
ScopedTimer( callback_type&& callback = nullptr ) noexcept( noexcept( now( ) ) )
: m_callback { std::move( callback ) }
{
}
[[ nodiscard( "implicit destruction of temporary object" ) ]]
ScopedTimer( ScopedTimer&& rhs ) noexcept = default;
~ScopedTimer( )
{
if ( m_callback == nullptr )
return;
if constexpr ( noexcept( now( ) ) )
{
const time_point end { now( ) };
m_callback( end - m_start );
}
else
{
try
{
const time_point end { now( ) };
m_callback( end - m_start, nullptr );
}
catch ( ... )
{
const std::exception_ptr ex_ptr { std::current_exception( ) };
m_callback( duration { }, ex_ptr );
}
}
}
[[ nodiscard ]] const time_point&
get_start( ) const& noexcept
{
return m_start;
}
[[ nodiscard ]] const time_point&
get_start( ) const&& noexcept = delete;
[[ nodiscard ]] const callback_type&
get_callback( ) const& noexcept
{
return m_callback;
}
[[ nodiscard ]] const callback_type&
get_callback( ) const&& noexcept = delete;
void
set_callback( callback_type&& callback ) & noexcept
{
m_callback = std::move( callback );
}
void
set_callback( callback_type&& callback ) && noexcept = delete;
[[ nodiscard ]] duration
elapsed_time( ) const& noexcept( noexcept( now( ) ) )
{
return now( ) - m_start;
}
[[ nodiscard ]] duration
elapsed_time( ) const&& noexcept( noexcept( now( ) ) ) = delete;
[[ nodiscard ]] time_point static
now( ) noexcept( noexcept( clock::now( ) ) )
{
return std::chrono::time_point_cast<duration>( clock::now( ) );
}
private:
time_point const m_start { now( ) };
callback_type m_callback;
};
template <class Callback>
ScopedTimer( Callback&& ) -> ScopedTimer<>;
template <class Duration>
ScopedTimer( std::move_only_function<void ( Duration ) noexcept> ) -> ScopedTimer<Duration>;
// --------------------------- client code ---------------------------
using std::chrono_literals::operator""ms;
template <class Duration, class Clock>
ScopedTimer<Duration, Clock> func( ScopedTimer<Duration, Clock> timer )
{
std::exception_ptr ex_ptr;
timer.set_callback( [ &ex_ptr ]( const auto duration ) noexcept
{
try
{
fmt::print( stderr, "\nTimer in func took {}\n", duration );
}
catch ( ... )
{
ex_ptr = std::current_exception( );
}
} );
if ( ex_ptr )
{
try
{
std::rethrow_exception( ex_ptr );
}
catch ( const std::exception& ex )
{
fmt::print( stderr, "{}\n", ex.what( ) );
}
}
std::this_thread::sleep_for( 200ms );
return timer;
}
int main( )
{
std::move_only_function<void ( std::chrono::milliseconds ) noexcept> f =
[&]( const auto duration ) noexcept
{
try
{
fmt::print( stderr, "\nTimer took {}\n", duration );
}
catch ( ... ) { }
};
ScopedTimer timer { std::move( f ) };
std::this_thread::sleep_for( 100ms );
ScopedTimer t;
auto t2 = func( std::move( timer ) );
t.set_callback( []( const auto d ) noexcept {
fmt::print( stderr, "\nTimer t took {}\n", d ); } );
std::this_thread::sleep_for( 500ms );
}
Changes:
I have added 3 [[nodiscard(message)]] attributes to the code, although I'm not sure how the one for the move constructor might be useful; no valid scenario comes to my mind.
I also added setters and getters for members m_start and m_callback and also deleted their rvalue reference overloads.
Used the class keyword instead of struct and thus made the two member variables private.
Added a second template deduction guide so that the type parameter Duration can be deduced from the duration parameter type of the std::move_only_function object that is passed to the constructor of ScopedTimer.
Any suggestions could be valuable.
Answer: I think you reached all your goals with this class, and you implemented all the suggestions I made, so I don't have much to say about it anymore. You could consider adding some more utility functions. For example, a cancel() function that removes the callback and returns the (discardable) elapsed time, or pause() and resume() functions. But the YAGNI principle applies here.
The only issue I see with this code is that it looks overengineered, and it now relies on C++23, which means that not a lot of codebases will be able to use it yet. | {
"domain": "codereview.stackexchange",
"id": 44548,
"tags": "c++, library, timer, template-meta-programming, c++23"
} |
Ring buffer for audio processing | Question: Below is my source code for the implementation of a Ring/Circular buffer for use in audio processing. I have been using this implementation for several months now but I have some doubts about the efficiency of the insertion and removal functions as well as the correctness/necessity of my usage of std::atomic<size_t>type index variables. The execution speed of the read and write functions is crucial to the overall execution speed of the DSP callbacks using these buffers so I would like to know if anyone can suggest any improvements to my design.
class Buffer {
public:
Buffer();
Buffer(size_t size);
Buffer(Buffer const& other);
Buffer& operator=(Buffer const& other);
virtual ~Buffer();
DSG::DSGSample& operator[](size_t const& index);
inline size_t const& Size()const;
protected:
DSG::DSGSample* _buffer;
size_t _size;
};
inline size_t const& DSG::Buffer::Size()const{
return _size;
}
//implementation
DSG::Buffer::Buffer():_buffer(nullptr),_size(0){}
DSG::Buffer::Buffer(size_t size):_buffer(new DSG::DSGSample[size]),_size(size){}
DSG::Buffer::Buffer(Buffer const& other)
    : _buffer(new DSG::DSGSample[other._size]), _size(other._size) {
    // allocate using other's size before copying the elements
    *this = other;
}
DSG::Buffer& DSG::Buffer::operator=(Buffer const& other){
if (_size!=other._size) {
if (_buffer!=nullptr) {
delete [] _buffer;
}
_size = other._size;
_buffer = new DSG::DSGSample[_size];
}
for (int i=0; i<_size; ++i) {
_buffer[i] = other._buffer[i];
}
return *this;
}
DSG::Buffer::~Buffer(){
if (_buffer!=nullptr) {
delete [] _buffer;
}
}
DSG::DSGSample& DSG::Buffer::operator[](size_t const& index){
#ifdef DEBUG
assert(index<_size);
#endif
return _buffer[index];
}
//ringbuffer
class RingBuffer:public DSG::Buffer {
protected:
std::atomic<size_t> _write;
std::atomic<size_t> _read;
size_t _count;
size_t MASK;
size_t write;
size_t read;
inline size_t next(size_t current);
inline size_t make_pow_2(size_t number);
public:
RingBuffer();
RingBuffer(const size_t size);
RingBuffer(RingBuffer& buffer);
RingBuffer& operator=(RingBuffer& buffer);
virtual ~RingBuffer();
inline bool Write(const DSGSample& elem);
inline bool Read(DSG::DSGSample& elem);
inline size_t const& Count()const;
inline bool Full()const;
inline bool Empty()const;
inline void Flush();
friend bool operator>>(DSG::DSGSample const& signal,DSG::RingBuffer& buffer){
return buffer.Write(signal);
}
friend bool operator<<(DSG::DSGSample& signal,DSG::RingBuffer& buffer){
return buffer.Read(signal);
}
#ifdef DEBUG
friend std::ostream& operator<<(std::ostream& os,DSG:: RingBuffer const& buffer){
if (!buffer.Empty()) {
size_t index= buffer._read;
size_t count=buffer.Count();
size_t size = buffer.Size();
for (int i=0; i<count; ++i) {
os<<index<<": "<<buffer._buffer[index]<<std::endl;
index = ((index+1)%size);
}
}return os;
}
#endif
};
inline bool DSG::RingBuffer::Full()const{
return _count==this->_size;
}
inline bool DSG::RingBuffer::Empty()const{
return _count==0;
}
inline void DSG::RingBuffer::Flush(){
_write.store(0,std::memory_order_relaxed);
_read.store(0,std::memory_order_relaxed);
_count=0;
}
inline bool DSG::RingBuffer::Write(const DSGSample& elem){
if (!Full()) {
write = _write.load(std::memory_order_acquire);
_write.store(next(write),std::memory_order_release);
this->_buffer[write] = elem;
++_count;
return true;
}else return false;
}
inline bool DSG::RingBuffer::Read(DSGSample& elem){
if (!Empty()) {
read = _read.load(std::memory_order_acquire);
_read.store(next(read),std::memory_order_release);
elem = this->_buffer[read];
--_count;
return true;
}else return false;
}
inline size_t const& DSG::RingBuffer::Count()const{
return _count;
}
//note: RingBuffer implementation will force a power of 2 size to allow use of bitwise increment.
inline size_t DSG::RingBuffer::next(size_t current){return (current+1) & MASK;}
inline size_t DSG::RingBuffer::make_pow_2(size_t number){
return pow(2, ceil(log(number)/log(2)));
}
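(An aside the review below does not raise: make_pow_2 goes through floating-point pow, ceil and log, which can misround for large sizes. An integer-only version avoids that; sketched here in Python for illustration. In C++20 the standard std::bit_ceil from <bit> does the same job.)

```python
def next_pow_2(n: int) -> int:
    """Smallest power of two >= n, using integer arithmetic only (n >= 1)."""
    p = 1
    while p < n:
        p <<= 1  # shift instead of pow/log, so no rounding issues
    return p
```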
//implementation
DSG:: RingBuffer::RingBuffer():Buffer(0),_read(0),_write(0),_count(0),MASK(0){}
DSG:: RingBuffer::RingBuffer(const size_t size):Buffer(make_pow_2(size)),_read(0),_write(0),_count(0){
MASK = this->_size-1;
}
DSG:: RingBuffer::RingBuffer(RingBuffer& buffer):Buffer(buffer){
_write.store(buffer._write.load(std::memory_order_acquire));
_read.store(buffer._read.load(std::memory_order_acquire));
_count = buffer._count;
MASK = buffer._size-1;
}
DSG:: RingBuffer& DSG:: RingBuffer::operator=(RingBuffer& buffer){
Buffer::operator=(buffer);
_write.store(buffer._write.load(std::memory_order_acquire));
_read.store(buffer._read.load(std::memory_order_acquire));
_count = buffer._count;
MASK = buffer._size-1;
return *this;
}
DSG:: RingBuffer::~RingBuffer(){Flush();}
NOTE: DSG:: is the namespace that encloses all of the code. I have removed the namespace declaration in the code I have posted to cut down on code length.
Answer: Some basic comments
As a small point to begin with, I think your code could do with some more whitespace, especially when starting a new method. The way it is currently, it's very densely packed, which makes it harder to read.
Your operator= has no exception safety, and does not check for self-assignment. The copy and swap idiom is the easiest way to fix this (if you implement a swap function). If not, always remember to allocate the new buffer before deleting the old one; if new throws, this will then leave everything in a consistent state. Further, delete is a no-op on a nullptr, so doing the check isn't needed.
DSG::Buffer& DSG::Buffer::operator=(Buffer const& other)
{
if(this != &other) {
if(_size == other._size) {
std::copy(other._buffer, other._buffer + other._size, _buffer);
}
else {
DSG::DSGSample* new_buf = new DSG::DSGSample[other._size];
std::copy(other._buffer, other._buffer + other._size, new_buf);
delete[] _buffer;
_size = other._size;
_buffer = new_buf;
}
}
return *this;
}
Same again with the destructor: the nullptr check isn't needed.
Depending on what you have available to you, using a std::vector may be a better choice, as it will free you from having to write all of this.
You use the pattern }else return false; a few times. This looks clunky. Just separate it onto a new line and return false.
if(some_condition) {
....
return true;
}
return false;
Multithreading Issues
If you were planning on making this work in a multithreaded environment, you have quite a lot of work to do. For a start:
Your _count variable will need to be atomic. Full, Empty, Write and Read are all completely non-threadsafe at the moment.
Using memory_order_relaxed triggers massive warning bells. Unless you 100% know what you are doing, aren't working on x86(-64), have profiled the hell out of your code and KNOW that it is causing a bottleneck, you should avoid using memory_order_relaxed. In this case, if you were in a multithreaded environment, it is almost certainly wrong.
To expand upon why this is wrong in this situation, the most important thing to know about memory_order_relaxed is that it does no inter-thread synchronization. If you have one thread that calls Flush(), and another thread that calls Read(), there is absolutely no guarantee that the Read() thread will see the update from Flush().
In fact, most of your methods will likely need locks. Suppose you get rid of your std::memory_order_relaxed and have it use the default std::memory_order_seq_cst. If we then have a thread that is in a call to Flush(), it may complete the store operations on _write and _read, and then be pre-empted. Another thread may then come along and call Read() or Write(), in which case you will be reading or writing the start value of the array, which is presumably not what you want. In a multithreaded environment, you want everything in Flush() to complete atomically - so this would require a mutex or a rethink of your design.
The same problems can happen with the Read() and Write() functions themselves.
The RingBuffer operator= suffers from this on both sides: threads could be trampling all over each other inside Buffer::operator=, giving you an inconsistent view of its member variables, and also potentially trampling over each other for any of the member variables in the RingBuffer itself. Even with locks, it is very, very easy to write operator= in an incorrect way. Generally, you will need to use std::lock, and acquire locks on both this AND the argument to operator=:
RingBuffer& RingBuffer::operator=(const RingBuffer& buffer)
{
// Assuming each has a mutable std::mutex called mutex
// Need to use std::lock here to make sure things are always
// locked in the same order, so we can't deadlock
std::lock(mutex, buffer.mutex);
// Probably want to add each to a std::lock_guard<> here
// to make sure they are properly unlocked on scope exit.
// Now we can safely perform the actual operator= operations...
}
The thing to take away from this is that writing lock-free data structures is hard. It is not sufficient to simply use a few atomic variables here and there in a multithreaded environment; thread pre-emption will cause all sorts of havoc if you aren't careful. Further, using std::memory_order_relaxed makes things very, very hard to reason about, and is best avoided.
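For what it's worth, the single-producer/single-consumer case (which audio callbacks often are) admits a well-known design that avoids the shared _count entirely: full/empty is derived from the two free-running indices, so each index has exactly one writer. A sketch of that idea, in Python for brevity (it deliberately ignores the C++ memory-ordering details discussed above):

```python
class SPSCRing:
    """Single-producer/single-consumer ring buffer sketch.

    Capacity must be a power of two. The producer writes only `w`, the
    consumer writes only `r`, and fullness is `w - r == capacity`, so no
    shared counter is needed. (Python ints never wrap; in C++ the unsigned
    counters wrap harmlessly.)"""

    def __init__(self, capacity: int):
        assert capacity > 0 and capacity & (capacity - 1) == 0
        self.buf = [None] * capacity
        self.mask = capacity - 1
        self.r = 0  # read counter: advanced only by the consumer
        self.w = 0  # write counter: advanced only by the producer

    def write(self, item) -> bool:
        if self.w - self.r == len(self.buf):
            return False            # full
        self.buf[self.w & self.mask] = item
        self.w += 1                 # publish only after the slot is filled
        return True

    def read(self):
        if self.r == self.w:
            return None             # empty
        item = self.buf[self.r & self.mask]
        self.r += 1                 # release the slot only after copying out
        return item
```

In C++ the same structure works with the two counters as std::atomic, a release store when publishing your own index and an acquire load when reading the other thread's index.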
Unfortunately, you may have to rethink the design of this. For a single threaded buffer it will work, but using std::atomic is unnecessary. In a multithreaded environment, this has a number of problems. | {
"domain": "codereview.stackexchange",
"id": 14031,
"tags": "c++, circular-list"
} |
Passivity and stability | Question: I read the paper "Bilateral Control of Teleoperators with Time Delay", which was written by Robert J. Anderson and Mark W. Spong. In that paper, they defined passivity of an n-port flow as
$\int^\infty_0 F^T (t) v(t) dt \geqslant 0$
$F$ means effort and $v$ means flow.
And they said if a two-port system is passive, the system is stable. They didn't say that directly, but they said a system can be unstable by being nonpassive. They said that without proof, so I want to know why a passive system is stable.
Answer: I didn't read the entire paper but only checked the statement you mentioned.
The two-port communication circuit for this system is nonpassive, and is the cause of the instability.
And later they write again
[...] it was shown that the instability [...] is due to a nonpassive communication block.
It looks to me like you mixed up sufficient and necessary conditions here.
They write in the paper, as you already mentioned in your question, that nonpassive implies unstable.
This makes passivity a necessary condition for stability in this case.
They did not say that it is a sufficient condition (which is how you understood it), i.e. that passive implies stable.
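In symbols, with $P$ standing for "the system is passive" and $S$ for "the system is stable", the paper's claim and its contrapositive are:

$$\lnot P \Rightarrow \lnot S \quad\Longleftrightarrow\quad S \Rightarrow P$$

so passivity is necessary for stability; the converse $P \Rightarrow S$ (sufficiency) is not asserted.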
Wikipedia explains it a bit more compact than me.
If P is sufficient for Q, then knowing P to be true is adequate grounds to conclude that Q is true; however, knowing P to be false does not meet a minimal need to conclude that Q is false.
I hope I could explain it somewhat understandable, don't hesitate to comment if not. | {
"domain": "engineering.stackexchange",
"id": 2053,
"tags": "control-engineering, control-theory"
} |
Is the function $f: \mathbb{N} \rightarrow \mathbb{N}$ where $f(n) = 2^n$ computable in polynomial time using TM? | Question: Assuming that the input $n$ is given as a decimal number.
I was asked to prove whether the function $f: \mathbb{N} \rightarrow \mathbb{N}$ where $f(n) = 2^n$ is computable in polynomial time using TM or not.
My guess is that the function cannot be computed in polynomial time but I can't figure out how I can prove it.
Showing that the function can be computed in polynomial time is easy because we just have to exhibit an algorithm and show it runs in polynomial time.
I was trying to think about the problem a little bit, but I feel like I don't have the right knowledge and tools when it comes to understanding the relationship between a function that can/cannot be computed in polynomial time and a language that can be decided in polynomial time, and how it is related to the problem of $P = NP$.
My TA said that I should use a certain property that the function above has in order to show that it can't be computed in polynomial time, but he didn't answer the question about the relationship between computable/non-computable functions and decidable/non-decidable languages.
Answer: Look at the size of the required output. How much time would it take you just to write the result?
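To make the output-size hint concrete (an illustrative check, not part of the original answer): with $n$ written in decimal, the input has about $\log_{10} n$ digits, while the output $2^n$ has about $n\log_{10}2 \approx 0.30\,n$ digits, i.e. exponentially many in the input length.

```python
def input_digits(n: int) -> int:
    return len(str(n))        # length of the decimal input

def output_digits(n: int) -> int:
    return len(str(2 ** n))   # length of the decimal output 2**n

# For n = 1000 the input is 4 digits long, but the output 2**1000 needs
# 302 digits (~ n * log10(2)): merely writing the answer down already
# takes time exponential in the input size.
```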
And this has nothing to do whatsoever with P vs NP, or computable or decidable languages. | {
"domain": "cs.stackexchange",
"id": 21691,
"tags": "turing-machines, computability, polynomial-time"
} |
Proving P-Isomorphism between two languages | Question: The famous Isomorphism Conjecture states that all NP-complete problems are isomorphic via polynomial-time computable and invertible bijections (reductions). Paddability is the only property that I know which can be used to show P-isomorphism between two languages (the most direct way is to present the reduction). My intuition suggests that two p-isomorphic languages are two different labelings for some language and should be related via permutations.
What other techniques (or properties) can be used to show that two languages are P-isomorphic?
Motivation: I am trying to extend an analogy from GI. If two graphs are isomorphic then they are just two different labelings of the same mathematical structure. I guess there should be a more natural and direct way than paddability.
Answer: While I don't know other general properties (similar to paddability) that imply p-isomorphism, I suppose there is another way that is perhaps more "direct and natural" in the way asked for. Namely, if you can show that two languages are really encoding the same thing. (Indeed, Berman and Hartmanis refer to p-isomorphisms as "polynomial-time recodings" in their introduction, though they don't delve further in this direction.)
Several types of examples:
Example Type 1: Two different string encodings built from "conceptually the same" language. For example, we may encode CNF-SAT instances into $\Sigma^*$ where $\Sigma = \{(,),x,0,1,\wedge,\vee, \neg\}$ in the obvious way, using the bits 0,1 to encode the indices of the variables. Or we might choose any of a number of encodings of CNF-SAT into $\{0,1\}^*$ (for example, using a prefix-free encoding of $\Sigma$). All of these different encodings should yield p-isomorphic languages, where the p-isomorphism is basically to translate from one encoding to the other.
Example Type 2: Using two different data structures to encode the same thing. For example, if you consider the Graph 3-Coloring problem where the input is adjacency matrices, this should be p-isomorphic to one where the input is adjacency lists, incidence matrices, or edge lists. (I guess this is similar to type 1 - if pressed, I suppose I would find it hard to give a technical distinction between "two different encodings of the same language into strings" and "using two different data structures to encode the same objects", but conceptually Type 2 feels a bit more specific than Type 1. Maybe it is a sub-type.)
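As a toy illustration of Type 2 (just the recoding, not a full p-isomorphism proof): converting between adjacency-matrix and adjacency-list encodings is polynomial-time computable and invertible, which is exactly the "recoding" flavour of map involved.

```python
def matrix_to_lists(mat):
    """Adjacency matrix (list of 0/1 rows) -> adjacency lists."""
    return [[j for j, bit in enumerate(row) if bit] for row in mat]

def lists_to_matrix(adj):
    """Adjacency lists -> adjacency matrix (inverse of the above)."""
    n = len(adj)
    return [[1 if j in adj[i] else 0 for j in range(n)] for i in range(n)]

# Round-tripping recovers the original encoding, as a bijection should.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```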
Example Type 3: Problems that are very easily seen to be equivalent. For example, Clique and Independent Set. If graphs are encoded by adjacency matrices, the p-isomorphism here can literally be given by swapping 0s and 1s. As another example, consider IndSet and VertexCover. Recall a graph has an independent set of size $k$ iff it has a vertex cover of size $|V|-k$. So the isomorphism here is given by mapping $(G,k)$ (an IndSet instance) to $(G,|V|-k)$. As a less trivial possible example, I would be surprised if one could not directly make the holographic equivalence between #$_2$MonRtw3CNF and #$_2$IndSet for cubic graphs from Valiant/Guo-Lu-Valiant into a p-isomorphism, though I have not worked it out carefully. | {
"domain": "cstheory.stackexchange",
"id": 5631,
"tags": "cc.complexity-theory, permutations"
} |
Do SIC-POVM elements for $d=2$ sum up to the identity? | Question: I am studying SIC-POVM in dimension two and I want to check that the elements sum up to identity.
$$\begin{aligned}
& \left|\psi_1\right\rangle=|0\rangle \\
& \left|\psi_2\right\rangle=\frac{1}{\sqrt{3}}|0\rangle+\sqrt{\frac{2}{3}}|1\rangle \\
& \left|\psi_3\right\rangle=\frac{1}{\sqrt{3}}|0\rangle+\sqrt{\frac{2}{3}} e^{i \frac{2 \pi}{3}}|1\rangle \\
& \left|\psi_4\right\rangle=\frac{1}{\sqrt{3}}|0\rangle+\sqrt{\frac{2}{3}} e^{i \frac{4 \pi}{3}}|1\rangle
\end{aligned}$$
I calculated
$$
|\psi_1\rangle\langle\psi_1|+|\psi_2\rangle\langle\psi_2|+|\psi_3\rangle\langle\psi_3|+|\psi_4\rangle\langle\psi_4|
$$
but the result was 2 times identity.
Do I need to divide them all by $\sqrt2$? Why did this happen?
Answer: TL;DR: You need to divide each projector by two. This is equivalent to dividing each state by $\sqrt2$.
Your calculation result is correct and can easily be corroborated by computing the trace
$$
\sum_{k=1}^4\mathrm{tr}(|\psi_k\rangle\langle\psi_k|)=\sum_{k=1}^4\langle\psi_k|\psi_k\rangle=4
$$
which is the same as $\mathrm{tr}(2I)=4$.
In order for the four projectors to form a POVM they should sum to identity, so you need to rescale them. Since you want a SIC-POVM you should rescale them equally which means dividing each projector by two.
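A quick numerical check of this rescaling (Python, for illustration):

```python
import cmath

# The four qubit states from the question, as (amplitude of |0>, amplitude of |1>).
s = 1 / 3 ** 0.5            # 1/sqrt(3)
t = (2 / 3) ** 0.5          # sqrt(2/3)
w = cmath.exp(2j * cmath.pi / 3)
states = [(1, 0), (s, t), (s, t * w), (s, t * w ** 2)]

# Sum of the rescaled projectors (1/2)|psi_k><psi_k|; should be the 2x2 identity.
total = [[0j, 0j], [0j, 0j]]
for v in states:
    for i in range(2):
        for j in range(2):
            total[i][j] += v[i] * v[j].conjugate() / 2
```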
Thus, your SIC-POVM will consist of $\frac12|\psi_k\rangle\langle\psi_k|$ for $k=1,2,3,4$. | {
"domain": "quantumcomputing.stackexchange",
"id": 4438,
"tags": "measurement, povm"
} |
Get real time geojson on Javascript | Question: I frequently need to obtain some info about coordinates, because I have dynamic elements on a map (in Openlayers). So I use SetInterval and XMLHttpRequest Object to do it.
window.onload = function () {
var requestURL = "data.geojson";
var request = new XMLHttpRequest();
request.onreadystatechange = function () {
if (request.readyState == 4) {
console.clear();
console.log(request.response);
}
}
var prueba = setInterval(function () {
request.open('GET', requestURL, true);
request.setRequestHeader('authenticated_user', 'terrain');
request.send();
}, 2000);
}
Is It the correct way?
Answer: First of all, readyState 4 simply means XHR is done. The response, however, can still either be a success (2xx) or a failure (4xx, 5xx). In order to distinguish a success from an error, you should also check status.
Next would be the reuse of the XHR instance. Although it's perfectly valid (open reinitializes the instance), you'll have to be absolutely sure subsequent calls aren't affected by the previous calls. It's probably much safer to just create a new instance of XHR for each request.
Then instead of window.onload, consider using addEventListener. The issue is that there might be other scripts assigning a callback to window.onload, overwriting your handler. On the other hand, multiple calls to addEventListener will just append handlers.
A simpler way to do it, assuming your target audience doesn't use legacy browsers, is fetch. It's essentially jQuery's $.ajax but natively implemented.
window.addEventListener('load', event => {
setInterval(() => {
fetch('data.geojson', {
method: 'GET',
headers: { authenticated_user: 'terrain' }
}).then(response => response.text()).then(data => {
console.clear();
console.log(data);
})
}, 2000)
}) | {
"domain": "codereview.stackexchange",
"id": 30673,
"tags": "javascript, json, ajax"
} |
Reactant energy distribution and spontaneity of reactions | Question: I wonder about the implications of substances having an energy distribution around a mean, as opposed to them all having the same energy level, on the spontaneity of reactions.
2 somewhat separate questions:
Can a reaction with a high activation energy, but which in the end has a negative Gibbs free energy change at certain temperature, occur spontaneously even though at this temperature the energy level of the reactants is lower than the activation energy? If I understand correctly, the answer would be yes because the energy of the reactants is just an average of a broader distribution, and so some small fraction of the reactants would attain the $E_a$ even if on average $E_a > E_{\text{reactants}}$?
Wouldn't it also be possible that some non-spontaneous reactions (with a positive change in Gibbs free energy) occur spontaneously? I mean we know that if we invest $\Delta G$ energy, we will force the non-spontaneous reaction to occur. But what if some small fraction of the reactants get to the energy level they would have had if $\Delta G$ energy was invested in them (without it being actually invested). Wouldn't that mean that for that fraction of the reactants, in the high end of the energy distribution around some average, the reaction would occur (while not occurring for the rest and vast majority of the reactants who fail by themselves to attain the same energy level they would have had if ΔG energy were invested in them)? Is there a flaw in this understanding?
I guess the question here is whether the idea that $\Delta G > 0$ reactions are non-spontaneous is a statement of probability that says that such reactions are just very unlikely, or is it a hard and fast rule like the conservation of energy?
Answer: Yes; you are on the right path in your second paragraph, but distinguish between activation energy and free energy as they are not connected. I'm sorry but I think that your third paragraph is not all correct as it stands. If the free energy change is positive the reaction is not spontaneous by definition. Perhaps you mean, if the overall enthalpy change is positive can the free energy now be negative if the overall entropy change is also positive? The answer is yes.
The thermodynamic quantities only give us information about initial and final states at equilibrium. The activation energy is obtained from kinetic measurements, that is, from rate constants (usually measured as a function of temperature), not from thermodynamic quantities. In the vast majority of reactions the activation energy is greater than the thermal energy (approximately $k_BT$, where $k_B$ is Boltzmann's constant). These reactions are generally slow because, by the decreasing exponential nature of the Boltzmann distribution with energy, not many reactants have enough energy to react in any given time period, say 1 sec. (This is lucky, otherwise we would all have reacted eons ago!) In some special types of unimolecular reactions with zero or very small activation energy, e.g. electron transfer, bond dissociation & isomerization, the reaction will clearly be very fast, possibly with rate constants $>10^{12}\,\mathrm{s^{-1}}$.
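To put rough numbers on "not many reactants have enough energy" (illustrative values, not from the question), the molar Boltzmann factor $e^{-E_a/RT}$ falls off very fast with the barrier height:

```python
import math

R = 8.314  # gas constant, J/(mol K); molar form of the Boltzmann factor

def high_energy_fraction(ea_kj_mol: float, temp_k: float) -> float:
    """Rough fraction of molecules with energy >= Ea: exp(-Ea / RT)."""
    return math.exp(-ea_kj_mol * 1000 / (R * temp_k))

# A 50 kJ/mol barrier at 298 K leaves only ~2e-9 of molecules able to react;
# warming to 350 K raises that fraction about twenty-fold.
```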
In the case where the products are of higher energy than reactants, reaction still occurs but the equilibrium lies on the side of the reactants. Again by the Boltzmann distribution argument products react faster to reactants than the other way round. | {
"domain": "chemistry.stackexchange",
"id": 5860,
"tags": "physical-chemistry, free-energy"
} |
Binding energy or binding energy per nucleon: which remains constant? | Question: Which one is constant: the binding energy or the binding energy per nucleon? I got lots of answers but I can't understand them. Is neither of them constant?
Answer: 1) There are four vectors and a system of particles has an invariant mass according to the addition of its four vectors.
2) each nucleus has a rest mass, M, when it is not moving.
3) each nucleus has a fixed number of protons and neutrons, protons are Z in the table, and neutrons are A-Z
4) Each proton and each neutron has a fixed mass in its center of mass
From the above, multiplying the number of protons by the mass of the proton, multiplying the number of neutrons by the mass of the neutron, and adding the two quantities gives a mass M'. This would be the mass if all the nucleons were at rest. It was observed that M' is bigger than M, i.e. the protons and neutrons bound into a nucleus have less mass than if they were not bound in a nucleus. The difference of the two values, M'-M, is the total binding energy of the nucleus. Divided by [Z+(A-Z)] (the number of protons and neutrons) one gets the binding energy per nucleon. So both are constant attributes of a specific nucleus; one is obtained from the other by a simple division by a constant. | {
"domain": "physics.stackexchange",
"id": 21785,
"tags": "nuclear-physics, binding-energy"
} |
For a cic filter, why RMS value at -3db is not twice as large as its -6db value? | Question: For a given input sinewave with amplitude $A = 2000$ and frequency $f = 53405\,\text{Hz}$ (at $-3\,\text{dB})$,
From my understanding, the CIC filter gain should be $5^3=125$, and $-3\,\text{dB}$ magnitude response is at the frequency of about $f = 53405\,\text{Hz}$, so at least I should get a result around $2000*125/2 = 125000$
So I can see the RMS value is $\text{RMS}_{3dB} = 125005$, and amplitude $A_{3dB} = \sqrt{2}\,\text{RMS}_{3dB} = 176803$. Yet $\text{RMS}_{6dB} = 88578$. Why is $\text{RMS}_{3dB}$ not twice as large as $\text{RMS}_{6dB}$?
Below is my CIC code.
clc;
clear;
close all;
%% generate the input signal
fsDataIn = 1e6;
fs_clk = 32e6;
freqInput_6db = 74.463e3; % frequency of input signal -6db
freqInput_3db = 53.405e3; % frequency of input signal -3db
deltaT = 1/fsDataIn; % time interval between two samples
amplitude = 2000;
Length = 100;
t = 0:deltaT:Length;
sine_dataIn_6db=amplitude*sin(2*pi*freqInput_6db*t);
sine_dataIn_6db = sine_dataIn_6db(1:10000)';
sine_dataIn_3db=amplitude*sin(2*pi*freqInput_3db*t);
sine_dataIn_3db = sine_dataIn_3db(1:10000)';
%% cic parameters
R = 5; % decimator factor
M = 1; % differential delay
N = 3; % number of stage
CICDecim = dsp.CICDecimator(R, M, N);
[dataOut_6db] = CICDecim(sine_dataIn_6db);
[dataOut_3db] = CICDecim(sine_dataIn_3db);
cic_out = fvtool(CICDecim);
figure('name','cic');
subplot(2,1,1);
plot(sine_dataIn_6db');
xlabel('t');
ylabel('input sinewave');
title('input sinewave in time domain');
subplot(2,1,2);
plot(dataOut_6db);
xlabel('sample');
ylabel('ideal cic output data');
title('ideal cic output in time domain');
rms_3db = rms(dataOut_3db)
rms_6db = rms(dataOut_6db)
amplitude_3db = (max(dataOut_3db)-min(dataOut_3db))/2
amplitude_6db = (max(dataOut_6db)-min(dataOut_6db))/2
Answer: $3\,\text{dB}$ down is half power, and $6\,\text{dB}$ down is half amplitude.
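The dB arithmetic behind this, spelled out (illustrative):

```python
import math

# Decibel-to-ratio conversions: amplitude ratio = 10**(dB/20), power = 10**(dB/10).
amp_minus_3db = 10 ** (-3 / 20)   # ~0.708 ~ 1/sqrt(2): amplitude drops only ~29%
pow_minus_3db = 10 ** (-3 / 10)   # ~0.501: power is halved at -3 dB
amp_minus_6db = 10 ** (-6 / 20)   # ~0.501: amplitude is halved at -6 dB

# So the expected -3 dB output amplitude in the question is 2000 * 125 / sqrt(2)
# (about 176777, close to the measured 176803), not 2000 * 125 / 2.
expected_amp_3db = 2000 * 125 / math.sqrt(2)
```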
The average power value matches the power calculated using RMS voltage. | {
"domain": "dsp.stackexchange",
"id": 11509,
"tags": "matlab, digital-filters"
} |
Does an accelerated spring change anything? | Question:
A block of mass M is attached with a spring of spring constant k. The whole
arrangement is placed on a vehicle as shown in the figure. If the vehicle starts moving towards the right with a constant acceleration a (there is no friction anywhere and the vehicle is long), then find the maximum elongation of the spring.
So what I did was
Let the elongation be $x$
Since the total acceleration of the block is $a$ so the total force on it should be $ma$
So $ma=kx$
Maximum elongation $x=ma/k$
But the actual answer is double of it. I can't understand why. It's probably because the spring is also accelerated, but can anyone explain how this works?
Answer: This is equivalent to having the mass hanging in a gravitational field $a$, and releasing it from $x=0$.
The equilibrium position has $ma = kx_{eq}$; however, when it gets to that position, it is moving. It has converted gravitational potential energy:
$$ U = max_{eq} = kx_{eq}^2 $$
into spring potential energy $\frac 1 2 kx_{eq}^2$, and has the remainder as kinetic energy. Hopefully you can see the factor of two.
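Working the factor of two out explicitly: released from the unstretched position, the block stops when the pseudo-force work $m a x$ equals the spring energy $\frac12 k x^2$, i.e. at $x_{max} = 2ma/k$, twice the equilibrium stretch $x_{eq} = ma/k$. A numeric sanity check (values are arbitrary, for illustration only):

```python
m, a, k = 2.0, 3.0, 50.0   # arbitrary illustrative values

x_eq = m * a / k           # stretch where the net force vanishes
x_max = 2 * m * a / k      # stretch where the velocity returns to zero

# Energy balance at x_max: pseudo-force work equals stored spring energy,
# m*a*x_max = (1/2)*k*x_max**2, which is solved by x_max = 2*m*a/k.
work_done = m * a * x_max
spring_energy = 0.5 * k * x_max ** 2
```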
Note that the Virial Theorem is in effect. | {
"domain": "physics.stackexchange",
"id": 81063,
"tags": "homework-and-exercises, newtonian-mechanics, energy-conservation, work"
} |
A simple decision problem whose decidability is not known | Question: I am preparing for a talk aimed at undergraduate math majors, and as part of it, I am considering discussing the concept of decidability. I want to give an example of a problem that we do not currently know to be decidable or undecidable. There are many such problems, but none seem to stand out as nice examples so far.
What is a simple-to-describe decision problem whose decidability is open?
Answer: The Matrix Mortality Problem for 2x2 matrices. I.e., given a finite list of 2x2 integer matrices M1,...,Mk, can the Mi's be multiplied in any order (with arbitrarily many repetitions) to produce the all-0 matrix?
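One direction is easy to see: mortality is semi-decidable, since one can enumerate products in order of increasing length and stop if a zero product turns up; the hard part is knowing when to give up. A brute-force sketch of the bounded search (illustrative only; the frontier grows exponentially with the length bound):

```python
def mat_mul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mortal_within(mats, max_len):
    """True if some product of at most max_len factors is the zero matrix."""
    zero = [[0, 0], [0, 0]]
    frontier = list(mats)                 # all products of length 1
    for _ in range(max_len):
        if any(p == zero for p in frontier):
            return True
        # Extend every product by one more factor (exponential blow-up).
        frontier = [mat_mul(p, m) for p in frontier for m in mats]
    return False
```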
(The 3x3 case is known to be undecidable. The 1x1 case, of course, is decidable.) | {
"domain": "cstheory.stackexchange",
"id": 3450,
"tags": "computability, decidability"
} |
Formatting timestamps during autumn DST switchover | Question: I've written code that 99.x% of the time does the same thing as
def format(timestamp: datetime) -> str:
return timestamp.strftime("%X")
but follows the German "Sommerzeitverordnung" ("daylight savings time decree") during the hour before and after the autumn switchover.
Die Stunde von 2 Uhr bis 3 Uhr erscheint dabei zweimal. Die erste Stunde (von 2 Uhr bis 3
Uhr mitteleuropäischer Sommerzeit) wird mit 2A und die zweite Stunde (von 2 Uhr bis 3 Uhr mitteleuropäischer
Zeit) mit 2B bezeichnet.
(Sommerzeitverordnung, §2, Abs. 2, Satz 3-4)
Translation (by Google Translate with minor fixes by me)
The hour from 2 a.m. to 3 a.m. appears twice. The first hour (from 2 a.m. to 3 a.m.
Central European summer time) becomes 2A and the second hour (from 2 a.m. to 3 a.m. Central European
time) is denoted by 2B.
I've written the following pytests
from datetime import datetime
from zoneinfo import ZoneInfo
from timezone_formatting import format
def test_timestamp_without_marker() -> None:
assert "01:59:59" == format(datetime(
year=2022, month=10, day=30, hour=1, minute=59, second=59,
tzinfo=ZoneInfo('Europe/Berlin')))
assert "03:00:00" == format(datetime(
year=2022, month=10, day=30, hour=3, minute=0, second=0,
tzinfo=ZoneInfo('Europe/Berlin')))
def test_timestamp_during_last_hour_before_switchover() -> None:
assert "02A:00:00" == format(datetime(
year=2022, month=10, day=30, hour=2, minute=0, second=0,
fold=0,
tzinfo=ZoneInfo('Europe/Berlin')))
assert "02A:59:59" == format(datetime(
year=2022, month=10, day=30, hour=2, minute=59, second=59,
fold=0,
tzinfo=ZoneInfo('Europe/Berlin')))
def test_timestamp_during_first_hour_after_switchover() -> None:
assert "02B:00:00" == format(datetime(
year=2022, month=10, day=30, hour=2, minute=0, second=0,
fold=1,
tzinfo=ZoneInfo('Europe/Berlin')))
assert "02B:59:59" == format(datetime(
year=2022, month=10, day=30, hour=2, minute=59, second=59,
fold=1,
tzinfo=ZoneInfo('Europe/Berlin')))
and the following code, which passes the tests:
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo
utc = timezone.utc
germany = ZoneInfo("Europe/Berlin")
offset = timedelta(hours=1)
def switchover_letter(timestamp: datetime) -> str:
"""
Returns "A" if timestamp is in the last hour of daylight savings time.
Returns "B" if timestamp is in the first hour of standard time.
Else returns an empty string
"""
utc_timestamp = timestamp.astimezone(utc)
if (utc_timestamp + offset).astimezone(germany).hour == timestamp.hour:
return "A"
if (utc_timestamp - offset).astimezone(germany).hour == timestamp.hour:
return "B"
return ""
def format(timestamp: datetime) -> str:
return (
timestamp.strftime("%H")
+ switchover_letter(timestamp)
+ timestamp.strftime(":%M:%S")
)
But, there are two things that kind of raise red flags for me:
Calling strftime twice on the same timestamp and inserting my own formatting in between seems wrong. Is there a format specifier for strftime I have missed, or is there something else I could do to make this less icky?
Manually doing time manipulation in switchover_letter. I'm reasonably sure that my code is correct, but is it really? Is there a way to do this less manually? Note that doing this without the roundtrip to UTC doesn't work, because "timezone aware timestamp + offset" uses wall time.
Answer: Overall this is well-written.
Don't call a function format; that shadows a built-in.
Calling strftime twice on the same timestamp and inserting my own formatting in between seems wrong. Is there a format specifier for strftime I have missed, or is there something else I could do to make this less icky?
Your instincts are correct; prefer instead:
def daylight_savings_format(timestamp: datetime) -> str:
return '{0:%H}{1}:{0:%M:%S}'.format(
timestamp, switchover_letter(timestamp)
)
Or do two passes where you first format a datetime-field string with your middle character and then pass that to strftime, as in
def daylight_savings_format(timestamp: datetime) -> str:
fmt = f'%H{switchover_letter(timestamp)}:%M:%S'
return timestamp.strftime(fmt)
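A quick check of the premise behind the two-pass variant: strftime copies characters that are not part of a %-sequence through verbatim, so the injected marker letter survives formatting (illustrative, independent of time zones; "B" stands in for the result of switchover_letter):

```python
from datetime import datetime

# strftime treats plain characters as literals, so building the format
# string first and formatting second produces "02B:30:05"-style output.
stamp = datetime(2022, 10, 30, 2, 30, 5)
fmt = "%H" + "B" + ":%M:%S"   # "B" stands in for switchover_letter(...)
formatted = stamp.strftime(fmt)
```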
I have a weak preference for the former. | {
"domain": "codereview.stackexchange",
"id": 43579,
"tags": "python, datetime"
} |
How to extract the maxima and minima of a noisy period input signal | Question: I have a (rough) known period sine wave coming in, held in an array, with superimposed noise. I need to find and count the maxima and minima (in C). Any suggestions or algorithms? The signal has already been received and buffered, so not real time.
Answer: If you have the entire signal, you can then run a Lowess smoothing. This algorithm is statistically robust and provides locally acceptable smoothing. Outliers will be tolerated as well. After smoothing your signal in that way, you might apply a derivative based (or even derivative free) peak detection in order to obtain the signal peaks. These will be your local minima/maxima.
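A minimal sketch of that smooth-then-detect pipeline (using a plain moving average rather than Lowess, and a windowed-maximum test rather than a derivative; illustrative only, on a synthetic 5 Hz sine with additive noise):

```python
import math
import random

def moving_average(x, w):
    """Simple (non-robust) stand-in for Lowess: centered running mean."""
    h = w // 2
    return [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

def window_maxima(x, k):
    """Indices that are the maximum of their own +/- k sample neighbourhood."""
    return [i for i in range(k, len(x) - k) if x[i] == max(x[i - k:i + k + 1])]

random.seed(0)                      # deterministic synthetic test signal
fs, f = 1000, 5                     # 1 kHz sampling, 5 Hz sine, 1 s of data
sig = [math.sin(2 * math.pi * f * i / fs) + random.gauss(0, 0.05)
       for i in range(fs)]
peaks = window_maxima(moving_average(sig, 15), 40)
```

Minima fall out the same way by negating the signal first. A real implementation would use Lowess (or Savitzky-Golay) and a prominence threshold, as the linked tools do.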
You could use MATLAB or R and port the smoothing algorithms to C; they are pretty straightforward and open source:
http://www.mathworks.com/help/curvefit/smooth.html
And here are your options in C:
1) https://code.google.com/p/variationtoolkit/ for Lowess/Loess.
2) For peak detection, there are many tools available in C, one of them being:
https://github.com/xuphys/peakdetect | {
"domain": "dsp.stackexchange",
"id": 1445,
"tags": "signal-analysis, noise"
} |
Average Time Complexity of Two Nested Loops | Question: Recently I have had this question in one of my interviews.
You have 1 million sorted integers and a value $x$; compare each pair in this array and, if the sum of the pair is less than or equal to $x$, increment the pair counter.
I have implemented this solution in C++, but I will write pseudocode here (I am sorry, I am not very good at pseudocode):
Initialise ARR_SIZE to 1000000
Initialise index_i to 0
Initialise pairCount to 0
Initialise x to 54321
while index_i is less than ARR_SIZE
    Initialise index_j to index_i+1
    while index_j is less than ARR_SIZE
        if array[index_i]+array[index_j] is less or equal to x
            Increment pairCount
        increment index_j
    end of while
    increment index_i
end of while
At first I said it is $O(n \log n)$, but then with the hint that the second loop's average complexity is $O(n/2)$, overall I said it would be $O(n\cdot n/2)$, but in Big $O$ notation it would be $O(n)$ because $n/2$ is a constant (although I was not too sure). So what is the average complexity of this overall algorithm?
PS: I know that I could have decreased the complexity by adding an extra else, index_j = ARR_SIZE, which would be $O(N)$ complexity, but I couldn't think of it during the interview.
Answer: How to solve it in O(N)
Let's take an example sequence of numbers:
1 3 4 6 7 9
Assume we want the number of pairs (A,B) where A + B <= 12.
First, let's compare the 1st (low element) and the Nth element (high element). 9 + 1 <= 12. So that's one pair. But, as our list is sorted, 9 is the biggest number, if (9, 1) <= 12, for any other number X in the list, (X, 1) <= 12! Meaning, we can just add 6-1 (high index - low index) to the count, we don't need to test those numbers separately. Those are pairs (9,1), (7,1) ... (3,1).
Now we increase our low index to 2, we continue to 3. 9 + 3 <= 12 still, so we add 6-2 to the count. We already added (3,1) in the previous step so this is fine. This adds pairs (9,3), (7,3), (6,3), (4,3).
Again we increase the low index. 9 + 4 > 12. When this happens, we decrease the high index and add nothing to the count.
Now low index is 3 and high index is N-1. 7 + 4 <= 12. Adding 5-3... And so on.
If it is unclear why I add high index - low index every time: take for example the 7 + 4 step. I know that, because I am at high = 7, I had to skip 9 because it was too large. So I don't want to add the pair involving 9. I also know that because I am at low = 4, I already counted the pairs involving 1 and 3 in the previous steps. So I need to add pairs (7,4) and (6,4) in this step.
int count = 0;
for (int low = 0, high = list.Count - 1; high >= 1 && low != high; )
{
    if (list[high] + list[low] <= x)
    {
        count += high - low;
        low++;
    }
    else
    {
        high--;
    }
}
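The same two-pointer idea as a self-contained Python sketch (variable names are mine), cross-checked against the question's naive double loop:

```python
def count_pairs_fast(sorted_list, x):
    """Count pairs (i, j), i < j, with sorted_list[i] + sorted_list[j] <= x, in O(N)."""
    count, low, high = 0, 0, len(sorted_list) - 1
    while low < high:
        if sorted_list[low] + sorted_list[high] <= x:
            # Everything between low and high also pairs with sorted_list[low].
            count += high - low
            low += 1
        else:
            high -= 1
    return count

def count_pairs_naive(sorted_list, x):
    """The O(N^2) double loop from the question, kept for cross-checking."""
    n = len(sorted_list)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if sorted_list[i] + sorted_list[j] <= x)
```

On the worked example above, `count_pairs_fast([1, 3, 4, 6, 7, 9], 12)` and the naive loop both give 11 — the 5 + 4 + 2 pairs counted in the three adding steps.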
Regarding constants and complexity
Judging by your comment, you are confused about what is ignored and why. First, n/2 is not a constant because it depends on n. As n grows, n/2 grows, so it is clearly not constant. The constant that is ignored in n*n/2 is 1/2, and what remains is n*n.
So the complexity is N^2. The actual number of inner loop executions is N^2/2, but this is still O(N^2). The fact you brought up in your comment that the inner loop will run 50 times when 10^2 would indicate 100 times is irrelevant. Here's why:
Constants are not meaningful unless they are extremely large (a 2^32 constant because your algorithm tests every 32 bit integer) or you calculate the average case cycle count on a reference architecture.
Otherwise, actual speed depends on the language used, on how many instructions each operation actually performs, prediction, caching, pipelining, etc and it is difficult to compare the constants of 2 different algorithms, just outright counting syntactic constructs is very misguided.
2X operations may run faster than X operations if what is being done, how and in which order is different. So until you dig a bit deeper and look into the things I mentioned above, counting operations in a high level language is pure wishful thinking. It is a sign of questionable CS teaching, in my opinion, because it teaches a nonsense model of code execution instead of using a valid one on a simple reference architecture (like Knuth does), if you insist on counting operations.
Consider these two pieces of code:
for (int j = 0; j < N - 1; j++)
    for (int i = j + 1; i < N; i++)
        r += A[i][j];
And:
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        r += A[i][j];
The first example has N^2 / 2 inner loop repetitions. The second example has N^2 inner loop repetitions. The inner loop body is the same. But the second one is actually much faster on a standard computer (in fact, a smart compiler with certain settings might rewrite your code if you write the first one - so if you test this, turn off optimization)! This is because the second one has much better memory access locality - it accesses the memory locations in order, while the first one jumps around. This causes a much bigger difference in speed than running the loop twice as many times does. And this is just one of many things which affect run speed and are not immediately visible in the code.
Second example, two pieces of code:
for (int i = 0; i < N - 1; i++)
{
    for (int j = i + 1; j < N; j++)
    {
        int a = A[i][j] * 3;
        a += 5;
        r += a;
    }
}
And:
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        r += A[i][j];
Can you tell me which one has a bigger "constant"? Again, the first inner loop runs 1/2 as many times as the second one. But the first one has 3 lines of code inside the inner loop, while the second one has just one. On my machine, the first one is faster. By what factor? I have no idea (I could measure it, but I can't tell you by looking at the code). If you decide on a constant A for the first and B for the second program by looking at the code, A/B will probably not be anything like the ratio of actual execution times, making the constants meaningless.
So the 1/2 constant is meaningless. Whether a program runs twice faster than some other program depends on so many things that it is not productive to account for small constants on such a superficial level.
By the way, you have a bug in your program. The outer loop condition needs to be less than ARR_SIZE - 1 or you are setting index_j to ARR_SIZE in the last outer iteration, and as your indices are 0-based you are reading from beyond the end of your buffer. | {
"domain": "cs.stackexchange",
"id": 1609,
"tags": "algorithms, algorithm-analysis, runtime-analysis"
} |
How to choose origin in rotational problems to calculate torque? | Question: We know that $\text{Torque} = r \times F$ where $r$ is the position vector. But the position vector depends on the choice of the coordinate system and in turn on the choice of origin. So, where should we take the origin?
Also, do torque, angular velocity and angular acceleration point out of the plane of rotation for 2D objects, because otherwise they wouldn't have a constant direction?
Many sources (including my textbook) seem to say that the origin should lie on the axis and that it wouldn't make a difference where it is on the axis. But I don't get why it shouldn't, since the position vector would be different from different origins, and so the torque, as far as I can see, might come out different.
Answer: In order to calculate the torque, $\vec\tau=\vec r\times\vec F$, one can choose any origin $O$. The torque then is said to be calculated with respect to $O$ and it is dependent of this choice. In particular, if the sum of all external forces on the system vanishes then the resultant torque is independent of $O$.
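A quick check of that last claim: shifting the origin by a constant vector $\vec a$ replaces each position vector $\vec r_i$ with $\vec r_i-\vec a$, so

$$\vec\tau\,'=\sum_i(\vec r_i-\vec a)\times\vec F_i=\sum_i\vec r_i\times\vec F_i-\vec a\times\sum_i\vec F_i=\vec\tau-\vec a\times\sum_i\vec F_i,$$

and the difference $-\vec a\times\sum_i\vec F_i$ vanishes exactly when the net external force $\sum_i\vec F_i$ is zero.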
For the second question, note that when a particle is rotating in a fixed plane, say $xy$ plane, and the forces acting on it are also in this plane then the torque is in the $z$ direction because a vector product with force must be orthogonal to it. Similarly the expressions for the angular velocity and angular acceleration also satisfy vector product relations, $\vec v=\vec\omega\times\vec r$ and $\vec a=\vec\alpha\times\vec r+\vec\omega\times\vec v$. As you can see, angular velocity has to be perpendicular to the velocity and angular acceleration has to be perpendicular to the acceleration. | {
"domain": "physics.stackexchange",
"id": 59113,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, torque, rotational-kinematics"
} |
Difficulty in understanding these summations to analyze time complexity | Question: I want to know whether this link's calculation of the complexity of shell sort is correct: https://stackabuse.com/shell-sort-in-java/
Here's the shell sort algorithm:
void shellSort(int array[], int n){
    for (int gap = n/2; gap > 0; gap /= 2){
        for (int i = gap; i < n; i += 1) {
            int temp = array[i];
            int j;
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap){
                array[j] = array[j - gap];
            }
            array[j] = temp;
        }
    }
}
Let me attach the site author's calculation using summations:
Where did he get the $O(n \log n)$ from? And why $O(n^2)$?
Answer: Shellsort is notoriously difficult to analyze (surprising for such a simple algorithm). In Knuth's "Sorting and Searching" it gets an inordinate scrutiny, papers on its performance get published regularly.
Then again, the Busy Beaver function (At most how many ones does an $n$ state Turing machine write starting on an empty tape before halting?) is known only up to $n = 4$, the current record holder with 6 states writes $3.515×10^{18267}$ ones before halting... | {
"domain": "cs.stackexchange",
"id": 15619,
"tags": "algorithms, time-complexity, loops, java"
} |
Order a string of colors by the last element which is an integer | Question: I have solved the coding problem written here.
In summary:
Input: str = "red2 blue5 black4 green1 gold3"
Output: "green red gold black blue"
Here is my code:
def order_color(str1):
    # Split string using the whitespaces and send to a list
    # Sort elements within the list by the number
    str2 = sorted(str1.split(), key=lambda color: color[-1])
    # Convert the list to a string and erase the number
    str3 = ' '.join([str(colo[: -1]) for colo in str2])
    return str3
Is there a way to make the code more elegant and "pythonic" (e.g., to impress a MAANG interviewer)?
Thanks
Answer: Comments
Don't use comments to describe what your code does. Any developer can see that by reading the code itself. Comments are there to clarify why your code does certain things if that is not immediately clear.
Docstrings
In order to document your code, use docstrings on modules, classes and functions.
Type hints
Using type hints can simplify determining the parameter and return types of functions.
Use existing library code
It might not be a big deal in your case, but lambda color: color[-1] can be substituted by using operator.itemgetter.
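For instance, the two spellings of the key function are interchangeable:

```python
from operator import itemgetter

# itemgetter(-1) builds a callable returning the last element of its argument,
# equivalent to lambda color: color[-1] when used as a sort key.
last_char = itemgetter(-1)
```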
Useless casting
The casting to a string at str(colo[: -1]) is unnecessary, since slicing a str will return a str again.
Useless comprehension
You don't need to create a list in a comprehension at ' '.join([str(colo[: -1]) for colo in str2]). str.join will happily take a generator expression.
Useless local variable
Assigning an expression to str3 and then immediately returning it is useless. Just return the expression instead.
Naming
I hope we can all agree that str{1,2,3} are suboptimal names. Name variables for what they represent.
Putting it together:
from operator import itemgetter
def order_colors(colors: str) -> str:
    """Order a whitespace-separated string of colors
    ending with a single-digit index by that index, and
    return a string of the ordered colors without that index.
    """
    return ' '.join(
        color[:-1] for color in
        sorted(colors.split(), key=itemgetter(-1))
    ) | {
"domain": "codereview.stackexchange",
"id": 44097,
"tags": "python, python-3.x"
} |
If the universe is full of dark matter, why is it only 2.73 K cold? | Question: Hello people!
I am just a physics layman, but I recently watched a documentary about the universe and it was told that
the universe is full of dark matter and energy and
the universe is empty, so that the average temperature measured by the cosmic microwave background is about 2.73 Kelvin.
They also said, that temperature is (also) depending on gravity or mass respectively. That's why the sun is so hot, since there is huge mass concentrated, and the empty space is cold, since there is no mass. Or is it?
So now my question(s): IF there is almost only dark matter and energy in the universe with HUGE mass effects which explains interstellar movement etc., why is the universe only 2.73 K cold? Shouldn't it be warmer? Isn't the 2.73 Kelvin the temperature you get, if you calculate the temperature without dark matter and energy?
Answer: The temperature 2.73 K is not calculated, it is measured. The cosmic microwave background (CMB) has the properties of blackbody radiation at the temperature 2.73 K. It is based just on the measurement of the CMB; no calculation of dark matter or dark energy is involved.
Temperature does not simply depend on mass or gravity. Temperature is a quantity which in a lot of situations is difficult to define. For example a black hole can be extremely massive and have huge gravity, but the black body radiation from such a black hole is extremely cold. If there is nothing falling into the black hole, there won't be any other radiation.
The dark matter particles (if dark matter is made up of particles) do not interact electromagnetically or strongly (and maybe not even weakly), so they play virtually no role in our discussions about what we call "temperature of the universe".
We know basically nothing about dark energy, so it also is not included in our definition of temperature. | {
"domain": "physics.stackexchange",
"id": 10719,
"tags": "temperature, universe, dark-matter, dark-energy"
} |
dynamic_reconfigure is not working with Diamondback | Question:
Hello,
I'm trying to put dynamic_reconfigure package to my node.
But when I create a new node following this tutorial, http://www.ros.org/wiki/dynamic_reconfigure/Tutorials/SettingUpDynamicReconfigureForANode , it doesn't publish any topic.
The example in the dynamic_reconfigure package (testserver) also doesn't work. It just stops, and then I have to press "Ctrl+" to quit the process.
Is there anyone who has encountered this problem?
Cheers.
Originally posted by enddl22 on ROS Answers with karma: 177 on 2011-05-02
Post score: 1
Answer:
It works on a different machine which has the exact same versions of ROS and Linux.
ROS is Diamondback, version 1.4.6
Linux is Ubuntu 10.04, Kernel version 2.6.35-28
Is there anyone who knows about this problem?
Cheers.
Originally posted by enddl22 with karma: 177 on 2011-05-03
This answer was ACCEPTED on the original site
Post score: -1
Original comments
Comment by tfoote on 2011-05-03:
PS it's better to update your question than to answer it, unless you consider it closed.
Comment by tfoote on 2011-05-03:
Can you look a little closer at the differences between the two machines? There must be something. | {
"domain": "robotics.stackexchange",
"id": 5498,
"tags": "dynamic-reconfigure"
} |
ROS2 messages stop 90s after network drops | Question:
I'm running ROS2 Dashing with Fast-RTPS on a single-board computer running Ubuntu 18.04 server. It's mounted inside a waterproof enclosure. When the enclosure is in the air I can talk to it via wifi, and when it goes underwater the wifi connection drops. All ROS message traffic stops ~90 seconds after the network connection is lost.
I can easily reproduce this on my desktop (Ubuntu 18.04 desktop):
in the 1st terminal run ros2 run demo_nodes_cpp talker
in the 2nd terminal run ros2 run demo_nodes_cpp listener -- starts reporting
unplug the Ethernet cable, after ~90 seconds the listener stops reporting
plug in the Ethernet cable, after a few seconds the listener starts back up
I've tried DHCP and static IP & DNS, but the behavior is the same. I don't see anything surprising in journalctl -f.
Interestingly, if I start sending messages while the network is down, it appears to ignore the state of the network:
unplug the Ethernet cable
in the 1st terminal run ros2 run demo_nodes_cpp talker
in the 2nd terminal run ros2 run demo_nodes_cpp listener -- starts reporting
plug in the Ethernet cable, no change
unplug the Ethernet cable, no change
Any ideas?
Thanks.
Originally posted by clyde on ROS Answers with karma: 1247 on 2019-06-25
Post score: 0
Original comments
Comment by gvdhoorn on 2019-06-26:
Related: #q325872.
Answer:
It sounds like the loopback interface might not be configured to allow multicast traffic. Here's a SO post about configuring this. You might find more information in ros2/rmw_fastrtps#228.
Originally posted by sloretz with karma: 3061 on 2019-06-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by clyde on 2019-06-26:
Thx... I'm investigating this now
Comment by clyde on 2019-06-27:
You were right, loopback was not configured for multicast. I configured it using route add -net 224.0.0.0 netmask 240.0.0.0 dev lo and ifconfig lo multicast. This works great if I boot up with no network, enable multicast on loopback, then run the talker and listener. I can plug / unplug the Ethernet and it continues to work.
But if I boot up with the Ethernet plugged in, enable multicast on loopback, then start the talker and listener, it still stops 90 seconds after I unplug the Ethernet. The only way to restart communications is to kill all ROS nodes then restart them in the network-starved situation.
Comment by sloretz on 2019-06-27:
This might be a bug. Would you mind opening an issue on https://github.com/ros2/rmw_fastrtps/issues ?
Comment by clyde on 2019-06-27:
Opened: https://github.com/ros2/rmw_fastrtps/issues/297 | {
"domain": "robotics.stackexchange",
"id": 33257,
"tags": "ros2"
} |
[SOLVED] Getting map co-ordinates | Question:
We are building a SLAM map using our Turtlebot. During the process, we stop the Turtlebot at certain points and would like to obtain the x,y,z values from the map coordinate frame.
We are using the /map_metadata topic which we believe gives us the x,y,z values in the map.
However, these values never seem to change and always stay the same in the /map_metadata topic.
Could someone please provide a solution or an alternative?
Many thanks!
Edit: We've managed to obtain the corresponding robot co-ordinates by simply using
rosrun tf tf_echo /map /base_footprint
Originally posted by aviprobo on ROS Answers with karma: 61 on 2014-05-05
Post score: 3
Answer:
The MapMetaData message type used for the "/map_metadata" topic provides information about the transform and scaling of the grid map with regard to the (metric) map frame (essentially allowing you to convert between grid and metric coordinates). Note it contains no information about a robot's location whatsoever. If you have set up everything correctly (which appears to be the case if your Turtlebot is mapping the environment), you can use tf to obtain the robot's pose estimate. Have a look at this tutorial and this Q/A.
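As an illustration of that grid/metric conversion (a Python sketch; the function names are mine, resolution and origin come from the MapMetaData fields, and it assumes the common case of an un-rotated map origin):

```python
import math

def grid_to_metric(gx, gy, resolution, origin_x, origin_y):
    """Metric (map-frame) coordinates of the center of grid cell (gx, gy)."""
    return (origin_x + (gx + 0.5) * resolution,
            origin_y + (gy + 0.5) * resolution)

def metric_to_grid(x, y, resolution, origin_x, origin_y):
    """Grid cell containing the metric point (x, y)."""
    return (math.floor((x - origin_x) / resolution),
            math.floor((y - origin_y) / resolution))
```

A pose obtained from tf (in the map frame) can be pushed through `metric_to_grid` only if you really need the underlying occupancy-grid cell.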
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-05-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by aviprobo on 2014-05-06:
Thanks Stefan! Now, once I have acquired the translation from map to robot pose estimate, if I want to send a goal to the map (map co-ordinates), how am I able to translate from metric back to grid? Do I still use tf to perform the translation from metric to grid or is there another way?
Comment by Stefan Kohlbrecher on 2014-05-07:
Navigation happens in a metric map frame when using the ROS navigation stack. You normally do not care about the grid (e.g. pixel) coordinates of the underlying occupancy grid map (unless you want to do lower level things like SLAM or query the map data directly). | {
"domain": "robotics.stackexchange",
"id": 17855,
"tags": "ros, slam, coordinate-system"
} |
syntax error near unexpected token [SOLVED] | Question:
Hi guys,
I am new to ROS and I am having trouble with the odometry C++ publisher tutorial (wiki.ros.org/navigation/Tutorials/RobotSetup/Odom). I am using the code in this link.
When I try to rosrun the program it gives me this:
line 5: syntax error near unexpected token `('
line 5: ` int main(int argc, char** argv){'
I don't understand why. BTW I am using Groovy.
My C++ code is:
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <nav_msgs/Odometry.h>

int main(int argc, char** argv){
    ros::init(argc, argv, "odometry_publisher");
    ros::NodeHandle n;
    ros::Publisher odom_pub = n.advertise<nav_msgs::Odometry>("odom", 50);
    tf::TransformBroadcaster odom_broadcaster;

    double x = 0.0;
    double y = 0.0;
    double th = 0.0;

    double vx = 0.1;
    double vy = -0.1;
    double vth = 0.1;

    ros::Time current_time, last_time;
    current_time = ros::Time::now();
    last_time = ros::Time::now();

    ros::Rate r(1.0);
    while(n.ok()){
        current_time = ros::Time::now();

        //compute odometry in a typical way given the velocities of the robot
        double dt = (current_time - last_time).toSec();
        double delta_x = (vx * cos(th) - vy * sin(th)) * dt;
        double delta_y = (vx * sin(th) + vy * cos(th)) * dt;
        double delta_th = vth * dt;

        x += delta_x;
        y += delta_y;
        th += delta_th;

        //since all odometry is 6DOF we'll need a quaternion created from yaw
        geometry_msgs::Quaternion odom_quat = tf::createQuaternionMsgFromYaw(th);

        //first, we'll publish the transform over tf
        geometry_msgs::TransformStamped odom_trans;
        odom_trans.header.stamp = current_time;
        odom_trans.header.frame_id = "odom";
        odom_trans.child_frame_id = "base_link";

        odom_trans.transform.translation.x = x;
        odom_trans.transform.translation.y = y;
        odom_trans.transform.translation.z = 0.0;
        odom_trans.transform.rotation = odom_quat;

        //send the transform
        odom_broadcaster.sendTransform(odom_trans);

        //next, we'll publish the odometry message over ROS
        nav_msgs::Odometry odom;
        odom.header.stamp = current_time;
        odom.header.frame_id = "odom";
        odom.child_frame_id = "base_link";

        //set the position
        odom.pose.pose.position.x = x;
        odom.pose.pose.position.y = y;
        odom.pose.pose.position.z = 0.0;
        odom.pose.pose.orientation = odom_quat;

        //set the velocity
        odom.twist.twist.linear.x = vx;
        odom.twist.twist.linear.y = vy;
        odom.twist.twist.angular.z = vth;

        //publish the message
        odom_pub.publish(odom);

        last_time = current_time;
        r.sleep();
    }
}
thanks
Originally posted by trbf on ROS Answers with karma: 16 on 2013-11-02
Post score: 1
Original comments
Comment by yigit on 2013-11-02:
We need to see the code that you try to compile
Comment by trbf on 2013-11-02:
Ok,sorry about the editing,i don't now why it appears like that!
Comment by yigit on 2013-11-02:
It is because of indentation. It would be better if you shift your code to right by one tab, then paste it here. I'm not sure I understand it correctly but isn't there a problem with your "include"s?
Comment by trbf on 2013-11-03:
Thanks, now it is better. Maybe is the includes but i don't know why...
Answer:
The problem is solved! I was trying to run the cpp file instead of the executable made with rosmake! It was a really noob problem!
Thanks
Originally posted by trbf with karma: 16 on 2013-11-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Andrea019 on 2016-12-15:
Hello! I have the same problem :) which file is the one made with the rosmake? :(
Comment by trashperson on 2020-01-25:
wow I knew I was a noob but I never imagined I was this much of a frikkin n0000b. Thanks for the thread. | {
"domain": "robotics.stackexchange",
"id": 16041,
"tags": "navigation, odometry, c++, ros-groovy"
} |
Why does the debris from comets and former comets hang around so long? | Question: So tonight's Quadrantids shower got me thinking. Why does the debris from comets and former comets hang around so long? Each year the earth sweeps through the region of space that the comet went through. However, the comet doesn't come by each year, so the earth must be going through the same cloud numerous times. And each time we get a meteor shower as a result.
I suspect an answer, but I'd rather hear from professionals.
Answer: I'm not a professional, but I'll try to answer anyway.
Meteor showers occur when the Earth passes through the orbit of a comet (or, in at least one case, an asteroid). Over time, the debris spreads over the entire orbit of the comet.
A shower can last for several days, which is an indication of how wide the debris stream is. Assuming a duration of 1 day, and assuming the Earth's orbit is roughly at right angles to the debris stream, that gives a width of very roughly 2.5 million kilometers (and a length of several hundred million kilometers). The Earth is only about 12,735 kilometers in diameter.
Say the comet's orbit is 1 billion kilometers long (that's probably shorter than average). Then multiplying the length of the orbit by the area of a circle 2.5 million kilometers across gives the volume of the stream, and multiplying 2.5 million kilometers by the area of a circle 12,735 kilometers in diameter gives the volume of the stream through which the Earth passes (the hole it punches in the stream). The ratio is about 15 million.
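The arithmetic in that paragraph checks out (a quick Python sketch of the same back-of-the-envelope numbers):

```python
import math

orbit_length = 1e9       # km, assumed length of the comet's orbit
stream_width = 2.5e6     # km, debris stream diameter (~1 day of Earth's motion)
earth_diameter = 12_735  # km

# Volume of the whole debris stream vs. the "hole" the Earth punches through it.
stream_volume = orbit_length * math.pi * (stream_width / 2) ** 2
hole_volume = stream_width * math.pi * (earth_diameter / 2) ** 2

ratio = stream_volume / hole_volume  # comes out near 15 million
```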
Other factors:
Earth's gravity will pull in some debris that wouldn't otherwise have hit it, making its effective diameter a bit bigger (thanks to ghoppe's comment).
The density of the stream is not uniform. There are bound to be clumps of greater density. There's probably also a systematic change of density with distance from the Sun. The width of the stream probably varies as well. I have no idea of the details.
The Moon (and its gravity well) will also sweep up some debris -- but the Moon's effective area is a small fraction of Earth's.
But the blatant errors in my assumptions undoubtedly swamp any such effects, and I'm only looking for a rough estimate.
So yes, the Earth's passage through a meteor stream will effectively punch a hole in it, but it's a very small hole relative to the size of the entire stream. It could have a significant effect over millions of years.
I'm making a lot of simplifying assumptions here, but the conclusion seems about right if I've gotten the result within one or two orders of magnitude.
Reference: http://www.amsmeteors.org/meteor-showers/meteor-faq/#5, plus some of my own extremely rough back-of-the-envelope calculations. | {
"domain": "physics.stackexchange",
"id": 3207,
"tags": "solar-system, comets"
} |
Find the sum of prime numbers below 2 million | Question: This code is trying to find the sum of prime numbers below 2 million. The code does work as intended, but it takes a very long time ( 6 minutes) for the code to execute completely. What is the problem and how can I make it more efficient?
#include <iostream>
using namespace std;

unsigned long summationOfPrime();
bool isPrime(int n);

int main() {
    cout << summationOfPrime() << endl;
}

unsigned long summationOfPrime()
{
    const int num = 2000000;
    unsigned long sum = 0;
    for (int i = 2; i < num; i++)
    {
        if (isPrime(i) == true)
        {
            sum += i;
        }
    }
    return sum;
}

bool isPrime(int n)
{
    // Corner case
    if (n <= 1)
        return false;

    // Check from 2 to n-1
    for (int i = 2; i < n; i++)
        if (n % i == 0)
            return false;

    return true;
}
Answer: There are several issues, but here goes. Firstly, your indentation is all wonky. Some of this may be due to trying to input stuff here, but either way, indentation helps legibility.
Next, the biggest thing you can change to speed things up is to replace for (int i = 2; i < n; i++) with for (int i = 2; i*i <= n; i++). This works because if \$a \cdot b = n\$, one of them is at most the square root of \$n\$. This change will probably make your code fast enough on its own.
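A sketch of that square-root cutoff in isolation (Python here for brevity; in the C++ loop the change is just replacing i < n with i * i <= n):

```python
def is_prime(n):
    """Trial division up to sqrt(n): if n = a * b with both factors > sqrt(n),
    then a * b > n, a contradiction -- so any composite has a factor <= sqrt(n)."""
    if n <= 1:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
```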
That said, the much better solution to this type of problem is to use a prime sieve such as the Sieve of Eratosthenes which will generate all of the primes less than 2,000,000 faster by using addition and multiplication instead of modulo operations which are slower. | {
"domain": "codereview.stackexchange",
"id": 30861,
"tags": "c++, performance, primes"
} |
What are imax and imin in PID | Question:
I was trying to use the PID controller of the control toolbox ROS package in my hardware interface for the diff drive controller. When I was studying this API here, I didn't get what imax and imin mean and what they do.
What I'm trying to do is use this in the hardware interface of the diff drive robot I'm making, which has velocity feedback sensors (wheel encoders) sending back the actual angular velocity; I use this to calculate the velocity difference, i.e. the error, and send appropriate PWM signals to the motor.
Originally posted by dinesh on ROS Answers with karma: 932 on 2020-10-18
Post score: 0
Answer:
As noted in the link you posted:
i_max The max integral windup.
i_min The min integral windup.
As the names suggest, this is the cap for integrator windup. These limits prevent the system from generating huge values in the I term of the PID, which can take an excessive amount of time to "unwind", leading to oscillation and poor control of the system.
If you're not familiar with how PID control works, then the concept will be hard to get from this forum and I suggest searching for an online simulator. In short, a windup limiter allows you to set your I term high enough to get a good response (when it's required) while providing a means to mitigate the oscillation risk created by an unrestricted integrator that is free to accumulate unlimited error history.
A motor circuit for driving a robot should not really need such a feature once well tuned except in extreme situations, so in your case the limits will assist you keeping the robot under control while you are tuning, but unlikely will have an impact beyond that.
EXAMPLE OF INTEGRATOR WIND UP:
Lets say you have a robot working on an un-even surface. ROS sends the robot on a path that is up-hill and the robot is commanded to move at a speed it is not physically able to up the hill. There will be a static error, "too slow", in the velocity target vs actual and every iteration of the PID loop, the integrator will take the error, "too slow", from the most recent PID loop and add it to the accumulated error history, and the error history will get huge = 10000 * "too slow". Once the robot summits the peak and is on flat ground it will speed up, but not just to the commanded speed, but to much faster than the commanded speed, because of the huge accumulation of error history in the integrator. The (I* integrator) term of the PID is large due to the error history. With the robot now moving much faster than commanded, the error history in the integrator will start to get smaller as the error is now "too fast" instead of "too slow". But if there was 10 seconds of "too slow" saved into the integrator, there could now be 10 seconds of "too fast" required to unwind it. Your robot is out-of-control because of integrator windup.
PURPOSE OF LIMIT ON WINDUP: At some point the integrator will become large enough that the (I*integrator) term will saturate the output of the controls. There is no value to having the integrator getting larger than that. Larger than that cannot help control the system and leads to the unwinding issue from our robot example. So a good PID implementation will implement a max/min value to cap the integrator to limit the amount of "too slow" that can be stored. It means that at the top of the hill, only a short period of "too fast" is required to unwind the integrator and your robot stays in control.
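To make the wind-up-and-clamp mechanics concrete, here is a toy PID in Python with the integral term clamped to [i_min, i_max] (a conceptual sketch only, not the control_toolbox implementation; the gains in the example are arbitrary):

```python
class Pid:
    """Toy PID controller with integral clamping (anti-windup)."""

    def __init__(self, kp, ki, kd, i_min, i_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Clamp the accumulated error so a long uphill stretch cannot store
        # an unbounded amount of "too slow" history that later has to unwind.
        self.integral = min(self.i_max,
                            max(self.i_min, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Feeding `Pid(1.0, 1.0, 0.0, -2.0, 2.0)` a constant error of 1.0 pins the integral at the 2.0 cap instead of letting it grow without bound, so only a short stretch of opposite error is needed to unwind it later.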
Originally posted by billy with karma: 1850 on 2020-10-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35650,
"tags": "ros, diff-drive-controller"
} |
How train - test split works for Graph Neural Networks | Question: I have recently started studying GNN's. I have covered GCN and GraphSage so far. But I am confused regarding the process when testing occurs.
Now suppose in the graph above I am using the nodes as train and test set as shown in the figure. Suppose I am using the GraphSage model for a supervised node-classification task , now during training I am providing the sub-graph with blue nodes and the weights(parameters) gets calculated using the neighbourhood information of the nodes within this sub-graph(blue).
But during testing I want to find the labels of the green nodes. So during this time the forward propagation of GraphSage will be performed using the weights calculated during training and using the neighbourhood information of the test nodes.
My Doubt: The part where I am confused is whether, during testing, the algorithm considers as neighbourhood only the green nodes (test set), or whether it also uses the blue nodes' information (since they are connected, as can be seen in the figure) during the forward propagation step to compute the node embedding.
Below is the attached forward propagation algorithm of Graphsage as mentioned in the paper.
It might be a silly question, but since I am new to this I am having difficulty in understanding the neighbourhood definition during train and test times. Do correct me if I have wrongly stated any point.
Answer:
Does the algorithm consider as neighbourhood only the green nodes(test
set) or it also considers the blue node's?
It does consider both blue nodes and green nodes.
Note that GNNs here deal with transductive learning, where the test data (the nodes, in this case) is seen during training, just without its labels. What you might have in mind is inductive learning, where the train set and test set are completely separate.
Suppose I am using the GraphSage model for a supervised
node-classification task , now during training I am providing the
sub-graph with blue nodes and the weights(parameters) gets calculated
using the neighbourhood information of the nodes within this
sub-graph(blue).
This is not right. During training you provide the whole graph (both blue and green nodes, and all edges), but you only provide the labels of the nodes in the train set (the labels of the test nodes are unknown during training).
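To make this masking concrete, here is a toy sketch in plain NumPy (the graph, features, weights, and mask are all made up for illustration): one mean-aggregation message-passing step runs over the full graph, while the loss only touches the labelled training rows.

```python
import numpy as np

# Adjacency: each test (green) node is connected to train (blue) nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.eye(4)                                      # one-hot node features
train_mask = np.array([True, True, False, False])  # blue = train, green = test

# Aggregation uses EVERY neighbour, train or test: a green node's features
# flow into a blue node's embedding and vice versa.
deg = A.sum(axis=1, keepdims=True)
H = (A @ X) / deg                                  # mean of neighbour features

# Only the labelled (train) rows contribute to the loss; test labels are
# never touched during training.
labels = np.array([0, 1, 0, 1])
W = np.random.default_rng(0).normal(size=(4, 2))
logits = H @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -np.log(probs[train_mask, labels[train_mask]]).mean()
```

The key point is that the mask restricts the loss, not the message passing: the forward pass is identical at train and test time.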
You may want to refer to these slides (p. 55 onwards). | {
"domain": "datascience.stackexchange",
"id": 10158,
"tags": "training, graph-neural-network, inference"
} |
Why do large neurotransmitters travel faster down the axon? | Question: I understand that with large neurotransmitters, like neuropeptides, the precursor neurotransmitter and the enzymes are produced in the soma and quickly travel down the axon to be modified in the axon terminal.
With the small-molecule transmitters, the enzymes are made in the soma and then transported slowly down the axon to the terminal, where they synthesize the small transmitters.
Why is it that the precursors and enzymes for large neurotransmitters travel fast down the axon, whereas the enzymes for the small transmitters travel slowly? Is there a specific reason?
Answer: I presume you are referring to fast versus slow axoplasmic transport.
I presume by "small neurotransmitters," you are referring to "small molecule" neurotransmitters like GABA, glutamate, acetylcholine, and the catecholamines like dopamine.
For the "large" neurotransmitters, I assume you are referring to peptides.
(for a quick reference on these two categories you can see this book blurb)
Although local protein synthesis is possible outside the soma, particularly in dendrites, and there is even evidence for protein synthesis in axons, including neurotransmitter peptides, most protein synthesis still seems to occur in the vicinity of the nucleus, at the soma. Synthesis anywhere else means you need to transport the mRNAs and translation machinery instead, since the mRNA needs to be made at the nucleus. Even the "large" neurotransmitters, which are peptides, need to be synthesized just like bigger proteins.
Back to axoplasmic transport... There are two main mechanisms for moving things down the axon. The faster version of this transport is really for the transport of vesicles. To transport a neurotransmitter peptide in this fashion, you fill a vesicle full of the peptide and send it on its way along the microtubules via kinesin and dynein motors.
Larger, non-membrane bound proteins, however, are often not transported in vesicles, but by the slower transport mechanism. Note that this transport is still pretty fast, way faster than simple diffusion, it just isn't as fast as the "fast" transport. The exact mechanisms of slow transport are still an active area of research and not fully understood.
To get back to your question, "is there a specific reason?" The reason isn't so much about the speed, but about the mechanism of transport. Small peptides and membrane-bound proteins move in vesicles via fast transport. Large proteins not bound in membranes move via slow transport. The enzymes that synthesize the small neurotransmitters fall into this latter category. Importantly, although the neurotransmitters may be small, the enzymes themselves can be big, at least certainly bigger than the small peptides that are equivalent to the "large" neurotransmitters. | {
"domain": "biology.stackexchange",
"id": 6527,
"tags": "neuroscience, intracellular-transport"
} |