Leaky ReLU inside of a Simple Python Neural Net
Question: To build a simple 1-layer neural network, many tutorials use a sigmoid function as the activation function. According to scholarly articles and other online sources, a leaky ReLU is a better alternative; however, I cannot find a way to alter my code snippet to allow a leaky ReLU. I tried logic like if x > 0 then x else x/100 as the activation function, then the same for the derivative. Is it failing because the output layer cannot have a ReLU? Should I change the first layer to ReLU and then add a softmax output layer?

import numpy as np
np.random.seed(1)

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([[0,1,0,1]]).T

class NN:
    def __init__(self, X, y):
        self.X = X
        self.y = y
        self.W = np.random.uniform(-1, 1, (X.shape[1], 1))
        self.b = np.random.uniform(-1, 1, (X.shape[1], 1))

    def nonlin(self, x, deriv=False):
        if deriv:
            return x*(1-x)
        return 1/(1+np.exp(-x))

    def forward(self):
        self.l1 = self.nonlin(np.dot(self.X, self.W + self.b))
        self.errors = self.y - self.l1
        print(abs(sum(self.errors)[0]))

    def backward(self):
        self.l1_delta = self.errors * self.nonlin(self.l1, True)
        self.W += np.dot(self.X.T, self.l1_delta)
        self.b += np.dot(self.X.T, self.l1_delta)

    def train(self, epochs=20):
        for _ in range(epochs):
            self.forward()
            self.backward()

nn = NN(X, y)
nn.train()

Answer: I assume the task you're working on is a binary classification task, since y = np.array([[0,1,0,1]]).T, and as an error function you use self.errors = self.y - self.l1. Now compare this to the curve of a leaky ReLU function: the leaky ReLU is an unbounded function. How is your network supposed to model a binary classification task, where output values are elements of $\{0,1\}$, using this function? And what is the result of applying the absolute difference as an error function to your labels $y \in \{0,1\}$ and your outputs $\hat{y} \in (-\infty,\infty)$? What you would need is a translation of the leaky ReLU outputs to your classes, e.g. something like prediction = 1.0 if activation >= 0.0 else 0.0.
This is one reason why, for classification problems, it is more common to apply a last-layer activation function whose outputs can be interpreted as class probabilities, e.g. softmax or sigmoid. You still need a threshold here to move from real-valued outputs (which are interpreted as probabilities) to binary class labels, but that is fairly straightforward, since you often just set it to a probability of $0.5$. I suggest following a tutorial for your from-scratch implementation. If you are willing to take a step back, here is one for a simple perceptron (i.e. just a single neuron). Images taken from this article
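A minimal sketch of the combination suggested above (illustrative layer sizes and variable names, not the asker's exact code): a leaky ReLU hidden layer, a sigmoid output so predictions stay in (0, 1), and a 0.5 threshold to recover class labels.

```python
import numpy as np

# Sketch: leaky ReLU hidden layer + sigmoid output for binary classification.
np.random.seed(1)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0, 1, 0, 1]], dtype=float).T

def lrelu(x):       return np.where(x > 0, x, 0.01 * x)
def lrelu_grad(x):  return np.where(x > 0, 1.0, 0.01)
def sigmoid(x):     return 1.0 / (1.0 + np.exp(-x))

W1 = np.random.uniform(-1, 1, (3, 4)); b1 = np.zeros((1, 4))
W2 = np.random.uniform(-1, 1, (4, 1)); b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(5000):
    z1 = X @ W1 + b1
    h = lrelu(z1)                      # hidden layer: leaky ReLU
    out = sigmoid(h @ W2 + b2)         # output layer: sigmoid -> (0, 1)

    d_out = out - y                    # grad of cross-entropy w.r.t. pre-sigmoid
    d_h = (d_out @ W2.T) * lrelu_grad(z1)

    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0, keepdims=True)

pred = (out >= 0.5).astype(int)        # threshold probabilities into {0, 1}
print(pred.ravel())
```

The unbounded leaky ReLU lives only in the hidden layer; the sigmoid maps its output into (0, 1), which is then compared against the labels.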
{ "domain": "datascience.stackexchange", "id": 6972, "tags": "python, neural-network, activation-function" }
Problem in derivation of Rydberg Equation
Question: In deriving the Rydberg equation I found that $\delta E=E_2-E_1=hf$, where $E_1$ is the energy of the orbit in which the electron was, $E_2$ is the energy of the orbit to which the electron is transferred, and $f$ is the frequency of the released photon. Now, shouldn't there be $nhf$ instead of $hf$? If one electron is transferred, is only one photon released, resulting in $1\times hf$? Is there any proof that only one photon is released? If more than one photon is released, then why isn't there $nhf$? Answer: You're correct that multiphoton processes (processes caused by the absorption/emission of multiple photons) are allowed; they're just less probable than single-photon processes. I'm not sure there's a simple "semiclassical" justification for this, but it's clear in quantum mechanics. If you were to calculate the transition matrix element in perturbation theory, your initial state would be two photons and an electron in the $i$th atomic orbital, and the final state would be an electron in the $j$th atomic orbital. The interaction between the electron and the electromagnetic field is $e j\cdot A$, where $j$ is the electron current and $A$ the electromagnetic potential. Since we have two photons in the initial state, we need two factors of $A$ to contract with to get a nonzero matrix element, so we have to go to second-order perturbation theory (which we didn't have to do for a single-photon process). Second-order perturbation theory, loosely speaking, involves the square of the interaction operator, which, when you're careful about all the factors, gives you a prefactor of $e^2/(\hbar c)\sim 1/137$ relative to the first-order process. So each time you consider a process involving an additional photon, you add another factor of a small number, making it usually less probable. I'm not sure if that made much sense, since it relies on some understanding of quantum mechanics.
The handwavy answer is that there's a probability of emitting/absorbing a photon, and the probability of doing that multiple times involves the square of the probability you started with, so it's less probable. I'm not sure what the early founders of quantum mechanics thought about this; maybe they just postulated that only single-photon processes are important.
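Schematically, the suppression described above can be written as an order-of-magnitude scaling (a sketch, not a full matrix-element calculation; $\alpha$ is the fine-structure constant):

```latex
P_{n\text{-photon}} \;\sim\; \alpha^{\,n-1}\, P_{1\text{-photon}},
\qquad \alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}
```

Each additional photon in the process brings one more factor of $\alpha$, which is why the Rydberg formula with a single $hf$ dominates in practice.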
{ "domain": "physics.stackexchange", "id": 53752, "tags": "energy, electrons, atomic-physics" }
convert iplImage to sensor_msgs::ImageConstPtr
Question: Hi everyone, I have some problems understanding how to convert images with cv_bridge. I read the tutorial: http://wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages . But I can't find a way to convert an OpenCV image to a ROS image message. I succeeded in converting from ROS to OpenCV. I wanted to use the function toImageMsg(), but it returns a sensor_msgs::Image and I want a sensor_msgs::ImageConstPtr. Can somebody help me please? Thank you. Originally posted by RosFaceNoob on ROS Answers with karma: 42 on 2014-04-16 Post score: 0 Answer: Try the following:

sensor_msgs::Image my_image = <somehow get this image with toImageMsg()>;
sensor_msgs::ImageConstPtr my_image_const_ptr = &my_image;

sensor_msgs::ImageConstPtr is nothing else than a constant shared pointer to a sensor_msgs::Image. Originally posted by BennyRe with karma: 2949 on 2014-04-16 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by RosFaceNoob on 2014-04-16: I did: sensor_msgs::ImageConstPtr img_const = &img; I have this error: conversion from ‘IplImage**’ to non-scalar type ‘sensor_msgs::ImageConstPtr {aka boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >}’ requested. I declared "img" like: IplImage *img Comment by BennyRe on 2014-04-16: I updated my answer. my_image should be the result of toImageMsg() I guess Comment by RosFaceNoob on 2014-04-16: So first I have to convert my IplImage to a cv_bridge::CvImagePtr, because you can only use toImageMsg() on a cv_bridge::CvImagePtr. Do you know how to do it?
{ "domain": "robotics.stackexchange", "id": 17669, "tags": "ros, cvbridge" }
Exponential Linear Units (ELU) vs $log(1+e^x)$ as the activation functions of deep learning
Question: It seems ELU (Exponential Linear Units) is used as an activation function for deep learning. But its graph is very similar to the graph of $\log(1+e^x)$. So why has $\log(1+e^x)$ not been used as the activation function instead of ELU? In other words, what is the advantage of ELU over $\log(1+e^x)$? Answer: ReLU and all its variants (except ReLU-6) are linear, i.e. $y = x$, for values greater than or equal to 0. This gives ReLU, and specifically ELU, an advantage: linearity means that the slope does not plateau or saturate when $x$ becomes larger. Hence, the vanishing-gradient problem is mitigated. Now, the graph of $y = \log(1 + e^x)$ isn't linear for values > 0. For larger negative values, the graph produces values which are very close to zero. This is also found in sigmoid, where larger values produce a fully saturated activation. Hence, $y = \log(1 + e^x)$ can raise the problems which sigmoid and tanh suffer from. About ELU: ELU has an exponential curve for all negative values, namely $y = \alpha(e^x - 1)$. It does not saturate for moderately negative values, but it does saturate for larger negative values. See here for more information. Hence, $y = \log(1 + e^x)$ is not used because of early saturation for negative values and also non-linearity for values > 0. This may produce problems and even lose some of the features which ReLU and its variants exhibit.
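The saturation argument can be checked numerically (a sketch with $\alpha = 1$): the derivative of softplus $\log(1+e^x)$ is the sigmoid, which is strictly below 1 everywhere and vanishes for large negative $x$, while ELU's slope is exactly 1 for every $x > 0$.

```python
import numpy as np

# Compare slopes of softplus log(1+e^x) and ELU (alpha = 1).
def softplus_grad(x):
    return 1.0 / (1.0 + np.exp(-x))    # d/dx log(1 + e^x) = sigmoid(x)

def elu_grad(x, alpha=1.0):
    return np.where(x > 0, 1.0, alpha * np.exp(x))

x = np.array([-10.0, 0.0, 10.0])
print(softplus_grad(x))   # never reaches 1; ~0 for x << 0
print(elu_grad(x))        # exactly 1 for all x > 0
```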
{ "domain": "datascience.stackexchange", "id": 5357, "tags": "machine-learning, deep-learning, activation-function" }
Why isn't my model learning satisfactorily?
Question: The problem to solve is non-linear regression of a non-linear function. My actual problem is to model the function "find the max over many quadratic forms": max(w.H.T * Q * w), but to get started and to learn more about neural networks, I created a toy example for a non-linear regression task, using Pytorch. The problem is that the network never learns the function in a satisfactory way, even though my model is quite large with multiple layers (see below). Or is it not large enough or too large? How can the network be improved or maybe even simplified to get a much smaller training error? I experimented with different network architectures, but the result is never satisfactory. Usually, the error is quite small within the input interval around 0, but the network is not able to get good weights for the regions at the boundary of the interval (see plots below). The loss does not improve after a certain number of epochs. I could generate even more training data, but I have not yet understood completely, how the training can be improved (tuning parameters such as batch size, amount of data, number of layers, normalizing input (output?) data, number of neurons, epochs, etc.) My neural network has 8 layers with the following number of neurons: 1, 80, 70, 60, 40, 40, 20, 1. For the moment, I do not care too much about overfitting, my goal is to understand, why a certain network architecture/certain hyperparameters need to be chosen. Of course, avoiding overfitting at the same time would be a bonus. I am especially interested in using neural networks for regression tasks or as function approximators. In principle, my problem should be able to be approximated to arbitrary accuracy by a single layer neural network, according to the universal approximation theorem, isn’t this correct? Answer: Neural networks learn badly with large input ranges. Scale your inputs to a smaller range e.g. 
-2 to 2, and convert to/from this range to represent your function interval consistently.
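A minimal sketch of that advice (hypothetical function names): an affine map from the training interval onto [-2, 2], with its inverse so predictions can be translated back.

```python
import numpy as np

# Sketch: scale inputs from [lo, hi] into [-2, 2] and back.
# Fit the scaler on the training interval and reuse it at prediction time.
def make_scaler(lo, hi, a=-2.0, b=2.0):
    def fwd(x):
        return a + (x - lo) * (b - a) / (hi - lo)
    def inv(z):
        return lo + (z - a) * (hi - lo) / (b - a)
    return fwd, inv

fwd, inv = make_scaler(lo=0.0, hi=1000.0)
x = np.array([0.0, 500.0, 1000.0])
z = fwd(x)
print(z)        # endpoints map to -2 and 2
print(inv(z))   # round-trips to the original values
```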
{ "domain": "ai.stackexchange", "id": 2361, "tags": "neural-networks, machine-learning, deep-learning, pytorch" }
What's the required tensile strength for a wire?
Question: I want to horizontally stretch a 50-meter wire rope and slide a 100 kg object from side to side. What should be the minimum tensile strength (or carrying capacity) of the rope to be able to hold the object while it is placed in the middle of the 50 meters? We can assume the rope's weight is 100 grams/meter. I am sure there's some straightforward formula for calculating this; I just need some help figuring it out. :) Answer: Actually, getting an exact expression for a hanging wire with a lumped mass somewhere in the middle is extremely hard (maybe impossible). Another thing to consider is the allowable sag of the rope. If the rope can sag a lot then the tension can be low, whereas if low sag is required the tension has to be really high. In addition, as the tension increases the rope stretches, increasing the sag but relieving the tension. It is a rather complex problem overall; I can provide an approximate expression when the lumped weight $W$ is located in the middle of the span $S$. Also important is the unit weight $w = \frac{m g}{\ell}$. I also consider the rope to be inextensible. The tension is split into the horizontal tension component $H$, which is constant throughout the rope, and the total tangential tension $T$, which increases the further away you go from the sag point.
The total sag amount is $D$, and the relationships between $H$, $D$ and $T$ are: $$ \begin{align} T & = H \cosh\left( \frac{w S}{2 H} \right) + \frac{W}{2} \sinh\left( \frac{w S}{2 H} \right) \\ D & = \frac{H}{w} \left( \cosh \left( \frac{w S}{2 H} \right) -1 \right) + \frac{W}{2 w} \sinh \left( \frac{w S}{2 H} \right) \end{align} $$ A further approximation of the above can be done when $ H \gg \frac{w S}{2}$: $$ \begin{align} T & = \frac{8 H^2+w^2 S^2+2 S W w}{8 H} \\ D & = \frac{ \frac{w S^2}{8} + \frac{S W}{4}}{H} \end{align} $$ Or, by solving the last one for the horizontal tension $H$, $$ T = w D + \frac{S}{4 D} \left(W - \frac{w S}{2} \right) $$ In your case $S=50$, $w = 0.1\times9.81$, $W=100\times 9.81$, so for $D=3$ meters of sag the tension is $T=3988$ newtons, for example. Your approximate design expression is $$\require{cancel} T = \frac{11,955}{D} + \left(\cancel{ 0.981 D }\right)$$ Edit 1 Upon further examination it can be said that the worst tension occurs when the weight is at one end. There you add the vertical tension and the weight, and combine them vectorially with the horizontal tension for the worst tension $$ T = \sqrt{ (V+W)^2 + H^2 } $$ The vertical tension at the end of a cable is $$ V = H \sinh \left(\frac{w S}{2 H} \right) $$ The horizontal tension is found (numerically) from the measured sag (without the weight). $$ D = \frac{H}{w} \left( \cosh \left( \frac{w S}{2 H} \right) -1 \right) $$ Example You string the rope with $D=1$ meter of sag. From the above equation you find the horizontal tension to be $H=306.7$ newtons. You can use Wolfram Alpha, Excel or any numeric solver you can find for this step. $$ 1 = \frac{H}{0.1 \times 9.81} \left( \cosh \left(\frac{0.1\times 9.81 \times 50}{2 H}\right) -1 \right) $$ The vertical tension on one end is then $$ V = 306.7 \sinh \left(\frac{0.1\times 9.81 \times 50}{2\times 306.7}\right) = 24.55 $$ With a weight of $W = 100 \times 9.81 = 981$ the total tension on the end is $$ T = \sqrt{(24.55+981)^2 + 306.7^2} = 1051.3 $$
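The worked example can be reproduced with a few lines of numerical code (a sketch; the bisection bounds and variable names are my own):

```python
import math

# Given sag D with no load, solve D = (H/w)*(cosh(w*S/(2H)) - 1) for the
# horizontal tension H by bisection, then find the worst-case end tension
# with the weight at one end: T = sqrt((V + W)^2 + H^2).
w = 0.1 * 9.81   # rope unit weight, N/m
S = 50.0         # span, m
W = 100 * 9.81   # lumped weight, N
D = 1.0          # measured sag, m

def sag(H):
    return (H / w) * (math.cosh(w * S / (2 * H)) - 1.0)

lo, hi = 1.0, 1e5                 # sag(H) decreases as H grows
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if sag(mid) > D:
        lo = mid                  # tension too small -> too much sag
    else:
        hi = mid
H = 0.5 * (lo + hi)

V = H * math.sinh(w * S / (2 * H))   # vertical end tension of the bare rope
T = math.hypot(V + W, H)             # worst total tension, weight at the end
print(f"H={H:.1f} N, V={V:.2f} N, T={T:.1f} N")
```

Running this reproduces the answer's numbers: H about 306.7 N, V about 24.55 N, and a worst-case end tension around 1051 N.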
{ "domain": "physics.stackexchange", "id": 29820, "tags": "homework-and-exercises, newtonian-mechanics, string" }
Zipping two lists with an offset in Python
Question: Walking my first steps in Python coming from C, I find myself missing pointers from time to time. In this case I want to go through a list processing two elements at a time, where those elements are step places away. This is my initial approach, but I would be happy to learn a couple of alternatives and their advantages.

v = [0,1,2,3,4,5,6,7,8,9]
step = 3
for x, y in zip(v[:-step], v[step:]):
    print("\nx={}, y={}".format(x, y))

I'm interested in what can be done with naked Python alone first, but alternatives from a module are welcome as well. I want to know which approach is closest to pointer use in C (which was my intent with the code above) in terms of efficiency. Answer: You're on the right track, but it can be simplified to the following. Note also that the step variable name could add confusion because Python ranges and slices have a step attribute -- but that's not what you are doing:

start = 3
for x, y in zip(v, v[start:]):
    ...

Also note that syntax like v[start:] creates a new list. If you are dealing with large data volumes and want to avoid that, you can use itertools.islice.

from itertools import islice

v = [0,1,2,3,4,5,6,7,8,9]
start = 3
for x, y in zip(v, islice(v, start, None)):
    print(x, y)
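A quick check that the two variants in the answer are equivalent (names are illustrative): zip stops at the shorter input, so pairing v with its offset view yields the same pairs either way, and islice merely avoids materialising the intermediate list v[start:].

```python
from itertools import islice

v = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
start = 3

pairs_slice = list(zip(v, v[start:]))                  # copies the tail
pairs_islice = list(zip(v, islice(v, start, None)))    # no intermediate list
print(pairs_slice)
```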
{ "domain": "codereview.stackexchange", "id": 42727, "tags": "python, beginner, iterator" }
Using entropy_mutual function in QuTiP
Question: I am trying to calculate mutual entropies using QuTiP, but I have been unsuccessful so far. More specifically, I consider a 2^n x 2^n matrix representing the density operator of an n-qubit bipartite system AB made of system A (first m < n qubits) and B (remaining n-m qubits). No tutorial or material on the internet addresses this specific task. For simplicity, let us consider a 1-qubit system A, a 2-qubit system B, and a density operator of dimension 8x8 representing AB in the computational basis. More practically, in Python, let rhoAB = Qobj(np.random.rand(8,8)), and assume that this is a valid density operator. How should I call entropy_mutual so that I can get this measure between A and B, in particular regarding the arguments selA and selB? Ideally, I would call something like entropy_mutual(rhoAB, selA=[1], selB=[2,3]), but this is not how the function interprets the subsystems and their respective dimensions. Answer: Best to look at the source code when the documentation isn't helpful enough. The definition of entropy_mutual is

def entropy_mutual(rho, selA, selB, base=e, sparse=False):
    """
    Calculates the mutual information S(A:B) between selection
    components of a system density matrix.

    Parameters
    ----------
    rho : qobj
        Density matrix for composite quantum systems
    selA : int/list
        `int` or `list` of first selected density matrix components.
    selB : int/list
        `int` or `list` of second selected density matrix components.
    base : {e,2}
        Base of logarithm.
    sparse : {False,True}
        Use sparse eigensolver.

    Returns
    -------
    ent_mut : float
        Mutual information between selected components.

    """
    if isinstance(selA, int):
        selA = [selA]
    if isinstance(selB, int):
        selB = [selB]
    if rho.type != 'oper':
        raise TypeError("Input must be a density matrix.")
    if (len(selA) + len(selB)) != len(rho.dims[0]):
        raise TypeError("Number of selected components must match " +
                        "total number.")

    rhoA = ptrace(rho, selA)
    rhoB = ptrace(rho, selB)
    out = (entropy_vn(rhoA, base, sparse=sparse) +
           entropy_vn(rhoB, base, sparse=sparse) -
           entropy_vn(rho, base, sparse=sparse))
    return out

So we see selA and selB are passed as arguments to compute the partial trace. I am not too familiar with QuTiP, but here is an example computing $S(A:B)$ for $\rho_{AB}$ where $A$ is a qubit system and $B$ is a two-qubit system.

import qutip as qtp

# note there is a rand_dm function
# We should also let qutip know how our systems are partitioned.
# This is so it knows how to correctly compute the partial trace.
rho = qtp.rand_dm(8, dims=[[2,4],[2,4]])
qtp.entropy_mutual(rho, 0, 1)

With the above example we could also specify the second system as two qubits instead of a four-dimensional system, i.e.

rho = qtp.rand_dm(8, dims=[[2,2,2],[2,2,2]])
qtp.entropy_mutual(rho, 0, [1,2])
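For intuition, the quantity entropy_mutual computes can also be reproduced by hand with plain NumPy (a sketch for the same 1-qubit / 2-qubit split; the helper names are my own):

```python
import numpy as np

# S(A:B) = S(rho_A) + S(rho_B) - S(rho_AB) for an 8x8 density matrix,
# with A the first qubit and B the remaining two qubits.
def vn_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                   # drop (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))   # natural log, QuTiP's default base

def partial_traces(rho, dA, dB):
    r = rho.reshape(dA, dB, dA, dB)
    rhoA = np.einsum('ijkj->ik', r)   # trace out B
    rhoB = np.einsum('ijil->jl', r)   # trace out A
    return rhoA, rhoB

# A maximally entangled pair between A and the first qubit of B
# (second qubit of B fixed at |0>); here S(A:B) = 2*ln(2).
psi = np.zeros(8)
psi[0] = psi[6] = 1 / np.sqrt(2)      # (|0>_A|00>_B + |1>_A|10>_B)/sqrt(2)
rho = np.outer(psi, psi)

rhoA, rhoB = partial_traces(rho, 2, 4)
mi = vn_entropy(rhoA) + vn_entropy(rhoB) - vn_entropy(rho)
print(mi)   # 2*ln(2) ~ 1.3863
```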
{ "domain": "quantumcomputing.stackexchange", "id": 1495, "tags": "programming, entropy, qutip" }
Can a mirror reflect flame?
Question: Since a mirror can reflect (bounce in a different direction) heat, doesn't that mean it can reflect flame, because flame is also heat? I know that heat is anything hot, and flames are hot. If yes, does it reflect it in a specific direction, and if no, why not? Answer: "Since a mirror can reflect (bounce it in a different direction) heat" -- there exist heat reflectors, true, but they are not mirrors in the sense of coherent images. A mirror reflects the rays of the object coherently, so the pattern remains. Heat reflectors reflect like a lot of point sources. Any heat reflected by the usual mirror will also be incoherent infrared radiation, because the wavelengths of visible light, which make up the flame image, are much smaller than the infrared wavelengths of reflected heat, and the mirror is designed for visible light. "Doesn't that mean it can reflect flame because it is also heat" -- the image of the flame will be there, but not its heat concentrated as in real space. "I know that heat is anything hot, and flames are hot. If yes does it reflect it in a specific direction and if no why not?" -- the light frequencies of the flame will be reflected in a coherent image from the silvered back side of the mirror. The infrared frequencies hitting the average mirror will be reflected incoherently from the glass in front, which will be opaque to the infrared frequencies, depending where the photons hit. Special mirrors are designed for infrared frequencies, for example this link. I suppose they would reflect the heat localized as an image, as this video shows in the test, where you can see the source and the infrared is caught in the detector.
{ "domain": "physics.stackexchange", "id": 37591, "tags": "thermodynamics, electromagnetic-radiation, reflection, infrared-radiation" }
Running roscore and roslaunch from eclipse
Question: I'm trying to run roslaunch or roscore as an External Tool following another post, but I got an error. It looks like you probably need to define the ROS environment variables for eclipse. I think I've done so, but then I get this error when running the External Tool: /usr/bin/env: python: No such file or directory. I think it is probably a matter of configuring the ROS environment variables for eclipse. I've set ROS_ROOT, ROS_PACKAGE_PATH and ROS_MASTER_URI; are there any more? Maybe there is another reason. I would be grateful if anyone can shed some light on this. Thank you! Originally posted by chcorbato on ROS Answers with karma: 202 on 2012-09-18 Post score: 1 Original comments Comment by dornhege on 2012-09-18: Are you starting eclipse from a shell or via a starter? Does it work when starting it from a shell? Comment by chcorbato on 2012-09-18: It seems to work as you suggest, running eclipse from a shell, thank you very much. However, I do not get the output in the eclipse console; a Progress Information eclipse window dialog appears instead and keeps saying "launching roslaunch". Checking from a shell I see that the ROS system is up. Answer: You might need to source the setup.bash file from ~/.bash_profile to make the ROS environment variables available to the whole system, not only to the bash session. The ~/.bash_profile shell script is called only once, when you log in, contrary to ~/.bashrc, which is executed every time a new terminal window is opened. P.S. Another useful approach to debug nodes started via launch files is to attach gdb to an already running process. I am using this a lot and it is quite handy. The only problem is that you will not be able to debug any startup code, e.g. constructors, with such an approach. To use this option go to Run->Debug Configurations... and double-click on C/C++ Attach to Application. Then fill in C/C++ Application and press Debug.
Originally posted by Boris with karma: 3060 on 2012-09-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11052, "tags": "roslaunch, eclipse" }
Using the results of clustering to retrain a neural network
Question: I am following and expanding upon previous work from the winner of the Melanoma Classification competition from here. The dataset has 9 classes; the competition is only interested in one class (melanoma). I have taken the feature outputs (pre-final layer) from the CNN and performed clustering, then used this to group different classes (leaving melanoma as its own group), then used this grouping in training. I have already performed clustering with other steps (PCA, t-SNE, K-Means, hierarchical, LDA, QDA, NDA, etc.) and have results. I am largely trying to understand the maths (and background research) behind why this approach to retraining might improve performance (on the ROC-AUC of the class that was not grouped, i.e. melanoma). Any advice / relevant papers welcome! Thanks. Answer: I would agree with Brian's answer in the following sense: all the steps you perform, i.e. embedding, clustering, retraining, etc., do not represent, in principle, qualitatively different mathematical operations than what a dedicated deep model with non-linearities can do. So, in this sense, I do not expect you to get radically different performance than using a single NN model trained end-to-end. That being said, whatever approach one uses (as I expect them to be equivalent), one will possibly have to deal with class imbalance wisely.
{ "domain": "datascience.stackexchange", "id": 11130, "tags": "clustering, convolutional-neural-network, pytorch, feature-extraction, retraining" }
What do you call it when you add zeros between the elements of a vector?
Question: What do you call it when you add zeros between the elements of a vector? Let's say you have x = [1 2 3 4] and x_2 = [0 1 0 2 0 3 0 4]; what do you call this process? Answer: In the context of filter banks, simply upsampling (without filtering, or with the trivial all-pass filter, as Wikipedia relates it to interpolation), denoted by an up-arrow, as illustrated below for a 2-fold or order-2 upsampling: This is described in Filter banks: Decimation, Interpolation, and Modulation, and the interpolation filter associated with upsampling is called a synthesis filter. As noted by @Jason R, one also finds "expansion". Expansion is also found as a stretching for continuous signals, while compression is the effect in the dual Fourier domain. Sometimes, one can find "zero-insertion" or "zero-stuffing". Actually, in image processing some use "upsampling" as a proxy for pixel increase with filtering.
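The question's example, written as a 2-fold zero-insertion (a minimal NumPy sketch):

```python
import numpy as np

# Order-2 upsampling by zero-insertion: each sample of x lands at every
# other position of a zero vector. The question's x_2 puts a zero before
# each element, so samples go at the odd indices.
x = np.array([1, 2, 3, 4])
L = 2                                    # upsampling factor
x_up = np.zeros(L * len(x), dtype=x.dtype)
x_up[1::L] = x
print(x_up)   # [0 1 0 2 0 3 0 4]
```

(The more common convention places the sample first, x_up[::L] = x, giving [1 0 2 0 3 0 4 0]; only the phase differs.)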
{ "domain": "dsp.stackexchange", "id": 3732, "tags": "terminology" }
Notebook app for terminal
Question: I'm currently enrolled in a node.js course. The first app we have done in the course was a terminal notebook. It appends objects to a JSON file. Example call on the terminal:

node notebook.js add Second "Aenean commodo ligula eget dolor."

Example structure of the JSON file:

[{
    "title": "First",
    "timestamp": 1481534161237,
    "body": "Lorem ipsum dolor sit amet, consectetuer adipiscing elit."
}, {
    "title": "Second",
    "timestamp": 1481534192437,
    "body": "Aenean commodo ligula eget dolor."
}]

Then the objects can be displayed, listed or removed. The basic structure was done together with the trainer. Afterward I added the validation part and an additional property "timestamp". "timestamp" is mainly for listing the notes sorted in ascending order. Furthermore I wrote a function for creating a formatted date (based on the timestamp). notebook.js (the "main" file):

// ---- Assignments -----------
const fs = require('fs');
const notes = require('./notes.js');

var args = process.argv;
var errorReport = "\nSomething has gone wrong.";
var maxLengthTitle = 150;
var maxLengthBody = 1000;
var command = args[2];
var title = args[3];
var body = args[4];

// ---- Validation -----------
if (['add', 'list', 'read', 'remove'].indexOf(command) === -1) {
    errorReport += `\nParam 1: Expected "add" or "list" or "read" or "remove". "${command}" found.`;
}
if (command) {
    if (!title && command !== 'list') {
        command = "";
        errorReport += `\nParam 2: Expected string. "undefined" found.`;
    } else if (title && title.length > maxLengthTitle) {
        command = "";
        errorReport += `\nParam 2: Maximal length of title is ${maxLengthTitle} chars.`;
    }
}
if (!body || typeof body !== 'string') {
    body = '-';
} else if (body.length > maxLengthBody) {
    command = "";
    errorReport += `\nParam 3: Length of given second parameter is ${body.length} Maximal valid length is ${maxLengthBody} chars.`;
}

// -------------------------------------------
console.log('\n ----- NOTEBOOK ----- ');

// ---- Reacting to the user input -----------
if (command === 'add') {
    var note = notes.addNote(title, body);
    if (note) {
        console.log(`Note '${title}' has been added.`)
    } else {
        console.log(`Adding note has failed.'`)
    }
} else if (command === 'list') {
    var allNotes = notes.getAll();
    allNotes.sort((a, b) => {
        return a - b;
    });
    for (let i = 0; i < allNotes.length; i++) {
        console.log('\n' + notes.createFormattedDate(allNotes[i]['timestamp']) +
            '\n' + allNotes[i]['title'] +
            '\n' + allNotes[i]['body']);
    }
} else if (command === 'read') {
    var title = notes.readNote(title);
    console.log(`Title is : ${title} !`);
} else if (command === 'remove') {
    notes.removeNote(title);
    console.log(`Note '${title}' has been removed.`)
} else {
    console.log(errorReport);
}

// ----------------------------------------
console.log('\n -------------------- \n');

notes.js (containing the module with the actual functions):

const fs = require('fs');

var fetchNotes = () => {
    try {
        var notesString = fs.readFileSync('notes-data.json');
        return JSON.parse(notesString);
    } catch (e) {
        return [];
    }
};

var saveNotes = (notes) => {
    fs.writeFileSync('notes-data.json', JSON.stringify(notes));
}

var addNote = (title, body) => {
    var notes = fetchNotes();
    var note = {
        title: title.trim(),
        timestamp: Date.now(),
        body: body.trim()
    };
    notes.push(note);
    saveNotes(notes);
    return note;
}

var readNote = (title) => {
    var notes = fetchNotes();
    var ret = notes.filter((note) => {
        return note.title === title;
    });
    return ret[0] ? ret[0].title + '\n' + ret[0].body : '';
}

var getAll = () => {
    return fetchNotes();
}

var getNote = (title) => {
    console.log('Get single node: ', title);
}

var removeNote = (title) => {
    var notes = fetchNotes();
    var newNotes = notes.filter((note) => note.title !== title);
    fs.writeFileSync('notes-data.json', JSON.stringify(notes));
}

var createFormattedDate = (timestamp) => {
    var date = new Date(timestamp);
    var ret = ('0' + date.getDate()).slice(-2) + '.' +
        ('0' + (date.getMonth() + 1)).slice(-2) + '.' +
        date.getFullYear() + ', ' +
        ('0' + date.getHours()).slice(-2) + ':' +
        ('0' + date.getMinutes()).slice(-2) + ':' +
        ('0' + date.getSeconds()).slice(-2);
    return ret;
}

module.exports = {
    addNote,
    getAll,
    getNote,
    removeNote,
    readNote,
    createFormattedDate
}

It all works, but notebook.js seems rather messy to me. How could it be better structured? What other improvements could be made? I guess there are tasks which could be accomplished less awkwardly than I have done, especially concerning the formatted-date function. Was it a good choice to use a date value for sorting the records? Or would another data type be more appropriate? Answer: One of the things a course should teach you is to take care when you decide what should be const, and what should not be const. Examples of what ought to be const are maxLengthTitle and maxLengthBody. Moving process.argv to args is sugar; it makes the lines 10 to 12 easier to read. However, in this case, I would have forgone the move and just used process.argv in lines 10 to 12. The code would be 1 line shorter, and not harder to read. You would get extra points if you could derive 'Expected "add" or "list" or "read" or "remove".' from the array you declared just prior to that with ['add', 'list', 'read', 'remove']. I would not clear command if there is an error; I would just make the first if after ---- Reacting to the user input ---- read if( errorReport ){ and go from there.
I would also future-proof that variable and call it output or feedback. I am wondering about typeof body !== 'string' -- what case are you covering there? Run your code through jshint.com; you have a few missing semicolons, some accessors that could use dot notation, and an unused library in notebook.js. Read up on model-view-controller (MVC); your controller code (especially for command 'list') is doing far too much. In essence your code should read

if (command == "list") {
    var allNotes = notes.getAll();
    console.log( formatNotes( allNotes ) );
}

and then drop the mic. Finally, this drives me nuts:

var getAll = () => {
    return fetchNotes();
}

Fat arrow syntax is meant for inline functions; please just use function getAll(){ .. }
{ "domain": "codereview.stackexchange", "id": 23362, "tags": "javascript, beginner, node.js" }
Energy balance of closed timelike curves in Gödel's universe
Question: I recently read Palle Yourgrau's book "A World Without Time" about Gödel's contribution to the nature of time in general relativity. Gödel published his discovery of closed timelike curves in 1949. Many years later (in 1961), S. Chandrasekhar and James P. Wright pointed out in "The geodesic in Gödel's universe" that these curves are not geodesics, and hence Gödel's philosophical conclusions might be questionable. Some years later again, the philosopher Howard Stein pointed out that Gödel never claimed that these curves are geodesics, which Gödel confirmed immediately. Much later, other physicists computed that these closed timelike curves must be so strongly accelerated that the energy required for a particle with finite rest mass to traverse such a curve is many times its rest mass. (I admit that I may have misunderstood/misinterpreted this last part.) Questions: This makes me wonder whether any particle (with finite rest mass) actually traveling on a closed timelike curve wouldn't violate the conservation of energy principle. (As pointed out in the comments, I made a hidden assumption here: I implicitly assumed that the particle traverses the closed timelike curve not only once or only a finite number of times, but "forever". I put "forever" in quotes, because the meaning of "forever" seems to depend on the notion of time.) I vaguely remember that light will always travel on a geodesic. Is this correct? Is this a special case of a principle that any particle in the absence of external forces (excluding gravity) will always travel on a geodesic? Is it possible for a particle to be susceptible to external forces and still have zero rest mass? Is it possible that Chandrasekhar and Wright were actually right in suggesting that Gödel's philosophical conclusions are questionable, and that they hit the nail on the head by focusing on the geodesics in Gödel's universe?
Answer: I have not really studied Godel's metric, so I will only address questions 2 and 3 in a general metric (without specifically referring to Godel's metric). Yes, light (in vacuum) will always travel on a null geodesic. Yes, particles remain on geodesics in absence of a net external force. Momentum means different things in the massive and mass-less case, since massive particles move on geodesics with timelike tangent vectors and mass-less particles move on null tangent vectors. 4-force is equal to the covariant derivative of 4-momentum along the tangent vector to its worldline. I will elaborate: Let us assume a world in which quantum mechanics is bogus and all particles have a 'kick' (momentum) associated with them. A particle of light has a definite momentum associated with it. So its 'kick' can be redirected and/or diminished. Particles with mass also have this 'kick' and can also have it redirected and/or diminished. The 'kick' is redirected when it makes contact with the force applier, i.e. the particles would be deviated from their geodesic motion. Now, in particles with non-zero mass this kick is directly proportional to the 4-velocity. So applying a force on the particle changes its 4-velocity and deviates it from timelike geodesic motion. However, for mass-less particles the 4-velocity does not exist (as proper time in their frame is 0). Applying force on the particle would also deviate it from null geodesic motion, but the tangent vector of its motion would remain a null vector, so their net speed would still remain c in your local frame throughout the application of force. Back to reality. In classical GR, we don't have any forces for these mass-less particles, but have forces (electromagnetic forces) for massive particles. So we treat mass-less particles purely as waves with energy and momentum (that can't be changed by applying classical force). 
Note: in classical GR the speed of an EM wave can be reduced in dielectric media (unlike in vacuum), but the fastest speed possible in the dielectric frame will still remain a null vector (the speed of light). In the above discussion, I treat gravitation as the structure of space-time and not as a force. In Quantum Field Theory, observation is discontinuous and particles change in number and type between 2 successive measurements. There is a symmetry in these changes which leaves net energy and momentum invariant. So force is irrelevant here and we treat photons as particles again. Questions 1 and 4: First of all there is no global conservation of energy in general relativity. There is only local conservation of energy. There are other methods used to get globally conserved quantities (like Killing vector fields). CTCs are looked upon as pathological entities. A whole lot of concepts in classical GR have to be revisited if we accept CTCs in the acceptable causal structure of realistic spacetimes. A lot of ideas we take for granted are thrown to the winds in such extreme spacetimes. Let me give you a very crude and rough analogy: -There is an astral chicken that lays an egg and dies, the egg hatches and the chick eats the egg and its parent, lays an egg, dies and so on..... Thus, the astral chicken's worldline is a CTC. -Let's say you (moving along a normal geodesic) are at the event P (hatching of egg) and stay with the chicken till event Q (dying of chicken); the chicken will vanish suddenly after Q. Can you imagine the chicken vanishing? -The egg also appeared suddenly in your past at P. Kind of like Marty in Back To the Future who appears and disappears suddenly. The egg-mass appears, turns into a chick and disappears; obviously from your viewpoint, energy is not conserved at all, not even locally. This is the best I can do without using math. Causal structure is a very elementary theory; you will be able to understand it. 
This would help you better understand CTCs, which are not elementary at all. I recommend Wald's book on GR. In addition, here is a pdf by Thorne on some implications of CTCs. It is a moderately advanced paper, but very interesting: http://www.its.caltech.edu/~kip/scripts/ClosedTimelikeCurves-II121.pdf
{ "domain": "physics.stackexchange", "id": 22834, "tags": "general-relativity, spacetime, time-travel" }
DAL Returns a collection of business objects in MVP
Question: In the Model View Presenter (MVP) pattern, it is said that our DAL should always return business models. If I want to get a list of business objects from my DAL, is it acceptable to do it as follows, or is it better to get a DataTable from the DAL and then convert it to a List in the application's BLL? public List<Employee> GetNewEmployees() { string selectStatement = "SELECT Employee.Emp_ID, Employee.Initials + ' ' + Employee.Surname AS Name,..."; using (SqlConnection sqlConnection = new SqlConnection(db.GetConnectionString)) { using (SqlCommand sqlCommand = new SqlCommand(selectStatement, sqlConnection)) { sqlConnection.Open(); using (SqlDataReader dataReader = sqlCommand.ExecuteReader()) { List<Employee> list = new List<Employee>(); while (dataReader.Read()) { list.Add ( new EpfEtfMaster { EmployeeID = (int) dataReader ["emp_id"], EmployeeName = (string) dataReader ["Name"], AppointmentDate = (DateTime) dataReader["appointment_date"], }); } return list; } } } } Answer: Your Question System.Data.DataTable lives under the System.Data namespace; one good way to start mixing concerns in your BLL is to reference System.Data, and then you don't even need a DAL any more and can have your BLL directly access the database. Or worse, have some of your data access code in the DAL, and then some more data access code in the BLL. Doesn't sound like a good idea? Then leave DataTable where it belongs: hidden/encapsulated as an implementation detail of your DAL. You're using ADO.NET now, but if you decide to reimplement your DAL with Entity Framework or Linq-to-SQL, that decision shouldn't require modifying your BLL in any way, because the BLL shouldn't know anything that has anything to do with data access. If the BLL needs employees, return employees, not some DataTable. 
Your Code public List<Employee> GetNewEmployees() You should return an IEnumerable<Employee>, not a List<Employee>; List<T> will be confusing for the BLL to receive, because it has Add and AddRange members which allow client code to add items to the list, making the returned value something else than just a list of "new employees" taken from the DAL. { string selectStatement = "SELECT Employee.Emp_ID, Employee.Initials + ' ' + Employee.Surname AS Name,..."; using (SqlConnection sqlConnection = new SqlConnection(db.GetConnectionString)) { using (SqlCommand sqlCommand = new SqlCommand(selectStatement, sqlConnection)) { sqlConnection.Open(); using (SqlDataReader dataReader = sqlCommand.ExecuteReader()) { A using block opens a scope, and in C# we denote scopes with braces { } - it is almost always a good idea to be explicit about scopes, both for readability and for maintainability. However in the case of using blocks, it's often more readable to reduce nesting and to stack these blocks where possible, using braces { } only where they are needed: using (SqlConnection sqlConnection = new SqlConnection(db.GetConnectionString)) using (SqlCommand sqlCommand = new SqlCommand(selectStatement, sqlConnection)) { sqlConnection.Open(); using (SqlDataReader dataReader = sqlCommand.ExecuteReader()) { The reason why this works, is because when braces are not specified, a scope spans only a single instruction (like, if (1 == 1) DoSomething(); and if (1 == 1) { DoSomething(); } are equivalent). However if that single instruction opens a new scope, that scope gets encompassed in the "parent" one, which allows you to "stack" these using blocks and reduce nesting. 
List<Employee> list = new List<Employee>(); while (dataReader.Read()) { list.Add ( new EpfEtfMaster { EmployeeID = (int) dataReader ["emp_id"], EmployeeName = (string) dataReader ["Name"], AppointmentDate = (DateTime) dataReader["appointment_date"], }); } return list; } } } } There are no changes to make in that part, to return an IEnumerable<Employee>, because List<Employee> implements the IEnumerable<Employee> interface, so your method can work with a List<T> and only return an IEnumerable<T> to its clients, keeping the ability to add/remove items all for itself :) It's just very unclear and surprising why you're adding a new EpfEtfMaster instead of an Employee. The name EpfEtfMaster should be renamed to something much more explicit, and again I don't see the reason why a List<Employee> would want to take EpfEtfMaster items. I would also rename list to either employees or result - list is just too vague.
{ "domain": "codereview.stackexchange", "id": 7838, "tags": "c#, object-oriented, .net, mvp, ado.net" }
What is Fjord Region?
Question: I'm studying molecular motors for a project of mine and I'm trying to understand what a Fjord region is. It's sort of labeled in the image below but I'm not entirely sure which part. I would like a suggestion of a book or article that may help me understand what it is? Answer: Fjords and bays Fjord is a term originally developed by chemists studying polycyclic aromatic hydrocarbons, along with the term bay. The descriptions aren't rigidly defined in the same way as stereochemical descriptors, but were designed to be 'self-explanatory', following the real-world uses of the words:1 Bay: "a broad inlet of the sea where the land curves inwards." Fjord: "a long, narrow, deep inlet of the sea between high cliffs, as in Norway, typically formed by submergence of a glaciated valley." In an attempt to unify the usage of these terms, IUPAC issued a set of guidelines for their usage in 2014.2 Some historical context The earliest publication of these terms was given in the references above by Harry Heaney, Keith Bartle, Denny Jones, and Peter Lees, who were working at the now non-existent Institute of Technology (Bradford, UK). They were studying triphenylene, and trying to find descriptions for the various environments present. Circumnavigating triphenylene, the British crew sighted two essentially different kinds of H's; and Derry Jones spontaneously called one type "bay" and the other "peninsular." They published these descriptive "self-explanatory" names,40 but not without some apprehension. In fact, Professor Jones confessed to having dreamed that referees rejected their manuscript on account of its "bay" and "peninsular" nomenclature. References: 1 Spectrochim. Acta 1966, 22, pp 941-951; Trans. Farad. Soc. 1967, 63, pp 2868-287 2 J. Polycyclic Aromatic Hydrocarbons, 2015, 35, pp 161-176
{ "domain": "chemistry.stackexchange", "id": 8059, "tags": "organic-chemistry, nomenclature" }
Probability of measuring Energy
Question: I have done the first part of the question, but parts (b) and (c) are giving me trouble. For the second part, my tutor wrote: $$ P(req.) = \frac{|\int \phi_E^*(x)*\psi(x,t) dx |^2}{|\int\: \psi^*(x,t)*\psi(x,t) dx|^2 } $$ For the third part: $ \frac{|{A_E}|^2}{\sum_E |{A_E}|^2 } $. How can we write the above expressions? I have a doubt about the equation above. It seems incorrect. I think for the second part, the numerator must be $E$ instead of $\phi$. Can anyone please explain? Answer: It would be easier if you assume $\psi(x)$ is normalized, then the total probability of anything is one, so that: $$ \big|\int\: \psi^*(x,t)\psi(x,t) dx\big |^2 = \sum_E |{A_E}|^2 = 1 $$ Then the probability of measuring one of the eigenenergies, $E$, is: $$ P(req.) = \big|\int \phi_E^*(x)\psi(x,t) dx \big|^2$$ So that's just a projection of $\psi$ on the basis state $\phi_E$. As an example, imagine some system with 3 eigenstates of energy $E_x$, $E_y$, $E_z$. All we know about the actual eigenfunctions is that they are orthonormal, so we can call them $\hat x$, $\hat y$, $\hat z$. The most general (normalized) state would then be: $$ \psi(x, t) = A_x e^{iE_xt}\hat x + A_y e^{iE_yt}\hat y + A_z e^{iE_zt}\hat z $$ which is something on the unit sphere with complex phase factors that all oscillate at different rates, but they're just phase factors, so you can think of it as a vector in 3-space: $$ \psi(x) = A_x \hat x + A_y\hat y + A_z \hat z $$ and worry about time dependence later. The question "what is the probability of measuring $E_x$" is $$P(E_x) = |A_x|^2$$ because you're selecting the amplitude along that direction and squaring it. For the 3rd question, you have to find an energy-weighted average, so for instance, the $\hat x$ state in this example contributes $|A_x|^2 E_x$. (Now here you expect an $E$ to appear, because the final result needs to have dimensions of energy). 
The point of the 3 dimensional example was to give something you can visualize (mathematically), IRL it's usually an infinite dimensional Hilbert space.
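The three-state example maps directly onto a few lines of Python (the amplitudes and eigenenergies below are made up for illustration; note the phase factors drop out of the probabilities):

```python
# Unnormalized amplitudes A_x, A_y, A_z of a toy 3-state system.
A = [1 + 1j, 2 + 0j, 1j]
E = [1.0, 2.0, 3.0]  # hypothetical eigenenergies E_x, E_y, E_z

norm = sum(abs(a) ** 2 for a in A)            # sum_E |A_E|^2
probs = [abs(a) ** 2 / norm for a in A]       # P(E) = |A_E|^2 / sum_E |A_E|^2
mean_energy = sum(p * e for p, e in zip(probs, E))  # energy-weighted average
```

Dividing by norm is exactly the role of the denominator in the tutor's formula: it makes the probabilities sum to one even when $\psi$ was not normalized to begin with.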
{ "domain": "physics.stackexchange", "id": 55744, "tags": "quantum-mechanics, homework-and-exercises, quantum-states" }
Getting a monitor packet
Question: The following are an example of methods supported by a proprietary device: Monitor ControlTemp PutPeakInfo GetPeakInfo I have a class that builds the packets for the above corresponding methods: GetMonitorPacket GetControlTempPacket GetPutPeakInfo Example: public byte[] GetMonitorPacket(int ddcNum) { byte[] monitorPacket = new byte[MONITOR_PACKET_BYTE_COUNT]; Buffer.BlockCopy(START_HEADER_BYTES, 0, monitorPacket, 0, START_HEADER_BYTES.Count()); monitorPacket[2] = (byte) ddcNum; monitorPacket[3] = 0x01; byte checksum = 0; foreach (var b in monitorPacket) { checksum = (byte) (((ushort) checksum + (ushort) b) & 0xff); } checksum = (byte) (checksum ^ 0x55); monitorPacket[4] = checksum; return monitorPacket; } All they do is return the bytes in proper format, which will then be sent to the proprietary device via TCP to call the methods. Now I'm screwed on my naming convention for the GetPeakInfo method. How do I name this? What about GetGetPeakInfoPacket (two gets for a method name)? Do I break the naming convention here as an exception case and drop one of the gets? Do I use some other naming convention? Actually, GetPutPeakInfo sounds a little bit weird too, but at least there's no duplication. Answer: How about using Generate as prefix? GenerateMonitorPacket GenerateControlTempPacket GeneratePutPeakInfoPacket GenerateGetPeakInfoPacket Alternatively, you can also create a separate class for each type of packet generator. You've only shown the code to create the packet for Monitor, so I don't know how complex the code for the other packets is, but separating these four methods into their own class might be logical and beneficial in the long run. If you create separate classes, you put the name of the packet (Monitor, ControlTemp, PutPeakInfo and GetPeakInfo) in the class name. public class MonitorPacketGenerator { public byte[] Generate(int ddcNum) { ... } }
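Whatever the methods end up being called, the packet layout itself is easy to sanity-check. A rough Python sketch of the same checksum scheme as GetMonitorPacket (the header bytes here are placeholders, not the device's real START_HEADER_BYTES):

```python
HEADER = bytes([0xAA, 0x55])  # placeholder start header, 2 bytes assumed

def monitor_packet(ddc_num):
    packet = bytearray(5)
    packet[0:2] = HEADER
    packet[2] = ddc_num & 0xFF
    packet[3] = 0x01
    # Summing all 5 bytes matches the C# loop: the checksum slot is still 0.
    checksum = sum(packet) & 0xFF
    packet[4] = checksum ^ 0x55
    return bytes(packet)
```

The checksum is just a mod-256 byte sum XORed with 0x55, which is handy for verifying captured packets independently of the C# code.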
{ "domain": "codereview.stackexchange", "id": 4128, "tags": "c#, tcp" }
What is the mechanism responsible for the 'delay' in delayed rectifier potassium channels?
Question: I've been trying to find a comprehensive explanation concerning the nature of the 'delay' in neurons' delayed rectifier potassium channels. As it's written in my intro to neuroscience textbook, these channels are voltage-gated and become activated when the typical neuronal membrane reaches an electrical potential of approx. -40mV. While the channel is activated at this point, it only opens up after a delayed period of approx. 1ms. I'm mostly trying to clarify that this isn't a confusion in wording, and that delayed rectifiers aren't actually activated when the inner membrane reaches a charge of approx. +40mV, which would be correlated (at least in textbook examples) with the '1ms delay' but caused by the typical charge actually observed at approx. 1ms of delay during an action potential. Thanks so much. Let me know if I can clarify something. I know this wasn't necessarily so well asked Answer: These channels are slow to respond. After depolarization they are activated, but only after a relatively long time. Their closing kinetics are also relatively slow. A definition is given here.
{ "domain": "biology.stackexchange", "id": 3037, "tags": "neuroscience, neuroanatomy, cell-membrane, action-potential, membrane-transport" }
Why NFA is called Non-deterministic?
Question: I have this [kind of funny] question in mind. Why is the non-deterministic finite automaton called non-deterministic while we define the transitions for inputs? Well, even though there are multiple and epsilon transitions, they are defined, which means that the machine is deterministic for those transitions. Which means it's deterministic. Answer: "Deterministic" means "if you put the system in the same situation twice, it is guaranteed to make the same choice both times". "Non-deterministic" means "not deterministic", or in other words, "if you put the system in the same situation twice, it might or might not make the same choice both times". A non-deterministic finite automaton (NFA) can have multiple transitions out of a state. This means there are multiple options for what it could do in that situation. It is not forced to always choose the same one; on one run, it might choose the first transition, and on another run it might choose a different one. Here you can think of "situation" as "what state the NFA is in, together with what symbol is being read next from the input". Even when both of those are the same, an NFA still might have multiple matching transitions that can be taken out of that state, and it can choose arbitrarily which one to take. In contrast, a DFA only has one matching transition that can be taken in that situation, so it has no choice -- it will always follow the same transition whenever it is in that situation.
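The difference can be seen directly in code. In this sketch (the automaton is made up for illustration), a DFA's transition table maps a (state, symbol) pair to exactly one state, while an NFA's maps it to a set of possible states; simulating the NFA means tracking every state it could be in:

```python
# NFA over {a, b} accepting strings that end in "ab".
# From q0 reading 'a' there are TWO options -- stay in q0 or move to q1 --
# which is exactly the non-determinism described above.
nfa_delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}
accepting = {"q2"}

def nfa_accepts(s):
    # Track the set of all states the NFA could currently be in.
    current = {"q0"}
    for ch in s:
        current = set().union(*(nfa_delta.get((q, ch), set()) for q in current))
    return bool(current & accepting)
```

Here nfa_accepts("aab") is True and nfa_accepts("aba") is False; a DFA table for the same language would have exactly one entry for every (state, symbol) pair, leaving the machine no choice.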
{ "domain": "cs.stackexchange", "id": 9744, "tags": "finite-automata, nondeterminism" }
Excess Pressure on a curved surface with two radius of curvature
Question: While studying surface tension, I noticed the following formula to calculate excess pressure on a curved liquid film made use of two radii of curvature: $$2T\left(\frac{1}{R_{1}} + \frac{1}{R_{2}}\right)$$ I have not been able to understand the significance of two radii of curvature for a surface. Answer: The best way to visualize the two different radii for any given surface is to cut the surface with a pair of perpendicular planes. Each plane cuts the surface in a planar curve, and each of those curves has its own radius of curvature. Try this out: apply the above argument to a torus. That will clear your doubt.
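As a quick sanity check of the formula, take a spherical film (a soap bubble), where every normal cross-section is the same circle:

```latex
R_1 = R_2 = R
\quad\Longrightarrow\quad
\Delta P = 2T\left(\frac{1}{R} + \frac{1}{R}\right) = \frac{4T}{R}
```

which is the familiar excess pressure inside a soap bubble; the prefactor 2 appears because a film has two liquid-air surfaces, each contributing $T(1/R_1 + 1/R_2)$.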
{ "domain": "physics.stackexchange", "id": 77394, "tags": "pressure, curvature, fluid-statics, surface-tension" }
Condition for looping the loop
Question: Consider a ball tied to a string that is imparted a velocity. We have studied that the condition for looping the loop is that the tension at the uppermost point must be zero, but why is this condition imposed? Please explain. If tension becomes zero at some point below the uppermost point, won't the ball complete the loop because it still has some velocity? Answer: The highest point is where the tension in the string is at its minimum, while at other points of the curve the body moves faster (and the tension is greater). So the top is the critical point: if the string remains taut there, it remains taut everywhere else, and the body continues in its circular path.
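The condition can be made quantitative (assuming an ideal, inextensible string and loop radius $R$). At the top, gravity and tension both point toward the centre, so Newton's second law along the radius reads:

```latex
T + mg = \frac{m v_{\mathrm{top}}^2}{R},
\qquad
T \ge 0 \;\Longrightarrow\; v_{\mathrm{top}} \ge \sqrt{gR},
```

and conservation of energy between the bottom and the top then gives the usual launch condition $v_{\mathrm{bottom}} \ge \sqrt{5gR}$.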
{ "domain": "physics.stackexchange", "id": 39203, "tags": "newtonian-mechanics, forces, energy-conservation, free-body-diagram, string" }
What are the state-of-art techniques in capturing carbon dioxide and/or converting it to oxygen and their challenges?
Question: It seems there are multiple chemical reactions (from this question and internet in general) where $CO_2$ can be converted to $O_2$. If this is the case, can someone please explain the challenges (manufacturing, setup costs, raw material scarcity etc) we are facing in reducing $CO_2$ in atmosphere? Also, what are the state-of-the-art techniques used to reduce $CO_2$? Answer: Carbon capture and storage (CCS) and direct air capture (DAC) are massively energy intensive processes. So a large barrier to deployment is finding the energy to run the capture plants. Since we emit so much CO2 from our power generation it's better to just turn off that power generation than use any of the resulting energy to then partly undo the process done in the power station. So in a world with any energy generation from fossil fuels, it basically makes no sense to use energy on capturing carbon, we should just turn off that energy generation. There are some tiny carbon capture plants operating in Iceland since they have an abundance of geothermal energy. After/if we transition to renewable energy it will still take energy/resources to build renewable energy infrastructure. Energy doesn't suddenly magically become free. So the massive energy bill of trying to capture carbon is still going to be a problem. As ever, the best solution is to stop emitting carbon dioxide. Carbon capture is pushed hard by oil companies for several reasons, and one of those is that it justifies them continuing to profit off destroying the atmosphere for decades more while they watch their delaying tactics play out.
{ "domain": "earthscience.stackexchange", "id": 2792, "tags": "climate-change, carbon-cycle" }
Computing the Levenshtein distance quickly
Question: Given a huge database of allowed words (alphabetically sorted) and a word, find the word from the database that is closest to the given word in terms of Levenshtein distance. The naive approach is, of course, to simply compute the Levenshtein distance between the given word and all the words in the dictionary (we can do a binary search in the database before actually computing the distances). I wonder if there is a more efficient solution to this problem. Maybe some heuristic that lets us reduce the number of words to search, or optimizations to the Levenshtein distance algorithm. Links to papers on the subject welcome. Answer: What you are asking about is the problem of near-neighbor search under the edit distance. You didn't mention whether you're interested in theoretical results or heuristics, so I'll answer the former. The edit distance is somewhat nasty to deal with for building near-neighbor search structures. The main problem is that as a metric, it behaves (sort of) like other well known bad metrics like $\ell_1$ for the purpose of dimensionality reduction and approximation. There's a rather vast body of work to read on this topic, and your best source is the set of papers by Alex Andoni: by following pointers backward (for example from his FOCS 2010 paper) you'll get a good set of sources.
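For reference, the naive baseline the question describes can be sketched as follows (a standard dynamic-programming edit distance plus a linear scan; the word list in the test is made up):

```python
def levenshtein(a, b):
    # Classic O(|a|*|b|) dynamic program, keeping only a rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def nearest(word, words):
    # The naive approach: compute the distance to every dictionary word.
    return min(words, key=lambda w: levenshtein(word, w))
```

This linear pass over the dictionary is exactly what the near-neighbor structures in the literature cited above are designed to avoid.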
{ "domain": "cstheory.stackexchange", "id": 4520, "tags": "ds.algorithms, reference-request, string-matching, string-search" }
King Robota: Does he speak for himself?
Question: I want to know if it's currently possible for a robot to speak by itself as King Robota does, or is it just someone speaking on his behalf? Youtube video Answer: Not only is a human voice behind this (likely through a vocoder), I believe you are looking at a mechanized costume and not an actual robot -- someone is inside. Using fake robots to attract crowds has worked since "Elektro" at the 1939 World's Fair (video). You can also buy your own King Robota suit if you want to take up this form of entertainment yourself. A more interesting question is how the costume's mouth reacts to the incoming speech, to emulate actual speaking.
{ "domain": "robotics.stackexchange", "id": 55, "tags": "control, software" }
How to accurately measure mass flow through server vents using an anemometer
Question: I have to work out the air mass flow coming out of each vent on a server and have proposed to use a hot-wire anemometer to measure the air velocity at each vent by placing it slightly inside one of the outlets of the vent (it is a grid pattern), then multiplying that by the vented area and air density to find mass flow. My concern is that I am instructing technicians to do it on site, and I am worried that the measurements may vary significantly depending on where on the vent they are taken from, i.e. if they take it from the corner or side of the vent it may be different from the centre. This is my first time trying to measure something like this and I was wondering how others in the industry do it. Is there a better way of doing this? Answer: The hot-wire anemometer is useful for accurate turbulence measurements. Before using it, one has to calibrate the hot-wire with a fine Pitot tube for the range of velocities of interest, in which case you might require local laboratory/workplace temperature compensation. So, unless you need turbulence intensity or spatial/temporal correlations, you can go for a well-designed Pitot tube. Also, the measurements have to be carried out over the entire outlet region with reasonable spatial resolution. Then you can calculate the mass flow rate using the formula below (assuming data taken in a 2D plane): $\dot m = \bar{\rho(T)}\int\int u(x,y) dx dy \approx \bar{\rho(T)} \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} u(i,j) \Delta x_{i,j} \Delta y_{i,j}$ where $\bar{\rho(T)} $ is the density of air compensated for temperature. Hope this helps you.
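A sketch of the discrete sum in Python (the velocity grid and cell sizes are invented numbers, purely to illustrate the bookkeeping the technicians would do):

```python
def mass_flow_rate(rho, u_grid, dx, dy):
    # Discrete form of  m_dot = rho * sum_i sum_j u(i,j) * dx * dy,
    # assuming a uniform measurement grid with cells of area dx * dy.
    return rho * sum(u * dx * dy for row in u_grid for u in row)

rho = 1.2              # kg/m^3, rough air density (temperature-compensated IRL)
u = [[2.0, 2.5, 2.0],
     [2.5, 3.0, 2.5],
     [2.0, 2.5, 2.0]]  # m/s, one reading per grid cell over the vent
m_dot = mass_flow_rate(rho, u, dx=0.01, dy=0.01)  # cells of 1 cm x 1 cm
```

The non-uniform velocity grid above is exactly why a single centre reading multiplied by the full vent area would overestimate the flow.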
{ "domain": "engineering.stackexchange", "id": 2037, "tags": "fluid-mechanics, airflow" }
Faster way to create linked list of length n in Python
Question: I'm attempting to create a linked list of length n using Python. I have the simple list implemented, a working concatenation function, and a working create_list function; however, I just want to know if there is a more efficient method of making the linked list than using my concatenate function (made for testing). Simple List Class: class Cell: def __init__( self, data, next = None ): self.data = data self.next = next Concatenate Function: def list_concat(A, B): current = A while current.next != None: current = current.next current.next = B return A List create (that takes forever!): def create_list(n): a = cell.Cell(0) for i in (range(1,n)): b = cell.Cell(i) new_list = cell.list_concat(a, b) return new_list This is an assignment, but efficiency is not part of it. I am just wondering if there is a more efficient way to create a large list using this implementation of the structure. Answer: The current algorithm's problem is its use of list_concat during list creation. The list_concat function walks to the end of the list for each node you want to add, so adding a node becomes O(n). This means creating a list of 10,000 items takes 1+2+3...+10,000 steps. That's O(n*n) performance. As hinted, you should add the next node in the creation loop. You then get code that looks like this (I left out the list_concat function as it's not used): class Cell(object): def __init__(self, data, next=None): self.data = data self.next = next def create_list(length=1): linked_list = Cell(0) head = linked_list for prefill_value in xrange(1, length): head.next = Cell(prefill_value) head = head.next return linked_list You could also create the list in reverse order, allowing you to immediately fill the next attribute of the Cell. 
Your create_list function then looks like this (the reversed function should not be used for really huge lengths (several million) because it's created in-memory): def create_list(length=1): tail = Cell(length - 1) for prefill_value in xrange(length - 2, -1, -1): tail = Cell(prefill_value, tail) return tail
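A Python 3 variant of the reverse-order construction (using range, since xrange is Python 2 only), plus a small walker to verify the result:

```python
class Cell(object):
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def create_list(length=1):
    # Build back-to-front: each node's `next` is known at construction time,
    # so every prepend is O(1) and the whole build is O(n).
    tail = None
    for value in range(length - 1, -1, -1):
        tail = Cell(value, tail)
    return tail

def to_pylist(head):
    # Walk the chain once and collect the stored values.
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out
```

to_pylist(create_list(5)) gives [0, 1, 2, 3, 4], matching the original create_list's output without the O(n*n) concatenation.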
{ "domain": "codereview.stackexchange", "id": 1291, "tags": "python, algorithm" }
Any idea on performing localization & mapping using a Velodyne VLP-16 LIDAR?
Question: I am new to ROS. I want to do SLAM for a humanoid robot with Stanley Innovations's RMP-220 V3 base platform and a Velodyne VLP-16 LIDAR. Do I have to use an IMU to perform this? I am using ROS Kinetic. I have found that Velodyne driver support has only been released up to ROS Indigo. Will using Kinetic be a problem? Originally posted by Aswin Sarang on ROS Answers with karma: 3 on 2016-09-07 Post score: 0 Answer: It works on Jade and Kinetic, if built from source. Binary packages are still pending. Originally posted by joq with karma: 25443 on 2016-09-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25704, "tags": "ros-kinetic, velodyne" }
Bond breaking stretch length with QM
Question: In QM, and assuming you could repeat the same exact bond-breaking experiment an arbitrary number of times, do bonds always break at the same stretch length, or does the uncertainty principle require some variability in the stretch length at which the atoms debond? Answer: A direct answer to this question is not simple. The main difficulty is related to the formal definition of what a chemical bond is. The most useful definitions of bond in the case of molecules and condensed matter either rely on the introduction of molecular orbitals, analyzing the role played by orbitals and their energy on the cohesive property of the system, or are based on a careful analysis of the topological properties of the electronic charge density (this is the modern "atoms in molecules" approach proposed by Bader and coworkers a couple of decades ago). To the best of my knowledge, all the attempts to put the concept of chemical bond on a more formal basis hinge on electronic density or electronic wavefunctions, which means that the concepts of bonding, bond formation, bond stretching or bond breaking are all based on quantities which already embody in their definition some average statistical behavior. Electron density is an average quantity. An orbital allows one to evaluate its associated probability density and therefore averages and uncertainties (standard deviations). Therefore, every statement about bonds is based on properties of the whole statistical ensemble underlying the statistical interpretation of quantum mechanics. On the basis of these considerations, my answer is that bonds always break at the same stretch length due to the existing definitions of bond.
{ "domain": "physics.stackexchange", "id": 54890, "tags": "quantum-mechanics, heisenberg-uncertainty-principle" }
What causes a rotating object to rotate forever without external force—inertia, or something else?
Question: Someone told me that it is not inertia, but I think it is inertia, because it will rotate forever. In my understanding, inertia is the constant motion of an object without external force. Am I wrong? Answer: Is it inertia that a rotating object will rotate forever without external force? Someone told me that this is not inertia [...] Well, sort of - it’s somewhat correct to say it is inertia, and somewhat correct to say it isn’t. One has to be precise with language! But there is some truth to what you were told. “Inertia” generally refers to the tendency of objects to continue moving in a straight line with a fixed velocity unless an external force is applied to them. It is basically a single word that encapsulates Newton’s first law of motion. It is a very fundamental law of nature, and at some level, no one really knows why it’s true. The different parts of the rotating object are definitely not moving in a straight line, and it’s not the case that no forces are acting on them. So there is more than just inertia at play. What is happening with a rotating rigid body is that each part of the body “wants” to maintain its fixed velocity according to the law of inertia, but the rigidity of the body is preventing it from doing so (since the pieces of the body have different velocity vectors so with fixed velocities they would all fly off in different directions). At the microscopic level, each piece of the body is applying forces to the adjacent pieces. Those forces are causing those adjacent pieces to change their velocity, according to Newton’s second law of motion. The end result of this highly complicated process is surprisingly simple: the body rotates. But the underlying cause is more than just inertia. Now, I said it’s also somewhat correct to say that it is inertia that’s making bodies keep rotating. 
This is because there is also a rotational analogue of inertia that in informal speech among physicists might still be referred to as “inertia” (although calling it rotational inertia is more appropriate, and it will also commonly be described under the terms “moment of inertia” or “conservation of angular momentum”, or even more fancy terms like “rotational symmetry of space + Noether’s theorem”, although each of these terms describes something a bit more complicated than just rotational inertia). This rotational inertia is the tendency of rotating rigid bodies to continue rotating at a fixed angular velocity in their center of mass frame, unless a torque is applied to them. Rotational inertia differs from ordinary “linear” inertia in that it is a derived principle: it can be derived mathematically from Newton’s laws of motion, so in that sense it has (in my opinion) a slightly less fundamental status among the laws of physics. Rigid bodies don’t “want” to keep rotating in the same fundamental sense that particles “want” to keep moving in a straight line with a fixed velocity - they do end up rotating but it’s because of a process we understand well and can analyze mathematically (starting from Newton’s laws), rather than some mysterious natural phenomenon we observe experimentally and accept as an axiom without being able to say much more about why it’s true.
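The claim that rotational inertia is a derived principle can be illustrated numerically. Below is a minimal Python sketch (my own illustration, not from the answer): two unit point masses joined by a very stiff spring stand in for a rigid rod, only the internal equal-and-opposite spring forces act (Newton's second and third laws), and the total angular momentum comes out conserved by the integration.

```python
def simulate_dumbbell(steps=20000, dt=1e-4, k=1e5):
    """Two unit masses joined by a stiff spring (rest length 2) standing in
    for a rigid rod; only internal spring forces act, via Newton's 2nd law."""
    # positions and velocities: mass 1 at (1, 0), mass 2 at (-1, 0),
    # given opposite tangential velocities so the pair rotates about the origin
    x1, y1, vx1, vy1 = 1.0, 0.0, 0.0, 1.0
    x2, y2, vx2, vy2 = -1.0, 0.0, 0.0, -1.0

    def ang_mom():
        # z-component of L = sum over masses of m*(x*vy - y*vx), with m = 1
        return (x1 * vy1 - y1 * vx1) + (x2 * vy2 - y2 * vx2)

    L0 = ang_mom()
    for _ in range(steps):
        dx, dy = x1 - x2, y1 - y2
        dist = (dx * dx + dy * dy) ** 0.5
        # restoring spring force on mass 1 (equal and opposite on mass 2)
        fx = -k * (dist - 2.0) * dx / dist
        fy = -k * (dist - 2.0) * dy / dist
        vx1 += fx * dt; vy1 += fy * dt
        vx2 -= fx * dt; vy2 -= fy * dt
        x1 += vx1 * dt; y1 += vy1 * dt
        x2 += vx2 * dt; y2 += vy2 * dt
    return L0, ang_mom()

L0, L1 = simulate_dumbbell()
print(L0, L1)  # the two values agree: internal central forces exert no net torque
```

Because the internal force acts along the line joining the masses, it exerts zero net torque, so the angular momentum of the "rigid" pair is conserved without being postulated separately.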
{ "domain": "physics.stackexchange", "id": 60351, "tags": "angular-momentum, rotational-dynamics, conservation-laws, torque, inertia" }
Why are bacteria and archaea in different domains?
Question: As I understand it, the main difference between the Bacteria and the Eukaryota domains is that eukaryotes have a nucleus and bacteria don't. I understand that bacteria and archaebacteria have enough of a genetic difference to be separate kingdoms, but why are they separate domains? Answer: The reason that Archaea were determined to be a separate (and only the third) kingdom so late (1977 according to this reference) was that archaea often completely resemble eubacteria. They are unicellular and have no organelles, so they were appropriately grouped with other prokaryotes because of their morphology and cellular physiology. But by the 1970s phylogeny was becoming based on the sequences of DNA and RNA, which were just then becoming more available. Based on a simple comparison of the 18S rRNA sequences of 13 fungi, bacteria and archaea, Woese and Fox showed that the archaea were about as different from the eukaryotes as from the bacteria (see table 1). Since then, the differences between bacteria and archaea have continued to hold up. The Wikipedia diagram below is not to scale - the divergence of the three kingdoms is so far back that it pre-dates the emergence of eukaryotes entirely. But you can see that fungi and other eukaryotes are more similar to archaea than to the bacteria. BTW, the idea that archaea are extremophiles is not entirely true - they are pretty much exclusively the only kingdom found in those conditions, but archaea have been found to coexist with bacteria and eukaryotes in the open sea, and their having gone undetected heretofore in most environments may be due only to the fact that archaea are difficult to grow in the lab. Just found this retrospective from Woese himself, who passed away recently.
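The kind of comparison Woese and Fox made can be illustrated with a toy calculation. The Python sketch below uses invented sequences (NOT real rRNA data) and the simplest possible distance, the p-distance (fraction of mismatched positions between aligned sequences), just to show the shape of the method:

```python
def p_distance(a, b):
    """Fraction of mismatched positions between two aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Invented toy "rRNA" fragments -- purely illustrative, not real data:
seqs = {
    "bacterium": "GAUCCUGGCUCAGAUUGAACGCUG",
    "archaeon":  "GAUCCUGCCUCAGGAUGAACGCUA",
    "eukaryote": "GAUCCAGCCGCAGGUUGAACGCUA",
}
for a in seqs:
    for b in seqs:
        if a < b:  # print each unordered pair once
            print(a, b, round(p_distance(seqs[a], seqs[b]), 3))
```

Real analyses use far longer sequences, proper alignments, and substitution models rather than raw mismatch counts, but a matrix of pairwise distances like this is the raw material from which such trees are built.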
{ "domain": "biology.stackexchange", "id": 757, "tags": "bacteriology, taxonomy" }
C++17 implementation
Question: C++20 added the span library under the header <span>. As a voluntary exercise (not homework!), I spent approximately 1.5 hours implementing it under C++17. Please tell me if I used stuff that is not available under C++17. Most stuff is in the namespace ::my_std, but tuple_size, etc. are in the namespace ::std, otherwise they are useless. The ::my_std::span_detail namespace contains implementation details that are not to be exposed. I used the C++ standard draft, HTML version as a reference. Here is my code, within 400 lines: // a C++17 implementation of <span> #ifndef INC_SPAN_HPP_c2GLAK6Onz #define INC_SPAN_HPP_c2GLAK6Onz #include <array> // for std::array, etc. #include <cassert> // for assert #include <cstddef> // for std::size_t, etc. #include <iterator> // for std::reverse_iterator, etc. #include <type_traits> // for std::enable_if, etc. #define CONSTRAINT(...) \ std::enable_if_t<(__VA_ARGS__), int> = 0 #define EXPECTS(...) \ assert((__VA_ARGS__)) namespace my_std { // constants // equivalent to std::numeric_limits<std::size_t>::max() inline constexpr std::size_t dynamic_extent = -1; // class template span template <class T, std::size_t N = dynamic_extent> class span; namespace span_detail { // detect specializations of span template <class T> struct is_span :std::false_type {}; template <class T, std::size_t N> struct is_span<span<T, N>> :std::true_type {}; template <class T> inline constexpr bool is_span_v = is_span<T>::value; // detect specializations of std::array template <class T> struct is_array :std::false_type {}; template <class T, std::size_t N> struct is_array<std::array<T, N>> :std::true_type {}; template <class T> inline constexpr bool is_array_v = is_array<T>::value; // ADL-aware data() and size() using std::data; using std::size; template <class C> constexpr decltype(auto) my_data(C& c) { return data(c); } template <class C> constexpr decltype(auto) my_size(C& c) { return size(c); } // detect container template <class C, class = void>
struct is_cont :std::false_type {}; template <class C> struct is_cont<C, std::void_t< std::enable_if_t<!is_span_v<C>>, std::enable_if_t<!is_array_v<C>>, std::enable_if_t<!std::is_array_v<C>>, decltype(data(std::declval<C>())), decltype(size(std::declval<C>())) >> :std::true_type {}; template <class C> inline constexpr bool is_cont_v = is_cont<C>::value; } template <class T, std::size_t N> class span { public: // constants and types using element_type = T; using value_type = std::remove_cv_t<T>; using index_type = std::size_t; using difference_type = std::ptrdiff_t; using pointer = T*; using const_pointer = const T*; using reference = T&; using const_reference = const T&; using iterator = T*; using const_iterator = const T*; using reverse_iterator = std::reverse_iterator<iterator>; using const_reverse_iterator = std::reverse_iterator<const_iterator>; static constexpr index_type extent = N; // constructors, copy, and assignment // LWG 3198 applied constexpr span() noexcept : size_{0}, data_{nullptr} { static_assert(N == dynamic_extent || N == 0); } constexpr span(T* ptr, index_type n) : size_{n}, data_{ptr} { EXPECTS(N == dynamic_extent || N == n); } constexpr span(T* first, T* last) : size_{last - first}, data_{first} { EXPECTS(N == dynamic_extent || last - first == N); } template <std::size_t M, CONSTRAINT(N == dynamic_extent || N == M && std::is_convertible_v<std::remove_pointer_t<decltype(span_detail::my_data(std::declval<T(&)[M]>()))>(*)[], T(*)[]>)> constexpr span(T (&arr)[M]) noexcept : size_{M}, data_{arr} { } template <std::size_t M, CONSTRAINT(N == dynamic_extent || N == M && std::is_convertible_v<std::remove_pointer_t<decltype(span_detail::my_data(std::declval<T(&)[M]>()))>(*)[], T(*)[]>)> constexpr span(std::array<value_type, M>& arr) noexcept : size_{M}, data_{arr.data()} { } template <std::size_t M, CONSTRAINT(N == dynamic_extent || N == M && std::is_convertible_v<std::remove_pointer_t<decltype(span_detail::my_data(std::declval<T(&)[M]>()))>(*)[],
T(*)[]>)> constexpr span(const std::array<value_type, M>& arr) noexcept : size_{M}, data_{arr.data()} { } template <class Cont, CONSTRAINT(N == dynamic_extent && span_detail::is_cont_v<Cont> && std::is_convertible_v<std::remove_pointer_t<decltype(span_detail::my_data(std::declval<Cont>()))>(*)[], T(*)[]>)> constexpr span(Cont& c) : size_{span_detail::my_size(c)}, data_{span_detail::my_data(c)} { } template <class Cont, CONSTRAINT(N == dynamic_extent && span_detail::is_cont_v<Cont> && std::is_convertible_v<std::remove_pointer_t<decltype(span_detail::my_data(std::declval<Cont>()))>(*)[], T(*)[]>)> constexpr span(const Cont& c) : size_{span_detail::my_size(c)}, data_{span_detail::my_data(c)} { } constexpr span(const span& other) noexcept = default; template <class U, std::size_t M, CONSTRAINT(N == dynamic_extent || N == M && std::is_convertible_v<U(*)[], T(*)[]>)> constexpr span(const span<U, M>& s) noexcept : size_{s.size()}, data_{s.data()} { } ~span() noexcept = default; constexpr span& operator=(const span& other) noexcept = default; // subviews template <std::size_t Cnt> constexpr span<T, Cnt> first() const { EXPECTS(Cnt <= size()); return {data(), Cnt}; } template <std::size_t Cnt> constexpr span<T, Cnt> last() const { EXPECTS(Cnt <= size()); return {data() + (size() - Cnt), Cnt}; } template <std::size_t Off, std::size_t Cnt = dynamic_extent> constexpr auto subspan() const { EXPECTS(Off <= size() && (Cnt == dynamic_extent || Off + Cnt <= size())); if constexpr (Cnt != dynamic_extent) return span<T, Cnt>{data() + Off, Cnt}; else if constexpr (N != dynamic_extent) return span<T, N - Off>{data() + Off, size() - Off}; else return span<T, dynamic_extent>{data() + Off, size() - Off}; } constexpr span<T, dynamic_extent> first(index_type cnt) const { EXPECTS(cnt <= size()); return {data(), cnt}; } constexpr span<T, dynamic_extent> last(index_type cnt) const { EXPECTS(cnt <= size()); return {data() + (size() - cnt), cnt}; } constexpr span<T, dynamic_extent> 
subspan(index_type off, index_type cnt = dynamic_extent) const { EXPECTS(off <= size() && (cnt == dynamic_extent || off + cnt <= size())); return {data() + off, cnt == dynamic_extent ? size() - off : cnt}; } // observers constexpr index_type size() const noexcept { return size_; } constexpr index_type size_bytes() const noexcept { return size() * sizeof(T); } [[nodiscard]] constexpr bool empty() const noexcept { return size() == 0; } // element access constexpr reference operator[](index_type idx) const { EXPECTS(idx < size()); return *(data() + idx); } constexpr reference front() const { EXPECTS(!empty()); return *data(); } constexpr reference back() const { EXPECTS(!empty()); return *(data() + (size() - 1)); } constexpr pointer data() const noexcept { return data_; } // iterator support constexpr iterator begin() const noexcept { return data(); } constexpr iterator end() const noexcept { return data() + size(); } constexpr const_iterator cbegin() const noexcept { return data(); } constexpr const_iterator cend() const noexcept { return data() + size(); } constexpr reverse_iterator rbegin() const noexcept { return reverse_iterator{end()}; } constexpr reverse_iterator rend() const noexcept { return reverse_iterator{begin()}; } constexpr const_reverse_iterator crbegin() const noexcept { return reverse_iterator{cend()}; } constexpr const_reverse_iterator crend() const noexcept { return reverse_iterator{cbegin()}; } friend constexpr iterator begin(span s) noexcept { return s.begin(); } friend constexpr iterator end(span s) noexcept { return s.end(); } private: pointer data_; index_type size_; }; // deduction guide template <class T, std::size_t N> span(T (&)[N]) -> span<T, N>; template <class T, std::size_t N> span(std::array<T, N>&) -> span<T, N>; template <class T, std::size_t N> span(const std::array<T, N>&) -> span<const T, N>; template <class Cont> span(Cont&) -> span<typename Cont::value_type>; template <class Cont> span(const Cont&) -> span<const typename 
Cont::value_type>; // views of objects representation template <class T, std::size_t N> auto as_bytes(span<T, N> s) noexcept -> span<const std::byte, N == dynamic_extent ? dynamic_extent : sizeof(T) * N> { return {reinterpret_cast<const std::byte*>(s.data()), s.size_bytes()}; } template <class T, std::size_t N, CONSTRAINT(!std::is_const_v<T>)> auto as_writable_bytes(span<T, N> s) noexcept -> span<std::byte, N == dynamic_extent ? dynamic_extent : sizeof(T) * N> { return {reinterpret_cast<std::byte*>(s.data()), s.size_bytes()}; } } namespace std { // tuple interface // the primary template declarations are included in <array> template <class T, std::size_t N> struct tuple_size<my_std::span<T, N>> : std::integral_constant<std::size_t, N> {}; // not defined template <class T> struct tuple_size<my_std::span<T, my_std::dynamic_extent>>; template <std::size_t I, class T, std::size_t N> struct tuple_element<I, my_std::span<T, N>> { static_assert(N != my_std::dynamic_extent && I < N); using type = T; }; template <std::size_t I, class T, std::size_t N> constexpr T& get(my_std::span<T, N> s) noexcept { static_assert(N != my_std::dynamic_extent && I < N); return s[I]; } } #undef CONSTRAINT #undef EXPECTS #endif Constructive criticism is highly appreciated! Answer: Well, it looks quite nice. But now let's try to find all the corners which can still be improved: I'm not quite sure why you list a few members of each include you added. But, at least it simplifies checking for extraneous includes, and they are sorted. Simply restating the code in vaguer words, or otherwise restating the obvious, is an abuse of comments. They just detract from anything relevant. Well, at least some of them could be justified as breaking long blocks of declarations and definitions into easier digestible logical chunks. Conversion to and arithmetic using unsigned types being done using modulo-arithmetic should not be a surprise to any reviewer and/or maintainer. 
If it is, they lack basic knowledge, and your sources should not be a basic language-primer. You had extraneous whitespace at the end of some lines. An automatic formatter, or format-checker, either of which can be put in a commit-hook, would have fixed or at least found them. You use your macro CONSTRAINT seven times, leading to a total saving of \$((40-13)-(15-3)) \times 7 - 71 - 19 = 15 \times 7 - 90 = 15\$ bytes. That's quite a paltry compensation for adding this cognitive burden on any maintainer, and breaking any user-code defining CONSTRAINT before the include. At least the damage is limited due to you undefining it too. Your use of the macro EXPECTS does not even have that silver lining, as it expanded your code by \$((12-3)-(21-13)) \times 11 + 49 + 16 = 1 \times 11 + 65 = 76\$ bytes. And it leaves me even more puzzled, as you should have just used assert() directly; that's exactly what it's for. You use an extra template-parameter for SFINAE of a function once. While ctors have to do SFINAE there or not at all, functions can avoid any potential extra-cost by using the return-type for that. If you were actually writing part of the standard library, or had C++20 with its customisation-points, my_size() and my_data() would be pointless. Even though you don't actually need it, I would suggest enabling their use for SFINAE. You aren't currently optimizing the case of non-dynamic extent. Not too surprising, as you go down to the metal everywhere. Just always delegate to span::span(T*, std::size_t) (at least ultimately), and see everything get magically easier. Yes, when you conditionally remove the backing-field for size, you need to adapt .size().
Unifying the ctors span::span(Container&) and span::span(Container const&) is simplicity itself: template <class Container, class = std::enable_if_t< !std::is_rvalue_reference_v<Container&&> && previous_constraint>> span(Container&& c) Building on the above, you only have one point left interested in whether you have a non-std::array, non-span, container. Thus, you can simplify all the machinery to detect that, unify detection of std::array and span, and inline it all: template <class T, template <auto> class TT> struct is_template_instantiation : std::false_type {}; template <template <auto> class TT, auto N> struct is_template_instantiation<TT<N>, TT> : std::true_type {}; template <class Container, class = std::enable_if_t< !std::is_rvalue_reference_v<Container&&> && !span_detail::is_template_instantiation<std::decay_t<Container>, span>() && !span_detail::is_template_instantiation<std::decay_t<Container>, std::array>() && !std::is_array<std::decay_t<Container>>(), decltype( span_detail::my_size(std::declval<Container&>()), void(), span_detail::my_data(std::declval<Container&>()), void())>> span(Container&& c) : span(span_detail::my_data(c), span_detail::my_size(c)) {} I suggest adding computed noexcept even where not mandated. Doing so even allows you to unify more members.
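As a compilable illustration of the detection idiom suggested above: the review's version uses `template <auto> class`, which leans on C++17's relaxed template-template-argument matching; the sketch below instead pins the parameter list to the `<class, std::size_t>` shape that both `span` and `std::array` share, which works on any C++17 compiler. The names (`is_instantiation_of`, `toy_span`) are my own, not from the reviewed code.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <type_traits>

// Primary template: T is not an instantiation of TT.
template <class T, template <class, std::size_t> class TT>
struct is_instantiation_of : std::false_type {};

// Partial specialization: matches exactly TT<U, N>.
template <template <class, std::size_t> class TT, class U, std::size_t N>
struct is_instantiation_of<TT<U, N>, TT> : std::true_type {};

// A stand-in for my_std::span, just for the demonstration:
template <class T, std::size_t N>
struct toy_span {};

static_assert(is_instantiation_of<std::array<int, 3>, std::array>::value);
static_assert(is_instantiation_of<toy_span<int, 4>, toy_span>::value);
static_assert(!is_instantiation_of<std::array<int, 3>, toy_span>::value);
```

Pinning the shape trades generality for portability: one such trait per template shape, but no reliance on P0522 matching rules that some compilers gate behind flags.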
{ "domain": "codereview.stackexchange", "id": 34285, "tags": "c++, reinventing-the-wheel, template-meta-programming, c++17, c++20" }
What would be an ideal fidelity measure to determine the closeness between two non-unitary matrices?
Question: The Hilbert-Schmidt norm $\mathrm{tr}(A^{\dagger}B)$ works well for unitaries. It has a value of one when the matrices are equal and less than one otherwise. But this norm is absolutely unsuitable for non-unitary matrices. I thought maybe $\frac{\mathrm{tr}(A^{\dagger}B)}{\sqrt{\mathrm{tr}(A^{\dagger}A)} \sqrt{\mathrm{tr}(B^{\dagger}B)}}$ would be a good idea? Answer: When you ask about an 'ideal' fidelity measure, it assumes that there is one measure which inherently is the most meaningful or truest measure. But this isn't really the case. For unitary operators, our analysis of the error used in approximating one unitary by another involves the distance induced by the operator norm: $$ \bigl\lVert U - V \bigr\rVert_\infty := \max_{\substack{\lvert \psi\rangle \ne \mathbf 0}} \frac{\bigl \lVert (U - V) \lvert \psi \rangle \bigr\rVert_2}{\bigl \lVert \lvert \psi \rangle \bigr\rVert_2} $$ That is, it is the greatest factor by which the Euclidean norm (or 2-norm) of a vector will be increased by the action of $(U - V)$: if the two operators are very nearly equal, this factor will be very small. I know you asked for norms on non-unitary matrices, but if a norm is useful for non-unitary matrices, you might hope that it would also be useful for unitary matrices, and the point here is that the 'operator norm' is. It is also useful for (non-unitary) observables: for two Hermitian operators $E$ and $F$ — representing evolution Hamiltonians, for instance, or measurement projectors — the operator norm $\lVert E - F \rVert$ conveys how similar $E$ and $F$ are in a way which directly relates to how easily you can operationally distinguish one from the other.
On the other hand, for density operators $\rho$ and $\sigma$, the best distance measure to describe how easily you can distinguish them is the trace norm: $$\bigl\lVert \rho - \sigma \bigr\rVert_{\mathrm{tr}} := \mathrm{tr} \Bigl( \sqrt{(\rho - \sigma) ^2} \Bigr)$$ which is the same as (in fact, it's just a fancy way of writing) the sum of the absolute values of the eigenvalues of $(\rho - \sigma) $: if the two operators are very nearly equal, this sum will be very small. So, which norm you want to use to describe distances on operators, depends on what those operators are and what you would like to say about them.
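To make the two norms concrete, here is a small self-contained Python sketch for the 2x2 case (the helper names are my own; in practice one would just use numpy's linear-algebra routines):

```python
def herm_eigs_2x2(m):
    """Eigenvalues of a 2x2 Hermitian matrix [[a, b], [conj(b), d]]."""
    (a, b), (_, d) = m
    t = (a + d).real                          # trace
    det = (a * d - b * b.conjugate()).real    # determinant (real for Hermitian)
    disc = max(t * t - 4.0 * det, 0.0) ** 0.5
    return ((t - disc) / 2.0, (t + disc) / 2.0)

def op_norm_diff(U, V):
    """Operator norm of U - V: largest singular value, i.e. the square root
    of the largest eigenvalue of (U-V)^dagger (U-V)."""
    D = [[U[i][j] - V[i][j] for j in range(2)] for i in range(2)]
    M = [[sum(D[k][i].conjugate() * D[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                   # M = D^dagger D, Hermitian and PSD
    return max(herm_eigs_2x2(M)) ** 0.5

def trace_norm(M):
    """Trace norm of a 2x2 Hermitian matrix: sum of |eigenvalue|."""
    return sum(abs(e) for e in herm_eigs_2x2(M))

I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]
Z = [[1 + 0j, 0j], [0j, -1 + 0j]]             # Pauli Z
print(op_norm_diff(I2, Z))                    # 2.0: I and Z act maximally differently
rho = [[1 + 0j, 0j], [0j, 0j]]                # |0><0|
sigma = [[0j, 0j], [0j, 1 + 0j]]              # |1><1|
diff = [[rho[i][j] - sigma[i][j] for j in range(2)] for i in range(2)]
print(trace_norm(diff))                       # 2.0: perfectly distinguishable states
```

Both example values come out at the maximum of 2, matching the operational reading in the answer: the identity and Z disagree maximally on some input state, and orthogonal density operators can be told apart with certainty.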
{ "domain": "quantumcomputing.stackexchange", "id": 1344, "tags": "fidelity" }
Why doesn't dark matter collapse into a black hole?
Question: Dark matter is supposed to be affected only by gravity and no other force. At the same time, however, dark matter forms halos around galaxies, which become visible when examining how galaxies and clusters bend light. The question now is: what prevents these huge clouds of dark matter from collapsing? A cloud of "normal" matter has an inner pressure from its temperature. The force at work here is the electromagnetic force, because the atoms and molecules of "normal" matter repel each other (compare https://www.youtube.com/watch?v=bKldI-XGHIw Veritasium - Can We Really Touch Anything?). When it collapses as a cold cloud to form stars, the inner pressure is still at work; the cloud heats up until it forms a star hot enough to ignite fusion. When the star collapses because the pressure decreases as the core cools down once fusion slows at the end of its life, quantum mechanical effects (the Pauli principle) keep it from collapsing any further, until the mass is large enough to overcome the Pauli pressure and create a black hole. Now why doesn't the same happen to dark matter, if there's no force other than gravity affecting dark matter particles (whatever they might be)? Inner pressure can't arise because electromagnetic forces don't act here. Pauli-principle support can't arise because the matter is not dense enough for that, and it would again need other forces to be exchanged between the particles. Does that mean that these DM particles have to be bosons, since they can go through each other and thus never lose any energy to each other to collapse further? However, gravity could produce some kind of inner friction too, when one particle loses gravitational energy to another one passing by. PS: I am not a physicist, just interested in this stuff, thus I can't do the calculations myself. Answer: Dark matter without strong self-interaction is indeed pressure-less, so it cannot form a static ball.
However, since it does not interact with electromagnetism or give off any other kind of radiation, it also cannot lose energy. When normal matter clouds collapse to form stars, it happens because the involved atoms and molecules can radiate away their energy and end up in a more tightly bound form. This is not available to the dark matter halo, where individual particles (whatever they are) will orbit in the joint gravitational field. If enough dark matter were accidentally concentrated somewhere it would implode into a black hole, but since black holes are tiny on an astronomical scale this would just remove the super-dense part and leave the rest orbiting as normal (with a tiny fraction falling in over time). Technically, dark matter forms virialized halos. The virial theorem in mechanics states that when a lot of gravitationally bound particles interact with each other, their average kinetic energy tends to equal half the magnitude of their average potential energy. That is, they form a density distribution with high density and speed in the core, surrounded by a loose and low-density cloud of more slowly moving particles.
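The virial relation can be checked exactly in the simplest case: a test mass on a circular orbit, where 2<T> = -<U> holds identically. A few lines of Python (with made-up illustrative numbers) show this:

```python
import math

G = 6.674e-11      # gravitational constant, SI units
M = 1.0e30         # central mass in kg (made up for the example)
m = 1.0            # orbiting "particle" mass in kg

def kinetic_and_potential(r):
    # Circular orbit: gravity supplies the centripetal force,
    # G*M*m/r^2 = m*v^2/r, so v^2 = G*M/r.
    v_squared = G * M / r
    kinetic = 0.5 * m * v_squared          # <T>
    potential = -G * M * m / r             # <U>
    return kinetic, potential

T, U = kinetic_and_potential(1.5e11)       # r roughly one astronomical unit
print(2 * T, -U)   # the two agree: 2<T> = -<U>, the virial relation
```

For a whole halo the relation holds only on average over the particle distribution, which is exactly why a virialized halo settles into the dense-fast core plus loose-slow envelope described above instead of collapsing.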
{ "domain": "physics.stackexchange", "id": 69619, "tags": "newtonian-gravity, angular-momentum, dark-matter, gravitational-collapse" }
Why isn't the SeqIO.parse method working?
Question: I am trying to follow the tutorial from the Biopython website here and I am right at the beginning (2.4.1), where I am trying a Simple FASTA parsing example which is from Bio import SeqIO for seq_record in SeqIO.parse("ls_orchid.fasta", "fasta"): print(seq_record.id) print(repr(seq_record.seq)) print(len(seq_record)) For an unknown reason, I get the following error: --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-19-c64128d7dcfd> in <module> 1 from Bio import SeqIO ----> 2 for seq_record in SeqIO.parse("ls_orchid.fasta", "fasta"): 3 print(seq_record.id) 4 print(repr(seq_record.seq)) 5 print(len(seq_record)) ~\Anaconda3\lib\site-packages\Bio\SeqIO\__init__.py in parse(handle, format, alphabet) 605 iterator_generator = _FormatToIterator.get(format) 606 if iterator_generator: --> 607 return iterator_generator(handle) 608 if format in AlignIO._FormatToIterator: 609 # Use Bio.AlignIO to read in the alignments ~\Anaconda3\lib\site-packages\Bio\SeqIO\FastaIO.py in __init__(self, source, alphabet, title2ids) 181 raise ValueError("The alphabet argument is no longer supported") 182 self.title2ids = title2ids --> 183 super().__init__(source, mode="t", fmt="Fasta") 184 185 def parse(self, handle): ~\Anaconda3\lib\site-packages\Bio\SeqIO\Interfaces.py in __init__(self, source, alphabet, mode, fmt) 45 raise ValueError("The alphabet argument is no longer supported") 46 try: ---> 47 self.stream = open(source, "r" + mode) 48 self.should_close_stream = True 49 except TypeError: # not a path, assume we received a stream FileNotFoundError: [Errno 2] No such file or directory: 'ls_orchid.fasta' I am using a Jupyter Notebook. Answer: Personally I think it's late and we're all tired. "FileNotFoundError: [Errno 2] No such file or directory: 'ls_orchid.fasta'" The traceback is pointing to line 2, the SeqIO.parse call, as you point out.
The script should be in the same directory as the fasta file. You know how to identify the current directory in bash (pwd or echo $PWD), although you must be using Windows (the Anaconda paths have \ rather than /). Alternatively, the file name has a typo. You could also give the full path instead, for seq_record in SeqIO.parse("~\path\ls_orchid.fasta", "fasta"): # where path is the dir(s) leading to ls_orchid.fasta, but obviously use / if it's Linux. Alternatively, you can dump the .ipynb file in the location where the fasta file is. As a personal choice I prefer a formal IDE such as 'Visual Studio Code' or "PyCharm". In my personal opinion, and I am aware some will disagree, Jupyter notebook is good for running code on a remote commercial server for data science work. However, the traceback - which you clearly have activated - looks good. I can see that your Anaconda3 installation looks good, you must be using Windows (which I've never worked with), and you're running Jupyter from Anaconda3 ... so cool.
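A sketch of the fix: resolve the filename against an explicit directory before handing it to the parser, so a failure tells you exactly where Python looked. The helper below (resolve_input) is my own invention, not part of Biopython, and the SeqIO call is commented out so the sketch runs even without Bio installed:

```python
from pathlib import Path

def resolve_input(name, search_dirs=(".",)):
    """Return an absolute Path to `name`, or raise an error that also says
    where we looked. (Hypothetical helper, not a Biopython function.)"""
    for d in search_dirs:
        candidate = Path(d).expanduser().resolve() / name
        if candidate.is_file():
            return candidate
    searched = ", ".join(str(Path(d).expanduser().resolve()) for d in search_dirs)
    raise FileNotFoundError(f"{name!r} not found; searched: {searched}")

# Usage with Biopython would look like this:
# from Bio import SeqIO
# for seq_record in SeqIO.parse(resolve_input("ls_orchid.fasta"), "fasta"):
#     print(seq_record.id)

print(Path.cwd())  # the directory a bare filename is resolved against
```

In a notebook, printing Path.cwd() first is usually enough to spot the mismatch: the kernel's working directory is often the directory the notebook server was launched from, not where the .ipynb file lives.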
{ "domain": "bioinformatics.stackexchange", "id": 1626, "tags": "python, phylogenetics, phylogeny, seqio" }
Why does the potential drop across a resistor in an electric circuit?
Question: Why does the potential drop across a resistor in a circuit? I want to know what causes this drop at the atomic level. Answer: I limit my answer to materials other than semiconductors here. At the atomic level, the electrons that are carrying charge will flow through a solid if there are lots of mobile (unbound) electrons in that material. If the conductivity of that material is perfect, it will require no work for the electrons to traverse that material. However, in the real world and at room temperature, there are things that will get in the way of the electron flow, and for an electron to make its way through such a "real world" material then requires that some amount of work be expended in the process. Things that "get in the way" of electron flow through a material (say, a metal) are defects in the crystalline structure of the metal which, when struck by a moving electron, cause it to bounce off and seek another path past it. These defects are called "scattering centers" and the more of them you have in a chunk of material, the more work you must expend to push an electron through it. Where does this work come from? Well, if that chunk of material is connected to a battery so that electrical current from the battery is flowing through the material, then we know that every electron that enters the chunk of material has to exit it (current is conserved), so the material cannot "consume" electrons in order to do the work. But the voltage across the chunk of material is a measure of how hard the electrons are being pushed through it by the battery (higher voltage means harder push), and so it is the voltage that furnishes the effort which is diminished as the electrons traverse the material. Then the net amount of work required to push those electrons through the chunk is equal to (effort) x (flow) or (voltage) x (current).
This means that if we measure the voltage above the return line potential to the battery at a series of tiny slices through the material along its length, we will see that to traverse each tiny slice requires a little bit of work equal to (the voltage difference appearing across that little slice) times (the current flowing through it). In this way, the voltage across the chunk of material is progressively diminished as the electrons being pushed by that voltage make their way through it.
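The slice picture above is just Ohm's law applied per slice, which is easy to sketch in a few lines of Python (all values are illustrative):

```python
def slice_drops(current, slice_resistances):
    """Voltage drop across each thin slice: V_i = I * R_i (Ohm's law).
    The work per second dissipated in each slice is V_i * I, matching
    the (effort) x (flow) picture in the text."""
    return [current * r for r in slice_resistances]

current = 0.5                # amperes flowing through the chunk
slices = [0.2] * 5           # five thin slices of 0.2 ohm each (1 ohm total)
drops = slice_drops(current, slices)
print(drops)                 # each slice drops 0.1 V
print(sum(drops))            # totals I * R_total = 0.5 V (up to float rounding)
```

The drops add up slice by slice, which is exactly the progressive diminishing of the voltage along the material that the answer describes.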
{ "domain": "physics.stackexchange", "id": 53801, "tags": "electric-circuits" }
Black Level Calculation for Raw Bayer Image
Question: I have black raw Bayer images in RGGB color space. I want to go over each pixel in each channel and sum them up per channel, then divide each sum by the number of pixels in that channel. I'm trying to build a fast, optimized algorithm. Here is how I have started. The code runs but I still have some issues with the results. Please comment on optimizing my code and whether there is a better way to calculate the black level of an image. using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; namespace BlackLevelCorrectionParallel { class Program { static void Main(string[] args) { string directoryPath = @"C:\Examples\blc\"; uint width = 1800; uint height = 1200; uint bpp = 10; uint colorSpace = 0; //RGGB List<BlcResults> results = new List<BlcResults>(); Parallel.ForEach(Directory.GetFiles(directoryPath, "*.raw").Select(Path.GetFullPath), rawImagePath => { Image newImage = new Image(width, height, bpp, colorSpace); newImage.ReadImage(rawImagePath); BlcResults res = newImage.CalculateBlackLevel(); results.Add(res); }); } } } Here is my image class - it basically has two functions - ReadImage and the black level calculation.
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; namespace BlackLevelCorrectionParallel { public class Image { public uint Width { get; set; } public uint Height { get; set; } public uint BPP { get; set; } public uint ColorSpace { get; set; } public byte[] Data { get; set; } public Image(uint width, uint height, uint bPP, uint colorSpace) { BPP = bPP; ColorSpace = colorSpace; Height = height; Width = width; } public void ReadImage(string path) { byte[] fileData = null; if (!File.Exists(path)) { throw new FileNotFoundException(path); } fileData = File.ReadAllBytes(path); var bytesPerPixel = (BPP + 7) / 8; var dataSize = Width * Height * bytesPerPixel; Data = new byte[Width * Height * bytesPerPixel]; var sdata = new short[dataSize / 2]; for (int i = 0, shortIndex = 0; i < dataSize; i += 2, shortIndex++) { CopyBytesToShort(fileData[i], fileData[i + 1], out sdata[shortIndex], (int)BPP, false); } Buffer.BlockCopy(sdata, 0, Data, 0, Data.Length); } private void CopyBytesToShort(byte byte1, byte byte2, out short retShort, int bitsPerPixel, bool isPerformShift) { short lsb, msb; lsb = byte1; msb = byte2; if (isPerformShift) { lsb <<= 16 - bitsPerPixel; msb <<= (16 - (bitsPerPixel - 8)); } else { msb <<= 8; } retShort = (short)(msb | lsb); } public BlcResults CalculateBlackLevel() { double channelGR = 0; double channelR = 0; double channelGB = 0; double channelB = 0; if (ColorSpace == 0) //RGGB { for (int i = 0; i < Width; i++) { for (int j = 0; j < Height; j++) { if (i % 2 == 0 && j % 2 == 0) { channelR += Data[i * Height + j]; } else if (i % 2 == 0 && j % 2 == 1) { channelGR += Data[i * Height + j]; } else if (i % 2 == 1 && j % 2 == 0) { channelGB += Data[i * Height + j]; } else if (i % 2 == 1 && j % 2 == 1) { channelB += Data[i * Height + j]; } } } } else if (ColorSpace == 1) { } else if (ColorSpace == 2) { } else if (ColorSpace == 3) { } double avgChannelB = channelB / ((Width / 2) * 
(Height / 2)); double avgChannelGB = channelGB / ((Width / 2) * (Height / 2)); double avgChannelGR = channelGR / ((Width / 2) * (Height / 2)); double avgChannelR = channelR / ((Width / 2) * (Height / 2)); BlcResults results = new BlcResults(channelB, channelGB, channelGR, channelR); return results; } } } Helper class for results using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace BlackLevelCorrectionParallel { public class BlcResults { double AvgChannelB { get; set; } double AvgChannelGb { get; set; } double AvgChannelGr { get; set; } double AvgChannelR { get; set; } public BlcResults(double channelB, double channelGb, double channelGr, double channelR) { AvgChannelB = channelB; AvgChannelGb = channelGb; AvgChannelGr = channelGr; AvgChannelR = channelR; } } } Answer: Your CalculateBlackLevel function can be simplified. If I understand correctly, your image is a sequence of pixels stored in a packed array (no stride). Each pixel is using 4 bytes. The Height of the image is the number of lines. The Width of the image is the number of channels. (Width / 4 = width of the image) A few tips: don't use floating point when you don't need it; it's slow. don't use two loops for the coordinates when one can do the job as well (and faster). you don't need to compute where you are in the image since it's inferred by the loop; lots of computations and tests can be avoided. at the end of the color calculation I assume you want to store the average you just computed and not the sum (if so, there is a small bug in the code provided). I'm a bit surprised that the image seems to be stored by column instead of by line. (This is unusual to me.) I wrote pseudo C# code for the principle of what I'd do on a line-by-line image, but this can be adapted to your format. What is missing is mostly casts.
The loop is made for the following pattern: R G G B You can notice there are no multiplications, modulos, or tests/jumps, so it should be substantially faster. It turns out two levels of loops are needed, but for a different reason. public BlcResults CalculateBlackLevel() { long channelGR = 0; // don't use floats here. Keep integer types. long channelR = 0; // use long instead of ints if you plan having long channelGB = 0; // pictures of more than 2^24 pixels long channelB = 0; long size = Width * Height; // number of bytes in the image if (ColorSpace == 0) //RGGB { // for all channels (o is for offset) for (long o = 0; o < size;) { // compute the first line pattern // first find where the line ends long end_line = o + Width; while(o < end_line) // line B/GB { // do the next pair of channels channelB += Data[o++]; channelGB += Data[o++]; } // compute the second line pattern // first find where the line ends end_line = o + Width; while(o < end_line) // line GR/R { // do the next pair of channels channelGR += Data[o++]; channelR += Data[o++]; } } } else if (ColorSpace == 1) { } else if (ColorSpace == 2) { } else if (ColorSpace == 3) { } // now you need double double pixel_count = (1.0 * size) / 4.0; double avgChannelB = (1.0 * channelB) / pixel_count; double avgChannelGB = (1.0 * channelGB) / pixel_count; double avgChannelGR = (1.0 * channelGR) / pixel_count; double avgChannelR = (1.0 * channelR) / pixel_count; BlcResults results = new BlcResults(avgChannelB, avgChannelGB, avgChannelGR, avgChannelR); return results; } I got a bit more time today. In CopyBytesToShort, if you ever set isPerformShift to true: lsb <<= 16 - bitsPerPixel; // (16 - (bitsPerPixel - 8)) = 24 - bitsPerPixel msb <<= (24 - bitsPerPixel); bitsPerPixel needs to be bigger than 8 for this to set msb to any other value than 0. It looks like you plan to handle more bit depths. I'd suggest having one class for each range of BPP so accessing values would be simple and fast. Let's focus on the 8BPP one.
public class Image { public uint Width { get; set; } public uint Height { get; set; } public uint BPP { get; set; } public uint ColorSpace { get; set; } public byte[] Data { get; set; } public Image(uint width, uint height, uint bPP, uint colorSpace) { if(bPP != 8) { throw new Exception("Unsupported BPP"); } if( ((width & 1) != 0) || ((height & 1) != 0)) ) { throw new Exception("Width and Height expected to be even"); } BPP = bPP; ColorSpace = colorSpace; Height = height; Width = width; } public void ReadImage(string path) { // The original call was copying bytes into shorts then into bytes // on a little endian architecture this is another way to do nothing // on a big endian architecture swapping the bytes while copying them from fileData into Data would be more efficient. // If you expect more that 8BPP then maybe you should have one class of image for each depth. // So 8 bits for depths from 1 to 8, 16 bits for depths from 9 to 16, ... if(!File.Exists(path)) { throw new FileNotFoundException(path); } Data = File.ReadAllBytes(path); if(Data.Length != Width * Height) { throw new Exception("size of the file does not match the expected number of channels"); } } public BlcResults CalculateBlackLevel() { long channelGR = 0; long channelR = 0; long channelGB = 0; long channelB = 0; long size = Width * Height; switch(ColorSpace) { case 0: //RGGB, maybe make an union for the colorspaces { for(long offset = 0; offset < size;) { long line_end = offset + Height; while(offset < line_end) { channelR += Data[offset++]; channelGR += Data[offset++]; } // given the format, there can only be an even number of columns/rows, // so no need to test for overflow // this precondition should be tested when loading the file line_end = offset + Height; while(offset < line_end) { channelGB += Data[offset++]; channelB += Data[offset++]; } } break; } default: { throw new Exception("Colorspace is not supported"); } } double weight = (double)size / 4.0; double avgChannelB = (double)channelB / 
weight; double avgChannelGB = (double)channelGB / weight; double avgChannelGR = (double)channelGR / weight; double avgChannelR = (double)channelR / weight; BlcResults results = new BlcResults(avgChannelB, avgChannelGB, avgChannelGR, avgChannelR); return results; } } I'm still surprised that, based on the original code, the image is stored column by column instead of line by line. Could you tell the source/generator for the images? I've seen upside down, backward and forward, but never rotated. Or is it a specific RAW format?
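To sanity-check the channel-averaging arithmetic discussed in the answer, here is a small pure-Python sketch (not the reviewed C# code; it assumes a conventional row-major RGGB layout rather than the column-major storage the question hints at, and it favours clarity over the branch-free loop above):

```python
def bayer_channel_means(data, width, height):
    """Average each channel of a packed Bayer mosaic stored row by row.

    Assumes the repeating 2x2 pattern
        R  G
        G  B
    with even width and height; `data` is a flat sequence of 8-bit samples.
    """
    sums = {"r": 0, "gr": 0, "gb": 0, "b": 0}
    for row in range(height):
        for col in range(width):
            v = data[row * width + col]
            if row % 2 == 0:
                sums["r" if col % 2 == 0 else "gr"] += v
            else:
                sums["gb" if col % 2 == 0 else "b"] += v
    n = (width // 2) * (height // 2)  # samples per channel: a quarter of the image
    return tuple(sums[c] / n for c in ("r", "gr", "gb", "b"))
```

On a single 2x2 block `[10, 20, 30, 40]` this returns `(10.0, 20.0, 30.0, 40.0)`, one average per channel, which is an easy reference result to compare the optimised C# loop against.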
{ "domain": "codereview.stackexchange", "id": 14259, "tags": "c#, image" }
Can I consider this scenario for the phase change of $H_2O_{(s)}$ to $H_2O_{(L)}$
Question: From a particular topic: enthalpy changes during phase transformations. My textbook says that the standard state of a substance at a specified temperature is its pure form at 1 bar pressure. Let's consider the phase change from $H_2O(s)$ to $H_2O(l)$. The $\Delta H$ of fusion (written with a subscript, i.e. $\Delta_{\text{fus}}H$) has a value of about 6 kJ/mol, the energy needed for such a change. According to my textbook, this phase change happens at the standard state. I have drawn an image of how the phase change happens; mostly, I have just combined all the definitions and statements. The $\Delta H$ of fusion is at the standard state, i.e. at a pressure of 1 bar and a specified temperature. It is the amount of energy needed to melt $H_2O(s)$. The surroundings are at a temperature of 273 K and a pressure of 1 atm. When ice melts at 273 K, it is converted to water at 273 K, at the same temperature. The pressure in this scenario is always 1 atm. I wish to confirm my scenario. Answer: Figure 1. The phase diagram for water. The pressure and temperature axes on this phase diagram of water are not drawn to constant scale in order to illustrate several important properties. Image source: Chem.LibreTexts.org. The situation you are describing is circled in Figure 1. You are travelling along the horizontal line through the melting point of water at 1 bar. In engineering disciplines we don't use kJ/mol but rather work with the latent heat of fusion of water, which is 334 kJ/kg. Using that figure you can work out that a 1 kW (1 kJ/s) heating source will take 334 s (about 5.6 minutes) to convert 1 kg of ice at 0°C to 1 kg (1 L) of water at 0°C. If you leave the heater on for another 334 s the water temperature will rise to 79.8°C, which illustrates how significant the cooling effect of melting ice is. Now, how much ice do you need to add to a 330 ml can of drink to cool it from 20°C to 4°C?
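The answer's back-of-the-envelope numbers can be reproduced with a few lines of Python. This is only a sketch: it assumes the drink has the density and specific heat of water (about 1 kg/L and 4.186 kJ/(kg·K)) and ignores the can itself and heat leaking in from the room:

```python
LATENT_HEAT_FUSION = 334.0   # kJ/kg, latent heat of fusion of water
SPECIFIC_HEAT_WATER = 4.186  # kJ/(kg*K)

def time_to_melt(mass_kg, power_kw):
    """Seconds a heater of power_kw needs to melt mass_kg of ice at 0 degC."""
    return mass_kg * LATENT_HEAT_FUSION / power_kw

def ice_to_cool_drink(volume_l, t_start, t_end):
    """Mass of 0 degC ice (kg) needed to cool volume_l litres of a watery
    drink from t_start to t_end (degC). The melted ice must also warm up
    from 0 degC to t_end, which is why it appears in the denominator."""
    heat_removed = volume_l * SPECIFIC_HEAT_WATER * (t_start - t_end)  # ~1 kg per L
    heat_per_kg_ice = LATENT_HEAT_FUSION + SPECIFIC_HEAT_WATER * (t_end - 0.0)
    return heat_removed / heat_per_kg_ice
```

Running `ice_to_cool_drink(0.330, 20, 4)` gives roughly 0.063 kg, so on these assumptions about 63 g of ice answers the closing question; `LATENT_HEAT_FUSION / SPECIFIC_HEAT_WATER` also reproduces the 79.8°C temperature rise quoted in the answer.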
{ "domain": "engineering.stackexchange", "id": 4110, "tags": "mechanical-engineering, thermodynamics, heat-transfer, heat" }
Contradiction proof for inequality of P and NP?
Question: I'm trying to argue that P is not equal to NP using hierarchy theorems. This is my argument, but when I showed it to our teacher, he said it is problematic, and I can't find a compelling reason to accept that. We start off by assuming that $P=NP$. Then it follows that $\mathit{SAT} \in P$, which in turn implies that $\mathit{SAT} \in TIME(n^k)$ for some fixed $k$. As it stands, we can reduce every language in $NP$ to $\mathit{SAT}$. Therefore, $NP \subseteq TIME(n^k)$. On the contrary, the time hierarchy theorem states that there should be a language $A \in TIME(n^{k+1})$ that's not in $TIME(n^k)$. This would lead us to conclude that $A$ is in $P$, while not in $NP$, which is a contradiction to our first assumption. So, we come to the conclusion that $P \neq NP$. Is there something wrong with my proof? Answer: Then it follows that $SAT \in P$, which in turn implies that $SAT \in TIME(n^k)$ for some fixed $k$. Sure. As it stands, we can reduce every language in $NP$ to $SAT$. Therefore, $NP \subseteq TIME(n^k)$. No. Polynomial time reductions aren't free. We can say it takes $O(n^{r(L)})$ time to reduce language $L$ to $SAT$, where $r(L)$ is the exponent in the polynomial time reduction used. This is where your argument falls apart. There is no finite $k$ such that for all $L \in NP$ we have $r(L) < k$. At least this does not follow from $P = NP$ and would be a much stronger statement. And this stronger statement does indeed conflict with the time hierarchy theorem, which tells us that $P$ cannot collapse into $TIME(n^k)$, let alone all of $NP$.
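The answer's point that reductions are not free can be made concrete with a toy exponent calculation. This is only an illustration of the bookkeeping, not a proof: a reduction running in $O(n^{r})$ time can output a SAT instance of size up to $n^{r}$, so feeding it to an $O(m^k)$ SAT solver costs on the order of $(n^{r})^{k} = n^{rk}$ steps:

```python
def composed_exponent(r, k):
    """Worst-case polynomial exponent of 'reduce L to SAT in O(n^r) time,
    then solve SAT in O(m^k)': the reduction may emit an instance of size
    m = n^r, so the SAT call alone can take (n^r)^k = n^(r*k) steps."""
    return r * k

# For any fixed SAT exponent k, the total exponent still grows without
# bound as the reduction exponent r(L) grows over the languages in NP,
# so no single k can place all of NP inside TIME(n^k).
```

The assumption the flawed proof needs, a single bound on `composed_exponent(r, k)` over all reduction exponents `r`, is exactly what fails here.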
{ "domain": "cs.stackexchange", "id": 13814, "tags": "complexity-theory, time-complexity, p-vs-np" }
Where on Earth has the least changing temperature?
Question: Is there a map that shows places where the temperature changes the least from season to season? Is there a place on Earth where the temperature is the most constant in a comfortable range? Answer: To answer the first question, yes, there are maps showing the seasonal temperature variation, for example here. Unsurprisingly, the equatorial areas have the least variation because the angle of the sun relative to the Earth changes the least with the seasons. Regarding nice weather: I have worked as a geologist in northern South America and the Caribbean. In the lower-lying areas (e.g. Aruba) I often thought it was too hot to be pleasant to work, but beachgoers thought it was perfect weather. In the mountains it was often more pleasant, subjectively. One city I visited where the temperature is remarkably stable is Medellin in Colombia - the "city of eternal spring". Situated close to the equator, the temperature is always nice there (a daily mean of 22 degrees throughout the year), but it can have a lot of rainy days.
{ "domain": "earthscience.stackexchange", "id": 1483, "tags": "atmosphere, temperature, geography, mapping" }
How can a light-water reactor breed plutonium-239?
Question: The "Light-water reactor" Wikipedia page states that "the uranium 238 atoms also contribute to the fission process by converting to plutonium 239". But from what I've read you need fast neutrons to breed plutonium-239 from uranium-238, not thermal neutrons. The point of using water is to moderate the neutrons, so an LWR must be a thermal reactor by definition, right? So how can a thermal reactor, specifically an LWR, breed plutonium? Maybe it's actually some kind of hybrid thermal-fast reactor where some neutrons get moderated and some don't? Answer: You need fast neutrons to fission U238, not to breed plutonium from it. The reason reactors don't use U238 as a fuel and fission it with fast neutrons is that fission of U238 would result in a runaway reaction which couldn't be controlled. To have a controlled nuclear reaction, you need delayed neutrons, which aren't present with fission of U238. As you say, fission of U235, the usual fuel of fission reactors, has to be done with slow neutrons, so the fission neutrons are slowed with a moderator. They can then be captured by U238 to transmute it into Pu239, as well as causing fission in the U235 content of the fuel rods. The daughter nuclides of U235 emit delayed neutrons up to a second later, and this enables the reaction to be controlled.
{ "domain": "physics.stackexchange", "id": 73128, "tags": "nuclear-engineering" }
ROS2 : ModuleNotFoundError: No module named ‘rclpy._rclpy’
Question: Hi all, I'm new to using Linux and I've been trying to run ROS 2 on Ubuntu 20.04. I've followed the "Installing ROS 2 on Linux" tutorial: https://index.ros.org/doc/ros2/Installation/Foxy/Linux-Install-Binary/ Everything was fine until I ran the command ros2 run demo_nodes_py listener and got the following traceback: Traceback (most recent call last): File "/home/moutalib/ros2_foxy/ros2-linux/lib/demo_nodes_py/listener", line 11, in <module> load_entry_point('demo-nodes-py==0.9.3', 'console_scripts', 'listener')() File "/home/moutalib/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 490, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/home/moutalib/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2854, in load_entry_point return ep.load() File "/home/moutalib/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2445, in load return self.resolve() File "/home/moutalib/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2451, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/home/moutalib/ros2_foxy/ros2-linux/lib/python3.8/site-packages/demo_nodes_py/topics/listener.py", line 16, in <module> from rclpy.node import Node File "/home/moutalib/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/node.py", line 41, in <module> from rclpy.client import Client File "/home/moutalib/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/client.py", line 22, in <module> from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy File "/home/moutalib/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/impl/implementation_singleton.py", line 31, in <module> rclpy_implementation = _import('._rclpy') File "/home/moutalib/ros2_foxy/ros2-linux/lib/python3.8/site-packages/rclpy/impl/__init__.py", line 27, in _import return importlib.import_module(name, package='rclpy') File "/home/moutalib/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in
import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'rclpy._rclpy' I have tried to look at other topics but couldn't find a solution that works for me. I would be grateful if anyone can help me debug this problem. Thank you. Originally posted by BadrMt on ROS Answers with karma: 23 on 2020-06-16 Post score: 2 Original comments Comment by gvdhoorn on 2020-06-16: I've followed the "Installing ROS 2 on Linux" tutorial [..] which one? Always include a link. Comment by Dirk Thomas on 2020-06-20: In the future a much more helpful error message should be shown: https://github.com/ros2/rclpy/pull/580 Answer: Your stack trace looks extremely suspect to me; it looks as if you built ROS 2 with Python 3.8, but it seems you're in a conda environment that is using Python 3.7? I may be wrong about the environment, but you definitely want to make sure you're using a single Python version. If we look at the last two lines of the stack trace: File "/home/moutalib/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'rclpy._rclpy' The Python 3.7 importlib is going to be looking in all of the python3.7 install paths for rclpy, but it will not find it there, because rclpy was installed with Python 3.8.
Comment by BadrMt on 2020-06-20: Thank you very much. This solved my problem and made me realise an important detail that I was missing about why I had (base) written before my terminal line. Comment by johnconn on 2020-06-20: I'm glad we could get this resolved! Could you click the checkmark to indicate this answer solved your issue? Comment by dm on 2021-10-02: I have the same issue, but when I type conda deactivate, it says conda: command not found. Can you help me with this? Comment by Roy Lim on 2023-07-10: check ~/.bashrc if the conda command is allowed
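A quick way to spot the kind of mixed environment diagnosed in the answer is to scan sys.path for directories that mention a different pythonX.Y than the interpreter that is actually running. This is a hypothetical helper sketch, not part of ROS 2 or conda:

```python
import re
import sys

def find_foreign_python_paths(paths, running_version):
    """Return path entries that embed a pythonX.Y version string different
    from the interpreter that is running, e.g. a conda Python 3.7 picking
    up packages built for python3.8 (the mix-up from the traceback)."""
    want = "python%d.%d" % tuple(running_version)
    foreign = []
    for p in paths:
        m = re.search(r"python(\d+)\.(\d+)", p)
        if m and "python%s.%s" % m.groups() != want:
            foreign.append(p)
    return foreign
```

In a setup like the asker's, calling `find_foreign_python_paths(sys.path, sys.version_info[:2])` from the conda Python 3.7 would flag the `ros2-linux/lib/python3.8/site-packages` entries, confirming the version mismatch before any import fails.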
{ "domain": "robotics.stackexchange", "id": 35140, "tags": "ros, ros2, rclpy" }
Enthalpy of Zinc and Sulfur Reaction
Question: I was working on my homework, which was to write a description of what would occur if zinc and sulfur reacted and what steps a scientist would have to take in order to make them react. When I researched a bit, I saw that it would be an exothermic reaction, and I wanted to show this by using enthalpy. I haven't been taught this however, and as a result, I'm confused. I know that enthalpy equals the heat transfer and there is some sort of relationship with bond energy, but I don't know how to apply this to the reaction. Zinc Powder + Sulfur Powder = Zinc Sulfide + Energy $\ce{Zn_{(s)} + S_{(s)} -> ZnS_{(s)}}$ + $205.98 \text{ kJ}$ Because $205>0$ this is an exothermic reaction. But where did this $205.98\text{ kJ}$ come from? Answer: The free energies of formation of the elements in their standard states are zero, by definition. You can look up the (variously reported) standard enthalpy of formation of the product, −204.6 kJ/mol (exothermic!). It is roughly the binding energy of the crystal lattice less the ionization energies of the inputs. So, by the numbers, do you obtain sphalerite or wurtzite? ZnS has crystal polymorphs and polytypes with differently populated unit cells $\ce{Zn_2S_2}$ Wurtzite-2H $\ce{Zn_4S_4}$ Wurtzite-4H and Sphalerite $\ce{Zn_6S_6}$ Wurtzite-6H $\ce{Zn_{15}S_{15}}$ Wurtzite-15R
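Where the number comes from can be shown with Hess's law: the standard reaction enthalpy is the sum of the products' enthalpies of formation minus the reactants'. A minimal Python sketch, using a rounded literature value of about -206 kJ/mol for ZnS (tables vary by a few kJ/mol, which is why the question's 205.98 kJ and the answer's 204.6 kJ differ):

```python
def reaction_enthalpy(dHf_products, dHf_reactants):
    """Standard reaction enthalpy (kJ/mol) from tabulated enthalpies of
    formation, via Hess's law: sum over products minus sum over reactants."""
    return sum(dHf_products) - sum(dHf_reactants)

# Zn(s) + S(s) -> ZnS(s): elements in their standard states have dHf = 0
# by definition, and dHf(ZnS, sphalerite) is tabulated around -206 kJ/mol,
# so the whole reaction enthalpy is just dHf of the product.
dH = reaction_enthalpy([-206.0], [0.0, 0.0])
```

A negative result (`dH < 0`) confirms the reaction is exothermic, matching the "+ Energy" on the product side of the word equation.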
{ "domain": "chemistry.stackexchange", "id": 888, "tags": "reaction-mechanism, thermodynamics" }
Why doesn't $dU=nC_{v}\,dT$ hold for all substances?
Question: Consider the following proof for the change in internal energy of real gases, liquids and solids (assuming non-$PV$ work $=0$): (1) Let X denote real gases, liquids, and solids. (2) The first law of thermodynamics is $dU=dQ-dW=dQ-PdV$, which also holds for X. (3) At constant volume, $dU_{v}=dQ_{v}-0$. (4) Now, $dQ_{v}=nC_{v}\,dT$ is a trivial expression and thus will also hold for X. (5) So we have $dU=nC_{v}\,dT$. (6) Since U is a state function (in terms of V and T), $dU_{v}=dU$ since the path is irrelevant. (7) Thus, we get $dU=nC_{v}\,dT$ for all X. However, some sources indicate that $dU=nC_{v}\,dT$ is applicable only for ideal gases. Are they correct? If so, what is the mistake in this proof? Addendum: It seems the issue is in point 6, in that $dU_{v}=dU$ cannot be used. This is because the internal energy change does not depend on the path, but if you are choosing an alternative path to calculate $dU$ (like isochoric), that path needs to exist between the two states. So $dU=nC_{v}\,dT$ is true for an isochoric process for all X, but not in general for any process. But why doesn't this issue arise in ideal gases? Answer: The issue is in point 6, in that $dU_{v}=dU$ cannot be used. This is because the internal energy change does not depend on the path, but if you are choosing an alternative path to calculate $dU$ (like isochoric), that path needs to exist between the two states. So $dU_{v}=nC_{v}\,dT$ is true for an isochoric process for all X, but not in general for any process. Now, why doesn't this issue arise in ideal gases? Let us consider an ideal gas going from $(P_1,V_1,T_1)$ to $(P_2,V_2,T_2)$. Regardless of whether the process is reversible or irreversible, $PV=nRT$ will hold at the endpoints, so we can rewrite this as $(\frac{kT_1}{V_1},V_1,T_1)$ to $(\frac{kT_2}{V_2},V_2,T_2)$. Now, since U is a state function, we can devise a path convenient to us to calculate $\Delta U$.
Now, let us devise a path in two steps: Step 1: $(\frac{kT_1}{V_1},V_1,T_1)$ to $(\frac{kT_1}{V_2},V_2,T_1)$ Step 2: $(\frac{kT_1}{V_2},V_2,T_1)$ to $(\frac{kT_2}{V_2},V_2,T_2)$ In step 1, $\Delta U=0$ since U is only a function of T in ideal gases. In step 2, $\Delta U=nC_v\Delta T$ since it is an isochoric process. Combining, we get $\Delta U=0+nC_v\Delta T=nC_v\Delta T$. Thus, we have proved that $\Delta U=nC_v\Delta T$ for any process of an ideal gas. Now, this analysis doesn't hold for a real gas because $\Delta U\neq0$ in step 1, since U is a function of both V and T. However, if the process in a real gas is isochoric, then $\Delta U=nC_v\Delta T$ does hold, since we have eliminated step 1. In conclusion, although $\Delta U$ (and in general, the change in any state function) does not depend on the path taken, the path is not to be completely ignored, because you need to prove that your alternative path can in fact exist.
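The two-step argument above is easy to check numerically. A small Python sketch comparing the direct formula with the isothermal-then-isochoric path (the monatomic $C_v = \frac{3}{2}R$ used below is only a test value; any constant $C_v$ works):

```python
R = 8.314  # gas constant, J/(mol*K)

def delta_u_ideal_gas(n, cv, t1, t2):
    """Direct route: for an ideal gas U depends on T only, so dU = n*Cv*dT."""
    return n * cv * (t2 - t1)

def delta_u_two_step(n, cv, t1, v1, t2, v2):
    """The answer's two-step path: an isothermal volume change (dU = 0 for
    an ideal gas), then an isochoric temperature change (dU = n*Cv*dT)."""
    step1 = 0.0                 # (T1, V1) -> (T1, V2): U unchanged, T constant
    step2 = n * cv * (t2 - t1)  # (T1, V2) -> (T2, V2): isochoric heating
    return step1 + step2
```

Both routes give identical results for any volumes, which is the state-function property in action; for a real gas, step 1 would contribute a nonzero volume-dependent term and the shortcut would fail.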
{ "domain": "physics.stackexchange", "id": 89474, "tags": "thermodynamics, energy, work, ideal-gas" }
Why don't massive industrial shredders shred themselves?
Question: With the huge metal shredders that can shred an entire car or a bus, they can shred parts like the axle and engine which are large solid chunks of metal, just like the massive spinning shredder blades. So why does the car get shredded and not the shredder? Are the blades made from harder/stronger metal, or is there something about their shape that makes them stronger (they just look like large plates with notches on)? Answer: The blades are indeed made of, or tipped with a hard steel/carbide, but they also don't contact each other. The shredding action is created by the shear forces of the two corotating drums of "teeth", and this non-contact between the hard points, high shear strength, and gearing advantages allow it to power through whatever you throw at it. https://www.youtube.com/watch?v=aVkTj9VrH4o
{ "domain": "engineering.stackexchange", "id": 22, "tags": "crushing, shredding, waste-disposal" }
Binary heap implementation with a generic base class
Question: This is my attempt at implementing a Binary heap. I made an abstract base class Heap with an abstract property controlling whether it is a min-heap or a max-heap. One thing I've been struggling with was which collections.generic interfaces to apply to the class. Enumerating the heap only makes a bit of sense. At the same time, it is a collection, but I'm not sure if all the interface methods really make sense for a Heap. So any pointers for that would be welcome. Abstract Heap<T> public abstract class Heap<T> : IEnumerable<T>, ICollection<T> where T : IComparable { private T[] innerT; private int capacity = 1; private int numberOfLevels = 1; public int Count { get; private set; } = 0; public bool IsReadOnly => false; protected abstract bool CloserToRoot(int comparison); public Heap() { innerT = new T[capacity]; } public Heap(int capacity) { innerT = new T[capacity]; this.capacity = capacity; this.numberOfLevels = CapacityToLevels(capacity); } public Heap(IEnumerable<T> sequence) { if (sequence == null) { throw new ArgumentNullException(nameof(sequence)); } foreach (var item in sequence) { Add(item); } } public void Add(T item) { Count++; if (Count == capacity) { Resize(); } innerT[Count - 1] = item; UpHeap(); } private int Find(T item) { return FindInternal(item, 0); } private int FindInternal(T item, int index) { if (index >= Count) { return -1; // end of the heap. } var comp = innerT[index].CompareTo(item); if (comp == 0) { // Found it! return index; } else if (CloserToRoot(comp)) { // Still possible to be lower. 
var rightChild = IndexToRightChildNode(index); var leftChild = rightChild - 1; var rightResult = FindInternal(item, rightChild); var leftResult = FindInternal(item, leftChild); if (rightResult >= 0) { return rightResult; } if (leftResult >= 0) { return leftResult; } } return -1; // Nope :( Either both children were -1 or this index has a value lower than item; } private void UpHeap() { var currentNode = Count - 1; var parentNode = IndexToParentNode(currentNode); while (currentNode != 0 && CloserToRoot(innerT[currentNode].CompareTo(innerT[parentNode]))) { Swap(currentNode, parentNode); currentNode = parentNode; parentNode = IndexToParentNode(currentNode); } } private void DownHeap(int startingIndex) { int currentIndex; var largestIndex = startingIndex; do { currentIndex = largestIndex; var rightChild = IndexToRightChildNode(currentIndex); var leftChild = rightChild - 1; if (leftChild < Count && CloserToRoot(innerT[leftChild].CompareTo(innerT[largestIndex]))) { largestIndex = leftChild; } if (rightChild < Count && CloserToRoot(innerT[rightChild].CompareTo(innerT[largestIndex]))) { largestIndex = rightChild; } Swap(largestIndex, currentIndex); } while (largestIndex != currentIndex); } private void Swap(int a, int b) { var placeholder = innerT[a]; innerT[a] = innerT[b]; innerT[b] = placeholder; } private static int CapacityToLevels(int capacity) { var Log2 = Math.Log(2); return (int)Math.Ceiling(Math.Log(capacity + 1) / Log2); } private static int IndexToRightChildNode(int index) { return 2 * index + 2; } private static int IndexToParentNode(int index) { return (index - 1) / 2; } private void Resize() { capacity = capacity << 1 | 1; numberOfLevels++; Array.Resize(ref innerT, capacity); } public IEnumerator<T> GetEnumerator() { return (IEnumerator<T>)innerT.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return innerT.GetEnumerator(); } public void Clear() { capacity = 1; numberOfLevels = 1; innerT = new T[capacity]; } public bool Contains(T item) { return 
Find(item) != -1; } public void CopyTo(T[] array, int arrayIndex) { if (arrayIndex < 0 || arrayIndex >= Count) { throw new ArgumentOutOfRangeException(nameof(arrayIndex)); } innerT.CopyTo(array, arrayIndex); } public bool Remove(T item) { var index = Find(item); if (index == -1) { return false; } innerT[index] = default(T); Swap(index, Count - 1); Count--; DownHeap(index); return true; } } MaxHeap public class MaxHeap<T> : Heap<T> where T : IComparable { public MaxHeap() { } public MaxHeap(int capacity) : base(capacity) { } public MaxHeap(IEnumerable<T> sequence) : base(sequence) { } protected override bool CloserToRoot(int comparison) => comparison > 0; } MinHeap public class MinHeap<T> : Heap<T> where T : IComparable { public MinHeap() { } public MinHeap(int capacity) : base(capacity) { } public MinHeap(IEnumerable<T> sequence) : base(sequence) { } protected override bool CloserToRoot(int comparison) => comparison < 0; } Answer: Design considerations The main purpose of a heap is to provide quick access to the top-most item, so I'd expect to see some kind of Peek and Pop methods, allowing for usage like while (heap.Count > 0) { DoWork(heap.Pop()); }. Surprisingly, such methods are absent. While a heap is some kind of collection, and implementing IEnumerable<T> does let you use Linq and other enumerable-consuming methods, that's not really using a heap for what it does best. If you do need to enumerate it, then perhaps you're better off using a different data-structure in the first place? Instead of relying on inheritance and requiring T to implement IComparable (or IComparable<T>), consider letting the user pass in a Func<T, T, int> or an IComparer<T>. This should be more flexible, allowing for usage like new Heap<PaymentRequest>((a, b) => a.DueDate.CompareTo(b.DueDate));, where PaymentRequest either doesn't implement IComparable, or does but in an unsuitable way (such as comparing payment amounts rather than due-dates). 
Also, instead of creating a separate class you now only need to create a separate method (or lambda). Bugs The Heap(IEnumerable<T> sequence) constructor fails to initialize innerT when the given sequence is empty. This breaks enumeration. The Heap(int capacity) constructor allows a capacity of 0, but it'll cause Add to fail with an IndexOutOfRange. Use >= instead of == when comparing count and capacity. IEnumerator<T> GetEnumerator fails with an InvalidCastException. You can fix this by casting innerT to IEnumerable<T>, to ensure that IEnumerable<T>.GetEnumerator is called instead of IEnumerable.GetEnumerator. But see the next point: GetEnumerator and CopyTo do not take into account that Count can be smaller than capacity, resulting in additional 'empty' values being enumerated or copied. CopyTo is not implemented correctly: the index parameter in Array.CopyTo is a destination index, not a source index. This means that the arrayIndex >= Count check makes no sense: it should be arrayIndex + Count > array.Length. Other notes When dealing with multiple constructors, try designating one as the 'main' constructor and let the others call it. In this case, Heap(int capacity) would be a good choice: Heap() : this(1), Heap(IEnumerable<T> sequence) : this(1). The current implementation of GetEnumerator doesn't take into account situations where the heap is modified while being enumerated. I'm not sure whether it's worth preventing that, but it's something to be aware of. FindInternal: can be optimized by first checking the result of FindInternal(item, rightChild). If that's a match then there's no need to search through the left subtree. is only used by Find, so it could be made an inner method. its readability can be improved by adding a bit of whitespace between the first early-out check and the rest of the method. Also, personally I tend to omit braces for if-statements that only have a single break/continue/return/throw statement.
I'm aware of the risks but I don't think it's worth the additional clutter in these particular cases. DownHeap performs an unnecessary swap when largestIndex == currentIndex. Instead of just adding each item, there's apparently a more efficient algorithm (Floyd's) for building heaps from a given sequence. Resize: would be more accurately named as ResizeToNextLevel. can be made more reusable by adding an int newCapacity parameter, allowing you to reuse it in both the constructors and the Clear method. is a little bit inconsistent in that it uses bitwise-operators, while all other utility methods use arithmetic operators. capacity is always equal to innerT.Length, so I would make it a property instead: private int capacity => innerT.Length;. To me, CloserToRoot(a.CompareTo(b)) isn't easily comprehensible. If it returns true, does that mean that a should be higher-up the tree than b, or the other way around? Similarly, UpHeap and DownHeap are, at least to me, not immediately descriptive names. Adding an int addedIndex parameter to UpHeap and renaming the startingIndex parameter in DownHeap to removedIndex should help to make their purpose a little more obvious. Some documentation wouldn't be a bad thing either. Some index variables are clearly named as such (startingIndex, currentIndex, and so on) but others are not (leftChild, rightChild, currentNode, parentNode and so on). Personally I would use the index suffix everywhere, but whatever you choose, try to be consistent. Public, private and static methods are somewhat mixed. Try grouping related methods together.
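As an illustration of the comparator-based, peek/pop-centred design suggested in the review, here is a short Python sketch built on the standard-library heapq module (a sketch of the interface only, not a translation of the reviewed Java class):

```python
import heapq

class KeyedHeap:
    """Min-heap ordered by a caller-supplied key function, exposing the
    peek/pop interface the review asks for."""

    def __init__(self, key=lambda x: x, items=()):
        self._key = key
        self._n = 0       # insertion counter: tie-breaker so payloads that
        self._heap = []   # are not comparable never get compared directly
        for item in items:
            self.push(item)

    def __len__(self):
        return len(self._heap)

    def push(self, item):
        heapq.heappush(self._heap, (self._key(item), self._n, item))
        self._n += 1

    def peek(self):
        return self._heap[0][2]  # raises IndexError on an empty heap

    def pop(self):
        return heapq.heappop(self._heap)[2]
```

With this shape, a max-heap is just `KeyedHeap(key=lambda x: -x)`, and the review's PaymentRequest example becomes `KeyedHeap(key=lambda p: p.due_date)` with no constraints imposed on the element type itself.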
{ "domain": "codereview.stackexchange", "id": 32394, "tags": "c#, heap" }
Check if a binary tree is balanced
Question: Looking for code-review, optimizations and best practices. This question is attributed to geeksforgeeks. NOTE: Please help me verify space complexity, as I am creating TreeData objects in recursion. I believe it is \$\mathcal{O}(1)\$. Ignore the space-complexity of stack frames. Looking only for space occupied by TreeData. At any point there are only 3 TreeData objects. As frame is popped, the TreeData gets garbage collected. Please verify. public class CheckBalancedTree<E> { private TreeNode<E> root; public CheckBalancedTree(List<E> items) { create(items); } private void create (List<E> items) { if (items.size() == 0) { throw new IllegalArgumentException("The list is empty."); } root = new TreeNode<>(items.get(0)); final Queue<TreeNode<E>> queue = new LinkedList<>(); queue.add(root); final int half = items.size() / 2; for (int i = 0; i < half; i++) { if (items.get(i) != null) { final TreeNode<E> current = queue.poll(); int left = 2 * i + 1; int right = 2 * i + 2; if (items.get(left) != null) { current.left = new TreeNode<>(items.get(left)); queue.add(current.left); } if (right < items.size() && items.get(right) != null) { current.right = new TreeNode<>(items.get(right)); queue.add(current.right); } } } } // create a binary tree. private static class TreeNode<E> { private TreeNode<E> left; private E item; private TreeNode<E> right; TreeNode(E item) { this.item = item; } } private static class TreeData { private int height; private boolean isBalanced; TreeData(int height, boolean isBalanced) { this.height = height; this.isBalanced = isBalanced; } } public boolean checkIfBalanced() { if (root == null) { throw new IllegalStateException(); } return checkBalanced(root).isBalanced; } public TreeData checkBalanced(TreeNode<E> node) { if (node == null) return new TreeData(-1, true); TreeData tdLeft = checkBalanced(node.left); if (!tdLeft.isBalanced) return new TreeData(-1, false); // if boolean value is false, then no need to return the correct value for height. 
TreeData tdRight = checkBalanced(node.right); if (tdRight.isBalanced && Math.abs(tdLeft.height - tdRight.height) <= 1) { return new TreeData(Math.max(tdLeft.height, tdRight.height) + 1, true); } return new TreeData(-1, false); // if boolean value is false, then no need to return the correct value for height. } } public class CheckBalancedTreeTest { @Test public void testLeftSkewed() { /* 1 * / * 2 * / * 3 */ Integer[] leftSkewed = {1, 2, null, 3, null, null, null}; CheckBalancedTree<Integer> cbt = new CheckBalancedTree<>(Arrays.asList(leftSkewed)); assertFalse(cbt.checkIfBalanced()); } @Test public void testRightSkewed() { /* 1 * \ * 2 * \ * 3 */ Integer[] rightSkewed = {1, null, 2, null, null, null, 3 }; CheckBalancedTree<Integer> cbt = new CheckBalancedTree<>(Arrays.asList(rightSkewed)); assertFalse(cbt.checkIfBalanced()); } @Test public void testSuccessCase() { /* * 1 * / \ * 2 3 * /\ * 4 5 */ Integer[] successCase = {1, 2, 3, 4, 5, null, null}; CheckBalancedTree<Integer> cbt = new CheckBalancedTree<>(Arrays.asList(successCase)); assertTrue(cbt.checkIfBalanced()); } @Test public void testFailureCase() { /* * 1 * / \ * 2 3 * / \ * 4 5 * / \ * 6 7 */ Integer[] failure = {1, 2, 3, 4, 5, null, null, null, null, 6, 7, null, null}; CheckBalancedTree<Integer> cbt = new CheckBalancedTree<>(Arrays.asList(failure)); assertFalse(cbt.checkIfBalanced()); } } Answer: Although the space use of TreeData seems a prominent part of your question, perhaps you might be interested in an alternative algorithm that doesn't require TreeData at all, eliminating your concerns about space. I suppose the reason for TreeData was to carry a boolean value in addition to height. Note that the value of height is completely irrelevant when the boolean is false. Also note that height is always >= 0. 
Based on this logic, you can refactor your existing algorithm: Simply look for the height When a branch is found to be not balanced, return a negative value Instead of checking a boolean value, check if the returned height is negative Taking your algorithm, and replacing the types and the boolean checks, the implementation becomes this, and it still passes all your unit tests, without using TreeData objects: private static final int UNBALANCED = -1; public boolean checkIfBalanced() { if (root == null) { throw new IllegalStateException(); } return checkBalanced(root) != UNBALANCED; } public int checkBalanced(TreeNode<E> node) { if (node == null) return 0; int tdLeft = checkBalanced(node.left); if (tdLeft == UNBALANCED) return UNBALANCED; int tdRight = checkBalanced(node.right); if (tdRight != UNBALANCED && Math.abs(tdLeft - tdRight) <= 1) { return Math.max(tdLeft, tdRight) + 1; } return UNBALANCED; } I would make a few more improvements on top of that: The case of root == null is not necessarily an anomaly, but simply an empty tree. I think it makes sense to simply drop that check. The implementation will return true in this case, which is correct: an empty tree is balanced The repeated return UNBALANCED is a bit ugly, duplicated code. They can be eliminated by using nested if statements, where only the balanced case will reach the innermost statement, and everything else will fall back to a final default return UNBALANCED statement. As the code will become a bit arrow shaped, it might be arguable whether it's really an improvement. See below, and I'll let you decide that for yourself.
With the logic refactored, different names will now make more sense for the functions and the local variables It's recommended to always use braces, even when an if has only a single statement So the code becomes: private static final int UNBALANCED = -1; public boolean checkIfBalanced() { return getHeight(root) != UNBALANCED; } public int getHeight(TreeNode<E> node) { if (node == null) { return 0; } int leftHeight = getHeight(node.left); if (leftHeight != UNBALANCED) { int rightHeight = getHeight(node.right); if (rightHeight != UNBALANCED) { if (Math.abs(leftHeight - rightHeight) < 2) { return 1 + Math.max(leftHeight, rightHeight); } } } return UNBALANCED; }
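For readers following along in another language, the same negative-sentinel idea fits in a minimal Python sketch (illustrative only; the tuple-based tree and function names are my own, not part of the reviewed code):

```python
UNBALANCED = -1

def height_if_balanced(node):
    """Return the height of the tree rooted at `node`, or UNBALANCED (-1)
    if any subtree violates the balance condition.
    A node is either None or a tuple (item, left, right)."""
    if node is None:
        return 0
    _, left, right = node
    left_height = height_if_balanced(left)
    if left_height == UNBALANCED:
        return UNBALANCED
    right_height = height_if_balanced(right)
    if right_height == UNBALANCED or abs(left_height - right_height) > 1:
        return UNBALANCED
    return 1 + max(left_height, right_height)

def is_balanced(root):
    return height_if_balanced(root) != UNBALANCED
```

An empty tree (None) reports balanced here, matching the dropped root == null check above.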
{ "domain": "codereview.stackexchange", "id": 12161, "tags": "java, algorithm, tree" }
using smach with actionlib
Question: When using smach with actionlib actions, am I correct in saying that smach acts as the action client for the action server? So I don't need a separate action client anymore? Originally posted by Homer Manalo on ROS Answers with karma: 475 on 2011-02-27 Post score: 3 Answer: That is correct, if you use a simple action state Smach will construct the action client for you. Similarly, when you create an action server wrapper for a Smach container, Smach will construct the action server for you. Originally posted by Wim with karma: 2915 on 2011-02-28 This answer was ACCEPTED on the original site Post score: 5
{ "domain": "robotics.stackexchange", "id": 4894, "tags": "ros, executive-smach, actionlib, smach" }
In Deep Q-learning, are the target update frequency and the batch training frequency related?
Question: In a Deep Q-learning algorithm, we perform a batch training every train_freq and we update the parameters of the target network every target_update_freq. Are train_freq and target_update_freq necessarily related, e.g., should one always be greater than the other, or must they be independently optimized depending on the problem? EDIT Changed the name of batch_freq to train_freq. Answer: It is fairly common in DQN to train a minibatch after every observation received after the replay memory has enough data (how much is enough is yet another parameter). This is not necessary, and it is fine to collect more data between training steps; the algorithm is still DQN. The value higher than 1 for train_freq here might be related to use of prioritised replay memory sampling - I have no real experience with that. The update to the target network generally needs to occur less frequently than training steps; it is intended to stabilise results numerically, so that over- or under-estimates of value functions do not result in runaway feedback. The parameter choices will interact with each other, as most hyperparameters in machine learning do, unfortunately. Which makes searching for ideal values fiddly and time-consuming. In this case it is safe to say that train_freq is expected to be much lower than target_update_freq, probably by at least an order of magnitude, and more usually 2 or 3 orders of magnitude. However, that's not quite the same as saying there is a strong relationship between choices for those two hyperparameters. The value of batch_size is also relevant here, as it shows the rate that memory is being used (and re-used) by the training process. The library you are using has these defaults: batch_size::Int64 = 32 train_freq::Int64 = 4 target_update_freq::Int64 = 500 They seem like sane starting points. You are relatively free to change them as if they were independent, as there is no simple rule like "target_update_freq should be 125 times train_freq".
As a very rough guide, you can expect that high values of train_freq, low values of batch_size and low values of target_update_freq are likely to cause instability in the learning process, whilst going too far in the opposite direction may slow learning down. You might be able to set train_freq to 1, but I am not completely certain about that either in combination with the prioritised replay memory sampling which seems to be the default in the library you are using.
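To make the scheduling concrete, here is a stripped-down sketch of where the two frequencies sit in a DQN loop (pure Python, with dummy stand-ins for the environment, networks and replay contents; only the counters matter):

```python
import random

def dqn_loop(total_steps=2000, warmup=100, train_freq=4,
             target_update_freq=500, batch_size=32):
    """Count how often each event fires. The environment, networks and
    replay contents are dummies; only the scheduling is real."""
    replay = []
    train_steps = 0
    target_updates = 0
    for step in range(1, total_steps + 1):
        replay.append(random.random())            # stand-in for one transition
        if len(replay) >= warmup and step % train_freq == 0:
            batch = random.sample(replay, min(batch_size, len(replay)))
            train_steps += 1                      # one minibatch gradient step
        if step % target_update_freq == 0:
            target_updates += 1                   # copy online -> target weights
    return train_steps, target_updates
```

With the library defaults above and 2000 environment steps, this gives 476 minibatch updates but only 4 target-network syncs, i.e. roughly the 2-orders-of-magnitude gap described.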
{ "domain": "ai.stackexchange", "id": 2095, "tags": "reinforcement-learning, deep-rl" }
Which permutations can not be obtained by moving elements through two stacks?
Question: I have a Stack1 which has the entries a,b,c (with a on the top) and a Stack2 which is empty. The conditions are: An entry popped out of Stack1 can be printed immediately or pushed to Stack2. An entry popped out of Stack2 can only be printed. In this arrangement, which of the permutations of a,b,c is not possible? I tried it diagrammatically by popping out the initial stack and found the first arrangement as c,a,b... Then I recalled a formula taught in a permutations course, (n-1)! possibilities, so I came up with an answer of 5. So, which arrangement will not be possible? Answer: This process is called stack sorting. Wikipedia has a page on the subject. Here you read that the number of possible sequences when the input is $1,2,\dots,n$ equals the Catalan number $C_n = \frac{1}{n+1} {2n\choose n}$, which differs from your guess. In fact there is a nice characterization of the orders which are possible: they avoid the permutation pattern $231$. However your problem is "backwards": not the strings that can be sorted using a single stack, but the strings that can be obtained that way.
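You can also settle which specific arrangement is impossible by brute force. A short Python search over the three allowed moves (my own sketch) confirms the Catalan count $C_3 = 5$ and shows that c,a,b is the one unreachable order:

```python
from math import comb

def reachable_outputs(items):
    """All print orders reachable under the rules: an entry popped from
    Stack1 is printed immediately or pushed to Stack2; an entry popped
    from Stack2 can only be printed. `items` lists Stack1 top-first."""
    results = set()

    def explore(s1, s2, out):
        if not s1 and not s2:
            results.add("".join(out))
            return
        if s1:  # pop from Stack1: either print it, or push it onto Stack2
            explore(s1[1:], s2, out + [s1[0]])
            explore(s1[1:], s2 + [s1[0]], out)
        if s2:  # pop from Stack2: may only be printed
            explore(s1, s2[:-1], out + [s2[-1]])

    explore(list(items), [], [])
    return results

orders = reachable_outputs("abc")
catalan_3 = comb(6, 3) // 4   # C_n = C(2n, n) / (n + 1), here n = 3
```

With a=1, b=2, c=3, the missing order c,a,b is the pattern 312, matching the "backwards" counterpart of the 231-avoidance mentioned above.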
{ "domain": "cs.stackexchange", "id": 3546, "tags": "data-structures, combinatorics, stacks" }
Linq C# pulling from table row into input fields
Question: Today is the first time I've used Linq, so I wanted to get a code review on this and make sure I'm going about it correctly. I'm filling text boxes and dropdown lists from a table row. protected void Page_Load(object sender, EventArgs e) { using (DBContext da = new DBContext()) { string dob; var detail = from a in da.tblPatients where a.ID == 1198 select a; foreach (var d in detail) { txtFirstName.Text = d.FirstName; txtLastName.Text = d.LastName; txtPreferredName.Text = d.PreferredName; dob = d.DOB; if(!String.IsNullOrEmpty(dob)) { try { int lastindexofdob = 0; string month = ""; string day = ""; string year = ""; lastindexofdob = dob.LastIndexOf("/"); month = dob.Substring(0, dob.IndexOf("/")); day = dob.Substring(month.Length + 1, (dob.LastIndexOf("/") - month.Length) - 1); year = dob.Substring(dob.Length - 4); ddlDOBMonth.SelectedIndex = ddlDOBMonth.Items.IndexOf(ddlDOBMonth.Items.FindByValue(month)); ddlDOBDay.SelectedIndex = ddlDOBDay.Items.IndexOf(ddlDOBDay.Items.FindByText(day)); ddlDOBYear.SelectedIndex = ddlDOBYear.Items.IndexOf(ddlDOBYear.Items.FindByText(year)); } catch { /* error catching will go here */ } } txtAddress.Text = d.Address; } } } There are actually about 20 more input fields to do, but I wanted a code review at this point. Answer: Your use of linq is fine. You may consider using a more descriptive variable than a, such as patient You should pass any query parameters likely to change as variables. Consider: var patientID = 1198; var detail = from patient in da.tblPatients where patient.ID == patientID select patient; I find the form population logic and database design to be a touch more troubling. txtFirstName.Text = d.FirstName is perfectly fine. DOB, which I assume to be date of birth, should be stored and used as a Date rather than a String. If you are unable to change the data layer, at least convert it to a Date for use using Date.Parse or similar. Your try block is lazy coding.
Check the results from your dob.IndexOf and dob.Substring calls rather than relying on catching an error -- be defensive in your code. Consider the following: DateTime dob = DateTime.MinValue; if (DateTime.TryParse(d.DOB, out dob)) { ddlDOBMonth.SelectedValue = dob.Month.ToString(); ddlDOBDay.SelectedValue = dob.Day.ToString(); ddlDOBYear.SelectedValue = dob.Year.ToString(); }
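The same "parse, don't slice" advice applies in any language. As a quick illustration (Python here, with an assumed M/D/YYYY format), a try/parse helper returns a clean failure for every malformed input that the substring arithmetic would choke on:

```python
from datetime import datetime

def parse_dob(dob):
    """Return (month, day, year) from an M/D/YYYY string, or None on any
    malformed input, instead of substring surgery plus a bare catch."""
    try:
        parsed = datetime.strptime(dob, "%m/%d/%Y")
    except (TypeError, ValueError):
        return None
    return parsed.month, parsed.day, parsed.year
```

One call site, one failure mode to handle, and no index bookkeeping.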
{ "domain": "codereview.stackexchange", "id": 10489, "tags": "c#, linq, linq-to-sql" }
Which wave is widely used in waveguides?
Question: I've studied that rectangular waveguides are widely used because they have a lower cut-off frequency, and as we know the cut-off frequency is one of the major factors for waveguides. The article I've read has mentioned the TE wave! Question: Which wave is more commonly used in a waveguide? Or is my question wrong? It may be that both are used in particular situations. Answer: It depends on the shape. For a homogeneous rectangular waveguide the 1st mode is the $TE_{10}$ but for a circular guide it is $TE_{11}$. For a coaxial guide the fundamental mode is TEM, but if it is inhomogeneously filled, such as a microstrip line where propagation is partly in air and partly in the dielectric substrate, the mode is essentially a hybrid of TEM and TE. In dielectric guides that are popular in fiber optics the propagating modes are all hybrid modes, as well. In general, the closer the frequency is to that of the cutoff the more the mode approximates a pure TEM, TE or TM, respectively. Away from the cutoff there are no pure TEM, TE or TM modes unless the guide is filled homogeneously.
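For the rectangular case, the mode ordering drops out of the cutoff formula $f_c = \frac{c}{2}\sqrt{(m/a)^2 + (n/b)^2}$ for an air-filled guide. A quick numerical check in Python (the WR-90 dimensions are a standard X-band example, assumed here for illustration) shows why $TE_{10}$ is the dominant mode:

```python
from math import sqrt

C = 299_792_458.0  # speed of light in vacuum, m/s

def cutoff_hz(m, n, a, b):
    """Cutoff frequency of the TE_mn mode of an air-filled rectangular
    guide with broad wall a and narrow wall b (metres). TM_mn shares the
    formula but requires m >= 1 and n >= 1."""
    return (C / 2) * sqrt((m / a) ** 2 + (n / b) ** 2)

# WR-90 X-band guide: a = 22.86 mm, b = 10.16 mm (assumed example)
a, b = 22.86e-3, 10.16e-3
modes = {(m, n): cutoff_hz(m, n, a, b)
         for m in range(3) for n in range(3) if (m, n) != (0, 0)}
dominant = min(modes, key=modes.get)   # lowest cutoff = dominant mode
```

The $TE_{10}$ cutoff comes out near 6.56 GHz, half that of the next mode ($TE_{20}$), which is the single-mode operating window that makes $TE_{10}$ the workhorse in practice.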
{ "domain": "physics.stackexchange", "id": 51783, "tags": "frequency, microwaves, waveguide" }
Why do my slow-burning fuses go out?
Question: So recently I have been making some slow-burning fuses with: 34 g potassium nitrate, 26 g sugar, 60 g of water and some wool yarn. However, when I light one up it burns for a few seconds and then just goes out. I have also dried it at 130 °C for 20 min. How can I make it last the length of the yarn? edit: I was using cotton and not wool (sorry). Answer: Have you evaluated the ideal theoretical or experimental ratio $\ce{KNO3}$ : sum of sugar and cotton? The stoichiometric mass ratio $\ce{KNO3}$ : sucrose is 3 : 1. $$\ce{48 KNO3 + 5 C12H22O11 -> \\ 60 CO2 + 55 H2O + 24 N2 + 24 K2O}$$ (A part of the potassium oxide and carbon dioxide may end up as potassium carbonate, but it does not affect the ratio.) The needed ratio is even higher as the cotton thread is being burnt as well. Your ratio is way too low. You may need to use a hot solution, as $\ce{KNO3}$ solubility rises dramatically with temperature. As the info has been corrected, the stoichiometric ratio for cotton is similar to that for sugar/sucrose, i.e. 3 : 1. You can estimate the overall mass ratio from the nitrate/sugar ratio in solution and the final/initial thread mass ratio. If you use nitrate without sugar, the final thread mass should be 4 times bigger than its initial mass. If you do use sugar and if the solution nitrate/sugar mass ratio is $a$ and the final/initial thread mass ratio is $b$, then the total nitrate/(sugar+cotton) ratio is $$\frac{(b-1)a/(a+1)}{1+(b-1)1/(a+1)} = \\ \frac{(b-1)a}{a+b}$$ If we respect the desired 3 : 1 ratio and if we use the solution ratio $a$, then the final/initial thread mass ratio must be: $$3 = \frac{(b-1)a}{a+b}$$ $$4a + 3b = ab$$ $$b=\frac{4a}{a-3}$$ I think it is worth trying: a hot saturated $\ce{KNO3}$ solution, without sugar a $\ce{KNO3}$/sugar ratio of 4-6 : 1, as a correction for the cotton. as a supporting procedure, cumulative deposition by repeated soaking/drying. to let (repeatedly) $\ce{KNO3}$ crystallize on the thread in a saturated solution being cooled down.
(The original wool part had been left for completeness, but is not relevant any more.) Estimated stoichiometric mass ratio $\ce{KNO3}$ : wool is 4.2 : 1. (based on typical ranges of major elemental wool composition, represented by the model empirical formula $\ce{C15O5N4H20}$ ). $$\ce{14 KNO3 + C15O5N4H20 -> \\ 15 CO2 + 10 H2O + 9 N2 + 7 K2O}$$
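To put numbers on the $b = 4a/(a-3)$ result above, a tiny Python helper (my own bookkeeping, same formula) tabulates the required thread mass gain:

```python
def required_mass_gain(a):
    """Final/initial thread mass ratio b needed for an overall
    KNO3 : (sugar + cotton) mass ratio of 3 : 1, given a solution
    nitrate/sugar mass ratio a. From b = 4a / (a - 3); needs a > 3."""
    if a <= 3:
        raise ValueError("with a <= 3 the 3 : 1 target is unreachable")
    return 4 * a / (a - 3)
```

So a 4 : 1 solution demands the thread finish 16 times heavier, a 6 : 1 solution 8 times, and with no sugar at all (a tending to infinity) the factor tends to the pure-nitrate value of 4.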
{ "domain": "chemistry.stackexchange", "id": 17368, "tags": "inorganic-chemistry, explosives, pyrotechnics" }
How can I implement an arbitrary quantum channel in a quantum circuit for real experiments using IBM quantum experience?
Question: Let's suppose I have a quantum channel given in the Kraus decomposition $T(\rho) = \sum_{i} K_i \rho K_i^{\dagger} $. Is there any way to explicitly put these $K_i$ into IBM quantum experience (QE) to run real experiments? Or is it only possible to implement quantum channels in IBM QE by using combinations of unitary gates? Answer: I don't think there is a general method in Qiskit that takes Kraus operators as input and runs them on a real device by controlling the noise of the hardware (instead of using a larger Hilbert space for instance). In fact, there is no method to do that with arbitrary unitary gates either (as far as I know); you need to first decompose your unitary matrix into elementary gates that have an implementation on the hardware. So you have two possibilities to implement your channel: You don't mind using a larger Hilbert space (i.e. some ancilla qubits). Then you have many ways to implement your channel. But there is no general function in Qiskit to do that either; you need to do the maths and figure out the algorithm yourself. If your Kraus operators are nice enough (for instance unitary), you can try to extend this example to your case. Otherwise, you might need to use an LCU (linear combination of unitaries) decomposition or some other techniques. You really want to use the intrinsic noise of the device. Then, maybe, your channel is implementable using OpenPulse, but you will need to prepare it using some quantum control and it will be hard if you don't have the background.
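Whichever route you take, it is worth sanity-checking the decomposition numerically before worrying about hardware. A NumPy sketch (amplitude damping chosen here purely as a stand-in channel) verifies the completeness relation $\sum_i K_i^\dagger K_i = I$ and trace preservation, which any circuit realisation, with or without ancillas, must reproduce:

```python
import numpy as np

def apply_channel(kraus_ops, rho):
    """T(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Stand-in channel: single-qubit amplitude damping, decay probability gamma
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
kraus = [K0, K1]

# Completeness relation sum_i K_i^dagger K_i = I (trace preservation)
completeness = sum(K.conj().T @ K for K in kraus)

rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+| input state
rho_out = apply_channel(kraus, rho_plus)
```

If the completeness check fails, no implementation strategy (ancilla dilation, LCU, or pulse-level control) will yield a physical channel, so this is a cheap first gate to pass.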
{ "domain": "quantumcomputing.stackexchange", "id": 1425, "tags": "qiskit" }
Energy in $RC$ Circuit
Question: In circuit A, the total energy dissipated in the resistor is $\frac{Q_i^2}{2C}$ which equals the initial energy, meaning that all the energy was dissipated in the resistor, and lost as heat. Here's my problem: I have read that in the circuit in figure B, the energy is dissipated as electromagnetic waves (which I have yet to learn about). Why do these waves not occur in circuit A? Answer: Circuit $B$ is an LC circuit so it will oscillate with an angular frequency $\omega = 1/\sqrt{LC}$. You will probably point out that there is no inductor in the circuit but all electrical components, even a straight wire, have an inductance so your circuit will have an inductance even though it's probably very small. The oscillating current generates an oscillating electromagnetic field around the circuit, and that oscillating EM field radiates energy as EM waves. This is basically how a radio aerial works. Circuit $A$ is an RLC circuit i.e. an LC circuit like $B$ but with a resistor that provides damping. And since the inductance is very small for any reasonable value of the resistance the circuit will be overdamped. That means instead of oscillating the current will decay exponentially with time. Since there are no oscillations there is no oscillating EM field and therefore no EM radiation. All the energy is lost in the resistor instead.
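The overdamping claim can be checked with numbers. For a series RLC loop, oscillation requires $R < 2\sqrt{L/C}$; with a plausible stray inductance (the component values below are illustrative assumptions, not taken from the figure):

```python
from math import pi, sqrt

def is_oscillatory(R, L, C):
    """Series RLC: underdamped (it rings, and can radiate) iff R < 2*sqrt(L/C)."""
    return R < 2 * sqrt(L / C)

L_stray = 100e-9    # ~100 nH of stray wiring inductance (assumed)
C = 1e-6            # 1 uF capacitor (assumed)

R_wire = 0.01       # circuit B: essentially just wire resistance, ohms
R_resistor = 1e3    # circuit A: an explicit 1 kOhm resistor

f_natural = 1 / (2 * pi * sqrt(L_stray * C))   # ringing frequency if underdamped
```

For these values the critical resistance is only about 0.63 ohm, so bare wire rings at roughly 500 kHz while anything like a real resistor pushes the circuit deep into the overdamped, non-radiating regime.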
{ "domain": "physics.stackexchange", "id": 90367, "tags": "electric-circuits, energy-conservation, electrical-resistance, capacitance" }
Stress calculations in a perforated paper
Question: You have a sheet of paper (torn out of a good quality foolscap notebook) as shown above, and you start pulling it apart with both your hands (forces indicated by the blue arrows). It's difficult to tear the paper apart this way --- it has a very high tensile strength, I assume. If you try this at home, you are bound to fail. Now if you make tiny perforations in the paper (indicated by red circles) and pull it apart the same way, you'll notice that it's very easy to tear apart the paper. And the line of tearing will definitely have a few circles on it. Is this happening because of the high stresses that are formed around the circles? Can someone give a good mathematical as well as verbal explanation of this phenomenon. Answer: Yes. The tear is initiated at stress concentrations around the holes, where stress is highest. After initiation, the tear continues to propagate along the line of highest stress. Stress is a function of force and geometry ($\sigma_{n} = \frac {F}{A_{n}}$). In a piece of paper without holes, the stress is uniform, and the paper tears when stress exceeds the ultimate tensile strength of the paper ($\sigma_{n} \gt \sigma_{UT, paper}$). Average Stress- the most basic explanation When holes are present, they effectively reduce the cross sectional area ($A_{eff} = A_{o} - n_{holes}A_{hole}$)* that transmits force. Static equilibrium requires that the stress increases proportional to the reduction in area, visualized below. $$\therefore \sigma_{eff} = \frac {F}{A_{eff}}$$ Because $\sigma_{eff} \gt \sigma_{n}$, it follows that the paper will tear along cross sections where holes are present. While this correctly calculates the average stress, it assumes the stress between holes is uniform (and equal to the average stress). In reality, the stress profile between holes is not uniform, as discussed in the following.
* Note that 'area' refers to cross sectional area Stress Concentrations- approximate stress state Stress concentrations describe the stress state at abrupt changes in geometry, where the stress profile is non-uniform. Analogous to lines of pressure in (laminar) fluid flow around an immersed body, lines of force 'flow' through geometry, becoming concentrated around the holes (where no material exists to transmit force). A stress concentration factor ($K_{s}$) is applied to the nominal stress to calculate the maximum stress, where $\sigma_{max} = K_{s}\sigma_{n}$. Stress concentration factors depend on geometry and are determined analytically, or by experimental data. From the analytical solution of an infinite plate with a single hole, loaded uniaxially, $K_{s} = 3$. More applicable, from Peterson's Stress Concentration Factors, an infinite plate with a linear hole pattern: $$\therefore \sigma_{SC} \approx 3 \sigma_{n}$$ Finite Element Method- complete stress state Complete, accurate solutions are readily obtained by Finite Element Methods (FEM), where analytic solutions are not possible with complex geometry. With assumed dimensions (similar to the posed problem), $K_{s} = 3.75$, determined from the converged solution shown below (where 'brighter' colors indicate higher stress, consistent with the solution given by stress concentration). $$\therefore \sigma_{FEM} = 3.75 \sigma_{n}$$ All solution methods demonstrate that stress increases in cross sections where holes are present: When stress at any location in the paper exceeds the paper's strength ($\sigma_{max} \gt \sigma_{UT, paper}$), a tear is initiated and follows the line of highest stress- this validates your insight. 
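The three estimates line up numerically like so (Python; the strip dimensions and hole pattern below are invented for illustration, while the factors are the ones quoted above):

```python
def stresses(force, width, thickness, n_holes, hole_d, k_s=3.75):
    """Nominal (far-field), net-section average, and peak stress in Pa for
    a strip with a row of holes across the loaded cross-section."""
    area_gross = width * thickness
    area_net = (width - n_holes * hole_d) * thickness
    sigma_n = force / area_gross        # uniform far-field stress
    sigma_avg = force / area_net        # average over the reduced section
    sigma_max = k_s * sigma_n           # peak at a hole edge, from K_s
    return sigma_n, sigma_avg, sigma_max
```

For a 100 mm wide, 0.1 mm thick strip with five 2 mm holes, the net-section average is only about 1.1 times the far-field stress, but the hole-edge peak is 3.75 times it, which is where the tear starts.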
There are several other considerations, not discussed, but listed under 'References': Paper is not a ductile material- it does not plastically deform Paper is not (usually) isotropic- it is orthotropic, where its strength depends on orientation Fracture mechanics (founded on the assumption that all materials have defects)- material irregularities act as micro stress concentrations References: Mechanical Properties of Paper- Basic Mechanical Properties of Paper-Advanced Fracture Mechanics of Paper
{ "domain": "physics.stackexchange", "id": 26549, "tags": "newtonian-mechanics, classical-mechanics, tensor-calculus, stress-strain, stress-energy-momentum-tensor" }
Is HCl (l) + H2O (l) equal to HCl (aq)?
Question: The Wikipedia article about aqueous solutions claims that: An aqueous solution is a solution in which the solvent is water So basically, is writing $\ce{HCl(aq)}$ equal to: $\ce{HCl(l) + H2O(l)}$? Answer: $\ce{HCl{(l)}}$ is not the correct notation in this case. Arrhenius acids and bases (chemicals that interchange $\ce{H^+}$ with water to form $\ce{H_3O^+}$ or $\ce{OH^-}$) break apart in water, forming ions. In your case, there is an equilibrium $\ce{HCl{(aq)} + H_2O{(l)}<=> H_3O^+{(aq)} + Cl^-{(aq)}}$ which very strongly lies to the right; virtually all of the $\ce{HCl{(aq)}}$ is broken apart in this way (which is why $\ce{HCl}$ is known as a strong acid). Now, not everything dissolved in water has to dissociate into ions. For example, glucose $\ce{(C_6H_12O_6{(s)})}$ dissolves in water to form $\ce{C_6H_12O_6{(aq)}}$. Weak acids are acids that do not entirely dissociate into ions (for example, $\ce{CH_3COOH{(aq)} + H_2O{(l)} <=> CH_3COO^-{(aq)} + H_3O^+{(aq)}}$ still has some undissociated $\ce{CH_3COOH{(aq)}}$ in solution.) A final point is that some dissolved species can, in fact, be solvents on their own. In this case, the definition of whether the species is aqueous or liquid is not well-defined and usually depends on the context. For example, a 1:4 mixture of $\ce{MeOH}$ and $\ce{H_2O}$ is typically written as $\ce{MeOH{(l)} + {H_2O{(l)}}}$, but a solution where a small amount of $\ce{MeOH}$ (maybe used as a reactant) would be denoted $\ce{MeOH{(aq)}}$. This has to do with the definition of the thermodynamic activities of the species and if you have questions about this, there's probably someone else on here who is a better expert than I to explain.
{ "domain": "chemistry.stackexchange", "id": 7581, "tags": "aqueous-solution" }
Energy shift between hydrogen and deuterium
Question: Stated: The atomic spectra of hydrogen and deuterium are similar, however shifted in energies. So I'm trying to explain why it is that the emission lines are shifted and how they are shifted. Since the nucleus of deuterium contains both a proton and a neutron, it's notably heavier than the nucleus of hydrogen, which only contains a proton. And since the transition energy is given by the following equation: $E_i-E_f = \frac{\mu_xZ^2e^4}{(4\pi\varepsilon_0)^22\hbar^2}\left[\frac {1}{n_f^2}-\frac {1}{n_i^2}\right]$, where $\mu_x $ is the reduced mass of atom $X$, it's clear that the energy varies with the atomic mass. But I don't really know how to tackle the second part of my problem, i.e. explaining how they are shifted. Thank you in advance! Answer: Consider the ratio of their shifts in energy. Since all the numbers are the same except for the reduced mass you are looking at something like: $$ \frac{(E_i -E_f)_p}{(E_i -E_f)_d}=\frac{\mu_p}{\mu_d}=\frac{m_p m_e(m_d +m_e)}{(m_p +m_e)m_d m_e} $$ If we cancel the $m_e$ and then pull a $m_d$ out of the top and a $m_p$ out of the bottom we get $$ \frac{(E_i -E_f)_p}{(E_i -E_f)_d}=\frac{(1+\frac{m_e}{m_d})}{(1+\frac{m_e}{m_p})} $$ I hope this helps.
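Putting numbers into that final ratio shows both the direction and the size of the shift (Python; the mass values are standard CODATA-level figures):

```python
m_e = 9.1093837e-31   # electron mass, kg
m_p = 1.6726219e-27   # proton (hydrogen nucleus) mass, kg
m_d = 3.3435838e-27   # deuteron mass, kg

# Ratio of transition energies from the reduced-mass expression above
ratio = (1 + m_e / m_d) / (1 + m_e / m_p)   # E_H / E_D, slightly below 1

# Wavelength scales as 1/E, so the deuterium line is shifted blueward
h_alpha_H = 656.28e-9            # hydrogen Balmer-alpha wavelength, m
h_alpha_D = h_alpha_H * ratio
shift_nm = (h_alpha_H - h_alpha_D) * 1e9
```

The ratio is just below 1, so every deuterium line sits at slightly higher energy (shorter wavelength) than its hydrogen counterpart; for Balmer-alpha the separation comes out near 0.18 nm, the classic H/D doublet splitting used to detect deuterium spectroscopically.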
{ "domain": "physics.stackexchange", "id": 4417, "tags": "quantum-mechanics, atoms, spectroscopy" }
Potential explanations of Red Sea crossing
Question: I am looking for a believable explanation of the Red Sea Crossing in the Bible. This would involve either strong winds (which are mentioned in the Bible) or plate tectonics, which could cause land to rise up and fall again. My problem with wind is that I know of no incident where strong winds parted a sea --- even one as shallow as the Red Sea, which is, as far as I know, as shallow as 50 m in some areas. Winds always move inland, not parallel to the coast. Tidal waves and tsunamis always crash into the land, not part the sea. Unless I am mistaken..... Plate movement that both raised and lowered the seabed seems more likely, and the Red Sea sees a lot of earthquakes. I've heard the surface area can be crossed in as little as 4 hours. Does anyone know of an example where the land was both raised and lowered in an earthquake, high and low enough to create both a crossing and a drowning? Answer: To my knowledge, the best study looking at potential explanations for the Red Sea crossing is the one by Nof and Paldor (1992). They present a couple of plausible scenarios for the crossing. The main one is the effect of strong winds blowing along the Gulf of Suez, and they find that the sea level drop could be sufficient: It is found that, even for moderate storms with wind speed of about 20 m s−1, a receding distance of more than 1 km and a sea level drop of more than 2.5 m are obtained. These relatively high values are a result of the unique geometry of the gulf (i.e., its rather small width-to-length and depth-to-length ratios) and the nonlinearity of the governing equation. Upon an abrupt relaxation of the wind, the water returns to its prewind position as a fast (nonlinear) gravity wave that floods the entire receding zone within minutes. The second mechanism involves a tsunami propagating along the Gulf of Suez from the Red Sea, but it seems like way more of a stretch. Nof D. and N. Paldor, 1992: Are There Oceanographic Explanations for the Israelites' Crossing of the Red Sea?.
Bull. Amer. Meteor. Soc., 73, 305–314. doi: http://dx.doi.org/10.1175/1520-0477(1992)073%3C0305:ATOEFT%3E2.0.CO;2
{ "domain": "earthscience.stackexchange", "id": 224, "tags": "ocean, earthquakes, sea-level, wind, waves" }
Indoor GPS with Navsat_transform
Question: I have a set of Marvelmind Robotics indoor GPS beacons and have it set up with pose data (no yaw, only x, y) published on a topic with msg type: nav_msgs::Odometry Is there a way to get this working with the Navsat transform? UTM co-ordinates are not used here. Is this the route to go: One instance of robot_localization with continuous sensors, in odom_frame. Second instance of robot_localization with output of (1.) + GPS(pose0) in map frame I'm using this documentation as a reference. Thanks. Originally posted by bluehash on ROS Answers with karma: 120 on 2017-05-02 Post score: 0 Answer: The point of navsat_transform_node is to convert GPS data into world-frame coordinates in a nav_msgs/Odometry message. The sensor you are describing is not actually a GPS, and already provides the (X, Y) position of your robot, so just use that data directly as an input to the EKF (i.e., don't bother running navsat_transform_node). Also, I don't recommend feeding the output of (1) into (2). Just fuse the same inputs that (1) has, and add the input for the indoor GPS sensor. Remember that you don't want to have more than one EKF input measuring the same pose variable (X, Y, Z, roll, pitch, or yaw). EDIT 1 to clarify previous answer: Sure, you can run two state estimation nodes. You can do something like this: EKF (odom): fuse wheel encoder odometry (velocities) and IMU EKF (map): fuse wheel encoder odometry (velocities), IMU, and the nav_msgs/Odometry message that is being spit out by your indoor "GPS." My point is that if you had a real GPS unit, the role of navsat_transform_node would be to convert latitude and longitude into a pose that you can fuse into the second EKF. Since you already have the indoor beacons spitting out pose data in a message type that the EKF can take in directly, you don't need navsat_transform_node.
Originally posted by Tom Moore with karma: 13689 on 2017-05-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by bluehash on 2017-05-04: A little confused with the documentation here and what you said. Do I still use two state estimation nodes? The GPS is discontinuous. Cont. sensors with world_frame as odom. Cont. sensors + GPS with world_frame as map. What happens to the output of 1. in this case? Comment by Chrisando on 2019-06-06: Please correct me if I am wrong; to my current understanding navsat_transform_node not only transforms GPS data into a nav_msgs/Odometry message but also finds a transformation between the starting point of the robot and the first GPS measurement. My indoor GPS beacons report poses in a different coordinate frame than the starting frame of the robot. Is there any package that would take care of this transformation for me? Alternatively, what would be the most convenient way to achieve the same behavior? Modify navsat_transform_node to work with indoor GPS beacons instead of a normal GPS? Comment by Tom Moore on 2019-06-13: You should just be able to create a simple python node that takes in the initial pose of your robot in your world frame, and the initial pose that comes from your indoor GPS beacons, creates a static transform between them, and publishes. Then the EKF can receive your GPS poses, transform them, and fuse them. Comment by Chrisando on 2019-06-13: Thank you Tom for the answer! How about the orientation problem? (GPS only gives x,y,z position). Shall I make my robot move forward some distance and calculate orientation based on that? How is it usually done? Thank you! Comment by Tom Moore on 2019-07-03: If you lack an IMU, then yes, that seems like a valid approach.
{ "domain": "robotics.stackexchange", "id": 27781, "tags": "navigation, ekf, gps, robot-localization" }
How to derive Eq. (2.1.24) in Polchinski's string theory book
Question: Excuse me, I got one more stupid question in Polchinski's string theory book :( $$\partial \bar{\partial} \ln |z|^2 = 2 \pi \delta^2 (z,\bar{z}) (1) $$ I shall check this equation by integrating both sides over $\int \int d^2z $ The right hand side is obviously $2\pi$. The left hand side is evaluated as follows: $$ \partial \bar{\partial} \ln |z|^2 = \partial \bar{\partial} \left( \ln z + \ln \bar{z} \right) = \partial \left( \bar{\partial} \ln \bar{z} \right) + \bar{\partial} \left( \partial \ln z \right) (2) $$ with the help of Eq. (2.1.9) in that book, we have $$ \int \int_R d^2 z \left[ \partial \left( \bar{\partial} \ln \bar{z} \right) + \bar{\partial} \left( \partial \ln z \right) \right] = i \oint_{\partial R} \bar{\partial} \ln \bar{z} d \bar z - \partial \ln z d z (3) $$ $$= i \oint_{\partial R} \frac{1}{\bar{z}} d \bar z - \frac{1}{z} d z = 2 \pi + \oint_{\partial R} \frac{1}{\bar{z}} d \bar z $$ Here I have used the contour integral. But is the remaining term $\oint_{\partial R} \frac{1}{\bar{z}} d \bar z$ zero? Why? Answer: Obviously you are trying to solve problem 2.1 from Polchinski's book, which states: "Verify that $\partial\overline{\partial}\ln\left|z\right|^{2}=\partial\frac{1}{\overline{z}}=\overline{\partial}\frac{1}{z}=2\pi\delta^{2}\left(z,\:\overline{z}\right)$ (a) by use of the divergence theorem (2.1.9) ($\int_{R}d^{2}z\left(\partial_{z}v^{z}+\partial_{\overline{z}}v^{\overline{z}}\right)=i\oint_{\partial R}\left(v^{z}d\overline{z}-v^{\overline{z}}dz\right)$); (b) by regulating the singularity and then taking the limit." (a) You have, for holomorphic test functions $f(z)$: $\int_{R}d^{2}z\partial\overline{\partial}\ln\left|z\right|^{2}f\left(z\right)=\int_{R}d^{2}z\overline{\partial}\frac{1}{z}f\left(z\right)=-i\oint_{\partial R}dz\frac{1}{z}f\left(z\right)=2\pi f\left(0\right)$.
For antiholomorphic test functions $f\left(\overline{z}\right)$: $\int_{R}d^{2}z\,\partial\overline{\partial}\ln\left|z\right|^{2}f\left(\overline{z}\right)=\int_{R}d^{2}z\,\partial\frac{1}{\overline{z}}f\left(\overline{z}\right)=i\oint_{\partial R}d\overline{z}\frac{1}{\overline{z}}f\left(\overline{z}\right)=2\pi f\left(0\right)$, where the sign follows from $\oint d\overline{z}/\overline{z}=-2\pi i$. (b) Now comes the second part of the problem: to regulate $\ln\left|z\right|^{2}$, use the good old $\epsilon$-environment trick and rewrite it as $\ln\left(\left|z\right|^{2}+\epsilon\right)$. This also regularizes $\frac{1}{z}$ and $\frac{1}{\overline{z}}$: $\partial\overline{\partial}\ln\left(\left|z\right|^{2}+\epsilon\right)=\partial\frac{z}{\left|z\right|^{2}+\epsilon}=\overline{\partial}\frac{\overline{z}}{\left|z\right|^{2}+\epsilon}=\frac{\epsilon}{\left(\left|z\right|^2+\epsilon\right)^{2}}$. From this point the symmetry of the problem makes polar coordinates more convenient. There, consider a general test function $f\left(r,\:\theta\right)$, and define $g\left(r^{2}\right)\equiv\int d\theta f\left(r,\:\theta\right)$, which is assumed to be sufficiently well behaved in the asymptotic cases $0$ and $\infty$; then, integrating by parts twice and taking $\epsilon\to0$, $\int d^{2}z\frac{\epsilon}{\left(\left|z\right|^{2}+\epsilon\right)^{2}}f\left(z,\:\overline{z}\right)=\int^{\infty}_{0} du\frac{\epsilon}{\left(u+\epsilon\right)^{2}}g\left(u\right)=\left.\left(-\frac{\epsilon}{u+\epsilon}g\left(u\right)+\epsilon\ln\left(u+\epsilon\right)g^{\prime}\left(u\right)\right)\right|_{0}^{\infty}-\int^{\infty}_{0} du\,\epsilon\ln\left(u+\epsilon\right)g^{\prime\prime}\left(u\right)\to g\left(0\right)=2\pi f\left(0\right).$
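As a sanity check on part (b) (my own sketch, not from the original answer), one can evaluate the regulated integral $\int_0^\infty du\,\frac{\epsilon}{(u+\epsilon)^2}g(u)$ numerically for a trial profile such as $g(u)=e^{-u}$ and watch it approach $g(0)$ as $\epsilon\to 0$. The substitution $u=\epsilon(e^t-1)$ resolves the peak of width $\sim\epsilon$ at the origin:

```python
import math

def regulated_delta_integral(eps, g, t_max=30.0, n=200_000):
    """Trapezoid evaluation of int_0^inf du eps/(u+eps)^2 * g(u),
    using the substitution u = eps*(e^t - 1); then
    eps/(u+eps)^2 * du = e^{-t} dt, a smooth integrand."""
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        u = eps * (math.exp(t) - 1.0)
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * math.exp(-t) * g(u)
    return total * h

g = lambda u: math.exp(-u)               # trial profile with g(0) = 1
print(regulated_delta_integral(1e-3, g))  # close to 1, error O(eps*ln(1/eps))
print(regulated_delta_integral(1e-6, g))  # closer still
```

The deficit shrinks like $\epsilon\ln(1/\epsilon)$, consistent with the boundary terms in the partial integration above.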
{ "domain": "physics.stackexchange", "id": 8655, "tags": "homework-and-exercises, string-theory" }
Why does inverting a song have no influence?
Question: I inverted the waveform of a given song and was wondering what would happen. The result is that it sounds exactly the same as before. I used Audacity and double-checked that the waveform really is inverted. The second thing I tried: I removed the right channel, duplicated the left one and set the duplicated layer as the right channel. This way I made sure that both channels are exactly the same. Then I inverted the second channel only. I thought that this would create some kind of anti-noise, but it didn't. Why is that? Answer: The human ear responds only to the intensity $I$ of the sound it receives (more specifically, to the intensity distribution over the different frequencies), and this goes more or less like the square of the amplitude, $$I\sim A^2.$$ Changing the sign of the waveform changes the sign of $A$, which has no effect on $I$.
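Both observations can be checked numerically (a quick numpy sketch, not part of the original answer): inverting a waveform leaves every magnitude in its spectrum unchanged, while mixing the original with the inverted copy cancels exactly, which is the "anti-noise" case:

```python
import numpy as np

# A toy "song": two tones plus a little noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
wave = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*660*t) + 0.01*rng.normal(size=t.size)

inverted = -wave  # what Audacity's "Invert" effect does

# Identical magnitude spectra -> identical perceived sound
assert np.allclose(np.abs(np.fft.rfft(wave)), np.abs(np.fft.rfft(inverted)))

# ...but summing original and inverted copies gives exact silence
assert np.allclose(wave + inverted, 0.0)
```

The second assert is why the two-channel experiment produced no anti-noise: each channel was played separately to a different ear, so the cancellation never happened in the air.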
{ "domain": "physics.stackexchange", "id": 5970, "tags": "waves, acoustics" }
Connecting two vessels of equal volume, one with an ideal gas and the other evacuated
Question: Let's say we have two vessels connected by a valve. Both vessels have the same volume. Initially there is vacuum in one vessel, and the other is maintained at temperature $T_0= 300K$ and pressure $p_0=1 atm$. The condition on the valve is such that it opens only when the pressure difference $\triangle p \ge 1.10atm$. Finally both vessels were heated to a temperature $T'=380K$, and I want to find the increment in the pressure in the vessel that initially contained the gas. Here it's clear that the number of moles initially in the first vessel at temperature $T_0$ and pressure $p_0$ is $$n = {p_0V}/{RT_0}$$ The gas distributes between the two vessels. Let's say that finally there are $n_1$ moles at pressure $p_1'$ in the first vessel and $n_2$ moles at pressure $p_2'$ in the second vessel (the one that initially held vacuum); then we can write $$n_1={p_1'V \over RT'}$$ $$n_2={p_2'V \over RT'}$$ We can also use $$n_1+n_2=n$$ $$ {p_1'V \over RT'}+ {p_2'V \over RT'} = {p_0V \over RT_0}$$ Now here comes my question. IS THERE ANY RELATION BETWEEN THE FINAL AND INITIAL PRESSURES OF THE TWO VESSELS? My solution manual says that $p_2'=p_1'- \triangle p$. I don't see how this can be true: after the pressure difference $\triangle p$ has been reached, the two vessels exchange moles freely, so how can the final pressures still differ by exactly $\triangle p$? Answer: I will consider the process to be quasi-static: this assumption doesn't reflect what's really going on, but it is the only way to solve this problem; indeed, if it wasn't quasi-static, then there could be pressure inhomogeneity, which would make the problem unsolvable without knowing the geometry of the vessels. So, let's consider that at any given time the pressure is homogeneous in each of the two vessels. You know that there will be a gas exchange only if $\Delta p \geq 1.1$ atm. 
In other words, the process stops when $\Delta p < 1.1$ atm, or shortly after $\Delta p$ reaches $1.1$ atm. Then you know that, by definition of $\Delta p$, $p_2' = p_1' - \Delta p$. On the other hand, as you noticed, $n_1 + n_2 = n$, so these two equations give you the final pressures in terms of the temperature (which enters because the gas is ideal). Now you can find the link between the initial and final pressures.
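Putting the two final-state relations together with the numbers from the question pins down the pressures directly (a numeric sketch under the quasi-static assumption above, not part of the original answer):

```python
# Final-state relations:
#   p1 + p2 = p0 * T' / T0   (mole conservation; equal volumes V, common final T')
#   p1 - p2 = dp             (flow stops when the difference drops to dp)
p0, T0, Tp, dp = 1.0, 300.0, 380.0, 1.10   # atm, K, K, atm

p_sum = p0 * Tp / T0          # p1 + p2 from n1 + n2 = n
p1 = (p_sum + dp) / 2
p2 = (p_sum - dp) / 2
print(p1, p2)                 # about 1.183 atm and 0.083 atm
```

The first relation is just $n_1+n_2=n$ with both final amounts written via the ideal gas law at $T'$; the second is the solution manual's $p_2'=p_1'-\triangle p$.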
{ "domain": "physics.stackexchange", "id": 67523, "tags": "thermodynamics" }
$\partial^{\nu} \partial_{\nu}$ vs. $\partial_{\nu} \partial^{\nu}$
Question: I was doing a problem regarding field theory. I am given the following Lagrangian density: $$\mathcal{L}=\frac{1}{2}\partial_\mu\phi_i\partial^\mu\phi_i-\frac{m^2}{2}\phi_i\phi_i$$ for three scalar fields. I want to determine the equations of motion for the field $\phi_i$. I used the Euler-Lagrange equation: $$\frac{\partial\mathcal{L}}{\partial\phi_j}-\partial_\mu[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi_j)}]=0.$$ My question is, is there a difference between using the Euler-Lagrange equation the way I wrote above and the Euler-Lagrange equation with an upper $\mu$ index: $$\frac{\partial\mathcal{L}}{\partial\phi_j}-\partial^\mu[\frac{\partial\mathcal{L}}{\partial(\partial^\mu\phi_j)}]=0~?$$ I ask this because at the end of my calculations, if I use the former equation, I get $$\partial_\nu\partial^\nu\phi_j+m^2\phi_j=0$$ while the latter equation gives: $$\partial^\nu\partial_\nu\phi_j+m^2\phi_j=0.$$ Are both the Klein-Gordon equation, or just the last one? Answer: As mentioned in the comments by WarreG and user2723984, since the full expression of the two in terms of the metric tensor is $$ \partial_\nu \partial^\nu=\partial_\nu g^{\mu\nu}\partial_\mu=\partial^\mu\partial_\mu, $$ the two equations are identical.
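A toy numerical illustration of the same point (my own sketch, not from the answer): since $g^{\mu\nu}$ is symmetric and partial derivatives commute, contracting the metric against the symmetric second-derivative matrix in either index order gives the same number:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric g^{mu nu}
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
D = (A + A.T) / 2                      # stands in for d_mu d_nu phi (symmetric: partials commute)

box1 = np.einsum('nm,nm->', g, D)      # d_nu d^nu phi: contract g^{nu mu} D_{nu mu}
box2 = np.einsum('mn,nm->', g, D)      # d^nu d_nu phi: contract g^{mu nu} D_{nu mu}
assert np.isclose(box1, box2)          # identical, as the answer states
```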
{ "domain": "physics.stackexchange", "id": 58384, "tags": "special-relativity, lagrangian-formalism, metric-tensor, field-theory, klein-gordon-equation" }
Why are small particles attracted by charged objects?
Question: Everyone knows this experiment: you mix salt and pepper and use a charged balloon to separate the pepper from the salt. I never really understood how this works. In school (a long time ago) we learned that unlike charges attract each other while like charges repel each other. In the experiment the negatively charged balloon attracts the pepper. Does this mean the pepper has a positive charge? Why is pepper charged? How can I predict, for a given particle, whether the balloon will attract it, repel it, or not interact with it? Answer: Promoted from a comment that should have been posted as an answer: No, it is about the mass to surface area ratio. The charged balloon induces polarisation on the salt and the pepper, but the salt is too heavy for this tiny separation, and thus tiny attractive force, to overcome its weight, whereas the pepper is light enough for that to happen. The salt, being made of ionic bonds, is probably more strongly polarised than the pepper, but the density difference is big enough. Note that neither salt nor pepper becomes charged; it is just that the charges separate in position a little, so that the closer one is attracted more than the farther one is repelled.
{ "domain": "physics.stackexchange", "id": 98041, "tags": "electrostatics, electricity, experimental-physics, charge" }
How can I automatically adjust PID parameters on the fly?
Question: I have a simple servo system that uses a PID controller implemented in an MCU to perform the feedback. However, the properties of the system change dynamically, and so the PID parameters can never be tuned for all circumstances. My robot is a lightweight arm with back-drivable electric motors, similar to this one: The arm performs several tasks, including picking up heavy weights and pushing and pulling objects across the desk. Each of these tasks requires different PID tuning parameters which I cannot easily predict. What I would really like is some higher-level function which can carefully adjust the parameters in response to the arm's behaviour. For example, if it notices that the arm is oscillating, it might reduce P and increase D. Or if it notices that the arm isn't reaching its target, it might increase I. Do such algorithms exist? I would be happy even if the algorithm didn't perfect the parameters immediately. E.g. the arm could oscillate a few times before the parameters were adjusted to their new values. Answer: A co-worker and I once implemented a simplex algorithm for on-the-fly tuning of the PID parameters of a current control loop for a motor. Essentially the algorithm would modify one parameter at a time and then collect data on some feedback parameter that was our measure of goodness. Ours was percent deviation from a current target setpoint. Based on whether the feedback parameter got better or worse, the next parameter was modified accordingly. Or, in Wikipedia speak: Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations which each give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot does improve the solution. Technically we used the Nelder-Mead method, which is a type of simplex. 
It could also be described as a hill-climbing algorithm if you watch how it modifies its input parameters as it searches for an optimum output parameter. Nelder-Mead worked best in our case because it can chase a setpoint. This was important because our current target setpoint changed as torque demand increased. "The Nelder–Mead technique is a heuristic search method that can converge to non-stationary points."
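A minimal sketch of the idea (my own illustration, not the authors' actual code): a greedy one-parameter-at-a-time hill climb, in the spirit of the approach described, tuning PID gains against a toy first-order plant whose "measure of goodness" is the mean absolute deviation from the setpoint:

```python
def simulate(gains, steps=200, dt=0.01):
    """Cost = mean absolute deviation from the setpoint for a toy
    first-order plant driven by a discrete PID controller."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                       # setpoint fixed at 1.0
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp*err + ki*integ + kd*deriv    # PID control law
        prev_err = err
        y += dt * (u - y)                   # first-order plant response
        cost += abs(err)
    return cost / steps

def tune(gains, step=0.5, iters=50):
    """Nudge each gain up/down in turn, keep whichever change lowers
    the cost, and shrink the step when no change helps."""
    best = simulate(gains)
    for _ in range(iters):
        improved = False
        for i in range(3):
            for delta in (step, -step):
                trial = list(gains)
                trial[i] = max(0.0, trial[i] + delta)
                c = simulate(trial)
                if c < best:
                    gains, best, improved = trial, c, True
        if not improved:
            step /= 2
    return gains, best

gains, cost = tune([1.0, 0.0, 0.0])
print(gains, cost)   # tuned gains reach a much lower cost than the start
```

Divergent gain candidates simply score a huge cost and are rejected, which is what makes this kind of online search safe to run continuously; Nelder-Mead replaces the fixed axis-aligned nudges with an adapting simplex of candidate gain sets.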
{ "domain": "robotics.stackexchange", "id": 21, "tags": "control, pid, automatic, tuning" }
Does ros::Duration::sleep spin?
Question: If I do ros::Duration::sleep does ros spin and are callback functions called? If not is there a way to pause the program while still managing callbacks? Originally posted by HassanNadeem on ROS Answers with karma: 71 on 2015-02-25 Post score: 2 Answer: If I do ros::Duration::sleep does ros spin and are callback functions called? No. Sleep just sleeps the thread for the specified duration. If not is there a way to pause the program while still managing callbacks? You can have a loop that alternates sleeping and spinning, as exemplified in this roscpp tutorial, or you can have separate spinner threads. I recommend reading the ROS wiki page on callbacks and spinning. Originally posted by Adolfo Rodriguez T with karma: 3907 on 2015-02-25 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Adolfo Rodriguez T on 2015-03-02: If your question is answered, could you post, for the record, which approached you ended up using?. Comment by Mehdi. on 2019-03-14: So if I have an image callback and a service callback and do sleep in the service callback, will the image callback be blocked too? Comment by knxa on 2019-10-07: @Mehdi: With the standard single-threaded ros::spin: yes
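The alternating sleep/spin pattern the answer describes can be sketched in plain Python (the `callback_queue` and `spin_once` below are stand-ins for illustration only, not the real ROS API; in roscpp you would use `ros::spinOnce()` together with a `ros::Rate`):

```python
import time
from collections import deque

callback_queue = deque()   # stand-in for the node's pending callbacks

def spin_once():
    """Drain whatever callbacks have been queued since the last call."""
    results = []
    while callback_queue:
        results.append(callback_queue.popleft()())
    return results

def pause_while_spinning(duration_s, rate_hz=10):
    """Pause for duration_s seconds while still servicing callbacks:
    alternate a short sleep with a spin instead of one long blocking
    sleep (which would starve the callback queue)."""
    deadline = time.monotonic() + duration_s
    handled = []
    while time.monotonic() < deadline:
        handled += spin_once()
        time.sleep(1.0 / rate_hz)
    return handled

callback_queue.append(lambda: "image received")
print(pause_while_spinning(0.3))   # the callback runs during the pause
```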
{ "domain": "robotics.stackexchange", "id": 20986, "tags": "ros, spin, sleep, time, clock" }
Why is PyTorch's DataLoader not deterministic?
Question: I've set the seeds like this (hoping to cover all bases): random.seed(666) np.random.seed(666) torch.manual_seed(666) torch.cuda.manual_seed_all(666) torch.backends.cudnn.deterministic = True The below code will still output DIFFERENT batches for namesTrainLoader1 and namesTrainLoader2, but they should really be the same. How come creating the model affects the deterministic values? namesDataset = NamesDataset() namesTrainLoader1 = DataLoader(namesDataset, batch_size=5, shuffle=True) for each in namesTrainLoader1: print(each) model = TorchRNN(inputSize, hiddenSize, outputSize) namesTrainLoader2 = DataLoader(namesDataset, batch_size=5, shuffle=True) for each in namesTrainLoader2: print(each) Output for namesTrainLoader1: ('saiki', 'close', 'sloan', 'horos', 'roman') ... Output for namesTrainLoader2: ('david', 'abeln', 'hatit', 'holan', 'protz') ... I also tried using worker_init_fn (e.g. with lambda x: 0) in the DataLoader, but that made no difference. Why is this not deterministic? How can I make it deterministic, i.e. reset the internal seed of the DataLoader? Answer: If you want to shuffle the data in a deterministic way, how about shuffling the dataset beforehand, e.g. in a simple list of filenames, then simply reading that list deterministically in a single-processed loop, with shuffle = False in the DataLoader? Another thing that may cause non-deterministic behaviour is using multiple processes: then there are operations that are passed out and managed by the operating system, which doesn't pay attention to any of the random seeds you set. Performance is dependent on available resources, i.e. it is affected by other activities running on your host machine. In addition to that, any interaction between CPU and GPU could be causing non-deterministic behaviour, as data transfer is non-deterministic (related Nvidia thread). Data packets can be split differently every time, but there are apparent CUDA-level solutions in the pipeline.
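The pre-shuffling suggestion is easy to implement with a private RNG, so the order depends only on the seed and not on whatever else (model construction included) consumes random numbers in between (a stdlib sketch, not PyTorch-specific):

```python
import random

filenames = [f"sample_{i:03d}.png" for i in range(10)]   # hypothetical dataset index

def deterministic_shuffle(items, seed):
    """Shuffle a copy of items with a private RNG: the order depends
    only on seed, not on global random state or anything run before."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

epoch1 = deterministic_shuffle(filenames, seed=666)
_ = random.random()   # build a model, draw random numbers, anything in between...
epoch2 = deterministic_shuffle(filenames, seed=666)
assert epoch1 == epoch2   # identical order both times
```

The resulting list can then be consumed in order, with shuffle=False in the DataLoader, so iteration order is fully reproducible.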
{ "domain": "datascience.stackexchange", "id": 7152, "tags": "python, pytorch" }
Reading event logs of several computers
Question: This is my first PowerShell script. It reads the event logs of several computers in the local network and lists error/warning messages written to them in the last 24 hours, if any. I'll appreciate any suggestion to improve my code. $computers = 'COMP1','COMP2','COMP3','COMP4' $logs = 'Log1','Log2','Log3','Log4' $yesterday = (Get-Date).AddDays(-1) foreach ($comp in $computers) { try { $events = Get-WinEvent @{logname=$logs; starttime=$yesterday; level=2,3} -ComputerName $comp -ErrorAction Ignore; if ($events.count -gt 0) { $count = $events.count; "$comp $count event(s):"; $events | select logname, providername, timecreated, leveldisplayname, message ` | Format-Table -AutoSize; } else { "$comp no events"; } } catch { "$comp error!!! $_"; } } Answer: Basics Aliases PowerShell is meant to be both shell and programming language. As such, it has a concept of Aliases. An Alias is another name for a command. An alias can resolve to a cmdlet, a function, or even a native application. You can create your own aliases, but there are already many built in. Aliases are great for quickly typing out commands in the shell or prototyping, but I and many others strongly believe you should avoid using them in scripts. Try to use the full name of each command, as this can reduce ambiguity and increase code clarity. In your script you only use one: select This is an alias to Select-Object. Parameters Many (most) PowerShell commands accept parameters both as named parameters (MyCommand -ParamName ParamValue) and positional parameters (MyCommand ParamValue). Similarly to the preference for using full command names, it is recommended to always use named parameters. In your script for example, so you are doing select logname, providername, timecreated (truncating the rest). 
What you may not realize is that you are passing an array to a single parameter, and that parameter does in fact have a name: Property (the same is true for your first parameter to Get-WinEvent, whose first parameter in your case is called FilterHashTable). Expanded out: $events | Select-Object -Property logname,providername,timecreated Although not in use in your script, you should be aware of the following: Parameter names can also have aliases. So for example Get-WinEvent -ComputerName $comp could also be written Get-WinEvent -cn $comp. Aliases have to be defined; they aren't automatic. Parameter names accept any unambiguous stub of the name. So, you can use a partial name, like Get-WinEvent -FilterH @{}. In this example, you can't use Get-WinEvent -Filter because there are also -FilterXml and -FilterXPath parameters, but you can go down to a single character as long as it's unambiguous for that command. Both of these are good to know, but again should be avoided in scripts. Format- commands PowerShell makes heavy use of pipelines, but unlike most shells, PowerShell sends complete objects throughout its pipeline. This is a great thing, and it's what allows you to do complex filtering and manipulation. But when you see the output on the screen and it shows you a table or list of information, it's easy to just think of it as formatted text. PowerShell has Format- cmdlets that let you display information explicitly the way you want (since normally the host chooses a format based on the data types or amount of information). Here's the really important thing to remember: Format- cmdlets are for end-result, final display only. Once you send an object through a Format- cmdlet, it is no longer the original object; you've lost the original. In almost all cases, it's best to send the original object, then let the caller or user decide what they want to do with it (whether that's formatting it, converting it to another data type, saving it as a CSV, or whatever). 
It's hard to tell what you want to do with the data in your script, but since you use Format-Table you've already excluded all of your other options. Backtick ` as line continuation character The backtick ` is the escape character in PowerShell. The standard backslash \ used in most languages is not used here because backslash is also the path separator on Windows, and since PowerShell is supposed to work as a shell and was born on Windows, this would be inconvenient. The backtick was chosen instead. When you use backtick at the end of a line, you are in essence "escaping" the newline for the purposes of the parser. The unfortunate thing is that the backtick is very small, and very difficult to see, so it can make things confusing. I highly recommend that you avoid the backtick for line continuation at all costs. Luckily, you do have some other options, as there are other points in commands and statements where you can start a newline without the need for any special character. Pipe | Since pipes are so common in PowerShell and you may be doing a pipeline that includes several statements, it's very useful to know that you can use a newline after a pipe (but not before). So the following works without any line continuation: $events | Select-Object -Property logname,providername | Sort-Object -Property logname (indentation after the first pipe is a stylistic choice, not at all required) Operators You can use a newline directly after an operator: if ($var -gt 5 -and $var -lt 100) { # code } For long conditionals, this is essential. ScriptBlocks {} Any scriptblock is essentially a whole script, so you can use newlines liberally inside it, even if you were passing the scriptblock as a parameter. As soon as the scriptblock is opened, go for it! 
Invoke-Command -ScriptBlock { # A whole new script here Get-Service | Get-Random | Stop-Service -Verbose -WhatIf } In grouping expressions () and sub-expressions $() Using Grouping and Sub expressions even where not necessary can help if you feel the code needs better formatting. You can use newlines inside them as needed: Get-Service -Name ( 'Win*' ) Get-Service -Name ( "Win$( '*' )" ) Those examples are convoluted, but you get the idea. In array/hashtable definitions @()/@{} I use these a lot. Format as needed within these. Note that in a hashtable definition, if you are used to using semicolons ; to separate items on a single line (like @{ Key1 = 'val1' ; Key2 = 'val2' }), the newlines take the place of the semicolon. # Hashtable @{ Key1 = 'val1' Key2 = 'val2' } # Array @( 1,2,3 ) @( 1, 2, 3 ) @( 1 , 2 ,3 ) # why oh why # my preferred (long) array style: @( 1 ,2 ,3 ) Combined: @{ Set1 = @{ Arr1 = @( 'A' ,'B' ,'C' ) Arr2 = @( 1 ,2 ,3 ) } Set2 = @{ Scalar = 10 } } In a command: Get-WinEvent -FilterHashTable @{ logname = $logs starttime = $yesterday level = @( 2 ,3 ) # I wouldn't expand the array like this for 2 elements, just an example } -ComputerName $comp Errors and Exceptions PowerShell has a concept of terminating vs. non-terminating errors. Different commands and different code determine whether an error stops execution or whether it just reports an error and keeps going. Most commands allow you to override whatever their default is with the -ErrorAction common parameter. In your code, you are using try/catch, but you've added -ErrorAction Ignore to Get-WinEvent. Errors that are non-terminating, which includes an error you've told PowerShell to Ignore or SilentlyContinue, will not trigger a try/catch, so I believe your catch block will never run even if there is an error. Instead, you probably want -ErrorAction Stop, which is supposed to force all errors to become terminating errors, so that they do trigger the exception handling. 
Truthiness and Falsiness PowerShell tries pretty hard to convert or coerce values into the type that is required. In the case of a boolean value, like that used in an if statement, almost anything can be interpreted as true or false. The following all evaluate to $false with no additional action or conversion needed: $false 0 $null Empty string Empty array Uninitialized variable (not the same as one containing $null) Just about everything else, including, confusingly, an empty hashtable, is interpreted as $true. So what does this mean for you? if ($events.count -gt 0) Can be reduced to: if ($events.count) Which could even further be reduced to: if ($events) This is useful to check for lots of things; return values of course, but also whether a parameter contains a value or is otherwise available. Pipeline Output When you do something like this: "Some thing" You're used to it showing up on the screen, so it's easy to think that this is the same as a sort of print statement (which in PowerShell would be Write-Host). But that isn't what's happening; it's actually implicitly calling Write-Output, which sends that to the output stream. If the output stream makes its way back to the host, then the host decides what to do with it; in the case of powershell.exe (or ISE) it decides to display it, but in the case of a function for example, this becomes part of the return value. In the case of a status message that's not really what you want. Here's an example: function Test-Output1 { "I'm returning 5" 5 } function Test-Output2 { Write-Host -Object "I'm returning 5" 5 } Now, if you run these directly: Test-Output1 Test-Output2 You will see the exact same output. They are indistinguishable that way. But try to assign the result to a variable instead: $v1 = Test-Output1 $v2 = Test-Output2 Now, you will see the difference. Test-Output1 shows nothing on the screen. If you check the value of $v1, it will be an array, with 2 elements. 
The first element is a string (your message), the second element is the number. Test-Output2 on the other hand showed a message anyway, and the value of $v2 is only the number. That's the critical difference. So even if you're in a situation where you can't see the difference, you should always be thinking about it. "Am I trying to show a message to a person on the screen, or am I trying to return a value to somewhere else in the program?" Details Ok, so now some stuff that actually has to do with the task you're trying to achieve! Parameters This is something of a general language element thing too, but instead of hardcoding variables at the top of your script, consider adding parameters to the script so that as a caller you can specify the values you want. Let's look at replacing the first 3 lines of your script: $computers = 'COMP1','COMP2','COMP3','COMP4' $logs = 'Log1','Log2','Log3','Log4' $yesterday = (Get-Date).AddDays(-1) With this: [CmdletBinding()] param( $Computers, $Logs, $StartTime ) Now, I renamed $yesterday to $StartTime so that you can specify the date you want when calling, so that variable would have to change, but the entire rest of your script would work the same. If your script is named Get-MyLogs.ps1 then you can call it like this: .\Get-MyLogs.ps1 -Computers 'COMP1','COMP3' -Logs 'Log1','Log9999' -StartTime (Get-Date).AddDays(-1) But maybe you almost always want to use "yesterday" and it's too annoying to enter that value every time. You can provide your parameter with a default value instead: [CmdletBinding()] param( $Computers, $Logs, $StartTime = $((Get-Date).AddDays(-1)) ) Now you can leave that parameter off, and it will use the default, but you can still explicitly set a value to override it. This gives you a great amount of flexibility, without having to edit the script, and it required almost no effort. I didn't even go into adding data types to the parameters, validation, etc. 
I will only add that there are some naming conventions you should follow for parameters. Typically plural is not used, so even if you want to accept multiple logs, you would probably name that parameter $Log and not $Logs. Accepting a computer name is common and the parameter is pretty much always called $ComputerName (again, even if multiple names are accepted). It's good to follow these conventions. You can add aliases to your parameters if you want to support non-standard additional names, but the real name should generally follow convention when possible. Parallelization Right now, you go through the list of computers sequentially. For more than a few computers, this is going to be slow. If any of the computers are not available, and have to time out, it's going to be really slow. Before going further, I'll just say this: if it's ok that it's slow, then stop here. Let it be slow. Sequential, non-parallelized code is easy to write, easy to understand. If this just runs as a background task or whatever, don't complicate it. But if you want to check multiple computers at once, you can. PowerShell sucks at this, but there are options. The built-in ways on Windows are Workflows and Jobs. Workflows are weird. They're a bit complicated, difficult to understand all the implications, and hard to find good help on, because nobody uses them. They are also not going to be included in PowerShell Core since they only work on Windows, so I don't recommend this. For very simple things, it might be a good way to go. Jobs are supported through the *-Job* cmdlets, and the -AsJob parameter that's available on some commands. Jobs actually run as separate PowerShell processes so they are a bit expensive in terms of start time and resource usage (especially with large numbers of them). Also because they are not in-process, any objects passed between them are serialized/deserialized so some functionality may be lost. 
If you start a scriptblock in a job, you then have a job object which you can wait on, check the progress on, and then when ready receive its output. There are more advanced ways to do parallel workflows like runspaces, which you can do yourself, or use a library like Boe Prox's excellent PoshRsJob module. I'm not going to get any deeper on this since I don't even know your intention and there's so much to write. If you try to parallelize this, try some stuff, try different methods, and use Stack Overflow as needed. Conclusion Despite my novel, your code is pretty straightforward. Without knowing how you intend to use the data it's hard to be opinionated on any deeper design changes, so clearly most of my suggestions are general language stuff.
{ "domain": "codereview.stackexchange", "id": 28211, "tags": "beginner, logging, powershell" }
ros::Time::now().toSec() optimized out
Question: Dear all, Using ROS Hydro on Ubuntu 12.04. I am trying to see the result of ros::Time::now().toSec() while debugging a node, but the debugger says the value is optimized out. I know that normally we can change a CMake file to change the debug type of a binary; however, since this is part of a ROS deb package, I am not sure how to go about it. Any suggestions? Originally posted by Juan on ROS Answers with karma: 208 on 2014-10-29 Post score: 0 Answer: I assume you have something like: double seconds = ros::Time::now().toSec(); Now you try to debug the value seconds? (Other: you can't debug into the functions now() or toSec() because the deb is - obviously correctly - built as release; if you want to debug those you have to recompile them yourself from source in debug mode...) Assuming you can't access the seconds value, it is because it is optimized out in your code (or your code is not even built with debug info). You can change this by setting the build type for catkin via the command line: with debug info, with optimization: catkin_make -DCMAKE_BUILD_TYPE=RelWithDebInfo with debug info, without optimization: catkin_make -DCMAKE_BUILD_TYPE=Debug Originally posted by Wolf with karma: 7555 on 2014-10-29 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 19876, "tags": "ros, optimization, cmake, rostime" }
Undocumented planning_environment parameters
Question: The PR2's environment server launch file sets some parameters that are undocumented in the planning_environment package wiki page, in particular: object_padd pointcloud_padd joint_state_cache_allowed_difference Of these, the first two appear in the API docs of the package. My intention is to edit the wiki and document them. My question is mostly concerned with the third (undocumented?) parameter. Does it refer to the maximum allowed (past/future) extrapolation time when querying the joint states cache?. Originally posted by Adolfo Rodriguez T on ROS Answers with karma: 3907 on 2011-06-09 Post score: 0 Original comments Comment by Sachin Chitta on 2011-06-28: Yes, it does refer to the maximum allowed past/future time. It won't really extrapolate, it will just use the joint states at the cache limits if the timestamps are outside the cache timestamp limits. Answer: Added undocumented parameters to the wiki and submitted a trivial doc patch. Originally posted by Adolfo Rodriguez T with karma: 3907 on 2011-06-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 5799, "tags": "ros, planning-environment" }
Finding pH of tri-protic acid
Question: I am studying for my final and I am given this problem: A solution of sodium phosphate is made from 10.5 g sodium phosphate in 150 mL of water. What is the pH of this solution? Given: $K_{a1}$, $K_{a2}$. The solution says that we can simply calculate this pH with the following equation: $$pH=\frac{pK_{a1}+pK_{a2}}{2}$$ However this makes absolutely no sense to me, as this is the pH at the equivalence point. What am I missing as to why this equation can be used? Answer: amphoteric: A chemical species that behaves both as an acid and as a base is called amphoteric. This property depends upon the medium in which the species is investigated: $\ce{H2SO4}$ is an acid when studied in water, but becomes amphoteric in superacids. (from the IUPAC Gold Book) For ampholytes like $\ce{NaH2PO4}$ the $\ce{pH}$ is independent of the concentration (to a first approximation). Consider the reactions that are possible: \begin{align}\ce{ H2A- + H3+O &~<=> H3A + H2O \tag1\\ H2A- + H2O &~<=> HA^{2-} + H3+O \tag2\\ HA^{2-} + H2O &~<=> A^{3-} + H3+O \tag3\\ }\end{align} As a first approximation, we will assume that everything happening in equation $(3)$ is negligible, as it will only slightly influence the concentration of $\ce{HA^{2-}}$. Therefore we can focus on the acidity constants $K_{a_1}$ of $(1)$ and $K_{a_2}$ of $(2)$: \begin{align} K_{a_1} &= \frac{c(\ce{H3+O})\cdot c(\ce{H2A-}) }{c(\ce{H3A})} \tag4\\ K_{a_2} &= \frac{c(\ce{H3+O})\cdot c(\ce{HA^{2-}})}{c(\ce{H2A-})} \tag5\\ \end{align} We will just go ahead and multiply these equations, since they are coupled, simultaneously occurring processes. Then we will cancel whatever cancels. 
\begin{align} K_{a_1} \cdot K_{a_2} &= \frac{c(\ce{H3+O})\cdot c(\ce{H2A-}) }{c(\ce{H3A})} \cdot \frac{c(\ce{H3+O})\cdot c(\ce{HA^{2-}})}{c(\ce{H2A-})} \tag6\\ K_{a_1} \cdot K_{a_2} &= c^2(\ce{H3+O})\cdot \frac{c(\ce{HA^{2-}})}{c(\ce{H3A})} \tag7\\ \end{align} Now another major assumption is that the reactions $(1)$ and $(2)$ are happening to the same extent and therefore we can approximate $$c(\ce{HA^{2-}})=c(\ce{H3A})\tag8$$ and rewrite $(7)$ as \begin{align} c^2(\ce{H3+O}) &= K_{a_1} \cdot K_{a_2}\\ c(\ce{H3+O}) &= \sqrt{K_{a_1} \cdot K_{a_2}}\\ \ce{pH} &= \frac12\left(\mathrm{p}K_{a_1}+\mathrm{p}K_{a_2}\right). \end{align} However, this equation only works as a first approximation. Usually the coupled equilibria are very complex and in most instances a computer needs to be involved to calculate it accurately. More information on polyprotic acids.
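As a quick numerical sanity check of the final formula (the pKa values below are illustrative handbook-style numbers for phosphoric acid, assumed for the example and not given in the question):

```python
# Assumed, illustrative pKa values for phosphoric acid:
pKa1 = 2.15   # H3A  <=> H2A- + H3O+
pKa2 = 7.20   # H2A- <=> HA^2- + H3O+

# First-approximation pH of the amphoteric species H2A-.
# Note the 10.5 g / 150 mL figures never enter: in this approximation
# the pH of an ampholyte is independent of concentration.
pH = (pKa1 + pKa2) / 2
print(pH)  # about 4.7
```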
{ "domain": "chemistry.stackexchange", "id": 2422, "tags": "acid-base, ph" }
Word vectors as input
Question: I have a corpus on which I want to perform sentiment analysis using LSTM and word embeddings. I have converted the words in the documents to word vectors using Word2Vec. My question is: how do I pass these word vectors as input to Keras? I don't want to use the embeddings provided by Keras. Answer: You can just skip the Embedding layer and use a normal input layer with n input nodes, where n is the dimensionality of your word2vec embeddings. The rest is the same as it would be with an embedding layer: just pass a sequence of n-dimensional vectors as the input, potentially padded or truncated depending on your model.
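A minimal sketch of the preprocessing this implies (the dimensions EMB_DIM and MAX_LEN are made-up assumptions): every document becomes a fixed-shape array of pre-trained vectors, and a stack of such arrays is what you would feed to an input layer of shape (MAX_LEN, EMB_DIM) followed by an LSTM, in place of an Embedding layer:

```python
import numpy as np

EMB_DIM = 100   # assumed word2vec dimensionality
MAX_LEN = 50    # assumed maximum sequence length

def to_fixed_length(doc_vectors, max_len=MAX_LEN, emb_dim=EMB_DIM):
    """Zero-pad or truncate a document's word vectors to (max_len, emb_dim)."""
    out = np.zeros((max_len, emb_dim), dtype=np.float32)
    n = min(len(doc_vectors), max_len)
    out[:n] = doc_vectors[:n]
    return out

# Two toy "documents" of different lengths, as rows of word vectors:
doc_a = np.random.rand(30, EMB_DIM)   # shorter than MAX_LEN -> padded
doc_b = np.random.rand(70, EMB_DIM)   # longer than MAX_LEN -> truncated
batch = np.stack([to_fixed_length(doc_a), to_fixed_length(doc_b)])
print(batch.shape)  # (2, 50, 100)
```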
{ "domain": "datascience.stackexchange", "id": 1098, "tags": "deep-learning, keras, word-embeddings, word2vec, sentiment-analysis" }
Are there echolocating insects?
Question: Echolocation is the ability to obtain spatial information about the surroundings from echoes generated by the animal. There are bats and other vertebrates that naturally use it. I was wondering if this is limited to vertebrates, or if there are examples among the invertebrates, especially insects. Answer: Some moths actually do use clicks for their own echolocation: "Noctuid moths (Noctuidae) are the only group of invertebrates for whom echolocation was demonstrated": Lapshin & Vorontsov 1998; Lapshin et al. 1994.
{ "domain": "biology.stackexchange", "id": 11765, "tags": "entomology, invertebrates, ultrasound, echolocation" }
Convert HTML to PDF, calculate the vertical space to the end of the last page faster
Question: I'm working on a custom puppeteer script which converts dynamically generated HTML to an A4 PDF. This script converts HTML to a near-perfect A4 PDF file; the only issue I have is speed. Document conversion takes anywhere between 1 and 10 minutes. I have tracked the slowness to the part where we calculate how much vertical space needs to be added to the last page in order to have a full page. If we don't do this, the background we have will be clipped. Next up the relevant code, which is called as follows: node convert.js "{\"url\":\"file://$(pwd)/generated.html\",\"options\":{\"path\":\"$(pwd)/generated.pdf\"}}" async function addPadding(page) { await page.emulateMedia('print'); await page.evaluate( _ => { let container = document.querySelector('#padding-container'); if (container) container.style.height = '1145px' } ); return Promise.resolve() } function getPageAmount(buffer) { let index = buffer.indexOf('/Count '); let string = buffer.slice(index + '/Count '.length, index + '/Count '.length + 32).toString(); return ~~string.trim().split(/\n|\/|\[|\>/)[0] } convert.js (async _ => { const browser = await puppeteer.launch({ headless: true, args: [ isDocker ?
'--no-sandbox' : '' ], defaultViewport: { width: 724, height: 1145 * 32 } }); const page = await browser.newPage(); await page.emulateMedia('print'); await page.goto(request.url, { waitUntil: 'networkidle0' }); await page.setViewport({ width: 724, height: 1145 }); let buffer = await render(page); let originalPages = getPageAmount(buffer); let maxPasses = 3; let passes = 0; await addPadding(page); buffer = await render(page); while(getPageAmount(buffer) !== originalPages && passes < maxPasses) { await page.evaluate( _ => { let paddingContainer = document.querySelector('#padding-container'); let value = ~~paddingContainer.style.height.replace('px', ''); paddingContainer.style.height = `${value / 2}px` } ); buffer = await render(page); if (getPageAmount(buffer) === originalPages) { while (getPageAmount(buffer) === originalPages) { await page.evaluate( _ => { let paddingContainer = document.querySelector('#padding-container'); let value = ~~paddingContainer.style.height.replace('px', ''); paddingContainer.style.height = `${value * 1.5}px` } ); buffer = await render(page) } } passes++ } while(getPageAmount(buffer) !== originalPages) { await page.evaluate( _ => { let paddingContainer = document.querySelector('#padding-container'); let value = ~~paddingContainer.style.height.replace('px', ''); paddingContainer.style.height = `${value - 1}px` } ); buffer = await render(page) } if (request.options.path) fs.writeFileSync(request.options.path, buffer); browser.close() })(); I really would appreciate any feedback/help on this. Answer: Ways to optimize: Nested loops You got nested while loops in the convert.js script and that requires particular attention. The outer while loop contains the construction: ... if (getPageAmount(buffer) === originalPages) { while (getPageAmount(buffer) === originalPages) { ... where the if conditional redundantly checks the condition getPageAmount(buffer) === originalPages whereas the underlying while loop would check the same condition by itself.
Therefore, remove the redundant if "wrapper". getPageAmount function Deserves separate attention (a frequently invoked function). '/Count '. The many-times-hardcoded search string '/Count ' begs for extracting into a variable: let searchStr = '/Count '; index + '/Count '.length. This duplicated expression points to a starting offset for input buffer slicing. Worth extracting into variables: let pos = buffer.indexOf(searchStr); let startOffset = pos + searchStr.length; let str = buffer.slice(startOffset, startOffset + 32).toString(); splitting a string by pattern and getting the 1st chunk (~~string.trim().split(/\n|\/|\[|\>/)[0]). What it does is split the input string by the regex pattern /\n|\/|\[|\>/ into an array of substrings. Though it creates a new array of strings/chunks in memory - whereas we only need the 1st, leftmost chunk [0]. Instead, a much more efficient way is to just find the position of the 1st occurrence of the pattern and slice the input string to that point. That's achievable with a String.search + String.slice combination and will go smashingly faster compared to the initial approach. Eventually the optimized function would look as: function getPageAmount(buffer) { let searchStr = '/Count ', pos = buffer.indexOf(searchStr), startOffset = pos + searchStr.length; let trimmed = buffer.slice(startOffset, startOffset + 32).toString().trim(); return ~~trimmed.slice(0, trimmed.search(/[\n\/\[\>]/)) } DOM tree scanning The "hero" of this section is document.querySelector('#padding-container') which appears in many places within while loops and queries the current document for a specific tag/element. Such DOM queries become expensive operations if used frequently, moreover - in massive traversals. Depending on markup complexity and the "amount" of traversal, such repetitive queries may make the processing +50% slower. The solution here is to extract the reference to the element into a top-level variable and reference it in all needed places. // top-level variables ...
let passes = 0; let paddingContainer = document.querySelector('#padding-container'); Extracting "padding container" height The expression ~~paddingContainer.style.height.replace('px', '') is duplicated in many places and is a candidate for the Extract Function technique. It could even be defined as a unified function for getting the height of the element passed as a parameter: function getElHeight(el) { return ~~el.style.height.replace('px', '') } ... ... _ => { let padding_height = getElHeight(paddingContainer); paddingContainer.style.height = `${padding_height * 1.5}px` }
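As a quick sanity check of the optimized extraction, it can be run against a fabricated PDF-like buffer (real PDF bytes are binary; the fragment below is made up purely to exercise the string handling, with the trim applied before the search so both indices refer to the same string):

```javascript
// Optimized page-count extraction, trimming before searching.
function getPageAmount(buffer) {
    const searchStr = '/Count ';
    const startOffset = buffer.indexOf(searchStr) + searchStr.length;
    const trimmed = buffer.slice(startOffset, startOffset + 32).toString().trim();
    const end = trimmed.search(/[\n\/\[>]/);
    return ~~(end === -1 ? trimmed : trimmed.slice(0, end));
}

// Fabricated fragment resembling a PDF /Pages dictionary:
const fakePdf = Buffer.from('<< /Type /Pages /Count 7 /Kids [3 0 R] >>');
console.log(getPageAmount(fakePdf)); // 7
```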
{ "domain": "codereview.stackexchange", "id": 36433, "tags": "javascript" }
Why is tauon not being probed for high accuracy $g-2$ values?
Question: The recent results from LHCb (regarding violation of lepton universality in $B$ meson decay) and Fermilab (regarding the anomalous muon $g-2$ factor) have set the HEP$^1$ community abuzz right now$^0$. In both cases, it seems that the muon isn't just a heavier electron$^2$. Something about the heaviness of the lepton has allowed a probe of all these discrepant effects. For the Fermilab$^3$ result, the hadronic contributions to the muon's $g-2$ seem to be the culprit from the $8^{th}$ decimal onwards, currently a $4.2\sigma$ tension b/w theory and expt. Why do similar effects not arise for the electron, albeit at a higher decimal place, or do they? Are similar experimental/theoretical investigations being performed for the even heavier cousin, the tauon? Should one expect a more pronounced disagreement in its case? (I understand that you lose stability as you go up generations, but why is strong and top physics probe-able but the tauon's not?) $^1$ High Energy Physics $^2$ I am extrapolating here from Fermilab's insinuation to B's asymmetry titillated by the fact that they both involve the muon and have been published - oh so closely - but the latter may have nothing to do with the former. $^3$ and BNL $~$ $^0$ Hurray! An excellent summation may be watched at Sixty Symbols. Answer: This part-per-million measurement of the muon's anomalous magnetic moment is a part-per-billion measurement of the muon's total magnetic moment. Every part-per-billion measurement is hard. This one is, fundamentally, a measurement of a frequency: the frequency at which the muon's spin precesses in a magnetic field. In order to measure a frequency with precision, you want to be able to measure it for a long time. If you can't measure it for a long time, you at least want to be able to measure it many times. That suggests, for example, that it should be easier to measure the anomalous magnetic moment for the electron than for the muon.
And it is: the electron's $a_e$ is known a thousand times more precisely than the new result for the muon. The rest-frame lifetime of the muon is about two microseconds. Take a look at the inset to the figure below from the paper you link, the sub-graph with the seven squiggly lines: That is a histogram: it shows the relationship between the number of decay electrons observed and the amount of time the muon beam has spent in the storage ring, going as many as seven laps around. I count about 140 precession periods on the figure. The result is precise enough to be interesting entirely because that plot is so lovely. The tau lepton is a factor of ten heavier than the muon, so its magnetic moment is about ten times smaller. However the tau’s lifetime is ten million times shorter than the muon's. This ratio suggests that getting a beam of tau leptons to precess even once before decaying would be a tall order. And the data reflect this challenge: while the new Fermilab result gives $a_\mu$ to about eight significant figures, for $a_\tau$ we have not yet measured the sign, though we predict $a_\tau \approx \alpha/2\pi$ as for the other leptons.
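That "tall order" can be made quantitative with a back-of-the-envelope estimate using the anomalous-precession frequency $\omega_a = a\,eB/m$. All numbers below are rough values quoted from memory and should be treated as assumptions, not the experiment's actual parameters:

```python
import math

# Rough values (order-of-magnitude assumptions):
e = 1.602e-19                          # elementary charge, C
B = 1.45                               # storage-ring field, T
a_mu,  m_mu  = 1.166e-3, 1.883e-28     # muon anomaly, mass (kg)
a_tau, m_tau = 1.177e-3, 3.167e-27     # tau anomaly ~ alpha/2pi, mass (kg)
life_mu, life_tau = 2.2e-6, 2.9e-13    # rest-frame lifetimes, s

def precession_period(a, m):
    """Anomalous-precession period T = 2*pi*m / (a*e*B)."""
    return 2 * math.pi * m / (a * e * B)

T_mu = precession_period(a_mu, m_mu)
T_tau = precession_period(a_tau, m_tau)

# Rest-frame lifetimes per precession period (time dilation stretches the
# lifetimes, but cannot bridge nine orders of magnitude):
print(life_mu / T_mu)     # ~0.5 for the muon -- workable with a boost
print(life_tau / T_tau)   # ~4e-9 for the tau -- hopeless
```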
{ "domain": "physics.stackexchange", "id": 78160, "tags": "particle-physics, experimental-physics, standard-model, magnetic-moment, hadronization" }
Can special/general relativity be derived from the standard model?
Question: Can special/general relativity be derived from the standard model? For example the time dilation in strong gravitation? My feeling is yes, but I am not quite sure. Answer: Special relativity is used in the SM formulation. It is kinematics, so somehow more basic than interactions between bodies. A QFT derivation of General Relativity has been the Holy Grail of the field for many years. In the early times, Feynman, Dirac, and the others tackled this problem, but after decades of failures it was more or less considered impossible with the present tools. This said, there are theories that try to give a quantum description of gravity, like string theory and p-branes, or exotic things like bi-gravity. But, in the best case, they cannot be probed experimentally. You can't stop theorists from developing new theories (and they certainly shouldn't be stopped!), but my take is that none of this will definitely solve the issue until a whole new generation of particle physics experiments is out and the theories can be tested and guided by evidence. The good thing is that, even if the theories are wrong or completely inapplicable, the mathematical tools and physical insights can always be used for who knows what cool science completely unrelated.
{ "domain": "physics.stackexchange", "id": 12291, "tags": "general-relativity, special-relativity, standard-model" }
Derivation of variation
Question: $$\delta S=2\int dx^+dx^-\left(\frac{\partial\delta \phi}{\partial x^+}\frac{\partial \phi}{\partial x^-}+\frac{\partial \phi}{\partial x^+}\frac{\partial\delta \phi}{\partial x^-}\right)=-4\int dx^+dx^-\frac{\partial^2\phi}{\partial x^+\partial x^-}\delta\phi \tag{1}$$ This variation leads to the equation of motion. Can someone explain the step from the middle equation to the right-hand side? I don't know how to handle the term $\frac{\partial \delta \phi}{\partial x}$. Answer: Hint: you can integrate by parts $$\int_{-\infty}^\infty\text{d}x\frac{\partial f}{\partial x}\frac{\partial g}{\partial x}=\left[\frac{\partial f}{\partial x}g(x)\right]_{-\infty}^{\,\infty}-\int_{-\infty}^\infty\text{d}x\frac{\partial^2 f}{\partial x^2}g(x)$$ In these types of calculations you can always neglect the boundary term, $\left[\frac{\partial f}{\partial x}g(x)\right]_{-\infty}^{\,\infty}=0\,,$ so essentially you can move a derivative from one factor to the other at the cost of a minus sign.
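Applying the hint to the first term of the variation, integrating by parts in $x^+$ and assuming $\delta\phi$ vanishes at the boundary:

```latex
\int dx^+ dx^-\,\frac{\partial \delta\phi}{\partial x^+}\frac{\partial \phi}{\partial x^-}
  = -\int dx^+ dx^-\,\delta\phi\,\frac{\partial^2 \phi}{\partial x^+\partial x^-}
```

The second term gives an identical contribution after integrating by parts in $x^-$, so the overall prefactor becomes $2\times(-2)=-4$. Demanding $\delta S=0$ for arbitrary $\delta\phi$ then yields the equation of motion $\frac{\partial^2\phi}{\partial x^+\partial x^-}=0$.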
{ "domain": "physics.stackexchange", "id": 68483, "tags": "lagrangian-formalism, field-theory, integration, variational-calculus" }
Is there any relationship between time complexity and space complexity of an algorithm?
Question: For example: If algorithm A takes an input of size n, and has a time complexity of O(a^n) and a space complexity of O(1), is there a way to increase the space complexity to something like O(n^2) that would guarantee that the time complexity would decrease? Answer: If algorithm A takes an input of size n, and has a time complexity of O(a^n) and a space complexity of O(1) First of all, we do not know of any exponential or sub-exponential time algorithm that requires only $O(1)$ space. Having said this, it is difficult to reason about a hypothetical algorithm "A" because the spatial and temporal complexity are closely linked to the functioning of the algorithm and are (generally) proportional to each other. However, the answer to your question is no. Let's try to reason starting from a $3SAT$ instance. Now we know that $3SAT$ is NP-complete (the best known time complexity for $3SAT$ is currently $O(k^n)$ with $k=1.439$ for a deterministic algorithm) and $3SAT \in$ PSPACE; in fact, the space complexity of $3SAT$ is $O(n)$. Now it is difficult to imagine how increasing the space to $O(n^k)$ (in your question $k = 2$) may lead to a decrease in the execution time of the algorithm that solves $3SAT$ ... in fact, always keep in mind that an increase in space also corresponds to a proportional increase in time. Let me conclude by recalling the relationships between temporal and spatial complexity classes, for which we believe all inclusions to be strict: $L⊆NL⊆P⊆NP⊆PSPACE⊆EXPTIME⊆EXPSPACE$
{ "domain": "cs.stackexchange", "id": 14222, "tags": "complexity-theory, turing-machines" }
Evolution and the levels of selection
Question: Reading Okasha's "Evolution and the levels of selection", he talks about "the levels of selection problem." There is a bit of a problem with this opening chapter because, while he talks about why the levels of selection problem is a problem, he doesn't define what the levels of selection problem is. The opening line from the book: "The levels of selection problem is one of the most fundamental in evolutionary biology, for it arises directly from the underlying logic of Darwinism. The problem can be seen as the upshot of three factors..." I think what he means by the "levels of selection problem" is that selection acts on many levels, and what is adaptive at one level may be maladaptive at another. Therefore we cannot define selection as acting at one single level, and studying it as such is likely to lead to incorrect conclusions - studying selection is not a simple process. Can anyone provide a firm definition of what Okasha, and the general community on the matter, mean by the "levels of selection problem"? (Note: The factors are the abstract nature of the principles of selection, the hierarchical nature of biological organisation, and the process of adaptation via natural selection).
Indeed, as Leigh further suggests, they could be viewed jointly as the "fundamental problem of biology," when genes and organisms are also included as adjacent levels in the biological hierarchy. This generalization has the desirable property of immediately removing the long-standing conceptual chasm between organismal and molecular biologists. The authors suggest definitions of fitness that account for hierarchical relationships (the level of selection problem) by considering the effect of a trait (nepotism among social insects) on "adjacent" biological levels, such as individual and colony. By measuring what they call 'absolute fitness force' they propose to eclipse what they see as spurious debate over the fundamental level of selection (genes vs. individuals, say). A somewhat surprising aspect of their discussion emerges here: "It is still embarrassingly common to read inaccurate statements...that frogs have to produce many eggs to ensure the survival of the species because tadpoles suffer extremely high rates of predation, or that wolves have evolved ritualized displays to establish dominance hierarchies because physical combats would be too disadvantageous for the species. These naive statements betray a widespread and persistent misunderstanding of the level at which natural selection most commonly operates." Their point is that such statements focus on the wrong level of selection (group), and that stronger selection may occur at the individual level; their overall argument being that both levels should be factored in. I cannot present this as a 'community definition' but it may be a representative example.
{ "domain": "biology.stackexchange", "id": 2229, "tags": "evolution, natural-selection, philosophy-of-science" }
Capacitance in a parallel plate cap with the real electric field
Question: In many books of general physics, they prove the equations for the capacitance of a parallel-plate capacitor as if the electric field were constant everywhere. But what would happen if we take the real electric field, which is non-uniform at the edges of the plates: would the capacitance be greater or smaller? Answer: First think of initially putting charges $\pm Q$ uniformly on the two plates and then letting the system arrive at the final configuration in isolation. If fringe effects were ignored, the initial uniform charge distribution on the plates would remain unchanged. In this case, the potential difference between the two plates will be $V_\infty=Q/C_\infty$, where $C_\infty$ is the capacitance ignoring the fringe effect. Now let us put fringe effects into the argument. The charges will re-distribute such that the initial uniform charge density will become non-uniform. The total energy stored in the capacitor will necessarily decrease. This implies $$ \frac{Q^2}C < \frac{Q^2}{C_\infty}\implies \quad \boxed{C > C_\infty}\ , $$ where $C$ is the (true) capacitance that takes fringe effects into account. Here is a more quantitative computation. (I have implicitly assumed that $d< \sqrt{A}$, where $d$ is the separation of the plates and $A$ the area.) It is possible to solve exactly for the potential difference using the method of conformal mapping when the geometry of the capacitor is that of a long strip.
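The inequality can also be checked numerically. Below is a crude 2-D finite-difference sketch (the geometry, grid size, and grounded bounding box are all made-up modelling choices, with $\varepsilon_0=1$; the grounded box and the discretization both inflate the number somewhat, but the qualitative comparison is the point): solve Laplace's equation around two finite plates, compute the stored energy $U$, and compare $C = 2U/V^2$ with the fringe-free $\varepsilon_0 w/d$:

```python
import numpy as np

N = 101                   # grid points per side (outer box grounded at 0 V)
h = 1.0 / (N - 1)         # grid spacing; eps0 = 1
phi = np.zeros((N, N))

w_cells, gap = 40, 20     # plate width and separation, in cells (made up)
top = (N - gap) // 2
bot = top + gap
j0 = (N - w_cells) // 2

def enforce(p):
    """Pin the plates to +/-1 V and the box walls to 0 V."""
    p[top, j0:j0 + w_cells] = +1.0
    p[bot, j0:j0 + w_cells] = -1.0
    p[0, :] = p[-1, :] = p[:, 0] = p[:, -1] = 0.0

for _ in range(6000):     # Jacobi relaxation of Laplace's equation
    enforce(phi)
    phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
enforce(phi)

# Stored energy U = (1/2) * integral of |grad phi|^2, then C = 2U / V^2.
gy, gx = np.gradient(phi, h)
U = 0.5 * np.sum(gx**2 + gy**2) * h * h
C = 2.0 * U / 2.0**2                     # V = 2 between the plates

C_ideal = (w_cells * h) / (gap * h)      # eps0 * w / d, fringe-free
print(C > C_ideal)  # True: the fringing field stores extra energy
```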
{ "domain": "physics.stackexchange", "id": 16924, "tags": "electrostatics, capacitance" }
Inline error checking in VBA
Question: I asked a question elsewhere on Stack Exchange and was given an answer by multiple people that checking for errors in-line was not a good practice. I have been using an on error resume next ' do something on error goto 0 block structure for years as kind of an improvised try-catch construct for VBA. Here is a simple example of some actual code: I need to check if an object has been passed before I attempt to access the properties of the object. Rather than having the same Error handling label in each subroutine that accesses that object, I've opted to do my error check in a function that returns a Boolean value telling me whether or not the object has been initialized. I want to know if this is acceptable practice, or if there is a better way this should be handled. NOTES: This is NOT the complete code. I just took the relevant props/methods for the question and added them below. The main program allows data sharing, updating, and communication between three different systems. This example is taken from a class object which bridges communication between a corporate intranet website (through ASP) and a Solidworks model object, whose information is made available using the Class_MyModel object. 
Option Explicit 'Declare module level constants Private Const errNoMyModel As Long = -999 'Declare module level variables Private pMyMod As Class_MyModel 'PROPERTY that holds an instance of custom class MyModel which provides all the model info from solidworks Public Property Get MyMODEL() As Class_MyModel Set MyMODEL = pMyMod End Property Public Property Let MyMODEL(object As Class_MyModel) Set pMyMod = object End Property 'METHOD which can be called to submit a drawing Public Sub SubmitDrawing() DoSubmittal End Sub Private Sub DoSubmittal() 'This procedure updates the intranet site with info from the mymodel object _ it will create a new record if none exists for the given rev, or update the _ record if one already exists 'Check first that the mymodel object property has been set If Not MyModelExists Then Exit Sub 'Nothing can be done if we don't have a mymodel object to work with 'Declare variables Dim strURL As String Dim blnExists As Boolean 'Call the procedure that deals with existing records. blnExists = HandleExisting(False) 'The procedure above returns a true value if a record already existed for the current revision, so if it returns false, we need to add a new record for the current revision If Not blnExists Then 'Construct the URL that will add a new record for the current revision strURL = GetURL(aNew) 'Call the procedure to execute the URL. Print the return text to the debug window so it can be reviewed if necessary. Debug.Print MyMODEL.PARTNO & " " & MyMODEL.REVISION & " CREATE NEW RECORD ATTEMPT RETURNED : " & vbCrLf & DoASP(strURL, False) 'Calls the procedure to execute the URL and prints the ASP return message to the debug window.
End If 'Activate the record for the current revision HandleExisting (True) End Sub Private Function MyModelExists() As Boolean 'This function simply checks that the calling procedure has successfully passed a Class_MyModel object to work with 'Declare variables Dim tempModel As Class_MyModel 'Attempt to set an object to reference the mymodel object instance On Error Resume Next Err.Clear Set tempModel = MyMODEL On Error GoTo 0 'Check if the attempt was successful, if not, then no mymodel object exists. _ Return the result to the calling procedure If tempModel Is Nothing Then MyModelExists = False ErrorMsg (errNoMyModel) Else MyModelExists = True End If If Err.Number > 0 Then ErrorMsg 'Cleanup objects before ending the procedure because we don't entirely trust VBA's garbage collection Set tempModel = Nothing End Function Private Sub ErrorMsg(Optional ByRef ErrNum As Long) 'This is a simple error message handling procedure 'Choose what to do based on the error that occurred Select Case ErrNum Case errNoMyModel 'This is a constant declared at the module level that is used to identify an error where the mymodel property has not been set before attempting to call the procedure to update the entry MsgBox "In order to proceed, a MyModel object containing a Solidworks model must be passed to this object. " & _ "Please check that you have selected a valid Solidworks object (.sldprt, .slddrw, .sldasm). " & _ "If you continue to get this message when a valid object is selected, please contact the software developer " & _ "to resolve this problem.", vbOKOnly, "ERROR: Object not assigned" Case Else 'For all other errors, just inform the user of what happened MsgBox "ERROR! " & vbCrLf & "Error type: " & Err.DESCRIPTION & vbCrLf & "Error number: " & Err.Number & vbCrLf & "Error source: " & Err.Source, vbOKOnly, "ERROR" End Select End Sub Answer: Nitpicks Private Const errNoMyModel As Long = -999 I like that you're declaring a constant for custom errors.
Best practice would be to add the built-in vbObjectError constant to your own custom error codes - and for better maintainability, it's often best to define these constants in an Enum: Private Enum MyModelError ModelNotSet = vbObjectError + 999 ServerNotFound InvalidUrl OtherCustomError End Enum The name given to errNoMyModel looks like a private field or local variable. Constants are usually clearer in YELLCASE... but I read that identifier as "error number for MyModel", which means absolutely nothing. Contrast to MyModelError.ModelNotSet, which tells you just by its name, that the model isn't set on MyModel. Speaking of MyMODEL: Public Property Let MyMODEL(object As Class_MyModel) Set pMyMod = object End Property This should be a Property Set accessor; Property Let works better for value types. Besides client code that does this: model.MyMODEL = instance Looks very confusing, given that instance is an Class_MyModel instance. But without a Property Set accessor, client code can't even do this: Set model.MyMODEL = instance ...which would be the correct and expected way to assign an object reference. I don't understand the need for this method: Public Sub SubmitDrawing() DoSubmittal End Sub Why not make DoSubmittal a Public member, and simply call it Submit? Note, vbCrLf is Windows-specific. Better use vbNewLine instead. And this... 'Cleanup objects before ending the procedure because don't entirely trust VBA's garbage collection Set tempModel = Nothing VBA doesn't do garbage collection, it does reference counting: that line is utterly useless, since tempModel is locally declared - its reference is destroyed as soon as the procedure exits. By the way, nulling a reference in a garbage-collected language (like VB.NET, or C#) would not force garbage collection. Reference Check If Not MyModelExists Then Exit Sub That's very good. 
What's less good, is what's under the covers here: Private Function MyModelExists() As Boolean 'This function simply checks that the calling procedure has successfully passed a Class_MyModel object to work with The problem is that... 'Attempt to set an object to reference the mymodel object instance On Error Resume Next Err.Clear Set tempModel = MyMODEL On Error GoTo 0 Assigning a null reference (Nothing) isn't illegal, so this assignment will never blow up; you don't need to expect an error here. In fact, you don't even need this tempModel - and this is overkill: If tempModel Is Nothing Then MyModelExists = False ErrorMsg (errNoMyModel) Else MyModelExists = True End If You could just do this instead: MyModelExists = (Not MyMODEL Is Nothing) ...and then you don't even need a MyModelExists function, you could just inline that simple check. Error Handling What you're trying to do here, is gracefully handle the runtime error 91 that would occur if DoSubmittal were to execute without MyMODEL being set. As per your post, we're not seeing the whole picture. That's sad, because based on what I'm seeing, this whole "ensure MyMODEL is set" spaghetti looks futile, since MyMODEL is really a dependency of the DoSubmittal method, and should be passed as a parameter. But let's say it has to be an instance field because other members need to access it later (or earlier... whatever). Here's how I'd handle this - I would have a procedure responsible solely for assigning the member values; this procedure would need to handle the case where MyMODEL is not set: Private Sub AssignMemberValues(ByVal result As WhateverThisIs) On Error GoTo CleanFail MyMODEL.PARTNO = result.PartNumber '... 
Exit Sub CleanFail: If Err.Number = 91 Then 'object variable not set 'raise meaningful error with custom error message: Err.Raise MyModelError.ModelNotSet, TypeName(Me), ERR_MODEL_NOT_SET Else Err.Raise Err.Number ' rethrow if we don't know how to handle End If End Sub The calling code (perhaps the DoSubmittal procedure) can then handle all errors with a simple message box, because any error that could be raised in the procedures called by this one would contain a specific and meaningful description: Public Sub DoSubmittal() On Error GoTo CleanFail '... result = GetValues 'may raise ServerNotFound or InvalidUrl errors AssignMemberValues result 'may raise MyModelError.ModelNotSet error '... CleanExit: 'clean-up code goes here Exit Sub CleanFail: MsgBox Err.Description Resume CleanExit End Sub The key here, is to avoid God-like methods that do everything that ever needs to happen: by splitting the work into specialized methods that do one thing (and ideally, do it well), you limit the number of runtime errors you need to handle. Bottom line, On Error Resume Next is hardly ever an option for clean code.
{ "domain": "codereview.stackexchange", "id": 13590, "tags": "error-handling, vba" }
What is a cascaded convolutional neural network?
Question: For a project I am doing, I found the paper Face Alignment in Full Pose Range: A 3D Total Solution. It is using a cascaded convolutional neural network, but I wasn't able to find the original paper explaining what that is. In layman's terms and intuitively, how does a cascaded CNN work? What does it solve? Answer: The paper you are citing is the paper that introduced the cascaded convolution neural network. In fact, in this paper, the authors say To realize 3DDFA, we propose to combine two achievements in recent years, namely, Cascaded Regression and the Convolutional Neural Network (CNN). This combination requires the introduction of a new input feature which fulfills the "cascade manner" and "convolution manner" simultaneously (see Sec. 3.2) and a new cost function which can model the priority of 3DMM parameters (see Sec. 3.4) where 3DDFA stands for 3D Dense Face Alignment, the framework proposed in this paper for face alignment, in which a dense 3D Morphable Model (3DMM) is fitted to the image via cascaded CNNs (the regressor), where the term dense refers to the number of points of the face that will be modeled. See figure 1 of this paper, which should provide some intuition behind the purpose of this framework. In section 3 (page 3), they also say In this section, we introduce how to combine Cascaded Regression and CNNs to realize 3DDFA. By applying a CNN as the regressor in Eqn. 
1, Cascaded CNN can be formulated as: \begin{align} \mathbf{p}^{k+1} = \mathbf{p}^{k} + \text{Net}^{k} (\text{Fea}(\mathbf{I}, \mathbf{p}^k)) \tag{1}\label{1} \end{align} where $k$ is the iteration number, $\mathbf{p}$ is the regression objective, $\text{Net}$ is the CNN structure, $\text{Fea}$ contains the two constructed image features (the Pose Adaptive Feature, PAF, of section 3.2.1, and the Projected Normalized Coordinate Code, PNCC, of section 3.2.2), and $\mathbf{I}$ is the image. The expression cascaded CNN apparently refers to the fact that equation \ref{1} is used iteratively, so there will be multiple CNNs, one for each iteration $k$. In fact, in the paper, they say Unlike existing CNN methods that apply different network structures for different fitting stages, 3DDFA employs a unified network structure across the cascade. In general, at iteration $k$ ($k = 0, 1, \dots, K$), given an initial parameter $\mathbf{p}^k$, we construct PNCC and PAF with $\mathbf{p}^k$ and train a two-stream CNN $\text{Net}^k$ to conduct fitting. The output features from two streams are merged to predict the parameter update $\Delta \mathbf{p}^k$ $$ \Delta \mathbf{p}^k = \text{Net}^k(\text{PAF}(\mathbf{p}^k, \mathbf{I}), \text{PNCC}(\mathbf{p}^k, \mathbf{I})) $$ Afterwards, a better intermediate parameter $\mathbf{p}^{k+1} = \mathbf{p}^k + \Delta \mathbf{p}^k$ becomes the input of the next network $\text{Net}^{k+1}$, which has the same structure but different weights with $\text{Net}^k$. In figure 2 of the paper (page 4), the structure of this two-stream CNN, $\text{Net}^k$, at iteration $k$, is shown.
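The iterative update in equation (1) can be sketched in a few lines. This is a toy illustration only: the "networks" below are stand-in random linear regressors and fea() is a placeholder for the PAF/PNCC feature construction, not the paper's actual models.

```python
import numpy as np

def fea(image, p):
    """Placeholder feature: current parameters concatenated with a crude image statistic."""
    return np.concatenate([p, [image.mean()]])

def make_net(n_params, n_feat, rng):
    """A stand-in 'CNN': a fixed random linear regressor from features to a parameter update."""
    W = rng.normal(scale=0.1, size=(n_params, n_feat))
    return lambda f: W @ f

rng = np.random.default_rng(0)
image = rng.random((8, 8))
p = np.zeros(3)                                   # initial parameter estimate p^0
nets = [make_net(3, 4, rng) for _ in range(4)]    # one regressor per cascade stage

for net in nets:                                  # K iterations of Eqn. (1)
    p = p + net(fea(image, p))                    # p^{k+1} = p^k + Net^k(Fea(I, p^k))

print(p.shape)  # (3,)
```

Each stage has the same structure but its own weights, and each stage's input features are rebuilt from the previous stage's parameter estimate, which is the "cascade manner" the paper refers to.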
{ "domain": "ai.stackexchange", "id": 1639, "tags": "deep-learning, convolutional-neural-networks, computer-vision, definitions, papers" }
Why does a system want to achieve randomness to be feasible?
Question: According to entropy and Gibbs free energy, randomness is one of the factors that determines reaction spontaneity, but why? Shouldn't it be the opposite? Answer: Randomness is not the correct word to use as it is ambiguous in this context. What happens is that the molecules' energy is spread out among as many energy levels as possible, i.e. the number of possible ways of filling the energy levels is maximised, and this number produces what we call the entropy. It is an experimental observation that this happens. In a thermodynamic explanation, when using just the first law, the external work performed in a reaction must be equal to the heat loss, unless some heat is given to or taken from the surroundings. This is the point first clearly realised by Gibbs. If an isothermal reaction runs reversibly, $T\Delta S$ is the heat absorbed from the surroundings, and if this is positive the work done ($\Delta G$) will be greater than the heat of reaction. Thus you can see that as $\Delta G = \Delta H-T\Delta S$ the entropy can be used towards changing the free energy. When the number of ways of arranging the energy among the numerous energy levels is greater in the products than in the reactants, $\Delta S$ will be positive and will tend to make $\Delta G$ negative, favouring 'spontaneity'.
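A quick arithmetic sketch of $\Delta G = \Delta H - T\Delta S$ with made-up round numbers (not data for any real reaction), showing how a positive $\Delta S$ pushes $\Delta G$ negative:

```python
# Illustrative numbers only: an exothermic reaction whose products can spread
# energy over more levels than the reactants (positive entropy change).
dH = -50_000.0   # J/mol
dS = 100.0       # J/(mol*K)
T = 298.0        # K

dG = dH - T * dS
print(dG)  # -79800.0 J/mol: the -T*dS term makes dG more negative than dH alone
```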
{ "domain": "chemistry.stackexchange", "id": 12337, "tags": "thermodynamics" }
Did I see a mentionable event in Germany early morning on 6th of February 2019?
Question: Around 6.15am (GMT + 1) on the 6th of February 2019 I was looking out of my window somewhat north, when I saw something one would probably call a shooting star. The features were: A magnitude I haven't seen before; it even had a soft blinding effect, that's how bright it was. From my view frame it moved from my right to my left (so most likely from east to west). The size was a quarter to half of what the moon usually has from my view frame. It traveled almost through the sky half I was able to see within half a second before its glow was immediately gone again. I am assuming the object must have been either very large and fast or quite close. Was there any meteor worth mentioning passing by Earth at that given time? Or are the features I just described way more common than I am assuming, and I just spotted an event that happens quite regularly? Answer: Fireballs are real but rare events. Meteors are formed when small rocks enter the atmosphere from space. Larger rocks or small asteroids will produce brighter meteors. The brightest meteors are called fireballs or bolides. It is not uncommon for parts of the rock that made the fireball to survive; these can later be found as meteorites. A couple of things can be confused with fireballs. When satellites re-enter, they can burn up. Satellites move much more slowly than true fireballs; most fireballs last less than 10 seconds. A comet is an astronomical, not atmospheric, object. It looks like a fuzzy star with a tail. It remains almost motionless in the sky, sometimes for months. Really bright comets are rare, and even the brightest comets are much much dimmer than a fireball. There are several organisations that collect reports of fireballs: https://fireball.amsmeteors.org/members/imo/report_intro https://ukmeteornetwork.co.uk/fireball-report/ The advice given states: Please, don't report sightings that lasted more than 30 seconds: the vast majority of fireballs are only visible for a few seconds.
Please, don't report recurring events: seeing a fireball is extremely rare and often a once in a lifetime event. Please, don't report slow blinking objects or lights crossing the sky going by 2 or 3: a fireball looks like a big shooting star.
{ "domain": "astronomy.stackexchange", "id": 3485, "tags": "meteor" }
C# Language Lexer
Question: Here is a Lexer for a programming language I am working on. Any feedback would be appreciated. I only started learning C# a couple of days ago, so please excuse my newbie code :) namespace Sen { enum TokenType { IDENTIFIER, NUMBER, STRING, SEMICOLON, PLUS, MINUS, STAR, SLASH, } class Token { public TokenType type; public string value; public Token(TokenType type, string value = "") { this.type = type; this.value = value; } } class Lexer { public readonly List<Token> tokens; private int charIdx; private readonly string sourceRaw; char CurrentChar { get { return sourceRaw[charIdx]; } } public Lexer(string sourceRaw) { this.sourceRaw = sourceRaw; tokens = new List<Token>(); } bool IsEnd { get { return charIdx >= sourceRaw.Length; } } char? NextChar() { try { return sourceRaw[charIdx++]; } catch (IndexOutOfRangeException) { return null; } } public void Lex() { while (!IsEnd) { switch (CurrentChar) { case ';': AddToken(TokenType.SEMICOLON); break; case ' ': break; case '\'': case '"': LexString(); break; case '+': AddToken(TokenType.PLUS); break; case '-': AddToken(TokenType.MINUS); break; case '*': AddToken(TokenType.STAR); break; case '/': AddToken(TokenType.SLASH); break; default: if (char.IsLetter(CurrentChar)) { LexIdentifier(); continue; } else if (char.IsNumber(CurrentChar)) { LexNumber(); continue; } throw new UnexpectedCharacterException(CurrentChar); } NextChar(); } } void AddToken(TokenType type, string value = "") { tokens.Add(new Token(type, value)); } void LexIdentifier() { int startIdx = charIdx; int endIdx = startIdx; while (!IsEnd && CurrentChar != ' ' && CurrentChar != ';') { if (!char.IsLetterOrDigit(CurrentChar) && CurrentChar != '_') throw new UnexpectedCharacterException(CurrentChar); NextChar(); endIdx++; } string value = sourceRaw[startIdx..endIdx]; AddToken(TokenType.IDENTIFIER, value); } void LexNumber() { int startIdx = charIdx; int endIdx = startIdx; while (!IsEnd && CurrentChar != ' ' && CurrentChar != ';' && 
char.IsNumber(CurrentChar)) { NextChar(); endIdx++; } string value = sourceRaw[startIdx..endIdx]; AddToken(TokenType.NUMBER, value); } void LexString() { char opening = CurrentChar; int startIdx = charIdx + 1; int endIdx = startIdx - 1; NextChar(); while (CurrentChar != opening) { if (IsEnd) throw new ExpectedCharacterException(opening); NextChar(); endIdx++; } string value = sourceRaw[startIdx..endIdx]; AddToken(TokenType.STRING, value); } } } Answer: Welcome to CR and to C#. First things first, you should become familiar with C# Naming Conventions. A few that I choose to emphasize in regards to your post: In Token class, the fields type and value should become properties named Type and Value. In general, fields are private unless they are constant or static. If you wish to expose a field as public, then it should be a property instead. Also, properties and methods should be named with Pascal casing. Though not required, I personally prefer to decorate all properties, fields, and methods with their access modifier, even if it is private. Granted, private is the default but I want to make sure that a beginner has given it thought and explicitly marked it so. Regarding braces, there are 2 areas for improvement. One, the current thinking with C# is that the open and close braces occur on their own line. And two, one-liners are frowned upon and should incorporate braces. Taking that into consideration, this would be a rewrite of one method: private char? NextChar() { try { return sourceRaw[charIdx++]; } catch (IndexOutOfRangeException) { return null; } } Except that entire method can use a less expensive if rather than a try-catch block. private char? NextChar() => (charIdx >= 0 && !IsEnd) ? sourceRaw[charIdx++] : null; Why bother to catch an exception if all you do is ignore it? Especially when there is simple code that can easily work around it.
Back to braces, lines such as: if (IsEnd) throw new ExpectedCharacterException(opening); should be converted to: if (IsEnd) { throw new ExpectedCharacterException(opening); } There are a few properties or methods where you may consider using =>. Example: private bool IsEnd => charIdx >= sourceRaw.Length; You seem to use CurrentChar != ' ' && CurrentChar != ';' frequently. Apparently, these are delimiters between tokens and values. The DRY Principle (Don't Repeat Yourself) suggests this could become its own property: private bool IsDelimiter => CurrentChar == ' ' || CurrentChar == ';' Elsewhere in code you would replace CurrentChar != ' ' && CurrentChar != ';' with !IsDelimiter. The advantage here, besides readability, is that if you were ever to add a 3rd delimiter in the future, you would only have to change it in one spot.
{ "domain": "codereview.stackexchange", "id": 43304, "tags": "c#, language-design, lexical-analysis" }
How do I construct a Density Matrix corresponding to a Hamiltonian?
Question: I have a Hamiltonian and I want to know the corresponding density matrix. The matrix I'm interested in is the one in this question. Answer: There are many different density matrices that can correspond to a given Hamiltonian. For the 8x8 matrix in your question, there are 8 different "eigenstate" density matrices that can be obtained, one for each of the 8 eigenvectors. The density matrices are constructed by taking the outer product of the eigenvectors. For the $i^{\rm{th}}$ eigenstate of the Hamiltonian, the density matrix $\rho_i$ is: $\rho_i = |\psi_i\rangle \langle \psi_i|$. A system can also be in a "pure" superposition of eigenstates, for example: $|\psi \rangle = \frac{1}{\sqrt{2}}|\psi_1\rangle + \frac{1}{\sqrt{2}}|\psi_2\rangle$. Then the density matrix is once again made by taking the outer product of the pure wave function $|\psi\rangle$ with itself. A system can also be in a "mixed" state, which means it's a linear combination of "pure" states. In this case you would construct the density matrix like this (for example): $\rho = 0.5 \rho_1 + 0.5\rho_2$, which describes a state which is a 50% mixture of $\rho_1$ and a 50% mixture of $\rho_2$.
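The constructions above are one-liners in NumPy; here is a single-qubit sketch (the 8x8 case works identically, one projector per eigenvector):

```python
import numpy as np

# Basis states for a single qubit.
psi1 = np.array([1.0, 0.0])                  # |0>
psi2 = np.array([0.0, 1.0])                  # |1>

# Pure-state density matrix: outer product |psi><psi|
rho1 = np.outer(psi1, psi1.conj())
rho2 = np.outer(psi2, psi2.conj())

# Pure superposition (|0> + |1>)/sqrt(2): still a rank-1 projector.
psi = (psi1 + psi2) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Mixed state: a 50/50 classical mixture of rho1 and rho2.
rho_mixed = 0.5 * rho1 + 0.5 * rho2

# A pure state satisfies Tr(rho^2) = 1; a genuine mixture gives Tr(rho^2) < 1.
print(np.trace(rho_pure @ rho_pure))    # 1.0 (up to float error)
print(np.trace(rho_mixed @ rho_mixed))  # 0.5
```

The purity check at the end is a handy way to tell a pure superposition apart from a mixture, since both can have the same diagonal entries.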
{ "domain": "quantumcomputing.stackexchange", "id": 388, "tags": "quantum-state, programming, density-matrix, hamiltonian-simulation" }
Epoxide ring opening via decarboxylation
Question: The answer for the above multiple choice problem is option (C), from which I can understand that at some point in the mechanism the molecule must've undergone decarboxylation. Also, I think the first step should be protonation of the epoxide oxygen. (No other choice, isn't that the most nucleophilic site?) I'm not able to understand how the ring opening takes place (guessing it is via decarboxylation, but how?) Answer: You are correct about protonating the epoxide of the glycidic acid 1 (by some other molecule of glycidic acid). Ring opening gives stable, tertiary cation 3 and NOT cation 6, which is destabilized by the carbonyl group. Collapse of 3 gives CO2 and the enol 4 which tautomerizes to the aldehyde 5.
{ "domain": "chemistry.stackexchange", "id": 9260, "tags": "organic-chemistry, reaction-mechanism, carbonyl-compounds" }
Graphs: Detect a sink in $\mathcal{O}(V)$
Question: Given a directed connected graph whose representation is its adjacency matrix $A$, design an algorithm to detect a sink in $\mathcal{O}(V)$ time, where $V$ is the number of vertices. As definitions can vary, in this context a sink is defined as a vertex with $0$ exit degree and $V-1$ enter degree. Obviously, the problem is reduced to finding a $j\in\{1,\dots,V\}$ such that $a_{ji}=0$ for all $i$ and $a_{ij}=1$ for all $i\not=j$; that's what I tried. However, this solution is in $\mathcal{O}(V^2)$. Any idea? Answer: Assumption: there is at most one row which contains all zeros. You want to find a row of the adjacency matrix $A$ whose entries are all zeros. Simple observation: if, say, the desired row is the $k$th, then the $k$th column of the adjacency matrix $A$ will contain all ones except at the $[k,k]$ index (it is easy to verify). Let $n$ be the number of rows and columns in the matrix $A$. Algorithm: Start from the top right corner of the matrix $A$. If $a_{i,j} = 0$ and $i \neq j$ then it's not your required column; skip this column, meaning move to the $(j-1)$th column (see, you have skipped one column in this step, so the number of columns for your search has been reduced). If $a_{i,j} = 1$ and $i \neq j$ then it may be your required column, but it is not the required row (think why?), so change the row: move to the $(i+1)$th row, keeping the column the same. Running time: in each step you are either reducing the number of rows or the number of columns, so the maximum number of steps is going to be at most $2n$ (number of rows + number of columns).
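A sketch of the corner walk in Python, with an added $\mathcal{O}(V)$ verification pass (the walk alone only narrows the search down to a single candidate, which still has to be checked):

```python
def find_sink(A):
    """Return the index of the sink of adjacency matrix A, or None. O(V) time."""
    n = len(A)
    i, j = 0, n - 1                 # start at the top-right corner
    while i < j:
        if A[i][j] == 1:
            i += 1                  # vertex i has an outgoing edge: not the sink
        else:
            j -= 1                  # vertex j gets no edge from i: not the sink
    cand = i                        # every other vertex has been eliminated
    # Verification pass: row all zeros, column all ones except the diagonal.
    if all(A[cand][k] == 0 for k in range(n)) and \
       all(A[k][cand] == 1 for k in range(n) if k != cand):
        return cand
    return None

A = [[0, 1, 0],
     [0, 0, 0],   # vertex 1: no outgoing edges...
     [0, 1, 0]]   # ...and every other vertex points to it
print(find_sink(A))  # 1
```

The while loop runs at most $2n$ steps (each iteration discards one row or one column) and the verification adds another $2n$, so the total stays $\mathcal{O}(V)$.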
{ "domain": "cs.stackexchange", "id": 9642, "tags": "algorithms, graphs, searching" }
Distinguishing two quantum states practically
Question: Suppose we have two states $$|x\rangle = 1 |0\rangle + 0 |1\rangle$$ and $$|y\rangle = \sqrt{1-\epsilon^2} |0\rangle + \epsilon |1\rangle$$ where say $\epsilon = 10^{-20}$. Can we distinguish between $|x\rangle$ and $|y\rangle$ practically? How many times will we have to repeat the measurement experiment in order to be sure that $|x\rangle \neq |y\rangle$? Because $|x\rangle = |0\rangle$ and $|y\rangle$ is very very very close to $|0\rangle$. Answer: For the state $|x\rangle$, if you repeat the same experiment with this initial state $N$ times, you never get $|1\rangle$ as the outcome. However, if you repeat the same experiment with $|y\rangle$ as the initial state, the probability to get $|1\rangle$ in each copy of the experiment is $\epsilon^2$. The results $|1\rangle$ will appear infrequently, distributed as in a Poisson process. To prove that the coefficient in front of $|1\rangle$ is nonzero, you need to get the outcome $|1\rangle$ at least once. It will approximately occur after $1/\epsilon^2$ repetitions of the experiment, in your case $10^{40}$ repetitions (clearly, your numbers mean that $|x\rangle$ and $|y\rangle$ are indistinguishable in practice because $10^{40}$ is very large). The probability that you will never get $|1\rangle$ after many more experiments than $10^{40}$ is dropping to zero exponentially. If you want to measure $\epsilon$ with a relative error $\delta$, you need roughly $1/\delta^2\epsilon^2$ copies of the experiment because the relative error in a measured quantity goes down like $1/\sqrt{N}$. One should use all these situations for emphasizing the main difference between classical and quantum physics: in classical physics, small changes of the state imply small changes in the things we can measure. In quantum physics, small changes of the state vector may bring large changes in the outcome of a single experiment, but the probability of such a large change is very small.
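The $1/\epsilon^2$ scaling is easy to check with a small Monte-Carlo sketch; $\epsilon$ is scaled up here so the simulation finishes (at $\epsilon = 10^{-20}$ you would be waiting for roughly $10^{40}$ shots):

```python
import random

# Each computational-basis measurement of |y> yields |1> with probability eps^2.
# Count how many shots it takes before the first |1> appears.
random.seed(1)
eps = 0.01                      # P(|1>) = eps^2 = 1e-4 per measurement
p1 = eps ** 2

shots = 0
while True:
    shots += 1
    if random.random() < p1:    # outcome |1> observed
        break

print(shots)                    # typically on the order of 1/eps^2 = 10_000
```

Repeating this with smaller and smaller eps makes the quadratic blow-up in required repetitions obvious.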
{ "domain": "physics.stackexchange", "id": 1903, "tags": "quantum-mechanics, measurements" }
Does ambipolar transport when an electric field is applied to the semiconductor imply zero drift current?
Question: I am reading Neaman's book on semiconductor physics and devices and one of the more important conclusions are that excess holes and electrons drift and diffuse together due to internal electric fields. However, when holes and electrons move in the same direction, wouldn't that mean zero net current? Additionally, if this is true, why would the electrons/holes move at all if the result in terms of current is the same. I find it very hard to grasp why the electrons and holes move in the same direction. As the external field tries to separate the electrons from the holes, doesn't the internal electric field get weaker as the electrons move further away from the holes? And wouldn't the internal field be formed locally in the semiconductor, near the excess carriers, resulting in the external field to be dominating in most location in the lattice? This would cause the carriers to separate in the end. However, Neaman's explanation indicates otherwise. Thanks in advance Answer: The easiest way to understand this is to consider the situation of low injection where the excess minority concentration is much smaller than the majority concentration. Let us assume that you have a homogeneous one-dimensional n-type semiconductor and introduce a localized spatial pulse of excess minority carrier holes into it, first without external field. The positive charge pulse produces an electric field which produces a majority carrier electron current in the direction of the minority carrier holes which, essentially, leads to a very fast shielding of the hole field. This is a very fast process because the internal electric field of the holes produces a strong electron current due to the relatively high concentration of electrons. The end effect is that the positive excess hole pulse is practically neutralized by a minor change in electron concentration relative to its equilibrium concentration which mirrors the excess hole distribution. 
Therefore the neutrality approximation can be assumed. When the spatial excess hole pulse disperses due to diffusion, the excess electron concentration follows its shape. When you move the positive excess hole pulse by any means along the semiconductor the shielding negative excess electron pulse follows but no net electron transport occurs. The electron concentration accumulates at the location where the hole pulse is and returns to the equilibrium when it has passed. This is similar to the local accumulation of charge at a metal surface by electrostatic induction when you move an isolated charged object along it. No net charge transport occurs in the metal. When you move the hole pulse by applying an external electric field, the hole charge will be transported by drift and a much larger electron drift current will be produced in the opposite direction due to the much larger majority concentration. The shielding electron pulse will not separate from the hole pulse. It consists of different electrons that accumulate at the location of the hole pulse and follow the shape it takes.
{ "domain": "physics.stackexchange", "id": 38156, "tags": "solid-state-physics, electric-fields, electric-current, semiconductor-physics, diffusion" }
Torque Problems vs Motor Problem?
Question: I want to preface this by saying I am a software engineer so each part of this robot build has been a lesson on its own. From welding, to CAD design, pillow block bearings, etc, etc.... I am working on a DIY Rover using the following components: 4 DC Motors from electric scooters: 24V, 250W. Connected to axle with #40 roller chains and sprockets. 4 Lawn Mower Wheels. They are 16x6.50-8 for a 3/4" axle with a keyway A Pair of 12 Volt - 60Ah GEL Batteries : I pulled them from a discarded electric wheel chair. They are fully charged, and they show 26V when connected in series I replaced the stock sprockets on the motors with these #40 chain 12 teeth, 1/2" pitch. I put the same sprockets on the axles The problem is that the robot does not begin to move. When I lift the wheels off the ground, they move fine, and spin fast depending on how much throttle I give them. However on the ground they don't move, which is typically a torque problem. However I am not so convinced because these motors are supposed to be for electric scooters and each one capable of moving an adult person, weighing 175+ lbs. Here are my diagnostic steps: Using a FLYSKY controller and 12x2 Sabertooth motor controller. Unfortunately this controller has an over current protection, and those batteries are massive, so it just shuts itself down whenever I move the throttle just a little bit. But again, when the wheels are off the ground they are spinning fine, so the connections are fine. To avoid the over-current protection problem, I purchased this 60Amp DC Motor Controller. I connected it to just one wheel so far, and spliced a multimeter into the connection to measure voltage. I can see the motor will get up to 20V when I turn the potentiometer dial, and I can see the car begin to try to move. Another question is: what's a good way to measure how many Amps the motor is trying to pull? My conclusions are: The chains are just too heavy for this motor. It's a #40 chain which is way overkill.
The sprocket on the motor shaft and the axle are also too heavy. The tires may be too heavy. But again, it's 4 x 250W motors getting 24V and probably up to 10 Amps. What is your advice? What would you do to fix this problem with minimal changes to the design? I.e. just replacing the sprockets? Or are the tires also too heavy? How would I keep the tires and motors as is? Replace the sprockets and chains? Figure out a way to give a higher initial Amperage? Answer: The motors have a rated speed of 2750rpm. It doesn't look like there's much of a gear ratio, so at the moment the motors spin at the same speed as the wheels. With 16" wheels, 2750rpm is about 130mph (rpm*diameter*pi, then converting inches/minute to mph). I'm guessing you don't want it to go that fast, and that speed wouldn't be possible with a total power of 1kW, or 1.3hp. So you want to gear the motors down a lot, which increases the wheel torque. If you want a design speed of say 20mph, that's a gear ratio of $\frac{130}{20}=6.5$, so the wheel sprocket should have 6.5 times as many teeth as the motor sprocket and be 6.5 times the diameter. That will also give you 6.5 times as much torque at the wheels.
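The answer's arithmetic, reproduced as a short script:

```python
import math

# Direct-drive wheel speed at the motor's rated rpm, then the sprocket ratio
# needed for a 20 mph design speed.
rpm = 2750.0
wheel_diameter_in = 16.0

inches_per_minute = rpm * wheel_diameter_in * math.pi
mph = inches_per_minute * 60 / 63360        # 63,360 inches in a mile

ratio = mph / 20.0                          # gear down to a 20 mph top speed
print(round(mph), round(ratio, 1))          # ~131 mph direct-drive, ~6.5:1 ratio
```

Keeping the 12-tooth motor sprocket, that ratio means a wheel sprocket of roughly 78 teeth, which is why the no-gearing setup stalls: all the available torque is traded away for an unreachable top speed.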
{ "domain": "engineering.stackexchange", "id": 4769, "tags": "mechanical-engineering, motors, torque, power, robotics" }
Where can I find a fake localization node (ROS2)?
Question: I am correctly running the nav2 stack, but now I need to provide a localization node for my simulation, in order to make the robot actually move. I want to know if there already exists somewhere a node simulating the localization of the robot based only on the cmd_vel topic furnished by the nav2 stack. Originally posted by lorenzo on ROS Answers with karma: 88 on 2020-02-17 Post score: 2 Answer: The answer is no. We have AMCL ported to ROS2 that we utilize in simulation for positioning, as that represents a far more accurate simulation to reality. now I need to provide a localization node for my simulation, in order to make the robot actually move if you have a laser scanner on your robot, you can just use AMCL the way you would with a non-simulated robot as well. Originally posted by stevemacenski with karma: 8272 on 2020-02-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by lorenzo on 2020-02-18: Ok thanks. You can close the question.
{ "domain": "robotics.stackexchange", "id": 34448, "tags": "localization, navigation, simulation, ros2" }
Approximation of Hilbert Transform Using Very Short Hilbert Transform FIR / IIR Filter
Question: The Hilbert transform is a quite sensitive topic here, since Gabor's paper, Theory of Communication, J. Inst. Electr. Engineering, London, 1946. Perhaps even more important than the Fourier transform. There are short Gaussian FIR approximations, smoothing filters, or polynomial fits. What would be methods for very short (say, less than $16$ taps) Hilbert transform approximations of a signal, possibly including a weighting factor? I am intending to use them on small chunks of signals, in moving frames that progress along data as new samples are acquired. Signals can be considered freed from a potential low-frequency trend (e.g. polynomial). Key features would be: a management or reduction of small-sized frame artifacts (I am aware of those for short Fourier frames, much less with short Hilbert transforms), potential asymmetric windowing to add a forgetting factor to older samples, a recursive formulation, in the spirit of the sliding DFT, would be a plus. Finally, an IIR formulation, such as the one of the recursive least squares (RLS) adaptive filters, or the exponentially-weighted moving average filter, would be very interesting as well. Answer: This is more like an extended comment to chart the possible answers. The Hilbert transform is a frequency domain 90-degree phase shift of the signal. It has an antisymmetrical impulse response around time = 0. You specify that the approximation shall be causal (EDIT: this requirement has since been removed), so I think you need to reference the phase shift to that provided by another filter.
If we denote the two filters F1 and F2, some usable alternatives are: Case | Filter F1 | Filter F2 | Phase diff | Magnitude frequency responses -----+--------------+---------------------+----------------+------------------------------ C1 | pure delay | antisymmetrical FIR | exactly 90 deg | F2 has ripple C2 | pure delay | all-pass IIR | approx 90 deg | No ripple C3 | all-pass IIR | all-pass IIR | approx 90 deg | No ripple In all those cases you have two filters whose phase frequency responses have a phase difference of exactly or close to 90 degrees over the desired frequency band. If the filter outputs are used "in quadrature" (summed, with the other multiplied by the imaginary unit) to create an analytic signal, any phase frequency response ripple will become magnitude frequency response ripple of the composite negative frequency removal filter.
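For a concrete starting point, here is the textbook windowed-ideal construction of a 15-tap type-III (odd-length, antisymmetric) Hilbert FIR; it is a plain FIR baseline, not one of the recursive or IIR schemes discussed above:

```python
import numpy as np

N = 15
n = np.arange(N) - (N - 1) // 2           # centre the taps around n = 0
h = np.zeros(N)
odd = n % 2 != 0
h[odd] = 2.0 / (np.pi * n[odd])           # ideal Hilbert impulse response (odd n only)
h *= np.hamming(N)                        # taper to reduce truncation ripple

# Applied to a mid-band cosine, the filter should return (approximately) a sine.
t = np.arange(256)
x = np.cos(2 * np.pi * 0.25 * t)
y = np.convolve(x, h, mode="same")        # approx 90-degree phase shift, zero delay
print(np.allclose(y[32:224], np.sin(2 * np.pi * 0.25 * t[32:224]), atol=0.05))  # True
```

Because the taps are exactly antisymmetric, the 90-degree phase shift is exact at all frequencies; what such a short filter gets wrong is the magnitude, which rolls off badly near DC and Nyquist, and that is precisely where the frame-artifact management the question asks about becomes an issue.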
{ "domain": "dsp.stackexchange", "id": 3689, "tags": "filter-design, stft, finite-impulse-response, approximation, hilbert-transform" }
What are the exact dimensions of this screw?
Question: When working on a technical drawing project, I couldn't find the exact dimensions of the following screw. In particular, I don't understand the $0.5:1$ to $\emptyset 15$ indication. Is this screw in any DIN/ISO (maybe DIN 914 or ISO 4027) standard? Answer: Expanded from Solar Mike's answer: The screw is 8mm in diameter, 22mm long, has a thread every 1.25mm, and the tip is 5mm in diameter, tapered down from the full screw diameter (8mm) with 0.5(H): 1(V) tapering.
{ "domain": "engineering.stackexchange", "id": 4291, "tags": "technical-drawing" }
What happens if an electric dipole is placed in a non-uniform electric field?
Question: I have viewed the question: An electric dipole placed in a non-uniform electric field But that is a different one from my query. I also viewed its answer. My question is: if an electric dipole is placed in a non-uniform electric field, will it also undergo simple harmonic motion like an electric dipole does in a uniform electric field? PS: I am learning electrostatics as of now. I am new to it. Answer: You mean an oscillation around some equilibrium orientation due to a torque, that is not dampened by some friction or radiation? That would (almost) always be harmonic for sufficiently small amplitudes by approximation. One exception would be particular places where the Taylor series of torque as a function of the angle has no linear term ~ $\alpha$, like $t(\alpha) = k\cdot\alpha^3$. That would not yield a harmonic motion, and I don't know a general answer for that case - one would need to study non-linear ordinary differential equations. Anyway, for larger amplitudes of $\alpha$, it would depend on how exactly the torque increases with the angle. As soon as the torque back to equilibrium deviates from a linear=proportional increase with the angle, the resulting motion cannot be harmonic (sine-shaped) any more: Harmonic motion like $\alpha(t) = \alpha_0\cdot\sin(t)$ means that the second derivative of $\alpha$ is sinusoidal, too, but with a negative sign; just differentiate it twice! But this angular acceleration $\ddot{\alpha}$ is always proportional to the torque, analogously to $a \sim F$ with linear motion according to Newton. Thus, the torque needs to increase proportionally to the negative angle (since both follow the sine in time) - or else there is no perfect harmonic motion in the first place. But even in a homogeneous/uniform field, the torque increases with the sine of the angle, so not linearly for large angles, and it is only approximately linear for small angles where $\sin(\alpha) \approx \alpha$. Think of two opposite point charges at the end of a stick.
The torque only counts the tangential components of the otherwise constant forces on the charges. It is analogous to the gravity pendulum that also displays true harmonic motion for sufficiently small angles only. The dipole behaves exactly like that pendulum up to 90 degrees.
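The pendulum analogy can be checked numerically: integrating $\ddot\theta = -\sin\theta$ (with $g/L = 1$, so the small-angle period is $2\pi$) shows the period growing with amplitude, i.e. the motion is only harmonic in the small-angle limit. Plain Euler-Cromer stepping; the step size and release angles are arbitrary choices for illustration.

```python
import math

def period(theta0, dt=1e-4):
    """Oscillation period for release from rest at angle theta0, torque ~ -sin(theta)."""
    theta, omega = theta0, 0.0
    t = 0.0
    while True:
        omega -= math.sin(theta) * dt    # Euler-Cromer: update velocity first
        theta += omega * dt
        t += dt
        if theta <= 0.0:                 # fell from theta0 to 0: one quarter period
            return 4 * t

print(period(0.1))   # close to 2*pi ~ 6.28: small-angle motion is harmonic
print(period(2.0))   # noticeably longer: large-angle motion is anharmonic
```

A truly harmonic oscillator would have an amplitude-independent period, so the growth seen here is a direct signature of the $\sin\alpha$ (rather than $\alpha$) restoring torque.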
{ "domain": "physics.stackexchange", "id": 69001, "tags": "electrostatics, harmonic-oscillator, oscillators, dipole" }
Why do I feel centrifugal force If I move with constant speed on a turn?
Question: Title sums it up pretty much; I'm studying for my physics exam right now and I just can't wrap my head around this. Answer: Acceleration is the rate of change of velocity, $$\frac{d\vec v}{dt}.$$ Importantly, $\vec v$ is a vector, meaning that it has both magnitude and direction. So changing the direction of your velocity vector is indeed an acceleration. In uniform circular motion your velocity vector is not constant because its direction is changing, even though the magnitude stays the same.
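A finite-difference check makes this concrete: the velocity of uniform circular motion has a nonzero time derivative with magnitude $v^2/r$, even though the speed never changes (the numbers below are arbitrary).

```python
import math

r, speed = 2.0, 3.0          # circle radius and constant speed
omega = speed / r            # angular velocity
dt = 1e-6

def vel(t):
    # velocity vector of a particle on a circle of radius r at constant speed
    return (-r * omega * math.sin(omega * t), r * omega * math.cos(omega * t))

vx0, vy0 = vel(1.0)
vx1, vy1 = vel(1.0 + dt)
ax, ay = (vx1 - vx0) / dt, (vy1 - vy0) / dt   # numerical dv/dt
a_mag = math.hypot(ax, ay)

print(round(a_mag, 3))       # 4.5, i.e. v^2 / r = 9 / 2
```

The speed is constant (both velocity samples have magnitude 3), yet dv/dt is nonzero because the direction rotates; that centre-pointing acceleration is what you feel in the turn.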
{ "domain": "physics.stackexchange", "id": 80177, "tags": "forces, centrifugal-force" }
Daemonizer in C
Question: I am aware that the malloc is a potential memory leak, but with an execvp coming that never returns, it doesn't matter. The purpose is to do something like: daemon my-blocking-program arg1 The idea is to daemonize ourselves, null stdin/stdout, and then execute whatever program was specified in the arguments. This is very similar to nohup, but without logging to a file, or requiring an &. /* * daemon.c * * Created on: Aug 1, 2015 * Author: javaprophet */ #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <string.h> #include <errno.h> #include <sys/wait.h> int main(int argc, char** argv) { if (argc < 2) { printf("No program to execute!\n"); return 1; } pid_t pid = fork(); if (pid < 0) { exit(1); } else if (pid > 0) { exit(0); } if (setsid() < 0) { exit(1); } freopen("/dev/null", "r", stdin); freopen("/dev/null", "w", stdout); freopen("/dev/null", "w", stderr); char** args = malloc(argc * sizeof(char*)); if (args == NULL) { exit(1); } for (int i = 1; i < argc; i++) { args[i - 1] = argv[i]; } args[argc - 1] = NULL; execvp(argv[1], args); return 1; // will only ever return if execvp failed } Answer: The points jacwah made in his answer are basically valid.
If you were going to copy the arguments (which you don't need to), then it would be more appropriate to do that before modifying standard error; in fact, modifying standard error should be the last thing you do before the execvp(), precisely so that you can report errors until the last possible moment. There are some other points to consider when daemonizing. These do not all necessarily need to be acted on, but they are points to consider. Should the daemon change directory to /? Pro: it means that the daemon isn't blocking the unmounting of the file system it's on, etc. This used to be an issue in the days of multiple file systems on multiple disk drives. These days, it is far less of a concern, in general. But beware of running the daemon on a network-mounted file system (NFS or equivalent). Con: it means that the daemon doesn't ensure that the file system it needs is still mounted. Yes, the converse of the first problem. I have a daemonize program which was used to daemonize a DBMS, and it consciously has an option to specify which directory the daemonized program should have as its current directory (the default is not to change directory, like yours), and I used it to change directory to where the DBMS software was installed, precisely so that if it was a network-mounted file system, it wouldn't be unmounted while the daemon was running because the file system would be busy. Should the daemon close open files? The usual answer is yes; you probably don't want any files other than the standard ones open. My code provides for keeping selected file descriptors open. Should the daemon be able to write to a log file? My program has command-line options for each of the standard descriptors separately. The default is /dev/null for all three, though. If you're opening a log file, which options should be used with the open() call? My code goes over the top; you can separately control O_CREAT, O_APPEND, O_EXCL, and O_TRUNC. It also ensures O_NOCTTY.
Should the daemon ignore any signals? Should the daemon set umask? Should the PID of the final daemon process be reported on standard output? I used my daemonize in conjunction with env to set the exact environment, and in conjunction with a program similar to sudo or su to set the user ID and the active groups for the daemon, which gave me very precise (but not concise) control over the environment of the DBMS daemon. What you've got can easily be fixed into a workmanlike, usable program by simply dropping the argument copying. The extensive list of other issues to consider doesn't prevent it from being usable.
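For comparison, the fork/setsid/null-streams sequence the reviewed program uses can be sketched in Python, where the argument list is passed straight through to execvp without any copying. This is only an illustrative sketch of the steps discussed above (single fork, like the original; the `workdir` parameter is a hypothetical hook for the chdir-to-/ consideration), not a drop-in replacement:

```python
import os
import sys

def daemonize(argv, workdir=None):
    # Classic Unix daemonization steps, mirroring the review points above.
    if os.fork() > 0:
        os._exit(0)            # parent returns to the shell immediately
    os.setsid()                # new session: detach from the controlling terminal
    if workdir is not None:
        os.chdir(workdir)      # e.g. "/" so we don't pin a mounted file system
    os.umask(0o022)            # set an explicit umask rather than inheriting one
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):       # point stdin/stdout/stderr at /dev/null
        os.dup2(devnull, fd)
    os.execvp(argv[0], argv)   # argv is used as-is; no copy needed

if __name__ == "__main__" and len(sys.argv) > 1:
    daemonize(sys.argv[1:])    # daemonize my-blocking-program arg1
```

Note how the argv slice is handed to execvp directly; the equivalent in the C program is passing &argv[1], since argv is guaranteed to be NULL-terminated.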
{ "domain": "codereview.stackexchange", "id": 15033, "tags": "c, linux" }
Benefit of Backward Pass at compile time
Question: We collect most of the information about possible compiler optimizations during a forward pass. Is it possible to utilize the information collected in a forward pass during a backward pass, so as to perform better optimizations? Note: I have been going through the patent Compiler with cache utilization optimizations by Roch G. Archambault et al. (2004) and was wondering what kind of information might have been utilized in their backward pass. Answer: Many major compiler optimizations use backward passes. For example, computing liveness information, which is used to do dead-code elimination, usually requires a backward pass over the CFG of a program in order to identify unnecessary statements. Partial redundancy elimination, a powerful algorithm for eliminating redundant computation or rearranging code, consists of two forward passes combined with two backward passes to determine which expressions are available (forward), which expressions are anticipated (backward), etc. Hope this helps!
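The backward liveness pass mentioned above fits in a few lines. Here is a minimal sketch (the three-block CFG and variable names are invented for illustration): iterate to a fixpoint, where each block's live-out set is the union of its successors' live-in sets, and live-in is use ∪ (live-out − def).

```python
def liveness(blocks, succ):
    """blocks: name -> (use, defs) sets; succ: name -> list of successor names.
    Returns (live_in, live_out) computed by backward fixpoint iteration."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (use, defs) in blocks.items():
            # live-out is the union of successors' live-in sets
            out = set().union(*[live_in[s] for s in succ[b]])
            # live-in: everything used here, plus live-out values not redefined
            inn = use | (out - defs)
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out

# b1: x = 1        (defines x)         -> b2
# b2: y = x + 1    (uses x, defines y) -> b3
# b3: return y     (uses y)
blocks = {"b1": (set(), {"x"}), "b2": ({"x"}, {"y"}), "b3": ({"y"}, set())}
succ = {"b1": ["b2"], "b2": ["b3"], "b3": []}
live_in, live_out = liveness(blocks, succ)
```

A definition whose variable is not in the block's live-out set (and has no side effects) is dead and can be eliminated, which is exactly what the backward pass buys you.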
{ "domain": "cs.stackexchange", "id": 459, "tags": "compilers, program-optimization" }
Stopping Viola Jones Face Detection early
Question: I want to try to improve an image processing technique for finding faces in images. I want to stop the Viola-Jones face detector before it finds all faces, and do histogram equalization and other techniques on "candidate" faces. For example, say we have an image with one face. I would like to cut off face detection before it has determined there is only one face, i.e. at the point where there might be 3 prospective faces and Viola-Jones has not yet rejected the regions that are not actually faces. Is this possible, and if so, how do I do it? Answer: Viola-Jones works in stages. Only areas of the image that have a high enough "maybe" score get passed to the next stages. If you implement your own Viola-Jones detector, then you should be able to stop between stages and apply color/lighting corrections before going on to the next stage. As an alternative to implementing your own, you might take a look at the OpenCV implementation. You might be able to modify it for your needs. The source is here for the cascade detector, which is the OpenCV Viola-Jones implementation. You will need to examine the area after the comment //---------------------------------------- Classifier Cascade That is where the actual detector is implemented. I think the interesting places are operator and ocl_detectMultiScaleNoGrouping, and I really think operator is where you would need to start.
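The stage-wise structure with an early exit can be sketched like this. Note this is a toy illustration, not the real OpenCV API: the "windows" are plain numbers and the stage scoring functions are stand-ins for actual Haar-feature classifiers, but the control flow (filter survivors stage by stage, optionally stop after stage k and hand the remaining candidates back for preprocessing) is the idea described above:

```python
def cascade(windows, stages, stop_after=None):
    """windows: candidate regions; stages: list of (score_fn, threshold) pairs.
    Returns the windows still alive after `stop_after` stages (all if None)."""
    survivors = list(windows)
    for i, (score, threshold) in enumerate(stages):
        if stop_after is not None and i >= stop_after:
            break  # early exit: hand surviving candidates back to the caller
        survivors = [w for w in survivors if score(w) >= threshold]
    return survivors

# Toy setup: each window is a number, each stage's "feature score" is the value
# itself, with progressively stricter thresholds as in a real cascade.
stages = [(lambda w: w, 0.2), (lambda w: w, 0.5), (lambda w: w, 0.8)]
windows = [0.1, 0.3, 0.6, 0.9]

early = cascade(windows, stages, stop_after=1)  # candidates after stage 1
final = cascade(windows, stages)                # full cascade
```

In a real detector, the `early` list is where you would run histogram equalization or other corrections on the candidate regions before resuming the remaining stages.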
{ "domain": "dsp.stackexchange", "id": 2130, "tags": "image-processing, detection" }