What (electrical) property does $\alpha$ represent with units of $\textrm{K}^{-1}$?
Question: What property does $\alpha$ represent in the following passage? . . . A block (made of material of $\rho_0 = 0.5\ \Omega\cdot\mathrm{m}$ and $\alpha = 0.004\,\textrm{K}^{-1}$) with dimensions $\ell$ = 8 cm, w = 3 cm, and h = 6 cm is maintained (by external means) at room temperature (20 $^{\circ}$C) . . . Answer: Presumably it is the linear temperature coefficient of resistivity (or of resistance, since for a fixed geometry the two amount to the same thing).
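For context, $\alpha$ enters through $\rho(T) = \rho_0[1 + \alpha(T - T_0)]$, so for a fixed geometry $R(T) = R_0[1 + \alpha(T - T_0)]$ as well. A quick numeric sketch with the block's figures, assuming (neither is stated in the excerpt) that current flows along the length $\ell$ and that the block warms to 40 °C:

```python
rho0 = 0.5                     # ohm*m at T0 = 20 C (from the question)
alpha = 0.004                  # 1/K
l, w, h = 0.08, 0.03, 0.06     # block dimensions in metres

A = w * h                      # cross-section, assuming current flows along l
R0 = rho0 * l / A              # resistance at 20 C: ~22.2 ohm
R40 = R0 * (1 + alpha * 20)    # resistance if the block warmed to 40 C
```

The same coefficient applies to resistance and resistivity because the length and area cancel out of the ratio $R(T)/R_0$.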
{ "domain": "physics.stackexchange", "id": 20954, "tags": "electricity, electrical-resistance" }
Empirical and Molecular Formulae
Question: I'm not a chemist, nor do I study chemistry, so please try to be gentle. Glucose has the molecular formula $\mathrm{C}_6\mathrm{H}_{12}\mathrm{O}_6$ and the empirical formula $\mathrm{C}\mathrm{H}_{2}\mathrm{O}$. Starting from the empirical formula and working the other way, am I guaranteed that $\mathrm{C}_n\mathrm{H}_{2n}\mathrm{O}_n$, where $n$ is a positive whole number, are all going to be well-defined molecular formulae? Some examples are formaldehyde ($n=1$), acetic acid ($n=2$) and ribose ($n=5$). As a starting point, do $n=3$ and $n=4$ make sense, i.e. $\mathrm{C}_3\mathrm{H}_{6}\mathrm{O}_3$ and $\mathrm{C}_4\mathrm{H}_{8}\mathrm{O}_4$ respectively? Answer: Yes, they make sense, but the same molecular formula can represent different compounds. In the first case, $\ce{C3H6O3}$ may represent glyceraldehyde, or lactic acid (and its optical isomers); $\ce{C4H8O4}$ leads to even more possibilities (see the three different tetroses). Generally these compounds may exist when the sum of the formal charges of each atom in the molecular formula is equal to zero. Oxygen and hydrogen normally have formal charges of $-2$ and $-1$, and carbon $+4$, so: $$-2\times n + (-1 \times n \times 2) + (+4 \times n)=-4\times n +4\times n=0$$
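As a small numeric aside (mine, not the answer's): every member of the $\mathrm{C}_n\mathrm{H}_{2n}\mathrm{O}_n$ family has a molar mass that is an exact multiple of the CH2O unit, about 30 g/mol, which is one way to see that the empirical formula fixes composition but not identity:

```python
# Approximate standard atomic masses in g/mol.
MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass_CnH2nOn(n):
    """Molar mass of C_n H_{2n} O_n."""
    return n * (MASS["C"] + 2 * MASS["H"] + MASS["O"])

# Formaldehyde (n=1), acetic acid (n=2) and ribose (n=5) all share the
# empirical formula CH2O, so their masses are exact multiples of ~30.026.
masses = {n: molar_mass_CnH2nOn(n) for n in (1, 2, 5)}
```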
{ "domain": "chemistry.stackexchange", "id": 1358, "tags": "physical-chemistry, isomers, notation" }
How to test that Gazebo works properly; the save windows don't show any components
Question: How can I test whether my Gazebo installation works properly? I'm trying the "save my world" and "save as" options, but no window is shown. Answer: "Save World As" is broken on Gazebo 5.1.0. You can either install the experimental version of Gazebo 6 using the gazebo6-prerelease package (assuming you are on Ubuntu) or wait for the fix to be released in Gazebo 5.2.0. The relevant bug on the Gazebo issue tracker is: https://bitbucket.org/osrf/gazebo/issue/1593/world-save-as-broken-on-gazebo-510
{ "domain": "robotics.stackexchange", "id": 686, "tags": "gazebo" }
Implementing a software PID for the first time in real-time software: I can find P, but what about I and D?
Question: Here is the basic formula of a Proportional-Integral-Derivative controller, taken from the Wikipedia page on the subject. I want to implement this type of controller in my system. Currently the system (a function in the software) is fed a psi value and told to set an analog output to a value so that this pressure is achieved. The function, let's call it SetPressure(int psi), has access to a table of psi vs analog signal output; it does a quick binary search, finds the closest match, and sets the analog output to that match. This is not good enough for when the machinery/valves "wear in". So now I have access to a pressure transducer to tell me the actual pressure in the line I am trying to control. It gives me an analog input to my function. I would like to figure out, given these commanded and actual values, how to create a software PID. The first value is easy: P. This is found by determining the error in the value, error = DesiredValue - ActualValue, and multiplying it by some proportional gain Kp. So P = Kp*error, which is easy enough to understand. After some tuning I should find a suitable value of Kp such that I can control the pressure a little better. But what if I want to find I and D? The software this PID runs in is real-time. It gets called every few milliseconds, reads the inputs, and determines the outputs based on its state. Where do I go from here to get I and D? I understand that there are gains for each of these terms, but I am not sure how to calculate the entire term. For instance, do I need to save each calculated error for each time t so that I can find the integral? That seems like a waste of memory; saving each error for each moment in time would accumulate into a huge list in a matter of seconds. Any help is appreciated. Please ask for clarification if needed, and note this is my first time working with something like this.
Code sample so far (anaOutput[PRESS_OUT_DAC] is already determined above in the code from the analog vs psi table):

    float KpPressure = 1; // These gains are to be determined
    float KiPressure = 1;
    float KdPressure = 1;
    float MVout = anaOutput[PRESS_OUT_DAC]; // Init to output

    if (anaInput[PRESS_IN_DAC] != anaOutput[PRESS_OUT_DAC])
    {
        // error = SetPoint - ProcessValue
        float error = (float)anaOutput[PRESS_OUT_DAC] - (float)anaInput[PRESS_IN_DAC];

        // Pout = Kp * error
        float Pout = KpPressure * error;

        // Iout = Ki * Int(0,t)[error(t)]
        float Iout = KiPressure * 1; //// What should 1 be?

        // Dout = Kd * (d/dt)[error(t)]
        float Dout = KdPressure * 1; //// What should 1 be?

        // Manipulated Output = Combination of PID
        MVout = Pout + Iout + Dout;
    }

    if (MVout > MAX_SIGNAL)
        MVout = MAX_SIGNAL;

    anaOutput[PRESS_OUT_DAC] = (UWORD)MVout + ANALOG_ZERO;

Answer: It sounds like you're on the right track for the proportional part of the controller; you seem to understand what's going on pretty well. The other two portions are easier than you think. The main difference between what you might read in basic control theory and how you're implementing it is that your computer is a discrete-time system. Since you don't have infinitesimal time steps, you'll need to approximate the other two branches of the controller. Integral: You can approximate integration via discrete summation: $$ I_{out} = K_i\int_{0}^{t}e(\tau)d\tau \approx K_i \sum_{n=0}^{N} e[n]\Delta t $$ where $\Delta t$ is the time step between updates of the controller output, and $e[n]$ is the calculated error at the $n$-th time step. This does not require a lot of memory; instead of storing each error sample, you can store their cumulative sum. This is analogous to how an analog integrator would operate. Derivative: You can approximate differentiation in multiple ways.
The simplest is a first-order difference: $$ D_{out} = K_d\frac{d}{dt}e(t) \approx K_d \frac{e[n] - e[n-1]}{\Delta t} $$ If your time step is large, then this may not be a great approximation of the derivative. In fact, it is difficult in any case to implement a good, robust differentiator in practice. You could improve the approximation by decreasing your time step (effectively increasing the sample rate of your discrete-time controller) or by using a more sophisticated digital differentiator design.
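Putting the two approximations together, a minimal discrete PID update might look like the sketch below (a generic illustration, not the asker's codebase; the setpoint would be the commanded pressure and the measurement the transducer reading). Only the running error sum and the previous error are stored, so memory use is constant:

```python
class PID:
    """Discrete PID controller: integral as a running sum, derivative as a first difference."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                 # controller update period, seconds
        self.error_sum = 0.0         # accumulates e[n] * dt; no history list needed
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.error_sum += error * self.dt                   # discrete integral
        derivative = (error - self.prev_error) / self.dt    # first difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.error_sum
                + self.kd * derivative)

# Two update ticks with a constant error of 10:
pid = PID(kp=1.0, ki=0.5, kd=0.0, dt=0.01)
u1 = pid.update(10.0, 0.0)   # P = 10, I = 0.5 * 0.1 = 0.05  -> 10.05
u2 = pid.update(10.0, 0.0)   # P = 10, I = 0.5 * 0.2 = 0.10  -> 10.10
```

The output would then be clamped to MAX_SIGNAL exactly as the question's code already does; clamping plus an integrator usually also calls for anti-windup, but that is a refinement beyond this sketch.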
{ "domain": "dsp.stackexchange", "id": 928, "tags": "real-time" }
Why doesn't this single-layer network work?
Question: I am trying the following code (modified from https://www.kdnuggets.com/2017/10/seven-steps-deep-learning-keras.html ):

    def single_layer(input_shape, nb_classes):
        print("input shape:", input_shape)
        print("print nb_classes:", nb_classes)
        from keras.models import Sequential
        from keras.layers import Dense, Activation
        model = Sequential()
        model.add(Dense(nb_classes, input_shape=input_shape, activation='softmax'))
        model.compile(optimizer='sgd', loss='categorical_crossentropy')
        model.summary()
        return model

However, when I try to fit this model with an X_train of dimensions 64,64,3 and 17 classes, the following is the output with error:

    input shape: (64, 64, 3)
    print nb_classes: 17
    Using TensorFlow backend.
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    dense_1 (Dense)              (None, 64, 64, 17)        68
    =================================================================
    Total params: 68
    Trainable params: 68
    Non-trainable params: 0
    _________________________________________________________________
    Traceback (most recent call last):
    ....
    ....
    File "/home/abcd/.local/lib/python3.5/site-packages/keras/engine/training.py", line 950, in fit batch_size=batch_size)
    File "/home/abcd/.local/lib/python3.5/site-packages/keras/engine/training.py", line 787, in _standardize_user_data exception_prefix='target')
    File "/home/abcd/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 127, in standardize_input_data 'with shape ' + str(data_shape))
    ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (10396, 17)

Why is this code not working, and how should it be modified to make it work? Answer: I can't find the code in the link you posted to get more background information on what kind of data source you are using. However, a data source that is $64\times64\times3$ is high dimensional.
You have a total of 12,288 input features which need to be mapped down to 17 different classes. This is actually a very complex task and will not be successful using a single-layer network. Such a simple network will not have enough parameters to capture the non-linearities between features in your input space. That being said, if you insist on using a single-layer network with data of size $64\times64\times3$ and 17 output classes, the code is as follows. Let's first create some artificial data of the same dimension as your data:

    import numpy as np

    n = 1000
    x_train = np.zeros((n,64,64,3))
    y_train = np.zeros((n,))
    for i in range(n):
        x_train[i,:,:,:] = np.random.random((64,64,3))
        y_train[i] = np.random.randint(0,17)
    x_train = x_train.reshape(n,64,64,3,)

    n = 100
    x_test = np.zeros((n,64,64,3))
    y_test = np.zeros((n,))
    for i in range(n):
        x_test[i,:,:,:] = np.random.random((64,64,3))
        y_test[i] = np.random.randint(0,17)
    x_test = x_test.reshape(n,64,64,3,)

    print('Training data: ', x_train.shape)
    print('Training labels: ', y_train.shape)
    print('Testing data: ', x_test.shape)
    print('Testing labels: ', y_test.shape)

    (1000, 64, 64, 3)
    (1000,)
    (100, 64, 64, 3)
    (100,)

For a classification task we should convert our outputs to categorical vectors, using one-hot encoding to identify the correct class:

    import keras

    # The known number of output classes.
    num_classes = 17

    # Convert class vectors to binary class matrices. This uses 1 hot encoding.
    y_train_binary = keras.utils.to_categorical(y_train, num_classes)
    y_test_binary = keras.utils.to_categorical(y_test, num_classes)

We then build the model.
    from __future__ import print_function
    import keras
    from keras.models import Sequential
    from keras.layers import Dense, Flatten
    from keras.models import model_from_json
    from keras import backend as K

    input_shape = (64,64,3,)

    model = Sequential()
    model.add(Flatten(input_shape=input_shape))
    model.add(Dense(17, activation='softmax'))

    model.compile(loss=keras.losses.mean_squared_error,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])

You can see the summary of the model by using model.summary(). Then we can train this model using:

    batch_size = 128
    epochs = 10

    model.fit(x_train, y_train_binary,
              batch_size=batch_size,
              epochs=epochs,
              verbose=1,
              validation_data=(x_test, y_test_binary))

This code works. However, the data is completely random, so the model cannot learn anything. And even if your data is easily distinguishable, as I said above, I do not expect this model to be complex enough to map such a large input space onto 17 different possible classes. If you post the data source you are using, we can design a model that gets a good result.
{ "domain": "datascience.stackexchange", "id": 3866, "tags": "keras" }
Search for listings on a German website in a given postal code, return an Excel spreadsheet with details from X listings
Question: I'm a newbie, and originally I wanted to write some code to automate getting to the results page of a site that lists apartments for workers. Then I got some inspiration and wanted to automate getting the data from each entry as well. It works and it saves me a lot of time, but it does seem like I went to too much trouble for what it does? I was already advised that I should use more functions/define more functions, but I guess that'd just make seven functions? How would that be helpful in comparison to the 7 blocks of code I have? I am also convinced that this is the hackiest thing I could have done, and it does not seem like a good solution at all.

    import openpyxl
    from openpyxl import Workbook
    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.support.select import Select
    import requests
    from bs4 import BeautifulSoup
    import pandas as pd

    # prepare the excel workbook
    wb = openpyxl.Workbook()
    sheet = wb.active

    driver = webdriver.Firefox()

    # give the driver the site to load and ask the user for the german postal code and number of results
    webpage = r"https://mein-monteurzimmer.de"
    print('Prosim vnesi zeljeno mesto')
    searchterm = input()
    print('Prosim vnesi stevilo rezultatov.')
    number_of_results = int(input())
    driver.get(webpage)

    # go through the steps required to get to the results page
    GDPR = driver.find_element_by_css_selector("span.primary")
    GDPR.click()
    sbox = driver.find_element_by_xpath("//input[@placeholder='Adresse, PLZ oder Ort eingeben']")
    sbox.send_keys(searchterm)
    driver.implicitly_wait(2)
    addressXpath = "//div[contains(text(),'"+searchterm+"')]"
    driver.find_element_by_xpath(addressXpath).click()
    submit = driver.find_element_by_xpath("/html/body/main/cpagearea/section/div[2]/div/section[1]/div/div[1]/section/form/button")
    submit.click()
    First_result = driver.find_element_by_xpath("/html/body/main/cresultcontainer/div/div[2]/div[2]/section[1]/div")
    First_result.click()

    # iterate through the results as many times as the user demanded
    i = 0
    while (i < number_of_results):
        Result_page = driver.current_url
        stran = requests.get(driver.current_url)
        soup = BeautifulSoup(stran.content, 'html.parser')

        # find and extract the relevant info for each listing
        ime = soup.find("dd", itemprop="name")
        if ime:
            print(ime.text)
            c1 = sheet.cell(row=i+1, column=1)
            c1.value = ime.text

        ulica = soup.find("dd", itemprop="streetAddress")
        if ulica:
            print(ulica.text)
            c1 = sheet.cell(row=i+1, column=2)
            c1.value = ulica.text

        postna_stevilka = soup.find("span", itemprop="postalCode")
        if postna_stevilka:
            print(postna_stevilka.text)
            c1 = sheet.cell(row=i+1, column=3)
            c1.value = postna_stevilka.text

        kraj = soup.find("span", itemprop="addressLocality")
        if kraj:
            print(kraj.text)
            c1 = sheet.cell(row=i+1, column=4)
            c1.value = kraj.text

        tel = soup.find("dd", itemprop="telephone")
        if tel:
            print(tel.text)
            c1 = sheet.cell(row=i+1, column=5)
            c1.value = tel.text

        spletna_stran = soup.find("dd", itemprop="url")
        if spletna_stran:
            print(spletna_stran.text)
            c1 = sheet.cell(row=i+1, column=6)
            c1.value = spletna_stran.text

        # this specific one doesn't work as they used the same class and name for
        # both mobile and landline. Need to figure this out.
        # However, if there is no landline, at least the mobile number is extracted.
        mobil = soup.find("dd", itemprop="telephone").parent.find_next_siblings()
        if mobil:
            print(mobil.text)
            c1 = sheet.cell(row=i+1, column=7)
            c1.value = mobil.text

        # click through to the next result
        next_entry = driver.find_element_by_xpath("/html/body/main/chousingdetail/div/div[2]/div[1]/nav/div/div[2]/a[2]/i")
        next_entry.click()
        i += 1

    # once all the results have been worked through, save the workbook to this directory:
    wb.save("[Directory to save to]")

Answer: Localisation. This:

    # give the driver the site to load and ask the user for the german postal code and number of results

and this:

    print('Prosim vnesi stevilo rezultatov.')

are both great.
The former uses English for code, which is generally advised; the latter uses a localised language (Slovenian?) for user-facing content. These, however, should be avoided:

    ulica = soup.find
    postna_stevilka = soup.find

Use English instead (street, postcode). For better or worse, English is the de-facto language of international programming, and using it for your variable names will make your code more legible to collaborators and colleagues. General approach: whenever you think about scraping a website, look at the network or traffic tab of your browser's developer tools. In this case it shows: https://mein-monteurzimmer.de/api/v2/search That's an API that you can call with Requests, which will be simpler and more efficient than Selenium.
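On the "more functions" question from the post: one concrete way to collapse the seven near-identical find-and-write blocks is to drive them from a table of (itemprop, column) pairs. A sketch of the idea; here a plain dict lookup stands in for soup.find so the snippet runs standalone, and in the real script the lookup would return element.text:

```python
# Field table: itemprop attribute -> spreadsheet column (columns as in the original script).
FIELDS = [
    ("name", 1),
    ("streetAddress", 2),
    ("postalCode", 3),
    ("addressLocality", 4),
    ("telephone", 5),
    ("url", 6),
]

def extract_row(lookup):
    """Return {column: value} for every field the listing actually contains.

    `lookup` maps an itemprop name to its text, or None when the field is
    missing -- mirroring the `if ime:` style guards in the original loop.
    """
    return {
        column: lookup(itemprop)
        for itemprop, column in FIELDS
        if lookup(itemprop)
    }

# Stubbed listing that lacks a telephone entry:
listing = {"name": "Zimmer A", "postalCode": "10115"}
row = extract_row(listing.get)   # {1: 'Zimmer A', 3: '10115'}
```

With openpyxl, writing then becomes a single loop: for column, value in row.items(): sheet.cell(row=i + 1, column=column).value = value.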
{ "domain": "codereview.stackexchange", "id": 39859, "tags": "python, web-scraping, automation" }
Pole-zero plot given a transfer function
Question: I've been looking at how to plot zeros/poles based on a transfer function. I found a couple of tutorials online. In the first YouTube tutorial, the author brilliantly explains how to plot the zeros/poles. In the second tutorial, the author explains how to obtain the magnitude characteristics of the frequency response, which is also fantastic. In the second tutorial, under the example in "3.2.3 Transfer function of discrete-time systems", we have the following transfer function. Based on the transfer function, the poles and zeros can be defined as a = [1 -2.2343 1.8758 -0.5713], b = [0.0088 0.0263 0.0263 0.0088]. This is where my confusion starts. Based on the first tutorial, I'll have to plot all the zeros/poles along the x-axis (or am I mistaken?). But based on the MATLAB command to plot poles and zeros, zplane(a,b), I get this plot. The poles and zeros are scattered all over. How can I plot the poles and zeros manually in the z-plane, given my poles and zeros, and obtain a similar output to MATLAB? Thanks for your help. Answer: What you have are not the poles and zeros, but simply the filter coefficients, i.e., the coefficients of the numerator and denominator polynomials. The poles are the roots of the denominator polynomial, and the zeros are the roots of the numerator polynomial. In MATLAB they can be found by using the roots command: p = roots(a); z = roots(b); Note that in general, poles and zeros are complex numbers; that's why they are plotted in the complex plane. Just a remark: you used the zplane command with numerator and denominator interchanged; that's why the plot shows the zeros as crosses on the unit circle and the poles as 'o's inside the circle. The correct call is zplane(b,a).
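The same computation works outside MATLAB too; for instance, with NumPy's np.roots on the coefficient vectors from the question:

```python
import numpy as np

a = [1, -2.2343, 1.8758, -0.5713]      # denominator coefficients
b = [0.0088, 0.0263, 0.0263, 0.0088]   # numerator coefficients

poles = np.roots(a)   # roots of the denominator polynomial
zeros = np.roots(b)   # roots of the numerator polynomial
```

Plotting zeros.real against zeros.imag (and likewise for the poles) together with the unit circle reproduces the corrected zplane(b,a) picture: for these coefficients the zeros sit on the unit circle and the poles sit inside it, consistent with a stable low-pass design.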
{ "domain": "dsp.stackexchange", "id": 8008, "tags": "matlab, transfer-function, poles-zeros" }
N in Nernst Equation
Question: My teacher told me that in the Nernst equation, $n$ is the number of electrons transferred. For example, $n = 2$ in $\ce{Ni^{2+} + Zn -> Zn^{2+} + Ni}$. However, say all of the coefficients were doubled: $\ce{2Ni^{2+} + 2Zn -> 2Zn^{2+} + 2Ni}$. Would $n = 2$ or $n = 4$ in that case? Answer: The Nernst equation is as follows, \begin{align*} E_\text{red}&=E_\text{red}^{0}-\frac{RT}{nF}\ln{\left(\frac{c_\text{red}}{c_\text{oxi}}\right)}. \end{align*} where $c$ is the concentration of the species specified (usually the chemical activity, $a$, is used, but the equations follow the same principle). In the first case, $n=2$. So, \begin{align*} E_\text{red}&=E_\text{red}^{0}-\frac{RT}{2F}\ln{\left(\frac{c_\text{red}}{c_\text{oxi}}\right)}. \end{align*} Suppose we double the coefficients in the equation. Now, $n=4$. However, the stoichiometric coefficients, which appear as exponents in the reaction quotient (the argument of the natural logarithm), have also doubled, so the reaction quotient is squared. Therefore, \begin{align*} E_\text{red}&=E_\text{red}^{0}-\frac{RT}{4F}\ln{\left(\frac{c_\text{red}}{c_\text{oxi}}\right)^2}. \end{align*} The laws of logarithms mean that we can move the power of 2 out front as a multiplier of the logarithm, \begin{align*} E_\text{red}&=E_\text{red}^{0}-\frac{2RT}{4F}\ln{\left(\frac{c_\text{red}}{c_\text{oxi}}\right)},\\\\ &=E_\text{red}^{0}-\frac{RT}{2F}\ln{\left(\frac{c_\text{red}}{c_\text{oxi}}\right)}, \end{align*} which is exactly the equation we had previously; nothing has "changed". So, to answer your question: yes, $n=4$, but the equation does not change, as the reaction quotient alters accordingly.
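The cancellation is easy to verify numerically: with $n$ doubled and the quotient squared, the two expressions for $E$ agree to machine precision. A quick check (the values of $E^0$, $T$ and the quotient $Q$ are arbitrary illustrative numbers):

```python
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 298.15      # temperature, K
E0 = 0.51       # illustrative standard potential, V
Q = 3.7         # illustrative reaction quotient c_red / c_oxi

E_n2 = E0 - (R * T / (2 * F)) * math.log(Q)        # original equation, n = 2
E_n4 = E0 - (R * T / (4 * F)) * math.log(Q ** 2)   # doubled coefficients: n = 4, Q squared
```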
{ "domain": "chemistry.stackexchange", "id": 10587, "tags": "electrochemistry, nernst-equation" }
Question about air-conditioning cycle
Question: All explanations of the air-conditioning cycle are somewhat like this: https://www.swtc.edu/ag_power/air_conditioning/lecture/basic_cycle.htm My question is... how do we ensure the refrigerant is in the appropriate state at the appropriate time in the cycle? For example, what if the high-pressure gas goes through the condenser and it hasn't condensed fully? Similarly with all the other stages: what if the flow of the refrigerant is too fast, or too slow? Answer: Example Refrigeration System. Referring to the sketch, an electric motor drives a compressor, which takes low temperature, low pressure refrigerant vapor, and compresses it to a high temperature and high pressure. The high pressure refrigerant vapor goes to a condenser, where it is condensed by a lower temperature environmental fluid (usually air) as heat is rejected to the environment. In my example, it is important to send the condensing vapor to the top of the condenser, such that any condensed droplets will drop to the bottom of the condenser, in order to keep condensed liquid from covering up active heat transfer area on the inside of the condenser tubes. In turn, the condensed liquid goes to the small volume storage tank, where it collects. From there, the high temperature, high pressure liquid goes through the expansion valve, where its pressure and boiling temperature are reduced. Next, the low pressure and low temperature liquid enters the evaporator, where heat is transferred into the evaporator for the purpose of cooling an enclosed space (e.g., the inside of an automobile). The liquid level in the evaporator is controlled by a level controller, which admits the correct flow rate of refrigerant from the small volume storage tank to hold the desired level in the evaporator. The pressure in the evaporator is controlled by a pressure controller on the evaporator's exit-vapor line, which is also the compressor suction line.
This device controls a valve which “modulates” the pressure of the evaporator, but cannot be fully closed because the compressor should never be isolated from the evaporator. In the case where the refrigeration system runs low on refrigerant, the pressure in the evaporator exit-vapor line will drop very low, and the pressure switch shown in the drawing will disconnect power to the compressor. Ameet, regarding your question of how refrigerant is not allowed to accumulate in the wrong places, any real-world refrigeration system will have either electronic or mechanical devices which will prevent such an occurrence. My drawing, and the details above, give one example of how this can be done. I am sure that there are several other ways of accomplishing the same thing, and I am also sure that the designers of any real refrigeration system will specify electrical or mechanical devices to deal with these same issues. Regarding your specific question of failure of the high pressure refrigerant to condense in the condenser, this can indeed happen. If the person charging the refrigeration system does not pull a hard vacuum on the system before charging with refrigerant, air will be in the system. This air will tend to accumulate at the condenser, and it acts to impede heat transfer because it is non-condensible. Until someone purges the air from the condenser by blowing it to the atmosphere, the refrigeration system will operate improperly. Assuming that there is no air in the system, the original system design takes account of the correct pressures and temperatures needed to condense the refrigerant when using ambient air. Unless the ambient air temperature substantially exceeds the design ambient air temperature, you can expect the condenser to operate as designed. Regarding your question about flow rates of refrigerant, my example uses the indicated controllers to ensure the proper flow rate of refrigerant. 
Unless a controller fails, or some design criterion is exceeded (either too little or too much of a design variable), flow rates should be whatever is needed for the system to function properly.
{ "domain": "physics.stackexchange", "id": 34824, "tags": "heat-engine, carnot-cycle" }
IR spectra and hydrogen bonds
Question: Normally we see IR signals that correspond to the vibration of covalent bonds. Can a (strong) hydrogen bond (or a vibration thereof, to be precise) correspond to an IR signal as well? How about other non-covalent forces? What are examples where hydrogen bonds can be seen in the IR spectrum? Answer: Your question implies that hydrogen bonds give rise to discrete peaks in the vibrational spectrum. It's better to say that hydrogen bonding affects the position of already existing peaks, in particular those vibrational modes coming from the $\ce{X-H}$ bond. A hydrogen bonding network may also broaden low-frequency peaks coming from librations and hindered rotations. For example, take the $\ce{O-H}$ stretching region in water. The vapor spectrum consists of water molecules effectively in the gas phase that aren't interacting with each other, or are doing so rarely enough that the molecular configurations are "countable", shown by the discrete peaks between ~3600-3900 wavenumbers. A lack of water molecules interacting means hydrogen bonds aren't forming. Moving to a bulk phase, where the water molecules are now interacting with each other all the time, hydrogen bonds are being formed and broken much more often. This results in a continuous peak, as more configurations are being sampled. In the case of water, charge transfer into the $\ce{O-H}$ bond causes a red-shift (lowering) of its frequency, moving the center of the distribution to 3400 wavenumbers. The distribution begins to narrow again in the solid phase, as the number of configurations is still large but dynamical switching between them slows. It is also possible to experimentally measure spectra of small clusters through more advanced techniques such as mass spectrometry-detected rare gas tagging. Note the appearance of peaks around 3400 wavenumbers as the size of the cluster grows. (The presence of a proton doesn't affect these spectra in the hydrogen bonding region.)
A similar effect can be seen in molecular dynamics simulations. See how the peaks broaden and shift to lower energies when going from 1 water to the dimer, then 3 in a line/triangle, and finally 4 in a square. First image taken from http://www1.lsbu.ac.uk/water/water_vibrational_spectrum.html. Experimental cluster spectra taken from http://pubs.acs.org/doi/abs/10.1021/acs.jpca.5b04355. The BOMD plot is my own creation (unnormalized dipole ACF spectra from HF/STO-3G, 0.36 fs time step, 10000 steps).
{ "domain": "chemistry.stackexchange", "id": 6674, "tags": "organic-chemistry, physical-chemistry, spectroscopy, ir-spectroscopy" }
Center of gravity
Question: Say an aircraft without its engines has a mass of 118 tonnes and balances on its rear wheels only. This must mean the center of mass of the aircraft without its engines is directly above the rear wheels. If the 2 engines have a mass of 7.5 tonnes each and their center of gravity lies 9 m ahead of the rear wheels, is it possible to find the center of mass of the whole system? (The distance between the front and rear wheels is 28.67 m and the overall length of the aircraft is 66.8 m.) Answer: $\sum V$ = total force = 118 + 15 = 133 tonnes. Distance of the total force from the rear wheels ($x$), from $\sum M$ about the rear wheels = 0: $x = 15*9/133 = 1.015$ m, i.e. the center of the whole mass is located 1.015 m from the rear wheels, toward the engines. You can verify the correctness of this solution by calculating the reactions $R_R$ and $R_L$ for both systems: 1) two loads on the beam, and 2) one (total) load on the beam; the resulting $R$'s should be identical.
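The verification suggested at the end can be done directly: compute the front- and rear-wheel reactions for the two separate loads and for the single resultant, taking moments about the rear wheels with distances measured toward the nose (a sketch using the question's numbers):

```python
wheelbase = 28.67                  # m between rear and front wheels
m_body, m_engines = 118.0, 15.0    # tonnes; body CG is over the rear wheels
x_engines = 9.0                    # engine CG, m ahead of the rear wheels

total = m_body + m_engines         # 133 tonnes

# System 1: two separate loads on the beam.
R_front_1 = (m_body * 0.0 + m_engines * x_engines) / wheelbase
R_rear_1 = total - R_front_1

# System 2: one resultant load at the combined centre of mass.
x_cg = m_engines * x_engines / total       # ~1.015 m ahead of the rear wheels
R_front_2 = total * x_cg / wheelbase
R_rear_2 = total - R_front_2
```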
{ "domain": "engineering.stackexchange", "id": 4674, "tags": "applied-mechanics" }
[Kinect] About sound streaming
Question: Hello, I want to know if there is any library that allows ROS to stream the Kinect's microphone over the web, something like the web_video_server. Update: Navigating through the forum I found the audio_common library; is it possible to generate an audio stream from this library and the rosbridge websocket? Thanks beforehand! Originally posted by pexison on ROS Answers with karma: 82 on 2015-03-06 Post score: 0 Answer: If you expand the main idea of the audio_common tutorial beyond the localhost space, you are able to stream sound. Originally posted by pexison with karma: 82 on 2015-03-11 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 21076, "tags": "ros, kinect, web, beginner" }
Will the directions of two entangled particles going through a second set of Stern-Gerlach magnets, after an initial set, be opposite?
Question: Consider this experimental set-up: you have a source emitting two entangled spin-1/2 particles, one left, one right. You have two oppositely (180-degree) oriented, horizontal Stern-Gerlach magnets, one on the left, one on the right. The particles have opposite spin, and the magnets are oppositely oriented, so the particles will either both go 'up' or both go 'down', 100% of the time, right? We ignore, for now, the particles that went 'down'. At this point we should have two up-particles fully aligned with the horizontal Stern-Gerlach magnets, is that right? So if we were to put them both through the same magnets again, they would both go 'up' again? Now say that after this point we put them through a set of Stern-Gerlach magnets that are oriented 45 degrees clockwise from the initial set. So one has a net 45-degree rotation, the other a net 225-degree rotation (but a relative 45 degrees from its initial orientation). Given the particles are perfectly aligned, this gives an ~85.36% chance to go 'up' ($\cos^2(45^\circ/2)$) along the nearer trajectory and an ~14.64% chance to go 'down' along the further trajectory. My question is: will both particles always go in the 'same' direction, e.g. both take the 85.36% path or both take the 14.64% path? Or will they take opposite paths? Or are they no longer correlated, so that they go independently from here? Answer: Let's say your particles start in the state $\frac 1 {\sqrt 2} \left(|ud\rangle + |du \rangle\right)$. In your example, particles in this entangled state are deflected by magnets twice, where the magnets are pointed in opposite directions. If we were to put a screen behind the first set of magnets, we would always see either both particles go up or both go down, just as you said. Let's say the first magnet directs both particles in the $|ud\rangle$ state up.
Then if we add additional magnets that act only on particles deflected up, these magnets will only act on $|ud\rangle$, and any particles reaching the later magnets have effectively been prepared in the $|ud\rangle$ state! This doesn't have any effect on the next set of magnets which, as you describe, will deflect both particles up again. But when the particles get to the third set of magnets, the correlation between the spins no longer exists, and we are no longer guaranteed to get the same outcome.
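The independence claim can be checked with a small projector calculation (my sketch, not part of the original answer). For a spin measured along an axis tilted by $\theta$ from $z$ (in the x-z plane), the 'up' projector is $|n\rangle\langle n|$ with $|n\rangle = (\cos\theta/2, \sin\theta/2)$; for the product state $|u\rangle|d\rangle$ the joint probability factorises, so the two second-stage outcomes are uncorrelated:

```python
import numpy as np

def up_projector(theta):
    """Projector onto spin-up along an axis tilted theta from z, in the x-z plane."""
    n = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(n, n)

theta = np.pi / 4                     # the 45-degree rotation from the question
up_z = np.array([1.0, 0.0])           # spin-up along z
down_z = np.array([0.0, 1.0])         # spin-down along z

# Single-particle probabilities at the tilted magnet.
p_up_given_up = up_z @ up_projector(theta) @ up_z        # cos^2(22.5 deg) ~ 0.8536
p_up_given_down = down_z @ up_projector(theta) @ down_z  # sin^2(22.5 deg) ~ 0.1464

# Joint probability for the product state |u>|d>: both deflect 'up' at their magnets.
state = np.kron(up_z, down_z)
P_both_up = np.kron(up_projector(theta), up_projector(theta))
p_joint = state @ P_both_up @ state
```

Here p_joint equals the product of the two single-particle probabilities, which is exactly what "no longer correlated" means; for the original entangled state this factorisation would fail.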
{ "domain": "physics.stackexchange", "id": 91669, "tags": "quantum-mechanics, quantum-entanglement" }
Difference equation when the transfer function is expressed as poles and zeros
Question: The transfer function $H(z)$: $$ H(z) = \frac{Y(z)}{X(z)} = \frac {b_0 + b_1 z^{-1} + b_2 z^{-2}} {1 + a_1 z^{-1} + a_2 z^{-2}} \tag{1} $$ has the difference equation: $$ y[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] - a_1 y[n-1] - a_2 y[n-2] \tag{2} $$ How would you compute the difference equation when $H(z)$ is expressed as poles and zeros, without expanding brackets: $$ H(z) = g \frac {(z - d_0)(z - d_1)} {(z - c_0)(z - c_1)} \tag{3} $$ I'm happy to assume real poles and zeros for the sake of simplicity. I know that when performing a cascade on a computer, the output of the first section is fed to the second and second-order sections are used. I'm interested in the maths, though. What I'm ultimately trying to do is get the difference equation for this expression directly, without expansion: $$ H(z) = g \prod_{k=1}^{K} \frac {b_{k0} + b_{k1} z^{-1} + b_{k2} z^{-2}} {1 + a_{k1} z^{-1} + a_{k2} z^{-2}} \tag{4} $$ Answer: If I understood your question right, you would like to obtain a Linear Constant-Coefficient Difference Equation (LCCDE) representation of a given system from its system transfer function $H(z)$ when it's expressed in a pole-zero product form such as: $$ H(z) = g \frac {(z - d_0)(z - d_1)} {(z - c_0)(z - c_1)} \tag{1} $$ instead of a, more directly apparent, power-series form such as: $$ H(z) = \frac {b_0 + b_1 z^{-1} + b_2 z^{-2}} {1 + a_1 z^{-1} + a_2 z^{-2}} \tag{2} $$ As is known, the latter form can be directly converted to its corresponding LCCDE representation from the coefficients ${b_k}$ and ${a_k}$ as follows: $$y[n] + a_1 y[n-1] + a_2 y[n-2] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2]$$ where the pattern of conversion speaks for itself, without need of further description. Now, we'd like to know whether one can obtain a similar conversion from the pole-zero form into the LCCDE without expanding the product. Well, yes and no! It depends on how you consider the process of computing the coefficients.
If your intention is that you do not want to algebraically perform the parenthesis-expansion operation, but instead use a mechanism that computes each coefficient separately by a set of arithmetic operations, then yes, you can devise such a method to, for example, compute $a_3$ or $b_5$ without computing any other terms. And no: eventually you must always compute the coefficient $a_k$ that multiplies $z^{-k}$ so as to find the coefficient that multiplies $y[n-k]$, hence you always need to expand the brackets and compute the multipliers of $z^{-k}$ in one way or another. So let me present here a simple method to find those coefficients: given a polynomial in the product-of-zeros form: $$p(x) = (x-a)(x-b)(x-c)$$ we can perform the following operation to get its coefficients in the expanded form $$p(x) = d_0 x^3 + d_1 x^2 + d_2 x + d_3$$ $$d[k] = (1 -a)\star(1 -b)\star(1-c)$$ where $k =0,1,2,3$ in this particular case. As is clearly seen, this is a discrete convolution operation, where each multiplier bracket is turned into a respective convolution operand. A simple MATLAB line to compute $d[k]$ would be: d = conv([1 -a],conv([1 -b],[1 -c])); where a, b, and c would be replaced by their numeric values; or, if you're comfortable with the symbolic math capability, you could instead obtain a pure symbolic result for the coefficients $d_k$, which would therefore give you a way to compute LCCDE coefficients from pole-zero products of $H(z)$ in a rather roundabout way. Finally, the computation for products in powers of $z^{-1}$ is basically the same if you describe the product as $$P(z) = (1 - az^{-1})(1-bz^{-1})(1-cz^{-1})$$ and $$P(z) = d_0 + d_1 z^{-1} + d_2 z^{-2} + d_3 z^{-3} $$ where again $$d[k] = (1 -a)\star(1 -b)\star(1-c)$$
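The conv trick above translates directly to Python. The following is a minimal sketch (assuming NumPy is available; the function name lccde_coeffs is mine, not from the answer) that expands $H(z) = g\prod_k(1 - d_k z^{-1})/\prod_k(1 - c_k z^{-1})$ into the $b_k$ and $a_k$ coefficients of the LCCDE by repeatedly convolving the first-order factors:

```python
import numpy as np

def lccde_coeffs(gain, zeros, poles):
    """Expand g * prod_k (1 - d_k z^-1) / prod_k (1 - c_k z^-1)
    into numerator coefficients b[k] and denominator coefficients a[k]
    by convolving the first-order factors (the conv trick above)."""
    b = np.array([1.0])
    for d in zeros:
        b = np.convolve(b, [1.0, -d])   # multiply in (1 - d z^-1)
    a = np.array([1.0])
    for c in poles:
        a = np.convolve(a, [1.0, -c])   # multiply in (1 - c z^-1)
    return gain * b, a

# The returned arrays plug straight into the pattern of equation (2):
# y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]
b, a = lccde_coeffs(2.0, zeros=[0.5, -0.25], poles=[0.9, -0.3])
```

The numeric pole/zero values above are made up purely for illustration.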
{ "domain": "dsp.stackexchange", "id": 3848, "tags": "infinite-impulse-response, z-transform" }
What's the logic behind such gradient descent
Question: The gradient descent is motivated by the LeetCode question on minimal distance: https://leetcode.com/problems/best-position-for-a-service-centre/ $$\arg\min\limits_{x_c,y_c}\sum\limits_i\sqrt{(x_i-x_c)^2+(y_i-y_c)^2}.$$ And here is one of the correct solutions (dist is the distance function): double getMinDistSum(vector<vector<int>>& positions) { constexpr double kDelta = 1e-6; int n = positions.size(); vector<double> center(2, 0.0); for (const auto& pos : positions) { center[0] += pos[0]; center[1] += pos[1]; } center[0] = center[0] / n; center[1] = center[1] / n; double minDist = dist(center, positions); double step = 1.0; while (step > kDelta) { bool reduceStep = true; for (int y = -1; y <= 1; y++) for (int x = -1; x <= 1; x++) { if (abs(y) + abs(x) != 1) continue; double curX = center[0] + x * step; double curY = center[1] + y * step; double newDist = dist({ curX, curY }, positions); if (newDist < minDist) { minDist = newDist; reduceStep = false; center[0] = curX; center[1] = curY; } } if (reduceStep) { step /= 10.0; } } return minDist; } The logic is that each time we only move in whichever of the four axis directions $(\pm 1, 0)$ and $(0, \pm 1)$ decreases the distance (the abs(y) + abs(x) != 1 check discards the diagonal and zero moves). If none of them decreases it, the step is reduced by a factor of ten. I cannot understand why such a descent works. Answer: This is not gradient descent; instead, it is coordinate descent. The logic is the same as walking up a hill in order to reach the summit: you do not have to be aware of the entire hill; the area around you is sufficient. As long as you always take a step to a higher point, you will be able to get to the top. There are problems, however: the function needs to be convex and smooth, otherwise you can get stuck at a local point that is not the peak.
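For reference, here is a hedged Python sketch of the same search (the names are mine; this mirrors the C++ routine above rather than reproducing any official solution): start at the centroid, probe the four axis-aligned neighbours at the current step size, and shrink the step tenfold whenever no probe improves the objective.

```python
import math

def min_dist_sum(positions, delta=1e-6):
    """Pattern-search sketch of the C++ routine above: start at the
    centroid, probe the four axis directions, shrink the step tenfold
    whenever no probe reduces the total distance."""
    def total(cx, cy):
        return sum(math.hypot(x - cx, y - cy) for x, y in positions)

    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    best, step = total(cx, cy), 1.0
    while step > delta:
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            d = total(cx + dx * step, cy + dy * step)
            if d < best:
                best, cx, cy = d, cx + dx * step, cy + dy * step
                improved = True
        if not improved:
            step /= 10.0
    return best
```

Because the objective is convex, shrinking the step only when no axis move helps is enough to home in on the minimiser to within the final step size.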
{ "domain": "datascience.stackexchange", "id": 10916, "tags": "gradient-descent" }
Is the minicolumn the unit of the neocortex?
Question: There are many arguments about what the unit of the neocortex is. "Columns" seem to be the standard, but what exactly those are is extremely contradictory between individuals, cortical regions, and species. Oftentimes, when a column is referred to, it's actually a functional column without any anatomical borders (such as Hubel and Wiesel's ocular dominance columns). Sometimes, the "column" is semi-anatomical, such as the rat barrel cortex. Other times, these functional columns are confabulated into anatomical units, without any evidence for a border. So my question is, could Mountcastle's "minicolumns" be the actual anatomical unit of the cortex? I've heard arguments that they are mere developmental relics. But they seem like the only reliable and consistent unit in the cortex. Answer: I am skipping the background information (I will amend my question later) for now. So, what is known about minicolumns? The term was coined by Ramón y Cajal, who saw narrow stripes running from white matter to the surface (reference) in his preparation with Nissl staining. The hypothesis that minicolumns are anatomical units was proposed by Mountcastle in 1957, known as the "columnar hypothesis" in neuroscience (reference). It has been shown that minicolumns grow from progenitor cells within the embryo (reference) and don't require brain function to emerge. Minicolumns contain neurons within multiple layers of the cortex (reference). One of the latest reviews on the topic (Buxhoeveden & Casanova, 2001) contains 121 references to experimental papers on this topic. I wouldn't say that these columns are a developmental relic, for there is a known heterogeneity among these structures (like hypercolumns, macrocolumns and segregates) and it is rather improbable that all these types are relics. So, as long as we stick to the term "anatomical unit", I see no reason why columns (of all possible types and sizes) should not be it.
{ "domain": "biology.stackexchange", "id": 188, "tags": "neuroscience" }
Safe DbContext Disposal
Question: I have a service class called ClientService; the service class is called through an interface IClientService. The service class simply provides CRUD methods for the DbSet<Client> in my context class. In my service class constructor I have: private MyContext context; public ClientService() { context = new MyContext(); } I then have a method in the ClientService class, called Save(), that can be called from the IClientService interface and does this: public void Save() { context.SaveChanges(); context.Dispose(); context = new MyContext(); } Is this safe, or even necessary, or am I good simply calling context.SaveChanges(); in Save()? Answer: You shouldn't need to call Dispose and new up another context in the Save method. SaveChanges should be good enough. I would recommend you make ClientService implement IDisposable and, when that class is disposed, dispose of the context as well. Update from a comment: to implement IDisposable on the IClientService interface: public interface IClientService : IDisposable This will now require a Dispose method on any class that implements IClientService. In that Dispose method you should call the context's Dispose method.
{ "domain": "codereview.stackexchange", "id": 10226, "tags": "c#, entity-framework, asp.net-mvc-4, interface" }
Homography valid for patch within the image but not the entire image?
Question: The homography I have is non-degenerate (det != 0), and generated from a valid planar pose. When I use it to warp the four corners of the image, it returns something like the following: 1 0 3 2 instead of something like 0 1 3 2 where 0 represents the top-left corner, 1 the top right, 2 the bottom left, and 3 the bottom right. It doesn't follow the clockwise order anymore, and it's twisted. The weird thing is, if I apply it to a local patch within the image, i.e. where the plane is, the returned result is valid. How can this happen? Shouldn't it always return a valid quadrangle? Answer: Are you sure that the points are all in front of the camera (z > 0)? If you apply a projection like $(x, y) = (x/z, y/z)$ to points of a 3D quadrilateral where vertices 0 and 1 have negative z, you could end up with a 2D figure like the one you described.
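To see the answer's point numerically, here is a small sketch (the coordinates below are made up for illustration): two corners of a planar quadrilateral are given negative depth, and the perspective divide mirrors them through the origin, so the projected corner order comes out twisted exactly as described.

```python
import numpy as np

# A square in 3-D whose two left vertices sit behind the camera (z < 0).
# The perspective divide (x/z, y/z) mirrors those two points through the
# origin, so the projected quadrilateral self-intersects: the corner order
# is no longer 0 1 / 2 3.
corners = np.array([
    [-1.0,  1.0, -0.5],   # 0: top-left,     behind the camera
    [ 1.0,  1.0,  2.0],   # 1: top-right,    in front
    [-1.0, -1.0, -0.5],   # 2: bottom-left,  behind the camera
    [ 1.0, -1.0,  2.0],   # 3: bottom-right, in front
])
projected = corners[:, :2] / corners[:, 2:3]
# The top-left corner now lands to the RIGHT of (and below) the top-right
# corner in the image plane, giving a "twisted" corner ordering.
```

Points on the plane patch itself all have z > 0, which is why warping only the local patch still gives a valid quadrilateral.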
{ "domain": "dsp.stackexchange", "id": 9923, "tags": "image-processing, computer-vision, camera-pose, homography" }
How to expand abbreviations in text during preprocessing?
Question: I'm doing preprocessing on English text data. I have some domain-specific abbreviations, for which I'm maintaining an internal dictionary with key-value pairs. The problem I'm facing is that the text has abbreviations in plural forms, with and without contractions, like: Mgr's = manager mgrs = manager mgr = manager All three refer to a manager. I'm able to capture the plural form with contractions using a regex (r"'s") and removing the 's', but in the case of no contractions I'm creating one more entry in the dictionary with the plural form of the abbreviation. I somehow feel this is duplication and not a clean approach. Is there any better solution to this problem? Any immediate help is much appreciated. Thank you. Answer: Do it in two steps: replace the ones with 's' first, then do the rest of them.
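A minimal Python sketch of the two-step idea (the dictionary contents and names here are illustrative): first strip the 's contraction, then match every abbreviation with an optional trailing s, so a single dictionary entry per abbreviation covers all three surface forms.

```python
import re

abbrev = {"mgr": "manager", "dept": "department"}  # one entry per abbreviation

def expand(text):
    """Two passes, per the answer: strip "'s" contractions first, then
    match each abbreviation with an optional plural 's', so Mgr's, mgrs
    and mgr all normalise through the same dictionary entry."""
    text = re.sub(r"'s\b", "", text)
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, abbrev)) + r")s?\b", re.IGNORECASE)
    return pattern.sub(lambda m: abbrev[m.group(1).lower()], text)
```

One caveat: the first pass strips every 's, which may be too aggressive on ordinary possessives; restricting it to dictionary keys (e.g. r"\b(mgr|dept)'s\b"), or folding both steps into one pattern with (?:'s|s)?, is the safer variant.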
{ "domain": "datascience.stackexchange", "id": 8205, "tags": "machine-learning, nlp, data-cleaning, preprocessing" }
Eclipse custom message unresolved inclusion
Question: Hello everybody, I know this might be a common question, but none of the solutions I have found worked. I can import my project (around 35 packages) and all the inclusions work perfectly. However, when it comes to the inclusion of messages, it shows an error saying that the inclusion of the message header is unresolved, even when the message is defined in the same package where it is used. Moreover, it builds the project without any problem. Of course I followed the steps in the wiki to set up Eclipse with ROS and catkin. Any clues as to what it could be? Thank you very much Andrea Originally posted by Mago Nick on ROS Answers with karma: 385 on 2015-04-29 Post score: 0 Answer: It was a compilation problem: there were some packages with errors, and the messages were not generated. To verify this, I looked in the devel/include folder of the workspace. Thank you very much anyway Andrea Originally posted by Mago Nick with karma: 385 on 2015-04-30 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 21579, "tags": "catkin, ros-indigo" }
Linear algorithm for off-line minimum problem
Question: I've been reading a paper, "A Linear Time Algorithm for Maximum Matchings in Convex, Bipartite Graphs" by Steiner and Yeomans. In this paper, at some point they solve an off-line minimum problem that is stated as follows: You are given an empty set $S$ and a sequence of $n$ $\textrm{INSERT}(i)$ and $m$ $\textrm{EXTRACT-MIN}$ calls. $\textrm{INSERT}(i)$ will insert a number no greater than $n$ into $S$ (I think we can even claim that each number is inserted exactly once). And $\textrm{EXTRACT-MIN}$ will find the minimum element of $S$, delete it from $S$, and return it. The problem is to determine the sequence of results of the $\textrm{EXTRACT-MIN}$ calls. The paper's authors don't provide an implementation for this problem; they claim it can be solved in $O(n)$ by referencing "Data Structures and Network Algorithms" by Tarjan. I haven't found any mention of the off-line minimum problem in this book, and all references I found on the internet point to a problem from "Introduction to Algorithms" by Cormen et al. But that problem is suggested to be solved using a disjoint-set data structure, which leads to $O(n\alpha(n))$ amortized complexity, which is clearly not $O(n)$ worst-case. So my question: is there really an $O(n)$ algorithm for this problem, or is there some mistake in the paper? Answer: The paper "A linear-time algorithm for a special case of disjoint set union" (1983) by Gabow and Tarjan gives an $O(m+n)$ time algorithm to process $m$ union and find operations on $n$ elements, under the condition that the union tree is given as well. The union tree is a rooted tree where the vertices initially are labelled with the individual elements, and the union operations replace two adjacent vertices by a single vertex with the set union as its label, contracting the edge. Note that in the algorithm from the exercise in Cormen et al. 
you mention (as well as the algorithm from the book referenced in the Gabow and Tarjan paper), the union operations are described by a union tree where each initial set has an edge to its predecessor and successor in the sequence of extract-min/insert operations. This means we can apply the algorithm from Gabow and Tarjan to perform the union-find operations in the offline-min algorithm, and obtain an algorithm that runs in linear time.
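For concreteness, here is the classic interval-union solution to the Cormen et al. exercise, sketched in Python (the names are mine). With plain path compression this is the $O(n\alpha(n))$ amortized version; the Gabow-Tarjan result applies precisely because the union tree here is just a path of extract-min blocks, known in advance.

```python
def offline_min(ops, n):
    """ops: e.g. ['I4', 'I8', 'E', ...] with n INSERTs of the values 1..n.
    Returns the sequence of EXTRACT-MIN results."""
    # block[i] = index of the first extract-min after i's insertion
    block, m = {}, 0
    for op in ops:
        if op == 'E':
            m += 1
        else:
            block[int(op[1:])] = m

    # live[j]: earliest not-yet-answered extract at or after block j
    # (a path-shaped union-find with path compression)
    live = list(range(m + 1))
    def find(j):
        while live[j] != j:
            live[j] = live[live[j]]
            j = live[j]
        return j

    answer = [None] * m
    for i in range(1, n + 1):      # consider values smallest-first
        j = find(block[i])
        if j < m:                  # value i is taken by extract-min j
            answer[j] = i
            live[j] = j + 1        # union block j into its successor
    return answer
```

Each value i is claimed by the earliest still-unanswered extract-min following its insertion; destroying a block just links it to the next one, which is the union-along-a-path structure the answer describes.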
{ "domain": "cs.stackexchange", "id": 21235, "tags": "algorithms, complexity-theory" }
Efficient vector-like polymorphic container which retains type information
Question: This is my attempt of implementing an efficient, cache-friendly, vector for polymorphic objects. From now on I will refer to "virtual functions" as functions which are dependent on an object's underlying dynamic type, though they might not be strictly marked "virtual". Memory model of polymorphic vector Rather than storing a vector of std::unique_ptr (or other smart pointer) which points to an object on the heap, I store polymorphic objects of the same dynamic type together in the same vector (i.e. I have a vector of Bases and a vector of Derived). A tuple stores each vector of each dynamic type. However, this has the downside that my polymorphic vector is unordered, so I store the vector index_map which stores the necessary information to represent the order. Also, I store a vector of pointers to the base class (named ptrs) for efficient calls to non-virtual member functions. Calling a virtual function at a given index First, I lookup in index_map the type number and index number of the object. I then create some sort of static constexpr v-table of function pointers (in the functionapply_func_to_tuple). All in all, this requires two levels of indirection, resulting in the same performance as a virtual function. /* Helper functions */ /* This set functions applies a function an element of a tuple given a runtime index */ template<typename R, int N, class T, class F> R apply_one(T& p, F& func) { static_assert(std::is_same<typename std::result_of<F(decltype(std::get<N>(p)))>::type, R>::value, "Wrong return type for polymorphic function"); return func(std::get<N>(p) ); } template<typename R, class T, class F, int... Is> R apply_func_to_tuple(T& p, int index, F& func, seq<Is...>) { using FT = R(T&, F&); /* This is the magic, a v-table is built on the spot here. */ static constexpr FT* arr[] = { &apply_one<R, Is, T, F>... 
}; return arr[index](p, func); } template<typename R, class T, class F> R apply_func_to_tuple(T& p, int index, F&& func) { return apply_func_to_tuple<R>(p, index, func, gen_seq<std::tuple_size<T>::value>{}); } /* Helper class to find the index of a type from a tuple type list at compile - time */ template <class T, class Tuple> struct Index; template <class T, class... Types> struct Index<T, std::tuple<T, Types...>> { static consteval int getValue() { return 0; } }; template <class T, class U, class... Types> struct Index<T, std::tuple<U, Types...>> { static consteval int getValue() { return 1 + Index<T, std::tuple<Types...>>::getValue(); } //static const std::size_t value = 1 + Index<T, std::tuple<Types...>>::value; }; /* Functions to calculate log at compile time, needed later */ constexpr unsigned floorlog2(unsigned x) { return x == 1 ? 0 : 1+floorlog2(x >> 1); } constexpr unsigned ceillog2(unsigned x) { return x == 1 ? 0 : floorlog2(x - 1) + 1; } /* The element class of my "order vector" , I use a bitfield to pack as much data as I can in 8 bytes */ template <typename Base, typename ...Ts> struct IndexElement { unsigned long type : ceillog2(sizeof...(Ts) + 1); unsigned long index: 64 - ceillog2(sizeof...(Ts) + 1); }; /* Start the class */ template <typename Base, typename ...Ts> struct PolymorphicVector { public: /* Vector which represents the order of the objects */ std::vector<IndexElement<Base, Ts...>> index_map; /* This contains the actual objects, as you can see they are stored without order, which is why index_map is needed to represent the order */ std::tuple<std::vector<Base>, std::vector<Ts>...> vec; /* This vector seems a little bit redundant, but is used to apply efficiently a non-virtual method at a given index */ std::vector<Base*> ptrs; public: /* Apply a "virtual" functor at a given index */ template <typename Functor> decltype(auto) apply(int index, Functor&& fn) { auto true_index = index_map[index].index; return apply_func_to_tuple<typename 
std::result_of<Functor(Base&)>::type> (vec , index_map[index].type , [true_index, &fn] (auto& vec) {return fn(vec[true_index]);}); } /* Apply a virtual functor to all the elements of my vector in order */ template <typename Functor> void ordered_apply(Functor&& fn) { for (int i = 0; i < index_map.size(); i++) { apply(i, std::move(fn)); } } /* Apply a non-virtual method at an index */ template <typename Functor> inline decltype(auto) apply_method(unsigned long index, Functor&& fn) { return fn(ptrs[index]); } private: /* Helper methods */ template <typename Functor, typename Head, typename... Tail> void unordered_apply(Functor& fn) { for (int i = 0; i < std::get<std::vector<Head>>(vec).size(); i ++) { fn(std::get<std::vector<Head>>(vec)[i]); } unordered_apply<Functor, Tail...>(fn); } template <typename Functor> void unordered_apply(Functor& fn) { } public: /* Apply a functor to all the elements in my vector without committing to a specific order. */ template <typename Functor> void unordered_apply(Functor&& fn) { unordered_apply<Functor, Base, Ts...>(fn); } /* Add an element to the vector */ template <typename Derived> PolymorphicVector& append(const Derived& derived) { // Check if reallocation is necessary if (std::get<std::vector<Derived>>(vec).capacity() == std::get<std::vector<Derived>>(vec).size()) { Derived* start = &std::get<std::vector<Derived>>(vec)[0]; std::get<std::vector<Derived>>(vec).push_back(derived); auto offset = &std::get<std::vector<Derived>>(vec)[0] - start; // Reinitialise invalidated pointers. 
for (int i = 0; i < index_map.size(); i ++) { ptrs[i] += (index_map[i].type == Index<Derived, std::tuple<Base, Ts...>>::getValue()) * offset; } index_map.push_back(IndexElement<Base, Ts...>{Index<Derived, std::tuple<Base, Ts...>>::getValue(), std::get<std::vector<Derived>>(vec).size() - 1}); ptrs.push_back(static_cast<Base*>(&std::get<std::vector<Derived>>(vec).back())); return *this; } std::get<std::vector<Derived>>(vec).push_back(derived); ptrs.push_back(static_cast<Base*>(&std::get<std::vector<Derived>>(vec).back())); index_map.push_back(IndexElement<Base, Ts...>{Index<Derived, std::tuple<Base, Ts...>>::getValue(), std::get<std::vector<Derived>>(vec).size() - 1}); return *this; } template <typename Derived> std::vector<Derived>& getVectorOf() { return std::get<Derived>(vec); } template <typename Derived> inline bool checkIndexIs(unsigned int index) { return index == Index<Derived, std::tuple<Base, Ts...>>::getValue(); } private: template <typename Head, typename First, typename... Tail> void reserve(unsigned long num) { std::get<std::vector<Head>>(vec).reserve(num); reserve<First, Tail...>(num); } template <typename Last> void reserve(unsigned long num) { std::get<std::vector<Last>>(vec).reserve(num); } public: // Reserve void reserve(unsigned long num) { reserve<Base, Ts...>(num); } }; Example use case: struct Base { virtual int a_method(int x) { z += x; std::cout << z; } int z; }; struct Derived: public Base { virtual int a_method(int x) { z*=x; std::cout << z; } }; int main() { PolymorphicVector<Base, Derived> x; x.append(Derived{5}); x.append(Base{10}); x.apply(0, [] (auto& base) { return base.a_method(5);}); // 25 return 0; } Advantages of this implementation over a vector pointer-to-base Huge performance increase when applying a functor to all the elements of my vector without committing to a specific order (4 times faster for a non-virtual method, 20 times faster for a virtual method). 
This is due to cache-friendliness and not actually having any dynamic dispatch in the virtual case. Retains all the type information (i.e. we can retrieve all elements of a certain type, check if an object at a certain index has a certain dynamic type) Small performance increase when adding objects to the vector (heap memory does not need to be found for each object, thereby decreasing heap fragmentation) Does not require virtual methods to achieve polymorphic behaviour (my implementation allows the use of virtual free function/ functors) Same performance for applying methods/ virtual methods at a given index. Drawbacks Must specify all the polymorphic types you want to store (as opposed to a vector of std::unique_ptr<Base> where you can store all the types derived from a single type) My implementation uses a less user-friendly "visitor" pattern to modify its elements. Small overhead when the vector undergoes reallocation Each element is not polymorphic in itself, the vector is polymorphic. Questions for code review: Is the code well defined and portable (no UB) ? Is my implementation as performant as it could be? Is my implementation as memory-efficient as it could be? Can my implementation be made a bit more user-friendly? Answer: A short analysis of some things that are quite clear to me: Rvalues and Forwarding references At first as already mentioned within the comments, you should refer to a clean move-semantic/forwarding scheme. For universal/forwarding references, only refer to std::forward. If no move semantic is effectively involved for all cases, do not refer to it: neither explicit rvalue usage nor forwarding reference usage. Otherwise, it's quite hard to follow data ownership flows for instance, at latest when your code expands later on. 
Moving data for the last iteration step of a loop: template <typename Functor> void ordered_apply(Functor&& fn) { for (int i = 0; i < index_map.size(); i++) { apply(i, std::move(fn)); } } As already mentioned, this is wrong as soon as the functors from calling side are rvalues effectively. You can simply correct it "inline" via apply(i, i == index_map.size() - 1 ? std::forward<Functor>(fn) : fn); The correct behavior here is ensured by the standard, but requires a further temporary in between. Therefore I prefer it more explicitly: for (int i = 0; i < index_map.size(); i++) { if (i == index_map.size() - 1) apply(i, std::forward<Functor>(fn)); else apply(i, fn); } But since this all is a no-op for the const reference argument case, the better overall design approach is to force a clean rvalue vs. lvalue separation via an internal helper struct and write the operations there in a very explicit way: template <typename Functor> struct CategorizedApplyForwarder { // implement doApply(Functor&& fnc); // implement doApply(const Functor& fnc); }; The first overload would then allow the move for the last element, the second would simply call the final apply() function with your arguments by const reference semantics. But since apply() doesn't really take ownership, template <typename Functor> decltype(auto) apply(int index, Functor&& fn) { auto true_index = index_map[index].index; return apply_func_to_tuple<typename std::result_of<Functor(Base&)>::type> (vec , index_map[index].type , [true_index, &fn] (auto& vec) {return fn(vec[true_index]);}); } the move/forwarding-semantics are no-ops. Being confident about the semantics can prevent you from a lot of trouble in doubt. The concrete questions Is the code well defined and portable (no UB) ? Except the rvalue issues from above, I'd say yes so far. In detail, you should ensure that your subscript operator usage (index-based) is range-safe always. I didn't analyze that deep in detail here. 
What you should reconsider is the general question about exception safety. As far as I know, this is almost one of the most important questions why the standard avoids excessively optimized containers in general. Is my implementation as performant as it could be? That question is not specific enough :) If you mean performance in terms of what the actual code is trying to achieve (line-wise granular), I'd say there are not obvious issues here. Maybe this line could be further improved via emplace_back usage: index_map.push_back(IndexElement<Base, Ts...>{Index<Derived, std::tuple<Base, Ts...>>::getValue(), std::get<std::vector<Derived>>(vec).size() - 1}); In terms of general performance behavior, you should distinguish at first, what the main purposes of your vector are in detail and how the general proportionality between the advantages and drawbacks should be. A deeper analysis here can become quite intensive in doubt. Accidental vs. theoretical complexity analysis is an important keyword here, as cache-line behavior is in doubt. Is my implementation as memory-efficient as it could be? Similar to the performance question, the devil is in the details. But from your internals only seen in general, I'd say you're quite fine since you refer to contiguous storage of "direct objects" only. Can my implementation be made a bit more user-friendly? I'd hide the internals of PolymorphicVector as far as possible. Make them private? I also miss a public clear() function. Compared to the common visitor approach, I really have to say that I prefer the common visitor on a std::vector of variants in terms of design principles and explicit usage. But I know that the common visitor approach is not as efficient as one might hope it is.
{ "domain": "codereview.stackexchange", "id": 40768, "tags": "c++, performance, cache, polymorphism, c++20" }
Can Montonen-Olive duality be used for studying $\mathcal{N}=4$ SYM at strong coupling? If not, why not?
Question: It's all in the title. To be more complete, the following is stated in the preamble of the Wikipedia article about S-duality: One of the earliest known examples of S-duality in quantum field theory is Montonen–Olive duality which relates two versions of a quantum field theory called N = 4 supersymmetric Yang–Mills theory. Is it possible to do the transformation $g^2 \to 1/g^2$ on $\mathcal{N}=4$ SYM, compute correlators using perturbation theory, and transform back to the original action? I would say yes, but it does not seem that many people are taking that route (or maybe I just have not seen it yet). Answer: What is Montonen-Olive Duality? It's a little more subtle than just taking the reciprocal of the coupling constant $g\to1/g$. To understand Montonen-Olive Duality, it pays to consider its abelian cousin, the electric-magnetic duality. Note that the term "electric-magnetic duality" is often used as a catch-all for all dualities in supersymmetry gauge theories that resemble it, but here we focus on the motivating idea behind all of them: Maxwell's equations. The $\mathbf E{-}\mathbf B$ formulation of Maxwell's equations in vacuum make it clear that $(\mathbf E, \mathbf B)\to(\mathbf B, -\mathbf E)$ is a symmetry of the equations. In the Lorentz-covariant formulation, the equations $\mathrm dF=0$ and $\mathrm d\star F=0$ are invariant under $F\to\star F$. When sources for the field strength and dual strength forms are added in, we also require these sources to transform into each other accordingly to preserve the duality. By analysing the Wick-rotated path integral over the standard abelian gauge action plus a topological term in $\mathbb R^4$, we can conclude that the electric-magnetic duality acts via $$\hat\chi:\tau\to\frac{-1}{\tau} \\\tau\equiv\frac\theta{2\pi}+\frac{4\pi i}{e^2} $$ where $\theta$ is the prefactor of the topological term. 
Finally, given some nice conditions on the manifold, the theory will also be invariant under $\hat\zeta:\tau\to\tau+1$. $\hat\chi$ and $\hat\zeta$ together generate the group $\mathrm{PSL}(2,\mathbb Z)=\mathrm{SL}(2,\mathbb Z)/\mathbb Z_2\subset \mathrm{Aut}(\mathbb C\mathrm P^1)$. It's easy to see that, at least at $\theta = 0$, $\hat\chi$ flips the coupling strength, sending the theory at strong coupling to one at weak coupling and vice versa. Montonen-Olive duality in its modern usage is the generalisation of this duality from $\mathrm{U}(1)$ theory to non-abelian gauge theory. We perform a similar analysis on the Euclideanised path integral (remembering to sum over the isomorphism classes of the principal bundle defining the theory). However, due to the presence of the connection one-form in the Yang-Mills equations of motion, naïvely copying the abelian case fails. So one must resort to a rather involved analysis of magnetic sources [2] that I will not go into here - but subsequently, Montonen and Olive conjectured that A Yang-Mills theory with gauge group $G$ and (complex) coupling $\tau$ is dual to a different Yang-Mills theory with gauge group $^LG$ and coupling $-\frac{1}{k\tau}$, where $^LG$ denotes the Langlands dual group of $G$ and $k$ is a constant depending on certain properties of the Lie algebra $\mathfrak g$ of $G$ Additionally, it is invariant under $\tau\to\tau + n$ where $n\in\mathbb Z$ depends on the properties of both the manifold and the gauge group under consideration - so in total there is symmetry under some $\mathfrak N\subset\mathrm{SL}(2,\mathbb R)$. Here I'll take the SQFT viewpoint on "non-perturbatively isomorphic theories" under MO duality, although it also famously descends from S-duality in type II string theory, particularly in D3 background solutions. So what can you do with it? 
Remember that it's still a conjecture, so there are reasons for and against its validity in different regimes (see [1] for a good review) but nothing 100% concrete. However, you can press on and analyse its consequences nonetheless, aided by the force of the deep, far-reaching geometric Langlands program. There is actually an issue involving renormalisation during the derivation of the general MO duality above, but we are able to mostly bypass this issue in maximally supersymmetric $\mathcal N = 4$ SYM since it is superconformal (here is a super-cool argument for why the coupling doesn't run). Here are some examples of Langlands dual pairs $G \to {}^LG$: you'll see that they aren't too exotic, and so it seems we do stand a chance of calculating correlators at a well-understood coupling strength and transforming them to the dual coupling. $\mathrm{SU}(n) \to \mathrm{SU}(n)/\mathbb Z_n$; $\mathrm{SO}(2n) \to \mathrm{SO}(2n)$; $\mathrm{Sp}(2n) \to \mathrm{SO}(2n+1)$; $G_2 \to G_2$; $F_4 \to F_4$; $E_8 \to E_8$. (Again, this is not the full story, since the duality can e.g. act non-trivially on the Higgs sector - this is the very surgical-sounding "elliptic endoscopy" [3].) This is a key feature of the Langlands dual: it sends reductive groups to reductive groups, and a reductive Lie algebra is precisely the data that is required for a well-defined Yang-Mills theory. 
This means that the partition functions are isomorphic in the dual theories, and a fortiori that correlators of operators should in principle agree with the correlators of some "dual operators" in the dual theory - even though the gauge-invariant observables themselves (for example, Wilson loops) transform non-trivially: $$ \langle \hat O_{(1)}\hat O_{(2)}...\hat O_{(n)}\rangle\big|_{\{\mathcal M, G, \tau\}}=\langle\tilde O_{(1)}\tilde O_{(2)}...\tilde O_{(n)}\rangle\big|_{\{\mathcal M, ^LG, -1/k\tau\}} $$ This means that yes, provided the Montonen-Olive conjecture is valid, we can formulate some correlation function and flip to the dual theory where it is easier to evaluate. For the popular case of the $\mathrm{SU}(n)$ theory, it is essentially self-dual on 4-dimensional manifolds with trivial second cohomology group, with the partition function being mapped to itself up to some topological prefactors reflecting additional gravitational couplings. Unfortunately for non-CP-violating SYM, and I quote [1] here: [T]he conjecture is untestable unless we get a better handle at strongly coupled theories—of course, this also means that it cannot be disproved! Nevertheless, modern evidence in favour of the MO duality is being procured from techniques in string theory [4], topologically twisted $\mathcal N = 4$ SYM (since this is where the relevance to geometric Langlands becomes evident) and, at a high level, matching OPEs of electric and magnetic sources in SYM (which is of course non-perturbative). References: [1] J. Figueroa-O'Farrill, Electromagnetic Duality for Children (though, contrary to the name, it is a very comprehensive reference on these matters) [2] Goddard, Nuyts, Olive, Gauge Theories and Magnetic Charge [3] Argyres, Kapustin, Seiberg, On S-duality for Non-Simply-Laced Gauge Groups [4] Vafa, Geometric Origin of Montonen-Olive Duality
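To make the strong-weak flip explicit at $\theta = 0$ (this is just the one-line computation implicit in the discussion above, not a result from the cited references):

```latex
\tau = \frac{4\pi i}{e^2}
\;\xrightarrow{\;\hat\chi\;}\;
\tau' = -\frac{1}{\tau} = \frac{i e^2}{4\pi} = \frac{4\pi i}{(4\pi)^2/e^2}
\quad\Longrightarrow\quad
e^2 \;\longmapsto\; \frac{(4\pi)^2}{e^2},
```

so a coupling $e \gg 1$ is mapped to $e' = 4\pi/e \ll 1$: exactly the $g^2 \to 1/g^2$-type flip the question asks about, up to the factor of $(4\pi)^2$ and, in the non-abelian case, the constant $k$ and the change of gauge group $G \to {}^LG$.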
{ "domain": "physics.stackexchange", "id": 77621, "tags": "supersymmetry, yang-mills, strong-force, duality" }
Can a drop of water be set in rotational motion by rotating mass around it?
Question: We are in empty space and see a spherical drop of water. Around the drop we have placed a massive shell with uniform density. The drop is positioned at the center. Then we set the shell in rotational motion (by small rockets on the side). Will the drop start rotating (slowly)? Will frame drag cause a torque? The Newtonian idea of gravity predicts a zero gravity field inside the sphere. General relativity predicts frame dragging. The mass-energy-momentum tensor includes momentum and that's what we see in this case. So, will it rotate? Will the shell and the droplet be eventually rotating in tandem? Of course we must stop the acceleration before a black hole develops... Can we say the rotating sphere induces torsion? Answer: Apparently Thirring computed this in 1918: Phys. Z. 19, 33 (1918) (in German). The central (corrected) result for the acceleration of a test particle inside a slowly rotating mass shell (of mass $M$, radius $R$, and angular velocity $\vec{\omega}$) is given by $$ \vec{a}=−2d_1(\vec{\omega} \times\vec{v} )−d_2[\vec{\omega} \times(\vec{\omega} \times\vec{r})+2(\vec{\omega} \cdot\vec{r})\vec{\omega} ], $$ with the constants $d_1 = 4MG/3Rc^2$ and $d_2 = 4MG/15Rc^2$ for the Coriolis and centrifugal contributions respectively, according to H. Pfister (2005) On the history of the so-called Lense-Thirring effect. This expression is valid only close to the center of the sphere: $|\vec{r}|\ll R$. A macroscopic fluid drop in the center of the mass shell should start/be differentially rotating. So yes, the shell would apply torque to the droplet, taking into account effects of general relativity (namely the dragging of inertial frames / the Lense–Thirring effect).
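The acceleration formula above is straightforward to evaluate numerically. Below is a small sketch (the shell parameters are made-up illustrative values, not a physically realistic configuration) implementing it with NumPy; for a static particle ($\vec v = 0$) off-axis, the centrifugal-like term pushes radially outward:

```python
import numpy as np

# Thirring acceleration inside a slowly rotating shell (cf. Pfister 2005):
#   a = -2*d1*(w x v) - d2*[w x (w x r) + 2*(w . r)*w]
# with d1 = 4*M*G/(3*R*c^2) and d2 = 4*M*G/(15*R*c^2).

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

def thirring_accel(M, R, w, r, v):
    """Acceleration of a test particle at r (|r| << R) with velocity v."""
    d1 = 4 * M * G / (3 * R * c**2)
    d2 = 4 * M * G / (15 * R * c**2)
    coriolis = -2 * d1 * np.cross(w, v)
    centrifugal = -d2 * (np.cross(w, np.cross(w, r)) + 2 * np.dot(w, r) * w)
    return coriolis + centrifugal

# Hypothetical shell: 1e30 kg, radius 1e6 m, spinning about z at 0.1 rad/s
M, R = 1e30, 1e6
w = np.array([0.0, 0.0, 0.1])
r = np.array([10.0, 0.0, 0.0])   # particle near the centre, off-axis
v = np.zeros(3)

a = thirring_accel(M, R, w, r, v)
# For v = 0 and r perpendicular to w, this reduces to a = +d2*|w|^2*r,
# i.e. a small radially outward acceleration.
print(a)
```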
{ "domain": "physics.stackexchange", "id": 88664, "tags": "general-relativity, newtonian-gravity, rotational-dynamics, angular-momentum, frame-dragging" }
Can we determine an absolute frame of reference taking into account general relativity?
Question: Given that acceleration induces measurable physical effects, would it be correct to say that there should be an absolute inertial frame of reference? I know that one cannot distinguish a priori between acceleration and gravitational effects, but there should be a determined distribution of mass in the universe, and assuming it is known, its effects should be able to be subtracted to deduce 'absolute' acceleration. Is this incorrect? Answer: The problem is that to determine the distribution of mass in the universe you need to choose a coordinate system that you're going to be using for measuring the positions of all those masses. The trouble is that you are free to choose whatever coordinate system you want to make this measurement. There is no absolute coordinate system for measuring the mass distribution. Your choice of coordinate system will determine how much of any acceleration you measure is inertial and how much is gravitational. The four-acceleration is given by: $$ A^\alpha = \frac{\mathrm d^2x^\alpha}{\mathrm d\tau^2} + \Gamma^\alpha{}_{\mu\nu}U^\mu U^\nu $$ and speaking rather loosely the first term on the right is the inertial acceleration and the second term is the gravitational acceleration. The problem is that while the four-acceleration is a tensor the two terms on the right are not. It is always possible to choose a coordinate system that makes the inertial acceleration zero - in fact this is simply the rest frame of the accelerating object. Likewise it's always possible to choose coordinates that make the Christoffel symbols, $\Gamma^\alpha{}_{\mu\nu}$, equal to zero - these are the normal coordinates. This is the equivalence principle in action. While the four-acceleration is a tensor, and therefore a coordinate independent object, the two terms on the right can be interchanged by a choice of coordinates making the acceleration look purely inertial, purely gravitational, or some combination of the two just by changing coordinates. 
Since there is no absolute coordinate system for measuring the mass distribution there is no absolute coordinate system for measuring the inertial acceleration. The two types of acceleration are fundamentally indistinguishable.
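The coordinate dependence of the "gravitational" term can be made concrete in a toy computation: even flat space acquires non-zero Christoffel symbols in curvilinear coordinates. A short symbolic sketch (using SymPy; the 2D polar-coordinate example is my own choice, not taken from the answer) computing $\Gamma^\alpha{}_{\mu\nu}$ directly from the metric:

```python
import sympy as sp

# Flat 2D Euclidean metric in polar coordinates (r, theta):
#   ds^2 = dr^2 + r^2 dtheta^2.
# The space is flat, yet the Christoffel symbols are non-zero -- purely an
# artifact of the coordinate choice, mirroring how the "gravitational" term
# in the four-acceleration can be created or removed by changing coordinates.
r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d] *
        (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
        for d in range(2)))

print(christoffel(0, 1, 1))  # Gamma^r_{theta theta} = -r
print(christoffel(1, 0, 1))  # Gamma^theta_{r theta} = 1/r
```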
{ "domain": "physics.stackexchange", "id": 53979, "tags": "general-relativity, gravity, reference-frames, acceleration, machs-principle" }
Mathematical formulation of Support Vector Machines?
Question: I'm trying to learn the maths behind SVMs (hard margin) but, due to the different forms of the mathematical formulations, I'm a bit confused. Assume we have two sets of points $\text{(i.e. positives, negatives)}$, one on each side of a hyperplane $\pi$. So the equation of the margin-maximizing plane $\pi$ can be written as, $$\pi:\;W^TX+b = 0$$ If $y\in \{+1,-1\}$ then, $$\pi^+:\; W^TX + b=+1$$ $$\pi^-:\; W^TX + b=-1$$ Here $\pi^+$ and $\pi^-$ are parallel to plane $\pi$ and they are also parallel to each other. Now the objective would be to find a hyperplane $\pi$ which maximizes the distance between $\pi^+$ and $\pi^-$. Here $\pi^+$ and $\pi^-$ are the hyperplanes passing through the positive and negative support vectors respectively. From the Wikipedia article on SVMs I've found that the distance/margin between $\pi^+$ and $\pi^-$ can be written as, $$\hookrightarrow\frac{2}{||w||}$$ Now if I put everything together, this is the constrained optimization problem we want to solve, $$\text{find}\;w_*,b_* = \underbrace{argmax}_{w,b}\frac{2}{||w||} \rightarrow\text{margin}$$ $$\hookrightarrow \text{s.t}\;\;y_i(w^Tx_i\;+\;b)\;\ge 1\;\;\;\forall\;x_i$$ Before proceeding to my doubts, please do confirm if my understanding above is correct? If you find any mistakes please do correct me. How does one derive the margin between $\pi^+$ and $\pi^-$ to be $\frac{2}{||w||}$? I did find a similar question asked here but I couldn't understand the formulations used there. If possible can anyone explain it in the formulation I used above? How can $y_i(w^Tx_i+b)\ge1\;\;\forall\;x_i$? Answer: Your understanding is correct. Deriving the margin to be $\frac{2}{|w|}$: we know that $w \cdot x +b = 1$. If we move from a point $z$ on $w \cdot x +b = 1$ to the line $w \cdot x +b = 0$, we land at a point $\lambda$.
The distance we have covered between the two lines $w \cdot x +b = 1$ and $w \cdot x +b = 0$ is the margin between them, which we call $\gamma$. To calculate this margin, note that we have moved from $z$, in the direction opposite to $w$, to the point $\lambda$: $$\lambda = z - \gamma \cdot \frac{w}{|w|}$$ (we have moved in the opposite direction of $w$; we only want the direction, so we normalize $w$ to the unit vector $\frac{w}{|w|}$). Since the point $\lambda$ lies on the decision boundary, it must satisfy the line $w \cdot x + b = 0$. Hence we substitute it into this line in place of $x$: $$w \cdot x + b = 0$$ $$w \cdot (z - \gamma \cdot \frac{w}{|w|}) + b = 0$$ $$w \cdot z + b - \gamma \cdot \frac{w \cdot w}{|w|} = 0$$ $$w \cdot z + b = \gamma \cdot \frac{w \cdot w}{|w|}$$ We know that $w \cdot z +b = 1$ ($z$ is a point on $w \cdot x +b = 1$), so $$1 = \gamma \cdot \frac{w \cdot w}{|w|}$$ $$\gamma= \frac{|w|}{w \cdot w}$$ We also know that $w \cdot w = |w|^2$, hence: $$\gamma= \frac{1}{|w|}$$ Why is there a 2 in your formula instead of 1? Because I have calculated the margin between the middle line and the upper one, not the whole margin. How can $y_i(w^Tx_i+b)\ge1\;\;\forall\;x_i$? We want to classify the points in the +1 part as +1 and the points in the -1 part as -1. Since $(w^Tx_i+b)$ is the predicted value and $y_i$ is the actual value for each point, if a point is classified correctly then the predicted and actual values have the same sign, so their product $y_i(w^Tx_i+b)$ should be positive (the condition >= 0 is substituted by >= 1 because it is a stronger condition). The transpose is there in order to be able to calculate the dot product.
I just wanted to show the logic of the dot product, hence I didn't write the transpose everywhere. For calculating the total distance between the lines $w \cdot x + b = -1$ and $w \cdot x + b = 1$: either you can multiply the calculated margin by 2, or, if you want to find it directly, you can consider a point $\alpha$ on the line $w \cdot x + b = -1$. We know that the distance between these two lines is twice the value of $\gamma$, hence if we move from the point $z$ to $\alpha$, we arrive at: $$\alpha = z - 2 \cdot \gamma \cdot \frac{w}{|w|}$$ and the total margin (the length passed) can be calculated from here. Derived from the ML course of UCSD by Prof. Sanjoy Dasgupta.
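As a numerical sanity check of the $1/\|w\|$ result, here is a small sketch (my own toy data; scikit-learn's SVC with a very large C approximates a hard margin) verifying that the functional margins satisfy $y_i(w^Tx_i+b)\ge 1$ with equality on the support vectors, and that the half-margin equals $1/\|w\|$:

```python
import numpy as np
from sklearn.svm import SVC

# Toy separable data: two classes separated by the line x1 = 1.
# The maximum-margin separator is w = (1, 0), b = -1, so ||w|| = 1 and the
# half-margin 1/||w|| should come out as 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6)   # large C ~ hard margin
clf.fit(X, y)

w = clf.coef_[0]
b = clf.intercept_[0]

half_margin = 1.0 / np.linalg.norm(w)
functional_margins = y * (X @ w + b)   # all >= 1, equality on support vectors

print(half_margin)               # ~1.0
print(functional_margins.min())  # ~1.0
```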
{ "domain": "datascience.stackexchange", "id": 5613, "tags": "machine-learning, svm, optimization, linear-algebra" }
Autoencoder and anomaly detection task
Question: I'm trying to create an autoencoder for an anomaly detection task, but I'm noticing that even though it performs very well on the training set, it stops reconstructing half of the test set. I tried with more than 10 models (LSTM, ConvAE, ConvLSTM) and all of them fail to reconstruct the time series at the same point. These are the performances on the training set. The blue part is the original time series and the red one is the time series reconstructed by the AE. These are the performances on the test set. I don't understand why all the models stop performing from that point. Could that mean that there are anomalies in that part? EDIT: I'm updating the question with some details about my dataset and my code: I have a dataset with 30 devices, and for each one I have about 9000 values. The dataset is structured as follows:
device1 device2 device3 .... device30
0.20 0.35 0.12 0.56
1.20 2.10 5.75 0.16
3.20 9.21 1.94 5.12
5.20 4.32 0.42 9.56
.... .... .... ....
7.20 6.21 0.20 -9.56
Since I'm following this guide, I started by creating a sequence method to prepare my data for the Conv1D layer:
TIME_STEPS = 10
# Generate training sequences for use in the model.
def create_sequences(values, time_steps=TIME_STEPS):
    output = []
    for i in range(len(values) - time_steps):
        output.append(values[i: (i + time_steps)])
    return np.stack(output)

This is where I create the sequences and normalize the dataset:

# split the train/val/test set
n_features = dataset_sequences.shape[1]
X_train = dataset_sequences[0:3000, :]
X_val = dataset_sequences[3000:6000, :]
X_test = dataset_sequences[6000:9000, :]

# normalize the data
train_mean = X_train.mean()
train_std = X_train.std()
X_train = (X_train - train_mean) / train_std
X_val = (X_val - train_mean) / train_std
X_test = (X_test - train_mean) / train_std

Then, I feed my X_train with shape (3000, 10, 30) to my Conv1D autoencoder:

model = tf.keras.Sequential(
    [
        tf.keras.layers.Input(shape=(X_train.shape[1], X_train.shape[2])),
        tf.keras.layers.Conv1D(filters=64, kernel_size=5, padding="same", strides=1, activation="relu"),
        tf.keras.layers.Dropout(rate=0.2),
        tf.keras.layers.Conv1D(filters=32, kernel_size=5, padding="same", strides=1, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1DTranspose(filters=32, kernel_size=5, padding="same", strides=1, activation="relu"),
        tf.keras.layers.Dropout(rate=0.2),
        tf.keras.layers.Conv1DTranspose(filters=64, kernel_size=5, padding="same", strides=1, activation="relu"),
        tf.keras.layers.UpSampling1D(size=2),
        tf.keras.layers.Conv1DTranspose(filters=30, kernel_size=5, padding="same"),
    ]
)

Answer: There are no errors in my code; instead, I found out that when you work with anomaly detection tasks and get a reconstruction which is not like the original, you have just found the anomaly in the dataset.
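For completeness, the usual way to turn reconstruction quality into an anomaly decision is to threshold the per-window reconstruction error. A minimal NumPy sketch of that logic (entirely synthetic signal and a hypothetical "reconstruction", no model training; the mean + 3*std threshold is a common but arbitrary choice):

```python
import numpy as np

# Sketch: flag windows whose reconstruction error exceeds a threshold
# calibrated on a region assumed to be anomaly-free.
rng = np.random.default_rng(0)

signal = np.sin(np.linspace(0, 20 * np.pi, 1000))
reconstruction = signal + rng.normal(0, 0.01, size=signal.shape)
reconstruction[600:650] = 0.0        # pretend the AE fails on this segment

window = 10
errors = np.array([
    np.mean(np.abs(signal[i:i + window] - reconstruction[i:i + window]))
    for i in range(len(signal) - window)
])

# Calibrate the threshold on the start of the series (assumed clean).
threshold = errors[:500].mean() + 3 * errors[:500].std()
anomalous_windows = np.where(errors > threshold)[0]

print(anomalous_windows.min(), anomalous_windows.max())
```

The flagged window indices land on (and around) the corrupted segment, which is exactly the "the model stops reconstructing here" signature described above.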
{ "domain": "datascience.stackexchange", "id": 9326, "tags": "time-series, autoencoder, anomaly-detection, overfitting, matplotlib" }
Oscillating around the saddle point in gradient descent?
Question: I was reading a blog post that talked about the problem of the saddle point in training. In the post, it says that if the loss function is flatter in the direction of x (a local minimum there) compared to y at the saddle point, gradient descent will oscillate to and fro along the y direction. This gives an illusion of converging to a minimum. Why is this? Wouldn't it continue down in the y direction and hence escape the saddle point? Link to post: https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/ Please go to Challenges with Gradient Descent #2: Saddle Points. Answer: It's important to note that in a pragmatic instance of ML on a "real dataset", you likely wouldn't have a "strict" saddle point with precisely zero gradient. Your error surface won't be "smooth", so even though you would have something that resembles a saddle point, what you would obtain would in fact be a small region with near-zero gradient. So let's say you are in a region with near-zero gradient. Assuming that the gradient in this area is centred at 0 with small Gaussian-distributed noise (thus gradient = small Gaussian noise), you can then see that the algorithm can't quite escape the region (or at least, will spend a lot of time there), since 1. Gaussian random walks more or less stay in place (except over long timescales), and 2. small gradients mean there is no obvious direction in which to leave the region. In any case, SGD more or less solves this issue, and its usage is standard practice for reasons beyond this problem.
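A tiny numerical illustration of the "illusion of convergence" (my own toy surface, $f(x,y)=x^2-y^2$, with the saddle at the origin): starting near the saddle with a minuscule $y$-offset, plain gradient descent looks fully converged for many iterations before it finally escapes along the $y$ direction.

```python
# Gradient descent on f(x, y) = x^2 - y^2, a saddle at (0, 0).
# grad f = (2x, -2y); each step multiplies x by (1 - 2*lr) and y by
# (1 + 2*lr), so x collapses quickly while the tiny y component only
# grows geometrically -- the run looks converged long before it escapes.

def gd(steps, x=1.0, y=1e-9, lr=0.1):
    for _ in range(steps):
        gx, gy = 2 * x, -2 * y
        x -= lr * gx
        y -= lr * gy
    return x, y

x50, y50 = gd(50)     # looks fully converged: both coordinates tiny
x300, y300 = gd(300)  # ...but eventually the y direction blows up

print(abs(x50), abs(y50))   # both well below 1e-4
print(abs(y300))            # far above 1
```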
{ "domain": "ai.stackexchange", "id": 1767, "tags": "optimization, gradient-descent" }
Markov algorithm for words of the form $ww$, $w\in \{a,b\}^*$
Question: I need to construct a Markov algorithm as an ordered set of rules which will recognize the language $L = \{ww \mid w \in \{a,b\}^*\}$ and give $$\begin{cases} Y, \: \text{if given word is in L} \\ N, \: otherwise. \end{cases}$$ Is there a solution to this problem? UPD. Omitting the formalities, a Markov algorithm is an ordered set of rules of the form $$ \begin{cases} u_1 \to v_1\\ \vdots \\ u_n \to v_n, \end{cases} $$ where $u, v \in \Sigma^*$. A Markov algorithm executes the first applicable rule in this list, so order is important. For example, the following set is a solution of the same problem for the language $L = \{ww^R \mid w \in \{a,b\}^*\}$: $$ \begin{cases} Ca \to aC \\ Cb \to bC \\ CC \to T \\ a \to AC \\ b \to BC \\ CT \to N [1][2]\\ AA' \to A'A\\ BA' \to A'B\\ A'A \to e, \text{$e$ is the empty word}\\ A'B \to N \\ AT \to A'[3]\\ BB' \to B'B\\ AB' \to B'A\\ B'B \to e, \text{$e$ is the empty word}\\ B'A \to N \\ BT \to B'[3]\\ G \to Y\\ e \to G \end{cases} $$ [1] if after converting all CC's to T there is still a C, then the word length is odd. [2] In this and similar cases the string we're working with still has some symbols besides the N; we would then need rules to clear the output, but I've omitted them for brevity. [3] check whether the string has the same first and last symbols and remove them if so. A run of the algorithm on the word abba is: $$abba \to ABbaCC \to ABbaT \to ABBATT \to ABBA'T \to $$ $$ \to A'ABBT \to BBT \to e \to G \to Y.$$
The idea is to combine a tortoise $T$ that moves one letter at a time and a hare $H$ that moves two letters at a time. When the hare reaches the end, the tortoise is in the middle. The following rule creates the tortoise and the hare: $L\to LTH$ Now the problem is that the tortoise and the hare must move at the same time. The idea for that is that after a move, the hare sends a "move signal" to the tortoise: for $x, y\in \{a,b\}$, add rules (in this order): $xM \to Mx$ $TMx\to xT$ $Hxy \to MxyH$ To check when the hare reaches the right side, add rules (with the right priority relative to the rules above): $HxR\to N$ (the length is odd) $HR \to SR$ Now you can reverse the word between the tortoise and the right side (to transform $uu$ into $uu^R$). The idea is to switch those letters into variables $A$ or $B$: $aS\to AS$ $bS\to BS$ and to reverse the whole thing, for $x\in \{a,b\}$ and $Y\in \{A,B\}$: $xY\to Yx$ Now we can check if the word is of the form $uu^R$ (I think there is a simpler method than the one you gave, since we know the middle of the string): $aTA \to T$ $bTB \to T$ $aTB \to N$ $bTA\to N$ Finally, when there are no more letters, the word is accepted: $LTSR\to Y$ Note: one might add rules to delete any other letter when there is an $N$.
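To experiment with rule sets like these, a generic interpreter is only a handful of lines. A minimal sketch (the function name and demo rule set are my own; it applies the first applicable rule at its leftmost occurrence and repeats, with a step cap to guard against non-termination; terminal rules, which real Markov algorithms also allow, are omitted):

```python
def run_markov(rules, word, max_steps=10_000):
    """Apply an ordered list of (pattern, replacement) rules to `word`.

    At each step the first rule whose pattern occurs in the word is applied
    at the leftmost occurrence; the algorithm halts when no rule applies.
    """
    for _ in range(max_steps):
        for pat, rep in rules:
            i = word.find(pat)
            if i != -1:
                word = word[:i] + rep + word[i + len(pat):]
                break
        else:
            return word  # no rule applied: halt
    raise RuntimeError("step limit reached (possibly non-terminating)")

# Demo rule set: "ab" -> "ba" bubbles every b to the front,
# i.e. it sorts the string.
print(run_markov([("ab", "ba")], "abab"))  # -> "bbaa"
```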
{ "domain": "cs.stackexchange", "id": 21223, "tags": "discrete-mathematics" }
Is Hawking radiation a theory or a hypothesis?
Question: There are lots of articles calling Hawking radiation a theory, but doesn't the definition of a scientific theory state that a theory is substantiated by repeated testing and an overwhelming amount of evidence? Are those articles using a non-strict definition of the word theory, or are they making a mistake? Answer: In the case of Hawking radiation, the direct answer is "no, there is no direct test, nor can we imagine one with anything like current technology." But it is not some wild speculation made in a vacuum. The extremely closely related Unruh effect can be derived from basic quantum field theory on a curved spacetime, and many QFT and GR texts have at least an outline of the derivation. (And some people even claim the Unruh effect is testable.) Maybe our derivation or interpretation or even some basic principle is wrong, sure, but there is still good evidentiary support for Hawking radiation. Indirect evidence, when reasoned with correctly, is not so weak as it seems to many people. What follows below is my elaboration/rant on this last point. The problem with the scientific method, as taught to school children and promulgated by science pundits, is that it is too narrow and linear. The story goes something like:
1. Form a testable hypothesis $H$.
2. Gather evidence $E$.
3. If $E$ supports $H$, $H$ is provisionally admitted as true; otherwise go back to step 1.
That's all well and good, but there is no room for logical deduction in this simple picture. And inferential reasoning isn't really there either. A more accurate outline would involve several processes operating in parallel on our collective "web of knowledge" $W$ (i.e. our set of scientific beliefs and their relationships to one another):
Derive new beliefs to augment $W$: start with beliefs $B_1, \ldots, B_n \in W$ and a proposed belief $B'$; if $B'$ logically follows from $B_1, \ldots, B_n$, add $B'$ to $W$ along with the relation "logically derived from $B_1, \ldots, B_n$."
Infer new beliefs to augment $W$: gather experimental evidence $E_1, \ldots, E_n$; come up with a new belief $B$ that explains $E_1, \ldots, E_n$; if $B$ does not logically conflict with the rest of $W$, add it to $W$.
Test the consistency of $W$ with the real world: these are the same steps as given in the beginning.
This last process of experimental verification is certainly necessary on the whole. But it is entirely reasonable for there to be beliefs $B$ that are not directly experimentally verified. They could be inferred from large bodies of evidence as the only sensible thing we can come up with. Or they can be deduced from other parts of $W$. In fact, they can be deduced from very core principles of $W$ that would require monumental amounts of contrary evidence to overthrow. For example, if $W$ contains Coulomb's law and the idea that physics is invariant under the Lorentz group with fixed speed of light $c$, then the Lorentz force law and all of Maxwell's equations can be derived logically. There is no need to test these to add them to $W$, because logically you should believe them as much as you believe in Coulomb's law and Lorentz transformations. And if you do find experimental evidence against, e.g., Maxwell's equations, then the doubt it casts may very well propagate back to Coulomb's law, since you can't throw out Maxwell yet leave its logical antecedents untouched.
{ "domain": "physics.stackexchange", "id": 18227, "tags": "black-holes, hawking-radiation" }
OFDM signal spectrum and characteristics
Question: Is it correct to say that the spectrum of an OFDM signal is a rectangular pulse? What about the spectrum of each transmitted signal over the subcarriers? If the OFDM signal has a rectangular frequency-domain spectrum, then the time domain after the IDFT is much like a sinc function. Do we need pulse shaping in this case, since the sinc function already looks like a pulse shape? Answer: Your statement isn't strictly correct; the spectrum of an OFDM signal isn't a rectangular pulse. Think about it this way: OFDM is really just a method to efficiently modulate multiple low-rate narrowband signals that are sent simultaneously. Therefore, the OFDM signal's spectrum is really just the sum of the spectra of each of its narrowband components. For example, if your OFDM signal has 64 BPSK subcarriers, then its spectrum is equal to the spectra of 64 narrowband BPSK signals, spaced at the OFDM subcarrier spacing. If you were to estimate the OFDM signal's power spectrum with a wide enough resolution bandwidth, then the spectrum would appear to be rectangular. However, if you look closely enough at the signal's spectrum, you'll notice that it isn't flat. Here is an example MATLAB script that demonstrates the phenomenon:

% generate random BPSK symbol vector; this is equal to the "low-resolution"
% spectrum, calculated with bin spacing synchronized with the OFDM
% subcarrier grid
X = 1 - 2 * (rand(64,1) > 0.5);
f = -0.5 + (0:length(X)-1) / 64;
% create time-domain OFDM symbol using the IFFT
x = ifft(X);
% calculate interpolated spectrum to see what it looks like between OFDM
% subcarrier centers
Sh = fftshift(fft(x,4096));
fh = -0.5 + (0:length(Sh)-1) / 4096;
figure;
plot(f,abs(X));
hold all;
grid on;
plot(fh,abs(Sh));
legend('Regular', 'Interpolated');

Since the modulated OFDM signal doesn't have a perfectly rectangular spectrum, contrary to your guess, it doesn't necessarily have a sinc-like shape in the time domain either.
{ "domain": "dsp.stackexchange", "id": 2846, "tags": "digital-communications" }
Are there ways to estimate size of the "whole universe"?
Question: Words escape me, but by "whole universe" (I think) I mean everything that's spatially connected to the observable universe in a conventional sense. If there is a better term for it, please let me know! I was surprised to find out that theories and measurements about the big bang seem to mostly relate to the size of the observable universe via expansion, and if I understand correctly don't say much about the size of the whole universe. Have I got this right? Are there any ways at all to try to estimate size of the whole universe? Does the term even mean anything? See @RobJeffries' excellent answer to When will the number of stars be a maximum? and @Acccumulation's comment for background on this question. Answer: tl; dr The universe is probably infinite, but if that's the case it's impossible to verify. If the universe is finite, and small enough, and the global curvature is equal to the curvature of our observable universe, then we will be able to estimate its size. If the global curvature of the universe isn't positive, then the size of the universe is infinite, (and it's always been infinite since the dawn of time, at the Big Bang), assuming the topology of the universe is trivial. Measurements of the observable universe indicate that the curvature may be zero (giving a flat universe), or even negative; if the curvature is positive, then its value is very small. We assume that the observable universe is typical of the whole universe, but of course that's impossible to verify. Wikipedia says that experimental data from various independent sources (WMAP, BOOMERanG, and Planck for example) confirm that the observable universe is flat with only a 0.4% margin of error. [...] The latest research shows that even the most powerful future experiments (like SKA, Planck..) will not be able to distinguish between flat, open and closed universe if the true value of cosmological curvature parameter is smaller than $10^{-4}$. 
If the true value of the cosmological curvature parameter is larger than $10^{-3}$ we will be able to distinguish between these three models even now. So even if the total universe does have positive curvature, its size is much larger than the observable universe, assuming that our observable universe isn't a patch that's abnormally flat. According to How Big is the Entire Universe? by Ethan Siegel, if the curvature is positive, then the diameter of the total universe is at least 14 trillion lightyears, but that article is from 2012, more recent calculations may give a larger value.
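To put a rough number on "much larger than the observable universe": given an upper bound on the curvature parameter $|\Omega_k|$, the curvature radius is bounded below by $R = (c/H_0)/\sqrt{|\Omega_k|}$. A back-of-the-envelope sketch (this is a lower bound on the curvature radius only, not the diameter quoted above; the 0.4% bound is the one cited in the answer, while the $H_0$ value is an assumption on my part):

```python
import math

# Lower bound on the curvature radius of a closed universe:
#   R = D_H / sqrt(|Omega_k|),  where D_H = c/H0 is the Hubble distance.
c = 299_792.458          # speed of light, km/s
H0 = 67.7                # Hubble constant, km/s/Mpc (assumed Planck-like)
omega_k_bound = 0.004    # |Omega_k| <= 0.4% per the quoted experiments

D_H_Mpc = c / H0                  # Hubble distance in Mpc
D_H_Gly = D_H_Mpc * 3.2616e-3     # 1 Mpc = 3.2616e6 ly = 3.2616e-3 Gly

R_min_Gly = D_H_Gly / math.sqrt(omega_k_bound)
print(round(D_H_Gly, 1))   # ~14.4 Gly
print(round(R_min_Gly))    # a few hundred Gly
```

Even this conservative bound puts the curvature radius at several times the ~46 Gly comoving radius of the observable universe.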
{ "domain": "astronomy.stackexchange", "id": 3721, "tags": "universe, cosmology, cosmological-inflation, cosmological-horizon" }
Lewis Structure of the Guanidinium Ion
Question: In a problem, I was asked to find the Lewis structure of the guanidinium ion $\ce{C(NH2)3}^{+1}$. I followed the following steps which led me to an incorrect structure and I was hoping someone could help me find my mistake. The total number of valence electrons is given by $4+3(5+2(1)) +1 = 26.$ The total number of electrons with each atom having an octet is $8+3(8+2(2)) = 44.$ The number of bonding electrons is thus $44-26 = 18.$ I drew the following skeletal structure which includes $9$ bonds (corresponding to $18$ bonding electrons): I have $26 - 18 = 8$ non-bonding electrons remaining, which I assigned as lone pairs to the three nitrogens and carbon: However, this structure is clearly incorrect because carbon rarely forms three bonds. The correct structure (including resonance) is Answer: You missed a sign. When you incorporate the charge into your electron count, you subtract positive charges and add negative charges because an electron is negatively charged. $4+[3×(5+2)]\color{red}{+}1=26$ $4+[3×(5+2)]\color{blue}{-}1=24$ ✔️ With the correct electron count you should now be able to render a double bond from the carbon to one of the nitrogen atoms. In reality the pi bond this represents is delocalized through the whole ion, providing an extra stabilization that makes guanidine about as strongly basic as sodium hydroxide.
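The sign convention is easy to get wrong, so here is a trivial sketch (the helper function is my own, purely illustrative) that encodes the rule: sum the valence electrons of all atoms, then subtract the overall charge.

```python
# Valence-electron count for a Lewis structure: sum the valence electrons
# of all atoms, then SUBTRACT the overall charge (a +1 cation is missing
# an electron; a -1 anion has an extra one).
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6}

def valence_electrons(atoms, charge=0):
    """atoms: dict mapping element symbol -> count, e.g. guanidinium below."""
    return sum(VALENCE[el] * n for el, n in atoms.items()) - charge

guanidinium = {"C": 1, "N": 3, "H": 6}   # C(NH2)3+
print(valence_electrons(guanidinium, charge=+1))   # -> 24, not 26
```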
{ "domain": "chemistry.stackexchange", "id": 17510, "tags": "ions, amines, lewis-structure" }
The well-known classifiers that can be trained/tested in linear time
Question: I am interested in collecting a list of the classifiers that (depending on their setting) can have linear time complexity (in both the training and testing steps) with respect to the number of samples $n$, the number of features of each sample $m$, and the number of classes $c$. In other words, I am seeking a list of the well-known classifiers that can have a complexity of $\mathcal{O}(mnc)$ or lower. 1- I know that Naive Bayes classifiers are so, given that the parameters of the utilized probability distributions can be estimated in linear time (e.g. Poisson Naive Bayes and Multinomial Naive Bayes). 2- I also know that when K-NN is utilized as a classifier (counting the K nearest neighbors and choosing the most-voted class), it is also included in this list. Although this list, in theory, can be extended infinitely, I am interested in completing it with the classifiers well known to the CS community. I searched the web a lot, but seemingly there is no page even speaking about such a subject or such a list. The only thing that I found was a lot of info about "linear classifiers", whose scope is much more general than my question (their list even includes SVM, etc.). Answer: Classifiers whose training and testing time is linear with respect to the number of training samples ($n$), sample features ($m$), and classes ($c$): taking the list provided in the weblog of @Martin_Thoma as the basis, I exhaustively searched the web and reached the following list. The following classifiers potentially have computational complexities less than or equal to $\mathcal{O}(mnc)$ in both the training and testing phases:
k-nearest neighbors (https://stats.stackexchange.com/questions/219655/k-nn-computational-complexity),
Naive Bayes, which is linear for those PDFs that can be estimated in linear time (e.g. Poisson and Multinomial PDFs),
Approximate SVM (https://stats.stackexchange.com/questions/96995/machine-learning-classifiers-big-o-or-complexity),
Logistic Regression, which can be linear (https://cstheory.stackexchange.com/questions/4278/computational-complexity-of-learning-classification-algorithms-fitting-the-p),
AdaBoost and Gradient Boosting, which require at least a weak learner and are linear only if their weak learner is linear (roughly $f\,m\,n\log n$ for $f$ weak learners) (https://stackoverflow.com/questions/22397485/what-is-the-o-runtime-complexity-of-adaboost),
Neural Networks, which can be linear (https://ai.stackexchange.com/questions/5728/time-complexity-for-training-a-neural-network).
The following classifiers are nonlinear in either the training or the testing phase:
SVM requires polynomial training time with respect to the number of samples (https://stackoverflow.com/questions/16585465/training-complexity-of-linear-svm),
Decision trees and correspondingly Random Forests are log-linear ($m\,n\log n$) (https://stackoverflow.com/questions/34212610/why-is-the-runtime-to-construct-a-decision-tree-mnlogn),
Linear (and Quadratic) Discriminant Analysis (LDA and QDA) are polynomial ($n\,m^2$) (https://stats.stackexchange.com/questions/211177/comparison-of-lda-vs-knn-time-complexity),
Restricted Boltzmann Machine (RBM) is polynomial with respect to $m$ (http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html).
{ "domain": "cs.stackexchange", "id": 11760, "tags": "complexity-theory, time-complexity, machine-learning, classification, pattern-recognition" }
Could we estimate the age of the universe based on the planar property of the Solar System?
Question: The Big Bang scattered planets and stars everywhere in three dimensions. But after billions of years of moving and interacting with each other through gravity, planets ended up moving in the same plane. Given the very small but non-zero friction in space, could we give a good estimate of the age of the universe? Answer: No, the formation of planetary systems is not the result of the interaction of stars and planets which formed at random in space post Big Bang. Stars form from density fluctuations in the gas in galaxies, and planetary systems usually form (to the best of our knowledge) from the disks of gas and dust around stars that are a side effect of star formation. That is, planar planetary systems should be the norm because they form from planar disks, which are largely the result of dynamical processes in star formation.
{ "domain": "astronomy.stackexchange", "id": 1082, "tags": "universe, big-bang-theory, age" }
Is the explosion inside the grenade an external force in the grenade problem?
Question: The momentum conservation principle assumes that there are no external forces applied to the system. We have thrown a grenade away, and it explodes at some point. Do the fragments conserve the momentum? Isn't the explosion itself an external force applied to the system? Answer: You can select arbitrary objects and consider these objects as a system. There are some forces acting on these objects. Each of the forces is either internal or external, depending on whether the "other object" acting on our object belongs to the system or not. Internal forces do not affect the total momentum of our system. Now back to the grenade example. You can decide to include only the metal parts in the system. In this case the forces produced by the explosion gases are external. The total momentum of the system (that is, all the metal fragments in this case) does not have to remain constant after the explosion. You can as well consider all the components of the grenade as a system. In this case the forces between the explosion gases and the metal fragments are internal, and the momentum of all these parts must remain constant. For better results, explode the grenade somewhere far in space so that there are no forces from the surrounding air - these are external forces and they may affect the total momentum of the grenade fragments and explosion gases. Remark: I guess the total momentum of the metal fragments of the "usual" grenade would remain almost constant after explosion. That is, for some strange grenade it may happen that most of the gases go one way, metal fragments - the opposite way (so that the total momentum remains constant). But for a normal grenade the effect should not be significant.
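A quick numerical sketch (arbitrary made-up masses and velocities) of the second choice of system: if the explosion only redistributes momentum internally, the fragments' total momentum must equal the grenade's momentum before the blast.

```python
import numpy as np

# Model the explosion in the grenade's rest frame: give N fragments random
# velocities, then remove the mass-weighted mean so the net momentum in that
# frame is zero (internal forces come in action-reaction pairs).  Boosting
# everything by the grenade's flight velocity v must then reproduce the
# original momentum M*v.
rng = np.random.default_rng(42)

N = 20
m = rng.uniform(0.01, 0.1, size=N)           # fragment masses, kg
u = rng.normal(0, 50, size=(N, 3))           # raw velocities in rest frame

u -= (m[:, None] * u).sum(axis=0) / m.sum()  # enforce zero net momentum
v = np.array([12.0, 3.0, -4.0])              # grenade velocity before explosion

p_total = (m[:, None] * (u + v)).sum(axis=0)
print(p_total, m.sum() * v)                  # the two should agree
```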
{ "domain": "physics.stackexchange", "id": 44409, "tags": "momentum, conservation-laws" }
How to add in heat capacity term to Helmholtz Free Energy?
Question: The heat capacity is defined through the amount of heat necessary to change the temperature of a system/object, one divided by the other: $$dQ=C~dT \tag{1}$$ Usually we would like to input such a definition into the differential relations between a thermodynamic potential and its respective variables, such as the Helmholtz free energy: $$dF=-P~dV-S~dT + \mu~dN \tag{2}$$ Where does the heat capacity come into the Helmholtz free energy? I'm confused because I can already read off $\partial F/\partial T$ from (2), $$\left(\frac{\partial F}{\partial T}\right)_{V,N}=-S$$ Answer: Material properties can often be expressed as second derivatives of a thermodynamic potential. For example, the thermal expansion coefficient is $\alpha=\frac{1}{V}\left(\frac{\partial^2 G}{\partial P \partial T}\right)$. The heat capacity is $C=-T\left(\frac{\partial^2 F}{\partial T^2}\right)$.
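The step connecting the two is that the heat capacity is a second derivative of $F$: combining $dQ = T\,dS$ (reversible heating) with $S = -(\partial F/\partial T)_{V,N}$ read off from (2) gives

```latex
C_V=\left(\frac{\partial Q}{\partial T}\right)_{V,N}
   =T\left(\frac{\partial S}{\partial T}\right)_{V,N}
   =-T\left(\frac{\partial^2 F}{\partial T^2}\right)_{V,N}.
```

So the heat capacity does not appear in the first derivative of $F$; it lives one derivative deeper.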
{ "domain": "physics.stackexchange", "id": 34063, "tags": "thermodynamics" }
How to cluster and visualize 3D data in python
Question: I have a 3D dataset of x,y,z points with 2 categories, category A and B. My end goal is to cluster all points in category B into volumes (spheroids/clouds) and find all points of category A close to the edge of those volumes. I assume there won't be any points of category A inside the spheroids. The points of category B are very highly clustered in space, so clusters are probably very evident. In 2D GIS I have used Kernel Density Estimation and K-Means clustering for similar tasks, but since I am dealing with 3D data, and non-geographic at that (relative to a fictional 0,0,0 origin), and since I am comfortable with the python data science tools, I think matplotlib/scipy/numpy/sklearn/pandas/etc are probably better tools for this. But I am not sure what tools and libraries specifically would be good to look at. So my question is 2-fold: (1) what libraries would be well suited to find the 3D clusters and the points of category A "close" to them; (2) what tool would allow me to visualize the clusters, preferably in an interactive plot that allows me to zoom/pan/rotate. Answer: The answer by edmund is quite cool because it shows the algorithms and methodology that I need, but unfortunately his answer was about the Wolfram Language, which I don't know and don't really want to learn right now. But some digging and googling has turned up some good alternatives. Specifically, Open3D and sklearn became my tools of choice. sklearn's DBSCAN algorithm is what I need for the clustering, and sklearn has a lot of other clustering algorithms as well. Open3D is focused more on the geometric side of things and the visualization. It can create and visualize point clouds and meshes, and also includes some data processing algorithms like DBSCAN and, importantly, Convex Hull, which allows me to turn my clustered point clouds into meshes.
It is not as strong on the data science side as sklearn, but the combination of the two is really powerful, especially since Open3D can create a point cloud from a numpy array, and hence from a pandas dataframe. As a bonus I discovered Three.js as well, which is great if you want to visualize your results on the web. It has really good visualization tools, camera control, interactivity, etc. And it performs very well due to its WebGL implementation, much better than I expected. Unfortunately the docs are quite limited. They seem to rely mostly on examples, which often contain a lot of cool functionality but make it hard to isolate the specific information you need. But with some time investment and trial and error, you can take the files you produced with pandas/sklearn/Open3D and show them on the web to users.
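As a rough illustration of the workflow described above, without using Open3D or sklearn themselves: the sketch below groups synthetic category-B points with a radius-based flood fill (a crude stand-in for DBSCAN with min_samples=1) and then flags category-A points lying within a margin of any cluster. All data, the eps threshold, and the 0.5 "closeness" margin are invented for the example.

```python
import numpy as np

def radius_clusters(points, eps):
    """Flood-fill points into groups whose members are chained together
    by pairwise distances <= eps (a crude DBSCAN-like grouping)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((dists <= eps) & (labels == -1)):
                labels[k] = next_label
                stack.append(k)
        next_label += 1
    return labels

# Two well-separated synthetic blobs of category-B points in 3D.
rng = np.random.default_rng(0)
cat_b = np.vstack([rng.normal(0.0, 0.1, (20, 3)),
                   rng.normal(5.0, 0.1, (20, 3))])
labels = radius_clusters(cat_b, eps=1.0)

# Category-A points count as "close" if within a chosen margin of any B point.
cat_a = np.array([[0.3, 0.0, 0.0], [10.0, 10.0, 10.0]])
near = [bool(np.min(np.linalg.norm(cat_b - p, axis=1)) < 0.5) for p in cat_a]
```

In practice sklearn's DBSCAN replaces `radius_clusters`, and Open3D's convex hull would give the cluster "volumes" whose surfaces the category-A distances are measured against.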
{ "domain": "datascience.stackexchange", "id": 8184, "tags": "python, clustering, visualization" }
Why aren't the thermodynamic susceptibilities zero in the thermodynamic limit?
Question: As it is explained in this answer (and nicely so!), the second derivative of a thermodynamic potential with respect to an intensive quantity, for example pressure or magnetic field strength (or temperature), will give the variance of the corresponding extensive quantity in the given ensemble. This had me thinking whether one could extend this interpretation to the thermodynamic limit. However, in the thermodynamic limit, all extensive quantities are sharply peaked - wouldn't that mean that the variance, and hence the susceptibilities, are all zero? Or am I wrong and there is still some uncertainty left in the thermodynamic limit? Answer: In the thermodynamic limit both the extensive quantity (e.g., the total magnetization) and its variance (i.e., the magnetic susceptibility) scale proportionally to the size of the system: $$ \langle M\rangle \sim N,\\\sigma_M^2=\langle M^2\rangle - \langle M\rangle^2\sim N $$ This means that the relative fluctuations of the quantity in question decrease as $1/\sqrt{N}$: $$ \frac{\sigma_M}{\langle M\rangle} \sim \frac{1}{\sqrt{N}}, $$ in other words the standard deviation grows as $\sim \sqrt{N}$, i.e., much more slowly than the quantity itself. However, the variance, which is the square of the standard deviation, still grows proportionally to $N$, and therefore has a finite value when taken per number of particles/moles/mass/volume/etc. See Volume susceptibility.
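The scaling argument is easy to see in a toy model of $N$ independent spins (no interactions, so this only illustrates the $\sim N$ bookkeeping, not any real material's susceptibility):

```python
import numpy as np

rng = np.random.default_rng(1)

def magnetization_stats(n_spins, n_samples=20000, p_up=0.75):
    """Sample the total magnetization M of n_spins independent spins
    (+1 with probability p_up, else -1); return mean and variance of M."""
    spins = np.where(rng.random((n_samples, n_spins)) < p_up, 1, -1)
    m = spins.sum(axis=1)
    return m.mean(), m.var()

# <M> and Var(M) both grow ~ n, so sigma_M / <M> shrinks ~ 1/sqrt(n),
# while Var(M)/n (the "susceptibility per spin") stays finite.
stats = {n: magnetization_stats(n) for n in (100, 400, 1600)}
```

For these parameters the mean per spin sits near $2p-1 = 0.5$ and the variance per spin near $1 - 0.5^2 = 0.75$ for every $N$, while the relative fluctuation halves each time $N$ quadruples.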
{ "domain": "physics.stackexchange", "id": 88036, "tags": "thermodynamics, statistical-mechanics, observables" }
What is the difference between asteroids, comets and meteors?
Question: Some celestial objects seem to be referred to as asteroids, some as comets, and some as meteors or meteorites. What is the distinction between all of these different objects? Are any of them the same? Answer: The objects you are referring to are actually two different objects: asteroids and comets. Meteor and meteorite are other names for an asteroid at given stages of its interaction with our planet. We'll get to that. So first, what is the difference between an asteroid and a comet? A comet is a small solar system body that displays a "coma" (an atmosphere of a sort) and sometimes a tail when passing close to the Sun. They are mostly made of ice and dust, as well as some small rocky particles. We distinguish two kinds of comets, with short or long orbital periods. The short orbital period ones originate from the Kuiper Belt, a region composed of small bodies beyond the orbit of Neptune. The long orbital period ones originate from the Oort cloud, a scattered disk of icy planetesimals and small bodies lying around our solar system. An asteroid is a small body, composed mostly of rock and metal. In our solar system, they can originate from the asteroid belt, lying between Mars and Jupiter, from the orbit of Jupiter (the Jupiter Trojans), or from almost anywhere else in the Solar System. A small asteroid ('meteoroid') that enters the Earth's atmosphere becomes a meteor, what we also call a "shooting star". Finally, a meteor that was massive enough not to be completely destroyed entering the Earth's atmosphere, and that hits the ground, is a meteorite.
{ "domain": "astronomy.stackexchange", "id": 1547, "tags": "asteroids, comets, meteor, meteorite" }
Why not just complex conjugate bras and kets instead of Hermitian conjugate?
Question: I read that one equation involving bras, kets and operators implies another equation (its transpose conjugate), analogous to how one equation involving complex numbers implies its complex conjugate equation. I'm not seeing the need for the transpose though. Why not just do the conjugate? As an example: Let's consider an inner product equation $\langle A|B\rangle=z$, where $z$ is a complex number, $\langle A|$ is the bra $(a,b,c)$, $|B\rangle$ is the ket $(d,e,f)$. This equation basically expresses this relation between a bunch of complex numbers: $$ad+be+cf=z$$ If we simply conjugate every complex number inside the bras and kets, we get this implied relation: $$a^*d^*+b^*e^*+c^*f^*=z^*$$ The transpose conjugate equation, $\langle B|A\rangle=z^*$, also expresses the same relation as above but in a notationally flipped manner. So what's the need for the transpose? Answer: Seeing the following might help: $$|\psi\rangle\rightarrow \begin{pmatrix} a\\ b \end{pmatrix} $$ $$\langle \psi|\rightarrow \begin{pmatrix} a^*&b^* \end{pmatrix} $$ In this way the inner product can be written as $$\langle \phi|\psi\rangle \rightarrow \begin{pmatrix} c^*&d^* \end{pmatrix} \begin{pmatrix} a\\ b \end{pmatrix}=ac^*+bd^* $$ $$\langle \psi|\phi\rangle \rightarrow \begin{pmatrix} a^*&b^* \end{pmatrix} \begin{pmatrix} c\\ d \end{pmatrix}=ca^*+db^* =(ac^*+bd^*)^*$$ As expected. That explains the need for taking the transpose. The reason for taking the conjugate is that we want $\langle \psi|\psi\rangle $ to be positive, as it represents the squared length of a vector.
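The same point in a few lines of numpy (the amplitudes are arbitrary examples). Note that np.vdot conjugates its first argument, i.e. it builds the bra as a conjugate transpose:

```python
import numpy as np

# Arbitrary example amplitudes.
psi = np.array([1 + 2j, 3 - 1j])   # |psi>
phi = np.array([0.5j, 2 + 1j])     # |phi>

# np.vdot conjugates its FIRST argument: the bra is the conjugate
# TRANSPOSE of the ket (a row vector), not just an entrywise conjugate.
inner = np.vdot(phi, psi)      # <phi|psi>
swapped = np.vdot(psi, phi)    # <psi|phi>
assert np.isclose(swapped, np.conj(inner))   # <psi|phi> = <phi|psi>*

# Conjugation is what makes <psi|psi> real and positive (a squared length).
norm_sq = np.vdot(psi, psi)
assert norm_sq.imag == 0.0 and norm_sq.real > 0.0
```

Without the transpose, a column times a column is not even a defined matrix product; without the conjugate, the "norm" of a complex vector could come out complex or negative.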
{ "domain": "physics.stackexchange", "id": 79882, "tags": "hilbert-space, vectors, notation, complex-numbers, linear-algebra" }
Need biological system with specific reset process
Question: I need a biological process that can be described as a stochastic process in statistical physics. I am familiar with some processes such as birth-death or gene expression, but now I need a process where a specific reset process occurs. In this reset process, there is a counter that counts and increments an observable as the system evolves over time. The reset occurs randomly after a certain amount of time has passed, during which the counter does not count anything. Is it possible? Thanks for your attention. Answer: I think bacterial chemotaxis may qualify, depending on how the phenomenon gets aliased into the signal and time-counter variables. The bacterial flagellar rotor can turn in two directions. Because the flagellum is chiral, clockwise and counter-clockwise rotor directions produce two different types of movement. One type is running, in which the motion is roughly linear; the other is tumbling, in which the bacterium basically rotates. It's easy to show that random switching between these modes leads, over time, to a random position in space. So, if one (say) puts a single bacterium at the center of a 2D playpen, its coordinates after an interval are random. The ratio of runs/tumbles is set by a kind of olfactory system, so that when the bacterium is moving towards an attractant there are more runs to close the distance, and fewer tumbles to scramble the bearing. But even when traveling straight to an attractant, tumbles still occasionally occur. This is summarized below in a figure and its caption from 1 (with minor edits, and my boldface): Attractants bind to the receptor-transducer proteins (Tar, Tsr, Trg, Tap) in the membrane. The receptors also mediate the response to certain repellents. This activates a CheW-dependent change in the rate of autophosphorylation of CheA and the proteins that are phosphorylated by CheA (CheB and CheY). Attractants reduce the rate of autophosphorylation, and repellents increase the rate of autophosphorylation.
The phosphoryl group is transferred from CheA-P to CheY and CheB. CheY-P interacts with the switch proteins (FliM, FliN, FliG) to cause clockwise rotation of the motor and tumbling. Thus repellents increase tumbling because they increase the level of CheY-P, and attractants reduce tumbling because they reduce the level of CheY-P. CheB-P is a methylesterase whose activity results in demethylation of the receptor-transducer proteins. As CheB-P goes up, methylation goes down. Thus repellents reduce the level of methylation because they increase the level of CheB-P, and attractants increase methylation because they reduce the level of CheB-P. CheB-P autodephosphorylates. CheR is a methyltransferase that methylates the receptor-transducer proteins. The more highly methylated receptor-transducer protein does not transmit the chemoattractant signal to CheA. Hence adaptation to a chemoattractant is due to increased methylation of the receptor-transducer proteins. Adaptation to a repellent is due to undermethylation of the receptor-transducer protein. The relative activities of CheA-P determine the level of CheY-P, hence the frequency of tumbling [3]. When a run is initiated, the protein CheY is in a dephosphorylated state. The phosphorylation of this protein is a second stochastic process driving the chemotaxis. When it is eventually dephosphorylated again, also a stochastic process, this comprises a reset of that timing system. As a result, the time in the current run or tumble is a stochastic variable that near-instantaneously resets. It could be appropriate to act as the "counter" time-index variable that could be paired with the bacterium-coordinates random variable.
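The run-and-tumble structure described above maps naturally onto the asker's "counter with stochastic resets". A toy 1D sketch (all rates, the speed, and the ±1-heading model are invented for illustration): the position (the observable) accumulates during runs, the clock is effectively paused during tumbles, and each tumble resets the heading at random.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_and_tumble(t_max, run_rate=1.0, tumble_rate=5.0, speed=1.0):
    """Toy 1D run-and-tumble walk with exponentially distributed run
    and tumble durations. Returns final position and the run lengths."""
    t, x, heading = 0.0, 0.0, 1.0
    run_times = []
    while t < t_max:
        run = rng.exponential(1.0 / run_rate)        # time until the stochastic reset
        x += heading * speed * min(run, t_max - t)   # counter increments during the run
        t += run
        run_times.append(run)
        t += rng.exponential(1.0 / tumble_rate)      # counter idle while tumbling
        heading = rng.choice([-1.0, 1.0])            # reset: fresh random heading
    return x, run_times

x_final, runs = run_and_tumble(100.0)
```

Over many realizations the final position is diffusive, echoing the answer's point that random run/tumble switching randomizes the bacterium's coordinates.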
{ "domain": "biology.stackexchange", "id": 12307, "tags": "cell-biology, biophysics" }
Matrix template Class
Question: This is my 2nd shot at dynamic memory allocation. This project is for practice purposes. So many things were considered whilst writing this minimal project. I considered using placement new to dynamically allocate memory, this would be optimal for large objects. I finally resolved to restrict user of my class to c++ built-in types I considered having commutative arithmetic operators. I finally decided that 2 + mat makes no sense I considered supporting range checking, after much consideration, I decided that users of my class should be more careful :-) I considered making co_factor, det, transpose, swap, valid_dim member-functions. I finally decided that those functions do not need access to elem_ so I removed them from the interface. The following are some areas I hope to get reviews on and as such improve upon. Design Performance Ease of use Note: The code is a little large. Matrix.h #ifndef MATRIX_H_ #define MATRIX_H_ #include <algorithm> #include <cstdlib> #include <initializer_list> #include <iostream> namespace mat { template<typename T> void swap(T& a, T& b); bool valid_dim(int row, int column); template<class T> class Matrix { public: explicit Matrix(int row = 1, int column = 1, const T& val = {}) : row_{row}, column_{column} { if(!valid_dim(row_, column_)) throw std::invalid_argument("Exception: Invalid row and column in constructor"); elem_ = new T[ row_ * column_]; for(size_t i = 0; i != size(); ++i ) elem_[i] = val; } Matrix(int row, int column, std::initializer_list<T> list) : row_{row}, column_{column} { if(!valid_dim(row_, column_)) throw std::invalid_argument("Exception: Invalid row and column in constructor"); if(list.size() != size()) throw std::runtime_error("Exception: Intializer list argument does not match Matrix size in constructor"); elem_ = new T[ row_ * column_]; int i = 0; for(const auto& item : list) { elem_[i] = item; ++i; } } Matrix(const Matrix& M) : row_{M.row_}, column_{M.column_}, elem_{new T[ row_ * column_]} { 
std::copy(M.elem_, M.elem_+size(), elem_); } Matrix& operator=(const Matrix& M) { if(row_ != M.row_ || column_ != M.column_) throw std::runtime_error("Exception: Unequal size in Matrix="); row_ = M.row_; column_ = M.column_; std::copy(M.elem_, M.elem_ + size(), elem_); return *this; } Matrix(Matrix&& M) noexcept : row_{0}, column_{0}, elem_{nullptr} { swap(row_, M.row_); swap(column_, M.column_); swap(elem_, M.elem_); } Matrix& operator=(Matrix&& M) noexcept { swap(row_, M.row_); swap(column_, M.column_); swap(elem_, M.elem_); return *this; } ~Matrix() { delete [] elem_; } T& operator()(const int i, const int j) { return elem_[i * column_ + j]; } // Note: no range checking const T& operator()(const int i, const int j) const { return elem_[i * column_ + j]; } size_t size() const { return row_ * column_; } size_t row() const { return row_; } size_t column() const { return column_; } Matrix& operator+=(const Matrix& rhs) { Matrix res = (*this) + rhs; *this = res; return *this; } Matrix& operator-=(const Matrix& rhs) { Matrix res = (*this) - rhs; *this = res; return *this; } Matrix& operator*=(const Matrix& rhs) { Matrix res = (*this) * rhs; *this = res; return *this; } Matrix& operator+=(const double rhs) { Matrix res = (*this) + rhs; *this = res; return *this; } Matrix& operator-=(const double rhs) { Matrix res = (*this) - rhs; *this = res; return *this; } Matrix& operator*=(const double rhs) { Matrix res = (*this) * rhs; *this = res; return *this; } Matrix& operator/=(const double rhs) { Matrix res = (*this) / rhs; *this = res; return *this; } Matrix operator+(const Matrix& rhs) { if(row_ != rhs.row_ || column_ != rhs.column_) throw std::runtime_error("Exception: Unequal size in Matrix+"); Matrix res(row_, column_, 0.0); for(size_t i = 0; i != size(); ++i) { res.elem_[i] = elem_[i] + rhs.elem_[i]; } return res; } Matrix operator-(const Matrix& rhs) { if(row_ != rhs.row_ || column_ != rhs.column_) throw std::runtime_error("Exception: Unequal size in Matrix-"); Matrix 
res(row_, column_, 0.0); for(size_t i = 0; i != size(); ++i) { res.elem_[i] = elem_[i] - rhs.elem_[i]; } return res; } Matrix operator*(const Matrix& rhs) { if(row_ != rhs.column_ ) throw std::runtime_error("Exception: Unequal size in Matrix*"); Matrix res(rhs.row_, column_, 0.0); for(int i = 0; i != rhs.row_; ++i) { for(int j = 0; j != column_; ++j) { for(int k = 0; k != row_; ++k) { res(i, j) += rhs(i, k) * this->operator()(k, j); } } } return res; } Matrix operator+(const double rhs) { Matrix res(row_, column_, 0.0); for(size_t i = 0; i != size(); ++i) res.elem_[i] = elem_[i] + rhs; return res; } Matrix operator-(const double rhs) { Matrix res(row_, column_, 0.0); for(size_t i = 0; i != size(); ++i) res.elem_[i] = elem_[i] - rhs; return res; } Matrix operator*(const double rhs) { Matrix res(row_, column_, 0.0); for(size_t i = 0; i != size(); ++i) res.elem_[i] = elem_[i] * rhs; return res; } Matrix operator/(const double rhs) { Matrix res(row_, column_, 0.0); for(size_t i = 0; i != size(); ++i) res.elem_[i] = elem_[i] / rhs; return res; } private: int row_; int column_; T *elem_; }; template<typename T> inline void swap(T& a, T& b) { const T tmp = std::move(a); a = std::move(b); b = std::move(tmp); } inline bool valid_dim(int row, int column) { return (row >= 1 || column >= 1); } template<typename T> Matrix<T> transpose(const Matrix<T>& A) { Matrix<T> res(A.column(), A.row(), 0.0); for(size_t i = 0; i != res.row(); ++i) { for(size_t j = 0; j != res.column(); ++j) { res(i, j) = A(j, i); } } return res; } template<typename T> Matrix<T> co_factor(const Matrix<T>& A, size_t p, size_t q) { if(p >= A.row() || q >= A.column()) throw std::invalid_argument("Exception: Invalid argument in cofactor(int, int)"); if(A.row() != A.column()) throw std::runtime_error("Exception:Unequal row and column in co_factor(int, int)"); Matrix<T> res(A.row() - 1, A.column() - 1); size_t a = 0, b = 0; for(size_t i = 0; i != A.row(); ++i) { for(size_t j = 0; j != A.column(); ++j) { if(i == p 
|| j == q) continue; res(a, b++) = A(i, j); if(b == A.column() - 1) { b = 0; a++; } } } return res; } template<typename T> double det(const Matrix<T> A) { if(A.row() != A.column()) throw std::runtime_error("Exception:Unequal row and column in det()"); if(A.row() == 2) { return ( A(0,0) * A(A.row()-1,A.row()-1) ) - ( A(0,1) * A(A.row()-1,A.row()-2) ); } int sign = 1, determinant = 0; for(size_t i = 0; i != A.row(); ++i) { Matrix<T> co_fact = co_factor(A, 0, i); determinant += sign * A(0, i) * det(co_fact); sign = -sign; } return determinant; } } #endif main.cpp #include <iostream> #include "Matrix.h" using namespace mat; template<typename T> void display(const T& A) { for(size_t i = 0; i != A.row(); ++i) { for(size_t j = 0; j != A.column(); ++j) { std::cout << A(i, j) << " "; } std::cout << '\n'; } } int main() { Matrix<double> my_mat1(2,2, {1,2,3,4}); Matrix<double> my_mat2(2,2, {5,6,7,8}); std::cout << "\nDisplay matrix: \n"; display(my_mat1); std::cout << '\n'; display(my_mat2); std::cout << "\nAddition: \n"; display(my_mat1 + my_mat2); std::cout << "\nSubtraction: \n"; display(my_mat2 - my_mat1); std::cout << "\nMultiplication: \n"; display(my_mat1 * my_mat2); std::cout << "\nInplace Addition: \n"; my_mat1 += my_mat2; display(my_mat1); std::cout << "\nInplace Subtraction: \n"; my_mat1 -= my_mat2; display(my_mat1); std::cout << "\nInplace Multiplication: \n"; my_mat1 *= my_mat2; display(my_mat1); std::cout << "\nTranspose: \n"; display(transpose(my_mat2)); std::cout << "\nAdding 2 to my_mat1: \n"; my_mat1 += 2; display(my_mat1); Matrix<int> my_mat3 {4,4, { 1,0,2,-1, 3,0,0,5, 2,1,4,-3, 1,0,5,0 }}; Matrix<double> co_factor_mat = co_factor(my_mat1, 0, 0); std::cout << "\nCofactor: \n"; display(co_factor_mat); std::cout << "Determinant of matrix: " << det(my_mat3) << std::endl; } Answer: Use an unsigned type for rows and columns (probably std::size_t). Creating a size 1 matrix as the default (empty constructor args) probably isn't useful behavior. 
We could have an ordinary default constructor creating a zero-sized matrix. valid_dim is wrong (|| should be &&) and unnecessary if we use an unsigned type for rows and columns. (There's nothing wrong with allocating a zero size array in C++, or we could set elem_ to nullptr). We can use std::fill in the value constructor and std::copy in init list constructor. It's strange to prevent assignment from a different sized matrix (and very unexpected for the user for it to throw). We should just resize the matrix if necessary. We can provide an at(i,j) function that does size checking (similar to the standard library containers). We'd normally implement the binary math operators (+, -, etc.) using the math-assignment operators (+=, -=, etc.); the opposite of how they are implemented above. (The assignment versions can modify the values in place). 2.0 * m is as reasonable as m * 2.0 - we should implement that too. Usually we'd implement binary math operators as free functions using +=, -= etc. where possible (and it's only one more line of code where we can't). swap can't std::move out of the const tmp variable - it'll always copy. Note that we don't need to write a custom swap function - std::swap will do exactly the same thing by default. There are quite a lot of other useful functions we could implement (c.f. the standard library containers): empty(), clear(), resize(), data(), iterators (begin(), rbegin(), etc.), operator== and operator!=. I guess this is an exercise in manual memory management, but we really should use std::vector for storage! Implementing everything becomes much easier.
{ "domain": "codereview.stackexchange", "id": 40038, "tags": "c++, performance, object-oriented, reinventing-the-wheel, memory-management" }
How do I select a light-weight 6-DOF robot?
Question: I plan to do some scientific research. After modeling and path planning for the robot, the algorithm outputs the joint control torques (or joint variables), and then I want to input this result to the robot control platform to check my algorithm. As far as I know, most robots only provide point-to-point tracking instructions (not a torque/joint-variable input API). Do any robots support torque or joint variables as the robot input (so that when I input a joint variable or torque, the robot moves)? How can I program a 6-DOF robot using torque rather than joint positions as input? If you have any questions or concerns, don't hesitate to let me know. Answer: This article explains how to implement joint-torque control on top of position-controlled robots. Implementing Torque Control with High-Ratio Gear Boxes and without Joint-Torque Sensors https://hal.archives-ouvertes.fr/hal-01136936/document Copied from another StackExchange question: https://robotics.stackexchange.com/a/7531/18316
{ "domain": "robotics.stackexchange", "id": 1570, "tags": "robotic-arm, ros, design, industrial-robot, platform" }
Delaunay Triangulation of Parallelepiped
Question: Given $n$ linearly independent vectors $v_1, v_2, \ldots, v_n$ in $n$-dimensional space. Let $V$ be the set of $2^n$ points of the form $x_1 v_1 + x_2v_2 + \ldots + x_nv_n$, in which $x_i$ can be $0$ or $1$. $V$ is the set of vertices of an n-dimensional parallelepiped. Is there any way to find the Delaunay triangulation of $V$ using subexponential space with respect to $n$? I understand that the size of the output is big, and we can't store the whole output. But is there any way to output the Delaunay edges one by one without storing the previous results? Answer: Testing whether a pair of points $p_i$ and $p_j$ are the endpoints of a Delaunay edge can be solved as a linear programming feasibility problem: Lift each point to one higher dimension by making its last coordinate be the sum of squares of the other coordinates. Then look for a hyperplane passing through $p_i$ and $p_j$ such that all the other points are on one side of it. This can be represented in linear inequality constraints by seeking a vector $v$ and scalar $c$ such that the dot product of $v$ with each point is at least $c$, and is exactly $c$ in the case of $p_i$ and $p_j$. So for any set of points in any dimension, it's possible to find the Delaunay edges one by one, by solving a number of linear programs that is polynomial in the number of points. The reason Delaunay triangulation is hard is not because of the edges; it's because there may be too many higher-dimensional features.
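Written out, with $\hat p_k$ denoting the lifted points, the feasibility problem for a candidate edge $(p_i, p_j)$ is:

```latex
\hat p_k=\bigl(p_k,\;\|p_k\|^2\bigr)\in\mathbb{R}^{n+1};\qquad
\text{find } v\in\mathbb{R}^{n+1},\ c\in\mathbb{R}\ \text{such that}\quad
v\cdot\hat p_i = v\cdot\hat p_j = c,\qquad
v\cdot\hat p_k\ge c\quad\text{for all } k\neq i,j.
```

A feasible $(v, c)$ is a supporting hyperplane of the lifted point set through $\hat p_i$ and $\hat p_j$; projecting its contact set back down yields an empty circumscribing sphere through $p_i$ and $p_j$, which is exactly the Delaunay edge condition.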
{ "domain": "cstheory.stackexchange", "id": 2352, "tags": "cg.comp-geom, delaunay-triangulation" }
Bunsen Burners and the Sun
Question: Why do Bunsen burners burn blue in the center? What element is being burned? Why does the sun glow yellow and not blue? A Bunsen burner is much cooler and yet it burns blue. Is it because of the relative size, or are they two different phenomena? (i.e. black body radiation and something else?) Answer: A Bunsen burner typically uses methane, butane, propane or another alkane, and these burn blue. The Wikipedia article on butane has a spectrum showing the $\mathrm{CH}$ radical as the primary source of blue emission. You should consult Chemistry.SE for more details on why they burn blue. Bunsen burners are designed to mix air into the gas before combustion, and the presence of oxygen allows the fuel to burn much more efficiently. If you restrict the air flow the flame looks yellow because soot (carbon) is being produced and the carbon glows red / yellow due to the temperature. The glow from the soot is brighter and drowns out the blue color. When you mix in enough oxygen the fuel burns cleanly and you only see the blue.
{ "domain": "physics.stackexchange", "id": 7733, "tags": "thermal-radiation" }
robot_pose_ekf coordinate frame
Question: I have a visual odometry source which outputs a ROS odometry msg in the coordinate system of the camera, which is z-axis forward. Now I also have an IMU-based odometry which gives data according to the coordinate frame of the world, which is x-axis forward. If I input these two sources into the robot_pose_ekf package, the pose published from the package would be according to which coordinate system? If this could cause a problem, should I convert the visual odometry to the world's coordinate system with the x-axis-forward convention? Also, what about the related published tf? Originally posted by Karan on ROS Answers with karma: 263 on 2013-01-20 Post score: 1 Answer: The wiki page of robot_pose_ekf says that all poses are robot poses. So every source of information should publish pose information of the robot (and not the sensor). REP 105 could help you to understand odometry frames. If your odometry source publishes sensor movements you will have to transform this to robot movements. Have a look at viso2_ros::OdometerBase::integrateAndPublish() for an example of how to transform sensor pose information to robot pose information using tf. Originally posted by Stephan with karma: 1924 on 2013-01-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Karan on 2013-01-23: So this means I should transform the visual odometry from the z-axis-forward frame to the robot frame, which is x-axis forward? Comment by Stephan on 2013-01-23: Yes, the full transformation from your visual odometry frame to your robot base_link has to be used. Make sure all your motion input reports motion of the same frame (base_link).
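As a concrete (hypothetical) example of the frame change involved: assuming the camera uses the common optical convention (z forward, x right, y down) and the robot body frame follows REP 103 (x forward, y left, z up), the fixed rotation between them can be written down directly. A real tf transform would also include the sensor's mounting offset on the robot, which is omitted here.

```python
import numpy as np

# Columns are the camera axes expressed in the body frame:
# camera x (right) -> body -y, camera y (down) -> body -z, camera z (fwd) -> body x.
R_body_cam = np.array([
    [0.0,  0.0, 1.0],
    [-1.0, 0.0, 0.0],
    [0.0, -1.0, 0.0],
])

fwd_cam = np.array([0.0, 0.0, 1.0])   # "forward" as the camera reports it
fwd_body = R_body_cam @ fwd_cam       # the same motion in the body frame

assert np.allclose(fwd_body, [1.0, 0.0, 0.0])
assert np.isclose(np.linalg.det(R_body_cam), 1.0)   # a proper rotation
```

Applying this rotation (plus the mounting offset) to every visual-odometry increment is what makes both inputs report motion of base_link, as the answer requires.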
{ "domain": "robotics.stackexchange", "id": 12501, "tags": "navigation, kinect, visual-odometry, coordinate-system, robot-pose-ekf" }
Get protein annotation from Uniprot out of protein mappings
Question: Given a file of about 10k mapping identifiers, how can I get the annotations of the proteins from UniProt, as it is not possible to do it manually? My mappings are contained in a .csv file. They are from the STRING database. Example: 9606.ENSP00000387699 Answer: This problem is discussed here. Apparently STRING IDs are in KEGG format, but these can be mapped onto UniProt IDs using information downloadable from here. I guess that it is either the protein.aliases data or the mapping_files data that you would need. The first link provides a Python script to extract useful info from the downloaded data. These are big downloads so I haven't actually tried this out myself. Presumably once you have the STRING→UniProt conversion done you can query UniProt with the output as a batch request. Good luck!
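A sketch of what parsing the protein.aliases data might look like. The three-column (STRING id, alias, alias source) layout and the "UniProt_AC" source tag are assumptions about the file format, and the accession values below are placeholders, not a verified mapping:

```python
import csv
import io

# A made-up snippet in a plausible protein.aliases layout:
# STRING id <tab> alias <tab> alias source. Accessions are placeholders.
snippet = (
    "9606.ENSP00000387699\tQ5VWK5\tUniProt_AC\n"
    "9606.ENSP00000387699\tIL23R_HUMAN\tUniProt_ID\n"
    "9606.ENSP00000269305\tP04637\tUniProt_AC\n"
)

def string_to_uniprot(tsv_text):
    """Collect UniProt accessions per STRING identifier."""
    mapping = {}
    for string_id, alias, source in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        if source == "UniProt_AC":   # keep only UniProt accessions
            mapping.setdefault(string_id, []).append(alias)
    return mapping

mapping = string_to_uniprot(snippet)
```

With the real download, you would stream the (large) file line by line instead of holding it in memory, then feed the collected accessions to UniProt as a batch retrieval.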
{ "domain": "biology.stackexchange", "id": 7233, "tags": "gene-annotation, proteomics" }
Ionizing with 980nm laser
Question: I have been looking at a 980 nm, 5-250 mW, 1 Hz to 1000 kHz pulsed IR laser. I understand that it takes 13.6 eV to ionize oxygen, and 15.6 eV to ionize nitrogen. Our pulses are only every 1,000,000 femtoseconds, but that is still ridiculously fast (0.000,000,001 seconds). So the first question is how many milliwatts do I truly need if it is only 15.6 eV to ionize nitrogen. Also, will the 980 nm wavelength be a problem for targeting nitrogen? Finally, I would like to use a focal lens where F = 20 cm, but I'm not sure if it will still ionize. I am willing to get any focal lens for it to ionize, even if it is right in front of the lens. Also, I've been talking to my chemistry teacher and he says it's not really called ionization, as ionization totally removes the electron from the atom; rather it's called a "quantum leap". I just want to know if he is right, and if he is, which is easier to do to produce light. I did a lot more research this time around and am ready to help clarify if I wasn't specific enough. Answer: First of all, the process I'm trying to complete is called photoionization. Second, a $980\,nm$ laser won't work, because 1240 divided by the ionization energy (in eV) of the element you're trying to ionize gives the wavelength (in nm) required. For nitrogen, which is 15.58 eV, that is $\sim 80\,nm$. So no, if you're doing a single-photon process you need 80 nm, but you can do a multiphoton ionization process, in which you can increase the wavelength (in nm) and still deliver the same energy (in eV). In the single-photon process the power (in mW) does not matter, except that the dot of light would be more visible; it wouldn't change whether it ionizes or not.
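The "1240" rule is just $\lambda\,[\mathrm{nm}] \approx hc/E$ with $hc \approx 1240\ \mathrm{eV\,nm}$. A small sketch of that arithmetic, including the idealized photon count for a multiphoton process (real multiphoton ionization rates depend strongly on intensity and selection rules, which this ignores):

```python
import math

H_EV_S = 4.135667696e-15         # Planck constant in eV*s
C_M_S = 2.99792458e8             # speed of light in m/s
HC_EV_NM = H_EV_S * C_M_S * 1e9  # ~1239.8 eV*nm, the "1240" in the rule of thumb

def photon_wavelength_nm(energy_ev):
    """Wavelength whose single photon carries `energy_ev`."""
    return HC_EV_NM / energy_ev

def photons_needed(energy_ev, wavelength_nm):
    """Minimum photon count at `wavelength_nm` to supply `energy_ev`
    (the idealized multiphoton-ionization picture)."""
    return math.ceil(energy_ev / (HC_EV_NM / wavelength_nm))

# Ionizing nitrogen (~15.58 eV) in one photon needs ~80 nm light;
# a 980 nm photon carries only ~1.27 eV, so many photons would be required.
```

This makes the answer's point quantitative: a 980 nm photon falls short of 15.58 eV by more than a factor of ten, so only a (very high intensity) multiphoton process could bridge the gap.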
{ "domain": "physics.stackexchange", "id": 44388, "tags": "laser, ionization-energy, laser-interaction" }
How pain can stimulate the vagus nerve
Question: I'm trying to find out why a prompt, severe, short pain causes a stimulation of the vagus nerve. What could the physiological explanation be? Is it because the pain is triggering the sympathetic system (huge stress because of the pain), and directly after that the parasympathetic system is trying to balance it, but it's overreacting? And then the sympathetic system goes back to normal while the parasympathetic stays high? Is there a physiological explanation? Thanks Answer: Short answer Given your comments, you are referring to the cause of fainting after being triggered by certain stressors. The reason is a sudden drop in blood pressure due to parasympathetic nervous system activation (vagus nerve activity). This leads to reduced blood flow to your brain, which results in a brief loss of consciousness (Mayo Clinic). Background There are various types of fainting (syncope) with various causes. The type you are referring to is neurally mediated syncope (Arthur & Kaye, 2000). It is evoked by emotional activation due to stressors (pain, fear etc.). Stress responses are accompanied by the release of stress hormones, most notably adrenaline (epinephrine). Epinephrine release results in sympathetic nervous system arousal, resulting in increased blood pressure, increased pulse and sweating. It also increases the amount of blood being funneled to peripheral organs. These mechanisms aid in the fight, fright and flight response, as more blood in the periphery allows more glucose to be transported to the muscles (Agarwal, 2007). The funneling of blood to the periphery (venous pooling), however, can reach a point at which venous return of blood to the heart falls too sharply. This in turn can lead to a substantial increase in the muscle activity in the ventricles, as they are likely struggling to pump the small available amounts of venous blood into the heart chambers.
The increased muscular activity in the ventricles is believed to activate mechanoreceptors that would normally fire only during stretch. Stretch of heart muscle is normally associated with high blood pressure. The warning message of high blood pressure from the mechanoreceptors is sent to the medulla in the brainstem, which in turn leads to parasympathetic nervous arousal. As part of the parasympathetic response, the vagus nerve is stimulated, which reduces heart activity to lower blood pressure. However, when the mechanoreceptors are stimulated by the excessive contractility of the ventricles due to a low blood volume, they send the same neural input to the medulla that normally signals hypertension, thus causing vagus nerve stimulation and hence a paradoxical parasympathetic arousal that results in low blood pressure (hypotension), low heart rate (bradycardia), and ultimately fainting (syncope) (Grubb, 2005). References Agarwal et al., Postgrad Med J 2007;83:478–80 Arthur & Kaye, Postgrad Med J 2000;76:750-75 Grubb, Circulation 2005;111:2997-3006
{ "domain": "biology.stackexchange", "id": 3655, "tags": "neuroscience, human-anatomy, neurophysiology, central-nervous-system" }
Does the formula $V = \omega r$ hold with angular frequency?
Question: Can I use the formula $V = \omega r$ if $\omega$ is the angular frequency instead of the angular velocity? ($V$ is velocity.) Answer: Angular frequency is the magnitude of angular velocity, so yes. Although as far as I'm aware, we typically only speak of angular frequency when the angular velocity (or speed) is constant.
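As a quick numerical check (my own illustration, not from the original answer): for uniform circular motion the angular frequency is $\omega = 2\pi f$, so a point at radius $r$ moves with speed $v = \omega r$.

```python
import math

def tangential_speed(freq_hz: float, radius_m: float) -> float:
    """Speed of a point on a rotating body: v = omega * r, with omega = 2*pi*f."""
    omega = 2 * math.pi * freq_hz  # angular frequency in rad/s
    return omega * radius_m

# A point 0.5 m from the axis of a wheel spinning at 2 revolutions per second:
v = tangential_speed(2.0, 0.5)  # 2*pi*2*0.5 = 2*pi, about 6.28 m/s
```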
{ "domain": "physics.stackexchange", "id": 95647, "tags": "classical-mechanics, harmonic-oscillator, angular-velocity" }
Non-zero electric field inside a conductor when applying a large external field
Question: I'm probably missing something, or do not understand conductors well enough, but I have a question related to the title of this message. In many places you read that there can be no electric field inside a conductor. The arguments typically go something like: if there is an electric field, charges inside the conductor will rearrange themselves to cancel the field. Very simply stated. What I don't understand is that this seems to assume that there always is "enough" charge to redistribute. To clarify my confusion, let's say we have a conducting solid sphere with some charge. If we apply a "large" external static field to this sphere, charges inside it will tend to cancel it out. But what if the total charge inside it is not enough? The total charge in the sphere can only generate a limited field, but the external one can be arbitrarily large. What if the field outside is so large that the potential it generates, from one side of the sphere to the other, is larger than what the internal charge can generate? As I said, I'm probably missing something essential, but can someone please point out the mistake in the above argument? Answer: In a static situation, there is no electric field inside a conductor. If you have an energy source, and apply current to (for instance) a conductive light bulb filament, Ohm's law gives a nonzero solution for the electric field inside that filament. So, 'no electric field' is correct for electrostatics, and not for a variety of interesting technology where current flows in conductors. Still, there might be miles of wire between me and the power plant, and there's little voltage drop in a few millimeters of the transmission-line copper. The field really IS small, so it's a good approximation even for current-carrying conductors. this seems to assume that there always is "enough" charge to redistribute.
It took a LOT of work to produce field-effect transistors, which actually DO have a practical limit to the amount of charge that can redistribute, so that a channel could be depleted of what would otherwise be free charge. So, yes, there are also useful devices with depletion of a microinch or so of material arranged by an external field. It's not something you can do with large samples on a tabletop. The semiconductor purity requirements were somewhat challenging.
{ "domain": "physics.stackexchange", "id": 49516, "tags": "electrostatics, electric-fields, conductors" }
Get all local maxima of a DFT
Question: I'm doing a Machine Learning project that incorporates many DSP elements in feature extraction. I am looking at the STFT of a piece of audio and trying to search for specific spectral features within it - for example, the exponential decay of the fundamental frequency of a kick drum - to determine what kind of sound the piece of audio is. One thing I've looked into is finding the frequencies of all the local maxima of a DFT, unquantized. This includes being able to identify when the peak is "between" bins based on contextual information. A possible path to go down is to find all of the changes in direction within the spectrum, and then for N changes fit an $(N+1)$-degree polynomial using a regression, then find all the zeroes of the derivative of this polynomial. But does an algorithm for this already exist? It seems like something that might have been done before. Answer: That sounds a lot like you shouldn't be using the DFT, but rather a parametric spectrum estimator. For example, there are spectrum estimators that actually give you a function to evaluate at real-valued points rather than a vector of a sampled spectrum (your phrase "DFT, unquantized" doesn't make all that much sense to me – the DFT is discrete, inherently, so the search algorithm you describe only works if you do a massive DFT and then start somewhere in the sampled spectrum and look for peaks from there). My go-to example here is either ESPRIT, which, given knowledge about the number of significant signals, will just give you a vector of real-valued frequencies of peaks, or ROOT-MUSIC (dunno if that's the official term, it's what we used, based on [1]), which uses the pseudospectrum function generated by the MUSIC algorithm and finds the signals by looking for roots of a specifically crafted polynomial [1]. [1]: Barabell, Arthur J.: Improving the resolution performance of eigenstructure-based direction-finding algorithms.
In: Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP ’83., Band 8, April 1983, available online.
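If you do stay with the DFT, a standard alternative to the polynomial-regression idea in the question is quadratic (parabolic) interpolation: fit a parabola through each local-maximum bin and its two neighbours to get a fractional-bin peak location. This is my own illustration of that technique, not the parametric (ESPRIT/MUSIC) approach the answer recommends:

```python
import numpy as np

def interpolated_peaks(mag):
    """Fractional-bin positions (and heights) of local maxima of a magnitude
    spectrum, refined by fitting a parabola through each peak bin and its
    two neighbours."""
    found = []
    for k in range(1, len(mag) - 1):
        if mag[k] > mag[k - 1] and mag[k] >= mag[k + 1]:
            alpha, beta, gamma = mag[k - 1], mag[k], mag[k + 1]
            # vertex offset of the parabola through (-1,alpha), (0,beta), (1,gamma)
            p = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)
            found.append((k + p, beta - 0.25 * (alpha - gamma) * p))
    return found

# A sinusoid whose frequency falls between bins (10.3 bins of a 256-pt DFT):
n = np.arange(256)
x = np.sin(2 * np.pi * 10.3 * n / 256)
mag = np.abs(np.fft.rfft(x * np.hanning(256)))
best = max(interpolated_peaks(mag), key=lambda pk: pk[1])
# best[0] lands close to 10.3, even though the DFT only has integer bins
```

Note that sidelobes of the window also show up as (small) local maxima, which is why the usage above picks the peak with the largest interpolated height.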
{ "domain": "dsp.stackexchange", "id": 4995, "tags": "fourier-transform, frequency-spectrum" }
Specific Heat of Nanoparticles
Question: Why is the specific heat of nanoparticles less than their bulk specific heat? Answer: The specific heat of the bulk is the sum of the specific heats of all the applicable energy carriers. Let us consider electrons and phonons for simplicity: $$ C_{v,\text{total}} = C_{v,\text{electrons}} + C_{v,\text{phonons}} = \left(\frac{\partial U}{\partial T}\right)_{v,\text{electrons}} + \left(\frac{\partial U}{\partial T}\right)_{v,\text{phonons}}.$$ Taking the energy distributions into account, we can calculate the specific heat as $$ C_{v,\text{total}} = \frac{d}{dT}\int E\, f_{FD}(E)\,D(E)\,dE + \frac{d}{dT}\int E\, f_{BE}(E)\,D(E)\,dE, $$ where the distributions are the Fermi-Dirac and the Bose-Einstein distributions and $D(E)$ is the density of states. Obviously, when you have a bulk material, like a solid, the density of states is modified to accommodate the overlap of the electronic states, resulting in band formation. This does not happen in free particles (e.g. nanoparticles).
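As a concrete illustration of the bulk phonon term, the sketch below evaluates the Debye-model lattice heat capacity, i.e. the Bose-Einstein energy integral with the bulk Debye density of states. The Debye model and the example Debye temperature are my assumptions, not part of the original answer; the point is only that the heat capacity follows from an energy integral and recovers the Dulong-Petit limit $3Nk_B$ at high temperature.

```python
import numpy as np

def debye_heat_capacity(T, theta_D, n_atoms=1.0, kB=1.380649e-23):
    """Phonon heat capacity (J/K) for n_atoms atoms in the Debye model:
    C_v = 9*N*kB*(T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx."""
    x = np.linspace(1e-8, theta_D / T, 20000)  # avoid the 0/0 limit at x = 0
    integrand = x**4 * np.exp(x) / np.expm1(x)**2
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2  # trapezoid rule
    return 9 * n_atoms * kB * (T / theta_D)**3 * integral

kB = 1.380649e-23
theta = 300.0  # an assumed, typical Debye temperature in kelvin
c_hot = debye_heat_capacity(3000.0, theta)   # T >> theta_D: approaches 3*kB per atom
c_cold = debye_heat_capacity(30.0, theta)    # T << theta_D: suppressed (~T^3 behaviour)
```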
{ "domain": "physics.stackexchange", "id": 9727, "tags": "thermodynamics, particle-physics, nanoscience" }
Create 2 CAD files
Question: I found CAD files for the Create on the ROS TurtleBot download page (.zip), and shells on the Gazebo sim page. Any ideas where the files for the Create 2 could be found? Answer: The CAD you pointed out is for the TurtleBot, and includes electromechanical CAD for that assembly. I only spot-checked a few files, but the only Create-relevant CAD in that package I found was an STL of the robot's outer dimensions. The Create 2 manual should have enough information for someone to produce a solid model of similar precision, but I do not believe iRobot has released a shell for the robot, yet. Maybe you should make one and put it on a site like Thingiverse? There are other designs by iRobot that can be found there for various parts of the robot, like the bin. Please note that I am an iRobot employee, but the postings on this site are my own and don't necessarily represent iRobot's positions, strategies, or opinions.
{ "domain": "robotics.stackexchange", "id": 871, "tags": "irobot-create, design, turtlebot2" }
What is the pressure that a sonic wave actually creates?
Question: We know that sonic waves in the atmosphere are basically areas of high and low pressure, but I couldn't find how large that pressure is. There are online calculators for sound pressure, but what does, let's say, a 10 pascal sound pressure mean? That the high pressure area is 5 pascals above atmospheric pressure and the low pressure area is 5 pascals below atmospheric pressure? Or what? Please elaborate. Answer: The maximum change of pressure caused by a sound wave is its pressure amplitude. This would be the difference between the high and low pressure areas in the sound wave. When sound is measured in pascals, however, for the purpose of computing decibels by comparing with other sounds, it's just the high pressure against the measuring surface, to the extent that it exceeds ambient atmospheric pressure.
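To connect a pressure in pascals to the familiar decibel scale (my own addition, not part of the original answer): sound pressure level is defined from the RMS pressure as $L_p = 20\log_{10}(p_{\mathrm{rms}}/p_0)$ with reference $p_0 = 20\,\mu\mathrm{Pa}$, and for a sine wave $p_{\mathrm{rms}} = p_{\mathrm{peak}}/\sqrt{2}$.

```python
import math

P_REF = 20e-6  # reference pressure in Pa (roughly the threshold of hearing in air)

def spl_db(p_rms: float) -> float:
    """Sound pressure level in dB from an RMS pressure in pascals."""
    return 20 * math.log10(p_rms / P_REF)

# A sine wave with 10 Pa peak amplitude, i.e. the pressure swings roughly
# +/- 10 Pa around the ~101325 Pa ambient atmospheric pressure:
p_rms = 10 / math.sqrt(2)
level = spl_db(p_rms)  # about 111 dB SPL, painfully loud but a tiny fraction of 1 atm
```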
{ "domain": "physics.stackexchange", "id": 23551, "tags": "pressure, acoustics, air" }
Edit distance (Levenshtein-Distance) algorithm explanation
Question: I want to calculate the edit distance (aka Levenshtein distance) between two words: «solo» and «oslo». According to this site we'll get the result matrix: What I don't understand is: in the case of comparing the last «o» from «solo» with the first «o» of «oslo», we'll see the submatrix: 3 2 4 3 As far as I understand, in order to calculate the bottom-right value, which in this example is equal to 3, we take min(3, 2, 4) + (0 if the letters are the same, otherwise 1). So why, in this example, is the bottom-right value equal to 3 and not 2? Answer: If we look closely at the algorithm implementation in C: http://en.m.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance#C we'll see the main statement which fills the matrix: matrix[x][y] = MIN3(matrix[x-1][y] + 1, matrix[x][y-1] + 1, matrix[x-1][y-1] + (s1[y-1] == s2[x-1] ? 0 : 1)); As we can see, the values of the cells with indexes matrix[x-1][y] (upper cell) and matrix[x][y-1] (left cell) are automatically increased by 1, while the upper-left cell (on the diagonal) with index matrix[x-1][y-1] is increased by 1 only if the letters are different, and only after that do we take the minimum of these 3 values. In other words: 3 2 4 min(3+0, 2+1, 4+1) = 3
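The same recurrence translates directly into Python. This sketch is my own re-implementation (not the linked Wikibooks code) and builds the full dynamic-programming matrix with the same indexing convention as the C snippet:

```python
def levenshtein(s1: str, s2: str) -> int:
    """Edit distance via the standard dynamic-programming matrix."""
    rows, cols = len(s2) + 1, len(s1) + 1
    m = [[0] * cols for _ in range(rows)]
    for y in range(cols):
        m[0][y] = y          # transforming "" into a prefix of s1: insertions only
    for x in range(rows):
        m[x][0] = x          # transforming a prefix of s2 into "": deletions only
    for x in range(1, rows):
        for y in range(1, cols):
            cost = 0 if s1[y - 1] == s2[x - 1] else 1
            m[x][y] = min(m[x - 1][y] + 1,         # upper cell
                          m[x][y - 1] + 1,         # left cell
                          m[x - 1][y - 1] + cost)  # diagonal cell
    return m[rows - 1][cols - 1]

# "solo" -> "oslo" needs two substitutions (swap the first two letters):
d = levenshtein("solo", "oslo")  # -> 2
```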
{ "domain": "cs.stackexchange", "id": 3148, "tags": "algorithms, dynamic-programming, strings, string-metrics, edit-distance" }
How to use IR range finder with navigation stack?
Question: How do you use an IR range finder with the navigation stack? I got my eddiebot-like robot all set up, and added the range finders to costmap_common_params.yaml, but it just tells me: [FATAL] [1361591285.796743183]: Only topics that use point clouds or laser scans are currently supported So I'm thinking it wouldn't be too hard to translate my range finders into laser scan messages, but that seems like a kludge. It seems like this must have been done before (for example on the Eddiebot). What have others done? Originally posted by Jon Stephan on ROS Answers with karma: 837 on 2013-02-23 Post score: 0 Answer: With the current navigation stack, you have to either convert to PointCloud, PointCloud2, or LaserScan (I would avoid PointCloud as that message might go away some day and is superseded by PointCloud2). The sensor_msgs/Range message is actually fairly new, and while you could certainly go add that capability to the navigation stack, you would have to edit the costmap_2d, possibly AMCL, etc., and so it is more straightforward to just write a node that creates a LaserScan out of your sensor data. Note, however, that you may actually want to interpolate in between your range finders or tune the costmap grid size, otherwise you may have serious issues clearing the costmap if you don't have enough other sensors on the robot. Originally posted by fergs with karma: 13902 on 2013-02-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Jon Stephan on 2013-03-01: Thanks, I changed my code to output a Range message along with an equivalent LaserScan message. I do also have a Kinect; these are just to sense the things that are too close for the Kinect to see.
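To illustrate the suggested workaround of publishing each range reading as a degenerate laser scan, here is a minimal, ROS-free sketch of just the conversion logic. The field names mirror sensor_msgs/LaserScan, but this is my own illustration (the beam-width and range limits are invented example values), not code tested against any navigation stack:

```python
import math

def range_to_scan_fields(distance, beam_width_rad,
                         min_range=0.02, max_range=1.5, n_rays=5):
    """Represent one IR range reading as LaserScan-style fields: a few rays
    spread across the sensor's beam width, all reporting the same distance.
    Out-of-range readings become inf so the costmap can treat them as 'no hit'."""
    in_range = min_range <= distance <= max_range
    clipped = distance if in_range else float('inf')
    return {
        "angle_min": -beam_width_rad / 2,
        "angle_max": beam_width_rad / 2,
        "angle_increment": beam_width_rad / (n_rays - 1),
        "range_min": min_range,
        "range_max": max_range,
        "ranges": [clipped] * n_rays,
    }

# A 0.8 m reading from an IR sensor with a ~10 degree beam:
scan = range_to_scan_fields(0.8, math.radians(10))
```

In a real node these fields would be copied into a sensor_msgs/LaserScan message (plus header, stamp, and frame_id) and published for the costmap to consume.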
{ "domain": "robotics.stackexchange", "id": 13022, "tags": "navigation, amcl" }
Value Iteration failing to converge to optimal value function in Sutton-Barto's Gambler problem
Question: In Example 4.3: Gambler's Problem of Sutton and Barto's book, whose code is given here. In this code the value function array is initialized as np.zeros(states), where states $\in[0,100]$, and the value function for the optimal policy returned after solving with value iteration is the same as the one given in the book. But if we only change the initialization of the value function in the code, say to np.ones(states), then the returned optimal value function changes too, which means that the value iteration algorithm converges in both cases but to different optimal value functions; however, two different optimal value functions are impossible in an MDP. So why is the value iteration algorithm not converging to the optimal value function? PS: If we change the initialization of the value function array to -1*np.random.rand(states), then the converged optimal value function also contains negative numbers, which should be impossible as rewards >= 0; hence value iteration fails to converge to the optimal value function. Answer: So, naturally, if you've observed something that contradicts the theoretical properties of Value Iteration, something's wrong, right? Well, the code you've linked, as it is, is fine. It works as intended when all the values are initialized to zero. HOWEVER, my guess is that you're the one introducing an (admittedly very subtle) error. I think you're changing this: state_value = np.zeros(GOAL + 1) state_value[GOAL] = 1.0 for this: state_value = np.ones(GOAL + 1) state_value[GOAL] = 1.0 So, you see, this is wrong. And the reason why it's wrong is that both GOAL (which is 100 in the example) and 0 must have immutable and fixed values, because they're terminal states, and their values are not subject to estimation. The value for GOAL is 1.0, as you can see in the original code.
If you want initial values other than 0, then you must do this: state_value = np.ones(GOAL + 1) state_value[GOAL] = 1.0 state_value[0] = 0 In the first case (changing the initial values to 1) what you were seeing was, essentially, an "I don't care policy". Whatever you do, you'll end with a value of 1. In the second case, with the random values, you saw the classic effects of "garbage in, garbage out".
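A compact, self-contained version of the corrected setup (my own re-implementation with $p_h = 0.4$, as in the book's example, not the linked repository's code): both terminal states are pinned before the sweeps start, so any interior initialization converges to the same optimal values.

```python
import numpy as np

def gamblers_value_iteration(p_h=0.4, goal=100, init=0.0, tol=1e-9):
    """In-place value iteration for the Gambler's Problem."""
    v = np.full(goal + 1, float(init))
    v[0], v[goal] = 0.0, 1.0  # terminal states: fixed, never estimated
    while True:
        delta = 0.0
        for s in range(1, goal):
            # stakes of at least 1, never more than needed to reach the goal
            stakes = range(1, min(s, goal - s) + 1)
            best = max(p_h * v[s + a] + (1 - p_h) * v[s - a] for a in stakes)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

v_ones = gamblers_value_iteration(init=1.0)   # same answer as init=0.0
# v_ones[50] converges to p_h = 0.4 (bet everything once from capital 50)
```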
{ "domain": "ai.stackexchange", "id": 2293, "tags": "reinforcement-learning, value-functions, sutton-barto, value-iteration, numpy" }
How can this piece of code be refactored?
Question: I use DMD 1.056 with Tango 0.99.9 to build a GPX document using an API. I am a beginner in the D language. Using DMD 1.056 with Tango 0.99.9 is a compulsory requirement. The GPX data are hardcoded here, but my intent is to write more high-level GPX builder code using an appropriate API. The piece of code to refactor: module SwathGen; import tango.io.Stdout, tango.text.xml.Document, tango.text.xml.DocPrinter; void main(char[][] args) { auto gpxdoc = new Document!(char); gpxdoc.header; gpxdoc.tree .element(null,"gpx") .attribute (null,"xmlns","http://www.topografix.com/GPX/1/1") .attribute (null,"version","1.1") .attribute (null,"creator","SwathGen") .attribute (null,"xmlns:xsi","http://www.w3.org/2001/XMLSchema-instance") .attribute (null,"xsi:schemaLocation","http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd http://www.topografix.com/GPX/gpx_style/0/2 http://www.topografix.com/GPX/gpx_style/0/2/gpx_style.xsd http://www.topografix.com/GPX/gpx_overlay/0/3 http://www.topografix.com/GPX/gpx_overlay/0/3/gpx_overlay.xsd") ; gpxdoc.elements .element (null,"metadata") .element(null,"name","JobDef.gpx") .parent .element(null,"desc","Spray Job") .parent .element(null,"author") .element (null,"name","izylay") .parent .element (null,"email") .attribute (null,"id","izylay") .attribute (null,"domain","ary.com") .parent .parent .element(null,"copyright") .attribute (null,"author","izylay") .element (null,"year","2011") .parent .parent .element(null,"time","2011-10-10T08:19:50Z") .parent .element(null,"keywords","ULM, J300, Aerial Spraying, Locust") .parent .element (null,"bounds") .attribute (null,"minlat","-18.85522622") .attribute (null,"minlon","47.37275913") .attribute (null,"maxlat","-18.82044444") .attribute (null,"maxlon","47.39838002") ; gpxdoc.elements .element(null,"wpt") .attribute (null,"lat","-18.85522622") .attribute (null,"lon","47.39173757") .element (null,"name","A000") .parent .element (null,"sym","Waypoint") ; gpxdoc.elements
.element(null,"rte") .element (null,"name","Spray Job") // .parent .element(null,"rtept") .attribute (null,"lat","-18.85522610") .attribute (null,"lon","47.39838002") .element (null,"name","Entry point") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85522525") .attribute (null,"lon","47.37275913") .element (null,"name","B000") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85387012") .attribute (null,"lon","47.37275913") .element (null,"name","B001") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85387109") .attribute (null,"lon","47.39173757") .element (null,"name","A001") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85251596") .attribute (null,"lon","47.39173757") .element (null,"name","A002") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85251499") .attribute (null,"lon","47.37275913") .element (null,"name","B002") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85115986") .attribute (null,"lon","47.37275913") .element (null,"name","B003") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.85116082") .attribute (null,"lon","47.39173757") .element (null,"name","A003") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84980569") .attribute (null,"lon","47.39173757") .element (null,"name","A004") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84980472") .attribute (null,"lon","47.37275913") .element (null,"name","B004") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84844959") .attribute (null,"lon","47.37275913") 
.element (null,"name","B005") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84845056") .attribute (null,"lon","47.39173757") .element (null,"name","A005") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84709543") .attribute (null,"lon","47.39173757") .element (null,"name","A006") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84709446") .attribute (null,"lon","47.37275913") .element (null,"name","B006") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84573933") .attribute (null,"lon","47.37275913") .element (null,"name","B007") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84574029") .attribute (null,"lon","47.39173757") .element (null,"name","A007") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84438516") .attribute (null,"lon","47.39173757") .element (null,"name","A008") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84438419") .attribute (null,"lon","47.37275913") .element (null,"name","B008") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84302906") .attribute (null,"lon","47.37275913") .element (null,"name","B009") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84303003") .attribute (null,"lon","47.39173757") .element (null,"name","A009") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84167489") .attribute (null,"lon","47.39173757") .element (null,"name","A010") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84167393") .attribute 
(null,"lon","47.37275913") .element (null,"name","B010") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84031879") .attribute (null,"lon","47.37275913") .element (null,"name","B011") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.84031976") .attribute (null,"lon","47.39173757") .element (null,"name","A011") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83896463") .attribute (null,"lon","47.39173757") .element (null,"name","A012") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83896366") .attribute (null,"lon","47.37275913") .element (null,"name","B012") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83760852") .attribute (null,"lon","47.37275913") .element (null,"name","B013") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83760949") .attribute (null,"lon","47.39173757") .element (null,"name","A013") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83625436") .attribute (null,"lon","47.39173757") .element (null,"name","A014") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83625339") .attribute (null,"lon","47.37275913") .element (null,"name","B014") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83489825") .attribute (null,"lon","47.37275913") .element (null,"name","B015") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83489922") .attribute (null,"lon","47.39173757") .element (null,"name","A015") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83354409") 
.attribute (null,"lon","47.39173757") .element (null,"name","A016") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83354312") .attribute (null,"lon","47.37275913") .element (null,"name","B016") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83218798") .attribute (null,"lon","47.37275913") .element (null,"name","B017") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83218895") .attribute (null,"lon","47.39173757") .element (null,"name","A017") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83083382") .attribute (null,"lon","47.39173757") .element (null,"name","A018") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.83083285") .attribute (null,"lon","47.37275913") .element (null,"name","B018") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.82947771") .attribute (null,"lon","47.37275913") .element (null,"name","B019") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.82947868") .attribute (null,"lon","47.39173757") .element (null,"name","A019") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.82812355") .attribute (null,"lon","47.39173757") .element (null,"name","A020") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.82812258") .attribute (null,"lon","47.37275913") .element (null,"name","B020") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.82676744") .attribute (null,"lon","47.37275913") .element (null,"name","B021") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute 
(null,"lat","-18.82676841") .attribute (null,"lon","47.39173757") .element (null,"name","A021") .parent .element (null,"sym","Dot") .parent .parent .element(null,"rtept") .attribute (null,"lat","-18.82044444") .attribute (null,"lon","47.39173757") .element (null,"name","Exit point") .parent .element (null,"sym","Dot") .parent ; gpxdoc.elements .element(null,"extensions") .element(null,"polyline") .attribute(null,"xmlns","http://www.topografix.com/GPX/gpx_overlay/0/3") .element(null,"points") // .element(null,"pt") .attribute (null,"lat","-18.85522622") .attribute (null,"lon","47.37275913") .parent .element(null,"pt") .attribute (null,"lat","-18.85522622") .attribute (null,"lon","47.39838002") .parent .element(null,"pt") .attribute (null,"lat","-18.82044444") .attribute (null,"lon","47.39838002") .parent .element(null,"pt") .attribute (null,"lat","-18.82044444") .attribute (null,"lon","47.37275913") .parent .element(null,"pt") .attribute (null,"lat","-18.85522622") .attribute (null,"lon","47.37275913") // ; auto print = new DocPrinter!(char); Stdout(print(gpxdoc)).newline; } Question: How could it be improved? Answer: Hm, there is a lot of data in your program. As Cygal points out, you should put this where it belongs - in a data file. Other than that, there are lots of instructions like this in your code: .element(null,"rtept") .attribute (null,"lat","-18.82812258") .attribute (null,"lon","47.37275913") .element (null,"name","B020") .parent .element (null,"sym","Dot") .parent I would put that in a single function. .append(createDot(gpxdoc, "-18.82812258", "47.37275913", "B020")).parent .append(createDot(gpxdoc, "-18.82676744", "47.37275913", "B021")).parent Since I've never used this language, I leave the implementation of the createDot function to you. Edit: Now I understand your intent a little better.
You want to do something like this: auto gpxdoc = new GPXDocument; gpxdoc .setDesc("Spray Job") .setAuthor("izylay", "izylay@ary.com") .setCopyright("izylay", "2011") .setKeywords("ULM", "Aerial Spraying", "Locust") .setBounds("-18.85522622", "47.37275913", "-18.82044444", "47.39838002") .addWaypoint("A000", "-18.85522622", "47.39173757") .addEntrypoint("-18.85522610", "47.39838002") .addPoint("B000", "-18.85522525", "47.37275913") .addPoint("B001", "-18.85387012", "47.37275913") You can inherit from the Document object and add all these methods just as wrappers around normal node manipulation. The thing with fluent interfaces is that they may maintain state (that's why you need a parent method, for example). I'm not familiar with the GPX format, but I wouldn't be surprised if you can create groups of points. For that case, you just need to store a "state" reference. For the top-level attributes or elements (description, etc.) you can store references to the appropriate node. Hope this points you in the right direction.
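The wrapper idea above is language-agnostic, so here is the same fluent-builder pattern sketched in Python rather than D (my own illustration: GPXBuilder and its method names are invented for the sketch, and Python's xml.etree stands in for Tango's Document):

```python
import xml.etree.ElementTree as ET

class GPXBuilder:
    """Fluent wrapper that hides node bookkeeping behind domain-level methods."""
    def __init__(self, creator="SwathGen"):
        self.root = ET.Element("gpx", version="1.1", creator=creator)
        self.rte = None  # internal state, so callers never need .parent

    def set_metadata(self, name, desc):
        md = ET.SubElement(self.root, "metadata")
        ET.SubElement(md, "name").text = name
        ET.SubElement(md, "desc").text = desc
        return self  # returning self is what makes the interface "fluent"

    def add_waypoint(self, name, lat, lon, sym="Waypoint"):
        wpt = ET.SubElement(self.root, "wpt", lat=lat, lon=lon)
        ET.SubElement(wpt, "name").text = name
        ET.SubElement(wpt, "sym").text = sym
        return self

    def add_route_point(self, name, lat, lon):
        if self.rte is None:  # lazily open one <rte> group for all route points
            self.rte = ET.SubElement(self.root, "rte")
        pt = ET.SubElement(self.rte, "rtept", lat=lat, lon=lon)
        ET.SubElement(pt, "name").text = name
        ET.SubElement(pt, "sym").text = "Dot"
        return self

    def to_string(self):
        return ET.tostring(self.root, encoding="unicode")

doc = (GPXBuilder()
       .set_metadata("JobDef.gpx", "Spray Job")
       .add_waypoint("A000", "-18.85522622", "47.39173757")
       .add_route_point("B000", "-18.85522525", "47.37275913")
       .add_route_point("A001", "-18.85387109", "47.39173757")
       .to_string())
```

The repeated point coordinates themselves would then move into a data file, as the answer suggests, leaving only a loop over `add_route_point`.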
{ "domain": "codereview.stackexchange", "id": 1460, "tags": "d" }
Why does Gadolinium have the highest neutron cross section?
Question: Gadolinium-157 has the highest thermal-neutron capture cross-section among any stable nuclide. Based on my layman's understanding of neutron capture – particularly the fact that (I assume) neutrons don't much care about electron shells – I would have guessed that neutron capture cross-section would be proportional to either: Size of the nucleus. I.e., strictly increasing with (N+Z). Density of the solid. Gadolinium is middling with respect to both properties. So what is it about the phenomenon of neutron capture that results in the cross-sectional peak for promethium through gadolinium? Answer: Welcome to the world of nuclear physics, where the answer is "It's a little more complicated than that." Density of the solid You can rule this out: cross sections are tabulated per target atom. Size of the nucleus, i.e., strictly increasing with (N+Z). This is a good guess, but you miss an important feature of thermal neutron physics: the relevant size parameter isn't the diameter of the nucleus, but the size of the neutron's wavepacket --- whose scale parameter is something like the neutron's wavelength. Thermal neutrons have wavelengths of a few angstroms ($1\text{ Å} = 10^{-10}\,\rm m$), many orders of magnitude larger than the physical size of a nucleus. The actual result has more to do with nuclear structure: in order for there to be a capture reaction, there has to be a final state available to receive the neutron with the correct energy and quantum numbers. If you look at a table of isotopes (see also), you'll find that gadolinium and its lanthanide neighbors are pretty far from any nuclear magic numbers. That means that they have a very high density of nuclear states and are easy to excite --- and it increases the probability that there's a resonance in the nucleus $\rm^{158}Gd^*$ whose energy and quantum numbers overlap with a ground-state $\rm^{157}Gd$ and a milli-eV neutron. 
The nuclear structure data file for $\rm^{158}Gd$ cites this 1978 paper in a description of the structure of the resonance. That reference (which I can't access) apparently refers to a resonant state in $\rm^{157}Gd$ with an energy of about thirty milli-eV, which is approximately the energy of a room-temperature neutron. That statement doesn't make sense to me right away, but there is an inflection in the cross-section curve at a thermal-ish energy. If you look at neutron capture cross sections on a table of isotopes (this link should work) you can see your promethium-to-gadolinium cluster of high-$\sigma$ isotopes just to the right of the $N=82$ magic number. Midway between the $N=50$ and $N=82$ magic numbers is another very strong absorber, cadmium. You can also see that the elements in the uranium-ish island of stability are also eager neutron absorbers. There are also pairing effects happening in gadolinium. Nucleons don't like to be alone, so nuclei with odd $N$ or odd $Z$ (or both) are less stable than their even neighbors. Gadolinium, like many heavy even-$Z$ elements, has a whole pile of stable isotopes, but the even-$N$ isotopes are more tightly bound than the odd-$N$ isotopes. If you look at the neutron cross sections for all of the gadolinium isotopes, you can see how desperately the odd-$N$ species want to collect an extra neutron:
isotope   σ (barn)
-------   --------
Gd-152         735
Gd-153       22310
Gd-154          85
Gd-155       60740
Gd-156         1.8
Gd-157      253700
Gd-158         2.2
Gd-159  (unstable)
Gd-160         1.4
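The claim about thermal-neutron wavelengths is easy to verify with the de Broglie relation $\lambda = h/\sqrt{2 m_n E}$; this is my own numerical check, not part of the original answer:

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
M_N = 1.67492749804e-27  # neutron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def neutron_wavelength_m(energy_ev: float) -> float:
    """de Broglie wavelength of a non-relativistic neutron."""
    return H / math.sqrt(2 * M_N * energy_ev * EV)

lam = neutron_wavelength_m(0.025)  # room-temperature thermal neutron, ~25 meV
# lam is about 1.8e-10 m, i.e. ~1.8 angstroms -- vastly larger than a
# nuclear radius of a few femtometres (~1e-15 m)
```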
{ "domain": "physics.stackexchange", "id": 94749, "tags": "nuclear-physics, neutrons, scattering-cross-section" }
'Unable to import' error message in Q# using VS Code
Question: I'm a complete beginner in Q#. Consider the code in the following image: For some reason, there is always an error message when I try to import anything. It also applies to keywords like Length. The error message is: No namespace with the name "Microsoft.Quantum.Canon" exists. However, my code works just fine. The code in the image is taken directly from the Q# user guide. Any suggestions? Answer: If your code builds and runs successfully, this error has to come from IntelliSense. Are you using .NET Core 3.1.300? I think QDK release 0.11.2004.2825 has IntelliSense issues with .NET Core 3.1.300 specifically, and downgrading .NET Core to 3.1.201 fixes this issue - you can try that. Edit: the relevant issue on GitHub.
{ "domain": "quantumcomputing.stackexchange", "id": 1613, "tags": "programming, q#" }
Can you vaccinate against bacterial diseases? Are there existing vaccines for such diseases?
Question: I have the (maybe wrong?) preconception that we only vaccinate against viral diseases, and that there can only exist vaccines against viruses. I think the idea comes from the notion that only for viral diseases do you become immune after contracting the disease, not for bacterial ones. The question Is there a vaccine against the plague (Yersinia pestis)? made me think that I don't really have a justification for that, seeing that the mechanisms for combating viral diseases and bacterial diseases should be the same (is that right?). So, is it possible to vaccinate against bacterial diseases? Do we have existing vaccines for any bacterial disease? Answer: Of course we can - and do - vaccinate against bacterial diseases, as these are some of the most dangerous infectious diseases. Among these are: Tetanus (although here the vaccination is targeted against the toxin, not the bacterium itself) Tuberculosis Diphtheria Pertussis Haemophilus influenzae type B Cholera Typhoid Streptococcus pneumoniae Meningococcus: against the strains ACWY, B A list of the vaccines against bacterial diseases can be found here.
{ "domain": "biology.stackexchange", "id": 10535, "tags": "vaccination, immunity" }
Referral Network & Rewards
Question: I want to be able to distribute a total reward among a network, where the distribution diminishes according to the depth of the network but the total sum of distributed rewards equals the initial total reward. I have attempted to model this with: $S = a(1 - r^n)/(1 - r)$ The Python snippet below implements this series. def distribute_total_reward(total_reward: float, decay_rate: float, max_levels: int = 10) -> list: # Solve for the first term (a) a = total_reward * (1 - decay_rate) / (1 - decay_rate ** max_levels) rewards = [a * (decay_rate ** i) for i in range(max_levels)] return rewards if __name__ == "__main__": # Total reward to be distributed tw = 3500 # 50% Decay rate for the distribution dr = 0.5 # Number of levels in the network ml = 8 reward_distribution = distribute_total_reward(tw, dr, ml) for level, reward in enumerate(reward_distribution, start=1): print(f"Level {level} referrer receives: £${reward:.2f}") print(f"Total Distributed: ${sum(reward_distribution):.2f}") Note: I have arbitrarily set tw, dr, ml. One of the things I have struggled with is building out the solution to support circular referrals. I think this sort of breaks a referral system designed with hierarchy because anyone can refer anyone else. I want to accommodate the flexibility of reverse referrals (i.e., downstream referring upstream and vice versa). I have tried implementing a fixed reward for reverse referrals, separate from the main referral rewards. However, I feel like this is too simplistic as it assumes a fixed number of reverse referrals. Realistically, we need to dynamically identify these reverse referral scenarios and allocate accordingly. Here is the code: def distribute( total_reward: int, decay_rate: float, max_levels: int = 10, reverse_referral_reward: int = 5 ) -> tuple: """ Distributes the total reward among the referral network and handles reverse referrals. 
""" num_reverse_referrals = 2 adjusted_total_reward = total_reward - (num_reverse_referrals * reverse_referral_reward) a = adjusted_total_reward * (1 - decay_rate) / (1 - decay_rate ** max_levels) rewards = [a * (decay_rate ** i) for i in range(max_levels)] reverse_rewards = [reverse_referral_reward for _ in range(num_reverse_referrals)] return rewards, reverse_rewards if __name__ == "__main__": total_reward = 3500 # Total reward to be distributed decay_rate = 0.5 # Decay rate for the distribution max_levels = 8 # Number of levels in the referral network reverse_referral_reward = 5 # Reward for reverse referrals primary_rewards, reverse_rewards = distribute( total_reward, decay_rate, max_levels, reverse_referral_reward ) for level, reward in enumerate(primary_rewards, start=1): print(f"Level {level} primary referrer receives: ${reward:.2f}") for i, reward in enumerate(reverse_rewards, start=1): print(f"Reverse referral {i} receives: ${reward:.2f}") print(f"Total Distributed Reward: ${sum(primary_rewards) + sum(reverse_rewards):.2f}") Answer: clear identifiers distribute_total_reward() looks great. All the identifiers are very helpful. An URL citation of that formula which introduces the variable "a" wouldn't hurt. post-condition The essential aspect of this routine is the computed rewards shall sum to the specified input parameter. We should minimally spell this out with an English sentence in a docstring. The supplied test code nicely computes a sum(), but it is not self-evaluating so a human must eyeball the result and verify it's sensible. Ideally the function would end with an assert of equality, but due to FP ULP rounding errors we expect a small epsilon error, so the test would be for relative_error() less than e.g. 1 ppb. An easy assert would be to construct a unit test to check the post-condition. For some parameters, such as 1024 total reward and .5 decay rate, the FP result will yield exact equality. def main() Nice __main__ guard. 
There is starting to be enough code here that it may be worth burying it within def main():. By the time you introduce reverse_rewards we're definitely ready for that. I confess I'm not sure about the choice of tw rather than a tr name. nit: We switch between £$ and $ currency prefixes. distribute() It's unclear why the currency figures of total_reward and reverse_referral_reward would be of type int. Please understand that a float annotation subsumes int, even though integers can have unlimited magnitude and there's no inheritance relationship between them. We have a docstring, and it is informative. But it's a little wishy washy. If I'm tasked with writing a unit test that verifies correct behavior, verifies the implementation conforms to a spec, then consulting just the docstring, alas, won't suffice. The verbs "distributes" and "handles" are OK but they aren't accompanied by any specifics. It seems like we want a pair of post-conditions here, constraining what happens with each of the two reward categories. DRY adjusted_total_reward = total_reward - (num_reverse_referrals * reverse_referral_reward) ... reverse_rewards = [reverse_referral_reward for _ in range(num_reverse_referrals)] These are kind of saying the same thing. Consider first computing reverse_rewards = [reverse_referral_reward] * num_reverse_referrals and then you can conveniently assign total_reward - sum(reverse_rewards). design of Public API return rewards, reverse_rewards This kind of looks like we're returning parallel vectors a, b as seen in many Fortran APIs, where a[i] describes one aspect of entity i and b[i] another aspect of that same entity. But of course they have different lengths. Consider including 0 values in the reverse_rewards. Consider returning a single vector of namedtuples that contain values for both reward categories. Which brings us to the input parameters. You will need a way to describe who referred, in which direction, and how effectively. 
This could be a general graph, but I am skeptical that you have a business use case for that yet. Better to start out with a restricted graph such as a tree. Maybe pass in a vector of (forward, reverse) referral magnitudes, and of course when all the reverse values are zero we should produce the same result as that first algorithm produces. algorithm The closed-form solution you provide is very nice. Here is another way, a little messier, to think about that initial algorithm. Assign total_reward to the initial entry, with the rest zero. Use a for i in range(max_levels): loop to make several passes over the vector, distributing a small fraction of the remaining reward each time. The last iteration makes the one and only assignment to the final level. We always {add, subtract} the same small value, preserving a constant total. Now consider a graph that includes reverse referrals. Again we make several passes over the graph starting from its root, distributing a fraction as we go. But when following an upward edge we can now distribute upward as well. The current approach of “divide level I reward evenly among that level’s participants” will need to move toward considering each participant individually. It's possible that you wish to view this as "a single (downward) origin node plus N (upward) reverse nodes", and so you want to run N + 1 instances of the loop.
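Putting several of these points together — the DRY reverse_rewards computation, the single vector of namedtuples, and an explicit post-condition assert — one possible sketch (names such as Reward and the 1 ppb tolerance are my own choices, not a spec):

```python
import math
from typing import List, NamedTuple


class Reward(NamedTuple):
    primary: float
    reverse: float


def distribute(total_reward: float, decay_rate: float, max_levels: int = 10,
               reverse_referral_reward: float = 5.0,
               num_reverse_referrals: int = 0) -> List[Reward]:
    """Split total_reward into a geometric series over max_levels, after
    setting aside a fixed reward for each reverse referral.

    Post-condition: the primary and reverse rewards sum to total_reward
    (to within floating-point rounding).
    """
    reverse_rewards = [reverse_referral_reward] * num_reverse_referrals
    adjusted = total_reward - sum(reverse_rewards)
    a = adjusted * (1 - decay_rate) / (1 - decay_rate ** max_levels)
    primary = [a * decay_rate ** i for i in range(max_levels)]
    # Pad each category with zeros so one record describes one payout.
    rewards = [Reward(p, r) for p, r in
               zip(primary + [0.0] * num_reverse_referrals,
                   [0.0] * max_levels + reverse_rewards)]
    total = sum(p + r for p, r in rewards)
    assert math.isclose(total, total_reward, rel_tol=1e-9)
    return rewards


def main() -> None:
    for level, reward in enumerate(distribute(3500, 0.5, 8, 5.0, 2), start=1):
        print(f"Level {level}: primary ${reward.primary:.2f}, "
              f"reverse ${reward.reverse:.2f}")


if __name__ == "__main__":
    main()
```

The assert makes the post-condition self-checking on every call; a unit test can additionally exercise the exact-equality case (e.g. total 1024, decay 0.5).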
{ "domain": "codereview.stackexchange", "id": 45574, "tags": "python" }
Does the bearing stress vary along the bolt?
Question: I'm currently working on problems for my mechanics of materials course, and I just learned how to calculate the shear stress on a bolt holding stacked plates: whether it is single or double shear, I have to account for every shear plane. However, what happens for the bearing stress? I know that it depends on the area of contact, which is diameter * thickness of the plate. If there were three plates (double shear) and I looked at only the portion of the bolt in each plate, would the bearing stress not differ? Do I have to look at the entire bolt at once to calculate the bearing stress? Answer: I think the answer to the question "would the bearing stress differ" is: YES! The bearing stress varies along the bolt across the different sections in the plates, and you cannot look at the entire bolt at once to calculate it. Here is a pictorial explanation of why we need to consider it in different sections, and how to calculate the bearing stresses (if you want to). Let us assume that we have a 3-plate arrangement like the one shown in the picture, where the blue part on the right is called a CLEVIS and on the LHS we have a plate. Looking at the cross-sectional view, let us say that this connection consists of a flat bar A (which I will refer to here as just the plate), a clevis C, and a bolt B that passes through holes in the plate and the clevis. Under the action of the tensile loads P, the plate and clevis will press against the bolt in bearing, and contact stresses, called bearing stresses, will be developed. Bearing stresses, being contact stresses, depend upon the contact between two surfaces, so the question "Do I have to look at the entire bolt at once to calculate the bearing stress?" is answered here, and the answer is NO: you have to look at each contact area, not just for the sake of calculation, but because looking at the entire bolt at once to calculate the bearing stress is CONCEPTUALLY WRONG! Now, assume that the clevis has upper and lower plates of different thickness...
Even if it doesn't look like it in the figure, let's just assume so for a moment. These two plates of the CLEVIS (upper and lower) are under forces, say $P_1$ and $P_3$ respectively, and using the equations of static equilibrium: $P_1 + P_3 = P$. The bearing stresses exerted by the clevis against the bolt appear on the left-hand side of the free-body diagram (see 1 and 3), and the bearing stresses from the plate appear on the right-hand side (see 2). Here we make an assumption: since the actual distribution of the bearing stresses is difficult to determine, we customarily assume that the stresses are uniformly distributed. Based upon this assumption, we can calculate an average bearing stress $\sigma_{b}$ by dividing the total bearing force $F_b$ by the bearing area $A_b$, since stress is force divided by the area it acts on. The projected area $A_b$ on which they act is a rectangle having a height equal to the thickness of the chosen plate (upper or lower) of the clevis and a width equal to the diameter of the bolt. So the expressions for the bearing stresses at the different contacts are: plate 1 (of clevis): $\sigma_1 = \dfrac{P_1}{d \, t_1}$; plate A: $\sigma_2 = \dfrac{P}{d \, t_2}$; plate 3 (of clevis): $\sigma_3 = \dfrac{P_3}{d \, t_3}$. And unless $t_1 = t_3$, the bearing stresses will differ.
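With the uniform-distribution assumption, plugging in some made-up numbers (the load split and plate thicknesses below are purely illustrative, not from the question) shows how the three bearing stresses come out differently:

```python
# Average bearing stress sigma_b = F_b / (d * t) at each contact.
# All numbers are illustrative, not from the question.
d = 0.020                    # bolt diameter, m
P = 10_000.0                 # load on the flat bar, N
P1, P3 = 6_000.0, 4_000.0    # split between clevis plates (P1 + P3 = P)
t1, t2, t3 = 0.008, 0.012, 0.006   # plate thicknesses, m

sigma1 = P1 / (d * t1)   # upper clevis plate
sigma2 = P / (d * t2)    # flat bar A
sigma3 = P3 / (d * t3)   # lower clevis plate

for name, s in [("clevis upper", sigma1), ("bar A", sigma2),
                ("clevis lower", sigma3)]:
    print(f"{name}: {s / 1e6:.1f} MPa")
```

Three different contact areas and three different forces give three different bearing stresses, as argued above.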
{ "domain": "engineering.stackexchange", "id": 2750, "tags": "stresses" }
Is there a system of units that replaces time with light-distance?
Question: I would like to start this by saying that my motivation for asking is that I find relativity very difficult to deal with using the SI system. It strikes me that the problem with this system is the fact that it was based on an earlier system that assumed time to move at a constant rate under all circumstances. Since we now know this not to be true, is there, or should there be, a better system out there? One that is based on the constant speed of light, not a constant rate of time. For example, if MKS became MKL where 1L = 1 Light-Gigametre (the time it takes light to travel 1 Gigametre - not in seconds) then in the world of slow-moving objects we would have 1s approximately equal to 1L/3 and all would be fine. By my reckoning (and probable naivety) it should also hold out in the world of fast-moving objects where time starts misbehaving but L remains constant. Does such a system already exist? UPDATE - What I am looking for is definitely not some scaled version of our existing base units. I'm looking for a system of units that ditches time altogether and starts from scratch with light-distance in its place. A fundamental change that affects all of the units that have a time component. I was prompted to look at Planck Units and in particular Planck Time. At first sight this appeared to be what I was looking for, i.e. it is based on light-distance. Unfortunately it looks like this has just been equated to a fixed number of seconds. The problem is that as soon as you make it a scaled version of time you have lost its true meaning and it is no longer true light-distance. It can't be, because light travels at a constant rate and time doesn't. 
Update 2 From a practical point of view, as a stationary observer on Earth, during the same period of our clock-based time I could be looking at a light bouncing off a distant stationary mirror and measure one distance of light (2d), or I could be watching the same scenario on a fast-moving passing vehicle and measure a different distance of light (a V shape of height d). So in effect, the light-distance measured by me (L) would reflect the amount of Earth time being experienced at the source of the light. So L (unlike t) would be dependent on what was being observed, as it should be. Or should it be that in a system where light-distance is replacing time we should just say that in both cases the light moved by 2d? Answer: Based on the discussion I had with @Alan Gee in the comments on the question and later in a discussion, I will give my answer to that question. A unit system defines units of measurement as definite magnitudes of quantities. A unit system does not make any statements about the constancy of time or space or anything really. It defines a reference against which one can measure quantities, nothing more, nothing less. It has nothing to do with physics and makes no statements about it. The laws of special relativity (SRT) have nothing to do with any units. We can calculate with SI units, with natural units, with Planck units: with every unit system we please, and we will always get the same result. The constant factors in our formulas may change and get easier or harder, but the physics those formulas encode does not depend on a unit system. There is not, and never will be, a unit system that can solve the "problem" that SRT has some unintuitive and mathematically tricky points. Maybe one day there will be a more elegant theory of relativity, but this theory will be based on a different mathematical description of physics and not on the units we calculate in. 
To the second part of your question: The SI system fixes one unit with the speed of light: the metre is defined as the distance light travels in 1/299792458 seconds. So the metre is fixed by the speed of light. You cannot use the speed of light to fix another unit. Anyway, it does not make a difference for SRT how you fix your units. Even the totally arbitrary old SI system (with the prototype metre) had no problem whatsoever with SRT, and SRT had no problem with SI.
{ "domain": "physics.stackexchange", "id": 33226, "tags": "speed-of-light, relativity, units, si-units" }
Entropy change in a boiling egg
Question: Yesterday in a rather intuitive Q/A session, by a visiting researcher in my town, this question got my attention. What he said was that if you boil an egg, the entropy inside the egg would increase. Now common convention and logic dictates that the entropy will decrease (considering only the innards of the egg in the system), but this guy would not give up. He insisted on the contrary, and has given us time to think it out. I don't know which SE site to refer to, for this question. Any migration of this question, if necessary would be helpful. Thanks! Answer: I’ve come across several claims that a (hard) boiled egg has greater entropy than an unboiled egg. None of them cite any peer-reviewed research to back up their claim. While it is possible that entropy increases when you boil an egg, it is also possible that it does not. It is even possible that it stays the same. I did my own literature search to see if I could find any publication that answered the question. What I found left the question very much up in the air. So I contacted Greg Weiss, a scientist who has done research on a related topic: ‘unboiling’ an egg: https://www.livescience.com/49610-scientists-unboil-egg.html . Dr. Weiss was kind enough to discuss the topic with me via email. Here’s the tl;dr summary: Whether a hard-boiled egg has greater entropy all boils down to the question of how structured are the assemblies found in aggregated ovalbumin. Furthermore, we can't just consider the assemblies found in the ovalbumin aggregate since we're talking about assemblies composed of all the types of protein in the overall albumen. (BTW, it's important to note that heated egg white doesn't just form any type of aggregate, it forms a gel, which presumably implies additional structure.) For those who are interested in the details, here is the full email thread: https://docs.google.com/document/d/1Yt-usFLd3sm7MYpVTfwy3Yza6t3Ub6Q3lUZo7on5_wk/edit?usp=sharing
{ "domain": "physics.stackexchange", "id": 60465, "tags": "thermodynamics, everyday-life, entropy" }
Calculating average between all times to receive a response
Question: I have one interface for device but the two classes I have using the interface also include a list of a class. The classes the list are using are of ICommand. This gives me a problem because I run into multiple of essentially the same methods as seen below with the average methods. I'm fairly new to C# so I'm wondering if there is some way I can use an abstract class or something to eliminate all this duplicate code. public interface IDevice { double AvgCmdsMin { get; set; } double AvgTime { get; set; } string DeviceName { get; set; } double Duration { get; set; } int FailedCmds { get; set; } double MaxTime { get; set; } double MinTime { get; set; } } public class MuxDevice : IDevice { public string DeviceName { get; set; } /*Name of Device*/ public bool Active = false; /*True if device is currently waiting for a command response, false if not*/ public List<MuxCmd> UsedCommands = new List<MuxCmd>(); /*List of all commands used by device*/ public double MaxTime { get; set; } /*the longest time it took to receive a command response*/ public double MinTime { get; set; } /*the shortest time it took to receive a command response*/ public double AvgTime { get; set; } /*The average between all times to receive to a response*/ public double Duration { get; set; } /*Total duration of all commands on device*/ public double AvgCmdsMin { get; set; } /*How many commands were sent in a minute*/ public int FailedCmds { get; set; } /*Number of commands that received an error*/ } public class Device : IDevice { public string DeviceName { get; set; } /*Name of Device*/ public bool Active = false; /*True if device is currently waiting for a command response, false if not*/ public List<Command> UsedCommands = new List<Command>(); /*List of all commands used by device*/ public double MaxTime { get; set; } /*the longest time it took to receive a command response*/ public double MinTime { get; set; } /*the shortest time it took to receive a command response*/ public double 
AvgTime { get; set; } /*The average between all times to receive to a response*/ public double Duration { get; set; } /*Total duration of all commands on device*/ public double AvgCmdsMin { get; set; } /*How many commands were sent in a minute*/ public int FailedCmds { get; set; } /*Number of commands that received an error*/ } public interface ICommand { string CmdType { get; set; } DateTime EndTime { get; set; } TimeSpan Length { get; set; } int RespBytes { get; set; } int RespCmd { get; set; } int SendBytes { get; set; } int SendCmd { get; set; } DateTime SendTime { get; set; } } private void AverageTime(List<Device> deviceStats) { foreach (Device t in deviceStats) { long totalTicks = 0; /*total number of ticks(10,000 ticks in a millisecond)*/ foreach (Command t1 in t.UsedCommands) { totalTicks += t1.Length.Ticks; /*adds up command lengths as ticks*/ } TimeSpan totalDuration = new TimeSpan(totalTicks); t.Duration = totalDuration.TotalSeconds; if ((t.UsedCommands.Count - t.FailedCmds) == 0) /*if 0 commands*/ { t.AvgTime = 0; /*Add avgTimeSpan to Device in seconds*/ } else { var avgticks = totalTicks / (t.UsedCommands.Count - t.FailedCmds); /*Average ticks*/ var avgTimeSpan = new TimeSpan(avgticks); /*Convert totalTicks to Timespan*/ t.AvgTime = avgTimeSpan.TotalSeconds; /*Add avgTimeSpan to Device in seconds*/ } } } private void MuxAverageTime(List<MuxDevice> deviceStats) { foreach (MuxDevice t in deviceStats) { long totalTicks = 0; /*total number of ticks(10,000 ticks in a millisecond)*/ foreach (MuxCmd t1 in t.UsedCommands) { totalTicks += t1.Length.Ticks; /*adds up command lengths as ticks*/ } TimeSpan totalDuration = new TimeSpan(totalTicks); t.Duration = totalDuration.TotalSeconds; if ((t.UsedCommands.Count - t.FailedCmds) == 0) /*if 0 commands*/ { t.AvgTime = 0; /*Add avgTimeSpan to Device in seconds*/ } else { var avgticks = totalTicks / (t.UsedCommands.Count - t.FailedCmds); /*Average ticks*/ var avgTimeSpan = new TimeSpan(avgticks); /*Convert totalTicks 
to Timespan*/ t.AvgTime = avgTimeSpan.TotalSeconds; /*Add avgTimeSpan to Device in seconds*/ } } } Answer: It's actually not uncommon that you have both the interface and a class that implements it the way you did it. In order to make the interface reusable for both case you need to make use of generics and add a new generic parameter to IDevice that the list will be using. It would be a good idea to constrain it too with where on ICommand public interface IDevice<TCommand> where TCommand : ICommand { List<TCommand> UsedCommands { get; set; } // other members } Then you also need to modify the AverageTime to pass that generic parameter to the argument. The same constraint applies to this method now. private void AverageTime<TCommand>(List<IDevice<TCommand>> devices) where TCommand : ICommand { // ... } There is one more issue that you should address. The AvarageTime method modifies the parameter by altering the AvgTime. t.AvgTime = 0; If you do this you should rename the method to UpdateAverageTime or return a new list that contains objects with new values without modifying the input.
{ "domain": "codereview.stackexchange", "id": 28071, "tags": "c#, beginner, object-oriented" }
Regioselectivity in radical chlorination of a carboxylic acid
Question: From an online textbook, the first step in the preparation of an amino acid: Can we be sure that the chlorine will connect exactly at that spot, in the alpha-position? Won't we get an acyl chloride instead? Or a molecule where Cl is connected in the beta-position, or further down the line? Answer: I am going to say: no, this does not work in practice. In fact it does not even work in theory. In simple theory you would say that the α C-H bond is the weakest because the resulting radical is stabilised by conjugation into the C=O. But under radical conditions, we don't observe α-chlorination in the lab. Is there a better way to explain this? The chlorine radical The problem is twofold. Firstly, the chlorine radical, $\ce{Cl.}$, is much less stable and therefore extremely unselective in radical chlorinations. In physical organic chemistry, we quote something called the Hammond postulate, which says that in this case, the transition state for hydrogen abstraction resembles the starting material more than the product. Therefore, the stability of the product does not significantly affect the activation energy, and the rate, of the hydrogen abstraction. On top of that, it is what we would consider an "electrophilic radical"; it has a very low-energy SOMO, and therefore has a tendency to abstract a more electron-rich hydrogen atom. Essentially, it means that the radical behaves more like an electrophile than a nucleophile. In the transition state for hydrogen abstraction, the chlorine wants to have a significant build-up of negative charge on itself (remember, the product of hydrogen abstraction is going to be $\ce{HCl}$). It therefore goes for the hydrogen that is most willing to give it this negative charge, and that is the most electron-rich hydrogen atom. An example Consider the radical chlorination of propionic acid, i.e. $\ce{R} = \ce{CH3}$ in your diagram above. 
The α-hydrogen, compared to the β-hydrogen, is comparatively electron-poor because of the electron withdrawal by the $\ce{-COOH}$ group. So, it turns out that the rate of abstraction of the α-hydrogen is much slower: $k_\alpha/k_\beta = 0.03$ (Moody & Whitham, Reactive Intermediates, p 12). Here's an article which discusses the selectivity observed in the chlorination of propionic acid under different conditions: Tetrahedron, 1970, 26, 5929. As written in Carey & Sundberg 5th ed., Part A, p 1022: Radical chlorination shows a substantial polar effect. Positions substituted by EWG [electron-withdrawing groups] are relatively unreactive toward chlorination, even though the substituents are capable of stabilizing the radical intermediate. [...] Because the chlorine atom is highly reactive, the reaction is expected to have a very early TS [transition state] and the electrostatic effect predominates over the stabilizing effect of the substituent on the intermediate. The electrostatic effect is the dominant factor in the kinetic selectivity of the reaction and the relative stability of the radical intermediate has relatively little influence. As an example, they use the chlorination of butyronitrile, which is the same as propionic acid except that the EWG is now $\ce{-CN}$ instead of $\ce{-COOH}$: As @Loong mentions in the comments, in the specific case of propionic acid, there is a further problem: there are three β-hydrogens for it to abstract, and only two α-hydrogens. Therefore, even in the absence of everything else I have described, we would expect there to be 1.5 times as much chlorination at the β-position. This certainly does not help at all with the selectivity! The importance of statistical factors is most clearly seen in the radical chlorination of simple alkanes, where all the hydrogens are pretty much similar. I shan't go into detail here, but you can look at this answer (by Loong again), and there are examples on Clayden 2nd ed., pp 987-8. 
The solution If you wanted to selectively chlorinate propionic acid at the α-position, you would use a polar mechanism that goes via the enol. The paper above has some examples of conditions that work. All of them involve suppressing the radical reaction by adding radical scavengers.
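Putting the numbers from the propionic acid example together — interpreting the quoted $k_\alpha/k_\beta = 0.03$ as a per-hydrogen ratio and folding in the 2:3 statistical factor — a quick sanity check of the expected product split:

```python
# Expected alpha : beta chlorination ratio for propionic acid, CH3CH2COOH.
# Per-hydrogen rate ratio as quoted above; 2 alpha-H vs 3 beta-H.
k_ratio = 0.03          # k_alpha / k_beta, per hydrogen
n_alpha, n_beta = 2, 3  # number of equivalent hydrogens

alpha = n_alpha * k_ratio
beta = n_beta * 1.0
frac_alpha = alpha / (alpha + beta)
print(f"alpha-chloro product: {100 * frac_alpha:.1f}%")
```

Roughly 2% α-product — which is why a radical route is hopeless here and the polar, enol-based route below is the one to use.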
{ "domain": "chemistry.stackexchange", "id": 5469, "tags": "organic-chemistry" }
Will the water added to an ice piece freeze?
Question: Water at room temperature is poured into a hole made in a block of melting ice (kept at room temperature). I was wondering if the water will ever freeze? Thank you. Answer: Ice coming from the freezer will typically be around -19 deg. Celsius, and can only be stored for a limited time at room temperature. As soon as the ice is heated to 0 deg. or above, the ice will melt into liquid water. Liquid water coming into contact with ice will be cooled, and if cooled below 0 deg. it will also freeze. The answer to your question is that it will depend on how much ice, how much water, and the starting temperatures of these (and much more if you really go into fine detail, like the dynamics of energy transport). Everything is controlled by energy; to do the real calculations, you need constants like the heat capacities of water and ice, and the melting energy (enthalpy of fusion).
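For the specific case in the question — ice already at its melting point, so the poured-in water cools to 0 °C but cannot freeze; instead its heat melts some ice — the back-of-the-envelope energy balance looks like this (standard constants; the masses are illustrative):

```python
# How much melting ice does warm water melt before reaching 0 degC?
c_water = 4186.0   # J/(kg K), specific heat of liquid water
L_fusion = 334e3   # J/kg, enthalpy of fusion of ice

m_water = 0.2      # kg of water poured in (illustrative)
T_water = 20.0     # degC; the ice sits at 0 degC (melting)

heat_released = m_water * c_water * T_water   # cooling water 20 -> 0 degC
m_melted = heat_released / L_fusion           # ice that this heat melts

print(f"heat released: {heat_released:.0f} J")
print(f"ice melted: {1000 * m_melted:.0f} g")
```

So 200 g of room-temperature water melts on the order of 50 g of ice on its way down to 0 °C, and never freezes, since there is no reservoir below 0 °C to pull heat out of it.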
{ "domain": "physics.stackexchange", "id": 40359, "tags": "thermodynamics, heat, water, freezing" }
Fix base64 data URI scripts function
Question: So I noticed Chrome has quirky behaviour when it encounters script tags whose src is a base64 value. I decided to write a quick jQuery method that is supposed to work around it: jQuery.extend({ /** * Takes a script decodes the base64 src, puts it into the body of the script tag, * then puts it in whatever parent specified. * * @requires https://plugins.jquery.com/base64/ * * @param {Object} script The script tag that should be manipulated * @param {Object} parent The parent element to append the final script tag to. * * @return {Object} The script tag in question */ importBase64Script : function ( script, parent ) { // Check for base64 library if ( typeof $.base64 === 'undefined' ) throw 'No $.base64 found!'; // Sanitize our script var // Normalize our script object script = ( script instanceof jQuery ? script : $(script) ); // Check if it is a script tag if ( script[0].tagName !== "SCRIPT" ) throw "Not a script tag"; // Set default parent value parent = parent || $('head'); // Normalize our parent var parent = ( parent instanceof jQuery ? parent : $(parent) ); // We're gonna extract the base64 value var re = /data:[a-z]+\/[a-z]+;base64,([0-9a-zA-Z\=\+]+)/, base64Content = script.prop('src').match(re)[1], scriptContent = $.base64.decode( base64Content ); // Drop the decoded javascript into the contents of the script tag script.html( scriptContent ); // Clear src value script.prop('src',''); // Append it to the parent parent.append(script); return script; } }); I tested a few of the conditions on JsPerf to see which is better performance wise. Granted, I didn't do a full sweep on every browser. Any suggestions that anybody could make? Answer: Awesome, Well commented Nothing bad on JsHint.com Quiet and robust handling of parameters It does exactly what it says on the tin Unrelated to CR, but this is github worthy.
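As an aside, in environments with the built-in atob() the external $.base64 plugin isn't needed at all — here is a dependency-free sketch of the same extraction step (the regex mirrors the one in the question, with "/" added to the base64 character class, since standard base64 uses it):

```javascript
// Extract and decode the base64 payload of a data: URI.
// Falls back to Buffer for Node versions without a global atob.
function decodeDataUri(src) {
  const re = /data:[a-z]+\/[a-z]+;base64,([0-9a-zA-Z=+\/]+)/;
  const match = src.match(re);
  if (!match) throw new Error("Not a base64 data URI");
  const b64 = match[1];
  return typeof atob === "function"
    ? atob(b64)
    : Buffer.from(b64, "base64").toString("binary");
}

// btoa("alert(1);") === "YWxlcnQoMSk7"
console.log(decodeDataUri("data:text/javascript;base64,YWxlcnQoMSk7"));
```

The decoded string can then be dropped into the script tag's body exactly as in the jQuery method above.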
{ "domain": "codereview.stackexchange", "id": 31046, "tags": "javascript, jquery, dom" }
Temperature vs Material Properties - ABS
Question: I'm working on a design that uses fairly large plastic components (ABS, ~400 mm longest dimension). I'm looking at adding a steel component that encapsulates this plastic part, and the steel part needs to be tight against the ABS part. Because of the thermal properties of ABS, the plastic part will change size fairly drastically relative to the steel part. Since the steel part needs to tightly encapsulate the ABS part, I've been considering cooling the ABS part to around -10 to -20 Celsius before attaching the steel, which would be at around +20 Celsius. My question is, with ABS cooled to that temperature, would riveting the steel part to it still be an option, or should I expect fracturing from the rivet? I've tried looking into the hardness, brittleness, and Young's modulus of ABS as a function of temperature but I can't find any good data. The rivet would be 7/32 sized, through a 3 mm thick piece of ABS. Any help would be greatly appreciated. Answer: Cooling ABS to -20 C should be within its operating conditions. But why are you concerned that your encapsulation does not sit tight? What is the temperature range in which this part is used? If it's above room temperature, that ABS part will expand more and most likely "press-fit" into the encapsulation. Do you have a drawing of that ABS part and encapsulation?
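To put numbers on "fairly drastically": with typical handbook expansion coefficients (the values below are assumed generic figures — check the actual grade's datasheet), a 400 mm ABS part contracts several times more than steel over the same 40 K drop:

```python
# Differential thermal contraction of ABS vs steel over a 400 mm length.
# CLTE values are typical handbook figures, not from a specific datasheet.
alpha_abs = 80e-6     # 1/K, typical for ABS
alpha_steel = 12e-6   # 1/K, typical for carbon steel
L = 400.0             # mm
dT = 40.0             # K, e.g. cooling from +20 degC to -20 degC

shrink_abs = alpha_abs * L * dT
shrink_steel = alpha_steel * L * dT
print(f"ABS contracts:   {shrink_abs:.2f} mm")
print(f"steel contracts: {shrink_steel:.2f} mm")
print(f"differential:    {shrink_abs - shrink_steel:.2f} mm")
```

A differential on the order of a millimetre over 400 mm is what drives the shrink-fit idea in the question, and also what any rivet through the ABS would have to survive.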
{ "domain": "engineering.stackexchange", "id": 1787, "tags": "mechanical-failure, thermal-expansion, abs" }
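As a rough sanity check of the differential shrinkage being discussed, here is a back-of-envelope linear-expansion estimate. The expansion coefficients are typical handbook values assumed for illustration, not numbers from the thread:

```python
# Rough differential thermal expansion estimate for the fit discussed above.
# Coefficients are typical handbook values, assumed for illustration:
ALPHA_ABS = 90e-6    # 1/K, linear CTE of ABS (assumed)
ALPHA_STEEL = 12e-6  # 1/K, linear CTE of mild steel (assumed)

def length_change(length_mm, alpha, dT):
    """Linear expansion: dL = alpha * L * dT."""
    return alpha * length_mm * dT

L = 400.0   # mm, longest dimension from the question
dT = 40.0   # K, cooling from +20 C to -20 C

shrink_abs = length_change(L, ALPHA_ABS, dT)
shrink_steel = length_change(L, ALPHA_STEEL, dT)
print(f"ABS shrinks {shrink_abs:.2f} mm, steel {shrink_steel:.2f} mm")
print(f"interference gained: {shrink_abs - shrink_steel:.2f} mm")
```

At these assumed coefficients the ABS part shrinks roughly a millimetre more than the steel would over the 40 K swing, which is the interference the cooling trick buys.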
Identification of kinematics of industrial robots
Question: Hi, I am looking for a way to identify (or calibrate) the kinematic information (joint angle and robot frame offsets) of my 6-DOF industrial robot. I am aware of the packages robot_calibration and pr2_calibration. There still does not seem to be a package which simply optimizes those parameters (taking the nominal kinematic information from the manufacturer as a set of start parameters) and writes them into a new, updated URDF file. I thought about a package which takes the end-effector frame (calculated via the direct kinematics) and the measured position of the end-effector as input and, after acquiring a couple of poses, optimizes the new parameters with e.g. the Levenberg-Marquardt optimizer. Is there any package like that? Otherwise, I am willing to work on that and would like to contribute. Originally posted by nmelchert on ROS Answers with karma: 143 on 2018-07-30 Post score: 4 Answer: Since I have not gotten any responses yet, I will simply write a little package for kinematic identification purposes and will post the link to the repository here as soon as I get done. Originally posted by nmelchert with karma: 143 on 2018-08-01 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 31411, "tags": "calibration, ros-kinetic, robot-calibration, arm-kinematics, ros-industrial" }
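The identification scheme proposed in the question — compare forward-kinematics poses against externally measured end-effector positions and refine the parameters with Levenberg-Marquardt — can be sketched on a toy problem. This is an illustrative stand-in, not a URDF pipeline: a planar 2R arm with unknown link lengths replaces the 6-DOF robot, and the LM implementation is a bare-bones damped Gauss-Newton:

```python
import numpy as np

def fk(params, q):
    """Planar 2R forward kinematics: joint angles -> end-effector (x, y)."""
    l1, l2 = params
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

def levenberg_marquardt(residuals, x0, iters=60, lam=1e-3):
    """Bare-bones LM: damped Gauss-Newton with a numerical Jacobian."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        r = residuals(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):                  # forward-difference Jacobian
            dx = np.zeros_like(x)
            dx[j] = 1e-7
            J[:, j] = (residuals(x + dx) - r) / 1e-7
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residuals(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5         # accept step, trust more
        else:
            lam *= 2.0                           # reject step, damp more
    return x

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=(30, 2))   # recorded joint angles
true_params = np.array([1.02, 0.97])           # actual link lengths
measured = fk(true_params, q)                  # external pose measurements

def residuals(params):
    return (fk(params, q) - measured).ravel()

nominal = np.array([1.0, 1.0])                 # manufacturer values
identified = levenberg_marquardt(residuals, nominal)
print(identified)  # ~ [1.02, 0.97]
```

A real package would replace `fk` with the full kinematic chain parsed from the URDF and write the identified offsets back out.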
creating own javascript join and push methods
Question: I'm a newbie trying to write my own push() and join(). This isn't very serious, just an exercise. This is what I got so far function Errey(...values){ this.name=values } Errey.prototype={ constructor:Errey, //join method myJoin(union = ' '){ let result="" const length = this.name.length for(let i=0; i<length; i++) { i<length-1 ? result += this.name[i] + union: result+=this.name[i] } return result }, //push method myPush(value){ this.name = [...this.name, value] return this.name.length } } //run a few examples const myArray = new Errey("3","blue","1","brown") //we'll mutate myArray anyways using const or let let joined = myArray.myJoin() let a = myArray.myPush("hey") console.log("joined new array: ",joined, "\n\npush-mutated array: ", myArray.name, "\n\npush return value: ", a) I'm aware an object is returned instead of an array. I guess you might help me out there. Should I use factory functions here? I don't know how would I create a prototype for such a thing... Any improvements on the main code you'd suggest? Answer: first of all you can use the "new" class syntax instead of changing the prototype directly. 
This is how I would do it: class Errey { #currentArray; constructor(...elements) { this.#currentArray = elements; } get errey() { return this.#currentArray; } push(...elements) { const startLength = this.#currentArray.length; elements.forEach((curEl, index) => this.#currentArray[index + startLength] = curEl); return this.#currentArray.length; } join(union = " ") { return this.#currentArray.reduce((prev, cur) => prev + union + cur); } } //run a few examples const myArray = new Errey("3","blue","1","brown") console.log(myArray.join(", ")); console.log(myArray.push("Subscribe", "To", "Him")); console.log(myArray.errey); push: the standard array.prototype.push allows for multiple elements to be added at once, so I allow multiple elements and loop over them using forEach, you could also do [...this.#currentArray, ...elements] (or use Array.prototype.concat()). join: here I use reduce to create a join operation. Reduce takes the first 2 elements and applies a function on them (in this case a + union + b), makes them the new first element and repeats that until there is only one element left, our joined erray. I also made the internal Array private using the prefixed # and created a getter (get errey()) instead, this way the internal array can only be changed using the push method. Using the factory pattern (which I personally prefer): const Errey = (...elements) => { const push = (...newElements) => { const startLength = elements.length; newElements.forEach((curEl, index) => elements[index + startLength] = curEl); return elements.length; }; const join = (union = " ") => elements.reduce((prev, cur) => prev + union + cur); const get = () => elements; return { push, join, get, }; }; const myArray = Errey("3","blue","1","brown") console.log(myArray.join(", ")); console.log(myArray.push("Subscribe", "To", "Him")); console.log(myArray.get());
{ "domain": "codereview.stackexchange", "id": 39000, "tags": "javascript, object-oriented, reinventing-the-wheel" }
Can genetic algorithms be used to learn to play multiple games of the same type?
Question: Is it possible for a genetic algorithm + neural network that is used to learn to play one game, such as a platform game, to be applied to a different game of the same genre? So for example, could an AI that learns to play Mario also learn to play another similar platform game? Also, could anyone point me in the direction of material I should familiarise myself with in order to complete my project? Answer: Genetic algorithms and Neural Networks both are "general" methods, in the sense that they are not "domain-specific"; they do not rely specifically on any domain knowledge of the game of Mario. So yes, if they can be used to successfully learn how to play Mario, it is likely that they can also be applied with similar success to other platformers (or even completely different games). Of course, some games may be more complex than others. Learning Tic Tac Toe will likely be easier than Mario, and learning Mario will likely be easier than StarCraft. But in principle the techniques should be similarly applicable. If you only want to learn in one environment (e.g., Mario), and then immediately play a different game without separately training again, that's much more complicated. For research in that area you'll want to look for Transfer Learning and/or Multi-Task Learning. There has definitely been research there, with the latest developments that I'm aware of having been published yesterday (this is Deep Reinforcement Learning though, no GAs I think). The most "famous" recent work on training Neural Networks to play games using Genetic Algorithms that I'm aware of is this work by Uber (blog post links to multiple papers). I'm not 100% sure if that really is the state of the art anymore, if it's the best work, etc... I didn't follow all the work on GAs in sufficient detail to tell for sure. It'll be relevant at least though.
I know there's also been quite a lot of work on AI in general for Mario / other platformers (for instance in venues such as the IEEE Conference on Computational Intelligence and Games, and the TCIAIG journal).
{ "domain": "ai.stackexchange", "id": 714, "tags": "neural-networks, game-ai, python, genetic-algorithms" }
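The "general method" point in the answer above can be made concrete: the GA loop below evolves the weight vector of a tiny policy network, and swapping games would only mean swapping the fitness function. The fitness used here is a toy stand-in (a real one would run the game with the candidate policy and return its score):

```python
# Minimal sketch of a game-agnostic neuroevolution loop. Everything
# game-specific lives in fitness(); the GA itself never changes.
import numpy as np

rng = np.random.default_rng(1)
N_WEIGHTS, POP, GENS, SIGMA = 8, 40, 60, 0.1

def policy(weights, observation):
    """Tiny 'network': maps an observation vector to a single action value."""
    return np.tanh(weights @ observation)

def fitness(weights):
    # Toy objective: make the policy output 0.5 on a fixed observation.
    # Replace this with "play the game, return the score" for a real game.
    obs = np.linspace(-1.0, 1.0, N_WEIGHTS)
    return -(policy(weights, obs) - 0.5) ** 2

pop = rng.normal(0.0, 1.0, size=(POP, N_WEIGHTS))
for _ in range(GENS):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-POP // 4:]]            # selection
    parents = elite[rng.integers(0, len(elite), POP)]      # reproduction
    pop = parents + rng.normal(0.0, SIGMA, parents.shape)  # mutation

best = max(pop, key=fitness)
print(fitness(best))  # close to 0
```

Transfer between games then amounts to seeding the initial population with weights evolved on the first game instead of random noise.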
In and out states of scattering in Asymptotically flat spacetimes
Question: I am reading a paper called "New symmetries of massless QED", written by Temple He, Prahar Mitra, Achilleas P. Porfyriadis and Andrew Strominger (https://arxiv.org/abs/1407.3789). At the very beginning the authors state that "This paper considers theories in which there are no stable massive charged particles, and the quantum state begins and ends in the vacuum at past and future timelike infinity. Of course, in real-world QED the electron is a stable massive charged particle, so it is highly desirable to generalize our analysis to this case. However, stable massive charges create technical complications because the charge current has no flux through future null infinity." From this I am led to understand that the "in" and "out" states are defined in the $|t|\rightarrow\infty$ limit in scattering theory, regardless of whether the theory is massless (like the QED theory the authors consider) or massive (like the real-world theory of QED). This, hence, is the reason the authors want to generalize their arguments to the massive case. However, in the lecture notes written by Strominger called "Lectures on the infrared structure of gravity and gauge theory" (https://arxiv.org/abs/1703.05448), and specifically in Chapter 2, the author states that in order to study scattering problems, "whether in electrodynamics or gravity, classical or quantum, one starts by specifying initial data at the past null infinity". So, which of the above am I supposed to believe and why? Does scattering theory involve defining in and out states at $|t|\rightarrow\infty$ both for massless and massive theories, or does the scattering regarding massless theories imply that we need to consider that the initial states are defined at past null infinity and the final ones at future null infinity? Any help is appreciated. Answer: As I mentioned in a comment on your other question, null infinity is a Cauchy slice only in purely massless theories.
If there are also massive excitations, then you must include $i^+$ as well. Alternatively, you can also use as a Cauchy slice a constant $t$ slice with $t \to \pm\infty$. The latter is what is typically done in standard QFT. The crux of the paper you mentioned and of Strominger's lectures is to carefully analyze the effects of the boundary of the Cauchy slice. In traditional QFT, one assumes that all fields vanish at spatial infinity, but this assumption is wrong. Boundary contributions are responsible (as Strominger explains in his notes) for all the infrared effects that we see in the real world (such as infrared divergences and soft theorems). These boundary issues are best studied by looking at the Cauchy slices ${\cal I}^\pm \cup i^\pm$ instead of constant $t$ slices.
{ "domain": "physics.stackexchange", "id": 88235, "tags": "general-relativity, quantum-electrodynamics, scattering, qft-in-curved-spacetime, asymptotics" }
Why is the ground state energy of a 2DEG higher compared to the 3DEG?
Question: I am reading something about a 2DEG (2-dimensional electron gas model) and cannot understand it. My book says the ground-state energy of the 2DEG is higher compared to a 3DEG because the confinement to 2D increases the energy (the book calls this confinement energy, but I cannot find stuff about it online - only quark stuff pops up). Both systems (3D and 2D) seem to have similar energies (or energy eigenvalues), which are of the usual form: $E \sim \frac{\hbar^2\pi^2n^2}{2mL^2}$ but it seems that for some reason in the 3DEG n can be zero, while in the 2DEG it cannot and starts at 1. The system has a height of $L$ in the $z$ direction, and in that direction the confinement takes place. The energy difference of the two ground states seems to be: $\frac{\hbar^2\pi^2}{2mL^2}$. The model seems to suggest that in the $xy$-plane the particles act like free particles, but an infinite plane with $z$ confinement seems pretty weird to me. Perhaps the $z$-length is way smaller than the $x$- and $y$-length in typical 2DEG models and thus we approximate it? But even if that were the case - how would one compare the energy of a free particle (which does not have a definite energy as far as I understand it) with the energy of a particle in an infinite well? Edit: The potential has the form: $$V(z) = 0 \quad \text{for } -\frac{L}{2} \leq z \leq \frac{L}{2}$$ $$V(z) = \infty \quad \text{else}$$ Edit: It would make sense if a free electron can have zero $k$-vector and thus zero energy, but wouldn't that mean it does not exist in the first place? It would also work if $L$ is really, really small, such that the energy term due to the infinite well is much, much bigger than the free kinetic energy term. Answer: It is easiest if you consider a rectangular box in 3D, with length $L_x$ in the $x-y$ plane, and length $L_z \ll L_x$ in the $z$ direction. Now, $L_x$ is very large, and you are only interested in the physics deep within the bulk, far from the boundaries.
Therefore it is of little importance exactly which boundary conditions you choose. It is often easier to choose periodic boundary conditions in the $x$ and $y$ directions. (This gives identical predictions to box boundary conditions for almost all quantities in the limit $L_x\to \infty$.) The energy eigenstates for a single electron are then of the form $$ \psi(\mathbf{r}) \sim e^{i2\pi(n_x x + n_y y)/L_x}\sin(\pi n_z z/L_z), $$ up to an uninteresting normalisation factor. The quantum numbers defining the momentum are $n_x,n_y = 0,1,2,3,\ldots$ and $n_z = 1,2,3,\ldots$. Notice that it is possible to have $n_x = 0$ or $n_y = 0$. There is nothing odd about this; it simply means that the electron has zero momentum in these directions, so that its wave function is constant. However, $n_z = 0$ is not a meaningful solution, since this means that the entire wave function vanishes. The energies are given by $$ E(n_x,n_y,n_z) = \frac{1}{2m}\left[ \left(\frac{2\pi \hbar n_x}{L_x}\right)^2 + \left(\frac{2\pi \hbar n_y}{L_x}\right)^2 + \left(\frac{\pi \hbar n_z}{L_z}\right)^2 \right].$$ The energy of the single-particle ground state is just $E(0,0,1)$. Obviously this is larger for small $L_z$. Fundamentally this is a consequence of the Heisenberg uncertainty principle. Since the position of the electrons is sharply localised in the $z$ direction, the momentum in the $z$ direction has very large fluctuations. These fluctuations give rise to a large kinetic energy, which is the "confinement energy" referred to in your book. In a (real) 2D system, the electrons are so tightly confined in the $z$ direction that every single one has $n_z = 1$. This requires that the temperature be very low, so that there is not enough thermal energy to excite electrons to higher motional states in the $z$ direction.
Explicitly, the thermal energy must be smaller than the gap to the first excited state in the $z$ direction, which is $$E(n_x,n_y, 2) - E(n_x,n_y,1) = \frac{3\pi^2\hbar^2}{2mL_z^2}.$$ So as long as $k_B T \ll \hbar^2/(mL_z^2)$ then the motion in the $z$ direction is frozen out, so the system is effectively 2D.
{ "domain": "physics.stackexchange", "id": 21995, "tags": "quantum-mechanics, solid-state-physics, schroedinger-equation, potential, spacetime-dimensions" }
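Plugging numbers into the answer's formulas shows the scales involved; the 10 nm well width is an assumed example value, not one from the thread:

```python
# Numbers behind the answer above: confinement energy pi^2 hbar^2 / (2 m L_z^2)
# and the freeze-out condition k_B T << gap, for an assumed 10 nm well.
import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg, electron mass
K_B = 1.380649e-23       # J/K
EV = 1.602176634e-19     # J per eV

L_z = 10e-9              # m, assumed well width
E1 = (math.pi * HBAR / L_z) ** 2 / (2 * M_E)  # ground-state confinement energy
gap = 3 * E1                                  # E(n_z=2) - E(n_z=1) = (4-1) E1
T_freeze = gap / K_B                          # temperature scale of the gap

print(f"confinement energy: {E1 / EV * 1e3:.1f} meV")
print(f"gap to n_z=2:       {gap / EV * 1e3:.1f} meV")
print(f"freeze-out scale:   {T_freeze:.0f} K")
```

For this well width the gap is on the order of 10 meV, so well below ~100 K the $z$ motion is frozen out and the gas is effectively two-dimensional.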
Where does the expression $\mathrm{Tr}(K) = \sum_{j=1}^{n}\langle\psi_j|K|\psi_j\rangle$ for the partial trace come from?
Question: During my studies of composite quantum systems I find some expressions that leave me with a little doubt. For example: Let $K$ be a linear operator defined in the Hilbert space $H$, where $H$ is given by $H = H_{a}\otimes H_{b}$. If I want to perform the trace of $K$ in the $H$ space, I use the following expression $$ \mathrm{Tr}(K) = \sum_{j=1}^{n}\langle\psi_j|K|\psi_j\rangle $$ where {$|\psi_j\rangle$} is some basis on $H$. With that, if I want to perform the partial trace of this operator over some basis of $H_b$ given by {$|b_j\rangle$}, I use the following expression: $$ \mathrm{Tr}_b(K) = \sum_{j}(I_a\otimes\langle b_j|)K(I_a\otimes|b_j\rangle)\tag{1} $$ I have doubts about this expression: Is $I_a$ the identity operator given by $\sum_{i}|a_i\rangle\langle a_i|$ (where $\{|a_i\rangle\}$ is some basis on $H_a$)? Where did expression (1) come from? If I knew the expression for $K$, would expression (1) be equivalent to applying $\mathrm{Tr}_b(K)$ = $\sum_{j}\langle b_j|K|b_j\rangle$? Following the same reasoning, but now in the context of measurements in a composite system ($H = H_{a}\otimes H_{b}$): after measurement of an observable $A = \sum_{a}a|a\rangle\langle a| = \sum_{a}aA_a$, where $A_a=|a\rangle\langle a|$ is the projector and $\{a\}$ is the discrete spectrum, the state collapses to $$\rho_a = (A_a\otimes I_b)\rho(A_a\otimes I_b)/p_a \tag{2} $$ with $p_a$ the probability of obtaining $a$ when a measurement is taken. Likewise, I would like to know where equation $(2)$ came from. Do equations $(1)$ and $(2)$ have any relationship? Answer: A reasonable definition for the partial trace is the following.
Given any orthonormal basis sets $|a_i\rangle$ and $|b_i\rangle$ for $\mathcal H_a$ and $\mathcal H_b$ respectively, any operator $K$ on the space $\mathcal H_a\otimes \mathcal H_b$ can be written $$K = \sum_{ij k\ell} K_{ijk\ell} |a_i\rangle |b_j\rangle \langle a_k|\langle b_\ell|$$ where $K_{ijk\ell} \equiv \langle a_i|\langle b_j| K |a_k\rangle |b_\ell\rangle$, and $\langle a_i|\langle b_j| \equiv \langle a_i|\otimes \langle b_j|$ (I omit the $\otimes$ for notational clarity). The partial trace is then defined to be $$\mathrm{Tr}_b(K):= \sum_{ik\ell}K_{i\ell k\ell}|a_i\rangle\langle a_k|$$ which is now a linear operator on $\mathcal H_a$ alone, with coefficients $$\bigg(\mathrm{Tr}_b(K)\bigg)_{ik} = \sum_\ell K_{i\ell k\ell}$$ Many people choose to write equation $(1)$ because it gives the impression of tracing over the $|b_i\rangle$ basis while leaving the $|a_i\rangle$ basis alone. If you look too closely at the expression, though, it doesn't really make sense$^\dagger$ - what kind of object is (operator)$\otimes$(vector)? If I knew the expression for $K$, would expression (1) be equivalent to applying $\mathrm{Tr}_b(K) = \sum_j \langle b_j|K|b_j\rangle$? That expression doesn't make sense. $K$ acts on $\mathcal H_a\otimes \mathcal H_b$, so what does $K|b_j\rangle$ mean if $|b_j\rangle \in \mathcal H_b$? Likewise, I would like to know where equation (2) came from. If you have a pure state $|\psi\rangle\in \mathcal H_a\otimes \mathcal H_b$, then the corresponding density operator is given by $\rho_\psi := \frac{|\psi\rangle \langle \psi|}{\langle \psi|\psi\rangle}$. If you apply a projection operator $P_a\otimes \mathbb I_b$ to your state $|\psi\rangle$, your new density operator becomes $$\rho = \frac{(P_a \otimes \mathbb I_b)|\psi\rangle\langle \psi|(P_a \otimes \mathbb I_b)}{\langle \psi|(P_a \otimes \mathbb I_b)^2|\psi\rangle} $$ which is equal to your equation (2). 
In short, density operators inherit their projective evolution from the projective evolution of the states from which they are built. This then extends naturally to states which are not necessarily pure. $^\dagger$This can be remedied. Objects like these can be defined given a bit of thought, and the result is intuitively just what you'd expect.
{ "domain": "physics.stackexchange", "id": 76539, "tags": "quantum-mechanics, density-operator, quantum-measurements, trace, open-quantum-systems" }
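The coefficient formula in the answer, $(\mathrm{Tr}_b K)_{ik} = \sum_\ell K_{i\ell k\ell}$, translates directly into numpy and can be checked numerically:

```python
# Partial trace over the second factor, written exactly as the index formula.
import numpy as np

def partial_trace_b(K, dim_a, dim_b):
    """Trace out the second factor of an operator on H_a (x) H_b."""
    K4 = K.reshape(dim_a, dim_b, dim_a, dim_b)   # K_{i j k l}
    return np.einsum("ijkj->ik", K4)             # sum over j = l

rng = np.random.default_rng(0)
da, db = 2, 3
K = rng.normal(size=(da * db, da * db)) + 1j * rng.normal(size=(da * db, da * db))

Ka = partial_trace_b(K, da, db)
# Consistency check: the full trace equals the trace of the partial trace
assert np.isclose(np.trace(K), np.trace(Ka))
print(Ka.shape)  # (2, 2)
```

On a product operator $A\otimes B$ this reduces, as it should, to $A\,\mathrm{Tr}(B)$.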
Lambda Calculus as a branch of set theory
Question: This answer to a question about whether C is the mother of all languages contained an interesting tidbit that I am curious about: The functional paradigm, for example, was developed mathematically (by Alonzo Church) as a branch of set theory long before any programming language ever existed. Is this true? What is the link between these topics that is so fundamental as to make lambda Calculus an outgrowth of set theory? The best I can come up with is that standard mathematical functions possess domains and codomains. Answer: It's false. The $\lambda$-calculus arose through efforts to understand foundations of mathematics. Nowadays some people mistakenly equate foundations with set theory. The Stanford Encyclopaedia of Philosophy has a very good writeup on the $\lambda$-calculus, as well as its history, I recommend it.
{ "domain": "cs.stackexchange", "id": 14263, "tags": "lambda-calculus, sets, history" }
SLAM gmapping from bag only takes partial data
Question: Hi, I've managed to create some bags starting from some Carmen log files. Now I'm trying to use those bags for gmapping, but while rosbag play doesn't show any error, gmapping gets stuck at: update frame 0 update ld=0 ad=0 Laser Pose= 0 0 0 m_count 0 Registering First Scan so when I try to get the map file using "rosrun map_server map_saver", only a partial map file is given. While doing this procedure with the bag given in the gmapping-from-logged-data tutorial, nothing goes wrong. One of the bags I'm trying to use is this: db.tt/BawU5lXp Is there a way to check if the bag is suitable for what I'm doing? What can the problem be? Originally posted by loppo on ROS Answers with karma: 26 on 2013-10-27 Post score: 0 Answer: Try playing your bag as follows: $rosparam set use_sim_time true $rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link laser 100 $rosbag play --clock <your-ros-bag.bag> Hope it works. Originally posted by cognitiveRobot with karma: 167 on 2013-10-28 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15980, "tags": "navigation, rosbag, gmapping" }
How do we calculate the force applied by a ball on a wall which bounces back?
Question: If we throw a ball toward a wall, and it touches the wall and bounces back, how do we calculate the force applied by the wall on the ball? The contact time of the ball and the wall seems infinitely small, so the force must be infinitely large. In a similar case, if a 10 kg ball is dropped from a height of 80 cm, what amount of force will the ground apply on the ball (suppose the ball comes to a halt after touching the ground)? In both cases the duration of the change of momentum seems infinitely small, so should the force be infinitely large? Note: Please try to ignore any mistakes in the question, because I am new to Stack Exchange. Answer: It is not in fact infinitely small. If you look at slow-motion video of the collision, you can see that the ball and the wall are in contact for a reasonable amount of time. During that time, energy of motion is stored as elastic energy of the ball (some is naturally lost) and then converted back to the kinetic energy of the ball, which is now moving in the other direction. In our idealized model of the collision, the force is indeed infinite, but that information is not important if you know that momentum has to be conserved.
{ "domain": "physics.stackexchange", "id": 61824, "tags": "homework-and-exercises, newtonian-mechanics, forces, momentum, collision" }
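The answer's point — a finite contact time gives a large but finite force — can be put in numbers for the 10 kg ball dropped from 80 cm. The 10 ms contact duration is an assumed, illustrative value:

```python
# Impulse-momentum estimate: average force = (change in momentum) / (contact time).
import math

g = 9.81      # m/s^2
m = 10.0      # kg, ball mass from the question
h = 0.80      # m, drop height from the question
dt = 0.010    # s, assumed contact duration

v = math.sqrt(2 * g * h)   # impact speed from free fall
dp = m * v                 # momentum change (ball comes to rest)
F_avg = dp / dt            # average contact force
print(f"impact speed {v:.2f} m/s, average force {F_avg:.0f} N")
# Halving dt doubles F_avg: the force grows as contact time shrinks,
# but stays finite for any real (finite) contact time.
```

The impulse $\Delta p$ is fixed by the drop; only how it is spread over time is set by the stiffness of ball and floor.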
ROS Answers SE migration: TS7400 Board
Question: I'm interested in developing projects with ROS, but I don't know if it's possible to "run" ROS on a TS-7400 Technologic Systems board, or any board of this brand, and how to do it. Thanks for all the responses. http://www.embeddedarm.com/products/board-detail.php?product=TS-7400 Originally posted by ixion_006 on ROS Answers with karma: 11 on 2012-08-31 Post score: 1 Original comments Comment by Kevin on 2012-08-31: Can you provide a link to the hardware, and I assume you are planning to use Linux? Answer: It has only a 200 MHz processor and 32 MB of RAM. You can run Linux on it, and perhaps ROS. But it will be really slow and you need to set it up yourself. I would recommend you buy a BeagleBoard or a Raspberry Pi. Those already have a small community of people that use them with ROS. The Raspberry Pi is also very cheap (but harder to source). Originally posted by davinci with karma: 2573 on 2012-09-06 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10845, "tags": "ros" }
Electron Decay, Why are there P and higher orbitals?
Question: Related: Decay from excited state to ground state My confusion arose initially from the definition of binding energy being the lowest energy state (n=1) in the hydrogen atom. This, I assume, is simply because hydrogen only has one electron, and electrons don't exist in higher energy states stably. Then my next question was: why not? If these higher energy states exist, why can't electrons maintain those orbits? The above question seems to answer that, but then I don't understand why bigger atoms CAN hold these higher energy eigenstates. Is it just because the lower electrons "prevent" the higher ones from decaying? I can see where the exclusion principle comes from, but I can't see how it would prevent decay, only how it would prevent more than two electrons from inhabiting the same orbit. Answer: Here is a representation of the hydrogen atom energy levels. It displays the available solutions of the Schrödinger equation for an atom composed of a proton in the nucleus and an electron existing in their mutual potential. Systems stay in the minimum energy state, and for the single electron of hydrogen the minimum energy state is the n=1 state, whose energy is -13.6 eV. It can happen that a photon of 10.2 eV scatters the electron to the n=2 state. This will be an unstable solution because there exists an empty lower energy state, and the electron will radiate back to n=1. The same is true for the higher n states to which the electron can get scattered, and then it can cascade down to the ground state. The radiation from these excitations is a spectrum measurable in the lab, and it is how we know we have the correct quantum mechanical model of the hydrogen atom. A second electron has no meaning in this solution of the hydrogen atom, which has zero charge as an atom. A second electron will not be attracted, because there is no attractive potential for an (atom + second electron) system.
Each atom has as many electrons as there are protons in the nucleus, and there will be solutions that give the energy levels those electrons can occupy. For Z=2 and higher, the Pauli exclusion principle does not allow two electrons in the same energy state. It is worth looking up helium to get an idea of the complexity of the energy levels of multi-electron atoms and the role of the Pauli exclusion principle.
{ "domain": "physics.stackexchange", "id": 10809, "tags": "quantum-mechanics, atoms, atomic-physics, pauli-exclusion-principle" }
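The level arithmetic quoted in the answer ($-13.6$ eV ground state, a 10.2 eV photon for the first excitation) is easy to reproduce:

```python
# Bohr-level energies of hydrogen: E_n = -13.6 eV / n^2.
E1 = -13.6  # eV, hydrogen ground-state energy

def level(n):
    return E1 / n**2

for n in range(1, 5):
    print(f"n={n}: {level(n):+.2f} eV")

photon = level(2) - level(1)
print(f"n=1 -> n=2 transition: {photon:.1f} eV")  # 10.2 eV
```

The decreasing spacing between successive levels is exactly the pattern seen in the hydrogen spectrum referred to in the answer.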
Which electrodes do not corrode at all?
Question: I had used spare pieces of metal to perform electrolysis. They all had the disadvantage that they corrode when used as the anode - and some of the oxides are toxic (copper, chromium). I've found that carbon, which can be harvested from pencils and old batteries, does not corrode, contrary to my expectations (I'd expect it to produce $\ce{CO2}$). But soldering on carbon is completely impossible. Also, I need some bigger electrodes. So are there any metals that do not corrode during electrolysis? I'm using sodium carbonate as the electrolyte. Answer: This highly depends on what you are electrolyzing. When using solutions that are not very acidic, oxide-coated metals sometimes work, like lead-oxide electrodes, which can be harvested from old lead accumulators. (Attention: lead is toxic; wear a lab coat and gloves when working with it. Lead accumulators also contain ~20% sulfuric acid, so disassemble them in an acid-proof working space.) If you search the web for recipes of perchlorate preparation by electrolysis at home, you will find extensive discussions on the topic of anode materials and making oxidation-resistant electrodes. Platinum is really hard to corrode, but even it corrodes in some cases.
{ "domain": "chemistry.stackexchange", "id": 4247, "tags": "everyday-chemistry, redox, metal, electrolysis" }
What is the minimal time to move from the left to the right in a double potential well?
Question: Consider a particle in a double potential well with Hamiltonian $\hat{H}$ and two basis states $|l\rangle, |r\rangle$ which correspond to the particle being maximally localized in the left resp. right well. Assume now that the time evolution operator of the system admits the form $\hat{U}(t)=\cos(\varphi(t))\mathbb{1}+i\sin(\varphi(t))\hat{\sigma}_1$ where $\varphi(t)$ is some angle, $\mathbb{1}$ denotes the identity and $\hat{\sigma}_1$ denotes the first Pauli operator, given as a matrix by $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ if we fix the basis $\{|l\rangle, |r\rangle\}$. Assignment: Assume that at time $t=0$, the system is in the state $|l\rangle$. At what minimal time $\tilde{t}$ is it in the state $|r\rangle$? According to my understanding, we need to solve $|r\rangle=\hat{U}(t)|l\rangle$ and by plugging in $\hat{U}(t)$ from above, this amounts to solving $ 0 =\cos(\varphi(t)) \text{ AND } -i=\sin(\varphi(t))$, the latter having no solution if the angle $\varphi(t)$ is real, which angles usually are. Question: Is my approach/understanding of time evolution correct, and if not, where lies my mistake? Answer: The Pauli matrix $\sigma_1$ is given by $$\sigma_1=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} $$ Your approach is correct. At any time, the state is given by $$|\Psi(t)\rangle =\mathcal{U}(t)|\Psi(0)\rangle=\mathcal{U}(t)|l\rangle $$ Suppose at $t=t_0$, $|\Psi(t_0)\rangle=|r\rangle $, so $$|r\rangle =\mathcal{U}(t_0)|l\rangle $$ and solve for $t_0$. We require $$\cos(\phi)=0\rightarrow \phi=\pi/2$$ $$|\Psi(t_0)\rangle = i|r\rangle =e^{i\pi/2}|r\rangle $$ The factor $e^{i\pi/2}=i$ is only a global phase, so the system is physically in the state $|r\rangle$; the minimal time $\tilde{t}$ is the smallest $t$ for which $\varphi(t)=\pi/2$.
{ "domain": "physics.stackexchange", "id": 80858, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, potential, time-evolution" }
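A quick numerical check of the answer's conclusion: at $\varphi=\pi/2$ the propagator sends $|l\rangle$ to $i|r\rangle$, i.e. to $|r\rangle$ up to a global phase:

```python
# U(phi) = cos(phi) I + i sin(phi) sigma_1, applied to |l> = (1, 0).
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(phi):
    return np.cos(phi) * I2 + 1j * np.sin(phi) * sigma1

l = np.array([1, 0], dtype=complex)   # |l>
r = np.array([0, 1], dtype=complex)   # |r>

psi = U(np.pi / 2) @ l
assert np.allclose(psi, 1j * r)                    # equals i|r>, a global phase
assert np.isclose(abs(np.vdot(r, psi)) ** 2, 1.0)  # probability of |r> is 1
print(psi)  # ~ [0, 1j] up to ~1e-16 rounding
```

The overlap probability $|\langle r|\Psi\rangle|^2 = 1$ makes explicit why the factor $i$ is irrelevant: global phases drop out of all measurement probabilities.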
How do we perceive acceleration?
Question: Today my friend and I were riding a motorbike, and I was sitting facing the opposite direction because I was holding something in my hands (and it was fun :P). When he started the bike I felt a very high acceleration. I asked him to go slow; however, he said he was going at a regular speed. Then, out of curiosity, we tried different velocities and different accelerations while I sat either facing the direction of acceleration or opposite to it. To my surprise, I 'perceived' more acceleration when I was sitting opposite to the direction of acceleration. While searching for how we perceive acceleration, I found that, surprisingly, it is not very well known. I found the following explanations: We overestimate arrival time ( ref: this ) Endolymph from the vestibular system ( ref: Wiki ) [Not exactly acceleration but rather balance] Interpolated motion segments ( ref: this ) My question: is it just visual perception, or is there a special mechanism for the detection of acceleration? In either case, why was I feeling more acceleration while sitting opposite to the direction of acceleration than facing it? [My initial guess is that I didn't have any support and my visual cues were messed up] Update: I found this post but I didn't get any satisfactory answer from the book referred to in the answer. Answer: "Is it just visual perception, or is there a special mechanism for the detection of acceleration?" Acceleration is a synthesized conclusion from a multitude of systems. Most prominent is the endolymph system you already mentioned: as you accelerate, the endolymph and the otoliths within it (small, calcified deposits) pass over the hair cells and produce action potentials, which travel to the brain. The higher the magnitude of acceleration, the more action potentials will be sent. This is also where dizziness comes from, as the endolymph and otoliths do not come to rest at the same rate, momentarily giving the brain two different interpretations of what acceleration you're experiencing.
There is also the Doppler effect: based in the cochlear system of the inner ear this time, instead of the vestibular system, the Doppler effect is interpreted by the brain and gives a very rough interpretation of both velocity and acceleration (if the velocity of the object happens to be changing). Then there is the entire visual side of things. Motion blur and the speed at which objects appear to change size are large contributors to our sense of visual acceleration, but almost all of this is done via complex manipulation of visual data in the occipital and frontal lobes. There are also countless minor contributors: skin and joint tension (the sensation of weight due to acceleration), ease of breathing, sensory cells in your hair if your hair is exposed to the open air while accelerating, blood pressure sensors (and numbness when the body can't compensate), etc. "Why was I feeling more acceleration while sitting opposite to the direction of acceleration?" My guess would be because your body is more visually attuned to facing the same direction as the acceleration, and when you were facing the opposing direction your mind overcompensated in an effort to protect you, since you probably don't get a lot of opportunities to rapidly accelerate backwards over distances of more than a few feet. Your brain might also have been panicking a bit because you couldn't see where you were going (usually very bad), and the heightened sensory state made the magnitude of the acceleration feel greater. These are my best guesses, however. If someone has an academic source, please feel free to edit this answer!
{ "domain": "biology.stackexchange", "id": 4531, "tags": "vision, perception" }
Online Judge 10189: Minesweeper (C++)
Question: I'm new to C++ (not to programming in general), which I want to learn to participate in some programming contests. I solved the Online Judge Minesweeper Challenge. Since I'm not familiar with C++, any feedback (including nitpicking) is very welcome. My algorithm: For each field... Read the dimensions and initialize a 2d-array. Simultaneously read and parse the cells: If it's not a bomb, continue. Otherwise, set the cell to a negative value, then increase all existing, non-negative neighbours. Print each value, * for negative numbers (i.e. bombs). I considered separating the reading and parsing parts, but then I'd need to traverse the array once more. #include <iostream> using namespace std; int **parseField(int n, int m); void printField(int **field, int fieldIndex, int n, int m); int main() { int n, m, fieldIndex = 1; while (cin >> n >> m && !(n == 0 && m == 0)) { if (fieldIndex > 1) cout << endl; int **field = parseField(n, m); printField(field, fieldIndex++, n, m); } } int **parseField(int n, int m) { int **field = new int *[n]; for (int i = 0; i < n; i++) field[i] = new int[m]; string line; for (int i = 0; i < n; i++) { cin >> line; for (int j = 0; j < m; j++) { if (line[j] != '*') continue; field[i][j] = -1; for (int k = 0; k < 9; k++) { int x = i + (k / 3) - 1, y = j + (k % 3) - 1; if (x < 0 || x >= n || y < 0 || y >= m) continue; if (field[x][y] >= 0) field[x][y]++; } } } return field; } void printField(int **field, int fieldIndex, int n, int m) { cout << "Field #" << fieldIndex << ":" << endl; for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { if (field[i][j] < 0) cout << '*'; else cout << field[i][j]; } cout << endl; } } Answer: Don’t write using namespace std;. You can, however, in a CPP file (not H file) or inside a function put individual using std::string; etc. (See SF.7.) The style in C++ is to put the * or & with the type, not the identifier. 
This is called out specifically near the beginning of Stroustrup’s first book, and is an intentional difference from C style. Put the functions in the opposite order, so you don't need to forward-declare them. Note that C++ has overloading, so if your declaration at the top doesn't match the actual function you'll get confusing link-time errors and not a compiler error for this file! Don't use new. See ⧺C.149 — no naked new or delete. Use a std::vector here. You are leaking memory: I don't see any delete! int **field = parseField(n, m); printField(field, fieldIndex++, n, m); Every time through the loop, you are dropping field on the floor. int n, m, fieldIndex = 1; Don't declare multiple variables in one statement. Don't declare all your variables at the top, but declare where first needed and when you are ready to give it a value. Here, the nature of cin input means you have to declare m and n before making that call. That's not the usual case. Architectural: Your parse and print are being passed the raw pointer thing, and the two sizes. Those should be combined into a single data structure, say a class named Field. Then these two functions can be member functions of that class. Your data structure is laborious and unnecessary. Back it with a std::vector<int>, and also store the height and width, all as members of the class. Write a lookup function that takes two indexes (r,c) and calculates a linear value r*width+c and feeds that to the underlying vector. Use a type alias for the cell type rather than int. You can easily change it to (say) int8_t and see if it's faster as well as smaller.
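The last two suggestions, a Field class backed by flat storage with an (r, c) lookup, are language-agnostic. Here is a minimal sketch of that design in Python (class and method names are illustrative, not taken from the review; marking a bomb as -9 keeps it negative even after increments from up to 8 adjacent bombs):

```python
# Sketch of the suggested design: one flat array plus width/height, with a
# lookup that linearizes (r, c) as r * width + c.  Names are illustrative.
class Field:
    def __init__(self, rows):
        self.height = len(rows)
        self.width = len(rows[0]) if rows else 0
        self.cells = [0] * (self.height * self.width)
        for r, line in enumerate(rows):
            for c, ch in enumerate(line):
                if ch == '*':
                    self.place_bomb(r, c)

    def at(self, r, c):
        return self.cells[r * self.width + c]

    def place_bomb(self, r, c):
        # -9 stays negative even after increments from up to 8 adjacent bombs
        self.cells[r * self.width + c] = -9
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                x, y = r + dr, c + dc
                if (dr or dc) and 0 <= x < self.height and 0 <= y < self.width:
                    self.cells[x * self.width + y] += 1

    def render(self):
        return "\n".join(
            "".join('*' if self.at(r, c) < 0 else str(self.at(r, c))
                    for c in range(self.width))
            for r in range(self.height))
```

The same shape carries over directly to C++ with a `std::vector<int>` member and an `at(r, c)` accessor.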
{ "domain": "codereview.stackexchange", "id": 42189, "tags": "c++, beginner, programming-challenge, minesweeper" }
Percentage ionic character when electronegativity is given
Question: What is the ionic character of a bond, $\ce{A-B}$, in terms of the electronegativities of $\ce{A}$ and $\ce{B}$ ($\chi_\ce{A}$ and $\chi_\ce{B}$)? I have been taught that the percentage ionic character is: $$ \frac{\text{observed value of ionic character}}{\text{calculated value of character}} $$ but I can't understand how electronegativity is used here. I couldn't find anything on the internet either. Answer: Linus Pauling proposed an empirical relationship which relates the percent ionic character in a bond to the electronegativity difference $\Delta \chi$. Percent ionic character $= (1-e^{-(\Delta \chi/2)^2} )\times 100$ But I'd like to correct the definition of percent ionic character in your question using dipole moment $\mu$ (not Observed value of ionic character): Percent ionic character = $\Large\frac{\mu_{\text{observed}}} {\mu_{\text{calculated} }}$ $\times 100 \%$ Where $\mu_{\text{calculated}}$ is calculated assuming a 100% ionic bond. For more details please see this page.
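As a sanity check, Pauling's relationship is easy to evaluate numerically (the electronegativity values below are the commonly tabulated Pauling values, quoted from memory, so treat them as approximate):

```python
import math

def percent_ionic_character(chi_a, chi_b):
    """Pauling's empirical estimate: (1 - exp(-(dchi/2)^2)) * 100."""
    dchi = abs(chi_a - chi_b)
    return (1.0 - math.exp(-(dchi / 2.0) ** 2)) * 100.0

# H (2.20) vs F (3.98): a strongly polar bond
print(round(percent_ionic_character(2.20, 3.98), 1))  # roughly 55% ionic
# C (2.55) vs H (2.20): nearly covalent
print(round(percent_ionic_character(2.55, 2.20), 1))
```

As expected, the estimate is 0% for identical electronegativities and grows monotonically with the difference.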
{ "domain": "chemistry.stackexchange", "id": 17209, "tags": "covalent-compounds, electronegativity" }
What are the best programming languages for developing evolutionary algorithms?
Question: What are the "best" Turing complete programming languages which can be used for developing evolutionary algorithm-based AI programs? "Best" should be based on the pros and cons of performance and ease of use for machine learning. Answer: Most machine learning applications today are built on tensors, matrices, probabilistic / Bayesian inference, neural networks, etc. But those can all be built with any modern programming language (all the useful ones are Turing complete). And the best performing language for any of those will generally be assembly / machine code. Python is famous for machine learning, but that may be due to adoption of Python in academia and NumPy, SciPy, etc. Python isn't very performant, but most of the machine learning libraries leverage native code, so they're fairly performant. Julia is a new language that is gunning for a lead position in the data science space, which machine learning builds on. It is allegedly very performant over number crunching domains. Java has a decent developer ecosystem, and is fairly performant, but the highest performing libraries (including those that leverage GPU) tend to call out to native code via JNI. See DeepLearning4J. I personally like Clojure - a modern Lisp running on the Java JVM. There's a new deep learning project called Cortex built on Clojure and some fast native libraries, including GPU acceleration. I think Clojure provides a great balance of being able to easily wrap performant libraries with highly expressive, succinct and simple programming idioms.
{ "domain": "ai.stackexchange", "id": 148, "tags": "evolutionary-algorithms, programming-languages" }
Newbie python script, odometry
Question: I was doing one online course where on page was special frame to run python script. My task in this exercise was to compute the odometry, velocities are given. This script on page looks: http://snag.gy/NTJGz.jpg Now I would like to do the same using ROS: there is nearly the same exercise but in ROS: clear code looks: https://github.com/tum-vision/autonavx_ardrone/blob/master/ardrone_python/src/example1_odometry.py There is information that I should add code from this online_course version to function callback, I try, but it doesn't work. My code: #!/usr/bin/env python #ROS import rospy import roslib; roslib.load_manifest('ardrone_python') from ardrone_autonomy.msg import Navdata import numpy as np def __init__(self): self.position = np.array([[0], [0]]) def rotation_to_world(self, yaw): from math import cos, sin return np.array([[cos(yaw), -sin(yaw)], [sin(yaw), cos(yaw)]]) def callback(self, t, dt, navdata): self.position = self.position + dt * np.dot(self.rotation_to_world(navdata.rotZ), np.array([[navdata.vx], [navdata.vy]])) print("received odometry message: vx=%f vy=%f z=%f yaw=%f"%(navdata.vx,navdata.vy,navdata.altd,navdata.rotZ)) print(self.position) if __name__ == '__main__': rospy.init_node('example_node', anonymous=True) # subscribe to navdata (receive from quadrotor) rospy.Subscriber("/ardrone/navdata", Navdata, callback(self, t, dt, navdata)) rospy.spin() Please correct me, I am totally newbie to python. Now I get message: Traceback (most recent call last): File "./example1_odometry.py", line 28, in rospy.Subscriber("/ardrone/navdata", Navdata, callback(self, t, dt, navdata)) NameError: name 'self' is not defined Originally posted by green96 on ROS Answers with karma: 115 on 2014-11-10 Post score: 1 Answer: You can take a look here about what self means in python. The issue you're having is that the callback, rotation and __init__ methods and the position variable were originally part of a class. 
You need to remove the __init__ method, "declare" the position variable at module level, and remove all references to self from your code. Note also that rospy calls the callback with just the message, so t and dt can't be parameters: dt has to be computed from timestamps, and position has to be declared global before it is reassigned. #!/usr/bin/env python #ROS import rospy import roslib; roslib.load_manifest('ardrone_python') from ardrone_autonomy.msg import Navdata import numpy as np position = np.array([[0], [0]]) last_time = None def rotation_to_world(yaw): from math import cos, sin return np.array([[cos(yaw), -sin(yaw)], [sin(yaw), cos(yaw)]]) def callback(navdata): global position, last_time now = rospy.get_time() dt = (now - last_time) if last_time is not None else 0.0 last_time = now position = position + dt * np.dot(rotation_to_world(navdata.rotZ), np.array([[navdata.vx], [navdata.vy]])) print("received odometry message: vx=%f vy=%f z=%f yaw=%f"%(navdata.vx,navdata.vy,navdata.altd,navdata.rotZ)) print(position) if __name__ == '__main__': rospy.init_node('example_node', anonymous=True) # subscribe to navdata (receive from quadrotor) rospy.Subscriber("/ardrone/navdata", Navdata, callback) rospy.spin() I haven't tested if this code actually works, but it is just to give you an idea on how to solve your issue. Hope it helps Originally posted by Gary Servin with karma: 962 on 2014-11-10 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 20013, "tags": "ros, navigation, odometry, tum-simulator" }
The XOR cut-set structure, and combinatorial designs
Question: Given a graph $G(V,E)$ and a subset of vertices $T \subseteq V$, define $\mathsf{cutset}(T)$ = the set of edges connecting a vertex at $T$ with a vertex at $V\setminus T$. Our goal is to preprocess $G$ such that, given any set $T$, we can quickly return an edge in $\mathsf{cutset}(T)$, or reply that $\mathsf{cutset}(T)$ is empty. The structure should have space complexity $\widetilde{O}(|V|)$, i.e. we are not allowed to keep all the edges. The query complexity should be $\widetilde{O}(|T|)$. Kapron et al suggest the following neat solution, which works when the size of each cutset is at most 1. Give each edge a unique number. For each vertex $v$, keep $\mathsf{xor}(v)$ - the binary XOR of the numbers of all the edges adjacent to it. Given a query on $T$, calculate $\mathsf{xor}(T)$ - the binary XOR of all vertices in T. Every edge which is internal to $T$ (i.e. has both endpoints inside $T$) is XORed twice, and hence is not included in $\mathsf{xor}(T)$. So, $\mathsf{xor}(T)$ is actually a XOR of all edges in $\mathsf{cutset}(T)$. If the size of each cutset is at most 1, then there are two options: either $\mathsf{xor}(T)=0$, which means that $\mathsf{cutset}(T)$ is empty, or $\mathsf{xor}(T)$ is the number of the single edge in $\mathsf{cutset}(T)$. The authors then go on and describe a complex, randomized structure to handle the case in which $\mathsf{cutset}(T)$ contains more than a single edge. But in the conclusion, they say that: It is not hard to see that the technique described here can be made deterministic with an additional $\widetilde{O}(k)$ factor in the update time, if we know the cuts are of size no greater than $k$, through the use of combinatorial designs". Unfortunately, for me this seems to be hard... I don't understand: how can combinatorial designs can be used to solve the problem when the size of all cutsets is at most $k$? Answer: You can use a linear code with distance $2k$ or so. 
The parity check matrix of the code has the property that the XOR of any set of at most $2k-1$ columns is non-zero. This means, in particular, that given the XOR of at most $k$ columns, you can determine (not necessarily efficiently) how many columns were XORed (since if you couldn't you would obtain a set of at most $2k-1$ columns which XORs to zero). The cost of this encoding is the number of rows of the parity check matrix. If you choose the parameters correctly and an efficiently decodable code (recall that the parity check matrix of a code is the generator matrix of its dual), then you probably obtain the conclusion stated in the remark.
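To make the mechanics of the size-at-most-1 trick from the question concrete, here is a tiny sketch (the graph and edge numbering are invented for illustration; the last query also shows why cut sets of size 2 are ambiguous, since the XOR of two ids can collide with a third edge's id):

```python
import functools, operator

# Path graph 0-1-2-3; each edge gets a distinct nonzero number.
edge_id = {frozenset({0, 1}): 1, frozenset({1, 2}): 2, frozenset({2, 3}): 3}

# Precompute xor(v) = XOR of the ids of the edges adjacent to v.
xor_v = {v: 0 for v in range(4)}
for e, eid in edge_id.items():
    for v in e:
        xor_v[v] ^= eid

def xor_T(T):
    """XOR over T: internal edges cancel, leaving the XOR of cutset(T)."""
    return functools.reduce(operator.xor, (xor_v[v] for v in T), 0)

# cutset({0, 1}) = {edge 1-2}, a single edge, so the query returns its id.
print(xor_T({0, 1}))          # 2
# cutset(V) is empty: everything cancels.
print(xor_T({0, 1, 2, 3}))    # 0
# cutset({1, 2}) = {0-1, 2-3}: two edges, and 1 XOR 3 = 2 looks like edge 1-2.
print(xor_T({1, 2}))
```

The collision in the last query is exactly what the randomized structure (or the design/code-based derandomization) has to rule out for cut sets larger than 1.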
{ "domain": "cs.stackexchange", "id": 4132, "tags": "data-structures, combinatorics" }
Average number of particles in a certain energy level in the Canonical Ensemble
Question: A quantum system has $r$ discrete energy levels $\varepsilon_1,\varepsilon_2,\varepsilon_3,...,\varepsilon_r$ and $N$ particles distributed in these levels, with the number of particles at each level denoted by $n_1,n_2,n_3,...,n_r$. I'm trying to find the average number of particles in the $i$-th energy level, $\left\langle n_i\right\rangle$, and the fluctuation of this average, $\left\langle(\Delta n_i)^{2}\right\rangle$, using the Canonical Ensemble. My attempt The average energy of the system at the state $R$ determined by the occupation numbers $(n_1,n_2,n_3,...,n_r)_R$ can be computed by $$ \langle E\rangle=\left\langle E_{R}\right\rangle=\sum_{R} P_{R} E_{R} =\frac{1}{Z}\sum_{R} E_{R} e^{-\beta E_{R}} =-\frac{1}{Z}\bigg(\frac{\partial Z}{\partial \beta}\bigg)_{N, V} =-\bigg(\frac{\partial \ln Z }{\partial \beta}\bigg)_{N, V} $$ With a similar process, keeping in mind that $E_{R} = \sum_{r} n_r \varepsilon_{r}$, one gets that $$\langle n_i\rangle = \sum_{R} P_{R} n_i =\frac{1}{Z}\sum_{R} n_i e^{-\beta \sum_{r} n_r \varepsilon_{r}} =-\frac{1}{\beta}\bigg(\frac{\partial \ln Z}{\partial \varepsilon_i}\bigg)_{N, V} $$ Which is supposed to be the correct result. However, I am not sure that this $\langle n_i \rangle = \sum_{R} P_{R} n_i$ is valid for this average since $P_R$ is the probability that the system is in the $R$-state, not that the $i$-th energy level has a certain number of particles... Is the procedure I have performed here correct? Answer: This isn't quite right. A microstate of your system is defined by the $r$-tuple $R=(n_1,n_2,\ldots,n_r)$ which gives the occupation numbers of each energy level. Each $r$-tuple has a corresponding energy given by $E_R=\sum_{i=1}^r n_{i,R} \epsilon_i$ (where $n_{i,R}$ is the occupation number of the $i^{th}$ energy level in microstate $R$) and the probability that the system occupies each microstate is $P_R = e^{-\beta E_R}/Z$, where $Z$ is the partition function. 
It makes sense to compute the average energy of the system via this probability distribution: $$\left<E\right> = \sum_R P_R E_R = \frac{\sum_R E_R e^{-\beta E_R}}{Z} = -\frac{\partial}{\partial \beta} \log(Z)$$ It doesn't make sense to talk about $\left<E_R\right>$, however. For each microstate $R$, $E_R$ is a fixed number. The expected number of particles in energy level $i$ can be computed precisely the same way. We're averaging over all possible microstates, weighted by the probability of that microstate being inhabited by the system: $$\left<n_i\right> = \sum_R P_R n_{i,R}$$ Expanding this out more, $$\left<n_i\right> = \frac{1}{Z}\sum_R \exp\left[-\beta \sum_j n_{j,R} \epsilon _j\right]n_{i,R}= \frac{1}{Z}\sum_R -\frac{1}{\beta}\frac{\partial}{\partial \epsilon_i}\exp\left[-\beta\sum_j n_{j,R} \epsilon_j\right]$$ $$= -\frac{1}{\beta}\frac{\partial}{\partial \epsilon_i} \log(Z)$$
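The final identity can be brute-force checked numerically on a toy system with two levels and two particles (parameter values below are arbitrary): $\left<n_i\right>$ computed directly from the Boltzmann weights should match a central finite-difference approximation of $-\frac{1}{\beta}\frac{\partial}{\partial\epsilon_i}\log Z$.

```python
import math
from itertools import product

beta = 1.3  # arbitrary inverse temperature

def microstates(N, r):
    """All occupation tuples (n_1, ..., n_r) with n_1 + ... + n_r = N."""
    return [n for n in product(range(N + 1), repeat=r) if sum(n) == N]

def energy(n, eps):
    return sum(n_j * e_j for n_j, e_j in zip(n, eps))

def log_Z(eps, N=2):
    return math.log(sum(math.exp(-beta * energy(n, eps))
                        for n in microstates(N, len(eps))))

def avg_n(i, eps, N=2):
    """<n_i> directly: sum over microstates of P_R * n_{i,R}."""
    states = microstates(N, len(eps))
    weights = [math.exp(-beta * energy(n, eps)) for n in states]
    Z = sum(weights)
    return sum(w * n[i] for w, n in zip(weights, states)) / Z

eps, h = [0.2, 0.9], 1e-6
dlogZ = (log_Z([eps[0] + h, eps[1]]) - log_Z([eps[0] - h, eps[1]])) / (2 * h)
print(avg_n(0, eps), -dlogZ / beta)  # the two numbers agree
```

The occupation averages also sum to $N$, as they must.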
{ "domain": "physics.stackexchange", "id": 75993, "tags": "thermodynamics, statistical-mechanics, partition-function" }
Adjusting the height of a div to match that of another column
Question: This code checks whether an image whose parent is .papers.left has completely loaded into DOM and if yes then the background of its right content's height has to be same as the background of this images parent. So below is html to understand the structure <div class="row"> <div class="col-md-6"> <div class="papers left text-center"> <img src="http://wowthemes.net/demo/leroy/img/dummies/18.jpg" alt=""><br /> <!--Once the above image is completely loaded I want to set `.papers.right` height as same as `.papers.left` div--> </div> </div> <div class="col-md-6"> <div class="papers right text-center"> <blockquote>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages." <cite>Jogn De, Birthday Event<br /><i class="fa fa-star"></i> <i class="fa fa-star"></i> <i class="fa fa-star"></i> <i class="fa fa-star"></i> <i class="fa fa-star"></i></cite> </blockquote> </div> </div> </div> So to set same height I have written below js code which will set the height of div.papers.right same as div.papers.left but only when the image is loaded completely $('img').each(function () { if ($(this).closest('.papers.left').length) { $(this).on('load', function () { $(this).closest('.row').find('.papers.right').height($(this).closest('.papers.left').height()); }) } }) Can the above piece of jQuery code be optimized? I am not sure whether the above code can be optimized to some simple one. 
Update I just realized one more requirement so that whichever's height is greater assign that height to other one and I've made below changes to code and I would like to know if there is any improvement that can be done to the below code. Extremely sorry for the inconvenience. $('img').each(function () { if ($(this).closest('.papers.left').length) { $(this).on('load', function () { if ($(this).closest('.papers.left').height() > $(this).closest('.row').find('.papers.right').height()) //if left side has greater height assign it to right side $(this).closest('.row').find('.papers.right').height($(this).closest('.papers.left').height()); else //vice versa $(this).closest('.papers.left').height($(this).closest('.row').find('.papers.right').height()); }) } }) Answer: You can save $(this).closest('.papers.left') into a local variable since you access it twice. This might improve performance. It can also enhance code readability: $('img').each(function () { var parent = $(this).closest('.papers.left'); if (parent.length) { $(this).on('load', function () { $(this).closest('.row').find('.papers.right').height(parent.height()); }) } }) As an add on to your code modification, I don't know that this is faster, but why not try assigning max height to both without checking which is taller? Something like: $('img').each(function () { var parent = $(this).closest('.papers.left'); if (parent.length) { $(this).on('load', function () { var sibling = $(this).closest('.row').find('.papers.right'); var height = Math.max(parent.height(), sibling.height()); parent.height(height); sibling.height(height); }) } }) It's certainly shorter and more readable. Why not test it out to see how the efficiency compares?
{ "domain": "codereview.stackexchange", "id": 15851, "tags": "javascript, performance, jquery, image, layout" }
How to convert duration column from 1 hr. 17 min to 77 min in pandas?
Question: I am trying to convert all the hr., min, and sec into just mins. For example, 1 hr. 17 min to 77 min, and 34 sec to 0.56 min, not 0.34 min. So I have used this code: merged['duration'] = merged['duration'].str.replace(" hr.", '*60').str.replace(' ','+').str.replace(' min','*1').str.replace(' ','+').str.replace(' sec','*0.01').apply(eval) From: How to Convert a Pandas Column having duration details in string format (ex:1hr 50m) into a integer column with value in minutes But it gives me this error: unsupported operand type(s) for +: 'int' and 'builtin_function_or_method' I am not sure how to go about it. Please help! Answer: You are replacing the spaces with a + character before replacing ' min' and ' sec'. Therefore the minutes and seconds do not get replaced, which in turn means the string cannot be evaluated using eval. Reordering the operations should solve the error: df['duration'] = df['duration'].str.replace(" hr.", '*60').str.replace(' min','*1').str.replace(' sec','*0.01').str.replace(' ','+').apply(eval) Your code converts 34 seconds to 0.34, but if you want everything in minutes this does not make sense, since 34 seconds is 34/60 ≈ 0.57 minutes. If you also want to change this, the following should work: df['duration'] = df['duration'].str.replace(" hr.", '*60').str.replace(' min','*1').str.replace(' sec','*(1/60)').str.replace(' ','+').apply(eval)
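Chaining str.replace into a string for eval works, but running eval on data is fragile; a regex-based parser avoids it entirely (a sketch, with the unit spellings taken from the question's examples):

```python
import re

# Minutes per unit; spellings assumed from the question's sample values.
_UNIT_MINUTES = {"hr": 60.0, "min": 1.0, "sec": 1.0 / 60.0}

def duration_to_minutes(text):
    """Convert strings like '1 hr. 17 min' or '34 sec' to minutes."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)\s*(hr|min|sec)", text):
        total += float(value) * _UNIT_MINUTES[unit]
    return total

print(duration_to_minutes("1 hr. 17 min"))        # 77.0
print(round(duration_to_minutes("34 sec"), 2))    # 0.57
```

With pandas this would be applied as merged['duration'] = merged['duration'].map(duration_to_minutes).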
{ "domain": "datascience.stackexchange", "id": 10287, "tags": "python, pandas, data-mining, data-cleaning, dataframe" }
Can ICA be applied when the number of mixture signals is less than the number of source signals?
Question: I am referring to the following paper : Non-contact, automated cardiac pulse measurements using video imaging and blind source separation In the above article, the authors are able to extract the cardiac pulse signal from RGB components. I visualize the process as follows. R' = R + cardiac pulse G' = G + cardiac pulse B' = B + cardiac pulse R', G' and B' are the colour components observed by the camera. R, G, B are the colour components for a person, assuming that he doesn't have any cardiac pulse. It seems that we will be having 4 sources (R, G, B, Cardiac pulse). We are now trying to obtain 1 of the 4 sources (Cardiac pulse) from 3 mixture signals (R', G', B'), by using ICA. Does it make sense? Am I missing some techniques? Or, am I making a wrong assumption about the process? Answer: You might also want to consider Principal Component Analysis (PCA) or an extension of it known as Independent Subspace Analysis which is PCA followed by ICA. These techniques work very well for extracting pitch-stationary signals from a single observation signal. I'm an audio specialist but have discussed biomedical signals with colleagues in the past and from recollection cardiac pulses from a single observation are pretty well characterised and thus would be suitable sources for extraction using ISA. I have used it to great effect to separate drums from full musical polyphonies.
{ "domain": "dsp.stackexchange", "id": 28, "tags": "ica, source-separation" }
kinetic and potential energy
Question: I have 2 cubes, one with mass greater than the other: $M$ and $m$, where $M>m$. There is a hill that is symmetrical on both sides and has a friction factor of $k$ between the object and the surface. We place both objects on one side of the hill, at point 1, which has no slope (it is flat), and give both masses the same velocity in the direction of the top of the hill, so they finally land on the other side, on the flat area, at point 2 (symmetrical to point 1). How can I prove that the speed of the heavier cube is smaller than that of the lighter one at point 2? (If I'm correct) I tried equating $E_{kinetic}$ and $E_{potential}$... I see there is work from the friction, but whenever I try anything the mass cancels out. Here's a diagram: http://www.wolframalpha.com/input/?i=sin%5E2(x)%2C+0%3Cx%3Cpi&x=0&y=0 Where point 1 is at (0, 0), and point 2 at (pi, 0). What I have so far, where $g$ is the gravitational acceleration, $k$ the friction factor, and $d$ the distance: $E_{kinetic.start} - W_{friction} = E_{kinetic.end}$ $M(\frac{v_0^2}{2}-gkd)=M(\frac{v_1^2}{2})$ I could replace $M$ with $m$ and nothing would change; they'd both have the same speed... So it's obvious that the masses cancel out and the energy is proportionally lower for both masses, the distance for friction is the same, and the starting velocities are the same; therefore the velocity should be the same for both bodies at the end. This confuses me, as I believe the smaller body should be faster. (Can't anyone help me with this? I'm now doubting whether the solution I've provided above is right. I cannot prove otherwise.) Answer: Your line of thought and the way you are arguing in the second part is correct. Let's separate the task: one part with friction but on a flat surface (no hill), which you already did (1), and one without friction but with the hill on the way (2). 1. Friction, flat surface I am disregarding energy here - I concentrate on velocity and deceleration. Both bodies start with $v_0$. 
Both bodies are decelerated due to the frictional force $F_{FR}$. $F_{FR,1} = M \cdot g \cdot k \Rightarrow a_1= {F_{FR,1} \over M} = g \cdot k$ (the same applies to the smaller body). So the deceleration due to friction is independent of the mass of the bodies. Both bodies will have the same velocity at every given time. 2. No Friction, hill You are well aware that without friction both bodies must have constant overall mechanical energy throughout the whole process. Let's look at what this means for their velocity at the hilltop (I am using $_0$ to denote values at the beginning, $_{top}$ for the hilltop, $h$ for gained height, $m$ for (general) mass): $E_{kin, top} = m \cdot {{v_{top}^2} \over 2}= E_{kin,0} - E_{pot}= { {m \cdot v_0^2} \over 2} - m \cdot g \cdot h = m \cdot \left( \frac {v_0^2} {2} - g \cdot h \right)$ Once more the mass cancels out and (without really calculating $v_{top}$) we can see that both bodies will have the same velocity at the top of the hill (and everywhere else in between). So in both cases we see that the velocity of the body is completely independent of its mass. Your intuition led you astray :-)). Please keep in mind that we did not take air resistance into account.
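The flat-surface bookkeeping in part 1 is easy to check numerically; the sketch below (all numbers arbitrary) computes the final speed from the energy balance and shows the mass dropping out:

```python
import math

def final_speed(mass, v0=5.0, g=9.81, k=0.2, d=3.0):
    """Energy balance from part 1: friction does work mass*g*k*d on a flat run
    of length d.  All parameter values here are arbitrary illustrations."""
    friction_work = mass * g * k * d
    kinetic = 0.5 * mass * v0 ** 2 - friction_work
    return math.sqrt(2.0 * kinetic / mass)

print(final_speed(1.0), final_speed(50.0))  # identical: the mass cancels
```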
{ "domain": "physics.stackexchange", "id": 14106, "tags": "newtonian-mechanics, mass, friction, speed" }
Does the number of base pairs between the end of one gene and the start of the next gene have to be a multiple of 3?
Question: I'm working on a project for a Coursera course where I need to scan a string of base pairs and identify possible genes, but there's something about real DNA that I need to know to design the algorithm correctly. My question is best illustrated by example. Suppose we have a strand of DNA like this: ATGxxxyyyzzzTAAqqATGrsssATGuuTAAvpppTAG The lower case letters just stand for general base pairs that are not in the sequences "ATG", "TAA", "TAG", or "TGA". "ATGxxxyyyzzzTAA" constitutes a gene for the purposes of the assignment, and my program does correctly identify that string of base pairs as a gene. What I want to know focuses on what happens after that first gene is found. Let's consider the last portion of the string: "qqATGrsssATGuuTAAvpppTAG" My current gene-finding program would identify "ATGrsssATGuuTAA" as a gene, but notice that this strand starts after only two base-pairs ("qq") have been searched through. Does that matter? I.e., would this happen in real DNA, or should the second valid gene be identified as "ATGuuTAAvpppTAG"? Answer: There absolutely is not a rule saying that all genes on a strand should be in the same frame. Your program is looking for ORFs (open reading frames) and it should identify all of them, in every frame. If you wanted to be very thorough, you would generate the reverse complement of your sequence and search that for ORFs too. But since you were given this sequence, presumably the professor gave it to you in the desired orientation.
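A sketch of what "identify all ORFs, in every frame" looks like in code, scanning only the forward strand (the choice to resume after each stop codon within a frame is one reasonable policy among several):

```python
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq):
    """Return every ORF (ATG ... first in-frame stop) in all three forward frames."""
    orfs = []
    for frame in range(3):
        i = frame
        while i <= len(seq) - 3:
            if seq[i:i + 3] == "ATG":
                # walk codon by codon until an in-frame stop
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j + 3] in STOPS:
                        orfs.append(seq[i:j + 3])
                        i = j  # resume after this ORF within the same frame
                        break
            i += 3
    return orfs

# Frames are independent: this ORF starts after a single leading base,
# so no multiple-of-3 spacing is required between genes.
print(find_orfs("CATGAAATAG"))  # ['ATGAAATAG']
```

A thorough version would also run the same scan over the reverse complement of the sequence.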
{ "domain": "biology.stackexchange", "id": 7955, "tags": "genetics" }
Why is the CASSCF method multi-configurational, while the CI method is not?
Question: The CASSCF method is perhaps the most commonly used theoretical method for studying difficult chemical systems exhibiting multi-reference character or non-dynamical/static/strong correlation. CASSCF is quite similar to CI, in the sense that a full CI wavefunction is generated within the active space used in the CASSCF method. However, the CI method is a so-called ground state method, while the CASSCF method is multi-configurational. My question is simply "Why?". I feel it may be related to that during the CASSCF optimization, both the orbitals and CSF expansion coefficients are optimized, while in CI only the expansion coefficients are optimized. But I fail to understand how optimizing the orbitals gives the CASSCF method its multi-configurational nature. Answer: When you say multiconfigurational I think you mean that the ground state wavefunction does not have one single dominant MO configuration. CI can be multiconfigurational depending on the type of excitation included. For example, Full CI is of course multiconfigurational since it is the exact solution for the basis set used. However, CIS (single excitations) or CISD (single + double excitations) calculations are the only two CI methods that are computationally efficient enough to be used in practice. So that’s probably what you mean when you say CI. You also probably mean CIS or CISD based on an RHF wavefunction, to distinguish it from multi-reference CI. The lowest energy solution of CIS based on RHF is the RHF energy, so you can think of the SCF procedure as a form of CIS. So a CASSCF(2,2) calculation (e.g. where the active space is HOMO, LUMO) is a bit like a CISDT calculation. For example, there will be contributions to the wavefunction where the LUMO is doubly occupied and another MO outside the active space is “singly excited”. Thus, the LUMO is allowed to change when electrons are put into it, which becomes more and more important as the HOMO-LUMO gap decreases. 
CISD cannot account for this, so in that sense it is not multiconfigurational.
{ "domain": "chemistry.stackexchange", "id": 5565, "tags": "quantum-chemistry, computational-chemistry, multi-reference" }
Optical system for cloning an image of a light source?
Question: I need to create an optical system for cloning an image of a light source to both human eyes. Is there a correct solution (design) for this problem? The reflectivity of the mirrors must be equal, because light coming from the real world must be at the same brightness level for both eyes. The reflectivity percentage should be within the acceptable range for the human eye (not less than 5%, unlike in the graphic). Both images must be identical. Here is my basic design (as an explanation), full of mistakes: Answer: This system is actually a little more complicated than I first thought, because the path length to both eyes must be the same to "clone" the light source.
{ "domain": "physics.stackexchange", "id": 21353, "tags": "optics, reflection" }
How to handle a Dirac delta at $r = 0$ for a first Born approximation?
Question: I have been trying to get my hands around a scattering problem all day but I can't wrap my head around the idea. It's a scattering problem with First Born Approximation and the potential is a Dirac delta sitting at the origin. The standard procedure would be to just use the formula from Griffith's book $$ f(\theta) = \frac{-2m}{\hbar^2 \kappa}\int_0^\infty a\delta(r)r\sin (\kappa r)dr $$ where $\kappa = 2k \sin({\theta/2})$. The first problem is of course the fact that Dirac delta function lacks support at $r=0$ so I have been trying to switch to cartesian coordinates where it has support at the origin. But I keep getting zero as result, so either the my answer is right and that makes the problem really boring to start with, or either I'm missing something. I have also been trying to understand the physics around the problem. If the Dirac delta was evaluated at some radius a the problem would have made more sense since that would have been the scattering from a hard sphere, but I can't understand what a scatter from a single infinitely small point would mean? Answer: The general formula for the first-order approximation of $f(\kappa)$ is $$ f(\kappa)=-\frac{m}{2\pi}\int\mathrm d\boldsymbol r\ \mathrm e^{-i\boldsymbol\kappa\cdot\boldsymbol r} V(\boldsymbol r)\tag{A} $$ In the particular case $$ V(\boldsymbol r)=a\delta(\boldsymbol r) $$ we have $$ f(\kappa)\overset{(\mathrm A)}=-\frac{ma}{2\pi} $$ If the potential is spherically symmetric, then $(\mathrm A)$ becomes $$ f(\kappa)=-\frac{2m}{\kappa}\int_0^\infty \mathrm dr\ rV(r)\sin (\kappa r)\tag{B} $$ Recall that the Dirac delta centred at the origin, in spherical coordinates, reads $$ V(r)=\frac{a}{4\pi r^2}\delta(r) $$ and therefore $$ f(\kappa)\overset{(\mathrm B)}=-\frac{ma}{2\pi}\int_0^\infty \mathrm dr\ \delta(r)\frac{\sin (\kappa r)}{\kappa r} $$ which agrees with the previous result, if we take $$ \left.\frac{\sin x}{x}\right|_{x\to0}=1 $$
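For intuition, the $\kappa$-independence of this amplitude can be checked numerically by smearing the delta into a narrow half-Gaussian, a "nascent" delta (the widths and discretization below are my choices; units with $\hbar = m = a = 1$):

```python
import math

def born_amplitude(kappa, eps, a=1.0, m=1.0):
    """First-order Born f for V(r) = a * delta_eps(r) / (4 pi r^2), where
    delta_eps is a half-Gaussian of width eps that integrates to 1 on [0, inf)."""
    n = 20000
    R = 10.0 * eps           # integrate far enough past the peak
    dr = R / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        delta = math.sqrt(2.0 / math.pi) / eps * math.exp(-r * r / (2.0 * eps * eps))
        total += delta * math.sin(kappa * r) / (kappa * r) * dr
    return -(m * a / (2.0 * math.pi)) * total

# As eps -> 0, sin(kr)/(kr) -> 1 under the peak, so f -> -m a / (2 pi)
# for every kappa, i.e. independent of the scattering angle.
print(born_amplitude(0.5, 1e-3), born_amplitude(5.0, 1e-3), -1.0 / (2.0 * math.pi))
```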
{ "domain": "physics.stackexchange", "id": 38422, "tags": "quantum-mechanics, scattering, dirac-delta-distributions" }