Is most of the energy in the universe potential energy?
Question: So I asked a question about what would happen to the gravitational potential if I left Earth and then vaporized it. The answer I got was that the mass would still remain the same, and even if something is split up, the total amount of gravity it generates is linearly proportional to its mass. So if, no matter what, everything has gravitational potential relative to everything else, does that mean that the majority of energy in the universe is potential? Answer: Most energy in the universe is potential. Only when all mass has turned into photons, by evaporation of black holes, will the universe be energy dominated. If you talk about the kinetic and potential energy of massive particles, the situation is tricky. Galaxies have velocity and rotate, and so do stars and planets. But all their mass contains gravitational potential energy too. The three basic forces of nature keep them from collapsing and thereby gaining kinetic energy. The Earth got its kinetic energy from gravity, and it has a lot of potential energy as well as kinetic energy. That potential energy would be released if the EM forces didn't keep its parts from collapsing. How big is the potential energy of the Earth? Now, the total kinetic energy of all masses in the universe is the negative of all the gravitational potential energy. But there is still a lot of potential energy stored in all massive, gravitationally bound objects. To know how big this stored energy is in comparison to the kinetic energy present, you would need to calculate all the gravitational potential energies and all the kinetic energies. I'm not sure you can reason out what their ratio is. But both are much smaller than the energy equivalent of all the mass present, i.e. about $10^{53}\,\mathrm{kg}$ of mass.
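As a rough numerical illustration of "How big is the potential energy of the Earth?", here is a minimal sketch using the uniform-density approximation $U = 3GM^2/(5R)$ for the gravitational binding energy, compared with the rest-mass energy $Mc^2$ (the formula and constants are standard, but this is only an order-of-magnitude check, not a claim from the answer above):

```python
# Order-of-magnitude check: gravitational binding energy of a uniform-density
# Earth, U = 3*G*M^2/(5*R), versus its rest-mass energy M*c^2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
R = 6.371e6     # radius of Earth, m
c = 2.998e8     # speed of light, m/s

U = 3 * G * M**2 / (5 * R)   # ~2.2e32 J
E_rest = M * c**2            # ~5.4e41 J

# The stored potential energy is roughly ten orders of magnitude smaller
# than the energy equivalent of the mass itself.
print(U, E_rest, U / E_rest)
```

So even for a tightly bound object like the Earth, the stored gravitational potential energy is tiny compared with the rest-mass energy, which is the answer's closing point.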
{ "domain": "physics.stackexchange", "id": 86587, "tags": "energy, cosmology, gravity, potential-energy, universe" }
Construct Pandas DataFrame whose (i,j)th entry is the probability that a person aged i+100 will still be alive after j years
Question: The following function takes the probability of a person aged 100+i dying in the next year (conditional on them being alive at the start of the year) and returns the probability that they will be alive after j years (for 0 < j < 21):

import pandas as pd


def prob_in_force():
    # i-th entry of qx_curve gives the probability that a life aged i+100 (for 1 < i < 21) will die in the next year
    # (conditional on being alive at the start of the year)
    qx_curve = pd.Series([0.378702, 0.402588, 0.42709, 0.452127, 0.477608, 0.503432,
                          0.529493, 0.555674, 0.581857, 0.607918, 0.633731, 0.659171,
                          0.684114, 0.708442, 0.732042, 0.754809, 0.776648, 0.797477,
                          0.817225, 1],
                         index=range(101, 121))

    # i-th entry of px_curve gives the probability that a life aged i+100 (for 1 < i < 21) will NOT die in the next
    # year (conditional on being alive at the start of the year)
    px_curve = (1 - qx_curve).to_list()

    # Construct a DataFrame whose (i, j)th entry is the probability that a life aged i+j+100 (for 1 < i < 21) will NOT
    # die in the next year (conditional on being alive at the start of the year)
    df_px_arr = pd.DataFrame([px_curve[i:] + [0] * i for i in range(1, 21)],
                             index=[x for x in range(101, 121)])

    # Calculate the cumulative product of the px values in each row of the DataFrame constructed above. The (i, j)th
    # entry of this DataFrame is the probability that a life aged i+100 (for 1 < i < 21) will still be alive after j
    # years
    return df_px_arr.cumprod(axis=1)


if __name__ == '__main__':
    print(prob_in_force())

In practice, the qx curve is read from a CSV file, so the instantiation of this variable can be ignored for the purpose of improving this code. The line that I suspect leaves the most room for improvement is

df_px_arr = pd.DataFrame([px_curve[i:] + [0] * i for i in range(1, 21)],
                         index=[x for x in range(101, 121)])

I suspect that both the speed and readability of this line leave room for improvement. Any advice appreciated.
Answer: As in the comments, I don't trust that it's correct for you to skip the first element of your input, but for now I'll assume that this is intentional. You don't need Pandas for any of this, and cutting straight to Numpy is possible (though you'll later see that this isn't always beneficial). The input and output can be trivially converted from and to Pandas if needed. You're correct in thinking that there are speed concerns in this code: list comprehensions are often the death of performance in Pandas/Numpy, and a fully-vectorised version is possible. The strided version I've demonstrated takes some spooky shortcuts, constructing a triangularised two-dimensional matrix with low overhead. Numerically, what I show here is equivalent, verified with regression tests and inspection. Functionally, it doesn't produce a DataFrame; about the only thing you were using in Pandas that would be worth reintroducing after the fact is an age index, though you haven't shown any information on its use. Of course, as with anything performance-related, measuring the results is critical, and - very interestingly - the Numpy cumprod-based methods, while they have very low startup cost, scale poorly at O(n^2), whereas the original method and @tdy's method scale linearly. If you have fewer than ~1000 elements, one of the Numpy methods is best; otherwise, use one of the Pandas methods.
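To make the "spooky" stride trick less mysterious, here is a minimal sketch (assuming numpy is available) of the same view construction on a tiny buffer: a length-2n array whose tail is zero, re-viewed so that each row starts one element later than the previous one, giving a zero-padded triangular matrix without copying:

```python
import numpy as np

# A length-2n buffer whose tail is zeros; as_strided with a one-item offset
# per row produces shifted views that triangularise without copying.
n = 4
buf = np.zeros(2 * n)
buf[:n] = [1, 2, 3, 4]

itemsize = buf.dtype.itemsize
rows = np.lib.stride_tricks.as_strided(buf, shape=(n, n),
                                       strides=(itemsize, itemsize))
print(rows)
# [[1. 2. 3. 4.]
#  [2. 3. 4. 0.]
#  [3. 4. 0. 0.]
#  [4. 0. 0. 0.]]
```

Each row is a view into the same buffer, which is why the doubled-width zero padding matters: without it, the lower-right entries would read out-of-bounds memory instead of zeros.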
Suggested

from timeit import timeit

import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from numpy.random import default_rng


def prob_in_force_old(qx_curve: np.ndarray) -> np.ndarray:
    # i-th entry of px_curve gives the probability that a life aged i+100 (for 1 < i < 21) will NOT die in the next
    # year (conditional on being alive at the start of the year)
    px_curve = (1 - qx_curve).to_list()

    # Construct a DataFrame whose (i, j)th entry is the probability that a life aged i+j+100 (for 1 < i < 21) will NOT
    # die in the next year (conditional on being alive at the start of the year)
    df_px_arr = pd.DataFrame([px_curve[i:] + [0] * i for i in range(1, 21)],
                             index=[x for x in range(101, 121)])

    # Calculate the cumulative product of the px values in each row of the DataFrame constructed above. The (i, j)th
    # entry of this DataFrame is the probability that a life aged i+100 (for 1 < i < 21) will still be alive after j
    # years
    return df_px_arr.cumprod(axis=1)


def prob_in_force_rowwise(qx_curve: np.ndarray) -> np.ndarray:
    n = len(qx_curve)
    prod = np.zeros((n, n), dtype=qx_curve.dtype)
    px_curve: np.ndarray = 1 - qx_curve[1:]
    for y in range(0, n-1):
        px_curve[y:].cumprod(out=prod[y, :n-y-1])
    return prod


def prob_in_force_strided(qx_curve: np.ndarray) -> np.ndarray:
    n = len(qx_curve)
    # Tricky: make an array of double the width, so that when we broadcast-triangularise,
    # the lower right fills with zeros.
    px_curve = np.zeros(2*n, dtype=qx_curve.dtype)
    px_curve[:n-1] = 1 - qx_curve[1:]

    # Broadcast-triangularise. This is an efficient view construction that
    # should not take up any additional memory.
    broadcasted = np.broadcast_to(px_curve, (n, 2*n))
    bytes = broadcasted.dtype.itemsize
    slid = np.lib.stride_tricks.as_strided(px_curve, shape=(n, n), strides=(bytes, bytes))
    return slid.cumprod(axis=1)


def prob_in_force_tdy(qx_curve):
    return pd.DataFrame({
        i - 1: qx_curve.rsub(1).shift(-i, fill_value=0)
        for i in range(1, 21)
    }).cumprod(axis=1)


def test_regression() -> None:
    def isclose(a, b):
        assert np.allclose(a, b, rtol=0, atol=1e-9)

    # i-th entry of qx_curve gives the probability that a life aged i+100 (for 1 < i < 21) will die in the next year
    # (conditional on being alive at the start of the year)
    as_array = np.array((0.378702, 0.402588, 0.42709, 0.452127, 0.477608, 0.503432,
                         0.529493, 0.555674, 0.581857, 0.607918, 0.633731, 0.659171,
                         0.684114, 0.708442, 0.732042, 0.754809, 0.776648, 0.797477,
                         0.817225, 1))
    as_series = pd.Series(as_array)

    for method in (prob_in_force_old, prob_in_force_strided, prob_in_force_rowwise, prob_in_force_tdy):
        qx_curve = (
            as_array if method in (prob_in_force_strided, prob_in_force_rowwise)
            else as_series
        )
        result = method(qx_curve)
        if isinstance(result, pd.DataFrame):
            result = result.values
        assert result.shape == (20, 20)
        isclose(0, result.min())
        isclose(0.597412, result.max())
        isclose(0.02894936687892334, result.mean())
        isclose(11.579746751569337, result.sum())


def test_performance() -> None:
    times = []
    rand = default_rng(seed=0)
    n_values = np.round(10**np.linspace(0.5, 5, 50))
    methods = {
        prob_in_force_old,
        prob_in_force_strided,
        prob_in_force_rowwise,
        prob_in_force_tdy,
    }

    for n in n_values:
        n = int(n)
        times.append(('ideal_n', n, (n/1000) * 5e-3))
        if 1e2 <= n <= 1e4:
            times.append(('ideal_n2', n, (n/1000)**2 * 5e-3))

        as_array = rand.random(n)
        as_series = pd.Series(as_array)

        slow = set()
        for method in methods:
            qx_curve = (
                as_array if method in (prob_in_force_strided, prob_in_force_rowwise)
                else as_series
            )

            def run():
                method(qx_curve)

            t = timeit(run, number=1)
            times.append((method.__name__, n, t))
            if t > 0.2:
                slow.add(method)
        methods -= slow

    df = pd.DataFrame(times, columns=('method', 'n', 't'))
    fig, ax = plt.subplots()
    sns.lineplot(data=df, x='n', y='t', hue='method', ax=ax)
    ax.set(xscale='log', yscale='log')
    plt.show()


if __name__ == '__main__':
    test_regression()
    test_performance()
{ "domain": "codereview.stackexchange", "id": 42916, "tags": "python, pandas" }
What is the sparse Fourier transform?
Question: MIT has been making a bit of noise lately about a new algorithm that is touted as a faster Fourier transform that works on particular kinds of signals, for instance: "Faster Fourier transform named one of world’s most important emerging technologies". The MIT Technology Review magazine says: With the new algorithm, called the sparse Fourier transform (SFT), streams of data can be processed 10 to 100 times faster than was possible with the FFT. The speedup can occur because the information we care about most has a great deal of structure: music is not random noise. These meaningful signals typically have only a fraction of the possible values that a signal could take; the technical term for this is that the information is "sparse." Because the SFT algorithm isn't intended to work with all possible streams of data, it can take certain shortcuts not otherwise available. In theory, an algorithm that can handle only sparse signals is much more limited than the FFT. But "sparsity is everywhere," points out coinventor Katabi, a professor of electrical engineering and computer science. "It's in nature; it's in video signals; it's in audio signals." Could someone here provide a more technical explanation of what the algorithm actually is, and where it might be applicable? EDIT: Some links: The paper: "Nearly Optimal Sparse Fourier Transform" (arXiv) by Haitham Hassanieh, Piotr Indyk, Dina Katabi, Eric Price. Project website - includes sample implementation. Answer: The idea of the algorithm is this: assume you have a length $N$ signal that is sparse in the frequency domain. This means that if you were to calculate its discrete Fourier transform, there would be a small number of outputs $k \ll N$ that are nonzero; the other $N-k$ are negligible. One way of getting at the $k$ outputs that you want is to use the FFT on the entire sequence, then select the $k$ nonzero values. 
The sparse Fourier transform algorithm presented here is a technique for calculating those $k$ outputs with lower complexity than the FFT-based method. Essentially, because $N-k$ outputs are zero, you can save some effort by taking shortcuts inside the algorithm to not even generate those result values. While the FFT has a complexity of $O(n \log n)$, the sparse algorithm has a potentially-lower complexity of $O(k \log n)$ for the sparse-spectrum case. For the more general case, where the spectrum is "kind of sparse" but there are more than $k$ nonzero values (e.g. for a number of tones embedded in noise), they present a variation of the algorithm that estimates the $k$ largest outputs, with a time complexity of $O(k \log n \log \frac{n}{k})$, which could also be less complex than the FFT. According to one graph of their results (reproduced in the image below), the crossover point for improved performance with respect to FFTW (an optimized FFT library, made by some other guys at MIT) is around the point where only $\frac{1}{2^{11}}$-th to $\frac{1}{2^{10}}$-th of the output transform coefficients are nonzero. Also, in this presentation they indicate that the sparse algorithm provides better performance when $\frac{N}{k} \in [2000, 10^6]$. These conditions do limit the applicability of the algorithm to cases where you know there are likely to be few significantly-large peaks in a signal's spectrum. One example that they cite on their Web site is that on average, 8-by-8 blocks of pixels often used in image and video compression are almost 90% sparse in the frequency domain and thus could benefit from an algorithm that exploited that property. That level of sparsity doesn't seem to square with the application space for this particular algorithm, so it may just be an illustrative example. 
I need to read through the literature a bit more to get a better feel for how practical such a technique is for use on real-world problems, but for certain classes of applications, it could be a fit.
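As a rough illustration of the sparsity assumption the algorithm exploits (not of the SFT algorithm itself), assuming numpy is available: a signal built from k complex tones at exact DFT bin frequencies has a transform with exactly k non-negligible outputs, while the other N - k are at the level of floating-point noise.

```python
import numpy as np

N = 4096   # signal length
k = 5      # number of tones; the spectrum is k-sparse for this complex signal
rng = np.random.default_rng(0)

# Build a sum of complex exponentials at k randomly chosen frequency bins
bins = rng.choice(N, size=k, replace=False)
n = np.arange(N)
x = sum(np.exp(2j * np.pi * f * n / N) for f in bins)

X = np.fft.fft(x)
significant = np.flatnonzero(np.abs(X) > 1e-6 * np.abs(X).max())

# Only the k chosen bins carry energy; the remaining N - k outputs are
# negligible (pure roundoff), which is the structure a sparse transform exploits
print(len(significant))  # 5
```

A dense FFT spends O(N log N) work computing all N outputs here; the point of the sparse algorithm is to spend work proportional to k (up to log factors) instead.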
{ "domain": "dsp.stackexchange", "id": 11682, "tags": "fourier-transform, sparsity, sparse-fourier-transform" }
Boolean expression to logic gates
Question: Hello, I need help with xy xor z: do I do the logical AND first, or the XOR? There are no parentheses. Thank you. Answer: Usually, an operator that's written implicitly by just putting the operands next to each other (e.g., "$xy$" for $x\land y$ in logic or $x\times y$ in arithmetic) has precedence over binary operators that are written out explicitly (e.g., $\lor$ or $\oplus$ in logic, or $+$ in arithmetic). Indeed, any author who uses a different convention to that needs a good stern talking to. But note that it is only a convention. So the expression in the question means $(xy)\oplus z$, unless the person who wrote it hates us. This view is reinforced by the fact that logical AND is multiplication modulo 2 and XOR is addition modulo 2, and we'd expect multiplication to have precedence over addition.
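The mod-2 arithmetic point is easy to check exhaustively: over all eight input combinations, AND-before-XOR agrees with multiplication-before-addition modulo 2. A minimal sketch:

```python
from itertools import product

# Check that (x AND y) XOR z equals (x*y + z) mod 2 on all eight inputs,
# i.e. AND behaves like multiplication mod 2 and XOR like addition mod 2.
for x, y, z in product((0, 1), repeat=3):
    gates = (x & y) ^ z              # (xy) ⊕ z, with AND applied first
    arithmetic = (x * y + z) % 2     # multiply-then-add, mod 2
    assert gates == arithmetic

print("AND-before-XOR agrees with mod-2 arithmetic on all 8 inputs")
```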
{ "domain": "cs.stackexchange", "id": 8927, "tags": "boolean-algebra" }
get the raw kinect data
Question: In OpenNI, I think sensor_msgs encodes the data, because rostopic echo on /camera/depth/image and /camera/depth/disparity publishes not 16-bit or 11-bit values, just 0~255. I think this data is not the actual depth data (distance). I want to know the depth value, but I don't know how to get it. I know the relation (Z = fT/d), and I think the formula should be applied to the raw data. How can I get the raw Kinect sensor data? Or how can I get the actual depth value? Originally posted by ha on ROS Answers with karma: 1 on 2012-08-28 Post score: 0 Original comments Comment by ha on 2012-08-29: I think the sensor_msgs/Image message format is 8-bit, and via cv_bridge it is 16-bit or 32-bit. Does going from 8-bit to 32-bit lose accuracy? Answer: Looking at the openni_camera documentation on the ROS wiki, you can see that the openni_node broadcasts several topics. I think for the data that you are looking for, the topics of interest will be the depth/image_raw messages, or depth_registered/image_raw if you have the OpenNI registration turned on. The */image_raw messages are in the sensor_msgs/Image message format, which stores the data in a byte array that can then be reassembled in the subscriber. This is accomplished using cv_bridge, which allows you to write code like this for your callback:

void imageCb(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr;
  try
  {
    cv_ptr = cv_bridge::toCvCopy(msg, enc::TYPE_32FC1);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  cv::imshow(WINDOW, cv_ptr->image);
  image_pub_.publish(cv_ptr->toImageMsg());
}

cv_ptr->image will contain an OpenCV cv::Mat, in this case containing floating point values, which represent the depth in millimeters for that pixel.
Originally posted by mjcarroll with karma: 6414 on 2012-08-29 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by shinnqy on 2014-05-21: Can you tell how to access the floating point values in cv_ptr->image?
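As a side note on the Z = fT/d relation mentioned in the question, here is a minimal numeric sketch. The focal length and baseline below are rough, assumed Kinect-like values chosen for illustration, not numbers read from the driver or calibration:

```python
# Depth from disparity, Z = f*T/d, with assumed Kinect-like parameters.
f = 580.0   # focal length in pixels (assumed, approximate for the IR camera)
T = 0.075   # baseline in metres (assumed, approximate)

def depth_from_disparity(d):
    """Return depth Z in metres for a disparity d in pixels (d > 0)."""
    return f * T / d

# Larger disparity means a closer object; halving the disparity doubles Z
print(depth_from_disparity(43.5))   # 1.0
print(depth_from_disparity(21.75))  # 2.0
```

This also shows why an 8-bit 0~255 rendering of the disparity loses precision: the driver's raw disparity has more levels than 256, so the depth computed from the quantised image would be coarser than what the raw topics provide.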
{ "domain": "robotics.stackexchange", "id": 10795, "tags": "ros, data" }
CMakeLists: only link library if Raspberry Pi (or equivalent condition)
Question: I have a package that I want to run on both a desktop/laptop (x64) and a Raspberry Pi (armhf). There's a node in my package that has to be run on the Raspberry Pi. It uses the wiringPi library. Obviously, doing catkin_make on something other than an RPi results in an error, so at the moment I just comment out those lines. I'd like to find a way to only execute target_link_libraries() if the current computer/architecture/distro matches that of the RPi. Originally posted by christophebedard on ROS Answers with karma: 641 on 2017-03-11 Post score: 0 Original comments Comment by gvdhoorn on 2017-03-12: @Chris_: this is really a CMake question, and not related to ROS per se (catkin == essentially CMake in this case). I'd advise you to do a (google) search for keywords like "cmake detect architecture" or "cmake detect raspberry pi". You'll probably find a solution much faster that way. Comment by christophebedard on 2017-03-12: @gvdhoorn yeah I figured it wasn't strictly ROS-related, but I thought I'd see if people went through the same stuff here. Thanks for the suggestion! Answer: You can add an if/else/endif clause in your CMakeLists.txt using a self-defined variable, e.g. IS_RASP, with different target_link_libraries commands in the if and else branches. You can then call catkin_make without additional arguments on your host, and with catkin_make -DIS_RASP=1 on your device. Note that you can pass command-line arguments to CMake when calling it via catkin_make using -D, just like you would in plain cmake. Originally posted by Wolf with karma: 7555 on 2017-03-12 This answer was ACCEPTED on the original site Post score: 1
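A minimal sketch of what such a guard could look like in CMakeLists.txt; IS_RASP matches the variable name suggested in the answer, while the target name my_node and its source file are placeholders, not names from the question's package:

```cmake
# Defaults to OFF on the desktop; on the Pi, build with:
#   catkin_make -DIS_RASP=1
option(IS_RASP "Building for/on the Raspberry Pi" OFF)

add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node ${catkin_LIBRARIES})

# Only link wiringPi when the flag was passed on the command line
if(IS_RASP)
  target_link_libraries(my_node wiringPi)
endif()
```

If the wiringPi-dependent code itself also fails to compile on x64, the same variable can additionally guard the add_executable() call or set a compile definition.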
{ "domain": "robotics.stackexchange", "id": 27285, "tags": "catkin-make, cmake" }
Flattening output before calculating metrics
Question: I use scikit-learn to calculate precision, recall and f1 scores, which only accept 1D arrays, but my model's outputs are 2D (binary segmentation maps). My question is: is it OK to simply flatten the outputs, or is there some other function I should use to calculate the metrics in my case? Answer: You need to evaluate the percentage of pixels in the output map that were correctly classified, and it doesn't matter whether the map is represented as 2D or 1D; the only requirement is that the mask's pixels and the ground-truth pixels match each other. You can find an example about Pixel Accuracy here. So, if you would like, you can flatten (stretch the 2D map) into a 1D vector. However, you can also implement the computation of precision and recall yourself. If you use Python, it's easy to leverage constructions like np.sum(mask), np.sum(target), or np.sum(mask*target) to calculate it directly in 2D:

TP = np.sum(mask*target)
FP = np.sum(mask*np.where(target == 0, 1, 0))
FN = np.sum(np.where(mask == 0, 1, 0)*target)

Precision = TP / ( TP + FP )
Recall = TP / ( TP + FN )
F1_measure = 2 * Precision * Recall / (Precision + Recall)
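A small sketch (assuming numpy is available) showing that the manual 2D computation and the flattened 1D computation agree on a toy mask/target pair, which is the reason flattening before calling scikit-learn is safe:

```python
import numpy as np

# Toy 2x3 binary segmentation mask and ground truth
mask = np.array([[1, 0, 1],
                 [0, 1, 1]])
target = np.array([[1, 1, 0],
                   [0, 1, 1]])

# Pixel-wise counts computed directly on the 2D arrays
TP = np.sum(mask * target)                        # 3
FP = np.sum(mask * np.where(target == 0, 1, 0))   # 1
FN = np.sum(np.where(mask == 0, 1, 0) * target)   # 1

precision = TP / (TP + FP)   # 0.75
recall = TP / (TP + FN)      # 0.75

# Flattening changes nothing: the sums run over exactly the same pixels
assert TP == np.sum(mask.ravel() * target.ravel())
print(precision, recall)
```

The flattened vectors mask.ravel() and target.ravel() are what you would pass to scikit-learn's precision_score and recall_score, and they yield the same values as the manual 2D sums.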
{ "domain": "datascience.stackexchange", "id": 5992, "tags": "scikit-learn" }
A Language Bot: For creating noun declension or verb conjugation tables
Question: I'm a freshman student. What does my program solve? This is actually a bot for Reddit. There are language-learning subreddits on that website, and sometimes when discussing something with people, we may need to show declension or conjugation tables to them; instead of sending a link, this bot will help you comment them automatically. What does my code do, in my opinion? It only works for the Lithuanian language. There is a website for declensions and conjugations of Lithuanian, and my program goes to that website, scrapes the HTML table off of it, formats it into a table, and then replies in the comments. Why do I want a review? Well, actually, this was supposed to be a learning project. I want to know the weaknesses in my code, the stuff I've done wrong, or the parts that I could code better. It does work for sure, but I don't think that's enough to be a good programmer. Is my code clean enough? Is my code DRY enough or not? Is it slow? Anything broken or wrong? Any suggestions? Did I separate the folders correctly?
You can find all the source code here on github

This is how a comment looks like

This is how my directory looks like:

>src
  >config
    >lietuvos-config.js
  >service
    >bot.js
  >util
    >extract-word.js
    >get-table.js
    >pretty-stringified.js
  >app.js
>package.lock
>readme.md
>package.json

extract-word.js

const extractor = function(query){
    const pattern = /<[a-zA-Z]+>/; //only works with latin letters for now, to be updated
    const stringToSplit = query; //make it case insensitive
    const extractedWord = stringToSplit.match(pattern)
    extractedWord[0] = extractedWord[0].replace("<","")
    extractedWord[0] = extractedWord[0].replace(">","")
    return extractedWord
}

module.exports={
    extractor: extractor
}

get-table.js

var scraper = require('table-scraper');
const {prettifier} = require('../util/pretty-stringified')

const tableOfContent = function(query){
    return scraper.get('https://morfologija.lietuviuzodynas.lt/zodzio-formos/'+query)
        .then(function(tableData){
            console.log(tableData)
            return prettifier(tableData).toString() //JSON.stringify(tableData)
        })
        .catch((error)=>{
            return "error"
        })
}

module.exports = {
    tableOfContent: tableOfContent
}

pretty-stringified.js

const tablemark = require('tablemark')

const prettifier = function(query){
    const decider = nounOrVerb(query);
    if(decider == "noun"){
        return declineNouns(query)
    }else if(decider == "verb"){
        return conjugateVerbs(query)
    }else if(decider == "adjective"){
        return declineAdjectives(query)
    }
}

const convertArrays = function(array){
    var newArr = [];
    for(var i = 0; i < array.length; i++){
        newArr = newArr.concat(array[i]);
    }
    return newArr
}

const declineNouns = function (query){
    const newArr = convertArrays(query)
    var tidiedArr = []
    for (var i = 0; i < newArr.length; i++) {
        tidiedArr.push(newArr[i]["\Š\."])
        tidiedArr.push(newArr[i].Vienaskaita)
    }
    var redditTable = tablemark([
        {Form:"**V.**",Vienaskaita:tidiedArr[0],Daugiskaita:tidiedArr[1]},
        {Form:"**K.**",Vienaskaita:tidiedArr[2],Daugiskaita:tidiedArr[3]},
        {Form:"**K.**",Vienaskaita:tidiedArr[4],Daugiskaita:tidiedArr[5]},
        {Form:"**G.**",Vienaskaita:tidiedArr[6],Daugiskaita:tidiedArr[7]},
        {Form:"**Įn.**",Vienaskaita:tidiedArr[8],Daugiskaita:tidiedArr[9]},
        {Form:"**Vt.**",Vienaskaita:tidiedArr[10],Daugiskaita:tidiedArr[11]},
        {Form:"**Š.**",Vienaskaita:tidiedArr[12],Daugiskaita:tidiedArr[13]},
    ])
    return redditTable;
}

const conjugateVerbs = function(query){
    const newArr = convertArrays(query)
    var tidiedArr = []
    for (var i = 0; i < newArr.length; i++) {
        tidiedArr.push(newArr[i]["Jie/jos"])
        tidiedArr.push(newArr[i]["Esamasis laikas"])
        tidiedArr.push(newArr[i]["Būtasis kartinis laikas"])
        tidiedArr.push(newArr[i]["Būtasis dažninis"])
    }
    var redditTable = tablemark([
        {Įvardis:"**Aš**","**Esamasis laikas**":tidiedArr[0],"**Būtasis kartinis laikas**":tidiedArr[1],"**Būtasis dažninis**":tidiedArr[2],"**Būsimasis laikas**":tidiedArr[3]},
        {Įvardis:"**Tu**","**Esamasis laikas**":tidiedArr[4],"**Būtasis kartinis laikas**":tidiedArr[5],"**Būtasis dažninis**":tidiedArr[6],"**Būsimasis laikas**":tidiedArr[7]},
        {Įvardis:"**Jis/ji**","**Esamasis laikas**":tidiedArr[8],"**Būtasis kartinis laikas**":tidiedArr[9],"**Būtasis dažninis**":tidiedArr[10],"**Būsimasis laikas**":tidiedArr[11]},
        {Įvardis:"**Mes**","**Esamasis laikas**":tidiedArr[12],"**Būtasis kartinis laikas**":tidiedArr[13],"**Būtasis dažninis**":tidiedArr[14],"**Būsimasis laikas**":tidiedArr[15]},
        {Įvardis:"**Jūs**","**Esamasis laikas**":tidiedArr[16],"**Būtasis kartinis laikas**":tidiedArr[17],"**Būtasis dažninis**":tidiedArr[18],"**Būsimasis laikas**":tidiedArr[19]},
        {Įvardis:"**Jie/jos**","**Esamasis laikas**":tidiedArr[20],"**Būtasis kartinis laikas**":tidiedArr[21],"**Būtasis dažninis**":tidiedArr[22],"**Būsimasis laikas**":tidiedArr[23]},
    ])
    return redditTable;
}

const declineAdjectives = function(query){
    const newArr = convertArrays(query)
    var tidiedArr = []
    for (var i = 0; i < newArr.length; i++) {
        tidiedArr.push(newArr[i]["\Š\."])
        tidiedArr.push(newArr[i].Vienaskaita)
        tidiedArr.push(newArr[i].Daugiskaita)
        tidiedArr.push(newArr[i]["Vienaskaita_2"])
    }
    var redditTable = tablemark([
        {Form: "",name:"**Vienaskaita**",name2:"**Daugiskaita**",name3:"**Vienaskaita**",name4:"**Daugiskaita**"},
        {Form:"**V.**",name:tidiedArr[0],name2:tidiedArr[1],name3:tidiedArr[2],name4:tidiedArr[3]},
        {Form:"**K.**",name:tidiedArr[4],name2:tidiedArr[5],name3:tidiedArr[6],name4:tidiedArr[7]},
        {Form:"**K.**",name:tidiedArr[8],name2:tidiedArr[9],name3:tidiedArr[10],name4:tidiedArr[11]},
        {Form:"**G.**",name:tidiedArr[12],name2:tidiedArr[13],name3:tidiedArr[14],name4:tidiedArr[15]},
        {Form:"**Įn.**",name:tidiedArr[16],name2:tidiedArr[17],name3:tidiedArr[18],name4:tidiedArr[19]},
        {Form:"**Vt.**",name:tidiedArr[20],name2:tidiedArr[21],name3:tidiedArr[22],name4:tidiedArr[23]},
        {Form:"**Š.**",name:tidiedArr[24],name2:tidiedArr[25],name3:tidiedArr[26],name4:tidiedArr[27]},
    ],{
        columns:[
            "Form",
            {name: "Vyriškoji giminė"},
            {name:" "},
            {name: "Moteriškoji giminė"},
            {name:" "}
        ]
    })
    return redditTable;
}

const nounOrVerb = function(array){
    try{
        const listOfTable = Object.keys(array[0][0]);
        if(listOfTable[0] == "Jie/jos"){
            return "verb"
        }else if(listOfTable[3] == "Vienaskaita_2"){
            return "adjective"
        }else if(listOfTable[0] == "Š."){
            return "noun"
        }
    }catch{
        return "Error"
    }
}

module.exports = {
    prettifier:prettifier
}

bot.js

const {comments} = require('../config/lietuvos-config')
const {extractor} = require('../util/extract-word')
const {tableOfContent} = require('../util/get-table')

const BOT_START = Date.now() / 1000;

const canSummon = (msg) => {
    /*if(msg){
        msg.toLowerCase().includes('!inspect'); //function
        return;
    }
    return*/
    return msg && msg.toLowerCase().includes('!inspect');
};

const commenting = function(){
    comments.on('item', async (item) => {
        try{
            var replyString = "labas u/"+item.author.name+"! esu robotas ir pateikiu lentelę apie žodžius, veiksmažodžius, būdvardžius."+
                " jei prieš nieko, čia tavo žodžio lentelė. net galite sužinot daugiau čia ^[šaltinis](https://morfologija.lietuviuzodynas.lt/zodzio-formos/"+extractedWord+
                ") \n\n "
            var errorReply= "labas u/"+item.author.name+"! esu bandęs rasti žodį, kurį rašėi, atsiprašau ir dėja, bet negalėjau rasti. "+
                "gal tas žodis neegzistuoja lietuvių kalboj, rašėi neteisingai - arba yra klaida mano kode.\n šiaip ar taip, galite bandyt rast savarankiškai čia ^[žodynas](https://morfologija.lietuviuzodynas.lt/"
            var replyStringEnder = " \n\n \*\*\* \n ^feel ^free ^to ^report ^bugs ^or ^errors\n ^\[[source-code]\](https://github.com/wulfharth7/lietuvos-robotas) ^| ^\[[buy-me-a-coffee☕]\](https://www.buymeacoffee.com/beriscen)"
            if(item.created_utc < BOT_START) return;
            if(!canSummon(item.body)) return;
            var extractedWord = extractor(item.body)
            tableOfContent(extractedWord).then(function(tableofLog){
                if(tableofLog !== "error"){
                    item.reply(replyString + tableofLog + replyStringEnder)
                }else{
                    item.reply(errorReply + replyStringEnder)
                }
            })
        }catch(Error){
            var errorReply= "labas u/"+item.author.name+"! esu bandęs rasti žodį, kurį rašėi, atsiprašau ir dėja, bet negalėjau rasti. "+
                "gal tas žodis neegzistuoja lietuvių kalboj, rašėi neteisingai - arba yra klaida mano kode.\n\n šiaip ar taip, galite bandyt rast savarankiškai čia ^[žodynas](https://morfologija.lietuviuzodynas.lt/)"
            item.reply(errorReply + replyStringEnder)
        }
    });
}

module.exports={
    commenting: commenting
}

Answer:

extract-word.js

Well, you're clearly already aware that it only working with latin letters is an issue, but I'll point it out anyway. Since, y'know, it excludes a few letters used in Lithuanian.

By the way, the regular expression can be made case-insensitive with the i flag, like const pattern = /<[a-z]+>/i

stringToSplit is a bit redundant - it contains exactly the same content as query, so we may as well operate on query directly.

We can clean up the extracting a bit by using capturing groups.
If we do const pattern = /<([a-z]+)>/i, the parens define a group, and we can access that group as extractedWord[1] All in all, that function could be boiled down into a one-liner like return query.match(/<([a-z]+)>/i)[1];. Or perhaps return query.match(/<([a-ząčęėįšųūž]+)>/i)[1]; to add some letters (I hope those are the right ones). More sensible might be to keep the pattern on a separate line, like: const extractor = function(query){ const pattern = /<([a-ząčęėįšųūž]+)>/i; return query.match(pattern)[1]; } module.exports={ extractor: extractor } get-table.js I do wonder if returning "error" in case of failure is the most useful behaviour. Would it not be more convenient to just... let the failure remain a failure, to make it easier for the caller to detect and clean up? Since bot.js already has a catch block that provides an error message, it might be best to just... let it deal with errors from here as well Then we could have this file looking closer to var scraper = require('table-scraper'); const {prettifier} = require('../util/pretty-stringified') const tableOfContent = function(query) { return scraper.get('https://morfologija.lietuviuzodynas.lt/zodzio-formos/' + query) .then(function(tableData) { console.log(tableData); return prettifier(tableData).toString(); }); } module.exports = { tableOfContent: tableOfContent } pretty-stringified.js Well, for one, the name of nounOrVerb is a bit misleading, since it has three possible results. For now. Maybe one day it might even have a fourth? Let's give it a name that leaves room for expansion. Something like const typeOfWord = getTypeOfWord(query); perhaps? Alternatively, instead of having a function return a string, and then branching based on what that string is... 
we could just return the function directly: const prettify = function(query) { const prettifier = choosePrettifier(query); return prettifier(query); } const choosePrettifier = function(table) { try { const listOfTable = Object.keys(table[0][0]); if (listOfTable[0] == "Jie/jos") { return conjugateVerbs; } else if (listOfTable[3] == "Vienaskaita_2") { return declineAdjectives; } else if (listOfTable[0] == "Š.") { return declineNouns; } } } We could even call it right there but... I don't know, I think that looks worse somehow. Which might be a sign that there's an even better solution that I'm missing right now. Moving on to the conjugate/decline functions, they seem to follow a somewhat odd pattern. They take an array containing some manner of structured objects, removes that structure by shoving their fields into a flat array, then re-adds structure by working on individual array indices. That feels a bit roundabout. For example, looking at declineNouns, it seems like something like this should work: const declineNouns = function(query) { const newArr = convertArrays(query); const forms = ["**V.**", "**K.**", "**K.**", "**G.**", "**Įn.**", "**Vt.**", "**Š.**"]; const tidiedArr = []; // There's probably some even neater functional way to do this, but I don't remember it right now for (var i = 0; i < newArr.length; ++i) { tidiedArr.push({Form: forms[i], Vienaskaita: newArr[i]["\Š\."], Daugiskaita: newArr[i].Vienaskaita}); } return tablemark(tidiedArr); } Now, I know that Daugiskaita: newArr[i].Vienaskaita part looks a bit questionable to me when a Vienaskaita key also exists, but the old code had Daugiskaita: tidiedArr[1], and tidiedArr[1] was set by tidiedArr.push(newArr[i].Vienaskaita), so I'm gonna assume that it's correct bot.js Commented-out code in canSummon should be deleted. Should we need it back for some reason, there's always version control. errorReply is defined the exact same way twice. 
I would suggest doing it just once by the start of the function instead. Since it depends on the comment author's username, we could either define it by the start of the callback passed to comments.on, or we could have a "template function" of sorts that just takes a username and spits out the correct error message. I kind of like the latter, but either works. extractedWord seems to be used to create replyString before it is actually set. Thanks to JS's scoping rules, I wouldn't be entirely surprised if it finds a word, but I would expect it to find a word an earlier commenter asked for instead. We'll probably want to make sure the reply is created after we have all the content that goes into it, whether that be by moving the definition later or by passing it to a function that slots it into a string. We may also be able to save some repetition by having only a single item.reply call towards the end. If we also go with the "throwing an exception instead of returning "error"" idea mentioned earlier, I'd probably do something not too far from the following: const commenting = function(){ let replyString = function(username, extractedWord) { return `labas u/${username}! esu robotas ir pateikiu lentelę apie žodžius, veiksmažodžius, būdvardžius.` + ` jei prieš nieko, čia tavo žodžio lentelė. net galite sužinot daugiau čia ^[šaltinis](https://morfologija.lietuviuzodynas.lt/zodzio-formos/${extractedWord}")`; }; let errorReply = function(username) { return `labas u/${username}! esu bandęs rasti žodį, kurį rašėi, atsiprašau ir dėja, bet negalėjau rasti.
` + "gal tas žodis neegzistuoja lietuvių kalboj, rašėi neteisingai - arba yra klaida mano kode.\n šiaip ar taip, galite bandyt rast savarankiškai čia ^[žodynas](https://morfologija.lietuviuzodynas.lt/"; } let replyStringEnder = " \n\n \*\*\* \n ^feel ^free ^to ^report ^bugs ^or ^errors\n ^\[[source-code]\](https://github.com/wulfharth7/lietuvos-robotas) ^| ^\[[buy-me-a-coffee☕]\](https://www.buymeacoffee.com/beriscen)"; comments.on('item',async (item) => { let message; try { if(item.created_utc < BOT_START) return; if(!canSummon(item.body)) return; let extractedWord = extractor(item.body); const table = await tableOfContent(extractedWord); message = replyString(item.author.name, extractedWord) + table; } catch(err) { message = errorReply(item.author.name); } item.reply(message + replyStringEnder); }); }
{ "domain": "codereview.stackexchange", "id": 43751, "tags": "javascript, beginner, node.js, web-scraping" }
Is it possible to know in advance that Alpha Centauri has exploded?
Question: Alpha Centauri is 4.3 light years away. If it exploded suddenly, would we be able to know this in advance, given that the light from the supernova will not reach us for 4.3 years? Answer: Yes, you would get a few hours' warning from the intense pulse of gravitational waves and neutrinos caused by the core collapse. Incidentally, I should say that this does not actually apply to Alpha Cen, since this is a solar-type star that will never produce a supernova. The gravitational waves and neutrinos also travel at the speed of light (well, almost in the case of neutrinos), but they can escape promptly (within a few seconds of the core collapse), whereas the "fireball" that produces the electromagnetic signature takes several hours to work its way to the surface of the star.
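The "almost" for neutrinos can be made quantitative with a rough sketch (not from the answer; the neutrino mass and energy below are assumed, order-of-magnitude values): a relativistic particle of mass $m$ and energy $E \gg mc^2$ lags light by roughly $(D/c)\,m^2c^4/(2E^2)$ over a distance $D$.

```python
# Rough sketch (assumed illustrative values): how far behind light do
# supernova neutrinos arrive?  For E >> mc², the fractional speed deficit
# is (1 - v/c) ≈ m²c⁴ / (2E²), so the lag is the light travel time times that.

M_NU_EV = 0.1        # assumed neutrino mass scale, eV
E_NU_EV = 10e6       # typical core-collapse neutrino energy, ~10 MeV
DISTANCE_LY = 4.3    # distance to Alpha Centauri, light years
SECONDS_PER_YEAR = 3.156e7

travel_time_s = DISTANCE_LY * SECONDS_PER_YEAR   # light travel time in seconds
deficit = (M_NU_EV / E_NU_EV) ** 2 / 2           # (1 - v/c)
lag_s = travel_time_s * deficit

print(f"neutrino lag behind light: {lag_s:.2e} s")   # nanoseconds: negligible
```

So over 4.3 ly the lag is of order nanoseconds, which is why the hours of warning come from the shock taking hours to reach the stellar surface, not from any speed difference.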
{ "domain": "physics.stackexchange", "id": 64828, "tags": "speed-of-light, astrophysics, stellar-physics, supernova" }
How should I format my code to make it easier to read and understand in the future
Question: I am using this query to generate a conversation stream between user 184 and 192- SELECT events.event_time, messages . * FROM events , messages WHERE events.global_id=messages.global_ref_id AND (messages.to =184 AND messages.from =192) OR events.global_id=messages.global_ref_id AND (messages.to =192 AND messages.from =184) AND messages.global_ref_id < 495 ORDER BY `messages`.`global_ref_id` ASC This query is working good for generating conversation between two users. Can you help me with converting this query to something more readable. Because I believe it is just a workaround and will cause some problems in future. Schema of messages table Field Type global_ref_id int(12) to int(12) from int(12) message text status int(1) viewed int(1) where global_ref_id is foreign key. Answer: I would use a syntax like this: SELECT events.event_time, messages .* FROM events , messages WHERE events.global_id=messages.global_ref_id AND ((messages.to = 184 AND messages.from = 192) OR (messages.to = 192 AND messages.from = 184)) AND messages.global_ref_id < 495 ORDER BY `messages`.`global_ref_id` ASC or use a left outer join instead: SELECT e.event_time, m.* FROM events e left outer join messages m ON e.global_id=m.global_ref_id WHERE ((m.to = 184 AND m.from = 192) OR (m.to = 192 AND m.from = 184)) AND m.global_ref_id < 495 ORDER BY m.`global_ref_id` ASC
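The readability problem in the original query is really an operator-precedence bug waiting to happen: AND binds tighter than OR, so without the extra parentheses the `global_ref_id < 495` filter attaches to only one branch of the OR. A small sqlite3 sketch with made-up rows (table and column names follow the question, but the data is hypothetical) shows the difference:

```python
import sqlite3

# Toy sketch (hypothetical data): in SQL, AND binds tighter than OR, so the
# unparenthesized WHERE clause from the question applies the
# global_ref_id < 495 filter to only ONE branch of the OR.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (global_id INT, event_time TEXT);
    CREATE TABLE messages (global_ref_id INT, "to" INT, "from" INT, message TEXT);
    INSERT INTO events VALUES (1, 't1'), (2, 't2'), (500, 't500');
    INSERT INTO messages VALUES
        (1, 184, 192, 'hi'), (2, 192, 184, 'hello'), (500, 184, 192, 'late');
""")

# Original precedence: the ref-id filter only guards the second branch,
# so the 'late' message (ref 500) leaks through.
bad = con.execute("""
    SELECT e.event_time, m.message FROM events e, messages m
    WHERE e.global_id = m.global_ref_id AND (m."to" = 184 AND m."from" = 192)
       OR e.global_id = m.global_ref_id AND (m."to" = 192 AND m."from" = 184)
          AND m.global_ref_id < 495
    ORDER BY m.global_ref_id
""").fetchall()

# Parenthesized version from the answer: the filter applies to both branches.
good = con.execute("""
    SELECT e.event_time, m.message
    FROM events e JOIN messages m ON e.global_id = m.global_ref_id
    WHERE ((m."to" = 184 AND m."from" = 192) OR (m."to" = 192 AND m."from" = 184))
      AND m.global_ref_id < 495
    ORDER BY m.global_ref_id
""").fetchall()

print(bad)    # includes ('t500', 'late') — the ref-id filter was bypassed
print(good)   # only the two conversation rows below ref 495
```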
{ "domain": "codereview.stackexchange", "id": 2706, "tags": "php, mysql" }
The concavity of the Gibbs potential
Question: Excuse me, I have a question about the concavity of the Gibbs potential. It is known that the Gibbs potential must be a minimum at equilibrium under the constant pressure and temperature constraint. It makes me guess that the Gibbs potential is a convex function of P and T. However, mathematically, G must be a concave function of P and T. (It would be better if you are able to see Problem 1.10 of Plischke's book.) This makes me guess that the Gibbs potential has a maximum value at equilibrium, but I know that is not true. How can I understand the concavity of the Gibbs potential? Answer: It is known that the Gibbs potential must be a minimum at equilibrium under the constant pressure and temperature constraint. It makes me guess that the Gibbs potential is a convex function of P and T. This is the weak point. There is no such implication. The fundamental reason is that the minimum principle refers to a minimum with respect to variables expressing any possible constraint one could add to an equilibrium system at constant thermodynamic macrostate variables. It is not about a minimum of the thermodynamic potential with respect to its variables. In other words, the connection between the minimum principle for a thermodynamic potential and the sign of its second derivatives is not a straightforward application of the condition on the second derivatives at a point of minimum. It is more indirect. This is evident once we realize that the minimum principle holds for all the Legendre transforms of a thermodynamic potential, even if the definition of Legendre transforms used in Thermodynamics implies a convex/concave alternation.
The correct chain of implications is minimum principle $\color{orange}\Rightarrow$ strict convexity with respect to the extensive variables in all the unique equilibrium states (i.e., in the absence of phase coexistence) $\color{orange}\Rightarrow$ the thermodynamic potential depending only on extensive variables (i.e., the internal energy) is a strictly convex function of all its variables $\color{orange}\Rightarrow$ every Legendre transform of the internal energy with respect to one extensive variable is a strictly concave function of the conjugate variable of that extensive variable. I hope this clarifies why the concavity of the Gibbs free energy with respect to $P$ and $T$ does not contradict the minimum principle.
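As a sanity check on the claimed concavity (standard textbook identities, not part of the quoted answer), from $dG = -S\,dT + V\,dP$ one can read off the second derivatives directly:

```latex
\left(\frac{\partial^2 G}{\partial T^2}\right)_P
  = -\left(\frac{\partial S}{\partial T}\right)_P = -\frac{C_P}{T} \le 0,
\qquad
\left(\frac{\partial^2 G}{\partial P^2}\right)_T
  = \left(\frac{\partial V}{\partial P}\right)_T = -V\kappa_T \le 0,
```

so the stability conditions $C_P \ge 0$ and $\kappa_T \ge 0$ are exactly what make $G$ concave in $T$ and in $P$, independently of the minimum principle.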
{ "domain": "physics.stackexchange", "id": 89653, "tags": "thermodynamics, energy, statistical-mechanics" }
Why isn't momentum conserved in this pulley problem?
Question: I have some conceptual doubt about the method of solving this problem. 24. A block of mass $m$ and a pan of equal mass are connected by a string going over a smooth light pulley as shown in figure (9-W17). Initially the system is at rest when a particle of mass $m$ falls on the pan and sticks to it. If the particle strikes the pan with a speed $v$, find the speed with which the system moves just after the collision. Solution: Let the required speed be $V$. As there is a sudden change in the speed of the block, the tension must change by a large amount during the collision. Let $N$ = magnitude of the contact force between the particle and the pan $$T = \text{tension in the string}$$ Consider the impulse imparted to the particle. The force is $N$ in the upward direction and the impulse is $\int N\,dt$. This should be equal to the change in its momentum. Thus, $$\int N\,dt = mv - mV.\tag{i}$$ Similarly considering the impulse imparted to the pan, $$\int(N - T)dt = mV\tag{ii}$$ and that to the block, $$\int T\, dt = mV.\tag{iii}$$ Adding (ii) and (iii), $$\int N\, dt = 2mV.$$ Comparing with (i), $$mv - mV = 2mV$$ or, $$V = v/3.$$ But, total initial momentum of the system = $mv$ downwards. And final downwards momentum of the system = $mV + mV - mV = mV = mv/3$ So, is this solution wrong? I think the final downwards velocity should still be $v$ (I can get this by making final and initial momentum equal). But I could not find any technical mistake in this solution. If it is correct, why is momentum not conserved in this case? I understand that kinetic energy is not conserved, as there has been a plastic collision. Answer: The pulley (and the attachment to the ceiling) are part of the system here. Because of this, you cannot simply use conservation of momentum on the three given masses. If the final velocity were $v$, then the total energy of the system would have increased since both the pan and counterweight would be moving and the other mass would not have slowed.
You can't use conservation of momentum equations on only a portion of a system. If you imagine a ball bouncing on the floor, you can't say the momentum of the ball is conserved before and after the bounce. You have to consider the change in momentum of the floor as well. In your problem, the change in velocity of the ceiling will be small, but its change in momentum is relevant. $$\Delta p_{m1} + \Delta p_{m2} + \Delta p_{pan} + \Delta p_{ceiling} = 0$$ As you do not know the change in this final component, you can't use conservation to solve for the remaining momentum of the other three masses. Let's change the situation to make this more explicit. Instead of a counterweight, consider two pans and two weights. Let's imagine the pulley and string to be massless, so the two pans and two weights have a total mass of $4m$. If the balls have a velocity $v$, then the total downward momentum inside the room is $1mv + 1mv = 2mv$. But by symmetry, we can see that the pulley isn't going to turn. If we imagine the pans at rest after the collision, we find the momentum is now $0mv$. If the pulley's connection to the ceiling/room/earth is not part of the system, then we say that forces from that connection were external and changed the total momentum. We cannot use conservation of momentum due to external forces. If the ceiling/room/earth are part of the system, then after the collision, they have gained $2mv$ downward momentum, so the total system momentum does not change. If we picture the box as nearly massless and in a spaceship instead of on earth, the entire box would be moving downward at $v/2$ after the balls hit the pans (assuming a completely inelastic collision). The more massive the box, the slower it moves to conserve the momentum. Consider it attached to a building/earth, and the momentum is still transferred, but the velocity change is no longer measurable.
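The quoted textbook solution can also be double-checked mechanically: the three impulse equations form a small linear system in $V$, $\int N\,dt$, and $\int T\,dt$. A sketch with arbitrary positive test values for $m$ and $v$ (any choice works, since the system is linear):

```python
import numpy as np

m, v = 2.0, 3.0   # arbitrary test values

# Unknowns: x = [V, J_N, J_T], where J_N = ∫N dt and J_T = ∫T dt.
A = np.array([
    [ m, 1.0,  0.0],   # ∫N dt = m v - m V   →   m V + J_N = m v
    [-m, 1.0, -1.0],   # ∫(N-T) dt = m V     →  -m V + J_N - J_T = 0
    [-m, 0.0,  1.0],   # ∫T dt = m V         →  -m V + J_T = 0
])
b = np.array([m * v, 0.0, 0.0])
V, J_N, J_T = np.linalg.solve(A, b)

print(V, v / 3)   # the solver reproduces V = v/3
```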
{ "domain": "physics.stackexchange", "id": 29792, "tags": "newtonian-mechanics, momentum, conservation-laws, collision" }
Arbitrarity of $i$ in the propagator
Question: My question is simple: how arbitrary can the factor in front of the propagator be? What I mean by that is, if we call the wave operator $K$ and the propagator $G$, I've seen different books use different notations: $$KG=\delta$$ $$KG=-\delta$$ $$KG=i\delta$$ $$KG=-i\delta$$ Are all of these equivalent? Are there other conventions besides the ones I wrote? How should I guess which one was used? In case this is of help to answer my question, my doubt came when, to write the inverse of the scalar propagator $\frac{i}{p^2-m^2}$, I saw $(p^2-m^2)$, with no $i$ factor. Answer: how arbitrary can the factor in front of the propagator be? You are free to choose whatever definition you like, whether it is: $$KG=\delta$$ or $$KG=-\delta$$ or $$KG=i\delta$$ or $$KG=-i\delta\;,.$$ But, clearly, if the $K$ is the same in these equations above, the $G$ is different. The difference is fairly trivial, but a difference nonetheless. I wouldn't typically describe this as "arbitrary," but as a matter of convention. Although, if you want to say arbitrary, no one can stop you. It just seems a little flippant to me, since it seems to disregard that there could be a reason for the convention. From dictionary.com: "Arbitrary - Adjective - subject to individual will or judgment without restriction; contingent solely upon one's discretion..." From Google.com: "Arbitrary... based on random choice or personal whim, rather than any reason or system." Once you have made a choice of convention, you should stick with it to avoid confusion. Different books make different choices because their authors chose different conventions. To directly answer your question: The factor in front can be as "arbitrary" as you would like. Choose whatever convention you want, but if you choose an unfamiliar convention you are going to have to explain yourself to everyone else in the world who doesn't use that convention. The explanation will be short, but pretty much pointless. 
(In addition, as pointed out in the comments, if you insert a factor of 42 for no reason, you are going to have to carry that factor around and explicitly write it out.) And to answer your additional questions: Are all of these equivalent? No. Are there other conventions besides the ones I wrote? Maybe, but those seem like they would contain the major ones. How should I guess which one was used? You should not guess if you don't have to. Look at the equation the author is solving and you should see what the convention is. In case this is of help to answer my question, my doubt came when, to write the inverse of the scalar propagator $\frac{i}{p^2-m^2}$, I saw $(p^2-m^2)$, with no $i$ factor. The inverse of $\frac{i}{p^2-m^2}$ is not $(p^2-m^2)$. The inverse of $\frac{i}{p^2-m^2}$ is $-i(p^2-m^2)$. In general, we can say that the inverse of $X$, $X^{-1}$, satisfies $XX^{-1}=1$. If $X$ is not a c-number, then we could define right and left inverses.
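The closing point is easy to check numerically (the value of $p^2-m^2$ below is an arbitrary nonzero illustration):

```python
# Check that -i(p² - m²), not (p² - m²), inverts the scalar propagator
# i/(p² - m²).  The value below is an arbitrary nonzero illustration.
p2_minus_m2 = 5.0
propagator = 1j / p2_minus_m2

print(propagator * (-1j * p2_minus_m2))  # (1+0j): a genuine inverse
print(propagator * p2_minus_m2)          # 1j: off by exactly a factor of i
```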
{ "domain": "physics.stackexchange", "id": 88391, "tags": "definition, conventions, complex-numbers, greens-functions, propagator" }
Reactions of acidified substances
Question: Case study: acidified potassium manganate(VII) Does "acidified" suggest that the substance contains $\ce{H+}$ions? If so, will acidified $\ce{KMnO4}$ react with a base, such as ammonia? Answer: Permanganates, like KMnO4 are powerful oxidizers, but their redox potential and thus reactions are pH dependent. Lowering the pH by adding an acid increases the oxidizing power of permanganates. Acidified KMnO4 most likely means a solution of KMnO4, with some sulfuric acid added. Since the solution is acidic it will react with bases like ammonia, but that is not the intended reaction of such a solution. It is used to oxidize a wide range of organic and inorganic chemicals, for example toluene to benzoic acid.
{ "domain": "chemistry.stackexchange", "id": 4479, "tags": "acid-base" }
Is the SK2 calculus a complete basis, where K2 is the flipped K combinator?
Question: Specifically, if I defined a new $K_2$ as $$K_2 = \lambda x. (\lambda y. y)$$ instead of $$K = \lambda x. (\lambda y. x)$$ would the $\{S, K_2,I\}$-calculus be a complete basis? My guess is "no," just because I can't seem to be able to construct the regular K combinator from the $S$, $I$, and $K_2$ combinators, but I don't have an algorithm to follow, nor do I have good intuition about making things out of these combinators. It seems like you can define $$K_2 = K I$$ with the regular $\{S, K, (I)\}$-calculus, but I couldn't really work backwards from that to get a derivation of $K$ in terms of $K_2$ and the rest. My attempt at a proof that it was not functionally complete essentially attempted to exhaustively construct every function attainable from these combinators in order to show that you reach a dead end (a function you've seen before) no matter what combinators you use. I realize that this isn't necessarily going to be true of functionally incomplete sets of combinators (e.g. the $K$ combinator on its own will never dead end when applied to itself), but this was my best thought. I was always able to use the $S$ combinator to sneak out of what I thought was finally a dead end, so I'm no longer so sure of the feasibility of this approach. I asked this question on StackOverflow but was encouraged to post it here. I received a few comments on that post, but I'm not sure I understood them right. Bonus: if it isn't a complete basis, is the resulting language nonetheless Turing-complete? Answer: Consider the terms of the $S,K_2,I$ calculus as trees (with binary nodes representing applications, and $S$, $K_2$ leaves representing the combinators). For example, the term $S(SS)K_2$ would be represented by the tree

            @
           / \
          /   \
         @     K2
        / \
       /   \
      S     @
           / \
          /   \
         S     S

To each tree $T$ associate its rightmost leaf, the one you get by taking the right branch at each @. For example, the rightmost leaf of the tree above is $K_2$.
As can be seen from the ASCII art below, all reduction rules in the $S, K_2, I$ calculus preserve the rightmost leaf.

              @                                @
             / \                              / \
            /   \                            /   \
           @     g      [reduces to]        @     @
          / \                              / \   / \
         /   \                            e   g f   g
        @     f
       / \
      /   \
     S     e

           @
          / \
         /   \
        @     f      [reduces to]        f
       / \
      /   \
     K2    e

From there on, it's easy to see that if some term $T$ reduces to $T'$, then $T$ and $T'$ have the same rightmost leaf. Hence, there is no term $T$ in the $S, K_2, I$ calculus such that $TK_2S$ reduces to $K_2$. However, $KK_2S$ reduces to $K_2$, hence $K$ cannot be expressed in the $S,K_2, I$ calculus.
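The invariant in this answer is easy to machine-check. Below is a small sketch (my own encoding, not from the answer): terms are nested pairs for applications, with reduction rules $S\,e\,f\,g \to e\,g\,(f\,g)$, $K_2\,e\,f \to f$, and $I\,e \to e$, and each step is checked to preserve the rightmost leaf.

```python
# Sketch of the invariant: terms are nested pairs (application) or strings
# (combinators); reduce with the S / K2 / I rules and check that reduction
# never changes the rightmost leaf.

def rightmost(t):
    while isinstance(t, tuple):
        t = t[1]
    return t

def step(t):
    """One outermost-leftmost reduction step; returns t unchanged at a fixpoint."""
    if isinstance(t, tuple):
        f, g = t
        # S e f g  ->  e g (f g)   (t = ((S e) f) g)
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):
            e, ff = f[0][1], f[1]
            return ((e, g), (ff, g))
        # K2 e f  ->  f            (t = (K2 e) f)
        if isinstance(f, tuple) and f[0] == 'K2':
            return g
        # I e  ->  e
        if f == 'I':
            return g
        nf = step(f)
        if nf is not f:
            return (nf, g)
        return (f, step(g))
    return t

# Example: reduce S I I K2 (i.e. ((S I) I) K2) a few steps.
t = ((('S', 'I'), 'I'), 'K2')
for _ in range(5):
    nt = step(t)
    assert rightmost(nt) == rightmost(t)   # the invariant from the answer
    t = nt
print(t, rightmost(t))   # the rightmost leaf is still K2
```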
{ "domain": "cs.stackexchange", "id": 13881, "tags": "lambda-calculus, combinatory-logic" }
Estimating destination according to previous data
Question: I need some advice. I can summarize my problem like this: I have some travels in a database, for example: Person1 travelled from CityA to CityB on Date1 Person1 travelled from CityB to CityC on Date2 Person2 travelled from CityB to CityD on Date3 ... We can consider that these cities form a complete graph. Now, according to all the travels in the database, I would like to know where a PersonX is likely to go. I can know where he comes from (or not). I don't know if I should use machine learning, data mining or graph theory. Answer: This is a spatio-temporal clustering problem that is likely best solved with a Markov model. You could reasonably group this into machine learning or data mining. Develop your model using machine learning and then (the data mining part) leverage those pattern recognition techniques (that have been developed in machine learning). I think there are at least one or two threads on this over at Cross-Validated that go into more detail. Here are a couple of papers to look at if you are just getting started. Using GPS to learn significant locations and predict movement across multiple users Predicting Future Locations with Hidden Markov Models
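The simplest version of the Markov idea can be sketched in a few lines (the trip data below is hypothetical, mirroring the question's examples): count observed transitions between cities and predict the most frequent destination from the current city.

```python
from collections import Counter, defaultdict

# Minimal first-order Markov sketch (hypothetical data): estimate
# P(next city | current city) from observed trips, then predict the
# most likely destination for a person currently in a given city.
trips = [
    ("CityA", "CityB"), ("CityB", "CityC"), ("CityB", "CityD"),
    ("CityB", "CityC"), ("CityA", "CityB"),
]

counts = defaultdict(Counter)
for origin, dest in trips:
    counts[origin][dest] += 1

def predict(city):
    """Most frequent observed destination from `city` (None if unseen)."""
    if city not in counts:
        return None
    return counts[city].most_common(1)[0][0]

print(predict("CityB"))   # CityC (observed twice, vs. CityD once)
```

A hidden Markov model, as in the second linked paper, generalizes this by letting the "current location" be a latent state rather than a directly observed city.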
{ "domain": "datascience.stackexchange", "id": 403, "tags": "machine-learning, data-mining, graphs" }
Price optimization for tiered and seasonal products
Question: Assume I can collect demand data for the purchase of certain products that are in different market tiers. Example: Product A is a low-end good. Product B is another low-end good. Products C and D are middle-tier goods and products E and F are high-tier goods. We have collected data over the last year on the following: 1. In which time period (festive vs. non-festive season) do the different product tiers react to the price set? ("Reacts" refers to what percentage of the product is sold in a certain price range.) 2. How fast is the reaction from the market after marketing is done? Marketing is done on 10 June and the products are all sold by 18 June for a festive season slated to happen in July (it took 8 days at that price to finish selling). How can data science help in recommending: 1. Whether we should push the marketing earlier or later? 2. Whether we can raise or lower the price? (Based on demand and selling rate?) Am I understanding it right that data science can help a marketer in this aspect? Which direction should I be looking into if I am interested to learn about it? Answer: You should be able to use linear regression to find correlation between the factors which cause your products to sell better (or worse). There are many correlations you can test against in this data set. Some examples are: If a product has been marketed aggressively, does it sell more quickly? If a low tier item is available, do fewer high-tier items sell? If multiple high-tier items are available, are fewer sold of each item? Keep in mind that correlation does not necessarily imply causation. Always think about other factors which may cause sales to go up and down. For example, you may sell more high tier items in a season one year than another year. But, this could be due to changes in the overall economy, rather than changes in your pricing. The second thing you can do is perform A/B tests on your product sales pages. This gives you clear feedback right away.
Some example tests could be: Show the user one high-tier product and one low-tier product (A). Show the user two high-tier products and no low-tier products(B). Which page generates more revenue? Send out marketing emails for a seasonal sale 5 days in advance to one group of users (A). Send the same email to a different set of users 1 day in advance (B). There are many possibilities. Use your intuition and think about previous knowledge you have about your products.
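The linear-regression idea above can be sketched concretely (all numbers below are made up for illustration): fit a line to price vs. units sold to estimate price sensitivity.

```python
import numpy as np

# Hedged sketch (made-up numbers): fit a straight line to price vs. units
# sold to estimate price sensitivity, as in the linear-regression idea above.
price = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
units = np.array([200.0, 180.0, 150.0, 130.0, 100.0])

slope, intercept = np.polyfit(price, units, 1)
print(f"estimated demand: units ≈ {intercept:.1f} + {slope:.1f} · price")
# A negative slope suggests higher prices reduce units sold — but remember
# the caveat above: correlation is not causation.
```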
{ "domain": "datascience.stackexchange", "id": 4303, "tags": "recommender-system" }
What is the difference between regular blood, a woman's and a virgin's menstrual blood?
Question: There are many stories that blood contains the life-force energy, and specifically menstrual (period) blood has always been a feature of many rituals; some ancient Sumerian tablets mentioned that menstrual blood was the 'gold of the gods'. So is it true that menstrual blood is highly oxygenated, the purest of all blood, and that it carries decoded DNA? Or what differences are known to modern science between regular blood, a woman's period blood and a virgin's period blood? Answer: The "purest of all blood" is fresh out of the bone marrow, i.e. in your circulatory system. Menstrual blood is a combination of blood, some mucus, and dead endometrial tissue. The endometrium consists of a single layer of columnar epithelium resting on the stroma, a layer of connective tissue that varies in thickness according to hormonal influences. Simple tubular uterine glands reach from the endometrial surface through to the base of the stroma, which also carries a rich blood supply of spiral arteries... Proliferation is induced by estrogen (follicular phase of menstrual cycle), and later changes in this layer are engendered by progesterone from the corpus luteum (luteal phase). It is adapted to provide an optimum environment for the implantation and growth of the embryo. In the absence of progesterone, the arteries supplying blood to the functional layer constrict, so that cells in that layer become ischaemic and die, leading to menstruation. So, menstrual "blood" is a combination of sloughed off stromal and glandular tissue, broken down vascular cells and blood, and, no, it is not highly oxygenated (it's kind of darker than normal blood). It doesn't carry any "decoded DNA". It's basically a waste product at this point, dead, dying and no longer functional tissue. Virgin's flow is just as dead as non-virgin's flow. Cultures obtained at hysterectomy indicate that the endometrial cavity is normally sterile.
The major difference between a virgin and a non-virgin is that the possibility of infection of endometrial tissue exists in non-virgins. Of course myths will arise around menstrual flow. After all, when it was alive, it was the medium for implantation of a blastocyst. But they are just myths. Endometrium Histology Infections as a Cause of Infertility
{ "domain": "biology.stackexchange", "id": 2788, "tags": "hematology, red-blood-cell" }
How to determine aberrated wave front at exit pupil of a lens system using ray tracing?
Question: I am working on building a very simple optical simulator for my workflow. I am stuck at a point where I am trying to simulate the impact of diffraction on a lens system that has geometric aberrations. Textbooks such as Goodman and others specify the wavefront aberrations at the exit pupil essentially add a phase-shift to the perfect spherical wave. This phase-shift term $e^{jW(x, y)}$ where $W(x, y)$ are the Seidel or Zernike polynomials that represents the optical path difference (OPD) between the actual and ideal wavefronts. I have implemented a detailed ray tracing code that helps me generate spot diagrams. I am even able to calculate optical path length for every ray that traverses through the system. However, I am unclear on how I can (a) compute the actual wavefront OR (b) Determine the function $W(x, y)$ that represents the OPD at the exit pupil. I came across this post on Zemax communities where they mention something about "subtracting a chief-ray centered reference sphere phase from the optical path lengths computed via ray tracing". Any help on explaining this or pointing to resources that can help with this will be really appreciated. Answer: I would recommend a book by Warren Smith, Modern Optical Engineering, McGraw Hill. The older editions, at least, mention the calculation of OPD (for which you have to construct a reference sphere). OPD is calculated for rays traced to this reference sphere. Example D is used for the OPD calculations. You can use Smith's calculations as a check on your own. I've never found errors in his books. Another very good reference is WT Welford, Aberrations of Optical Systems. I've used both of these for similar problems. Basic ray-tracing is one thing - what CodeV does well is figure out what rays to trace such that they go through the stop properly. (In particular, the chief ray which goes from the tip of the object, through the center of the stop). 
This can be tricky for off-axis points, especially when field angles get large. Zemax historically doesn't do this as well, so beware if you compare. Or try OSLO, which has a free version. OSLO is an high quality lens design package, at low cost if you buy it. (I have no association with them other than past use). You could use OSLO at least to compare your calculations.
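The "subtract a chief-ray-centered reference sphere" step mentioned in the question can be sketched as follows. This is a simplified sketch under assumed conventions (the function name, sign convention, and toy values are mine, not from any particular package): the wavefront error is the ray's OPL relative to the chief ray, with the sagitta of the reference sphere removed.

```python
import numpy as np

# Hedged sketch of the "subtract a reference sphere" step: given optical
# path lengths OPL(x, y) for rays traced to the exit pupil, the wavefront
# error W(x, y) is the OPL relative to the chief ray, minus the sagitta of
# the reference sphere centered on the chief-ray image point.
def wavefront_error(x, y, opl, chief_opl, r_ref):
    """W(x, y) = [OPL - OPL_chief] - sagitta of the reference sphere.

    x, y      : pupil coordinates of each ray
    opl       : optical path length of each ray to the exit pupil
    chief_opl : optical path length of the chief ray
    r_ref     : radius of the chief-ray-centered reference sphere
    """
    sphere_sag = r_ref - np.sqrt(r_ref**2 - x**2 - y**2)  # reference sphere
    return (opl - chief_opl) - sphere_sag

# Toy check: a perfectly spherical wavefront should give W ≈ 0 everywhere.
x = np.linspace(-1.0, 1.0, 5)
y = np.zeros_like(x)
r = 100.0
opl = 50.0 + (r - np.sqrt(r**2 - x**2))   # chief OPL 50 plus spherical sag
W = wavefront_error(x, y, opl, 50.0, r)
print(np.max(np.abs(W)))    # ~0: no aberration for a spherical wavefront
```

From the sampled $W(x,y)$ over the pupil one would then fit Zernike (or Seidel) coefficients by least squares; the sign and normalization conventions vary between references, so they should be checked against whichever text you follow.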
{ "domain": "physics.stackexchange", "id": 88753, "tags": "optics, geometric-optics, lenses" }
Moveit! and URDF problem
Question: Hello Dear Community! I have a problem that I haven't been able to resolve for a few days now. I have created a simple test robot arm in SolidWorks. I followed instructions on setting the axes correctly for all joints (X-forward, Z-up). I have exported the URDF model from SolidWorks. I have created a Moveit! config for this URDF model with the Moveit! wizard. The model successfully loads into rviz. I can control it with the robot_state_publisher GUI panel. But I can't set the Goal State with the Interactive Marker or with code. I always get "Fail: ABORTED: No motion plan found. No execution attempted.". I have uploaded to GoogleDrive two tars: drive.google.com/file/d/0Bx529USSkqSbd0k0djE0aEVKYTg/edit?usp=sharing (model exported from SolidWorks: URDF, meshes) drive.google.com/file/d/0Bx529USSkqSbbGRjckp5bVRNbjQ/edit?usp=sharing (generated Moveit! config) Could you please be so kind as to review these packages and tell me what I'm doing wrong! I really need help with this! Great thanks ahead! Originally posted by maska on ROS Answers with karma: 61 on 2013-11-04 Post score: 2 Original comments Comment by maska on 2013-11-04: Does an arm for Moveit! have to be strictly 6DOF? Or could it have fewer DOF? Comment by davinci on 2013-11-04: Try to set the logger levels of move_arm and ompl_ros to DEBUG using rxconsole. That could give some extra debug info. Answer: I have set position_only_ik: True in the kinematics.yaml file and this has solved the problem. Originally posted by maska with karma: 61 on 2013-11-06 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by maska on 2013-11-06: How to mark the question as resolved? Comment by davinci on 2013-11-06: Click the v button
{ "domain": "robotics.stackexchange", "id": 16054, "tags": "ros" }
Confusions regarding meaning of "kernel" in signal processing and image processing?
Question: While studying filters in DSP and DIP, the term "kernel" is often encountered. What is the meaning of "kernel" in the context of filters? Please kindly explain in simple words with an example. I have also attached a snapshot from Gonzalez. What is meant by "neighborhood of operation" here? And in the last line, the author says that mask, template & window are synonyms for kernel. Is that true even in the case of FIR windows such as the Hanning window, Blackman window, etc.? Are those windows also kernels? Answer: What is the meaning of "kernel" in the context of filters? A "kernel" is the thing you convolve by. So for a FIR filter that's defined by the taps $k = \begin{bmatrix}a_0 & a_1 & a_2 & \cdots\end{bmatrix}$, $k$ is the kernel. For a 2D filtering problem, the kernel would, in general, be 2D (and you tend to see the term much more often in image processing, but I've also seen it in really highfalutin' stochastic signal processing literature). What is meant by "neighborhood of operation" here? For an output pixel at position $\begin{bmatrix}x, y\end{bmatrix}$, all of the pixels in the input image that affect the output. In general, if you have an $n \times m$ kernel, the neighborhood of operation is $n \times m$. Is that true even in the case of FIR windows such as the Hanning window, Blackman window, etc.? Are those windows also kernels? Different disciplines, different terminologies. No, because those don't define filter kernels -- the "windowing" operation is different from the "windowing" in 2D image processing.
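The "convolve by a kernel over a neighborhood" idea can be made concrete with a short sketch (my own minimal implementation, not from the answer): each output pixel is a weighted sum of its 3×3 neighborhood of operation.

```python
import numpy as np

# Sketch: "convolve by a kernel" in 2D — each output pixel is a weighted
# sum over its n×m neighborhood of operation ('valid' output, no padding).
def convolve2d_valid(image, kernel):
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * flipped)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((3, 3)) / 9.0               # 3×3 averaging ("box") kernel
print(convolve2d_valid(image, box))       # each value = mean of a 3×3 patch
```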
{ "domain": "dsp.stackexchange", "id": 8823, "tags": "image-processing, filters, terminology, kernel" }
When to involve electron mass in energy calculation of beta decay
Question: I would just like to apologise for the horrible images because I have no idea how to format the subscripts of the elements. Why is it that we do not have to account for the mass of the electron in beta minus emission? Some of the solutions for other similar questions involve the mass of the electron. Is there a rule regarding when to account for it that I am not aware of? For reference, this is from the Giancoli Physics 6th Edition Textbook. (II) The isotope $^{218}_{84}\rm Po$ can decay by either $\alpha$ or $\beta^-$ emission. What is the energy release in each case? The mass of $^{218}_{84}\rm Po$ is $218.008965\ \rm u$. Answer: For beta decay we have $^{218}_{84}\mathrm{Po} \rightarrow{}^{218}_{85}\mathrm{At}+{}^0_{-1}\mathrm{e}$ The $Q$ value is \begin{align} Q&=[m(^{218}_{84}\mathrm{Po})-m(^{218}_{85}\mathrm{At})]c^2 \\&=(218.008965\,\mathrm{u}-218.00868\,\mathrm{u})c^2(931.5\,\mathrm{MeV/u}c^2)=0.27\,\mathrm{MeV} \end{align} Answer: As you would expect, all masses need to be accounted for - always. Here is what is causing the confusion: The atomic masses cited include the mass of the electrons in the neutral atom. So, we have that $m(^{218}_{84}\mathrm{Po})$ includes 218 nucleons and 84 electrons while $m(^{218}_{85}\mathrm{At})$ includes the mass for one extra electron. In the decay $^{218}_{84}\mathrm{Po} \longrightarrow {}^{218}_{85}\mathrm{At} + e^- + Q$ we refer to the nucleus that is decaying. But in the atomic mass difference the mass of the emitted electron is already included. Hence, for this decay, the difference in the atomic masses is indeed the Q-value (i.e. kinetic energy) of the emitted electron + anti-neutrino. I hope this helps. Sources: See here for the latest atomic masses
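The arithmetic behind the quoted Q value is a one-liner worth checking (using only the atomic masses quoted above; the electron mass cancels precisely because atomic masses include the Z electrons):

```python
# Numeric check of the beta-decay Q value, using the atomic masses quoted
# above (the electron mass cancels because atomic masses include Z electrons).
U_TO_MEV = 931.5                 # MeV per atomic mass unit (c² implied)
m_po218 = 218.008965             # atomic mass of Po-218, u
m_at218 = 218.00868              # atomic mass of At-218, u

Q = (m_po218 - m_at218) * U_TO_MEV
print(f"Q(beta-) = {Q:.2f} MeV")   # ≈ 0.27 MeV, as in the first answer
```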
{ "domain": "physics.stackexchange", "id": 50892, "tags": "homework-and-exercises, radioactivity" }
Are circles stronger than triangles?
Question: I've often heard in engineering that "there is no shape stronger than a triangle." I also recall that arches are very strong shapes, which can be crudely described as a perpendicularly-symmetrical half-ellipse, which can be simplified to half a circle. If there were no conventional complications with designing structures to utilize circles, which shape is stronger? Consider a simple two-dimensional application, such as simple bridges or trusses, for ease of visualization. Answer: The short answer is, if you are making a bridge, triangles are, because the way they distribute weight when they are in a group makes them stronger. A single arch is stronger, but when you use lots of triangles when building a bridge it becomes stronger than using one arch. That is why we use triangles for most of our construction. There are tons of places online that talk about this. Below are some links: https://www.physicsforums.com/threads/physics-of-triangles.443267/ https://answers.yahoo.com/question/index?qid=20070712062525AAXy6Cm http://www.answers.com/Q/What_shape_is_stronger_an_arch_or_a_triangle
{ "domain": "physics.stackexchange", "id": 21239, "tags": "forces, geometry, statics" }
Does n^(1-1/d) always dominate log^d(n)
Question: Hi, I am currently learning about orthogonal range search and found two data structures with two different runtimes, and wanted to prove that one always dominates the other. So I found out about k-d-trees with a query time of $$\mathcal{O}(n^{1-\frac{1}{d}})$$ and range trees with a query time of $$\mathcal{O}(\log^{d-1}n)$$ and would like to show that $$\mathcal{O}(\log^{d-1}n) << \mathcal{O}(n^{1-\frac{1}{d}}) \quad \forall d \in \mathbf{N}$$ I tried around on Wolfram Alpha and the solutions were of the form $$ n > \exp\left(-d \cdot W_{-1}\left(\frac{-1}{d}\right)\right) $$ where $W_k(z)$ is the analytical continuation of the product log function. But I wasn't able to prove this always holds. Thank you in advance :) Answer: L'Hôpital's rule shows that for $\epsilon > 0$, $$ \lim_{n\to\infty} \frac{\log n}{n^\epsilon} = \lim_{n\to\infty} \frac{\frac{1}{n}}{\epsilon n^{\epsilon-1}} = \lim_{n\to\infty} \frac{1}{\epsilon n^\epsilon} = 0. $$ In other words, $\log n = o(n^\epsilon)$ for all $\epsilon > 0$. This immediately implies that $\log^C n = o(n^\delta)$ for all $C,\delta > 0$, taking $\epsilon = \delta/C$.
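A small numeric sanity check of that conclusion (not a proof; $d = 4$ is an arbitrary choice):

```python
import math

# For d = 4: compare log^(d-1)(n) against n^(1-1/d).
# The ratio should head to 0 as n grows, matching log^C n = o(n^delta).
d = 4

def ratio(n):
    return math.log(n) ** (d - 1) / n ** (1 - 1 / d)

vals = [ratio(10 ** k) for k in (2, 4, 6, 8, 10)]
print([f"{v:.4f}" for v in vals])
# the ratio shrinks steadily toward 0
```

Note that for small $n$ the ratio can first grow (the polylog wins early on); the asymptotic statement only says it eventually goes to zero.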
{ "domain": "cs.stackexchange", "id": 17362, "tags": "runtime-analysis, search-algorithms, search-trees, proof-assistants" }
How exactly does a compatible reduction relation change the $\pi$-calculus?
Question: The reduction relation given for the $\pi$-calculus is usually not compatible (i.e., it's not preserved under arbitrary contexts). Quoting Milner's The Polyadic $\pi$-Calculus: A Tutorial: It is important to see what the rules do not allow. First, they do not allow reductions underneath prefix, or sum; for example we have $$u(v).(x(y)\ |\ \bar{x}z )\not\rightarrow$$ Thus prefixing imposes an order upon reduction. This constraint is not necessary. However, the calculus changes non-trivially if we relax it, and we shall not consider the possibility further in this paper. I haven't yet seen a reference discussing the possibility of a compatible reduction relation (though one is used in Yoshida's Strong Normalisation in the $\pi$-Calculus). So, my question is: how exactly does having a compatible relation change the calculus, as Milner puts it? Are there any references that study this possibility I could refer to? (I do assume one of Milner's papers must mention this, but I couldn't find which one, if any.) Answer: I don't know "exactly" how it changes the calculus, in the sense that I don't have a formal statement measuring the difference (and I am not aware that there exists one), but allowing reduction under prefixes does change the semantics of the calculus "non-trivially", as the following example shows. Let $\qquad P\ :=\ \nu(x,y,z)(x(w).(w \mathrel| \overline w.a \mathrel| \overline{y}.b) \mathrel| \overline{x}y \mathrel| \overline xz.c)$ $\qquad S\ :=\ \tau.(\tau.a+\tau.b)+\tau.(\tau.a\mathrel|c)$ $\qquad S'\ :=\ \tau.(\tau.a+\tau.(a\mathrel|c))+S$ Whatever definition of reduction you choose, $S$ and $S'$ are not bisimilar. Now, in Milner's calculus, $P$ is bisimilar to $S$. If you allow reduction under prefix, $P$ becomes bisimilar to $S'$. (Intuitively, the difference between $S$ and $S'$ is that $S'$ may "lose" the barb on $b$ without necessarily exposing a barb on $c$, which is what $S$ must do instead. 
Technically, $S$ and $S'$ are not weakly barbed bisimilar, which implies that they are not weakly barbed congruent, which implies that they are not weakly bisimilar, or anything stronger. Indeed, opponent can always win the barbed bisimilarity game against me (player) by starting with the reduction $$S'\to \tau.a+\tau.(a\mathrel|c)\ =:\ S_1'.$$ To respond, I need to reduce $S$ (arbitrarily many times, including none) ending up on a term having the same weak barbs as $S_1'$, which are $a$ and $c$. Now, the reducts of $S$ having exactly $a$ and $c$ as weak barbs are $\tau.a\mathrel|c$ and its immediate reduct $a\mathrel| c$. Whichever I choose, opponent's next move is going to select the reduction $S_1'\to a$, ending on a process whose only barb is $a$. There is no way for me to match this reduction, because any process I may reduce to will have barbs on both $a$ and $c$, and so I'm toast). Forbidding reduction under input prefixes corresponds to the fact that read/receive operations are blocking, which is quite natural in programming languages. When we are waiting for some information to be received and we decide to proceed without, we might make "bad" choices. In the example, we proceed as if $w$ could not become equal to $y$. If you think about passing values other than names (for example, integers), the phenomenon should become even more apparent intuitively. In a deterministic context (like the $\lambda$-calculus), proceeding without knowing the value of an input parameter (like reducing $M$ in $\lambda x.\!M$) is fine, but in a concurrent context it is not a good idea. The story is different for output prefixes and, in fact, many do consider them to be non-blocking, to the point that they force $\mathbf 0$ (the inactive process) to be the only process allowed under an output prefix. 
This is known as the asynchronous $\pi$-calculus and its expressiveness with respect to the synchronous version is well understood, in the sense that there are synchronous processes which may be simulated asynchronously only introducing divergence (see especially the work by Catuscia Palamidessi about this).
{ "domain": "cstheory.stackexchange", "id": 5497, "tags": "reference-request, concurrency, pi-calculus" }
Definition of a moon in an exam: "A satellite of a planet that *doesn't produce light itself but reflects it*" - is there relevance for the emphasis?
Question: In a 5th-grade exam (for 10-11-year-old pupils in Finland), there was a question, "What is a moon?" The model answer was: "A satellite of a planet that doesn't produce light itself but reflects it." Most pupils answered simply "A satellite of a planet" and received half the points, which is also what I would have answered! To my understanding, planets don't have satellites that are stars (i.e., ones that produce light themselves) or ones that don't reflect any light (like black holes?). Therefore, I don't see why the second part of the answer is relevant. I am looking for arguments to correct the evaluation. Answer: I'd ask the test author to provide an example of an object that would be improperly defined as a moon in the typical answer ("A satellite of a planet") that would properly be defined as not a moon in the model answer ("A satellite of a planet that doesn't produce light itself but reflects it"). If they can't produce an example, then the emphasized portion of the model answer is irrelevant (adds no value). The only examples I can think of are the rare artificial satellites that use laser communication links. There are no perfect blackbody satellites that don't reflect any light. The model answer includes all the other artificial satellites in planetary orbits. These certainly aren't moons, so the model answer is very flawed. From the Keck Observatory page: New Aurorae Detected On Jupiter’s Four Largest Moons Astronomers using W. M. Keck Observatory on Maunakea in Hawaiʻi have discovered that aurorae at visible wavelengths appear on all 4 major moons of Jupiter: Io, Europa, Ganymede, and Callisto. Here is an artist's rendition from the same website (Credit Julie Inglis). Hence, the model answer excludes Jupiter's major moons since they do produce light. This can easily be fixed by adding one word to the typical answer: "A natural satellite of a planet", which I think is deserving of full marks. 
I wouldn't give full marks for "A satellite of a planet" since it doesn't exclude artificial satellites. Note: My definition above unfortunately includes planetary rings, but there isn't really a lower boundary for the size of an object to call it a moon. Objects in planetary rings are sometimes called moonlets. We could also argue about asteroid satellites (sometimes called moons and sometimes called moonlets). And what would we call a satellite around a blanet (which is a planetary equivalent orbiting a black hole)? These questions might be too advanced for the 5th grade.
{ "domain": "astronomy.stackexchange", "id": 7163, "tags": "orbit, natural-satellites" }
Reading bytes from packet
Question: I have a device I connect to on a TCP socket. I send it a message with what I want and it sends back all sorts of useful information: Header: +---------------+----------+----------+ | MagicWord | 2 bytes | 0xCAFE | | Version | 1 byte | 0x02 | | Type | 1 byte | Variable | | UTC offset | 1 byte | Variable | | Reserved | 1 byte | Variable | | PayloadSize | 2 bytes | Variable | +---------------+----------+----------+ Payload: +---------------+----------+----------+ | StartTime | 4 bytes | Variable | | Duration | 4 bytes | Variable | | Temp | 2 bytes | Variable | | EstimatedDist | 2 bytes | Variable | | NumKnocks | 1 byte | Variable | | Reserved | 1 byte | | +---------------+----------+----------+ Here is my attempt at how I got the variables from the message: var location:[(String, [UInt8], Int)] = [] //Name, message, location in data let dict = [ ("startTime", [UInt8](), 8), ("duration", [UInt8](), 12), ("Temp", [UInt8](), 16), ("EstimatedDist", [UInt8](), 18), ("NumKnocks", [UInt8](), 20), ("Reserverd", [UInt8](), 21)] for item in dict { location.append(item) } var i = 0 var itemLen = Int() for item in location { if 0...4 ~= i { //find length of byte by length subtraction of item itself from item above. itemLen = location[i + 1].2 - location[i].2 } let typeRange = NSRange(location: location[i].2, length: itemLen) let typeData = data.subdataWithRange(typeRange) var arr = Array(UnsafeBufferPointer(start: UnsafePointer<UInt8>(typeData.bytes), count: typeData.length)) location[i].1 = arr i = i + 1 } 0000 fe ca 02 30 00 00 0e 00 4e 5b a2 56 12 19 00 00 0010 22 00 12 19 07 00 In my situation data is NSData from a GCDAsyncSocket response with the header removed. For testing I've been running the following and connecting to the port. echo -e "\xfe\xca\x02\x30\x00\x00\x0e\x00\x4e\x5b\xa2\x56\x12\x19\x00\x00\x22\x00\x12\x19\x07\x00" | nc -kl -c 1975 I can connect, send the packet, get the message back; that's all fine. I can get the variables from the packet. 
However, I don't feel it's that elegant. I want to know what I should be looking at to try to improve. Examples (with explanations) would be great. Answer: One thing's for certain... an array of tuples which is basically being used as a glorified dictionary is never going to suffice as an acceptable data model. Moreover, an array of bytes isn't really much more useful than what we start with. So what's clear is that we need an actual data model. Names and data types may need some twiddling, but here's what I started with: class Information { let startTime: UInt32 let duration: UInt32 let temperature: UInt16 let estimatedDistance: UInt16 let knockCount: UInt8 let reserved: UInt8 // TODO: implement initializers } This is the model for our data. This is the easiest way for the entire rest of our app to access the data. This will be way easier to use. All that's left is getting from our data packet (<4e5ba256 12190000 22001219 0700>) to an actual instantiated Information object. I know that we can get from an NSData object to an array of UInt8s, but I notice we're going to need some UInt16s and some UInt32s. So, I extended UInt16 and UInt32 to include some constructors for taking an array of bytes: extension UInt16 { init?(bytes: [UInt8]) { if bytes.count != 2 { return nil } var value: UInt16 = 0 for byte in bytes.reverse() { value = value << 8 value = value | UInt16(byte) } self = value } } extension UInt32 { init?(bytes: [UInt8]) { if bytes.count != 4 { return nil } var value: UInt32 = 0 for byte in bytes.reverse() { value = value << 8 value = value | UInt32(byte) } self = value } } If it's not immediately obvious how this is useful, keep reading. We'll see in a bit. Now, what do we need to input and what do we need to get in return? At the end of the day, we need an Information object, which is this class of six variously sized unsigned integers. What we start with, however, is some raw data. 
So ideally, we'd like to simply do something like this: let myInformation = Information(data: data) Right? That'd be nice, eh? So the end goal is to have an Information initializer which takes an NSData object. Well, let's start with the simplest initializer for the most ideal case: init(startTime: UInt32, duration: UInt32, temperature: UInt16, estimatedDistance: UInt16, knockCount: UInt8, reserved: UInt8) { self.startTime = startTime self.duration = duration self.temperature = temperature self.estimatedDistance = estimatedDistance self.knockCount = knockCount self.reserved = reserved } If we get the needed values exactly as we need them, we can just directly assign them as such. This is just a member-wise initializer. Let's rephrase that initializer in the form of UInt8 (or [UInt8]) though. This next one will be marked as a failable convenience initializer. convenience init?(startTimeBytes: [UInt8], durationBytes: [UInt8], temperatureBytes: [UInt8], estimatedDistanceBytes: [UInt8], knockCount: UInt8, reserved: UInt8) { if let startTime = UInt32(bytes: startTimeBytes), duration = UInt32(bytes: durationBytes), temperature = UInt16(bytes: temperatureBytes), estimatedDistance = UInt16(bytes: estimatedDistanceBytes) { self.init(startTime: startTime, duration: duration, temperature: temperature, estimatedDistance: estimatedDistance, knockCount: knockCount, reserved: reserved) } else { self.init(fail: true) } } (Don't worry about self.init(fail: true) right now. I'll paste that implementation in a second. For now, just note that it's saving me some boilerplate copy-pasta.) We've got an if let trying to bind four variables using the UInt32 and UInt16 initializers we extended (which of course return optionals). So if the passed in arrays are of the wrong size, this initializer will return nil, otherwise, it will return us the Information object we need. So, what's left? The next step up is the exact initializer that we want... 
the one that takes the NSData argument: convenience init?(data: NSData) { if data.length == 14 { let allBytes = Array(UnsafeBufferPointer(start: UnsafePointer<UInt8>(data.bytes), count: data.length)) self.init(startTimeBytes: Array(allBytes[0..<4]), durationBytes: Array(allBytes[4..<8]), temperatureBytes: Array(allBytes[8..<10]), estimatedDistanceBytes: Array(allBytes[10..<12]), knockCount: allBytes[12], reserved: allBytes[13]) } else { self.init(fail: true) } } We optimize a little bit here by checking the data length (which is also preventing any array index out of bounds exceptions). And then assuming the data is the right size, we call the previously mentioned convenience initializer. For the sake of completeness, here's what self.init(fail: true) does: private init?(fail: Bool = true) { self.startTime = 0 self.duration = 0 self.temperature = 0 self.estimatedDistance = 0 self.knockCount = 0 self.reserved = 0 if fail { return nil } } Because of the way Swift initializers work, having this initializer saves us from having to assign every property in every single initializer. If we know any of our other failable initializers have entered a failure state, we just call this and we're done. 
The code, start to finish: import Foundation extension UInt16 { init?(bytes: [UInt8]) { if bytes.count != 2 { return nil } var value: UInt16 = 0 for byte in bytes.reverse() { value = value << 8 value = value | UInt16(byte) } self = value } } extension UInt32 { init?(bytes: [UInt8]) { if bytes.count != 4 { return nil } var value: UInt32 = 0 for byte in bytes.reverse() { value = value << 8 value = value | UInt32(byte) } self = value } } let bytes: [UInt8] = [0x4e, 0x5b, 0xa2, 0x56, 0x12, 0x19, 0x00, 0x00, 0x22, 0x00, 0x12,0x19, 0x07, 0x00] let data = NSData(bytes: bytes, length: bytes.count) class Information { let startTime: UInt32 let duration: UInt32 let temperature: UInt16 let estimatedDistance: UInt16 let knockCount: UInt8 let reserved: UInt8 private init?(fail: Bool = true) { self.startTime = 0 self.duration = 0 self.temperature = 0 self.estimatedDistance = 0 self.knockCount = 0 self.reserved = 0 if fail { return nil } } init(startTime: UInt32, duration: UInt32, temperature: UInt16, estimatedDistance: UInt16, knockCount: UInt8, reserved: UInt8) { self.startTime = startTime self.duration = duration self.temperature = temperature self.estimatedDistance = estimatedDistance self.knockCount = knockCount self.reserved = reserved } convenience init?(startTimeBytes: [UInt8], durationBytes: [UInt8], temperatureBytes: [UInt8], estimatedDistanceBytes: [UInt8], knockCount: UInt8, reserved: UInt8) { if let startTime = UInt32(bytes: startTimeBytes), duration = UInt32(bytes: durationBytes), temperature = UInt16(bytes: temperatureBytes), estimatedDistance = UInt16(bytes: estimatedDistanceBytes) { self.init(startTime: startTime, duration: duration, temperature: temperature, estimatedDistance: estimatedDistance, knockCount: knockCount, reserved: reserved) } else { self.init(fail: true) } } convenience init?(data: NSData) { if data.length == 14 { let allBytes = Array(UnsafeBufferPointer(start: UnsafePointer<UInt8>(data.bytes), count: data.length)) self.init(startTimeBytes: 
Array(allBytes[0..<4]), durationBytes: Array(allBytes[4..<8]), temperatureBytes: Array(allBytes[8..<10]), estimatedDistanceBytes: Array(allBytes[10..<12]), knockCount: allBytes[12], reserved: allBytes[13]) } else { self.init(fail: true) } } } func stringFromTimeInterval(interval:NSTimeInterval) -> String { let ti = NSInteger(interval) let ms = Int((interval % 1) * 1000) let seconds = ti % 60 let minutes = (ti / 60) % 60 let hours = (ti / 3600) return NSString(format: "%0.2d:%0.2d:%0.2d.%0.3d",hours,minutes,seconds,ms) as String } let i = Information(data: data)! let timestamp = NSDate(timeIntervalSince1970: NSTimeInterval(i.startTime)) let duration = stringFromTimeInterval(NSTimeInterval(i.duration)) let temperature = i.temperature let estimatedDistance = i.estimatedDistance let knockCount = i.knockCount let reserved = i.reserved And this is the result I get for the given data:
{ "domain": "codereview.stackexchange", "id": 18118, "tags": "swift, serialization" }
Will porridge/rice absorb water faster without a lid on it?
Question: I just wonder if my porridge would be ready any faster if I take the lid off my tupperware container. Answer: If you put cooked porridge in a vacuum it will dry out, i.e. lose all its water. In fact this process is the basis of freeze-drying. But what you're describing isn't putting the porridge in a vacuum. If you put a hot water/porridge oats mixture in a sealed box and then exclude the air, you actually have the water/porridge mixture in equilibrium with water vapour. The pressure of the water vapour will be the vapour pressure of water at whatever the temperature of the porridge is. In any case, the absorption of water by the porridge oats is limited by the diffusion of liquid water into the starch grains in the porridge, and the vapour plays no part in this. So whether the box is sealed or not will have little effect on the cooking time. However, with an unsealed box you may need to add extra water to make up for the water lost by evaporation.
{ "domain": "physics.stackexchange", "id": 15901, "tags": "absorption" }
Can the fish topple the bowl?
Question: A man is standing inside a train compartment. He then hits one side wall of the compartment with his hand (or you can assume he kicks the wall with his leg). Will the compartment begin to move? I don't think so. (If it happens, there will be no need of fuel.) I mentioned this incident as a background for my question. Suppose there is a fish bowl (totally round) on a table and a fish swims inside it. It wants to get out from the bowl to achieve freedom {yes, it won't be the freedom :) }. But it cannot jump over the edge. Then it tries to topple the bowl by hitting the wall of the bowl (assume that the fish has enough strength). Is it possible? I thought 'no' until I read this question (it might be better to say that I am still of that opinion): Could a fish in a sealed ball move the ball? I doubt whether this is slightly different from my question. Can someone explain? EDIT: I got some answers here that state that the swimming of the fish can move the bowl. While googling I found a similar question on another website. Instead of comforting me, it doubled my confusion :( . Though, to my consolation, it says what I previously thought. You can find it here. 
Furthermore, as discussed at the bottom, NASA slosh scientists who analyze slosh in liquid nitrogen tanks for the mentioned applications generally disregard anything inside the tank, like a mixer, because it is very hard to create any (net) momentum in the liquid from the inside; it usually takes external motions and forces to get any meaningful sloshing, also for the reasons discussed. Man in Train Car For the man in the car, there are three cases to consider: No friction between the wheels and the track A constant “coefficient of rolling friction” between the wheels and the track A coefficient of static friction and a coefficient of rolling friction that are different. Case 1: He can never change the center of mass (com) of the man/car system. This is because he cannot apply any net force on the man/car system. There is no external friction nor any other source of external force. Anything he does to push on the wall will push on him with equal force, and the net resultant force on the man/car system will be zero. But even in this case he can cause slight temporary back-and-forth motions of the car from the outside, but he cannot change the com of the whole system so he won’t be able to move the car on a continual basis or for a significant distance. The way he can move it a little is by moving in the car. For any motion of the man relative to the car along the direction parallel to the track, no matter how he moves (slowly, quickly, pushing on walls, walking), the car will move in the opposite direction enough to keep the system com in the same place. Case 2: (constant coef of friction): He can start the car rolling and actually move the system’s com, but due to friction it will stop again. But then he could do it again. He walks slowly in one direction while it is stopped, keeping his momentum transfers low enough to not overcome friction with the track. 
Then moves the other direction with enough force to overcome friction and start the train rolling (perhaps by jumping and pushing on the wall, or just sprinting the car length). Case 3: (Static friction is more than kinetic): This doesn't change things a ton, but it matters. It makes it much easier and more forgiving when starting out, and remember we must start out over and over on our journey. We have a differential friction gain. Friction differences in each direction provide the external force that moves com. —— The Fish No Friction Our case: If the bowl is perfectly round (as nothing is), this is a frictionless case when we consider rolling. Again, perfectly round is no friction. He will be able to tip the bowl using even minute waves, or while motionless using his innate differential density capacity, discussed below. One may think it's not frictionless because the bowl and surface will give a little, but there's still maximum material stress and force at the very bottom, decreasing radially from there, which only requires any positive leverage however small. If there is any flat, or even imperfections in the roundness, will the bowl move at all in the frictionless case? The remainder of this section considers the frictionless case, but with a flat rather than the trivial perfectly round bowl: The fish faces the same general problem with the added complication of the liquid. But case 1 (no bowl/table friction) remains generally the same. This may seem surprising, but every little motion of the fish that changes com and even small currents will be balanced from a center-of-mass standpoint by the bowl moving on the table. For normal (slow) swimming, whenever he swims to one side, the bowl and water move a little the other direction: $d_{\text{bowl}}\, m_{\text{bowl}} = ( d_{\text{swim}} - d_{\text{bowl}} )\, m_{\text{fish}}$, where $m_{\text{bowl}}$ includes the water. There's a twist. This equation gives no motion if his density equals that of the water.
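To get a feel for the scale of that balance, here is a quick numeric sketch solving the center-of-mass equation above for the bowl's recoil (all masses and distances are made-up values for illustration only):

```python
# d_bowl * m_bowl = (d_swim - d_bowl) * m_fish, solved for d_bowl.
# All numbers are assumptions for illustration only.
m_fish = 0.05   # kg
m_bowl = 2.0    # kg, bowl plus water
d_swim = 0.10   # m, fish glides 10 cm to one side

d_bowl = d_swim * m_fish / (m_bowl + m_fish)
print(f"bowl moves about {d_bowl * 1000:.2f} mm the other way")
```

This is why a fish close to the density of the surrounding water, shifting only small amounts of mass, produces at best millimetre-scale motion even in the frictionless case.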
Fish have internal pockets of gas that they can compress or expand to help them go up and down, in addition to just swimming up and down. Gas, unlike water, is compressible. So one surprising result is that if the fish is off to the side, and the water is still, and he merely changes the volume of his gas pouch without swimming... the bowl will move on the table. That’s because he has changed his density and the overall com as seen from the bowl. To see this note that when he is as dense as water, the com is on the vertical centerline of the bowl, and when he is not, it is not. But the com viewed from the table cannot move, so the bowl moves. (Frictionless worlds are impossible and sometimes counterintuitive.) The man in the train does not have a density similar to air, but the fish does to water and this makes it even harder. This all means he needs to use the water to transfer momentum, as this gas-pouch effect is small. However, it alone is enough to tip the bowl because the bowl is perfectly round and only requires any amount net leverage. A more interesting question is whether he can get anything with a small bottom to slide or tip: If he swims fast there will be water motion to consider too. Yet, he also can’t set up big currents and sloshing and do a lot; the location of the bowl on the table depends only on where everything inside it is, not how fast anything is moving, and the maximum distance moved (with the same location for the center of mass of the system) depends only on how much mass he can get to one side in a peak of water, which is very limited operating from within the tank. There are sloshing problems even without friction (such as in space), but they require external motions and forces to generate the oscillations, not a fish or even a mixer etc from the inside. NASA has sloshing scientists who analyze how liquid nitrogen tanks can affect things and how to control for it. 
It can only cause a lot of back and forth, but that can be a big problem when positioning things in a space station. And the sloshing inside, as mentioned, comes from motion caused externally. You can probably find NASA sloshing analysis papers online. They use computational fluid dynamics and try to estimate what the maximum short-term momentum change through time, $F_{max}= m \tfrac{dv_{cog}}{dt}_{max}$, could be, to give that as an upper bound for other engineers to know, and how best to reduce sloshing, with dampers or springs or whatever, which change automatically with tank level because sloshing dynamics change with that. There can be something akin to a resonant frequency, and I think that can even be estimated classically (?). Even if he makes a little net sloshing (note that things like a whirlpool have no overall effect, and sloshing scientists don't even call them sloshing), he is limited to motion that corrects com, so the sloshing fish will have to use friction also to get across town. Fish with Friction Because perfectly round is frictionless for rolling, this case is not that. It will be much harder for him. And not just because of what was mentioned in the frictionless case (that he can't get large amounts of water to one side from the inside). To overcome friction and move consistently, he has to get some mass (in the form of himself and some water) to one side, slowly (i.e. without exceeding the force of static friction with his rate of transferring mass), and then immediately move it quickly the other way, generating large momentum transfer rates (and hence external force). Why must he go the other way immediately and not just quickly? Because it is a liquid and won't stay to the side. If he stops and takes a breath, the water will begin to move back, but not fast enough to do what he needs it to: overcome friction. This detail helps explain why the NASA sloshing engineers mentioned above don't worry about internal mixers in the nitrogen tanks. 
If the fish sloshes back and forth, overcoming friction in each direction, the motions largely cancel out. He needs rapid then slow momentum changes. So you see that tipping, even with a very tiny flat on the bottom, or a nearly round bowl, seems impossible from the inside, even for a strong fish, even if he weren't so close to the density of water. Tipping with a flat bottom is even harder because $\mu_s$ is much lower than one, usually ~$0.2 - 0.3$. Good luck lifting, tipping, or even moving horizontally without getting outside at all.
{ "domain": "physics.stackexchange", "id": 81536, "tags": "newtonian-mechanics, forces, momentum" }
How to compare acidity in the following aniline derivatives?
Question: Let the simplest one be a), ortho be b), meta be c) and para be d). I have confusion between ortho and meta. How to compare their acidity? Answer: Taking the case of ortho and meta, where you have confusion: the basicity of an aniline increases (and the acidity of its conjugate acid correspondingly decreases) if there is an electron-donating group in the compound to stabilise the positively charged conjugate acid formed. Since OMe shows a +R and a −I effect, it is a net electron-donating group, as the +R effect generally dominates over −I. So in the ortho case it is able to show its electron-donating nature through the +R effect, which makes it more basic than the meta isomer; the meta isomer experiences only the electron-withdrawing −I effect, because the +R effect does not operate from the meta position.
{ "domain": "chemistry.stackexchange", "id": 5226, "tags": "organic-chemistry" }
Red-Black tree height from CLRS
Question: The lemma 13.1 of CLRS proves that the height of a red black tree with $n$ nodes is $$h(n) \leq 2\log_2(n+1)$$ There's a subtle step I don't understand. The property 4 reported at the beginning of the chapter states: If a node is red, then both its children are black. And because of such property it is later stated According to property 4, at least half the nodes on any simple path from the root to a leaf, not including the root, must be black. Consequently, the black-height of the root must be at least $h/2$. I can intuitively agree, but as an exercise I'd like to prove it, and I can't manage to actually do it. Why is that property true? I'm not even able to set up the problem; the only thing I could think of was that if I have $r + b = h$ nodes, where $r$ is the number of red nodes and $b$ is the number of black nodes, I can have a total of $$ k = \frac{h!}{r!b!} $$ And I'd like to prove from here that if $b < r$ then I have a contradiction, but I need something more than this, probably. Any help? Answer: I'd prove it this way (sorry for being too late, hope it'll be useful for other people). Let $bh(x)$ be a fixed black height. The minimum height we can have is when we have only black nodes, hence $$bh(x) = h(x)$$ so $$bh(x) \ge \frac{h(x)}{2}$$ holds. The maximum height we can have is when we alternate a black and a red node. In other words, for each black node there is a red node (at maximum), so we'll have: $$ 2\times bh(x) = h(x) $$ hence $$bh(x) \ge \frac{h(x)}{2}$$ holds. $\square$ Update requested: The maximum height of a tree rooted in the node $x$ is reached when we alternate a black and a red node because one of the properties of a RB tree states "A red node does not have a red child." So if we add a red node we shall violate this property, if we add a black node we shall increase the black height.
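The quoted answer stops at $bh(x) \ge h(x)/2$; for completeness, the standard CLRS counting step (a subtree rooted at $x$ contains at least $2^{bh(x)}-1$ internal nodes) finishes the lemma:

```latex
% Combine n >= 2^{bh(root)} - 1 with bh(root) >= h/2:
n \;\ge\; 2^{\,bh(\text{root})} - 1 \;\ge\; 2^{h/2} - 1
\quad\Longrightarrow\quad
h \;\le\; 2\log_2(n+1).
```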
{ "domain": "cs.stackexchange", "id": 19179, "tags": "proof-techniques, trees, binary-trees, balanced-search-trees" }
How do you calculate percentage of an ion in a non stoichiometric compound?
Question: In an experiment lanthanum $\ce{^{57}La}$ was reacted with $\ce{H2}$ to produce the non-stoichiometric compound $\ce{LaH_{2.90}}$. Assuming that the compound contains $\ce{La^{2+}}$, $\ce{La^{3+}}$ and hydride ions, what is the % of $\ce{La^{3+}}$ present in $\ce{LaH_{2.90}}$ ? I tried to find the percentage but I got a quadratic equation which has imaginary roots. Any help would be appreciated. I am aware of how to find the percentage of an element in a compound, but it gets kind of fuzzy when we get to non-stoichiometric compounds, though I understand the basics. Here is what I tried: Let the number of $\ce{La^2+}$ ions in $\ce{LaH_{2.90}}$ be $x$; then the number of $\ce{La^3+}$ ions should be (2.9-x)^2, considering that the number of hydride ions is equal to the total number of La ions, be it 2+ or 3+. Hence the quadratic equation x+(2.9-x)^2 = 1 emerges, which has no real roots. I may be wrong, so please do tell me if my assumption is incorrect. Answer: [OP] Let the number of $\ce{La^2+}$ ions in $\ce{LaH_{2.90}}$ be $x$; then the number of $\ce{La^3+}$ ions should be (2.9-x)^2, considering that the number of hydride ions is equal to the total number of La ions, be it 2+ or 3+. Hence the quadratic equation x+(2.9-x)^2 = 1 emerges, which has no real roots. The charge of the lanthanum ions is equal to the charge of the hydride ions. Your statement that the number of lanthanum ions is equal to that of hydride ions is incorrect. Starting with the charges is more difficult, but possible. Let $c$ be the charge of the +2 ions in the formula unit. $2.9 - c$ will be the charge of the +3 ions. I can get the stoichiometric coefficient of the separate ions by dividing by 2 or 3, and these have to add up to one: $$\frac{c}{2} + \frac{2.9 - c}{3} = 1$$ Multiplying by 6: $$3c + 2(2.9 - c) = 6$$ Solving for c: $$ c = 6 - 5.8 = 0.2$$ So $\ce{La^2+}$ contributes 0.2 of the charge, and $\ce{La^3+}$ 2.7 of the charge.
Of course, calling $x$ the stoichiometric coefficient of the $\ce{La^2+}$ ions gives an easier derivation, without fractions. You get $$ 2x + 3(1 - x) = 2.9 $$ which gives $x = 0.1$, i.e. 90 % of the lanthanum is present as $\ce{La^{3+}}$. Here are three ways to write the result: $$0.1 \ce{La^2+ + 0.9 La^3+ + 2.9 H-}$$ or $$\ce{\overset{+2}{La}_{0.1}\overset{+3}{La}_{0.9}H_{2.9}}$$ or $$\ce{LaH2\cdot 9LaH3}$$ (i.e. one part $\ce{LaH2}$ to nine parts $\ce{LaH3}$; with thanks to BuckThorn).
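The algebra can be double-checked with a few lines of arithmetic (the variable names are mine; this just restates the charge balance from the answer):

```python
# Charge balance for LaH_x containing a mix of La(2+) and La(3+):
# if f is the fraction of La(2+) per formula unit, the positive charge
# 2f + 3(1 - f) must equal the hydride charge x, so f = 3 - x.
def la2plus_fraction(x):
    return 3.0 - x

f = la2plus_fraction(2.9)          # fraction of La(2+) per formula unit
percent_la3plus = (1.0 - f) * 100  # percentage of La(3+)

# the charge balance really holds:
assert abs(2 * f + 3 * (1 - f) - 2.9) < 1e-12
```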
{ "domain": "chemistry.stackexchange", "id": 17033, "tags": "inorganic-chemistry" }
Covariant derivative of a covariant derivative
Question: I'm trying to find the covariant derivative of a covariant derivative, i.e. $\nabla_a (\nabla_b V_c)$. This is something I've taken for granted a lot in calculations; namely, I thought that by the Leibniz rule we just have: $$\nabla_a (\nabla_b V_c) = \partial_a(\nabla_b V_c) - \Gamma_{ab}^{d}\nabla_d V_c - \Gamma_{ac}^{d} \nabla_b V_d$$ However, when we prove that the covariant derivative of a $(0,2)$ tensor is the above, we use the fact that the covariant derivative satisfies a Leibniz rule on $(0,1)$ tensors: $\nabla_a(w_b v_c) = v_c\nabla_a(w_b) + w_b\nabla_a(v_c)$. However $\nabla_a$ on its own is not a tensor, so how do we have the above formula for its covariant derivative? Answer: Easy way Let me first state the straightforward way to do this computation. $$ \langle \nabla_a \nabla_b V, \partial_c\rangle = \partial_a \langle \nabla_b V, \partial_c \rangle - \langle \nabla_bV, \nabla_a \partial_c\rangle = \partial_a (\nabla_bV)_c - (\nabla_bV)_d \Gamma_{ac}^d $$ The first equality follows from metric compatibility; the second equality uses the definition of the Christoffel symbols. Hard way You are suggesting a roundabout way to do this, which formalizes to the following: $$ \nabla_a\nabla_bV = \nabla_a\left[~(\nabla_cV\otimes dx^c)[\partial_b]~\right] = \nabla_a\left[~C(\nabla_cV\otimes dx^c \otimes \partial_b)~\right] = C [\nabla_a (\nabla_cV\otimes dx^c \otimes \partial_b)] $$ where $$ C: T_pM \otimes T^*_pM \otimes T_pM \to T_pM, ~~ w\otimes z\otimes V \to z[V]w $$ is the contraction map of the last two arguments. The covariant derivative on mixed-type tensors commutes with contractions (used in the last equality). Observe that the expression within $C[ \cdots ]$ is a covariant derivative of a mixed tensor, which you can compute with the Leibniz rule, using your favorite component-wise formulas.
{ "domain": "physics.stackexchange", "id": 38305, "tags": "homework-and-exercises, general-relativity, differential-geometry, tensor-calculus, differentiation" }
Performing batch insertion for a single row
Question: I have the following code that inserts a single row in the if (rows == 1) { part, and inside the loop it does a batch insert. So I'm thinking of skipping the single insert and handling it in the batch-insert code. private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) { int rows = jTable.getRowCount(); if (rows == 1) { String itemName = (String) jTable.getValueAt(0, 0); int itemQty = (int) jTable.getValueAt(0, 1); Double itemPrice = (Double) jTable.getValueAt(0, 2); Items items = new Items(); items.setName(itemName); items.setPrice(itemPrice); items.setQty(itemQty); items.setTransactionNumber(manager.getTransNo()); manager.saveItems(items); The part above inserts a single record if there's only one row in the JTable. The else part below performs a batch insert inside a loop if the JTable contains more than one row. } else { for (int i = 0; i < 1; i++) { String itemName = (String) jTable.getValueAt(i, 0); int itemQty = (int) jTable.getValueAt(i, 1); Double itemPrice = (Double) jTable.getValueAt(i, 2); Items items = new Items(); items.setName(itemName); items.setPrice(itemPrice); items.setQty(itemQty); items.setTransactionNumber(manager.getTransNo()); int max = items.getTransactionNumber(); manager.saveItems(items); And here a second loop performs the insertion for the rows after the first one in the JTable, starting at int i = 1. for (i = 1; i < rows; i++) { String itemsName = (String) jTable.getValueAt(i, 0); int itemsQty = (int) jTable.getValueAt(i, 1); Double itemsPrice = (Double) jTable.getValueAt(i, 2); items.setName(itemsName); items.setPrice(itemsPrice); items.setQty(itemsQty); items.setTransactionNumber(max); manager.saveItems(items); } } } } I was thinking of saving all rows without considering whether there is only one row or more. So my modified version would not have the if (rows == 1) {....} part and would have only for (int i = 0; i <= 0; i++) { So if there is only one row in the JTable, could this modified version cause lower performance?
Answer: Short answer to your question: No it's the same performance wise. Full review: My first concern with your code is the double for loop that uses the same looping variable i. This makes it harder to understand when you see an i inside the inner for loop. My other major concern is that the only difference between the code before the inner for loop and the code inside the inner loop is setting the variable max which I had no idea what it should be at first sight. Let's start by renaming that to transactionNumber. And since this is independent of the loop itself let's initialise that from the manager right above the loops instead: int transactionNumber = manager.getTransNo(); for (int i = 0; i < 1; i++) { String itemName = (String) jTable.getValueAt(i, 0); int itemQty = (int) jTable.getValueAt(i, 1); Double itemPrice = (Double) jTable.getValueAt(i, 2); Items items = new Items(); items.setName(itemName); items.setPrice(itemPrice); items.setQty(itemQty); items.setTransactionNumber(transactionNumber); manager.saveItems(items); for (i = 1; i < rows; i++) { String itemsName = (String) jTable.getValueAt(i, 0); int itemsQty = (int) jTable.getValueAt(i, 1); Double itemsPrice = (Double) jTable.getValueAt(i, 2); items.setName(itemsName); items.setPrice(itemsPrice); items.setQty(itemsQty); items.setTransactionNumber(transactionNumber); manager.saveItems(items); } } Now with a closer look there's a second difference. You only instantiate items once and overwrite that with the setters. I don't think this is a good idea. You should probably create a new items object each time with it's own name, price, quantity and save that independent of any previous values. 
This means we can just combine the 2 for loops into 1 like so: int transactionNumber = manager.getTransNo(); for (int i = 0; i < rows; i++) { String itemName = (String) jTable.getValueAt(i, 0); int itemQty = (int) jTable.getValueAt(i, 1); Double itemPrice = (Double) jTable.getValueAt(i, 2); Items items = new Items(); items.setName(itemName); items.setPrice(itemPrice); items.setQty(itemQty); items.setTransactionNumber(transactionNumber); manager.saveItems(items); } If you now take a close look at what would happen if you only had 1 row, you can see that the code inside the loop gets executed exactly once, with the i replaced by a 0. This is exactly the same code you have inside your if(rows == 1) block, so you can remove that check and only keep the for loop. Depending on how the Items are used in the rest of your code I also strongly suggest removing the setters and passing any needed parameters with the constructor instead. This results in the following implementation of the entire method (note that rows still has to be initialised from the table): private void jButton1ActionPerformed (java.awt.event.ActionEvent evt){ int rows = jTable.getRowCount(); int transactionNumber = manager.getTransNo(); for (int i = 0; i < rows; i++) { String itemName = (String) jTable.getValueAt(i, 0); int itemQty = (int) jTable.getValueAt(i, 1); Double itemPrice = (Double) jTable.getValueAt(i, 2); Items items = new Items(transactionNumber, itemName, itemPrice, itemQty); manager.saveItems(items); } } The only improvement you should consider is to rename jTable to something more meaningful.
{ "domain": "codereview.stackexchange", "id": 33397, "tags": "java" }
Infinite deflection on axially compressed beam
Question: With $k$ proportional to the square root of the compression force, an axially loaded (and otherwise unloaded) beam has a deflection following the DE $$ \frac{\partial^4}{\partial x^4}z + k^2\frac{\partial^2}{\partial x^2}z = 0 $$ with solutions of the form $$ z(x) = C_1 + C_2 k x + C_3 \sin(kx) + C_4\cos(kx) $$ Now, consider the following boundary conditions: $z(0) = 0, \left[\frac{\partial z}{\partial x}\right](0) = 0, z(L) = h, \left[\frac{\partial z}{\partial x}\right](L) = 0$ That is, the derivative is clamped to zero at both ends, and the function values are $0$ and $h$ respectively. Solving the corresponding system of equations for $C_1$, $C_2$, $C_3$, and $C_4$ gives $C_1 = - C_4$ and $C_2 = -C_3$, where $$ C_1 = h\frac{\cos(kL) - 1}{\xi}\\ C_2 = h\frac{\sin(kL)}{\xi} $$ and $$ \xi = kL \sin(kL)+2\cos(kL)-2 $$ If $kL = 2\pi n, n\in\mathbb{Z}$, $\xi$ becomes zero, and $C_1$ goes to infinity. Is this some kind of resonance that occurs for certain wave numbers or compression forces? Answer: If $k$ becomes large enough, the beam buckles and collapses. The critical load is $$ F= \pi^2 Y I/(KL)^2 $$ where $Y$ is Young's modulus, $I$ is the area moment of inertia, and for your boundary conditions $K=1/2$. The larger values of $k$ that also give $\xi=0$ correspond to higher and higher bending modes becoming unstable (their energy goes down as the coefficient $C$ gets bigger). Of course the beam collapses as soon as any mode becomes unstable, and it seems that $k=2\pi /L$ is the first to go.
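A quick numerical check of the question's denominator confirms the answer's picture: $\xi$ vanishes exactly at $kL = 2\pi n$ (the unstable modes) and not at intermediate values (the helper name is mine):

```python
import math

def xi(kL):
    # The denominator from the question: xi = kL*sin(kL) + 2*cos(kL) - 2
    return kL * math.sin(kL) + 2 * math.cos(kL) - 2

# xi = 0 at kL = 2*pi*n, where the coefficients C1 and C2 blow up ...
for n in (1, 2, 3):
    assert abs(xi(2 * math.pi * n)) < 1e-9
# ... but not at, say, kL = pi, where xi = -4
assert abs(xi(math.pi) + 4) < 1e-12
```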
{ "domain": "physics.stackexchange", "id": 96435, "tags": "solid-mechanics" }
ROS on Ubuntu Wily
Question: Will the latest Ubuntu be supported any time soon? Originally posted by acajic on ROS Answers with karma: 36 on 2015-10-28 Post score: 2 Answer: Quoting from here, Jade only supports "Trusty (14.04), Utopic (14.10) and Vivid (15.04) for debian packages." Reading the ROS Release Policy we see that a ROS release will also not add support for new Ubuntu releases after its own release date. This means that Jade will not ever support Wily (15.10) with debian packages. Next May when ROS K is released, it will likely support 15.10, but only through the end of July (when 15.10 reaches EOL). ROS K will be an LTS release paired with Xenial (16.04) as its LTS distribution. You can always try building ROS from source if you are stuck on 15.10. Originally posted by jarvisschultz with karma: 9031 on 2015-10-28 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 22852, "tags": "ubuntu" }
DDD Architecture for an e-commerce website (uploading images)
Question: Problem Description I am working on an e-commerce website; when a user wants to sell a product, he opens the product page, where he can upload up to 12 photos: Image Upload Process This is the process that I follow for saving images. Once the user drops an image on the uploader, it gets saved with a temporary prefix and a timestamp. So for example when the user drops 3 images, I would have: tmp-0-20190911123456787.jpg tmp-1-20190911123456777.jpg tmp-2-20190911123457777.jpg Now once the user completes the process, he clicks on the DONE button, and I create an array of 12 strings, representing the latest image names in the 12 uploaders. Then I open the corresponding folder (where the product's temporary images are dropped) and get all images dropped in the folder: if an image is present in the array, I rename it to a permanent image name; if an uploaded image is not present in the array, I delete the temp image from the upload folder. This is how the photos folder looks once the user clicks on DONE: img-0-20190911123456787-product-title-is-appended-in-image-name.jpg img-1-20190911123456777-product-title-is-appended-in-image-name.jpg img-2-20190911123457777-product-title-is-appended-in-image-name.jpg Infrastructure Consideration I use different file systems to save the images: in the Dev environment the images are stored on the local file system; in the Prod environment, images are stored in an AWS S3 bucket. DDD/Architecture Considerations I want to follow DDD principles and I also want to keep the code DRY. Since I have two different infrastructure layers (S3 & local disk) I want to remove as much logic as possible from the infrastructure layer (so I don't have to repeat the logic in each infrastructure implementation). Since I want minimal logic in the infrastructure layer, I need to put the logic somewhere else. (But where should this logic go?)
I assume that in DDD the domain is a good place for putting the logic, but at the same time, in DDD an entity should not have a dependency on the infrastructure layer, and here FileInfo (i.e. System.IO) is the infrastructure layer for disk I/O. I am finding it very difficult to think of a domain object which does not include FileInfo or FileStream. So finally this is how my DDD layers look: The Code: Infrastructure Layer Contract IImageRepository.cs This interface defines the contracts the S3 and Disk IO implementations should fulfil: public interface IImageRepository { string CreateAdFolder(string folderName); List<string> GetPermanentImageNames(string photosUrlPath); string UploadTempImage(FileUpload tempImageUpload); void SavePermanentImages(List<string> curPhotos, string photosUrlPath, ImageInfo imageInfo); } Domain Objects (this is only for the Upload Bounded Context) FileUpload.cs // when the user uploads an image, we save the file with a temporary prefix; once the user saves the form we rename the temp image name to a permanent image name // the user may upload a couple of images; each image is uploaded through an Uploader. In case of an advertisement we have 12 uploaders on the form, but in case of // a profile picture, we only have 1 uploader on the form. Regardless of the number of uploaders on the form, the user may upload several photos on the same uploader // before hitting the save button.
When user hits save, we need to delete all the temporary images except the final version when is currently in the uploader // we need to rename this final images from temp name to a permanent image name public class FileUpload { public FileUpload(HttpPostedFileBase fileToUpload, string urlPath, string upoaderNumber) { FileToUpload = fileToUpload; UrlPath = urlPath; UploaderNumber = upoaderNumber; } public HttpPostedFileBase FileToUpload { get; } public string FileName { get { return FileToUpload.FileName; } } public string UrlPath { get; } public string UploaderNumber { get; } public void ValidateUploaderNumber() { // we have 12 image uploaders on the screen, and they are associated with 12 files in ad folder, we need to check uploaderNumber is within this range if (int.TryParse(UploaderNumber, out int uploaderNumber)) { if (uploaderNumber >= 0 && uploaderNumber <= GlobalConstants.NoOfImagesPerAd) { return; } } throw new Exception($"Invalid Upload Request: Image Uploader Number: {UploaderNumber}"); } } ImageInfo.cs // Details that we need for saving an image, these details would saved as the text properties of the image public class ImageInfo { public ImageInfo(string title, string category) { Title = title; Category = category; } public string Title; public string Category; public string GetJPEGTitle() { return $"{Category}: {Title} - Buy and Sell"; } public string GetJPEGComment() { return $"Buy and Sell for Free from Shopless {GlobalConfig.CountryLongName}"; } public string GetJPEGCopyrite() { return $"Shopless {DateTime.Now.Year.ToString()}"; } } UserUploadPath.cs public class UserPhotosPath { public UserPhotosPath(long userId) { UserId = userId; } public long UserId { get; } /// <summary> /// each user has his own images folder, append folder name under user's images folder /// </summary> /// <param name="folderName"></param> /// <returns>// returns something like: /file-uploads/images/24/$(foldername)</returns> public string AppendtoUserPhotosUrlPath(string 
folderName) { return $"{GetUserPhotosUrl()}{folderName}"; } public void ValidateUserPermissionToPath(string photosUrlPath) { if (string.IsNullOrEmpty(photosUrlPath)) { throw new Exception($"Invalid empthry Path, UserId: {UserId}"); } // since photosUrlPath is coming from the browser, a hacker could construct a path with a different userId and upload images to that path // here, we need to validate the requested path matches the current user's photo upload folder string curUserPath = GetUserPhotosUrl().TrimStart('/'); string trimmedPhotosUrlPath = photosUrlPath.TrimStart(new char[] { '/' }); if (trimmedPhotosUrlPath.StartsWith(curUserPath, StringComparison.InvariantCultureIgnoreCase)) { return; } throw new Exception($"Access denied for UserId: {UserId}, Path: {photosUrlPath}"); } // returns something like: /file-uploads/images/24/ private string GetUserPhotosUrl() { return $"{GlobalConfig.ImageUploadRelativeRoot}{GlobalConstants.UrlSeparator}{UserId.ToString()}{GlobalConstants.UrlSeparator}"; } } Common Helper I could not think of any place to keep this logic (this is for generating temporary and permanent image name) so I have created a static helper class in Common project: ImageNameHelper.cs public static class ImageNameHelper { public static string GenerateUniqueAdFolderName() { return GetIncrementalUniqueString(); } /// <summary> /// Permanent images name should be generated using this pattern: img(UploaderNumber)-(dash-separated-title)-(timestamp).(originalFileExtension) /// </summary> /// <param name="fileName">original/temporary file name</param> /// <param name="imageUploaderNumber">image uploader number</param> /// <param name="title">Title of the advertisement</param> /// <returns>something like this: img1-beautiful-scarf-for-sale-20180423134055768-hashcode.jpg</returns> public static string GenerateUniquePermanentImageName(string fileName, string imageUploaderNumber, string title) { title = title.ToDashSeparatedString(); if (string.IsNullOrEmpty(title)) 
{ title = "picture"; } return $"{GetPermanentPrefix(imageUploaderNumber)}{title}-{GetIncrementalUniqueString()}{Path.GetExtension(fileName)}"; } /// <summary> /// Temporary images name should be generated using this pattern: tmp(uploaderNumber)-(TimeStamp-UniqueCode).(originalFileExtension) /// </summary> /// <param name="fileName">OrgiginalImageName.jpg</param> /// <param name="imageUploaderNumber">uploader number, should be between 0 to 11</param> /// <returns>something like this: tmp-1-TimeStamp-Guid.jpg</returns> public static string GenerateUniqueTemporaryImageName(string fileName, string imageUploaderNumber) { return GetTemporarytPrefix(imageUploaderNumber) + GetIncrementalUniqueString() + Path.GetExtension(fileName); } public static string GetPermanentImagePrefixPattern() { return GlobalConstants.PermanentPrefix + "*"; } public static string GetTemporaryImagePrefixPatternForUploader(string imageUploaderNumber) { // all temporary images with uploader 1 should start with tmp-1 (pattern would be "tmp1-*") return GetTemporarytPrefix(imageUploaderNumber) + "*"; } public static bool DoesImageHaveCorrectPermanentPrefix(string imageName, string imageUploaderNumber) { if (imageName.StartsWith(GetPermanentPrefix(imageUploaderNumber), StringComparison.InvariantCultureIgnoreCase)) { return true; } return false; } private static string GetPermanentPrefix(string imageUploaderNumber) { return GlobalConstants.PermanentPrefix + imageUploaderNumber + "-"; } private static string GetTemporarytPrefix(string imageUploaderNumber) { return GlobalConstants.TemporaryPrefix + imageUploaderNumber + "-"; } // I am using TimeStap to ensure names generated names are incremental and a Guid to ensure they are unique private static string GetIncrementalUniqueString() { return DateTime.Now.ToCompactDateTimeString() + "-" + Math.Abs(Guid.NewGuid().GetHashCode()).ToString(); } } Infrastructure Layer (Disk) This is the implementation of the contracts for saving images in the file system: public 
class DiskImageRepository : IImageRepository { private UserPhotosPath _userPath; private DiskPhysicalPathMapper _physicalPathMapper; public DiskImageRepository(UserPhotosPath userPath, DiskPhysicalPathMapper physicalPathBuilder) { _userPath = userPath; _physicalPathMapper = physicalPathBuilder; } public string CreateAdFolder(string folderName) { string photosUrlPath = _userPath.AppendtoUserPhotosUrlPath(folderName); // no need to check if User directory already exists or not, since it would be automatically created if missing // Create directory will throw exception if it cannot create the directory string physicalPath = _physicalPathMapper.ConvertUrlToPhysicalPath(photosUrlPath); Directory.CreateDirectory(physicalPath); return photosUrlPath; } public List<string> GetPermanentImageNames(string photosUrlPath) { int i = 0; List<string> photos = new List<string>(new string[GlobalConstants.NoOfImagesPerAd]); // initialize list to contain 12 elements string physicalPath = GetValidatedPhysicalPath(photosUrlPath); FileInfo[] images = GetFiles(physicalPath, ImageNameHelper.GetPermanentImagePrefixPattern()); foreach (FileInfo image in images) { if (i >= GlobalConstants.NoOfImagesPerAd) { LogConfig.Logger.Error($"{photosUrlPath} contains more than {GlobalConstants.NoOfImagesPerAd} files, the cause need to be investigated."); break; } photos[i++] = image.Name; } return photos; } public string UploadTempImage(FileUpload tempImageUpload) { // exception is thrown if any of the following validation fails _userPath.ValidateUserPermissionToPath(tempImageUpload.UrlPath); tempImageUpload.ValidateUploaderNumber(); var physicalPath = GetValidatedPhysicalPath(tempImageUpload.UrlPath); DeleteSuperceededTemporaryImagesByUploaderNumber(physicalPath, tempImageUpload.UploaderNumber); var tmpFileName = ImageNameHelper.GenerateUniqueTemporaryImageName(tempImageUpload.FileName, tempImageUpload.UploaderNumber); var fullPhysicalPath = Path.Combine(physicalPath, tmpFileName); 
tempImageUpload.FileToUpload.SaveAs(fullPhysicalPath); return tmpFileName; } public void SavePermanentImages(List<string> curPhotos, string photosUrlPath, ImageInfo imageInfo) { _userPath.ValidateUserPermissionToPath(photosUrlPath); var physicalPath = GetValidatedPhysicalPath(photosUrlPath); FileInfo[] images = GetFiles(physicalPath); for (int i = 0; i < curPhotos.Count; i++) { if (!string.IsNullOrEmpty(curPhotos[i])) { if (images.Where(img => string.Equals(img.Name, curPhotos[i], StringComparison.InvariantCultureIgnoreCase)).Any() == false) { LogConfig.Logger.Error($"photo: {curPhotos[i]}, was not found in ad folder: {photosUrlPath}. This is either a bug or a malicious request."); curPhotos[i] = string.Empty; } } } foreach (FileInfo image in images) { var index = curPhotos.FindIndex(p => p.Equals(image.Name, StringComparison.InvariantCultureIgnoreCase)); if (index >= 0) { if (ImageNameHelper.DoesImageHaveCorrectPermanentPrefix(image.Name, index.ToString()) == false) { var permanentImageName = ImageNameHelper.GenerateUniquePermanentImageName(image.Name, index.ToString(), imageInfo.Title); string fullPhysicalPath = Path.Combine(physicalPath, permanentImageName); image.MoveTo(fullPhysicalPath); SetJpegMetadata(fullPhysicalPath, imageInfo); curPhotos[index] = permanentImageName; } } else { image.Delete(); } } } private string GetValidatedPhysicalPath(string urlPath) { string physicalPath = _physicalPathMapper.ConvertUrlToPhysicalPath(urlPath); if (!Directory.Exists(physicalPath)) { throw new Exception($"Path does not exists: {physicalPath}"); } return physicalPath; } // We have 12 images uploader on the screen which correspond to 12 files (place holders) in ad folder, on the disk drive. If an image is // uploaded on uploader 1, we need to delete all other temporary images uploaded on uploader 1. 
We should not delete permanent images at // this stage, because user may not saves changes, and in that case we need to delete all temporary images // NOTE: it is considered that path and uploader number are already validated. private void DeleteSuperceededTemporaryImagesByUploaderNumber(string adFolderFullPhysicalPath, string uploaderNumber) { FileInfo[] oldTmpImages = GetFiles(adFolderFullPhysicalPath, ImageNameHelper.GetTemporaryImagePrefixPatternForUploader(uploaderNumber)); foreach (var oldTmpImage in oldTmpImages) { oldTmpImage.Delete(); } } private FileInfo[] GetFiles(string physicalPath, string searchPattern = "") { if (string.IsNullOrEmpty(searchPattern)) { searchPattern = "*"; } DirectoryInfo di = new DirectoryInfo(physicalPath); return di.GetFiles(searchPattern).OrderBy(x => x.Name.PadNumbersForAlphanumericSort()).ToArray(); } private void SetJpegMetadata(string physicalPathToJpeg, ImageInfo imageInfo) { var jpeg = new JpegMetadataAdapter(physicalPathToJpeg); jpeg.Metadata.Title = imageInfo.GetJPEGTitle(); jpeg.Metadata.Comment = imageInfo.GetJPEGComment(); jpeg.Metadata.Copyright = imageInfo.GetJPEGCopyrite(); jpeg.Save(); } } DiskPhysicalPathMapper.cs This is just a mapper to translate URL paths to Physical Path (for Disk): public class DiskPhysicalPathMapper { private readonly string _physicalRoot; private readonly string _urlRoot; public DiskPhysicalPathMapper(string urlRoot, string physicalRoot) { _urlRoot = urlRoot; _physicalRoot = physicalRoot; } public string ConvertUrlToPhysicalPath(string urlPath) { if (string.IsNullOrEmpty(urlPath)) { throw new Exception($"invalid photosUrlPath: {urlPath}"); } // replace url root with physical root, case insensitive var path = Regex.Replace(urlPath, _urlRoot, _physicalRoot, RegexOptions.IgnoreCase); return path.Replace(GlobalConstants.UrlSeparator, @"\"); } } I also have a JPegMethaDataAdapter class that I have not included in the review, it just sets JPEG image properties. 
I have also not included the S3 infrastructure layer. Answer: Architecture DDD-wise: The most important thing about DDD is that the domain drives the design (so... DDD). That means that when we look at your domain objects we should be able to understand the whole logic of your application. Let's summarize what your application is doing: a user posts a product for sale, where the product can have a Youtube link, a website and some photos. For every noun in there, there should be a corresponding object. Maybe you have them, maybe you don't; I can't tell from the picture you uploaded. What I can at least tell is that a photo doesn't belong to a user, but to a product (which means that the class UserPhotosPath needs to be re-thought). So, your bounded context should look like: a Product has {Photos, Youtube link, website link} and a user has products. One very important thing I think you misunderstood about DDD is that the domain is the good place to put the domain logic, not all the logic. Domain logic includes: relationships between entities and validations. That's... pretty much all. So, you need to ask yourself: does my domain depend on some file system? Do you want your domain, which should be comprised of only domain objects, to know that you use a file system to store your images? That's your problem. You have a hard time creating a domain object named FileInfo because it shouldn't be a domain object. Finally, Domain Driven Design is kind of a pain to use in a web context; there's one pitfall you should be very aware of that will kill the scalability and performance of your application. The whole bounded-context machinery should be used only when writing to the application. Which means, when you need to show something on your website, have a separate project where all you do is load data into POCOs (plain old C# objects), skipping all business logic, because it doesn't apply to reading.
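To make the suggested bounded context concrete, here is a minimal sketch (written in Python for brevity; the 12-photo rule comes from the question, everything else is illustrative) of a Product aggregate that owns its photos and contains only domain logic (relationships and validations), with no knowledge of disks or S3:

```python
class Photo:
    """A photo as the domain sees it: just an identity, no storage details."""
    def __init__(self, name):
        self.name = name

class Product:
    MAX_PHOTOS = 12  # the invariant from the question: up to 12 photos

    def __init__(self, title):
        self.title = title
        self.photos = []

    def add_photo(self, photo):
        # Domain logic = relationships + validations; storage lives elsewhere.
        if len(self.photos) >= self.MAX_PHOTOS:
            raise ValueError("a product can have at most 12 photos")
        self.photos.append(photo)
```

The repositories (disk, S3) then only persist whatever a valid Product holds, so the 12-photo rule lives in exactly one place.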
{ "domain": "codereview.stackexchange", "id": 35971, "tags": "c#, design-patterns, file-system, asp.net-mvc, ddd" }
How exactly does ROS "Exact Time" policy work, for a time synchronizer?
Question: Quick question, I'm trying to understand more in depth how this aspect of ROS works. As per the title, does exact time policy really mean that the timestamp of the messages being compared have to be literally the same down to the nanosecond? And is there a guarantee that the data transmitted was indeed captured at the specified instant, or does the timestamp only say when the message was published on the ROS topic? For example, let's say I've got a turtlebot. I want to do sensor fusion between the wheel encoders and the gyroscope. In order to have the best result, I'd better make sure that the data coming into my Kalman filter was recorded by the different sensors at the same time, correct? But how can the computer running the turtlebot software be able to give the exact same timestamp to different sensor data? Won't it be running one line of code at a time? Thanks. Originally posted by 1fabrism on ROS Answers with karma: 23 on 2019-05-17 Post score: 1 Answer: As per the title, does exact time policy really mean that the timestamp of the messages being compared have to be literally the same down to the nanosecond? Yes. See wiki/message_filters: ExactTime Policy: The message_filters::sync_policies::ExactTime policy requires messages to have exactly the same timestamp in order to match. Your callback is only called if a message has been received on all specified channels with the same exact timestamp. This is mostly used with message streams that are published by the same node, but on different topics. Camera drivers are a good example: the sensor_msgs/Image and CameraInfo messages are published on different topics, but with the same timestamp. Or sensor_msgs/Image and sensor_msgs/PointCloud2 published by a 3D camera driver. And is there a guarantee that the data transmitted was indeed captured at the specified instant, or does the timestamp only say when the message was published on the ROS topic? 
The timestamp is set by the driver / algorithm / node / process, not by ROS (which is an ambiguous thing to say in any case). So if the driver sets the timestamp equal to when the data was captured, it will represent the time at which the data was captured. If it sets it to something else, it'll be something else. But how can the computer running the turtlebot software be able to give the exact same timestamp to different sensor data? Won't it be running one line of code at a time? Technically (and pedantically): no, as it will execute a single instruction at a time, and a line may translate into many instructions. As to your question: this will be difficult to answer, as in multi-processor or multi-core systems, it's certainly possible for multiple nodes to be active at the same time. The chances of having two independent nodes, active in parallel, to publish with the exact same timestamp are very low however. And that is why there is an ApproximateTimeSynchronizer, which allows you to configure a maximum delta-t between messages for them to still be considered published "at the same time" (or in that specific case: near enough). Originally posted by gvdhoorn with karma: 86574 on 2019-05-18 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by gvdhoorn on 2019-05-21: One could wonder: if a single node publishes multiple messages on different topics with the same timestamp, why not just publish a single message that is the union of all those messages? There are (at least) two reasons for this: you cannot subscribe to a field in a message, only a topic. 
If a single, large msg were published, it would be impossible to express interest in a part of it (ie: subscribers would have to subscribe to and deserialise each and every message, as a whole, to retrieve only part of it). Standard tools (ie: rostopic, RViz, etc) would not know how to process the union-msg: this would immediately mean the loss of visualisation tools, easy debugging and other uses where ROS standard tools help. Plugins or forks would have to be created to deal with the union-msg, which becomes unmaintainable rather quickly. Publishing separate msgs from the standard sets and syncing them using their timestamp avoids both of these problems, while incurring only minor overhead.
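A minimal sketch of the approximate-time idea, stripped of ROS (the real ApproximateTime policy uses a more sophisticated adaptive algorithm, so this is only an illustration of the matching concept):

```python
def match_approximate(stamps_a, stamps_b, slop):
    """Pair each stamp in stamps_a with the closest stamp in stamps_b,
    keeping the pair only if the two differ by at most `slop` seconds."""
    pairs = []
    for a in stamps_a:
        best = min(stamps_b, key=lambda b: abs(a - b))
        if abs(a - best) <= slop:
            pairs.append((a, best))
    return pairs

# two ~10 Hz sensors whose clocks do not line up exactly
odom = [0.000, 0.100, 0.200, 0.300]
gyro = [0.003, 0.104, 0.196, 0.450]
print(match_approximate(odom, gyro, slop=0.01))
# the 0.300 stamp finds no partner within 10 ms and is dropped
```

This is why the approximate policy works for independent nodes: exact equality of timestamps essentially never happens, but a small slop tolerance does.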
{ "domain": "robotics.stackexchange", "id": 33030, "tags": "ros, ros-melodic, timesynchronizer, rostime" }
Expansion of universe, distance confusion
Question: I am having trouble understanding expansion of the universe. Say my coordinate system uses atomic clock cycles for time and the number of clock cycles for light to bounce back to me for distance. Say I have an object at the same coordinate in my coordinate system all the time. Is my interpretation that since the universe is expanding it would get a bigger distance with the metric tensor if it was still, so it is in fact moving towards me even though its coordinates are fixed, correct or wrong? Answer: I suspect you have got confused by a very common misunderstanding of the expanding universe. For a flat universe (the simplest case) the distance between an object at the origin and an object at some position $(x,y,z)$ is given by: $$ \ell^2 = a^2(t)\left( x^2 + y^2 + z^2 \right) $$ where the function $a(t)$ is called the scale factor. For an expanding universe the scale factor increases with time and this means the distance $\ell$ increases with time even when the position $(x,y,z)$ is constant. It appears that the distance to an object increases even though the object isn't moving. Where the confusion comes in is that the coordinates, $x$, $y$ and $z$, are comoving coordinates that are not simply distances in the usual sense that we measure distances using rulers or whatever measuring device we want. The distance we would measure by stretching a measuring tape out to the object is called the proper distance - the Wikipedia article I linked above explains the difference between proper distance and comoving distance. If we take your example of a distant object that is stationary relative to you then in your coordinates that object has a constant position and a zero velocity. Alternatively in comoving coordinates the position of the object changes with time and it has a non-zero velocity. As long as you stick to the same coordinate system the behaviour is as you'd expect.
The conclusion that distance changes even though the object is stationary arises only when you mix up the two coordinate systems.
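A toy numeric illustration of the distinction (the scale-factor values are arbitrary):

```python
# A comoving point at fixed (x, y, z) has a growing proper distance
# l = a(t) * sqrt(x^2 + y^2 + z^2) whenever the scale factor a(t) grows.
import math

def proper_distance(a, x, y, z):
    return a * math.sqrt(x * x + y * y + z * z)

x, y, z = 3.0, 4.0, 0.0          # fixed comoving coordinates
for a in (1.0, 1.5, 2.0):        # scale factor at three moments
    print(a, proper_distance(a, x, y, z))
# comoving separation stays 5, proper distance grows 5 -> 7.5 -> 10
```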
{ "domain": "physics.stackexchange", "id": 50867, "tags": "cosmology, spacetime, metric-tensor, space-expansion, coordinate-systems" }
Why is a [Cu(SCN)2] complex black?
Question: I've been creating various Copper(II) complexes using different ligands and predicting their relative hues using Crystal Field Theory by using the spectrochemical series to predict the extent of d orbital splitting on the Copper(II) ion and the resultant colour of the copper(II) complex. When I added potassium thiocyanate ($\ce{KSCN}$) to a solution containing hexaaquacopper(II) the entire solution turned deep black. Doing some research online and knowing that $\ce{SCN-}$ tends to form complexes with planar geometry I'm fairly certain that the resultant copper(II) complex was $\ce{[Cu(SCN)4]}$. Because of the relative strength of the $\ce{SCN-}$ ligand compared to the pale blue complex formed with $\ce{H2O}$ I assumed the solution would appear redder in hue as higher frequency wavelengths would be absorbed, however, I fail to see why the entire visible spectrum would suddenly be absorbed. Can this phenomenon be explained with Crystal Field Theory? Also perhaps my solution was not dilute enough? Answer: As the user above states, if you mix concentrated solutions of $\ce{Cu(II)}$ salts and $\ce{NCS}$ you get $\ce{Cu(NCS)_2}$ which is a black solid. I'd disagree with them that it's no longer a complex because it 100% is - and you are correct to assume that the local coordination is $\ce{Cu(NCS)_4}$. To be more precise, each copper is coordinated by $\ce{Cu(NCS)_2(SCN)_2}$, and there's actually a large Jahn-Teller distortion so it could also be described as $\ce{Cu(NCS)_2(SCN)_4}$. They're right that the colour can't be easily explained by $d-d$ transitions (i.e. the kind you're thinking of w.r.t. ligand field splitting) though. $\ce{Cu(NCS)_2}$ is black probably because of ligand-to-metal charge transfer (the same reason $\ce{Fe(III)NCS}$ is blood red) - i.e. on absorption you transiently form $\ce{Cu(I)}$ and $\ce{NCS}$ from $\ce{Cu(II)}$ and $\ce{NCS-}$.
It's more complex than that for sure, but I think it's fair to say no one knows yet, because we only worked out the structure two years ago. If you want a lot more information about $\ce{Cu(NCS)_2}$ we published on it here https://journals.aps.org/prb/abstract/10.1103/PhysRevB.97.144421 (on the arxiv at https://arxiv.org/abs/1710.04889).
{ "domain": "chemistry.stackexchange", "id": 10883, "tags": "coordination-compounds, color, crystal-field-theory" }
how to fly ardrone in tum simulator?
Question: i don't know how to fly the ardrone in the tum simulator and how to run the python scripts in the simulator for the autonomy flight Originally posted by slowmed on ROS Answers with karma: 1 on 2014-11-23 Post score: 0 Original comments Comment by Pedro_85 on 2014-11-24: Hi, a couple of questions first. Were you able to run the simulator successfully? What topics do you get when you run rostopic list? tum_simulator replicates the topics created by ardrone_autonomy to match the ardrone_driver node. Let me know so we can go on. Comment by slowmed on 2014-11-26: thank you pedro my response to your two questions is in the answer below Comment by Pedro_85 on 2014-11-26: I don't see in your answer the topics /ardrone/navdata, /ardrone/takeoff, /ardrone/land, and /cmd_vel. Can you confirm if you see them and forgot to include them? Comment by slowmed on 2014-11-26: you saw right, they don't exist Comment by Pedro_85 on 2014-11-26: What kind of error do you get? Is the gazebo gui launched at all? Did you install ardrone_autonomy and tum_simulator from the fuerte branch? Comment by slowmed on 2014-11-26: i installed the ardrone_autonomy and the tum_simulator, gazebo does work but when i try the take off command listed in the wiki nothing happened Comment by Pedro_85 on 2014-11-27: Right, you need to publish to the topics that I mentioned and you don't seem to have, in order to make the ardrone move. Try reinstalling the ardrone_autonomy and tum_simulator packages. When you clone them from the git repository make sure you select the fuerte branch Comment by slowmed on 2014-11-27: how do i select the fuerte branch ?? Comment by slowmed on 2014-12-06: still nothing pedro ? Comment by Pedro_85 on 2014-12-07: You can clone a particular branch from the git repository by using the flag -b <name_of_the_branch>. So it would be: git clone https://github.com/AutonomyLab/ardrone_autonomy.git -b fuerte.
With tum_simulator: git clone https://github.com/tum-vision/tum_simulator.git -b fuerte Answer: You can fly with a joystick or simply (it is very uncomfortable) by publishing to a topic. Check section 3.3 Originally posted by green96 with karma: 115 on 2014-11-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by slowmed on 2014-11-26: actually the command line doesn't work, i tried all the commands listed in section 3.3 and they don't work, and what i really want is to move the drone in the simulator using the python programs in the autonavx_ardrone file
{ "domain": "robotics.stackexchange", "id": 20128, "tags": "ros, tum-simulator, tum-ardrone" }
Why PyTorch is faster than sklearn models?
Question: Recently, I got to know about the hummingbird library for Python. I trained a RandomForest on a 10M-sized dataset with 2 labels. With sklearn it was taking 450 ms for inference. But after converting the same model to PyTorch, it now takes 128 ms for CPU inference. If both are running on the CPU, then why is hummingbird's PyTorch model faster than the sklearn model? I don't understand what hummingbird does to my sklearn model to increase speed. Answer: It is difficult to answer your question without access to your code. The best way to understand the difference is to profile the code and see where the bottlenecks are for your specific problem. For this, you can use different profiling modules in Python: cProfile and the python line profiler
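A minimal cProfile sketch, with a stand-in function in place of the real model's predict call (swap in `model.predict(X)` or the converted model's forward pass):

```python
# Profile an inference-like call and print the aggregated statistics.
import cProfile, io, pstats

def predict(n):
    # stand-in for sklearn's model.predict / hummingbird's forward pass
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
predict(100_000)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())  # top-5 functions by cumulative time
```

Comparing the two profiles side by side (sklearn call vs. converted PyTorch call) usually shows where the time difference comes from.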
{ "domain": "datascience.stackexchange", "id": 7757, "tags": "machine-learning, scikit-learn, random-forest, machine-learning-model, pytorch" }
How land is made flat at scale
Question: For building something like a big building or a parking garage, there is a need for a large flat surface of concrete. Wondering how they make the dirt and the concrete flat. It's hard to make a tiny flat surface at the beach in sand, let alone thousands of square feet. Wondering what sorts of machines, tools or techniques are used, at a high level. Answer: For big constructions, there is a survey plan and a few bench marks marked and cemented by the surveyor. Then at the time of excavation there are auto-rotating laser levels that cast a level moving beam which is easy to see and follow. There are reflecting mirrors and elevation readers. Excavation is done in stages, from rough to fine contours, using different heavy machinery depending on the type and configuration of the site. Many jobs call for the presence of a soils engineer at intervals to test the soils and determine their properties and whether they match the project's soils report. Concrete slabs can be leveled with vibrating, moving, or rotating screeds, or operator-driven motorized finishers, or manually. There are plastic stakes that mark the finish level and they have fittings for long aluminum levels. Structural drawings usually call for 1/8" to 1/16" tolerance in slab level.
{ "domain": "engineering.stackexchange", "id": 2497, "tags": "surface-preparation" }
Would a visible light beamforming "telescope" be feasible?
Question: If you had a phased array of high speed photodiodes that were spaced sufficiently far apart, would it be possible to apply beamforming theory to create a steerable visible light "telescope"? Traditional, earth-bound telescopes are limited by the size of their primary mirror due to the mechanical properties of figured glass as the size increases. Additionally, the larger the primary mirror, the more robust the tracking mechanism needs to be to handle the weight. It seems that if you could instead "virtually" track the object by beamforming the output of a horizontally mounted photodiode array, you are no longer limited by the size of a primary mirror and/or the infrastructure to physically move the telescope. Angular resolution could be increased simply by making the array larger. Furthermore, your tracking accuracy is only dependent on your software processing and electronics, not moving parts. Of course, the problem likely becomes dealing with the immense quantity of data collected from the photodiode array, and the sheer physical size of the array. Light travels at about 299,700 km/s in air, so ~29.97 cm/ns (about 300 m/µs). That seems within the realm of high-speed electronics given a large enough array (10x10m?) Difficult, huge amount of data...but it doesn't seem impossible unless I'm misunderstanding how beamforming works? Answer: Not in the way you think, no. Digital Beamforming works because the digital receiver is able to sense the phase of the incident radio wave; that phase is retained. Photodiodes don't sense the phase – they only sense the intensity. Also, this requires the whole incident radio or light wave to be phase-coherent across its front – that's the case for laser light, but not for two random photons emitted from a huge fusion fireball, I'm afraid. So, the second requirement also falls flat. Generally, even non-laser sources of light have a coherency period – but the more different wavelengths contribute, the smaller that gets.
In TX, that principle works – if you feed a laser through an array of digital phase modulators (optical crystals with electrically adjustable refractive indices), you can build phase-coherent multiemitters. In RX, that might also work: if the received coherent light can be made to fall through a microscope objective, and then onto an acousto-optic deflector, you might, via modulating that deflector's state, be able to build a selective/phase-shifting summer, leading to beamforming. But: that's particle-physics-sized electronics normally, i.e. suitable for situations where you have a very strong laser illuminating very small points (in fact, I learned about AODs just today, and they're usually used the other way around: take one beam of laser, split it into multiple parallel beams, focus these, and trap quanta in the focus point. Cool stuff.). Anyway, there are also mathematical accuracy limits to digital beamforming, and I must admit that I don't know whether they'd even allow higher precision with the kind of timing accuracy we can do today. Then again, physicists invent the coolest devices, and maybe you can
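Some order-of-magnitude numbers behind the answer's point (illustrative values only): beamforming at the optical carrier would require resolving a fraction of one optical period, which is far beyond any sampling electronics.

```python
# Optical period vs. electronics timescales (rough numbers).
c = 2.997e8            # speed of light in air, m/s
wavelength = 550e-9    # green light, m
period = wavelength / c
print(f"optical period ~ {period:.2e} s")     # a couple of femtoseconds
print(f"light travels {c * 1e-9 * 100:.2f} cm per ns")
```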
{ "domain": "dsp.stackexchange", "id": 5095, "tags": "beamforming" }
What decompression algorithms have the shortest implementations?
Question: I am writing an executable compressor for small (roughly 1 kilobyte) executable files. I am exploring compression algorithms that are well suited for this task. The amount of time required to compress and decompress is not important for my purposes, nor is the complexity of the compressor. I am limited to 4 GB of RAM while decompressing and I am chiefly concerned with maximizing the compression ratio while minimizing the size of the decompression stub. I am currently experimenting with arithmetic encoding using various models to predict the posterior probability. This results in a decompression stub around 200 to 300 bytes in size. I have also experimented with LZ based methods, but have rejected them because their implementation is more complex and their performance is both theoretically and empirically inferior to arithmetic encoding with context modeling. What other compression algorithms might be useful candidates for this task? Answer: I expect short decompressors with algorithms that do not maintain an explicit model and use "simple" encodings. An LZ encoding might use (length, offset) with Fibonacci coding, with an offset of 0 meaning a run of length literal codes, (minus) one a repetition of the last code, …. I expect decompressor size to depend on (possibly abstract) machine - "MS Windows x64", NC4000, … (The implied model being Things seen before are more likely than anything new, with short, local repetitions more likely than large ones from way back.) That said, you may be after shortest length of compressed code+decompressor - Kolmogorov complexity.
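Since the answer suggests Fibonacci coding for the (length, offset) tokens, here is a sketch of that code, encoder and decoder for positive integers:

```python
def fib_encode(n):
    """Fibonacci (Zeckendorf) code of n >= 1, with a terminating extra
    '1'. No codeword contains '11' before its end, so the codes are
    self-delimiting -- convenient for compact LZ (length, offset) tokens."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()  # now fibs holds exactly the Fibonacci numbers <= n
    bits = ["0"] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):  # greedy, largest term first
        if fibs[i] <= n:
            bits[i] = "1"
            n -= fibs[i]
    return "".join(bits) + "1"

def fib_decode(code):
    """Inverse of fib_encode; `code` includes the terminating '1'."""
    body = code[:-1]
    fibs = [1, 2]
    while len(fibs) < len(body):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, body) if b == "1")

for n in (1, 2, 3, 4, 12):
    print(n, fib_encode(n))
```

The decoder is a handful of machine instructions per bit, which is exactly why such "simple" universal codes keep the decompression stub small.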
{ "domain": "cs.stackexchange", "id": 6986, "tags": "algorithms, data-compression" }
Changing the size of the turtlesim window
Question: How can I change the canvas width and canvas height of the turtlesim window displayed on running turtlesim_node? Are there some options which can be used with the command "rosrun turtlesim turtlesim_node __name:=turtle1" to change the size of the display window. Originally posted by Hemu on ROS Answers with karma: 156 on 2013-05-22 Post score: 0 Answer: The turtlesim does not support a dynamic windows size. This is mainly due to keeping the code as simple as possible since turtlesim is used as an example/tutorial for ROS. Originally posted by Dirk Thomas with karma: 16276 on 2013-05-23 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Gazer on 2013-05-23: +1 second that
{ "domain": "robotics.stackexchange", "id": 14248, "tags": "ros, turtlesim, turtlesim-node" }
How can I remove active topic?
Question: Hi, I wanted to use a python script to record a bag instead of using the command line in a terminal. My script is based on "how to record a rosbag with python". However, the recorded topics stay active in the topic list even after I kill the process with Ctrl+C. The code snippet is as follows:

import subprocess, shlex, psutil
command = "ros2 bag record -o subset /sensor/position"
command = shlex.split(command)
rosbag_proc = subprocess.Popen(command)

How can I fix this? Any help is appreciated Originally posted by Ima on ROS Answers with karma: 1 on 2022-11-18 Post score: 0 Original comments Comment by Ranjit Kathiriya on 2022-11-18: Follow the answer.. Answer: Hello, You can write a proper node for recording a bag file. You can follow a tutorial for recording a bag file using the python ros2 node. https://docs.ros.org/en/galactic/Tutorials/Advanced/Recording-A-Bag-From-Your-Own-Node-Py.html "I kill the process" Instead, you can destroy the subscription by using the following line: self.destroy_subscription(self.subscriber) Originally posted by Ranjit Kathiriya with karma: 1622 on 2022-11-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Ima on 2022-11-18: Thanks for the reply! I, however, tried writing a ros2 node for recording a rosbag but Ros2 foxy doesn't provide the package rosbag2_py. Comment by Ranjit Kathiriya on 2022-11-18: Have a look in the future branch you will be able to get rosbag2_py https://github.com/ros2/rosbag2/tree/foxy-future https://github.com/ros2/rosbag2/issues/983 If you got your answer then accept the answer by pressing the tick button. Comment by Ima on 2022-11-21: I can't change the whole branch to a new one, and what I'm looking for is removing the remaining active topics from the topic list Comment by Ranjit Kathiriya on 2022-11-21: Clone the git repo, then change the branch to foxy, and build it. I think you will be ready to go.
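One likely cause of the lingering topics is that killing the parent script does not deliver a clean SIGINT to the recorder child, so it never shuts down gracefully. A hedged sketch of a graceful shutdown, demonstrated with a stand-in command since the real recorder needs a ROS 2 environment:

```python
# Stop a recorder subprocess by sending SIGINT to its process group,
# which is what lets a tool like `ros2 bag record` clean up before
# exiting. `sleep 30` stands in for the real recorder command here.
import os, shlex, signal, subprocess

command = shlex.split("sleep 30")   # stand-in for "ros2 bag record ..."
proc = subprocess.Popen(command, start_new_session=True)

# ... record for a while, then shut down gracefully:
os.killpg(os.getpgid(proc.pid), signal.SIGINT)
proc.wait(timeout=10)
print("recorder exited with", proc.returncode)
```

`start_new_session=True` puts the child in its own process group, so the SIGINT reaches it (and any grandchildren) without also interrupting the parent script.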
{ "domain": "robotics.stackexchange", "id": 38134, "tags": "ros2" }
The Spinning Log "Perpetual Motion" problem, and my attempt at a solution
Question: So I was introduced to this "perpetual motion" riddle a few weeks ago. The problem goes like this: we all know perpetual motion machines are not possible, but this riddle seems like it should work as a perpetual motion machine - the problem is to explain why it doesn't work. Here's the situation: (1) You take a room, and put a wall down the middle of the room - splitting it into two equal half-rooms. (2) Take a perfectly cylindrical log whose length is the same as the length of the wall we just put in to make two half-sized rooms, and cut out a place halfway up the wall so that the log slips right in, lengthwise (if you were looking at the middle wall from the inside of one of the rooms, you would not see the ends of the log at all, you would see a half-cylinder running the length of the wall - if you looked from the other room, you would see the other side of the cylinder's length). (3) Make sure the space between the log and the wall is air-tight (the log is treated to absorb no water, there is no friction between it and the wall, etc), and fill one of the half-sized rooms all the way to the top with water. Now you have one side of the log exposed to air, and one side totally surrounded by water. The idea is that, since everyone has experience that logs rise in water, the side exposed to the water should "rise" and the log should spin, creating a perpetual motion machine. Now, I have two explanations of why it doesn't work. One I came up with myself, and one that is proposed in the solution on the page in which I found the problem.
My Solution: Water pressure increases as you increase in depth, so the water molecules hitting the log deeper in the water would have more force than the water molecules hitting the log less deep, but at each infinitesimal depth increase, the water is hitting the log at all angles, and the force from the molecules coming on from all directions would cancel out to leave a force pointing toward the center, at every place the log is exposed to water. Since all of the net forces from the water at each point exposed to water are pointing toward the center, there can be no net torque and the log cannot spin. The net force would be a slight force directly upwards (due to the difference in size of the force vectors pointing inwards between the lower-pressure water up top and the higher-pressure water on the bottom), but since the log is held in place, it can't move upwards. The Proposed Solution "To understand why this would not work, we need to look at how buoyancy itself works. When a lighter-than-water object (let's say a hollow ball) is submerged, it is pushed up by the water apparently in spite of gravity. In fact the opposite is true. It is gravity pulling down on the water that pushes the ball up. The kinetic energy to lift the ball comes from an equal volume of water falling to occupy the space where the ball just was. As the ball moves up, the evacuated area is filled with water from above it; therefore, this water is falling. When the water was above the ball it had potential energy that is exchanged for kinetic as it falls to fill the void left by the ball, thus providing energy to lift the ball. The opposite is true for a heavier-than-water object (let's say a brick), but it's still the same principle. As the brick falls through the water, it is filling space that was once filled by water. As the brick falls, it is providing energy to lift the water to fill the space it just evacuated. Now back to our spinning log example.
If the log were to spin, it would not be evacuating any space for the water to fill. Even though a different part of the log would be filling that space, it would still be the same space it was filling before. There would be no downward movement (falling) of water to convert potential energy into kinetic, so there is no energy to cause the log to spin." Here's my problem: I've never understood this explanation of buoyancy. Why, just because an object is less dense than water, is the water "rushing below it" to fill the space evacuated by the object as it rises? Why doesn't the object just stay where it is? It seems like you're using buoyancy to explain buoyancy - the less dense object floats upward due to buoyancy and the water rushes in below to fill the evacuated space, which causes the buoyant force upwards... what? Why is the water rushing below less dense objects, and not rushing below more dense objects? Am I missing something in the explanation? I understand that when you submerge an object in water, it displaces an equal volume of water, and the weight of the water displaced equals the buoyant force upwards, but I just don't understand why the water is "rushing underneath and pushing the object up". Answer: The explanation uses an energy argument: for the normal case of a submerged piece of wood, you can assume that if the wood and a parcel of water above it switch places, then the water (which is heavier/more massive) drops in the gravitational field, releasing potential energy. This release is not offset by the rising wood since it is not as massive. This energy imbalance goes into the kinetic energy to move the log (and the water). This argument says that in the case of the rotating log, none of the water can move downward to release potential energy. Since the water stays in the same place, no energy is released, and there is no energy to move the log. Why doesn't the object just stay where it is?
Because forces will combine to move it to a position with less energy. You can do a free-body diagram of a ball rolling downhill to see that the net force is down the hill. But you can also simply say that the ball is going to move in a way that lowers its potential energy (which is downward in the gravitational field). Using that argument for the cylinder says that there is no position it is free to move to that lowers the potential energy, so no movement will happen.
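The swap argument can be put into numbers; a minimal sketch with made-up densities and heights:

```python
# Energy bookkeeping for the swap argument: exchange a submerged wood
# parcel with the equal-sized water parcel directly above it and compare
# total potential energy before and after. Numbers are illustrative.
g = 9.81            # m/s^2
V = 0.001           # parcel volume, m^3 (1 litre)
rho_water = 1000.0  # kg/m^3
rho_wood = 500.0
h_wood, h_water = 0.0, 0.1   # initial heights of the two parcels, m

pe_before = g * (rho_wood * V * h_wood + rho_water * V * h_water)
pe_after  = g * (rho_wood * V * h_water + rho_water * V * h_wood)
print(pe_before - pe_after)
# positive: the swap releases energy, which is why the lighter parcel
# rises. For the clamped log, no water-lowering swap exists at all,
# so there is no energy available to spin it.
```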
{ "domain": "physics.stackexchange", "id": 52443, "tags": "fluid-dynamics, buoyancy, perpetual-motion" }
Quantum oracle implementation overhead
Question: I am a physicist getting acquainted with one of the typical constructs for formulation and analysis of quantum algorithms (such as search problems or query complexity models), namely the "oracle function". As far as I understand, the oracle is a quantum black-box entity which computes the quantity to be analyzed. For example, in a qubit circuit formulation of a universal quantum computer the oracle is a deterministically computed Boolean function (correct me here if I'm wrong). My question is: Is quantum oracle implementation overhead counted when the computational complexity of specific/well-known algorithms is compared to alternatives? Examples: the deterministic Deutsch-Jozsa algorithm is exponentially faster than a classical counterpart only if the implementation of the unknown function $f$ requires less than $2^n$ classical "steps" (including obtaining the guarantee that $f$ is either balanced or constant). A stochastic example: collapse of the Grover algorithm speedup by a faulty oracle with an arbitrarily small but constant failure probability. Why does one care: Of course, one can always say that we look just at the non-oracle part or count only the number of queries etc. But any talk of quantum speedup has any chance of empirical relevance only if the overhead for implementation of all of the quantum parts of the construction is taken into account (if it is trivial/constant - no problem). In other words, I find it meaningful to compare quantum algorithms only after they are "fully compiled" onto (at least hypothetically) realizable classically-driven hardware (e.g., a universal set of gates plus as many qubits as one needs - but we'll count all the time & memory required). My question is probably fairly typical for a non-expert; perhaps I just got confused by some less sophisticated introductions to quantum computation. Please deconfuse me.
Answer: As pointed out by Logan Mayfield, Scott Aaronson's blogpost on the topic completely resolves my question. Oracles are a fundamental tool for rigorous investigation of computational complexity classes, which "bring out the latent strengths" of one class over another. The root cause of my concerns was that little (as far as I understand - nothing) can be proven unconditionally about quantum speedups in the unrelativized (compiled to physical hardware) world. Of course, this obstacle does not diminish the power of the oracle framework as a major tool enabling progress in quantum computation theory, including the celebrated example of Shor's algorithm. Personally, it is a consolation to learn from Scott that
{ "domain": "cstheory.stackexchange", "id": 1955, "tags": "quantum-computing, circuit-complexity, oracles" }
Can someone explain intuitively how, for a Galilean universe, $A^4$ is equivalent to $\Bbb{R} \times \Bbb{R}^3$?
Question: I am reading Arnold's book on classical mechanics. Obviously, everyone who's studied basic physics feels comfortable with $\Bbb{R} \times \Bbb{R}^3$. This is just a pair $(t,\mathbf{x})$. There are three basic actions one can take. Uniform motion with velocity: $g_1(t,\mathbf{x}) = (t, \mathbf{x} + \mathbf{v}t)$ Translations: $g_2(t,\mathbf{x}) = (t+s, \mathbf{x} + \mathbf{s})$ Rotations: $g_3(t,\mathbf{x}) = (t,G\mathbf{x})$ However, Arnold talks about defining Galilean space using an affine space $A^4$ and nothing is coming to mind that connects the affine space definition of Galilean space to the intuitive one I stated above. My Question Can someone provide an intuitive explanation for how defining Galilean space as an affine space would permit us to define the same kinds of actions as above? In what way are these two equivalent? Answer: The Galilean spacetime is indeed the affine space $\mathbb{A}^4$. Affine space can be considered as a 'space with no origin', which makes intuitive sense because why would some point (the origin) be special? For example a trivial Galilean space is $\mathbb{E}\times \mathbb{E}^3$ where $\mathbb{E}$ is Euclidean space. The $\mathbb{R}\times \mathbb{R}^3$ you have is referred to as Galilean coordinate space. Now define an affine map which preserves the Galilean spacetime structure as $$\varphi:\mathbb{A}^4\to\mathbb{R}\times\mathbb{R}^3,\; A_t\mapsto(t(A_t),\mathbf{r}(A_t)),$$ where $A_t$ is a point of simultaneous events in Galilean space. This is called a Galilean chart. With this you can identify the Galilean spacetime with the coordinate space $\mathbb{R}\times \mathbb{R}^3$. Intuitively, you attach a coordinate system to the affine space $\mathbb{A}^4$ with this map. So you have this abstract affine space and you attach a coordinate system to it which makes it a coordinate space $\mathbb{R}\times\mathbb{R}^3$. Now all the actions you described can be implemented in the chosen coordinate space.
Edit: The $g$'s form what is called the Galilean group. This is a mapping $$g:\mathbb{R}\times\mathbb{R}^3\to\mathbb{R}\times\mathbb{R}^3,\quad(t,\mathbf{x})\mapsto(t+s,\mathbf{Gx}+\mathbf{v}t+\mathbf{s}).$$ Also it can be shown that all Galilei charts are of the form $\varphi' := g\circ\varphi$. So the $g$'s correspond to a change of coordinates in the coordinate space.
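A small numeric sketch of the group action (in 1-D for brevity, so the rotation $G$ is just $\pm 1$; the numbers are arbitrary):

```python
# A Galilean transformation g = (s, svec, v, G) acts as
#   (t, x) -> (t + s, G*x + v*t + svec).
# Applying two of them in sequence gives another map of the same form,
# which is the group property.
def apply(g, t, x):
    s, sx, v, G = g
    return t + s, G * x + v * t + sx

g1 = (1.0, 2.0, 3.0, 1)    # time shift 1, space shift 2, boost 3
g2 = (0.5, -1.0, 0.0, -1)  # reflection, time shift 0.5, space shift -1

t1, x1 = apply(g1, 2.0, 5.0)
print((t1, x1))            # g1 applied to the event (2, 5)
print(apply(g2, t1, x1))   # then g2 on the result
```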
{ "domain": "physics.stackexchange", "id": 20640, "tags": "classical-mechanics, galilean-relativity" }
The conditions for a shift in a loop momentum to be allowed
Question: When we evaluate the Feynman diagram containing a loop, we commonly use the identity: \begin{align} \frac{1}{A_{1}^{m_{1}} A_{2}^{m_{2}} \cdots A_{n}^{m_{n}}}= \int_{0}^{1} d x_{1} \cdots d x_{n} \delta\left(\sum x_{i}-1\right) \frac{\prod x_{i}^{m_{i}-1}}{\left[\sum x_{i} A_{i}\right]^{\Sigma m_{i}}} \frac{\Gamma\left(m_{1}+\cdots+m_{n}\right)}{\Gamma\left(m_{1}\right) \cdots \Gamma\left(m_{n}\right)} \end{align} to combine denominators of propagators, and make shifts in the loop momentum so that the denominator has the form: \begin{align} \left[\ell^{2}-\Delta\right]^{m} \end{align} where $\ell$ is the loop momentum, and $\Delta$ doesn't depend on $\ell$. However, I'm not sure if this procedure is always valid. Would someone know the conditions for shifts to be allowed, or give me an example where we cannot make the shifts? Answer: If we use dimensional regularization, if $x$ denotes the Feynman parameters, and if the denominator $$\ell^2+ 2a(x)\cdot \ell+ b(x)=\ell^{\prime 2}-\Delta(x)$$ is a quadratic expression in the loop-momentum $\ell^{\mu}$, then clearly the shift of integration variable is $$ \ell^{\prime \mu}~=~\ell^{\mu}+a^{\mu}(x), \qquad \Delta(x)~=~a(x)^2-b(x).$$ This should in principle always work, but be aware that it may shift the integration contour, cf. e.g. this Phys.SE post.
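The identity can be spot-checked numerically for the simplest case $n=2$, $m_1=m_2=1$, where it reduces to $1/(AB)=\int_0^1 dx\,[xA+(1-x)B]^{-2}$ (the propagator values below are arbitrary):

```python
# Midpoint Riemann sum of the n = 2 Feynman-parameter integral,
# compared against the closed form 1/(A*B).
A, B = 2.0, 3.0
N = 100_000
integral = sum(1.0 / (((i + 0.5) / N) * A + (1 - (i + 0.5) / N) * B) ** 2
               for i in range(N)) / N
print(integral, 1 / (A * B))  # the two numbers agree
```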
{ "domain": "physics.stackexchange", "id": 87417, "tags": "quantum-field-theory, renormalization, feynman-diagrams, regularization, dimensional-regularization" }
What is the meaning of cmmol in chemistry?
Question: I came across a unit called $\text{cmmol dm}^{-3}$ in buffer solutions. What does this unit mean? Answer: The unit $\text{cmmol}\,\text{dm}^{-3}$ is the same as $\text{mmol}_{c}\,\text{dm}^{-3}$, which denotes millimoles of charge per litre. I think the former notation is not widely used anymore. On the other hand, the latter is used often, and a Google search returns a number of references. For example, http://micromaintain.ucanr.edu/Prediction/Source/Groundwater/Potential_for_clogging/CP/Water_analysis_for_hazard/Levels_of_concern/ http://www.slidefinder.net/c/converting_various_units_mmol_mmolc/32208520 http://www.scielo.br/scielo.php?pid=S0103-90162011000400012&script=sci_arttext
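A tiny sketch of the conversion (the function name is made up for illustration): moles of charge are just moles of the ion multiplied by the magnitude of its charge number.

```python
# mmol/L -> mmol_c/L: multiply by |charge| of the ion.
def mmol_to_mmolc(mmol_per_L, charge):
    return mmol_per_L * abs(charge)

print(mmol_to_mmolc(2.0, +2))  # 2 mmol/L of Ca2+ -> 4 mmol_c/L
print(mmol_to_mmolc(3.0, -1))  # 3 mmol/L of Cl-  -> 3 mmol_c/L
```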
{ "domain": "chemistry.stackexchange", "id": 3396, "tags": "acid-base, titration" }
Euler Angles Order For Quadrotor Modelling
Question: I am modelling a quadrotor and I need to choose an order for the rotations that transfer vectors represented in the Earth frame to the body frame. What is the most logical order for these rotations? Which order is typically used? Does the order have a big effect on the control of the quadrotor? Thanks in advance for any answers. Answer: I don't understand what you mean by order. I have always seen the quadrotor modelled like this, for instance. For your question of how to transform body-frame vectors into the inertial frame, no quadrotor knowledge is necessary, just applied mechanics: the rotation matrix R shown in the previous link is what you need. It has no influence on the control, as the control is usually modelled with the body-frame angles of the quadrotor.
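To make the order dependence concrete, here is a sketch (not from the answer; Python for brevity) of the commonly used aerospace Z-Y-X (yaw-pitch-roll) sequence, showing that composing the same elementary rotations in a different order yields a different matrix, which is why the convention must be fixed up front:

```python
from math import sin, cos

def rot_x(a):
    return [[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]]

def rot_y(a):
    return [[cos(a), 0, sin(a)], [0, 1, 0], [-sin(a), 0, cos(a)]]

def rot_z(a):
    return [[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

yaw, pitch, roll = 0.3, 0.2, 0.1  # arbitrary test angles (rad)
# Z-Y-X (yaw, then pitch, then roll) versus the reversed composition:
R_zyx = matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))
R_xyz = matmul(rot_x(roll), matmul(rot_y(pitch), rot_z(yaw)))
# R_zyx != R_xyz for generic angles: rotations do not commute.
```

Either composite is still a proper rotation (orthogonal with unit determinant); only the convention differs.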
{ "domain": "robotics.stackexchange", "id": 2573, "tags": "quadcopter" }
Can large capacitors allow power plants to run at full load and store excess electricity?
Question: We know that demand for electricity is not constant, and power plants are not operated under full load conditions. What if power plants were operated under full load and the excess electricity was stored in capacitors? I have read that capacitors of 10 kF capacitance are available. Could the use of such capacitors make it reasonable to operate power plants under full load at all times? Answer: Capacitors are currently "rather too costly" for this purpose. I've revised my estimated cost after some more research, but you appear to be in the "well over one hundred thousand dollars" range!

The dear way: If you were to assemble a 10 kF, 150 volt capacitor from available smaller capacitors now, it would cost around 1 million dollars and store about 30 kWh of electricity - worth maybe 5 to 15 dollars retail depending on where you are, and a lot less wholesale. Using 160 VDC rated parts, 1700 of these would cost about $2,000,000 (really), weigh about 9 tonnes and occupy about 17 cubic metres (!). You'd get a discount for quantity, but overall they are not viable. Datasheet here. One of 1700:

"Doing it yourself" is hardly cheaper. In 1000 quantity, these 3400 F, 2.85 V supercapacitors cost a mere 53 dollars each. Each series string = 150/2.85 = 53 capacitors. Capacitance in series divides by the number of caps, so C per string = 3400/53 ≈ 64 F. Number of strings = 10,000/64 ≈ 156. Total caps = caps/string x strings = 53 x 156 = 8268. Cost = 8268 x 53 = 438,204 dollars. That's better than the one million plus above - but that's a lot of mounting to do, AND balancing circuitry will be essential. 1 to 2 million starts to look almost good.

Using smaller-voltage, high-capacity units "off the shelf". Alternatives include:

"Flow batteries" using eg Vanadium oxide liquid in various oxidation levels. "Tanks" of liquid are pumped through a cell to "charge" and stored in another tank for subsequent discharge. Large trial systems exist. Not yet mainstream. May or may not "make it" commercially.
Lithium Ion - as being used in Tesla cars and their Powerwall home storage technology. JUST becoming economic with careful timing of charge and discharge to buy cheap power and use it at peak cost periods. Large commercial installations exist in eg Germany - costs are about break-even overall so far, with the advantage of continuity being a factor.

Lithium Titanate (a LiIon variant) - coming - used in eg the Nissan LEAF along with traditional LiIon cells. Dearer than standard LiIon, but with immensely fast charge rates, and can manage 5,000 - 10,000 usage cycles with due care.

Molten salt - used to store energy thermally and make power off peak - eg the station at Gila Bend near Phoenix. This is solar-thermal heated, but electric heating could be used.

Pumped storage, as John mentioned. Some use but not very common. Efficiency overall about 60%. The UK has a large system used for grid-levelling applications.

Flywheels - investigated for many decades - good in theory, but to get acceptable energy densities you need large masses (tons) rotating at tens of thousands of RPM. Mechanical failure is not pretty. So far many have tried, but there are no significant working systems. Flywheel added comment - March 2020: there are some flywheel storage systems available, used for peaking load control, with about 10 kWh capacity per flywheel.

Others exist, but that gives a feel.
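The string/bank arithmetic above, plus the stored energy $E = \frac{1}{2}CV^2$, can be reproduced in a few lines (a sketch using the numbers quoted in the answer, not current prices):

```python
import math

cap_per_unit_F = 3400.0      # the 3400 F, 2.85 V supercapacitor from the answer
volt_per_unit = 2.85
price_per_unit = 53.0        # dollars, in 1000 quantity
bank_voltage = 150.0
bank_capacitance_F = 10_000.0

caps_per_string = math.ceil(bank_voltage / volt_per_unit)    # 53 in series
cap_per_string_F = cap_per_unit_F / caps_per_string          # series C divides: ~64 F
strings = math.ceil(bank_capacitance_F / cap_per_string_F)   # ~156 strings in parallel
total_caps = caps_per_string * strings
total_cost = total_caps * price_per_unit                     # ~$438k

# Energy stored in the full bank: E = C V^2 / 2
energy_J = 0.5 * bank_capacitance_F * bank_voltage ** 2
energy_kWh = energy_J / 3.6e6                                # ~31 kWh, matching "about 30 kWh"
```

The energy figure confirms the answer's point: roughly half a million dollars of parts store only a few dollars' worth of electricity.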
{ "domain": "engineering.stackexchange", "id": 344, "tags": "energy-storage, power-engineering" }
Transforming full names to surname and initial
Question: I have a large data frame that I want to merge with another dataset. In order to do so I need to get the names of individuals into a certain format. The function below converts the name in 'column' to the desired format (mostly) and stores it in newColumn. My question is: is there a better (faster and/or more pythonic) way to do this? The main aim is to transform full names into surname and initials, such as:

Novak Djokovic = Djokovic N.
Jo-Wilfred Tsonga = Tsonga J.W.
Victor Estrella Burgos = Estrella Burgos V.
Juan Martin Del Potro = Del Potro J.M.

def convertNames(df, column, newColumn):
    df[newColumn] = 'none'
    for player in df[column]:
        names = player.split(' ')
        if len(names) == 2:
            if (len(names[0].split('-')) > 1):
                newName = names[1]+' '+names[0].split('-')[0][0]+'.'+names[0].split('-')[1][0]+'.'
            else:
                newName = names[1]+' '+names[0][0]+'.'
        elif len(names) == 3:
            newName = names[1]+' '+names[2]+' '+names[0][0]+'.'
        else:
            newName = names[2]+' '+names[3]+' '+names[0][0]+'.'+names[1][0]+'.'
        df[newColumn][df[column] == player] = newName
    return df

Answer: You are doing way too much split()-ing. You split on '-', and if you find the length of the split is greater than 1, you split on the '-' twice more, to get the first and the second part of the hyphenated name. Split once, save the result in a list, and access the list elements!

You are doing too much in convertNames(). It would be better to create a convertName() method, which just processes the player name into the desired form. Then you could call that method from convertNames().

def convertName(player):
    names = player.split(' ')
    given = 1 if len(names) <= 3 else 2  # number of leading given-name tokens
    hyphenated = names[0].split('-')
    if len(hyphenated) > 1:
        names[0:1] = hyphenated
        given += len(hyphenated) - 1
    return ' '.join(names[given:]) + ' ' + ''.join(name[0] + '.' for name in names[:given])

# Test data
for player in ('Novak Djokovic', 'Jo-Wilfred Tsonga', 'Victor Estrella Burgos', 'Juan Martin Del Potro'):
    print(player, ':', convertName(player))
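As an aside (my own sketch, with illustrative names): once the conversion is a pure function, the row loop and the chained indexing df[newColumn][df[column] == player] - which can trigger pandas' SettingWithCopy warning - can be replaced by a single Series.map call:

```python
def convert_name(full_name):
    parts = full_name.split()
    # first token is the given name, except for four-token names like
    # "Juan Martin Del Potro", where the first two are given names
    n_given = 1 if len(parts) <= 3 else 2
    given, surname = parts[:n_given], parts[n_given:]
    initials = "".join(piece[0] + "."
                       for token in given
                       for piece in token.split("-"))  # "Jo-Wilfred" -> "J.W."
    return " ".join(surname) + " " + initials

# With a DataFrame this replaces the whole row-by-row loop:
#   df[new_column] = df[column].map(convert_name)
```

The map call applies the function element-wise and assigns the whole column at once, which is both faster and free of chained-assignment pitfalls.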
{ "domain": "codereview.stackexchange", "id": 32915, "tags": "python, strings" }
When was the last time a total solar eclipse on Neptune courtesy of Triton occurred, and when will the next one be?
Question: If Triton had an equatorial orbit, then solar eclipses would occur around the Neptunian equinox - same as with Saturn and Uranus. But Triton's orbit is infamously inclined to Neptune's equator, so common intuition doesn't help in guessing when an eclipse will happen. Googling this issue leads nowhere apart from articles on Triton occulting background stars. I'm pretty sure an easy way to see this would be to set the location in Stellarium as the Sun itself, observe Triton, and note when it passes over Neptune. Unfortunately, I am unable to do so now or in the near future. Answer: According to Wikipedia, all of Neptune's inner moons can cause solar eclipses. At that distance, "the Sun's angular diameter is reduced to one and a quarter arcminutes across". Triton has an angular diameter of 26 to 28 arcminutes, so it can easily cover the Sun. However, Triton eclipses are quite rare, due to its highly inclined orbit. Also, these eclipses are very brief, because Triton's orbital period is only 5.876854 days (~5 days, 21 hours) and its orbit is retrograde relative to Neptune's axial rotation. The Triton eclipse season occurs twice per Neptune's orbital period (164.8 years), and it's possible for Triton to eclipse the Sun several times during this period. The most recent eclipse season occurred in late 1952 / early 1953. I think the closest eclipse occurred on 1952-Nov-11 5:43 UTC. The next eclipse season will be in 2046, with the eclipses on 2046-Jul-30 15:39 and 2046-Aug-5 12:41 both being very close. Here are some ecliptic latitude & longitude plots, produced using Horizons. The timestep is 10 minutes. Here's a plot for (most of) a recent orbit of Triton, with a 6 hour timestep. And here's the plotting script. The controls are similar to my 3D orbit plotting script given in this answer. Set the aspect_ratio to zero to use the default aspect ratio chosen by matplotlib. The offset option puts 0° longitude in the centre of the plot.
The curve option will generally create a mess if the longitude wraps around.
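As a rough cross-check of the angular sizes quoted in the answer (my own sketch; the diameters and distances are rounded textbook values), the small-angle comparison can be done in a few lines:

```python
from math import atan, degrees

def angular_diameter_arcmin(diameter_km, distance_km):
    # full apparent diameter of a disc of the given size at the given distance
    return degrees(2 * atan(diameter_km / (2 * distance_km))) * 60

# Triton (diameter ~2707 km) seen from Neptune (orbital radius ~354,759 km)
triton = angular_diameter_arcmin(2707, 354_759)          # ~26 arcmin

# The Sun (diameter ~1.392e6 km) seen from ~30 AU
au_km = 149_597_870
sun = angular_diameter_arcmin(1_392_000, 30.1 * au_km)   # roughly an arcminute
```

Triton's disc comes out more than twenty times wider than the Sun's, so a total eclipse is geometrically easy whenever the alignment happens.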
{ "domain": "astronomy.stackexchange", "id": 7234, "tags": "solar-eclipse, neptune, triton" }
What wavelength to best detect the "9th planet"?
Question: We know that the faintness of reflected sunlight will make detecting the 9th planet very difficult in visible light. Is there another band that will be more likely to detect it? What is the surface temperature of this object likely to be, and what would that mean about its optimal detection wavelength? Answer: Direct reflection of sunlight is the most likely scenario for a ninth-planet discovery; however, that does not hold if the object has a very low albedo. I assume you are interested in what wavelengths the planet would radiate. For the surface temperature, the rotation of the planet is important. If it is locked with one side facing the Sun, or rotates very slowly, the centre of the Sun-facing hemisphere radiates as much energy as it gets from the Sun. At 60 AU, the solar flux is about 0.38 W/m². Using the Stefan-Boltzmann law, we obtain an equilibrium surface temperature of 51 K (that is the highest possible surface temperature, assuming it does not have an atmosphere). Wien's displacement law tells us that radiation from a 51 K object peaks at a wavelength of 57 µm (infra-red). For a rotating body, the equator temperature is 38 K, with radiation peaking at 78 µm (still infra-red). Using an albedo of 0.5, the peaks are 68 µm and 90 µm for a non-rotating and a rotating body respectively. Note that this is for the equator region only; the actual peak wavelength is going to be a little higher, belonging to the far infra-red spectrum. Also, the high uncertainty in rotation, albedo and mass (mass is important for internal heat) makes it impossible to get higher accuracy than that. 60 AU is a very optimistic perihelion distance for the ninth planet, so for a more realistic distance of, say, 200 AU, it is not possible to observe it in the IR spectrum if it does not have a significant internal heat source.
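The estimates above can be reproduced in a few lines (a sketch with rounded constants, so the results may differ from the answer's figures by a few percent):

```python
from math import pi

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN = 2898.0           # Wien displacement constant, um*K
S_1AU = 1361.0          # solar constant at 1 AU, W/m^2

flux = S_1AU / 60**2                          # ~0.38 W/m^2 at 60 AU

# slow rotator / tidally locked: subsolar point re-radiates the incident flux
T_subsolar = (flux / SIGMA) ** 0.25           # ~51 K
# fast rotator: equatorial flux averages to S/pi over a rotation
T_equator = (flux / (pi * SIGMA)) ** 0.25     # ~38 K

peak_subsolar_um = WIEN / T_subsolar          # ~57 um
peak_equator_um = WIEN / T_equator            # ~76 um

albedo = 0.5                                  # halving the absorbed flux
T_subsolar_albedo = T_subsolar * (1 - albedo) ** 0.25
peak_subsolar_albedo_um = WIEN / T_subsolar_albedo   # ~68 um
```

All the peaks land in the far infra-red, which is the answer's main point.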
{ "domain": "astronomy.stackexchange", "id": 1322, "tags": "solar-system, planet, observational-astronomy, 9th-planet, wavelength" }
Homotopy Theory for Topological Insulators
Question: I'm trying to understand topological insulators in terms of homotopy invariants. I understand that in 2 spatial dimensions, we have Chern insulators since $$\pi_2(S^2) = \mathbb{Z}$$ One subtlety that I don't get is why it's alright to replace the Brillouin zone, which is a torus ($T^2$), by a 2-sphere ($S^2$) when calculating this invariant. Secondly, in 3d, we have that $$\pi_3(S^2) = \mathbb{Z}$$ So then why is there no Chern insulator in 3d? I'm assuming this has to do with "classifying spaces" so I'd appreciate an answer involving homotopy invariants that clarifies why there is only the trivial insulator in 3d without time-reversal and also why the classification changes to $\mathbb{Z}_2$ in the presence of time-reversal symmetry. Answer: As pointed out by FraSchelle, your first question (why we can replace the Brillouin zone by a sphere when calculating winding numbers) has been asked (and answered) a few times. The same goes for your tag-on question of why we get a $\mathbb Z_2$ invariant in the case of extra symmetries. So I will focus on your middle question, which I find the most interesting: If $\pi_3(S^2) = \mathbb Z$, why don't we have a topological phase of matter corresponding to that? Well: we do :) It is called a Hopf insulator (due to the fact that the non-trivial maps $S^3 \to S^2$ are the so-called Hopf maps). But it is not as interesting as a Chern insulator, because a Chern insulator cannot be connected to a product state without a phase transition (i.e. it has intrinsic topological order--at least for some of the definitions). The Hopf insulator, however, can be trivialized. You might wonder how that is possible, given it has a non-zero discrete index given by the winding number of the Brillouin zone over $S^2$. Indeed: isn't the whole point of such discrete invariants that they show we cannot trivialize the state without a phase transition?
Well, one objection might be that such a winding number is only well-defined if you have a Brillouin zone, which means you have to presume translation invariance and no interactions. The same can be said about the winding number for a Chern insulator, but it turns out that in that case the topological invariant can be extended even to the case where translation invariance is broken and/or interactions are added. (One usual argument to make this plausible is that the winding number in the case of the Chern insulator is equivalent to the discrete Hall conductance, with the Hall conductance being well-defined even without translation invariance or with interactions. Of course to make that a justified statement, one has to prove that the Hall conductance, if non-zero, is quantized. I don't know a good argument for this, but one argument I could imagine goes like this: having a Hall conductance means that your effective electromagnetic response in your material is given by $S = k \int A d A$ (indeed: use the Euler-Lagrange equations to show that this gives a Hall conductance $\propto k$). If one then believes this action should also work on the quantum level, then it is a well-known fact that this action (called a Chern-Simons action) is only well-defined if $k$ is discrete [it has to do with the fact that under a gauge transformation, $\int A d A$ changes by a multiple of $2\pi$, so if we want $e^{iS}$ to be gauge-invariant, we need $k \propto$ an integer.]) For the Hopf insulator I do not know such an extension/argument. But does this mean that even if we assume translation invariance and no interactions that our topological phase is non-trivial? Well: yes and no. In a strict way, yes, because then we have our non-zero topological invariant. But in a physical way, not really. This is because $\pi_3(S^2)$ presumes our system only has two bands. 
Indeed the ``$S^2$'' arises from the fact that for every momentum $\boldsymbol k$ we have $H_{\boldsymbol k} = \boldsymbol n_{\boldsymbol k} \cdot \boldsymbol \sigma$ (where we can assume $|n_{\boldsymbol k}| = 1$ and so we get $S^2$). So this doesn't tell us what happens if we allow for extra bands. In fact, when people classify these phases, they check what happens when you add bands, because we don't want to call a phase topological if by adding a trivial band to our system, we can now connect our whole state to a trivial product state without a phase transition. But it turns out in the case of the Hopf insulator this is exactly the case. (See e.g. https://arxiv.org/abs/1307.7206 for more information.)
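A small numerical illustration of the $\pi_2(S^2)$ winding discussed above (my own sketch, not from the answer): the Chern number of a two-band model $H_{\boldsymbol k} = \boldsymbol n_{\boldsymbol k} \cdot \boldsymbol \sigma$ with $\boldsymbol n = (\sin k_x, \sin k_y, m + \cos k_x + \cos k_y)$, computed as the discretized solid-angle integral $\frac{1}{4\pi}\int \hat n \cdot (\partial_{k_x}\hat n \times \partial_{k_y}\hat n)\, d^2k$ over the Brillouin torus; the model and grid size are illustrative choices:

```python
from math import sin, cos, pi

def chern_number(m, N=100):
    dk = 2 * pi / N

    def n(i, j):
        # unit vector n(k) on the N x N momentum grid
        kx, ky = i * dk, j * dk
        v = (sin(kx), sin(ky), m + cos(kx) + cos(ky))
        r = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
        return (v[0] / r, v[1] / r, v[2] / r)

    total = 0.0
    for i in range(N):
        for j in range(N):
            nv = n(i, j)
            # central differences of the unit vector, periodic in k
            dx = [(a - b) / (2 * dk) for a, b in zip(n((i + 1) % N, j), n((i - 1) % N, j))]
            dy = [(a - b) / (2 * dk) for a, b in zip(n(i, (j + 1) % N), n(i, (j - 1) % N))]
            cross = (dx[1] * dy[2] - dx[2] * dy[1],
                     dx[2] * dy[0] - dx[0] * dy[2],
                     dx[0] * dy[1] - dx[1] * dy[0])
            total += (nv[0] * cross[0] + nv[1] * cross[1] + nv[2] * cross[2]) * dk * dk
    return total / (4 * pi)
```

For $0 < |m| < 2$ the map wraps the sphere once and the integral converges to $\pm 1$; for $|m| > 2$ it converges to $0$, illustrating the integer-valued winding the text describes.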
{ "domain": "physics.stackexchange", "id": 32657, "tags": "condensed-matter, mathematical-physics, topological-insulators, topological-phase" }
Quadratic Functions Calculator/Table (Simple console app)
Question: Built so that it doesn't depend on DLLs: http://s000.tinyupload.com/?file_id=25464905355948914118

My goal is to make a console calculator with a subset of the functions that are available on a scientific calculator. This source is WIP; the only dependencies used are the ones that Visual Studio 2015 gives you when you create a Win32 console app.

// MathFunction_Table.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include <iostream>
#include <string>
using namespace std;

//string to integer variables
string astr, bstr, cstr, r1str, r2str, stepsstr;
int a, b, c, r1, r2;
int step = 0; //steps are on 0 by default

//variables
int x;
string selection;

int main()
{
    // while (true) makes sure that it doesn't exit the console application unless the user intends to
    while (true)
    {
        system("cls");
        cout << "\n\t \t FUNCTIONS TABLES X AND Y (Mathematics)\n"
                "\t Select:\n\n"
                "\t Quadratic (q)\n\n"
                "\t Linear (l)\n"
                "\n \t \t or Quit (quit)\n\n"
                "\n --> ";
        getline(cin, selection);
        if (selection == "quit")
        {
            break;
        }
        else if (selection == "q") //quadratic
        {
            system("cls");
            cout << "\n\t Quadratic: ax^2 + bx + c\n"
                    "\n \t Enter value of a --> ";
            getline(cin, astr);       //user input for a
            a = atoi(astr.c_str());   // atoi(astr.c_str()) is just a function I use to convert from string to int
            cout << "\n \t Enter value of b --> ";
            getline(cin, bstr);       //user input for b
            b = atoi(bstr.c_str());
            cout << "\n \t Enter value of c --> ";
            getline(cin, cstr);       //user input for c
            c = atoi(cstr.c_str());
            //user inputs 2 numbers for a range of X values
            while (true)
            {
                cout << "\n\t Select a range of x values to display on table ( R1 <-> x <-> R2 )\n\n"
                        "\t *** R1 must be greater than R2. *** \n\n"
                        "\n \t R1 --> "; //range
                getline(cin, r1str);
                r1 = atoi(r1str.c_str());
                cout << "\n \t R2 --> ";
                getline(cin, r2str);
                r2 = atoi(r2str.c_str());
                //steps
                cout << "\t \t Steps?\n\n"
                        "\t *** Make sure that R1 divided by the steps number = a whole number *** \n"
                        "\t *** otherwise it won't work! *** \n"
                        "\n \t Just enter 0 if you don't need steps...\n";
                cout << "\t steps --> ";
                getline(cin, stepsstr);
                step = atoi(stepsstr.c_str()); //user input for steps
                if (r1 < r2)
                {
                    system("cls");
                    cout << " \n \n \t error ";
                    system("pause");
                    continue;
                }
                else
                {
                    break;
                }
            }
            //print the table in the range
            x = r2;
            cout << "\n";
            cout << "\t-----------Table-----------\n" << endl;
            cout << "\t For the equation, " << a << "x^2 + " << b << "x + " << c << endl << endl;
            for (x; x <= r1; x++)
            {
                if (x == (r1 - r2))
                {
                    x++;
                }
                x = x + step; //if there are steps, a difference of "step" shows between every X on every line
                cout << "\t X = " << x << " Y = " << (a * x * x) + (b * x) + c << endl;
                if (x == r1)
                {
                    cout << endl;
                    cout << "\t---------------------------\n";
                    system("pause");
                }
            }
            continue; //quadratic end
        }
        /* String-to-integer conversion occurs because an integer does not allow me
           to display error messages when there is a complicated condition -
           by that I mean, I am just not bothered enough... yet. */
        else if (selection == "l") //linear
        {
            //linear end
        }
        else
        {
            cout << "\n \tPlease select either q or l \n \n";
            system("pause");
            continue;
        }
    }
}

Answer: Your code has several issues that make it very hard to read.

Remove stdafx.h

Remove the #include "stdafx.h" line as that is a Windows-only precompiled header which hampers portability. Also, you aren't using any of the functionality inside that header.
Remove the global variables

//string to integer variables
string astr, bstr, cstr, r1str, r2str, stepsstr;
int a, b, c, r1, r2;
int step = 0; //steps are on 0 by default

//variables
int x;
string selection;

All of these variables are NOT necessary to have as global variables. You should always declare your variables in the smallest scope possible. This allows you to take advantage of C++'s resource management effectively while allowing you (or the programmer reading your code) to more easily identify the variables. I found myself having to scroll up and down many times throughout your code to remember what was declared where. It also has the nice effect of not cluttering the global namespace with variables.

Main function and if-elseif-else

I generally believe that the main function should be as small as possible while handling most of the I/O requests of a program. In this case, your main function houses all of the logic of the program, which isn't a good thing. You can split up your main into a small dispatch on the user's choice and have each branch call a helper function to do the heavy lifting. Also, you use the following logic a lot in your program:

Read an integer from input and store it in a string.
Convert the string to a number.

Why not wrap this up into a function?

int GetUserNum(istream &is = std::cin)
{
    string val;
    getline(is, val); // getline already consumes the newline, so no is.ignore() is needed
    return stoi(val);
}

You also have what's called "spaghetti logic" with all those continue and break statements. These should be replaced with do-while loops that keep prompting the user for new values if some condition isn't met.

Finally, your quadratic table printing is unnecessarily complicated and filled with bugs. For example, it seems that your steps variable is to allow the x value to be incremented by some user-defined number, yet you're letting x be incremented every time by the loop. This will cause you to miss some steps on the graph.
Moreover, you never actually check whether r1 is cleanly divisible by steps in your code. You can express your table drawing logic in a single small loop.

for (int x = r2; x <= r1; x += steps)
{
    cout << (a * x * x) + (b * x) + c << "\n";
}

After cleaning up the code a bit, your general program logic could then be this:

int GetUserNum(istream &is = std::cin)
{
    string val;
    getline(is, val); // getline consumes the trailing newline, so repeated reads work
    return stoi(val);
}

// TODO: implement
void HandleLinear()
{
}

void PrintQuadraticTable(int r1, int r2, int steps, int a, int b, int c)
{
    for (int x = r2; x <= r1; x += steps)
    {
        cout << (a * x * x) + (b * x) + c << "\n";
    }
}

void HandleQuadratic()
{
    cout << "A value?\n";
    int a_val = GetUserNum();
    cout << "B value?\n";
    int b_val = GetUserNum();
    cout << "C value?\n";
    int c_val = GetUserNum();

    int r1 = 0, r2 = 0;
    do
    {
        cout << "R1? Make sure R1 > R2\n";
        r1 = GetUserNum();
        cout << "R2?\n";
        r2 = GetUserNum();
    } while (r1 <= r2);

    int steps = 0;
    do
    {
        cout << "Steps? Make sure steps divides cleanly into R1.\n";
        steps = GetUserNum();
    } while (steps <= 0 || r1 % steps != 0); // also guards against division by zero

    PrintQuadraticTable(r1, r2, steps, a_val, b_val, c_val);
}

int main()
{
    while (true)
    {
        /* Give user some prompt */
        string answer;
        getline(cin, answer);
        if (answer == "quit")
        {
            cout << "Quitting\n";
            return 0;
        }
        else if (answer == "l")
        {
            HandleLinear();
        }
        else if (answer == "q")
        {
            HandleQuadratic();
        }
        else
        {
            cout << "Invalid option!\n";
            return 1;
        }
    }
}

(Note that a switch cannot dispatch on a std::string, which is why the selection is handled with an if-else chain.)

Notice that by having a few helper functions around, we were able to:

Get rid of duplicate code
Remove many unnecessary comments by giving functions good, descriptive names
Organize our code into logical units
Keep the main function clutter-free and allow the reader to have a better idea of the high-level logic of the program
Get rid of all the global values and allow us to declare variables only when we needed to
{ "domain": "codereview.stackexchange", "id": 17603, "tags": "c++" }
"Hard wall"/ "soft wall"
Question: I have encountered those terms in various places. As I understand it, "soft wall" can correspond to a smooth cutoff of some spacetime, while "hard wall" can be a sharp one, which can be described in terms of D-branes. Could somebody please explain the terminology, and in which context it can occur? Answer: You are right about your understanding of these terms. This terminology appears in extensions of the Randall-Sundrum type brane world models. The original model contains a single compact extra dimension bounded by two branes and is known as a hard wall model, with the "hard wall" referring to the hard cutoff of space by the IR brane. With such a geometry it is found that the Kaluza-Klein (KK) masses of particles that live in the bulk scale as $m_n^2 \sim n^2$ (like the energy levels of a particle in a box). Attempts were made to use RS type setups as duals to QCD in order to calculate meson masses etc. This is known as AdS/QCD. However, the meson mass spectrum is what is called a Regge spectrum, i.e. $m_n^2 \sim n$, and so the RS type model needed to be adapted. This paper first introduced the idea of a soft wall to solve this problem. One of the branes in the hard wall model is removed and a dilaton field $\Phi$ is introduced which dynamically cuts off the space-time $$S= \int d^5x \,\sqrt{g}\, e^{-\Phi}\mathcal{L}.$$ The profile of the dilaton in the extra dimension then determines the KK spectrum of bulk fields, and for a quadratic dilaton profile ($\Phi(z) \sim z^2$) a Regge spectrum is produced. The removal of one of the branes (hard spacetime cutoff) and its replacement by a smooth dynamical cutoff coming from the dilaton coined the name "soft wall". Following this idea, people decided to model electroweak physics with such a geometry (see e.g. here). All the standard model fields, including the Higgs, must now propagate in the bulk.
The new setup offered unique phenomenology and is far less constrained by electroweak precision observables and FCNCs, which cause severe tensions in the original RS. Note that since the dilaton field is not normally given a kinetic term in such models, it is not a true dynamical field, and one may simply consider the effect as being a different form of metric than RS. So essentially the difference between hard wall and soft wall is just a different geometry of the extra dimension, which produces different phenomenology.
{ "domain": "physics.stackexchange", "id": 5999, "tags": "quantum-mechanics, quantum-field-theory, particle-physics, terminology, ads-cft" }
How can I measure a droplets contact angle without a setup?
Question: I have to use a phone camera to replicate a sessile drop analyzer, but there doesn't seem to be a standard for the different parameters, such as how far away my camera should be from the droplet or how much liquid I should use for my droplets. Is it possible to get accurate results this way, and how should I go about it? Answer: In 2010, Borguet et al. published in Journal of Chemical Education, DOI: 10.1021/ed100468u, an intentionally simplified setup which may be useful as a reference. A Sony Cybershot camera was placed in front of a sample scrutinized under diffuse illumination. The additional lens shown is focussed on the drop (both for water and $n$-hexadecane, the authors recommend a volume of $\pu{5 uL}$; the focal length of the lens used equals $\pu{50 mm}$). With photos recorded like this, the contact angle determination was performed with the contact-angle plugin of ImageJ. Like Fiji, it is open-source, freeware, and Java-based/OS-portable. To quote from their publication: For each measurement, the user must choose two points to manually define the baseline and three points along the drop profile (see the supporting information that features a set of instructions supplying all the details on the procedure used to make the measurement). The program then fits the profile of the drop and calculates the contact angle using the sphere approximation or the ellipse approximation. In our study, the ellipse approximation gave consistent results for contact angles $> 40^\circ{}$. For drops with contact angles $< 40^\circ{}$, the sphere approximation was used. In comparison with reference data, the authors judge the contact angles recorded, as well as the subsequently calculated surface tensions, as good enough for instruction/lab classes and research labs.
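If a full profile fit is overkill, the sphere approximation mentioned in the quote can also be applied by hand to a photo. A minimal sketch (my own, under the assumption of a spherical-cap drop, with the width measured at the contact line; only the pixel ratio matters, so no calibration is needed):

```python
from math import atan, degrees

def contact_angle_deg(width, height):
    # Spherical-cap geometry: for base half-width a = width/2 and cap
    # height h, tan(theta/2) = h/a, so theta = 2*atan(2h/w).
    return 2 * degrees(atan(2 * height / width))

# a hemispherical drop (height = half the base width) gives ~90 degrees
theta = contact_angle_deg(200, 100)
```

The width and height can be read off the photo in pixels with any image viewer; the method degrades for large drops, where gravity flattens the cap.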
{ "domain": "chemistry.stackexchange", "id": 8496, "tags": "water, surface-chemistry" }
What does a wedge in a graph look like?
Question: I am reading Decompositions of Triangle-Dense Graphs by Gupta et al. On page 2, in Definition 1 what is a wedge in a graph? I know what triangle is but I don't know what wedge is and google isn't helping! Answer: They say it in the paper. Let a wedge be a two-hop path in an undirected graph. So it is a path with 2 edges, like 2/3 of a triangle.
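Since a wedge is just a two-hop path centred on a vertex, the number of wedges follows directly from the vertex degrees: every unordered pair of neighbours of $v$ forms one wedge at $v$. A tiny sketch (not from the answer):

```python
def wedge_count(adj):
    # sum over vertices of (deg choose 2)
    return sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in adj.values())

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # K3: 3 wedges, all closed
path = {0: {1}, 1: {0, 2}, 2: {1}}            # P3: exactly 1 wedge, open
```

A triangle "closes" three wedges at once, which is why measures like triangle density are often phrased as the fraction of wedges that are closed.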
{ "domain": "cs.stackexchange", "id": 7781, "tags": "graphs, terminology" }
Question on finding nuclear ground state spins using shell model
Question: So I'm studying the shell model and I understand where the individual nucleon energy levels come from (Woods-Saxon plus spin-orbit interaction), but I'm stumped on how to find the ground state total quantum number $J$ for the entire nucleus. Take for example oxygen-16 (8 protons and 8 neutrons). Apparently for this case $J = 0$. I know that from filling up nucleon energy levels from the bottom, you'd get a total $M_J$ value of $0$ by summing over the $m_j$ of the individual nucleons: $$ M_{J,protons} = (-\frac{1}{2} + \frac{1}{2}) + (-\frac{3}{2} - \frac{1}{2} + \frac{1}{2} + \frac{3}{2}) + (-\frac{1}{2} + \frac{1}{2}) = 0$$ and similarly for neutrons. The parentheses separate the sums of $m_j$ over the $1s_{1/2}$, $1p_{3/2}$, and $1p_{1/2}$ energy levels. How do you get from the above to the conclusion that $J=0$? Intuitively I'd say that to find $J$ you'd have to use the angular momentum addition theorem for the sum of the total angular momentum quantum numbers $j_i$ of each nucleon $i$, which would give you a range of possible $J$ values in integer steps. How can you just conclude $J=0$? My professor mentioned that due to the arbitrariness of the z-axis, we can take $M_J$ to be equal to $J$, but I have no idea where this is coming from or why it would make any sense. Any help would be greatly appreciated! Thanks in advance! Answer: I would say that for the nuclei with filled shells $J = 0$ follows from the Pauli principle. The question about filled nuclear shells is probably somewhat similar to questions about filled electron shells. At PSE, questions about electron shells were addressed. However, here is my attempt. Let $|\Psi\rangle$ denote the state vector of the nucleus according to the shell model. To show that $J=0$, it is enough to prove that in the state $|\Psi\rangle$ all components of the total angular momentum are equal to zero: $$ \hat{J^z}|\Psi\rangle = 0,\quad \hat{J^x}|\Psi\rangle = 0,\quad \hat{J^y} |\Psi\rangle = 0.
$$ The validity of the first of these equalities was proved by Samuele Fossati; the other two are equivalent to the following $$ \hat{J^+}|\Psi\rangle = 0,\quad \hat{J^-}|\Psi\rangle = 0.\tag{1} $$ Here $\hat{J^\pm} = \hat{J^x}\pm i\hat{J^y}$ are the raising and lowering operators of the total angular momentum. In the shell model of the nucleus, the interaction between nucleons is neglected. Therefore, the $|\Psi\rangle$ state is constructed from one-nucleon states. Nucleons are fermions, so two or more nucleons cannot be in the same one-nucleon state. We can say that there is a set of one-nucleon states, and the basis states in the state space of a system of many nucleons are determined by the sets of occupied one-nucleon states. The vectors $\hat{J^+}|\Psi\rangle$ and $\hat{J^-}|\Psi\rangle$ in the general case are linear combinations of the basis states $\sum_i|\Psi,i\rangle$, where each state $|\Psi,i\rangle$ differs from $|\Psi\rangle$ by one occupied one-nucleon state. But in the case of filled shells, $|\Psi\rangle$ in some sense corresponds to the filling of all possible one-nucleon states. This means that there are no basis states $|\Psi,i\rangle$ for constructing non-zero states $\hat{J^+}|\Psi\rangle$ and $\hat{J^-}|\Psi\rangle$. Consequently, in the case of filled shells, equalities (1) are valid and the total angular momentum is equal to zero, $J = 0$. I admit that the formulations in the last paragraph may not be entirely clear and understandable. I'm afraid that the only way for me to formulate the statement more precisely is to write a lot more formulas and use the method of second quantization.
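The $M_J$ bookkeeping in the question can be checked mechanically (my own sketch): every filled $j$-subshell contributes $m_j$ values in $\pm$ pairs, so each subshell, and hence the whole O-16 configuration, sums to zero:

```python
from fractions import Fraction

def m_values(two_j):
    # m_j runs from -j to +j in integer steps, with j = two_j / 2
    return [Fraction(m, 2) for m in range(-two_j, two_j + 1, 2)]

# protons (or, identically, neutrons) of O-16 fill 1s_{1/2}, 1p_{3/2}, 1p_{1/2}
filled = [1, 3, 1]  # 2j for each filled subshell
M_J = sum(sum(m_values(two_j)) for two_j in filled)  # 0, as in the question
```

This only establishes $M_J = 0$; the argument in the answer (no states left for $\hat J^\pm$ to raise or lower into) is what upgrades it to $J = 0$.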
{ "domain": "physics.stackexchange", "id": 98598, "tags": "energy, nuclear-physics, neutrons, protons" }
How Costmap works
Question: I am trying to understand how costmaps work. I have a costmap with these params:

base_scan: {clearing: true, data_type: LaserScan, expected_update_rate: 0.4, marking: true, max_obstacle_height: 2.4, min_obstacle_height: 0.08, observation_persistence: 5.0}
cost_scaling_factor: 13.0
footprint: '[[-0.375,-0.375],[-0.375,0.375],[0.375,0.375],[0.375,-0.375]]'
footprint_padding: 0.05
global_frame: /odom
height: 6
inflation_radius: 0.5
lethal_cost_threshold: 100
map_topic: map
map_type: voxel
mark_threshold: 0
max_obstacle_height: 2.0
max_obstacle_range: 2.5
observation_sources: base_scan
obstacle_range: 2.5
origin_x: 0.0
origin_y: 0.0
origin_z: 0.0
publish_frequency: 5.0
publish_voxel_map: false
raytrace_range: 3.0
resolution: 0.05
restore_defaults: false
robot_base_frame: /base_link
robot_radius: 0.46
rolling_window: true
static_map: false
track_unknown_space: false
transform_tolerance: 0.3
unknown_cost_value: 0
unknown_threshold: 9
update_frequency: 5.0
width: 6
z_resolution: 0.05
z_voxels: 10

rviz (OccupancyGrid on topic /test_node/local_costmap/inflated_obstacles): http://imageshack.com/a/img46/971/wf7w.png

How the costmap really looks (OpenCV): http://imageshack.com/a/img571/8012/n7mm.png

(I assumed that black is obstacle and white is free, because when I put the robot in free space I get a white costmap with a black square (robot footprint).) This assumption was wrong; it can be untrue, as @ahendrix mentioned. The costmap picture is flipped vertically relative to the OccupancyGrid. My questions:

1. Why does the costmap consist only of 0 or 255 values (I thought there should be a gradient 0-255)? I printed values to the console and there are values between 0-255 (but not a lot of them - I suppose it depends on the inflation radius). But I still don't get why the costmap looks like this yet publishes correct OccupancyGrid topics.
2.
Why, when an obstacle is spotted, is the whole "beam" on the map 255 (rather than, for instance, only the part of the beam from the obstacle's distance outward)? (I was wrong about the values: black is 0, i.e. free.) 3. Why is the place of the robot (the footprint?) marked as an obstacle? (I was wrong about the values; the footprint is free. But why, when I put the robot in free space, does the costmap show only the footprint as free?) Thanks in advance Originally posted by BP on ROS Answers with karma: 176 on 2014-04-02 Post score: 1 Original comments Comment by Marcus on 2015-02-25: Hey BP, I am trying to get the costmap_2d into any kind of image format: see here It seems like you were able to do this. Could you give me a short explanation? Thanks, Marcus Answer: You have the values backward; in the costmap, 255 is occupied and 0 is free space. Values in between capture the uncertainty about whether a particular cell is occupied or not. It looks like your costmap rendering is flipped vertically compared to the display in rviz. I suspect this is due to differences in how the axes are aligned in rviz vs. images. The black areas you see correspond to 0 values in the costmap; these areas are free space. This includes all of the space between your laser and the object it's seeing (minus inflation). It also includes the footprint of the robot, because in order for the planner to operate properly, the robot cannot be in a start state that is colliding with anything. The white areas that you see in the costmap are either occupied or unknown space. Occupied space is marked explicitly by the laser, and unknown space is space where there are no sensor readings, such as behind the robot and on the far side of obstacles. The obstacle layer in the costmap computes free space by using raytracing to mark the space between the sensor and the obstacle as free. If there are no obstacles in the environment, there's nothing to raytrace to, so all of the space around the robot remains unknown. 
Originally posted by ahendrix with karma: 47576 on 2014-04-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by BP on 2014-04-04: So why, when I put the robot in free space, does the costmap look the way I described? Also, the pictures I posted are of the same place (the costmap is rotated by -90 deg relative to the OccupancyGrid). So why is there such a huge difference? (I thought the costmap should look like the inflated obstacles, but with a bigger range and a gradient.) Comment by BP on 2014-04-04: But I still can't see it on the costmap; it should have about 135 degrees of fully free cells, because there are no obstacles there and there are measurements from the laser scans, yet there is only about a 45 deg area of that. And why, when I put the robot in free space, are there obstacles everywhere? Comment by paulbovbel on 2014-04-04: For reference, http://docs.ros.org/hydro/api/costmap_2d/html/cost__values_8h.html 255 are unknown/no-information cells Comment by BP on 2014-04-04: Thanks. It's a bit strange to me that an area where no obstacles were found is considered unknown, but I think I can live with it Comment by paulbovbel on 2014-04-04: by default, areas are raytraced 'clear' only for the area in between the robot and an obstacle. This leads to the long-hallway problem: if you can't see the end of the room, that entire area remains unknown and you can't navigate through it. Comment by paulbovbel on 2014-04-04: you should check the echo of /scan for areas where there's no obstacle. It should be either inf or maxrange + 1. In hydro, there are parameters which will enable raytracing of free space for those cases. Not sure about prior ROS distributions. 
Comment by paulbovbel on 2014-04-04: you could also enable unknown_space for the navfn global planner, which will let your robot plan paths through unknown space, though this may be unsafe for general use Comment by BP on 2014-04-04: Thanks, that will surely help Comment by BP on 2014-04-05: I set unknown_threshold to something like 10000 and there's no more unknown space
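To make the value convention concrete, the cost constants from costmap_2d's cost_values.h (the header paulbovbel links above) can be folded into a small classifier. This is a plain Python sketch for illustration, not ROS API code:

```python
# Standard cost values from costmap_2d's cost_values.h:
#   0   = FREE_SPACE
#   253 = INSCRIBED_INFLATED_OBSTACLE (robot center here guarantees collision)
#   254 = LETHAL_OBSTACLE (directly marked by a sensor)
#   255 = NO_INFORMATION (unknown space)
def classify_cost(cost):
    """Map a raw costmap byte to a human-readable category."""
    if cost == 0:
        return "free"
    if cost == 255:
        return "unknown"
    if cost == 254:
        return "lethal obstacle"
    if cost == 253:
        return "inscribed obstacle"
    return "inflated (cost %d, decays with distance to obstacle)" % cost

for c in (0, 100, 253, 254, 255):
    print(c, "->", classify_cost(c))
```

With inflation_radius 0.5 m and a 0.05 m resolution, only a band of about ten cells around each obstacle carries the intermediate 1-252 costs, which matches the observation that gradient values are rare in the printed output.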
{ "domain": "robotics.stackexchange", "id": 17511, "tags": "navigation, opencv, costmap, occupancy-grid, costmap-2d-ros" }
Dictionary with ISet key as a collection of names
Question: I'm writing a command-line utility and I need to find commands (and parameters) by name. The name can either be a full name like save or a shortcut s. I thought I'd use a dictionary with an ISet key and a custom comparer. At first I had a list and searched for the name with LINQ, but I'd like to have something more convenient. The performance doesn't matter - this time convenience goes first. There will be at most a few dozen commands. I know I could use a string and map each name to the command but this isn't cool :-) First, there is a NameSet that is the base class for concrete sets. class NameSet : HashSet<string> { protected NameSet(IEnumerable<string> keys, IEqualityComparer<string> keyComparer) : base(keys ?? throw new ArgumentNullException(nameof(keys)), keyComparer) { } } one with the suffix CI which stands for Case Insensitive (like the collation in SQL Server) class NameSetCI : NameSet { private NameSetCI(IEnumerable<string> keys, IEqualityComparer<string> keyComparer) : base(keys, keyComparer) {} public static NameSetCI Create(params string[] keys) => new NameSetCI(keys, StringComparer.OrdinalIgnoreCase); } the other with the suffix CS which obviously stands for Case Sensitive. class NameSetCS : NameSet { private NameSetCS(IEnumerable<string> keys, IEqualityComparer<string> keyComparer) : base(keys, keyComparer) { } public static NameSetCS Create(params string[] keys) => new NameSetCS(keys, StringComparer.Ordinal); } The comparer for this is very simple. It just checks whether the two key sets overlap. internal class SetComparer : IEqualityComparer<ISet<string>> { public bool Equals(ISet<string> x, ISet<string> y) => x.Overlaps(y); public int GetHashCode(ISet<string> obj) => 0; // Force Equals. } With the hash code 0 it doesn't seem to be O(1) anymore, but all the keys are in one place and the logic is just a single Overlaps method. LINQ wouldn't be faster anyway and it would mean a lot more work. 
Example: var dic = new Dictionary<NameSetCI, string>(new SetComparer()); dic.Add(NameSetCI.Create("foo", "bar"), "fb"); dic.Add(NameSetCI.Create("qux"), "q"); dic[NameSetCI.Create("baz")] = "b"; dic[NameSetCI.Create("bar")].Dump(); // fb dic.Add(NameSetCI.Create("foo"), "f"); // bam! Answer: You can just use a single class with a generic Create method which has constraints for IEqualityComparer<string>: internal class NameSetGeneric : HashSet<string> { private NameSetGeneric(IEnumerable<string> keys, IEqualityComparer<string> keyComparer) : base(keys ?? throw new ArgumentNullException(nameof(keys)), keyComparer) { } public static NameSetGeneric Create<T>(T comparer, params string[] keys) where T : IEqualityComparer<string> => new NameSetGeneric(keys, comparer); } You can even go further and make the whole class generic, but that's only if you want to work with different data types. Example usage: var dic = new Dictionary<NameSetGeneric, string>(new SetComparer()); dic.Add(NameSetGeneric.Create(StringComparer.OrdinalIgnoreCase, "foo", "bar"), "fb"); dic.Add(NameSetGeneric.Create(StringComparer.OrdinalIgnoreCase, "qux"), "q"); dic[NameSetGeneric.Create(StringComparer.OrdinalIgnoreCase, "baz")] = "b"; dic[NameSetGeneric.Create(StringComparer.OrdinalIgnoreCase, "bar")].Dump(); //fb dic.Add(NameSetGeneric.Create(StringComparer.OrdinalIgnoreCase, "foo"), "f"); // bam!
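For comparison, the same observable behaviour (several aliases per command, case-insensitive lookup, a loud failure when an alias is reused) can be sketched with the flattened key map the question mentions; Python is used here only to keep the sketch short, and AliasDict and the command strings are hypothetical names:

```python
class AliasDict:
    """Map any of several alias names (case-insensitive) to one value,
    refusing to register an alias that overlaps an existing entry."""
    def __init__(self):
        self._by_alias = {}

    def add(self, aliases, value):
        keys = {a.lower() for a in aliases}
        clash = keys & self._by_alias.keys()
        if clash:
            raise KeyError("alias already registered: " + ", ".join(sorted(clash)))
        for k in keys:
            self._by_alias[k] = value

    def __getitem__(self, alias):
        return self._by_alias[alias.lower()]

cmds = AliasDict()
cmds.add(("save", "s"), "save-command")
cmds.add(("quit", "q", "exit"), "quit-command")
print(cmds["S"])  # save-command
```

Internally this is the "map each name" approach, but the add/lookup semantics match the set-keyed dictionary, including the duplicate-alias "bam!" case.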
{ "domain": "codereview.stackexchange", "id": 24391, "tags": "c#, hash-map, set" }
Is it possible to use air friction to increase the speed of a car?
Question: Is it possible to make a device that, attached to the car, will use the air that hits the car at high speed to give the car a boost? Answer: Friction wants to resist relative movement between the air and the car. If the air is not moving forward relative to the direction your car is moving, friction cannot be used to increase the speed. If the wind were blowing in the same direction you wanted to travel, and faster than you wanted to travel, then yes, friction could speed your car up until it reaches the wind speed. This would basically be using your car as a sail to assist its movement. You won't have much luck using the "hitting" of the air to help power your car, though, if you're traveling against the wind. The air's momentum would be in the opposite direction to the momentum you are trying to generate, so the two do not add constructively but instead oppose each other.
{ "domain": "physics.stackexchange", "id": 45990, "tags": "newtonian-mechanics, friction, aerodynamics, speed" }
Create your own n-qubit quantum gate in Qiskit
Question: I need to create 2-qubit gates that are not supported by Qiskit (e.g. a controlled-F gate). Is there any way to create a class/object and use it like other basic gates? Example: qc = qiskit.QuantumCircuit(2,2) qc.h(0) qc.cf(0, 1) Thanks! Answer: One way to achieve this is as follows: from qiskit import QuantumCircuit def cf(circuit, qubit1, qubit2): # Create a circuit that is equivalent to your gate: qc = QuantumCircuit(2) qc.cx(0, 1) qc.csx(1, 0) qc.cx(0, 1) # Convert the circuit to a gate: sr_swap = qc.to_gate(label = '√SWAP') # Add the gate to the circuit that is passed as the first argument to the cf function: circuit.append(sr_swap, [qubit1, qubit2]) # We need this line to add the method to the QuantumCircuit class: QuantumCircuit.cf = cf Now you can use cf like other QuantumCircuit gates: circ = QuantumCircuit(2) circ.cf(0, 1) circ.draw('mpl')
{ "domain": "quantumcomputing.stackexchange", "id": 3242, "tags": "qiskit, programming, quantum-gate" }
What would be the variance for a complex number?
Question: When $x$ is a zero mean random variable then $$\sum_{n=1}^N x_n x_n^T = N \sigma^2_x\,\text,$$ where the variance is $\sigma^2_x$. I'm considering complex normal distributions where the real and imaginary parts are uncorrelated. https://en.wikipedia.org/wiki/Complex_normal_distribution explains the form of the complex normal distribution. I'm confused because the denominator in the density for the real case has a 2, but for the complex case it is not there. Does this mean that if $x$ is a complex valued random variable, then the variance becomes half, i.e., $$\sum_{n=1}^N x_n x_n^H = N \sigma^2_x/2\,\text,$$ where the variance is $\sigma^2_x/2$, because the variance gets equally distributed between the real and imaginary components? I have this doubt because, when implementing, if I need to generate complex noise of variance 1, I would do (in Matlab) noise = sqrt(1/2) * (randn(N,1) + 1j*randn(N,1)) since each component (real and imaginary) needs to have variance 1/2, such that their sum becomes 1. So the variance $\sigma^2$ is halved mathematically. Is my understanding correct? UPDATE based on valuable information provided in the comments: I am considering the circularly symmetric complex normal distribution, where the real and imaginary parts are completely uncorrelated. Answer: I will focus on the reason for the factor $1/2$ and leave aside the estimation details. The exact understanding should be: if a scalar Gaussian random variable (rv) is circularly symmetric, its real and imaginary parts must be uncorrelated (this is equivalent to independence if they are assumed jointly Gaussian) and identically distributed with zero mean. Thus, your Matlab code is correct for an rv $\sim \mathcal{CN}(0,1)$. The story behind this is that a complex random variable is simply a vector of two real random variables. A vector of $n$ complex rv is indeed a vector of $2n$ real rv. 
You are talking about the case $n=1$: 1 complex Gaussian rv $Z = Z_r + jZ_i$, or a vector of 2 real Gaussian rv $[Z_r,Z_i]^T$. As with a real Gaussian rv, which is described by its variance, a real vector $[Z_r,Z_i]^T$ must be described by its covariance matrix $$\mathbb{E}\left\lbrace [Z_r,Z_i]^T \times [Z_r,Z_i] \right\rbrace = \mathbb{E}\left\lbrace\begin{bmatrix} Z_r^2 & Z_rZ_i \\ Z_iZ_r & Z_i^2 \\ \end{bmatrix}\right\rbrace$$ Take a look at the variance $\mathbb{E}\left\lbrace ZZ^H \right\rbrace = \mathbb{E}\left\lbrace Z_r^2+Z_i^2 \right\rbrace$ and the pseudo-variance $\mathbb{E}\left\lbrace ZZ^T \right\rbrace = \mathbb{E}\left\lbrace Z_r^2-Z_i^2+j2Z_rZ_i \right\rbrace$: the covariance matrix of the real vector (or complex scalar) is fully described by the variance and pseudo-variance together. Therefore, you need both the variance and the pseudo-variance to characterize a complex Gaussian rv (with the prior condition that its real and imaginary parts are jointly Gaussian). Now we use the circular symmetry property: $e^{j\phi}Z$ has the same probability distribution as $Z$ for all real $\phi$. This leads to $\mathbb{E}\left\lbrace e^{j\phi}Z(e^{j\phi}Z)^T \right\rbrace = \mathbb{E}\left\lbrace e^{j2\phi}ZZ^T \right\rbrace = \mathbb{E}\left\lbrace ZZ^T \right\rbrace$ for all $\phi$, thus $\mathbb{E}\left\lbrace ZZ^T \right\rbrace = 0$ and the variance $\mathbb{E}\left\lbrace ZZ^H \right\rbrace$ is a sufficient statistic for $Z$. Note that $\mathbb{E}\left\lbrace ZZ^T \right\rbrace = \mathbb{E}\left\lbrace Z_r^2-Z_i^2+j2Z_rZ_i \right\rbrace = 0 \implies \mathbb{E}\left\lbrace Z_r^2 \right\rbrace = \mathbb{E}\left\lbrace Z_i^2 \right\rbrace$ and $\mathbb{E}\left\lbrace Z_rZ_i \right\rbrace=0$: the real and imaginary parts are uncorrelated, hence independent (because they are jointly Gaussian), with the same variance. This is the reason for the factor $1/2$. To sum up, your code is correct because you are estimating a circularly symmetric complex Gaussian rv. The jointly Gaussian assumption between real and imaginary parts must be used. 
If this is not a circularly symmetric rv (equivalently, a general real random vector with two elements), you must also calculate the pseudo-variance. For more details, and to understand the formula in the Wikipedia article, you can read R. Gallager's note on circularly symmetric Gaussian random vectors.
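A quick numerical sanity check of the factor 1/2, mirroring the Matlab line from the question in plain Python (random.gauss plays the role of randn; the sample size and seed are arbitrary):

```python
import random

random.seed(0)
N = 100_000
# CN(0,1) samples: each real/imag part has variance 1/2,
# matching noise = sqrt(1/2)*(randn(N,1) + 1j*randn(N,1))
scale = 0.5 ** 0.5
z = [complex(random.gauss(0, scale), random.gauss(0, scale)) for _ in range(N)]

var = sum(abs(v) ** 2 for v in z) / N    # estimates E{Z Z^H}, should be ~1
pseudo = sum(v * v for v in z) / N       # estimates E{Z Z^T}, should be ~0
print(round(var, 2), abs(pseudo) < 0.05)
```

The empirical variance comes out near 1 while the pseudo-variance is near 0, exactly the circularly symmetric case discussed above.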
{ "domain": "dsp.stackexchange", "id": 5071, "tags": "estimation, complex" }
How is the distance between the Sun and Earth calculated?
Question: How has the distance between the Sun and Earth been calculated? Also, what is the size of the Sun? Answer: The most precise measures of this distance are from radar in the 1960s. However, the distance has been known, though roughly, since ancient times. Aristarchus of Samos (310BC - 230BC) used the angle between the Earth-Moon axis and the Earth-Sun axis when the Moon is in first quarter (the elongation of the Moon, $E$) and then, with simple trigonometry, could deduce the distances: $$ \cos E = \frac {distance (\text{Earth-Moon})} {distance(\text{Earth-Sun})} $$ Since he had already computed the Earth-Moon distance from the duration of lunar eclipses, he could conclude on the Earth-Sun distance. His results were off, because his measurement of the angle was far too loose, but his method was sound. See Wikipedia for more details. Another method was explored in 1672 by Cassini and Richer: they measured the parallax (i.e. the variation in angle when seen from different places) under which Mars was seen from Cayenne and Paris at the moment of opposition. From this, they deduced the Earth-Mars distance. Then, using Kepler's third law $$\frac{a^3}{p^2}= constant$$ (where $a$ is the distance between the planet and the Sun, and $p$ the sidereal orbital period) they could figure out the distance to the Sun.
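Aristarchus' relation is easy to evaluate numerically. A small sketch, using the modern Earth-Moon distance and the modern elongation value (numbers Aristarchus did not have; his measured 87° shows how sensitive the method is to the angle):

```python
import math

EARTH_MOON_KM = 384_400  # modern mean Earth-Moon distance

def earth_sun_distance(elongation_deg):
    """Aristarchus: cos(E) = d(Earth-Moon) / d(Earth-Sun) at first quarter."""
    return EARTH_MOON_KM / math.cos(math.radians(elongation_deg))

print(earth_sun_distance(87.0))   # Aristarchus' angle: only ~19x the Moon's distance
print(earth_sun_distance(89.85))  # modern angle: ~1.5e8 km, close to 1 AU
```

A 3° error in the angle changes the answer by a factor of about 20, which is exactly why his result was so far off despite the sound geometry.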
{ "domain": "physics.stackexchange", "id": 507, "tags": "astronomy, newtonian-mechanics, astrometrics" }
C program for 'Reverse DNS lookup'
Question: I have written the following code for doing a reverse dns lookup. I'm not sure if there are any errors in it. Please have a look: #include <stdio.h> //for printf() #include <stdlib.h> //for exit() #include <arpa/inet.h> //for inet_pton() #include <netdb.h> // for NI_MAXHOST, getnameinfo() and gai_strerror() #include <errno.h> // for errno #include <string.h> // for strerror() int main(int argc, char** argv) { if(argc<2) { printf("\n%s [IP]\n",argv[0]); printf("For e.g. %s 10.32.129.77\n",argv[0]); exit(-1); } struct sockaddr_in sa; int res = inet_pton(AF_INET, argv[1] , &sa.sin_addr); switch(res) { case 0: printf("\nInput address is not a valid IPv4 address.\n"); case -1: if(res == -1) printf("\nError(%s)\n",strerror(errno)); int n_res = inet_pton(AF_INET6, argv[1] , &sa.sin_addr); switch(n_res) { case 0: printf("\nInput address is not a valid IPv6 address.\n"); case -1: if(n_res == -1) printf("\nError(%s)\n",strerror(errno)); exit(-1); case 1: sa.sin_family = AF_INET6; } case 1: sa.sin_family = AF_INET; } printf("\nsa.sin_addr.s_addr[%d]\n",sa.sin_addr.s_addr); char node[NI_MAXHOST]; memset(node,0,NI_MAXHOST); res = getnameinfo((struct sockaddr*)&sa, sizeof(sa), node, sizeof(node), NULL, 0, 0); if (res) { printf("%s\n", gai_strerror(res)); exit(1); } printf("\nIP[%s]\n",argv[1]); printf("HOSTNAME[%s]\n", node); return 0; } Answer: It's generally best to divide your code into separate sections (possibly separate functions) for argument handling and actual computation. Errors should be printed to standard error rather than standard output. Also, prefer small integer values for exit status (and since we're in main(), we can use simple return rather than exit() - note the useful EXIT_FAILURE macro for the return value). It's worth documenting case fallthroughs. This one is especially suspect: case 1: sa.sin_family = AF_INET6; } case 1: sa.sin_family = AF_INET; } After we store AF_INET6, it's immediately overwritten with AF_INET - is that really intended? 
This fallthrough would be clearer as two independent cases: case 0: printf("\nInput address is not a valid IPv6 address.\n"); /* fallthrough */ case -1: if (n_res == -1) printf("\nError(%s)\n",strerror(errno)); exit(-1); Compare: case 0: fprintf(stderr, "\nInput address is not a valid IPv6 address.\n"); return 1; case -1: fprintf(stderr, "\nError(%s)\n", strerror(errno)); return 1; This line seems to be no use to a user: printf("\nsa.sin_addr.s_addr[%d]\n",sa.sin_addr.s_addr); We check for argv less than 2, but we neither complain about nor use any excess arguments. There's no need to null out the node storage, as getnameinfo() will either fail (in which case we'll never access it) or write a valid string. Minor (grammar): don't use "for" with "e.g." - that reads like, "for for example". When I tried running the program, I found it wouldn't work at all with IPv6 addresses, because sockaddr_in is too small for IPv6 addresses. I had to totally rewrite with a union of address types: #include <arpa/inet.h> //for inet_pton() #include <netdb.h> // for NI_MAXHOST, getnameinfo() and gai_strerror() #include <errno.h> #include <stdio.h> #include <stdlib.h> #include <string.h> static int convert4(struct sockaddr_in *sa, const char *name) { return inet_pton(sa->sin_family = AF_INET, name, &sa->sin_addr); } static int convert6(struct sockaddr_in6 *sa, const char *name) { return inet_pton(sa->sin6_family = AF_INET6, name, &sa->sin6_addr); } int main(int argc, char** argv) { if (argc != 2) { fprintf(stderr, "Usage: %s [IP]\nE.g. 
%s 10.32.129.77\n", argv[0], argv[0]); return EXIT_FAILURE; } union { struct sockaddr sa; struct sockaddr_in s4; struct sockaddr_in6 s6; struct sockaddr_storage ss; } addr; if (convert4(&addr.s4, argv[1]) != 1 && convert6(&addr.s6, argv[1]) != 1) { fprintf(stderr, "%s: not a valid IP address.\n", argv[1]); return EXIT_FAILURE; } char node[NI_MAXHOST]; int res = getnameinfo(&addr.sa, sizeof addr, node, sizeof node, NULL, 0, NI_NAMEREQD); if (res) { fprintf(stderr, "%s: %s\n", argv[1], gai_strerror(res)); return EXIT_FAILURE; } puts(node); }
{ "domain": "codereview.stackexchange", "id": 37143, "tags": "c, linux" }
NLP Feature creation from phrase matching
Question: I'm building a model to classify email content, to decide whether the email should lead to a JIRA ticket being "Raised" or "Not Raised". The problem I am having is that the data is highly imbalanced, with only around 11% classed as "Raised". So far, the Random Forest classifier is providing the highest level of accuracy, but the True Positive Rate/Recall is sitting at around 40% and I can't seem to increase it. I have been provided with a list of phrases such that, should they be contained in the email content, a ticket in all likelihood needs raising. I'm looking for some tips on the best method to create a new feature based on phrase matching. Has anyone any experience of the best methods for doing this? Answer: The problem with imbalance is that the optimizer can get a very good score by declaring everything 'not raised'. You need to cheat with your training data by removing that incentive. I would suggest a training set that is balanced 50/50 between the classes. Your evaluation set can still be representative, which will give you a sense of how it'll generalize.
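A minimal sketch of the phrase-matching feature itself: one binary column per phrase plus an aggregate flag, with word-boundary matching so phrases don't fire inside longer words. The phrase list here is hypothetical; substitute the list you were given:

```python
import re

# Hypothetical phrases; substitute the provided list.
TICKET_PHRASES = ["please raise a ticket", "access denied", "system down"]

def phrase_features(email_text, phrases=TICKET_PHRASES):
    """Binary indicator per phrase, plus an aggregate 'any match' flag."""
    text = email_text.lower()
    hits = {p: bool(re.search(r"\b" + re.escape(p) + r"\b", text))
            for p in phrases}
    hits["any_ticket_phrase"] = any(hits.values())
    return hits

feats = phrase_features("URGENT: the system DOWN again, please help")
print(feats["any_ticket_phrase"])  # True
```

Each indicator can be appended to the existing feature matrix as an extra column; the aggregate flag alone is often enough of a hint for a tree-based model like Random Forest.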
{ "domain": "datascience.stackexchange", "id": 5868, "tags": "python, nlp, feature-construction" }
Direction of Magnetic force from a current running through a coil of wire
Question: What direction do the magnetic force vectors point for a coil of wire that has current running through it? http://www.ndt-ed.org/EducationResources/CommunityCollege/MagParticle/Graphics/coil1.gif The above link is a picture of a wire with current running through it. I see the blue arrows indicating the magnetic field lines, but I am having trouble visualizing the magnetic force lines. Where are they pointing? Please help. Answer: Let me start from your comment on Lubos' answer: If we have an electron near a coil of wire that has current running through it, certainly the electron will move a certain direction right? No, it's not that simple. For a given coil of wire producing a given magnetic field, the electron can experience a force in any direction that is perpendicular to the field. It depends on which way the electron is moving. (The force is always perpendicular to both the field and the electron's velocity.) In fact, if the electron is just sitting at rest, or is moving parallel to the magnetic field, it experiences no force at all. You might be confused because you're thinking of the electrostatic force. That one is always parallel to the electric field; it doesn't matter how the particle is moving, and that's why you can draw electrostatic force lines. But that doesn't work with the magnetic force.
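The answer's point, that the force depends on the electron's velocity and vanishes when the velocity is parallel to the field, follows directly from the Lorentz force $F = q\,v \times B$. A small numeric sketch in arbitrary units:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q v x B."""
    return tuple(q * c for c in cross(v, B))

q = -1.0             # electron charge, arbitrary units
B = (0.0, 0.0, 1.0)  # field along the coil axis (z)

print(lorentz_force(q, (1.0, 0.0, 0.0), B))  # v perpendicular to B: nonzero force
print(lorentz_force(q, (0.0, 0.0, 2.0), B))  # v parallel to B: zero force
print(lorentz_force(q, (0.0, 0.0, 0.0), B))  # electron at rest: zero force
```

The first case gives a force perpendicular to both v and B, while the last two give exactly zero, matching the answer's explanation of why magnetic "force lines" can't be drawn the way electrostatic ones can.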
{ "domain": "physics.stackexchange", "id": 744, "tags": "electromagnetism, vectors, electric-circuits" }
Alcubierre drive and inertia.
Question: What is the inertia or velocity of a vehicle upon exiting or shutting down an Alcubierre bubble? Would the vehicle maintain the velocity it had in the bubble? I'm not sure I asked the question in a meaningful way so I'll try another question that targets what I'm after. If a vehicle was traveling in one of these bubbles how hard would it be to come to a full stop? Answer: All the treatments of the Alcubierre drive I've seen have not dealt with the acceleration and deceleration. The nearest I've seen is the paper The Alcubierre Warp Drive: On the Matter of Matter, but this is mainly interested in the interactions of matter with the drive and it doesn't deal with the mechanism of acceleration. You'd have to specify how the acceleration was achieved. For example it might be achieved by gradually bringing the exotic matter from infinity, or if you were using some sort of field generator as proposed by Harold White by ramping up the field. In either case there's probably no analytic solution to the equation of motion so you'd have to do it numerically. There is probably a way to ramp up the drive that produces no inertial forces on the occupants. They would feel no force as the ship accelerates then decelerates, and with the drive off their original velocity would be unchanged. However I must emphasise that in the absence of any proper treatment of the problem no definitive statement can be made.
{ "domain": "physics.stackexchange", "id": 15711, "tags": "inertia, warp-drives" }
Are children's sparklers based on a magnesium reaction?
Question: We were letting our kids play with sparklers on New Year's Eve, and my friend's son asked his Dad: What would happen if we threw the sparkler into the water? Would it keep burning under water? My friend pondered that for a moment and said it depended on whether the reaction was based on magnesium or not. My question is: Are children's sparklers based on a magnesium reaction? Answer: What would happen if we threw the sparkler into the water? Would it keep burning under water. Most likely not. Water is an effective coolant, so a wet sparkler wouldn't be able to propagate a burn front. Pyrotechnic compositions often degrade severely in humid air (some might self-ignite and some might lose the ability to burn). Are children's sparklers based on a magnesium reaction? The most common sparkler composition I'm aware of is: fine $\ce{Al}$ powder as a fuel; $\ce{KNO3}$ as an oxidizer; dextrin or another combustible binder; coarse iron powder for orange-ish sparks. $\ce{Ti}$, $\ce{Al}$ or $\ce{Sb2S3}$ in coarse powder can be used for producing white sparks. Charcoal powder or potassium polysulfides can produce redder sparks. To my knowledge, there is no way to produce purple, blue or green sparks. The exact composition is a delicate balance between the gases produced (so that sparks are thrown a meaningful distance), the stability and temperature of the burn, the amount and color of sparks produced, safety, and cost. Please note: while a common sparkler cannot burn in water, a big slug with the same composition and a water-proof coating certainly can. But you don't want to be nearby when it happens. Like, at all. An example with a much tamer fuel here: https://www.youtube.com/watch?v=czwBWB5u6Hg Also, magnesium is usually avoided in pyrotechnic compositions in favor of aluminum. Magnesium usually produces a lot of thick, white smoke and is more sensitive when stored in real conditions. If aluminum doesn't burn well enough, a $\ce{Mg/Al}$ alloy might be used.
{ "domain": "chemistry.stackexchange", "id": 9402, "tags": "reaction-mechanism, home-experiment" }
Can a cold gas be considered ideal at a very high speed?
Question: Considering a plane flying in the atmosphere, my book uses the perfect gas law $pV=n\bar RT$. Yet, with the plane itself taken as the reference frame, the air ($T=-50°C$) has a speed of $800 \ \mathrm{km/h}$. I know that at low temperatures the perfect gas law does not work and must be replaced by the Soave-Redlich-Kwong (or just Redlich-Kwong) equation. But here the situation is particular, since the speed is very high. Is there a way that $pV = n\bar R T$ could work in this context, or is my book totally wrong? Answer: According to Moran et al., Fundamentals of Engineering Thermodynamics, the "equivalent" critical properties of air are 133 K and 37.7 bar. The atmospheric temperature is -50 C at about 10 km, and the atmospheric pressure at this altitude is on the order of 0.265 bar. So the reduced temperature is 223/133 = 1.68 and the reduced pressure is 0.265/37.7 = 0.0070. According to the graph in Moran et al. of the real-gas compressibility factor Z as a function of reduced temperature and pressure, at this reduced temperature and pressure the compressibility factor for air is indistinguishable from 1.0. Thus, ideal gas behavior is very closely approximated under these conditions.
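The answer's arithmetic as a two-line check (critical values as quoted from Moran et al.):

```python
# Reduced properties of air at ~10 km altitude, using the
# pseudo-critical values for air quoted from Moran et al.
T_CRIT_K, P_CRIT_BAR = 133.0, 37.7
T_K, P_BAR = 223.0, 0.265  # -50 C and ~0.265 bar at ~10 km

T_reduced = T_K / T_CRIT_K
P_reduced = P_BAR / P_CRIT_BAR
print(round(T_reduced, 2), round(P_reduced, 4))  # 1.68 0.007
```

With T_reduced well above 1 and P_reduced near 0, any generalized compressibility chart places Z at essentially 1.0, regardless of the airspeed.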
{ "domain": "physics.stackexchange", "id": 93104, "tags": "thermodynamics, fluid-dynamics, ideal-gas, atmospheric-science, air" }
What is a good way to generate a list without the None results of a function?
Question: I'm relatively new to Python and learned that a function cannot return nothing; it always returns at least None. In my application I want to iterate over a list and keep all non-None results of a complicated function: def complicated_function(x): if x < 0: return "a" elif x == 3.6: return "b" elif x == 4: return None # Ideally nothing would be returned here elif x > 10: return "c" else: return "d" I have three solutions, and my question is which would you recommend, and are there better ones? First solution tmp = [complicated_function(x) for x in [-1, 2, 3.6, 4, 5, 20]] ret = [x for x in tmp if x] Second solution ret = [complicated_function(x) for x in [-1, 2, 3.6, 4, 5, 20] if complicated_function(x)] Third solution ret = [] for x in [-1, 2, 3.6, 4, 5, 20]: tmp = complicated_function(x) if tmp: ret.append(tmp) I think the first solution is slow if the list to iterate over is very large. The second solution seems bad if complicated_function is really expensive (in runtime), since it is called twice per element. The third solution seems OK, but append is said to be significantly slower than a list comprehension. Answer: You can use an intermediate generator function: [y for y in (complicated_function(x) for x in <my_iterable>) if y is not None] An alternative is changing complicated_function to become a generator that accepts an iterable, instead of a pure function: def complicated_function(iterable): for x in iterable: if x < 0: yield "a" elif x == 3.6: yield "b" elif x == 4: continue elif x > 10: yield "c" else: yield "d" and then: list(complicated_function(<my_iterable>))
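Putting the suggested generator pattern together with the question's own function and sample list (note the explicit is not None test: unlike the truthiness checks in the question's three solutions, it would also keep falsy results such as empty strings):

```python
def complicated_function(x):
    if x < 0:
        return "a"
    elif x == 3.6:
        return "b"
    elif x == 4:
        return None  # a bare 'return' would behave the same
    elif x > 10:
        return "c"
    else:
        return "d"

xs = [-1, 2, 3.6, 4, 5, 20]

# Inner generator: complicated_function runs once per element,
# and no intermediate list is materialized before filtering.
ret = [y for y in (complicated_function(x) for x in xs) if y is not None]
print(ret)  # ['a', 'd', 'b', 'd', 'c']
```

This keeps the single-call property of the third solution and the compactness of the first, without the double evaluation of the second.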
{ "domain": "codereview.stackexchange", "id": 37739, "tags": "python, iteration" }
Isospin and Clebsch-Gordan coefficients
Question: If I have to combine 2 spin-$\frac{1}{2}$ particles, I do it with Clebsch-Gordan coefficients, so that for example $|{\frac{1}{2},\frac{1}{2}}\rangle|{\frac{1}{2},\frac{-1}{2}}\rangle=\frac{1}{\sqrt{2}}(|{1,0}\rangle+|{0,0}\rangle)$. Now, if I have to calculate Clebsch-Gordan coefficients for this new state combined with another state, for example $|{\frac{1}{2},\frac{1}{2}}\rangle$, how do I do it? Answer: I recommend "Lie Algebras in Particle Physics" by Howard Georgi, section 3.5; he gives a very detailed example. The idea is to start with the highest spin state: in your example it's $\big|1,1\big\rangle = \big|\frac{1}{2},\frac{1}{2}\big\rangle\big|\frac{1}{2},\frac{1}{2}\big\rangle$, and then act with the lowering operator $J^-$. For instance, the first application gives \begin{align} J^-\big|1,1\big\rangle &= J^-\left(\big|\frac{1}{2},\frac{1}{2}\big\rangle\big|\frac{1}{2},\frac{1}{2}\big\rangle\right)\\ \big|1,0\big\rangle&=\sqrt{\frac{1}{2}}\left(\big|\frac{1}{2},-\frac{1}{2}\big\rangle\big|\frac{1}{2},\frac{1}{2}\big\rangle+|\frac{1}{2},\frac{1}{2}\big\rangle\big|\frac{1}{2},-\frac{1}{2}\big\rangle\right) \end{align}
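For completeness, applying the lowering operator once more closes the triplet, and each side can be checked term by term with $J^-|j,m\rangle = \sqrt{j(j+1)-m(m-1)}\,|j,m-1\rangle$:

```latex
\begin{align}
J^-\big|1,0\big\rangle &= J^-\,\sqrt{\tfrac{1}{2}}\left(\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\big|\tfrac{1}{2},\tfrac{1}{2}\big\rangle+\big|\tfrac{1}{2},\tfrac{1}{2}\big\rangle\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\right)\\
\sqrt{2}\,\big|1,-1\big\rangle &= \sqrt{\tfrac{1}{2}}\left(\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle+\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\right)
= \sqrt{2}\,\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle
\end{align}
```

so $\big|1,-1\big\rangle = \big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle\big|\tfrac{1}{2},-\tfrac{1}{2}\big\rangle$, and the singlet $\big|0,0\big\rangle$ is fixed (up to phase) by orthogonality to $\big|1,0\big\rangle$. The question's case works the same way: start from the top state $\big|\tfrac{3}{2},\tfrac{3}{2}\big\rangle = \big|1,1\big\rangle\big|\tfrac{1}{2},\tfrac{1}{2}\big\rangle$ and lower repeatedly, with the $\big|\tfrac{1}{2},m\big\rangle$ doublet then obtained by orthogonality.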
{ "domain": "physics.stackexchange", "id": 69308, "tags": "angular-momentum, quantum-spin, isospin-symmetry" }
Mathematically calculate if a Planet is in apparent Retrograde motion
Question: I am trying to find out whether a planet is in apparent retrograde motion with respect to the Earth at any given point in time, given the time in Julian days. I already have the geocentric and heliocentric coordinates and velocities of the planet from the JPL ephemeris, but I'm not sure what mathematical formula to use to identify retrograde motion. I read the article "Mathematically calculate if a Planet is in Retrograde", which had a similar question, but I am not sure of: 1) Is the answer correct, as it has not been marked as correct? 2) How can the Earth be represented in the XY plane only when it has XYZ coordinates, and why do we even need to do that? Can't we just use the heliocentric equatorial plane, which I think is the same as the one used by the NASA JPL ephemeris? Answer: I'm assuming from the upvotes that my answer was correct but incomplete. Working out the equations for orbits that are not in the same plane would have taken far too much time for too little gain. The coordinate system I used was a heliocentric ecliptic coordinate system. This simplifies the comparison of orbits, since most of the planets orbit the Sun very close to the ecliptic plane. Due to Earth's tilt, using an equatorial coordinate system would be more mathematically difficult, since the relative motion of the planets would be three-dimensional. As for the rest of the answer, the idea is to calculate the angle of a ray from Earth through the other planet, as this represents the position of the planet in the sky with respect to the fixed stars. When the movement of this line reverses direction, the other planet is entering or leaving retrograde motion. For the purposes of determining when retrograde motion occurs, motion of the planets perpendicular to the ecliptic plane does not matter. Below are two pictures of time-lapsed retrograde motion of Mars. 
Whether the motion is in a loop or an S-curve, what determines when retrograde motion happens is when the component of the vector joining two planets that is parallel to the ecliptic reverses its direction of rotation. In the pictures below, that corresponds to the points of reversal in the tilted horizontal motion. This is why I ignore the Z-component of the planet's motion. https://apod.nasa.gov/apod/ap100613.html http://apod.nasa.gov/apod/ap160915.html
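A minimal numerical sketch of the test described in the answer, assuming heliocentric-ecliptic position/velocity vectors are already in hand (e.g. from the JPL ephemeris). The function name and the toy circular-orbit numbers in the test below are illustrative assumptions, not from the answer:

```python
def is_retrograde(r_earth, v_earth, r_planet, v_planet):
    """Drop the z-components, form the Earth->planet vector and its time
    derivative, and check the sign of the in-plane cross product.
    d/dt atan2(ry, rx) has the sign of rx*vy - ry*vx, so the apparent motion
    against the fixed stars reverses (retrograde) when that sign is negative.
    Inputs: heliocentric-ecliptic (x, y, z) positions and velocities."""
    rx, ry = r_planet[0] - r_earth[0], r_planet[1] - r_earth[1]
    vx, vy = v_planet[0] - v_earth[0], v_planet[1] - v_earth[1]
    return rx * vy - ry * vx < 0
```

With toy circular-orbit values (AU and AU/yr, both planets on the +x axis, i.e. an outer planet at opposition), the slower outer planet comes out retrograde, as expected for Mars near opposition.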
{ "domain": "physics.stackexchange", "id": 34796, "tags": "planets" }
gnu parallel for macs peak calling
Question: This is the list of files: Control_Input_sorted.bam_rem.bam Control_H2BUb_sorted.bam_rem.bam Control_IgG_sorted.bam_rem.bam PTPN6_g2_6_H2Bub_sorted.bam_rem.bam PTPN6_g2_6_Input_sorted.bam_rem.bam So to get unique sample names I do this: ls -1 *bam | sort | sed -r 's/_sorted.bam_rem.bam//g' | sort | uniq Control_Input Control_H2BUb Control_IgG PTPN6_g2_6_H2Bub PTPN6_g2_6_Input Now I have to run each sample against Input and IgG for peak calling, like Control_H2BUb_sorted.bam_rem.bam against Control_Input_sorted.bam_rem.bam & Control_IgG_sorted.bam_rem.bam, and PTPN6_g2_6_H2Bub_sorted.bam_rem.bam against PTPN6_g2_6_Input_sorted.bam_rem.bam. So how do I parse the names and execute this using GNU parallel? I came across this tutorial where it does something like this: cat sample_names.txt | parallel --max-procs=12 'macs2 callpeak -t {}-A-NC.sorted.bam \ -c {}-G-NC.sorted.bam -g hs -n {}-A-NC-sharp-model -q 0.01 --outdir {}-A-NC-sharp-model-peaks 2> {}-A-NC-sharp-model.stderr' Now I'm not sure how to pass the file names as arguments for the IP and the Input or IgG. If I apply a little logic, I have to fix Control and PTPN6_g2_6 as constant name prefixes! Any suggestion or help would be really appreciated. Answer: Not GNU parallel, but Nextflow. I don't know macs, and what 'first.bam' and 'second.bam' should be is not clear to me, but I hope you'll get the idea.
The nextflow: Channel.fromPath(params.bams).splitCsv(header: false,sep:'\t',strip:true).map{T->file(T[0])}.into{bams1;bams2} boolean isControl(file) { return file.name.endsWith("_Input_sorted.bam_rem.bam") || file.name.endsWith("_IgG_sorted.bam_rem.bam"); } bams1.filter{T->isControl(T)}.set{controls} bams2.filter{T->!isControl(T)}.set{experiment} process mac { maxForks 14 tag "${bam1} vs ${bam2}" input: set bam1 , bam2 from experiment.combine(controls) output: file("output.peaks") into mac_peak script: """ # check syntax please macs2 callpeak -t ${bam1.toRealPath()} -c ${bam2.toRealPath()} -g hs -n out-model -q 0.01 --outdir output.peaks """ } invoke: find /dir1/dir2 -type f -name "*.bam" > input.list nextflow run --bams input.list workflow.nf
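For completeness, here is a GNU parallel sketch along the lines the question asks for: generate one macs2 command per IP/control pair from the fixed sample prefixes, inspect them, then pipe them into parallel. The prefixes and macs2 flags are taken from the question; treat this as an untested template (note the file list spells the mark both "H2BUb" and "H2Bub", so adjust the suffix per sample):

```shell
# Emit one peak-calling command per sample prefix; the IP/control pairing is
# hard-coded because the prefixes ("Control", "PTPN6_g2_6") are fixed.
gen_cmds() {
    for s in Control PTPN6_g2_6; do
        echo "macs2 callpeak -t ${s}_H2Bub_sorted.bam_rem.bam -c ${s}_Input_sorted.bam_rem.bam -g hs -n ${s}_H2Bub -q 0.01 --outdir ${s}_peaks"
    done
}
gen_cmds    # inspect first, then run for real with: gen_cmds | parallel --max-procs=12
```

The same pattern extends to the IgG controls by adding a second echo per prefix inside the loop.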
{ "domain": "bioinformatics.stackexchange", "id": 1299, "tags": "shell, macs2" }
Are dual vectors not intrinsic to the manifold?
Question: I'm saying this based on their transformation. Say, we change the co-ordinate chart of a manifold according to $x'=f(x,y)$, $y'=g(x,y)$. Let $A$ be the Jacobian matrix of this transformation. Vectors transform as: $$v'=A^{-1}v$$ This looks like a passive transformation, as if we're trying to describe some abstract entity, attached to the manifold, after a change of basis. Dual-vectors transform as: $$v'^{*}=Av^{*}$$ This looks like an active transformation, as if dual vectors are entities attached to the co-ordinate system instead of the manifold. For example, if $A$ is a rotation, the dual vectors rotate exactly the same as the co-ordinates. I'm visualising an abstract manifold with vectors attached on it (looking like squishy vomit). Brushing against it is a co-ordinate chart ($R^n$), with dual vectors attached on it. Each abstract point on the manifold is touching a point on $R^n$ according to the chart. When we change the chart, the $R^n$ space transforms, dragging the dual vectors along with it. The manifold stays resting with its vectors. (This visualisation requires a second chart: the chart mapping all the points to the background of the visualisation (say, a computer screen).) Am I wrong? Answer: I think you will be less confused if you consider a map $\phi:M\rightarrow N$ between two distinct manifolds, understand what's going on in that more general context, and then specialize to the case $M=N$. Pick a point $m\in M$ and put $n=\phi(m)$. Let $df$ be a cotangent vector at $n$. Then we get an associated cotangent vector $\phi^*(df) = d(f\circ\phi)$ at $m$. Let $v$ be a tangent vector at $m$ (so that $v$ acts on cotangent vectors). Then we get an associated tangent vector $\phi_*(v)$ at $n$, defined by $$\phi_*(v)(df)=v(\phi^*(df))$$ So the cotangent vectors get pulled back from $N$ to $M$ and the tangent vectors get pushed forward from $M$ to $N$. Again, this can be confusing to think about if you start with the case $M=N$ as you've done.
But once you understand the general case, the special case should be clearer.
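A small numerical check of the defining relation above, $\phi_*(v)(df)=v(\phi^*(df))$, i.e. ⟨φ*(α), v⟩ = ⟨α, φ_*(v)⟩: in coordinates the pushforward acts by the Jacobian J and the pullback by its transpose, so the pairing identity is just (Jᵀα)·v = α·(Jv). The map, point, and vectors below are arbitrary illustrative choices:

```python
def jacobian(phi, p, h=1e-6):
    """Finite-difference Jacobian J[i][j] = d(phi_i)/d(x_j) at point p."""
    n = len(p)
    fp = phi(p)
    J = [[0.0] * n for _ in range(len(fp))]
    for j in range(n):
        up = list(p); up[j] += h
        dn = list(p); dn[j] -= h
        fu, fd = phi(up), phi(dn)
        for i in range(len(fp)):
            J[i][j] = (fu[i] - fd[i]) / (2 * h)
    return J

phi = lambda p: [p[0] ** 2, p[0] * p[1]]   # an arbitrary smooth map R^2 -> R^2
p = [1.0, 2.0]
J = jacobian(phi, p)
v = [3.0, -1.0]                            # tangent vector at p
alpha = [0.5, 2.0]                         # cotangent vector at phi(p)
push_v = [sum(J[i][j] * v[j] for j in range(2)) for i in range(2)]          # phi_*(v)
pull_alpha = [sum(J[i][j] * alpha[i] for i in range(2)) for j in range(2)]  # phi^*(alpha)
lhs = sum(pull_alpha[j] * v[j] for j in range(2))   # <phi^*(alpha), v> at m
rhs = sum(alpha[i] * push_v[i] for i in range(2))   # <alpha, phi_*(v)> at n
```

The two pairings agree (here both equal 13), which is exactly the statement that cotangent vectors travel in the opposite direction to tangent vectors.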
{ "domain": "physics.stackexchange", "id": 84084, "tags": "differential-geometry, vectors" }
Four color theorem and map pre-simplification of faces with less than 5 edges
Question: It is already known that in searching for a solution of the four color problem, regular maps can be pre-simplified by removing all faces with less than four edges. This is described for example in the book "What is Mathematics? An Elementary Approach to Ideas and Methods" about the five color theorem. I believe that all regular maps can be simplified by removing all faces with less than five edges (instead of less than four), without affecting the search and the validity of the four color theorem. This simplification is described here: http://4coloring.wordpress.com/t1/ In this case Euler's identity gets really simplified: F5 = 12 + F7 + 2F8 + 3F9 + ... What is known about this? Has it already been studied before? Answer: The result is known since Kempe in 1879, and is mentioned in http://en.wikipedia.org/wiki/Four_color_theorem as "Kempe also showed correctly that G can have no vertex of degree 4...." Your proof does not work, because when you remove two edges joining face B with A and C, A and C may already have been adjacent. Since you give the combined face a color, then return face B with a new color, you leave A and C adjacent and of the same color.
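For reference, here is one way the simplified identity quoted in the question drops out of Euler's formula; this sketch assumes a regular (cubic) map, i.e. exactly three edges meeting at every vertex:

```latex
% Euler's formula:            V - E + F = 2
% Cubic map (3 edges/vertex): 3V = 2E
% Each edge borders 2 faces:  \sum_k k\,F_k = 2E,  where F_k = \#\{faces with k edges\}
% Eliminating V and E:        6F - 2E = 12, i.e.
\sum_k (6-k)\,F_k = 12
% If all faces with fewer than five edges are removed (F_2 = F_3 = F_4 = 0),
% the k=6 term vanishes and the identity becomes
F_5 = 12 + F_7 + 2F_8 + 3F_9 + \cdots
```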
{ "domain": "cstheory.stackexchange", "id": 774, "tags": "graph-theory, co.combinatorics, graph-algorithms" }
Cell division - meiosis
Question: Really confused. How many chromosome pairs do humans have in their sex cells? How many single chromosomes do humans have in their sex cells? Answer: Terminology is what is holding you back. What "chromosome" implies depends on context, i.e. whether it is replicated or not. In meiosis, gametes arise from germline cells that have 23 PAIRS of chromosomes. That means that while there are 23 different chromosomes, they come in doubles/pairs. This is the typical X-like shape we think about when we hear "chromosomes". Put another way, "chromosome" can mean either 1X or 2 single chromatids. MEIOSIS I -- the 23X chromosome pairs are duplicated and end up as 46X chromosome pairs (46 X's, or 92 separate chromatids). These 46X are divided equally among two daughter cells just like in mitosis, where now each daughter cell receives 23X = 23 pairs. MEIOSIS II -- the daughter cells at the end of Meiosis I divide without DNA replication, and the 23 pairs are split among 2 new cells. So now each new daughter cell (4 in total) has 23 individual chromatids (individual chromosomes), and these are the gametes. When the gametes fuse at fertilization to form a zygote, the zygote ends up with 23 X's, where each X contains 1 chromatid (|) from the father and 1 chromatid (|) from the mother. Haploid means one copy of each chromosome: 23. Diploid means two copies of each chromosome, or 23X. Ref - Allison Q.N. (2015), Freeman Biological Science.
{ "domain": "biology.stackexchange", "id": 5533, "tags": "human-biology, genetics" }
rosbag::MessageInstance derefenced pointer error
Question: I am trying to follow this sample code to pull data from a bag file but am running into a bug that I cannot figure out. Hopefully this is the right forum to ask this in. The C++ API, specifically "Example usage for read:" from the link below. http://wiki.ros.org/rosbag/Code%20API I successfully recreated the write/read bag file examples, but when trying to apply them to my own problem I cannot access the data within the returned pointer of rosbag::MessageInstance.instantiate(). I have a bag file containing topic /robot_mode of type std_msgs::Int8 whose data member I would like to access. However the call to robot_mode_p->data returns strange items. Sample code: BOOST_FOREACH(rosbag::MessageInstance const m, view) { cout << m.getTopic() << endl; if (m.getTopic() == mode_topic ) { std_msgs::Int8::ConstPtr robot_mode_p = m.instantiate<std_msgs::Int8>(); std::string message_def = m.getDataType(); cout << message_def << endl; cout << "robot_mode_p: " << *robot_mode_p << endl; if (robot_mode_p != nullptr) cout << "Dereferenced robot_mode: " << robot_mode_p->data << endl; } } I would expect an output of: /robot_mode std_msgs/Int8 robot_mode_p: data: 24 Dereferenced robot_mode: 24 But my code outputs: /robot_mode std_msgs/Int8 robot_mode_p: data: 24 Dereferenced robot_mode: Any help is much appreciated. Thanks! drive folder for bag file and sample code if interested: https://drive.google.com/drive/folders/1Bj2MONuQREMg74W9DkJqABTe3bQRd3CV?usp=sharing Originally posted by matthewlefort on ROS Answers with karma: 23 on 2019-12-11 Post score: 0 Answer: Wrap the value in a cast cout << "Dereferenced robot_mode: " << static_cast<int>(robot_mode_p->data) << endl; Originally posted by EFernandes with karma: 51 on 2019-12-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by matthewlefort on 2019-12-12: Wow! Thank you for the quick reply and correct answer.
I will take a look at casting and pointers; I would like to understand the why behind this. For those interested: http://www.cplusplus.com/doc/tutorial/typecasting/ seems the static_cast is returning the pointer to its base type (int)? Please correct me if I misunderstand. Comment by EFernandes on 2019-12-12: Glad to help! While I still recommend looking at casting and pointers, the problem is not related to that, but to cout itself. If you tried to print robot_mode_p.data (no pointer) it would still not print correctly without the cast.
{ "domain": "robotics.stackexchange", "id": 34136, "tags": "rosbag, ros-kinetic" }
Is there a polynomial algorithm for optimal sorting on trees?
Question: There's the classical problem of sorting numbers in a list with the restriction that you can only swap two neighbouring numbers. It's easy to see that the optimal number of swaps is achieved by insertion sort or bubble sort. We can generalize this problem by changing the underlying structure from a list to a graph. Instead of swapping neighbouring numbers in a list we would be swapping numbers connected by an edge. Formally let's have a graph G=(V, E) with V = {1, ..., N} and an assignment of values f: V -> {1, ..., N} where f is a bijection/permutation. In order to do a swap we select an edge {u, v} from E and switch f(u) with f(v). Our goal is to sort f (that is, achieve a state where f is the identity) in the least number of swaps. My question is whether there is a (polynomial) algorithm for this. Since working on general graphs seems really difficult from what I've tried, let's restrict ourselves to G being a tree. This should be a simpler question because each number has a clear path to its target. Some observations: When G is a path graph the problem is the same as sorting a list, which is simple. When G is a complete graph the problem is also simple since we only need to decompose each cycle into transpositions. You can actually look at the general problem as decomposing a permutation into as few transpositions as possible with the restriction of which transpositions you are allowed to use. As long as the graph is connected there's a lower bound of "sum of all distances to targets"/2 because each swap decreases the distance to target of at most two numbers. The upper bound is "sum of all distances to targets" because we can select a number that wants to be in a non-articulation vertex (or simply a leaf in trees) and drag it there. Then we can forget about this vertex and reduce the problem to a smaller graph. Answer: This problem is called Token Swapping, and you can find more information if you search for it by name on Google.
I couldn't find a proof that it is NP-complete on trees, but I've found a recent paper that gives a 2-approximation algorithm for trees: http://erikdemaine.org/papers/TokenReconfiguration_FUN2014/paper.pdf
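As a concrete anchor for the path-graph observation in the question, a tiny reference implementation: on a path, the minimum number of adjacent swaps equals the number of inversions, which bubble sort's swap counter computes. This is a sketch for intuition only, not an algorithm for general trees:

```python
def path_swaps(perm):
    """Minimum adjacent swaps to sort a permutation on a path graph.
    Bubble sort never makes a wasted swap here: each swap removes exactly
    one inversion, so its swap count is the optimum."""
    a = list(perm)
    swaps = 0
    for _ in range(len(a)):
        for j in range(len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps
```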
{ "domain": "cs.stackexchange", "id": 10049, "tags": "algorithms, graphs, sorting, trees" }
Problems that don't have polykernel when parametrized by vertex cover
Question: Are there any problems apart from chromatic number, which is $FPT$ when parametrized by (the size of a minimum) vertex cover, and that does not admit a polykernel when parametrized by (the size of a minimum) vertex cover (under standard complexity assumptions)? Answer: Yes, consider the clique problem (W[1]-hard for the parameter solution size). The problem is FPT when parameterized by the vertex cover number: any clique can contain at most one vertex outside the vertex cover, so it is enough to look at only $2^\tau(n-\tau+1)$ possible solutions, where $\tau$ is the vertex cover number. On the other hand, the clique problem has no kernel of polynomial size (unless $\text{NP} \subseteq \text{coNP/poly}$). See e.g. the arXiv version Bodlaender, Hans L., Bart MP Jansen, and Stefan Kratsch. "Cross-composition: A new technique for kernelization lower bounds." arXiv preprint arXiv:1011.4224 (2010). The same is true for weighted feedback vertex set according to the paper.
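A brute-force sketch of the FPT argument in the answer: every clique consists of a subset of the vertex cover plus at most one outside vertex, so enumerating the $2^\tau$ cover subsets, each optionally extended by one outside vertex, finds a maximum clique. The function name and the adjacency-dict representation are my own illustrative choices:

```python
from itertools import chain, combinations

def max_clique_via_vc(adj, cover):
    """adj: dict vertex -> set of neighbours; cover: a vertex cover of the graph.
    Returns a maximum clique as a list of vertices."""
    def is_clique(vs):
        return all(v in adj[u] for u, v in combinations(vs, 2))
    best = []
    outside = [None] + [v for v in adj if v not in cover]
    subsets = chain.from_iterable(
        combinations(sorted(cover), r) for r in range(len(cover) + 1))
    for sub in subsets:
        for w in outside:  # at most one vertex outside the cover
            cand = list(sub) + ([w] if w is not None else [])
            if len(cand) > len(best) and is_clique(cand):
                best = cand
    return best
```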
{ "domain": "cs.stackexchange", "id": 5557, "tags": "algorithms, algorithm-analysis, parameterized-complexity" }
What can happen when 2.3*10^28 positrons collide with 2.3*10^28 electrons?
Question: I'm interested in this question after a writer friend asked me what happens when a human gets bombarded with positrons. Didn't want to post this under scifi because I want more "scientific" answers... Potassium isotopes in the human body produce around 4000 positrons daily. What would happen if, let's say, that number were 100 billion, or infinite? EDIT To ask this in strictly physics terms: What can happen when 2.3*10^28 positrons annihilate with 2.3*10^28 electrons? I'm thinking photons get emitted and nothing else really happens. Answer: The rest mass of an electron is 0.511 MeV. When an electron and a positron annihilate their mass turns to energy (two 0.511 MeV photons) so for each annihilation an energy of 1.022 MeV is released. One electron volt is $1.602 \times 10^{-19}$ joules, so in joules the energy released is $1.637 \times 10^{-13}$ J. You ask what happens if $2.3 \times 10^{28}$ positrons annihilate with $2.3 \times 10^{28}$ electrons. Well, we simply multiply the energy we calculated above by $2.3 \times 10^{28}$ to find the total energy release would be $3.765 \times 10^{15}$ J. Figures like this are hard to put in context, so to get a handle on this figure note that a 1 megaton nuclear explosion produces $4.184 \times 10^{15}$ joules. So the electron-positron annihilation would produce the same energy as a 900 kiloton nuclear bomb, which is about 60 times as powerful as the bomb dropped on Hiroshima.
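The arithmetic in the answer, spelled out step by step (constants: electron rest energy 0.511 MeV, 1 eV = 1.602e-19 J, 1 megaton = 4.184e15 J):

```python
EV_TO_J = 1.602e-19
MEV = 1e6
pairs = 2.3e28

e_per_annihilation_j = 2 * 0.511 * MEV * EV_TO_J   # two 0.511 MeV photons per pair
total_j = pairs * e_per_annihilation_j             # ~3.77e15 J
kilotons = total_j / 4.184e15 * 1000               # ~900 kt TNT equivalent
print(e_per_annihilation_j, total_j, kilotons)
```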
{ "domain": "physics.stackexchange", "id": 19947, "tags": "particle-physics, electricity, electromagnetic-radiation" }
Do seismic travel times from one location to another differ based on factors other than distance?
Question: Bit puzzled why it appears that seismic travel times from one location to another seem to just be a function of the distance, and not any other factors. Do seismic travel times from one location to another differ based on factors other than distance? Answer: Yes The biggest controlling factors are: Distance Velocity Model Composition (though velocity model and composition are related) I believe you understand the first factor, so I will focus on the other two. The most common GLOBAL velocity model is PREM, which is depicted below (Image Source): Velocity models basically assume the internal P and S velocity structure of Earth. If the crust is thinner than average in the area where the source is, the arrival times will be faster than if the crust is thicker than average, though depending on the distance this could be negligible. This is because seismic waves travel faster in the mantle than they do in the crust, so the least-time path tends to involve travel through the mantle. The other controlling factor is composition, or in other words, heterogeneities within the earth. Depending on the composition the wave travels through, arrival times could be slower or faster. For example, imagine a wave from an interplate earthquake traveling through a subducting slab and into a seismometer: the arrival times are slower because the wave speed of lithosphere is slower than the wave speed of mantle. In fact, it's these deviations from our velocity model predictions that help us find heterogeneity in the earth to begin with! This is essentially how we find our natural resources, though instead of using earthquakes as the source we use man-made active seismic waves.
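A toy illustration of the velocity-model effect: the same epicentral distance yields noticeably different arrival times under different assumed structures. The 6 and 8 km/s values are representative crust/mantle P speeds I'm assuming for illustration, not PREM values, and the straight-ray approximation ignores refraction along the real least-time path:

```python
def travel_time_s(distance_km, v_km_s):
    # Straight-ray approximation: time = distance / velocity.
    return distance_km / v_km_s

d = 600.0                          # epicentral distance, km
t_crust = travel_time_s(d, 6.0)    # path entirely in crust-like material
t_mantle = travel_time_s(d, 8.0)   # path mostly in mantle-like material
print(t_crust, t_mantle)           # 100.0 75.0, a 25 s difference
```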
{ "domain": "earthscience.stackexchange", "id": 91, "tags": "geophysics, earthquakes, seismology" }
What percentage of a Proton's mass is potential/kinetic energy?
Question: So in a hydrogen atom, the total mass of the atom is equal to the masses of the proton and the electron, minus their net binding energy of around 13 eV, making the total less massive than the sum of its parts by about 1 part in 100 million. As it also turns out, the electron follows the virial theorem, so that binding energy is actually in the form of -26 eV of electromagnetic potential energy and 13 eV of kinetic energy. In a proton however, the total mass is equal to the masses of the three valence quarks plus the net binding energy, which is not only positive but accounts for 99% of the proton's mass. This is because the quarks can never be in a free state, so while this binding energy is still positive, it is the minimum possible binding energy these quarks can have, and so attempting to dissociate a quark increases the total mass of the system just like in the case of the hydrogen atom. Now onto the question: what fraction of this total net binding energy can be considered as the potential energy of the gluon field between quarks and the gluons themselves, and what percentage can be considered to be in the kinetic energy of the quarks and gluons? The potential energy must still be negative (since it is still a potential well and the strong potential is attractive at nuclear distances), but the virial theorem no longer holds (because the strong potential doesn't follow an inverse square law), so the kinetic energy can no longer be simply negative half of the potential (and that would lead to a negative net binding energy anyway, but we know from above it's positive). The kinetic energy must therefore be some multiple greater than 1 of the potential energy, and I am wondering if anyone has calculated what that multiple is. No doubt some lattice QCD and supercomputer shenanigans are needed to get the result, but surely it has been done?
TL;DR, the ~929 MeV of the proton, ignoring the ~9 MeV of the valence quarks, is some amount of negative potential and positive kinetic energy, say -200 MeV potential and 1129 MeV kinetic, for example. Those numbers are what I'm looking for. (Note, I am aware that in actuality most of this energy will be taking the form of virtual quark-antiquark pairs, but these are in constant flux and so I am merely looking for the semi-classical baseline that these quantum fluctuations float around.) Answer: There are a number of subtleties, because in a field theory you must define what you mean by "potential" and "kinetic" energy, and there are issues with any decomposition that are related to gauge invariance and scale dependence. The issue of the proton mass decomposition has been studied in some detail, and continues to be an active area of research (see, for example, these two papers). The basic idea is the following. Protons are described by QCD, and the QCD energy-momentum tensor is $$ T^{\mu\nu} = \frac{1}{2} \bar\psi i\gamma^{(\mu} \stackrel{\leftrightarrow}{D}\mbox{}^{\nu)}\psi +\frac{1}{4}g^{\mu\nu}F^2 - F^{\mu\alpha}F_{\alpha}^{\nu} $$ where $\psi$ is a quark field, $\stackrel{\leftrightarrow}{D}$ is a symmetrized gauge covariant derivative, $(\mu\nu)$ are symmetrized vector indices, and $F^{\mu\nu}$ is color field strength tensor. The Hamiltonian is $$ H = \int d^3x\, T^{00} $$ and the proton mass is $$ m_p = \langle p(\vec{k}=0) | H | p(\vec{k}=0)\rangle $$ where $| p(\vec{k}=0)\rangle$ is a proton state with zero momentum. I can now ask whether it is possible to decompose $m_p$ into contributions from various terms in the stress tensor, and whether these terms can be independently measured. The answer is "yes", modulo some of the ambiguities mentioned above (the total mass of the proton is certainly well defined, but there may be ambiguities in individual terms that cancel in $m_p$). 
Experimental input on individual terms comes from deep inelastic scattering (which measures trace-free matrix elements of the quark and gluon energy-momentum tensor), and pion-nucleon scattering (which measures quark mass contributions). Individual matrix elements can also be determined in lattice QCD. There are many details, discussed in the papers cited above. A typical result is shown in this figure (taken from a report on a future electron ion collider), which shows fractional contributions from quarks and gluons (with quark mass contributions separated out). Postscript: Note that the proton is very different from a typical non-relativistic bound state. A non-relativistic bound state is made from some constituents with total rest mass energy $E_0=m_1c^2+\ldots + m_Nc^2$, and the statement that there is a bound state implies that $E=E_0-B$, where $B$ is a (positive) binding energy. States of this type exist in the heavy quark sector of QCD (at least approximately), and are known as charmonium, bottomonium, etc. However, the proton is not like that. It is made of approximately massless quarks and exactly massless gluons, but the proton mass is large, $m_p >> 2m_u+m_d$. There is no binding energy; the mass of the proton is positive energy of quark and gluon fields. Also, the proton cannot be ionized into quark constituents. If I excite the proton, all I can do is generate additional quark-anti-quark pairs. Second postscript: If you are really interested in an estimate, there are models of the nucleon (not quite QCD, but QCD inspired) in which one can talk about binding energy. For example, in the constituent quark model, quarks acquire an effective mass $m_Q\sim 400$ MeV from chiral symmetry breaking. Constituent quarks interact (by one gluon exchange, string potentials, or instanton-induced forces) to make the proton, with a binding energy $3\times 400 - 935 \sim 265$ MeV.
Also, one can attempt to take the gluon field energy and split it into an electric part ("kinetic energy") and a magnetic part ("potential energy"). This separation depends on the scale, but the magnetic part does indeed come out negative. A simple analysis (using numbers from the paper cited above) gives $$ \frac{1}{2}\langle p| E^2 |p\rangle \simeq 850\, {\rm MeV}\quad\quad \frac{1}{2}\langle p| B^2 |p\rangle \simeq -525\, {\rm MeV} $$ A similar separation cannot be done for quarks, because there is no gauge invariant way to decompose the covariant derivative $\bar\psi \gamma\cdot D\psi$ into a kinetic and an interaction term.
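The back-of-envelope numbers from the two postscripts, collected in one place (all in MeV; these are the answer's illustrative estimates, not lattice results):

```python
m_constituent = 400.0   # constituent quark effective mass from chiral symmetry breaking
m_p = 935.0             # proton mass used in the answer's estimate
binding = 3 * m_constituent - m_p   # constituent-quark-model "binding energy"
half_E2 = 850.0                     # (1/2)<p|E^2|p>, electric ("kinetic") part
half_B2 = -525.0                    # (1/2)<p|B^2|p>, magnetic ("potential") part
print(binding, half_E2 + half_B2)   # 265.0 325.0
```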
{ "domain": "physics.stackexchange", "id": 83709, "tags": "mass-energy, quantum-chromodynamics, protons, binding-energy, strong-force" }
Views Counter made in Python, Gevent and MongoDB
Question: I've created a Views Counter in Python, Gevent and MongoDB (Flask is also included in the full stack as you can see from the context issue in the code). My gut still tells me that it can still be somehow improved, though. What the code does is initialize a dict "buffer" (just a shelve persisted in memory [no writeback]) and define a function in which Mongo's bulk_op is initialized; a list comprehension iterates over the view_count buffer and sets individual find-updates with their respective key values, and the whole thing is finished off by an execute with write concern disabled. Then there's the run_buffer_op function that runs an infinite loop, which checks if the buffer has over 5000 items and in that case flushes its content to the database (by executing the former function); otherwise it just waits 15 minutes before flushing. This function is finally run (or better, spawned) by Gevent. Do you see any possible further improvements? ''' Views counter buffer ==================== ''' # Initialize vc buffer (ObjectId + Views_count pair). We don't need **writeback** as it should just be persisted in memory view_count=shelve.open('view_count', writeback=False) def flush_to_db(): # Apparently you have to set the context to make the db bulk operation work, otherwise it'll return a *Working outside of context* error :( with app.app_context(): bulk = mongo.db.test.initialize_unordered_bulk_op() # What this list comprehension does is iterating over the *view_count* buffer # and set individual find-updates for them with their respective key value. # Like: bulk.find({'_id': '7rhf3d32dh23jd78988ej8'}).update({'$set':{'count':2}}) # bulk.find({'_id': '7rhf3d32dh23dg48988ej8'}).update({'$set':{'count':10}}) # bulk.find({'_id': '7rhf3d32th23dg48988ek9'}).update({'$set':{'count':7}}) # ... [bulk.find({ '_id': k }).update({'$set':{'count': v }}) for k,v in view_count.iteritems()] # Execute the bulk update with no **write concern**.
# We can afford to lose some view counts if that ever happens, in exchange for better performance. bulk.execute(write_concern=None) def run_buffer_op(): # Run the loop infinitely while True: # If buffer has more than 5000 items flush it now to the db if len(view_count) > 5000: flush_to_db() # Else just wait 15 minutes gevent.sleep(900) flush_to_db() # Spawn the loop gevent.spawn(run_buffer_op) Answer: Instead of using a list comprehension, I would simply use a for-loop. It will do the exact same thing and save having to allocate a temporary list. Also, in run_buffer_op, you first check the number of views. If that value isn't >5000 (almost a DBZ joke) then you sleep for 15min. Once it wakes up, you IMMEDIATELY flush_to_db no matter what. Based on your comments, it seems you only want to flush_to_db if there are >5000 views. Thus, the flush_to_db call immediately after the gevent.sleep(900) seems a little redundant.
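The answer's first suggestion as code: replace the side-effecting list comprehension with a plain for-loop. The sketch below passes the buffer and bulk object as parameters for testability and uses Python 3's items() (the original is Python 2 and would use iteritems()); it is a hedged restructuring, not the author's exact code:

```python
def flush_to_db(view_count, bulk):
    # Same work as the list comprehension, without allocating a throwaway
    # list of update-builder return values.
    for object_id, count in view_count.items():
        bulk.find({'_id': object_id}).update({'$set': {'count': count}})
    bulk.execute(write_concern=None)
```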
{ "domain": "codereview.stackexchange", "id": 7726, "tags": "python, asynchronous, mongodb, pymongo" }
Simplify the definition of substitution in Lambda calculus
Question: Substitution in untyped Lambda calculus is complicated by variable capture. Can this boring technical complication be entirely avoided by some restriction on the standard formation rules? Something that prevents the dangerous symbol duplication. If so, how? Otherwise why not? Alternatively, can the complication be avoided by uniformly alpha-converting every Lambda abstraction of the final composite, to ensure that every binding/bound variable symbol only appears locally (and therefore never as a free variable in the argument of an application term). If so, it seems that the substitution rules could be simplified. Answer: Yes, there are several techniques, such as de Bruijn indices and explicit substitutions. Actual implementations, at least those that actually have to work efficiently, use such techniques and never implement substitution by renaming variables.
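One concrete way to see the first technique the answer names: with de Bruijn indices there are no variable names at all, so capture cannot happen, and substitution reduces to index shifting. A minimal sketch with tuple-encoded terms (my own toy encoding, for illustration):

```python
# Terms: ("var", k) | ("lam", body) | ("app", f, a), with de Bruijn indices.

def shift(t, d, cutoff=0):
    """Add d to every free variable index (those >= cutoff)."""
    tag = t[0]
    if tag == "var":
        k = t[1]
        return ("var", k + d) if k >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace variable j in t by term s, shifting s under each binder."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """One beta step on ("app", ("lam", body), arg)."""
    _, (_, body), arg = app
    return shift(subst(body, 0, shift(arg, 1)), -1)
```

For example, reducing (λ.λ.1) y yields λ.y with y's index bumped under the binder, which is exactly the renaming that name-based substitution has to simulate with alpha-conversion.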
{ "domain": "cs.stackexchange", "id": 17646, "tags": "lambda-calculus" }
Is a body in uniform circular motion in equilibrium?
Question: I know this question has been asked a myriad of different times, but nowhere can I seem to find a definitive, final resolution to it. Is a body in uniform circular motion in equilibrium? Answer: After searching for some of the keywords, it seems like this issue of an object in uniform circular motion being in "dynamical equilibrium" arose from a book (or multiple books using the same original source) for ICSE prep. See the third example in this text segment posted on this site. But a similar book seems to have eliminated reference to the circular motion examples now; see the pdf linked here with this updated section. Anyway, as others have noted, there is no good definition of "dynamical equilibrium" that can include an object in uniform circular motion. An object in equilibrium experiences zero net force (and thus, by Newton II, has zero acceleration). An object in circular motion has, by definition, an acceleration. One could say that an object in "dynamical equilibrium" is one for which the sum of the forces is zero and it moves with constant velocity with respect to its surroundings. But that's really true for any object in equilibrium, depending on your definition of surroundings and your choice of inertial frame. It could be that this arose because some introduce a centrifugal force to explain objects in circular motion, essentially moving $m\vec{a}$ over to the other side of Newton's Second Law. But that is not a good way to think of it.
{ "domain": "physics.stackexchange", "id": 99919, "tags": "newtonian-mechanics, kinematics, equilibrium" }
Trace as integral
Question: Consider a system of two entangled harmonic oscillators. The normalised ground state is denoted by $\psi_0(x_1,x_2)$. I've been taught that a density matrix is constructed as $\rho = \left|\psi\rangle\langle\psi\right|$, so in this basis: $$\rho = \psi_0(x_1,x_2) \psi_0^*(x_1',x_2') \left|x_1,x_2\rangle\langle x_1',x_2'\right|$$ The reduced density matrix of the second oscillator is then: $$\rho_2(x_2,x_2') = \psi_0(x_1,x_2) \psi_0^*(x_1,x_2') \left|x_2\rangle\langle x_2'\right|$$ However, in a paper I've come across this reduced density matrix is written: $$\rho_2(x_2,x_2') = \int_{-\infty}^{\infty} dx_1 \psi_0(x_1,x_2) \psi_0^*(x_1,x_2')$$ I'm not familiar with the Trace as an integral, though I can sort of see how it would work for continuous variables. Clearly there's also a difference in notation, since this last formula has no bras or kets. I was wondering if someone could explain the difference, or maybe give some sort of overview. Answer: If the eigenvalues form a continuous spectrum, like the eigenvalues of $x$, then states must be normalized to a Dirac delta, $$ \left\langle x \right| x' \rangle = \delta(x-x') $$ The trace of an operator is the sum of the diagonal elements, or if the basis is continuous, it becomes an integral \begin{eqnarray*} \mathrm{Tr}\left(\left|\phi\right\rangle\left\langle \psi \right|\right) &=& \int_{-\infty}^{\infty}\mathrm{d}q\,\left\langle q\right|\phi\rangle\left\langle \psi \right|q\rangle\\ &=& \int_{-\infty}^{\infty}\mathrm{d}q\,\phi(x)\psi^*(x') \left\langle q\left|x\left\rangle\right\langle x'\right|q\right\rangle\\ &=& \int_{-\infty}^{\infty}\mathrm{d}q\,\phi(x)\psi^*(x') \delta(q-x)\delta(q-x') \end{eqnarray*} This last line is only nonzero if $x=x'=q$, so you can choose whichever label you like for the integration variable.
$$\mathrm{Tr}\left(\left|\phi\right\rangle\left\langle \psi \right|\right) = \int_{-\infty}^{\infty}\mathrm{d}x\,\phi(x)\psi^*(x)$$ As for the difference between the kets and no kets, if the kets are there, it is the operator, ready to act on some vector. If the kets are not there, you have just the $x_1,x_2$ matrix element. It's technically not the operator, but since the elements are just the values of some function of $x_1,x_2$, writing the element this way tells you all of the information you need. For example, you could write the 2x2 identity matrix as $$\mathbb{I} = \pmatrix{1&0\\0&0} + \pmatrix{0&0\\0&1}$$ You could just as easily say $$\mathbb{I}_{i,j} = \begin{cases}1& \mathrm{if}\;i=j\\ 0 & \mathrm{otherwise}\end{cases}$$ and convey the same information.
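To make the integral-as-trace concrete, a numerical sketch: on a grid, the reduced density matrix of the second oscillator is a sum over the first coordinate (the $x_1$ integral), and its trace is the $x_2$ integral of the diagonal. The correlated Gaussian below is an arbitrary stand-in for the actual entangled ground state, chosen only so the numbers work out:

```python
import math

# Discretize: rho_2(x2, x2') = \int dx1 psi(x1, x2) psi*(x1, x2') becomes a
# sum over the x1 grid times dx.
N = 81
xs = [-4.0 + 8.0 * i / (N - 1) for i in range(N)]
dx = xs[1] - xs[0]
# psi[i][j] = psi(x1_i, x2_j): a correlated (entangled) Gaussian, then normalized.
psi = [[math.exp(-(a * a + b * b + a * b) / 2.0) for b in xs] for a in xs]
norm = math.sqrt(sum(p * p for row in psi for p in row) * dx * dx)
psi = [[p / norm for p in row] for row in psi]

# Diagonal of rho_2, then its trace; the trace is itself an integral: sum * dx.
rho2_diag = [sum(psi[i][a] * psi[i][a] for i in range(N)) * dx for a in range(N)]
trace = sum(rho2_diag) * dx
print(trace)   # 1.0 (the normalization), up to floating-point error
```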
{ "domain": "physics.stackexchange", "id": 13402, "tags": "quantum-entanglement, harmonic-oscillator, density-operator, coupled-oscillators" }
Kinetic Energy with constant velocity
Question: I am sorry for the question, but I am a noob in Physics. I don't understand why distance doesn't appear in the kinetic energy formula. Kinetic Energy formula: $$K = \frac{1}{2}m v^2 = n J$$ For example: if I run $5$ miles at $3m/s$, I spend the same calories or joules ($J$) as if I run for just a few meters at the same velocity. It is not like that in real life. I don't understand why. Any hints? Answer: The kinetic energy is only part of the picture. If you were gliding in a perfect vacuum, your assessment would be right: it'd take the same amount of energy to travel different distances at the same speed (and it'd just take longer). (Although even here, your body wouldn't be perfectly efficient in converting its chemical potential energy into kinetic energy, so the total energy spent would be greater than the kinetic energy gained; the rest of the energy would go to heat.) Running here on Earth, you have loss mechanisms such as friction and air resistance. In addition, at a run each stride is like a small jump, so a lot of energy is probably spent fighting gravity. Running for a longer time means you need to counteract these forces for longer, which takes more energy. To be really realistic you'd need to take into account how these forces vary with your speed, and take into account your biological efficiency at different speeds, etc. Intuitively, this is also why bikes take much less energy to go farther: you're minimizing several of the loss mechanisms. As you can probably tell, a proper calculation of all these effects is very difficult. Physics at this level is just trying to give you a baseline to begin these calculations and give you some intuition for the underlying principles.
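The distinction can be put in rough numbers: kinetic energy is a one-off cost set by the speed, while drag losses keep growing with distance. A back-of-envelope sketch, with every parameter value below an illustrative assumption rather than a measurement:

```python
# Kinetic energy is paid once when accelerating; drag work grows with distance.
# All numbers below are illustrative guesses, not measured values.
m = 70.0        # runner's mass, kg (assumed)
v = 3.0         # running speed, m/s
rho = 1.2       # air density, kg/m^3
cd_area = 0.5   # drag coefficient times frontal area, m^2 (rough guess)

kinetic_energy = 0.5 * m * v**2              # J, independent of distance
drag_force = 0.5 * rho * cd_area * v**2      # N, constant at constant speed

for distance in (10.0, 8000.0):              # a few metres vs roughly 5 miles
    drag_work = drag_force * distance        # J, grows linearly with distance
    print(f"{distance:6.0f} m: KE = {kinetic_energy:.0f} J, "
          f"drag loss = {drag_work:.0f} J")
```

With these guesses the one-off kinetic energy is about 315 J, while drag alone over 5 miles costs tens of kilojoules, before even counting the stride "jumps" and biological inefficiency the answer mentions.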
{ "domain": "physics.stackexchange", "id": 72505, "tags": "newtonian-mechanics, energy, kinematics, velocity, dissipation" }
watchdog for c++11
Question: I created a watchdog/notifier in C++ and wanted to make it better so it could be used by multiple people. The idea is that there is a timer class and an event class. Clients create events and pass a lambda function, a timeout in seconds, and a repeat mode, i.e. whether to keep the event active after a timeout or deactivate it once the timeout is reached (the default mode). The event can be activated and deactivated by the client on demand, or this can be done inside the lambda function. timer.hpp #pragma once #include <functional> #include <chrono> #include <vector> #include <utility> #include <set> #include <stack> #include <thread> #include <mutex> #include <condition_variable> #include <algorithm> struct Event { unsigned int id {0}; std::chrono::time_point<std::chrono::steady_clock> startTimepoint; std::chrono::seconds timeout {std::chrono::seconds::zero()}; std::function<void(unsigned int id)> function {nullptr}; // handler bool isRepeated {false}; bool isActive {false}; bool isExecuted {false}; Event(unsigned int p_id, std::chrono::seconds p_timeout, std::function<void(unsigned int p_id)>&& p_function, bool p_isRepeated) : id(p_id), timeout(p_timeout), function(p_function), isRepeated(p_isRepeated) { } }; struct TimeEvent { std::chrono::time_point<std::chrono::steady_clock> nextTimepoint; unsigned int eventID; }; inline bool operator<(const TimeEvent& l, const TimeEvent& r) { return l.nextTimepoint < r.nextTimepoint; } class Timer { public: Timer(); virtual ~Timer(); unsigned int RegisterEvent(std::function<void(unsigned int p_id)>&& p_function, std::chrono::seconds timeout = std::chrono::seconds::zero(), bool isRepeated = false); bool RemoveEvent(unsigned int id); bool ActivateEvent(unsigned int id, int timeout = 0); bool IsActivated(unsigned int id) const; bool DeactiveEvent(unsigned int id); bool DeactivateAllEvent(); bool IsExecuted(unsigned int id) const; private: void Run(); std::mutex m_mutex {}; std::condition_variable m_condition {}; std::thread m_workerThread {}; bool
m_isTimerActive {true}; std::vector<Event> m_eventsList {}; std::set<TimeEvent> m_activeTimeEventSet {}; //Auto ordered due to set std::stack<unsigned int> m_freeEventIds {}; }; timer.cpp #include "timer.hpp" void Timer::Run() { std::unique_lock<std::mutex> lock(m_mutex); while (m_isTimerActive) { if (m_activeTimeEventSet.empty()) { m_condition.wait(lock); } else { TimeEvent te = *m_activeTimeEventSet.begin(); if (std::chrono::steady_clock::now() >= te.nextTimepoint) { m_activeTimeEventSet.erase(m_activeTimeEventSet.begin()); lock.unlock(); m_eventsList[te.eventID].function(te.eventID); lock.lock(); m_eventsList[te.eventID].isExecuted = true; if (m_eventsList[te.eventID].isActive && m_eventsList[te.eventID].isRepeated) { te.nextTimepoint += std::chrono::duration_cast<std::chrono::seconds>(m_eventsList[te.eventID].timeout); m_activeTimeEventSet.insert(te); } else { m_eventsList[te.eventID].isActive = false; } } else { m_condition.wait_until(lock, te.nextTimepoint); } } } } Timer::Timer() { std::unique_lock<std::mutex> lock(m_mutex); m_workerThread = std::thread([this] { Run(); }); } Timer::~Timer() { std::unique_lock<std::mutex> lock(m_mutex); lock.unlock(); m_isTimerActive = false; m_condition.notify_all(); m_workerThread.join(); m_eventsList.clear(); m_activeTimeEventSet.clear(); while (!m_freeEventIds.empty()) { m_freeEventIds.pop(); } } unsigned int Timer::RegisterEvent(std::function<void(unsigned int p_id)>&& p_function, std::chrono::seconds timeout, bool isRepeated) { unsigned int id; std::unique_lock<std::mutex> lock(m_mutex); if (m_freeEventIds.empty()) { id = m_eventsList.size(); Event e(id, timeout, std::move(p_function), isRepeated); m_eventsList.push_back(std::move(e)); } else { id = m_freeEventIds.top(); Event e(id, timeout, std::move(p_function), isRepeated); m_freeEventIds.pop(); m_eventsList[id] = std::move(e); } lock.unlock(); m_condition.notify_all(); return id; } bool Timer::ActivateEvent(unsigned int id, int timeout) { 
std::unique_lock<std::mutex> lock(m_mutex); if (m_eventsList.size() == 0 || m_eventsList.size() < id) { return false; } if(timeout) { m_eventsList[id].timeout = std::chrono::seconds(timeout); } if (m_eventsList[id].timeout > std::chrono::seconds::zero()) { m_eventsList[id].isActive = true; m_eventsList[id].isExecuted = false; m_eventsList[id].startTimepoint = std::chrono::steady_clock::now(); auto it = std::find_if(m_activeTimeEventSet.begin(), m_activeTimeEventSet.end(), [&](const TimeEvent & te) { return te.eventID == id; }); if (it != m_activeTimeEventSet.end()) { m_activeTimeEventSet.erase(it); } m_activeTimeEventSet.insert(TimeEvent {m_eventsList[id].startTimepoint + std::chrono::duration_cast<std::chrono::seconds>(m_eventsList[id].timeout), id }); } lock.unlock(); m_condition.notify_all(); return true; } bool Timer::IsActivated(unsigned int id) const { return m_eventsList[id].isActive; } bool Timer::DeactiveEvent(unsigned int id) { std::unique_lock<std::mutex> lock(m_mutex); if (m_eventsList.size() == 0 || m_eventsList.size() < id) { return false; } m_eventsList[id].isActive = false; auto it = std::find_if(m_activeTimeEventSet.begin(), m_activeTimeEventSet.end(), [&](const TimeEvent & te) { return te.eventID == id; }); if (it != m_activeTimeEventSet.end()) { m_activeTimeEventSet.erase(it); } lock.unlock(); m_condition.notify_all(); return true; } bool Timer::RemoveEvent(unsigned int id) { std::unique_lock<std::mutex> lock(m_mutex); if (m_eventsList.size() == 0 || m_eventsList.size() < id) { return false; } m_eventsList[id].isActive = false; auto it = std::find_if(m_activeTimeEventSet.begin(), m_activeTimeEventSet.end(), [&](const TimeEvent & te) { return te.eventID == id; }); if (it != m_activeTimeEventSet.end()) { m_freeEventIds.push(it->eventID); m_activeTimeEventSet.erase(it); // Note: Do not erase from eventsList, else the other ids becomes invalid } lock.unlock(); m_condition.notify_all(); return true; } bool Timer::IsExecuted(unsigned int id) const { 
return m_eventsList[id].isExecuted; } bool Timer::DeactivateAllEvent() { std::unique_lock<std::mutex> lock(m_mutex); if (m_eventsList.size() == 0) { return true; } for (unsigned int i = 0; i < m_eventsList.size(); ++i) { m_eventsList[i].isActive = false; m_eventsList[i].isExecuted = false; m_freeEventIds.push(m_eventsList[i].id); } m_activeTimeEventSet.erase(m_activeTimeEventSet.begin(), m_activeTimeEventSet.end()); lock.unlock(); m_condition.notify_all(); return true; } client.cpp #include "timer.hpp" #include <iostream> #include <chrono> #include <thread> using namespace std; int quickFunc(bool &done) { std::this_thread::sleep_for(std::chrono::seconds(10)); std::cout << "MAIN: quickFunc() executed" << std::endl; done = true; } int slowFunc(bool &done) { for(size_t i = 0; i < 10; ++i) { std::this_thread::sleep_for(std::chrono::seconds(1)); std::cout << "MAIN: slowFunc() executing" << std::endl; } std::cout << "MAIN: slowFunc() executed" << std::endl; done = true; } int main() { Timer* timer = new Timer(); bool quickFuncDone = false; bool slowFuncDone = false; unsigned int id1 = timer->RegisterEvent([&](unsigned int) { if(quickFuncDone == false) { std::cout << "MAIN: quickFunc() not executed, will not observe" << std::endl; } }, std::chrono::seconds(2)); timer->ActivateEvent(id1); quickFunc(quickFuncDone); unsigned int id2 = timer->RegisterEvent([&](unsigned int) { if(slowFuncDone == false) { std::cout << "WATCHDOG: slowFunc() is not yet completed, continue observing" << std::endl; } else { std::cout << "WATCHDOG: slowFunc() is completed, deactivating myself" << std::endl; timer->DeactiveEvent(id2); } }, std::chrono::seconds(2), true); timer->ActivateEvent(id2); slowFunc(slowFuncDone); std::this_thread::sleep_for(std::chrono::seconds(30)); delete timer; return 0; } Looking forward to some good edits and suggestions so it becomes generic and can be released to the public for wider use. 
Answer: Move Event and TimeEvent inside class Timer These types are just implementation details of your Timer class, and are not part of the public API. So by moving them into class Timer, you avoid polluting the global namespace. It's simply: class Timer { struct Event { ... }; struct TimeEvent { ... }; ... }; One issue is the operator<() for TimeEvents. You can't move that into class Timer like it is, because then it might think it's overloading Timer's own comparison operator. You can either make it friend, or just move it into struct TimeEvent; there is no reason here why it should be a free function. Consider creating an alias for the clock You are using std::chrono::steady_clock in a few places. It's the right clock to use, but you can avoid typing that long name and make it easier to switch to another clock later by creating an alias for it: class Timer { using clock = std::chrono::steady_clock; struct Event { ... clock::time_point startTimepoint; ... }; ... }; Simplify initializing variables to zero You are explicitly writing out the zero value for each variable you initialize, like in this line: std::chrono::seconds timeout {std::chrono::seconds::zero()}; But you can use the empty brace syntax to do this: std::chrono::seconds timeout {}; The same goes for 0, false, nullptr and so on. The benefit is that if you ever change the type of something, you don't need to change the value you initialized it with (unless it was a non-zero value of course). Some types like std::mutex, as well as most containers, don't need to be explicitly "zeroed" at all, so these don't even need the {}. No need to explicitly clear containers in the destructor In Timer::~Timer(), you explicitly clear all the containers, but the destructor of those containers will be called automatically after your destructor, and those will take care of clearing themselves. 
Ensure you handle events with identical expiration times What if, at some point, two events have the same value for nextTimepoint? The problem is that the second insertion will silently fail when trying to add them both to a std::set. Either ensure you disambiguate between two events in the comparison function (for example, check first if the timepoints are equal, and if so compare eventID instead), or use std::multiset. Remove useless member variables There are some member variables that are not used at all, or don't seem to have a very useful purpose: Event::startTimepoint is set at construction time but never read from. Event::isExecuted is not used by the client code. Is it even necessary? If some flag is necessary, the callback function could set it itself. Also, it is reset in Timer::DeactivateAllEvent(), which is confusing to me: why would deactivating an event that has executed before reset that flag? Event::isActive is unnecessary; why not just call RemoveEvent() and add it back with RegisterEvent(), or alternatively deactivate it by setting the timeout to effectively infinity? Removing unnecessary member variables keeps structs and classes small, which is especially important if you need lots of them. Once you remove the support for registered but inactive events, you could also consider getting rid of Timer::m_eventsList and moving nextTimepoint from TimeEvent into Event, so you just have a set of Events sorted on next expiration time. ID bookkeeping There are some issues with having an integer ID for events. You need some way to do the bookkeeping of which IDs are free. If you create a thousand events and then unregister them all, then you suddenly have Timer::m_freeEventIds containing a thousand integers. Also, despite looking up an event by ID in m_eventsList being \$\mathcal O(1)\$, you still need to use std::find_if() to find the corresponding element in m_activeTimeEventSet when deregistering an event. This is an \$\mathcal O(N)\$ operation.
If you could use C++17, I would just store the Events this way: struct EventCmp { bool operator()(const std::unique_ptr<Event> &a, const std::unique_ptr<Event> &b) const { if (a->nextTimepoint == b->nextTimepoint) return a.get() < b.get(); // disambiguate equal expiration times else return a->nextTimepoint < b->nextTimepoint; } }; std::set<std::unique_ptr<Event>, EventCmp> m_events; And then use the raw pointer as the ID. So to register an event: Event *Timer::RegisterEvent(...) { // Allocate the event auto event = std::make_unique<Event>(timeout, p_function, ...); Event *id = event.get(); // keep a raw pointer before moving // Move the event into the set of events std::unique_lock<std::mutex> lock(m_mutex); m_events.insert(std::move(event)); // Return a pointer to the event return id; } You could make this safer by wrapping the pointer in a class, possibly named EventHandle, to prevent the caller from modifying the event. When you run the event loop, you move the Event out of the set using std::set::extract() and insert it back in after modifying its nextTimepoint, like so: void Timer::Run() { ... auto node = m_events.extract(m_events.begin()); Event *event = node.value().get(); if (event->isRepeated) event->nextTimepoint += event->timeout; else event->nextTimepoint = /* infinity */; event->function(event); m_events.insert(std::move(node)); ... }; This requires C++17 though, as using std::set::extract() is the only way to move a std::unique_ptr in and out of a set. You can make it work with C++11 if you store raw pointers instead of std::unique_ptrs in the set (with EventCmp changed to take Event * arguments), and then you first make a copy of the pointer, then erase() it from the set, and insert it afterwards again: std::set<Event *, EventCmp> m_events; ... void Timer::Run() { ... Event *event = *m_events.begin(); m_events.erase(m_events.begin()); if (event->isRepeated) event->nextTimepoint += event->timeout; else event->nextTimepoint = /* infinity */; event->function(event); m_events.insert(event); ... };
{ "domain": "codereview.stackexchange", "id": 41969, "tags": "c++11, timer" }
Why isn't water running faster hotter?
Question: I was running the washing up water this morning, and started to think about why the cold tap isn't hot, and why the water doesn't get hotter the faster it is flowing (if anything, the cold tap gets colder the faster it flows). From my understanding $K.E. = \frac{mv^2}{2}$ and temperature is directly proportional to kinetic energy. I know that the $v$ in the above equation is really the mean speed of the particles and, since some are moving backwards and some forwards, it is the speed that is used. But surely the particles of water in the tap are all moving faster, therefore they should all be hotter. Perhaps the particles in the stream are moving at a much higher mean speed than the water is flowing, so the temperature increase is negligible... Am I correct in thinking this? Or otherwise, why doesn't the cold tap get hot the more you turn it on? Answer: The water gets colder the longer you run it (in the UK at least) because the water mains pipes buried in the ground are colder than the ones in your house, so sadly this isn't evidence for any fundamental physical effect. In principle any fluid flowing in a pipe gets hotter because energy is dissipated in viscous flow. You could in principle calculate the energy dissipated using the pressure drop per length of pipe, which is described by the Darcy-Weisbach equation, but this would be a somewhat involved calculation for real pipes/taps and in any case it isn't relevant to the core of your question. When you relate velocity to temperature you're presumably thinking of the Maxwell-Boltzmann distribution for the temperature dependence of the velocity profile in gases. The trouble is that this distribution is arrived at by considering redistribution of energy between gas molecules due to collisions between them.
If you simply add a constant velocity to every gas molecule you aren't making any difference to the way the gas molecules collide with each other, because it's only their relative velocities that matter. Although water is a liquid, not a gas, the same argument applies. It's the velocities of the water molecules relative to each other that determine the temperature. So just adding a constant velocity to every water molecule makes no difference.
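The viscous-dissipation estimate mentioned in the answer can be turned into a temperature rise in a few lines: the work dissipated per unit volume of water equals the pressure drop, so $\Delta T = \Delta P / (\rho c_p)$. All the pipe and flow numbers below are illustrative assumptions:

```python
# Order-of-magnitude estimate of viscous heating of water in a pipe.
# Dissipated work per unit volume equals the pressure drop, so
# dT = dP / (rho * cp).  Every pipe/flow value here is an assumption.
f = 0.03          # Darcy friction factor, typical turbulent value (assumed)
length = 20.0     # pipe length, m (assumed)
diameter = 0.015  # pipe bore, m (assumed)
v = 2.0           # flow speed, m/s (assumed)
rho = 1000.0      # density of water, kg/m^3
cp = 4186.0       # specific heat of water, J/(kg K)

dP = f * (length / diameter) * rho * v**2 / 2.0   # Darcy-Weisbach, Pa
dT = dP / (rho * cp)                              # temperature rise, K
print(f"pressure drop ~ {dP / 1000:.0f} kPa, warming ~ {dT * 1000:.0f} mK")
```

Even with a fairly generous 80 kPa pressure drop, the warming is on the order of hundredths of a kelvin, which is why this effect is negligible next to the temperature of the mains pipes.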
{ "domain": "physics.stackexchange", "id": 62978, "tags": "thermodynamics, energy" }
Have there ever been simultaneous cyclones in the same ocean but different hemispheres?
Question: I'm teaching a physics class, and I'm getting to the point where we are talking about the Coriolis effect. I usually show a couple of photos/videos showing a storm in the Northern Hemisphere and a storm in the Southern Hemisphere, and talk about why they rotate different directions. However, it occurred to me that at some point in history, there could well have been two storms occurring in the same ocean but in different hemispheres. Has this ever occurred? This would most likely have to be in the Pacific or Indian Oceans (probably the Pacific, since from what I've gathered the Northern Indian Ocean is less active than most other cyclone basins.) Ideally, a photo/video of this event would exist that I could show my students. Answer: It does occasionally happen. Not often, because to kick-start a hurricane there has to be some rotation to start with, from the Coriolis effect, together with a sea surface temperature of >27 °C (+ some other requirements such as lack of horizontal shear). This combination is best met during summer when one or other hemisphere is tilted towards the sun, so the hurricane seasons in both hemispheres tend to be 6 months out of synchronization. But there are occasional overlaps, as in this vector composite, found at http://www.wunderground.com/blog/JeffMasters/archive.html?year=2015&month=06&MR=1
{ "domain": "earthscience.stackexchange", "id": 1687, "tags": "meteorology, tropical-cyclone" }
Ranking Score System
Question: I have an assignment to solve this problem. There are a total of 12 test case files, 1 of which I have failed due to exceeding the time limit on the script. Question Description Bob has somehow managed to obtain a class list of \$N\$ students that contains the name of the students, as well as their scores for an exam. The task is to generate the rank of each student in the class list. Let \$S\$ be the number of students in the class list that have obtained a higher score than student \$X\$. The rank of student \$X\$ in the class list is formally defined as \$(S+1)\$. This means that if there are many students that obtain the same score, they will all have the same rank. Input The first line of input contains a single integer \$N\$, the number of students in the class list. \$N\$ lines will follow. Each line will describe one student in the following form: [name] [score]. Output For each student, print the rank of the student in the following form: [name] [rank]. These students should be printed in the same order as the input. Limits \$1 \leq N \leq 50000\$. All the names of students will only contain uppercase and lowercase English letters with no spaces. The names will not be more than 20 characters long. It is possible that there can be 2 students with the same name. All scores of students will range from 0 to 109 inclusive. Does anyone have a solution to my problem? Do inform me if any more information is needed. Any other comments on coding styles and space complexities that may arise are also appreciated, though the focus should be on time. Here's my code and test cases.
import java.util.*; //Comparator to rank people by score in ascending order //No tiebreaking for equal scores is considered in this question class PairComparator implements Comparator<List<Object>> { public int compare(List<Object> o1, List<Object> o2) { return (Integer)o1.get(0) - (Integer)o2.get(0); } } public class Ranking { private void run() { Scanner sc = new Scanner(System.in); ArrayList<List<Object>> inputPairs = new ArrayList<>(); ArrayList<String> nameIterList = new ArrayList<>();//To store names in scanned order HashMap<String, Integer> dupeCount = new HashMap<>();//To consider cases where there are people with same names int count = sc.nextInt(); for (int i=0;i<count;i++) { String name = sc.next(); int score = sc.nextInt(); name = checkDuplicates(nameIterList,name,dupeCount);//returns a unique name after considering duplicates List<Object> pair = List.of(score,name);//simulates a struct data structure in C with non-homogeneous elements inputPairs.add(pair); nameIterList.add(name); } Collections.sort(inputPairs, (new PairComparator()).reversed());//descending order sorting HashMap<String,Integer> nameRank = new HashMap<>();//name and respective rank in O(1) time makeTable(nameRank,inputPairs); for (String name: nameIterList) { System.out.println(String.format("%s %d",name.trim(),nameRank.get(name))); } //for displaying purposes, repeated name is printed } public static void main(String[] args) { Ranking newRanking = new Ranking(); newRanking.run(); } public static void makeTable(HashMap<String,Integer> nameRank, ArrayList<List<Object>> inputPairs) { int lowestRank = 1; int previousScore = (Integer)inputPairs.get(0).get(0); for (int i=0;i<inputPairs.size();i++) { List<Object> pairs = inputPairs.get(i); String name = (String) pairs.get(1); int score = (Integer) pairs.get(0); int currentRank = i+1;//default rank if there are no tiebreakers if (score==previousScore) { currentRank = lowestRank;//takes the smallest possible rank for a tie-breaker } else { 
lowestRank = currentRank;//updates the smallest possible rank as tie-breaker is broken previousScore = score; } nameRank.put(name,currentRank);//updates HashMap } } public static String checkDuplicates(ArrayList<String> nameList, String name, HashMap<String,Integer> dupeCount) { if (dupeCount.containsKey(name)) { int count = dupeCount.get(name); dupeCount.replace(name,count+1); //updates the duplicateTable return name+ new String(new char[count]).replace('\0', ' ');//new name is appending with spaces, trimmed later on } else {//entry not found, add in as the first one dupeCount.put(name,1); return name;//no change } } } Sample Inputs 25 Sloane 15 RartheCat 94 Taylor 34 Shark 52 Jayce 58 Westin 91 Blakely 6 Dexter 1 Davion 78 Saanvi 65 Tyson 15 Kiana 31 Roberto 88 Shark 55 MrPanda 25 Rar 26 Blair 12 RartheCat 81 Zip 74 Saul 58 ProfTan 77 SJShark 0 Georgia 79 Darian 44 Aleah 7 Sample Output Sloane 19 RartheCat 1 Taylor 15 Shark 13 Jayce 10 Westin 2 Blakely 23 Dexter 24 Davion 6 Saanvi 9 Tyson 19 Kiana 16 Roberto 3 Shark 12 MrPanda 18 Rar 17 Blair 21 RartheCat 4 Zip 8 Saul 10 ProfTan 7 SJShark 25 Georgia 5 Darian 14 Aleah 22 Answer: Your data model is slowing down your code. ArrayList<List<Object>> inputPairs = new ArrayList<>(); ArrayList<String> nameIterList = new ArrayList<>();//To store names in scanned order HashMap<String, Integer> dupeCount = new HashMap<>();//To consider cases where there are people with same names Calling ArrayList.add() will add an item to the array list. If insufficient room exists in its storage area, a larger area is allocated (an ArrayList grows to about 1.5 times its previous capacity) and the information is copied to the new storage area. With 50000 names, starting from the default capacity of 10 items, you will go through about 22 reallocations of the inputPairs and nameIterList containers. The HashMap stores its information differently, but it will suffer from similar capacity-doubling steps, with an additional penalty of "rebinning" the contents into the proper bins.
All of this takes time, and all of it can be avoided by pre-allocating your storage container sizes. You know what the limit is: 50000. Alternatively, you can read in N and then allocate properly sized storage containers. int count = sc.nextInt(); ArrayList<List<Object>> inputPairs = new ArrayList<>(count); ArrayList<String> nameIterList = new ArrayList<>(count); HashMap<String, Integer> dupeCount = new HashMap<>(count*2); A HashMap will rebin by default at 75% capacity, so I've initialized it at double the required capacity, ensuring that threshold is never reached. ArrayList<List<Object>> may not be the worst storage structure to use, but it comes close. List.of(score,name) should allocate a specialized, immutable two-member structure to use for the list, but you still have to go through the overhead of the List interface to .get() at the members. Worse, the score has to be boxed from an efficient int into a full blown Integer object. This auto boxing takes both time and space. Worse still, the additional object allocations will cause additional cache misses, slowing down the program. True, the Integer objects will probably all be interned varieties, due to their restricted range, but it all adds up to your time-limit-exceeded issue. List.of(score, name) was used to avoid creating your own simple class: class StudentRecord { String name; int score; } Instead of 3 objects (at least) per student, you only have two: the StudentRecord and the name. Access to the member fields is fast; no .get(int) overhead. (But even this is overhead that you don't need!) Checking for duplicate names, and creating fake names to avoid the duplicates, is a time-wasting operation. We can avoid it with a smarter algorithm. The better way First: let's simplify the data down to the bare minimum... int count = sc.nextInt(); String[] names = new String[count]; int[] score = new int[count]; ... two parallel arrays, one containing the student names (in order), and one containing the scores (in order).
Let's jump to the middle... int[] rank = new int[110]; You have 110 possible score values, each of which corresponds to exactly one rank. If you have 5 students with a score of 109 and one student with a score of 108, then rank[109] should contain 1, and rank[108] should contain 6. Jumping to the end... for(int i=0; i<count; i++) { System.out.printf("%s %d\n", names[i], rank[score[i]]); } ... prints out the student, looks up the rank corresponding to their score and prints that as well. Creation of the rank[] array Since this is a programming challenge, I'll leave this up to you. There are several ways to do it. Good luck.
{ "domain": "codereview.stackexchange", "id": 33872, "tags": "java, algorithm, programming-challenge, time-limit-exceeded" }
"Uniform" Set Cover Approximation?
Question: The (optimization version of) Set Cover problem is the following: given a "universe" set $S$ and a collection of subsets $S_1, \cdots, S_m \subseteq S$, we want to find a minimum cardinality set of indices $\{i_1, \cdots, i_k\}$ such that $\bigcup_{j=1}^k S_{i_j} = S$. It is a well-known result in an introductory approximation algorithms class that this problem has no $o(\log n)$-factor approximation unless $P = NP$. However, I'm interested in a restricted version of the problem: suppose that $|S_i| = n$ for all $i$ (i.e., all the subsets are exactly the same size). This problem turns out to still be $NP$-complete, but is there a better approximation ratio one can achieve? I don't know if there is terminology for the problem (I dubbed it "uniform" here), because searching through Google Scholar and the arXiv has not yielded much. Answer: No, one can't achieve a better approximation ratio for your problem. This follows by a simple padding argument. Let $S_1^*,\dots,S_m^* \subseteq S^*$ be an instance of ordinary set cover (where the sizes of the sets are not restricted to be the same). Let $n = |S^*|$. Define $S = S^* \cup \{1,2,\dots,n\}$, where it is assumed that $1,2,\dots,n$ represent $n$ new symbols not found in $S^*$. Also define $S_i = S^*_i \cup \{1,2,\dots,n-|S^*_i|\}$ and $S_{m+1} = \{1,2,\dots,n\}$. Then $S_1,\dots,S_m,S_{m+1} \subseteq S$ form an instance of your uniform set cover problem; by construction, the sets $S_1,\dots,S_m,S_{m+1}$ all have the same size $n$. Moreover, the minimum cardinality cover for the original problem $S^*$ differs in size from the minimum cardinality cover for the uniform problem $S$ by at most one, since given any solution $\{i_1,\dots,i_k\}$ for the original problem we obtain a valid solution $\{i_1,\dots,i_k,m+1\}$ for the uniform problem, and vice versa, any valid solution for the uniform problem yields a solution for the original problem once the index $m+1$ is discarded.
It follows from this reduction that uniform set cover has the same approximation factor as ordinary set cover. Thus, all the standard approximability results for set cover carry over to your problem as well.
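The padding argument above is constructive, so it is easy to write down. A sketch, with the fresh pad symbols modelled as negative integers (an implementation convenience, assuming the original universe contains none):

```python
# Turn an arbitrary set-cover instance into a "uniform" one where every set
# has size n = |universe|, changing the optimum by at most one.  The n fresh
# pad symbols are modelled as negative integers (an assumed convention).
def make_uniform(universe, sets):
    n = len(universe)
    pads = set(range(-1, -n - 1, -1))      # n fresh symbols not in universe
    padded = [s | set(range(-1, -(n - len(s)) - 1, -1)) for s in sets]
    padded.append(pads)                    # the extra set S_{m+1}
    return universe | pads, padded

universe = {1, 2, 3, 4}
sets = [{1, 2}, {2, 3, 4}, {4}]
new_universe, new_sets = make_uniform(universe, sets)
print(sorted(len(s) for s in new_sets))    # -> [4, 4, 4, 4]
```

Any cover of the new instance restricted to the first m indices covers the original universe, and any original cover plus the pad set covers the new one, matching the "differs by at most one" claim.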
{ "domain": "cs.stackexchange", "id": 7900, "tags": "reference-request, approximation" }
Perkin reaction for aliphatic aldehydes
Question: Why isn't the Perkin reaction possible with aliphatic aldehydes? Since the carbon atom of the carbonyl group of aliphatic aldehydes is more electrophilic than that of aromatic ones, the reaction should be more feasible in the case of aliphatic compounds. Please explain. Answer: In the Perkin reaction, condensation occurs between an aromatic aldehyde (which cannot undergo self-condensation) and an acid anhydride in the presence of the sodium or potassium salt of the corresponding acid. Consider the Perkin reaction of benzaldehyde with acetic anhydride to form cinnamic acid. The first two steps occur in weakly basic medium (sodium acetate). The last two occur in acidic medium (protonation and subsequent dehydration). Notice that the last step is particularly favoured owing to the formation of a conjugated product. This wouldn't be the case for aliphatic aldehydes (those with no alpha hydrogen, or else self-condensation may occur). Also, if the anhydride used has only one alpha hydrogen (here acetic anhydride has 3 alpha hydrogens), then the reaction barely proceeds to completion. Instead an aldol-type product is obtained. This shows the role conjugation plays in making the last step fairly irreversible. Furthermore, this reaction occurs with vinylogues, heterocyclic aldehydes and even phthalic anhydrides. Notice that in all cases the final product is conjugated. It would be worthwhile to mention an alternative mechanism, which has been mentioned in Peter Sykes and on Wikipedia as well. Some support for this mechanism comes from the fact that anhydrides with only one alpha hydrogen do not give this reaction. Again, here the base-promoted dehydration step would be largely irreversible in the case of aromatic aldehydes due to conjugation. Hope this helps. References: 1. Perkin, W. H. J. Chem. Soc. 1868, 21, 181. XXIII.—On the hydride of aceto-salicyl. DOI: 10.1039/js8682100181 2. Peter Sykes. Organic Mechanism. Sixth Edition. 3. https://en.wikipedia.org/wiki/Perkin_reaction 4. S. N. Sanyal: Reactions, Rearrangements and Reagents.
5.http://www.name-reaction.com/perkin-reaction
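The benzaldehyde example the answer walks through can be summarised as an overall equation (sodium acetate as base, heat; a sketch of the net transformation, not the stepwise mechanism):

```latex
\mathrm{C_6H_5CHO} \;+\; (\mathrm{CH_3CO})_2\mathrm{O}
\;\xrightarrow{\ \mathrm{CH_3COONa},\ \Delta\ }\;
\mathrm{C_6H_5CH{=}CHCOOH} \;+\; \mathrm{CH_3COOH}
```

The conjugation of the cinnamic acid product (the styrene-like C=C adjacent to both the ring and the carboxyl group) is exactly what the answer credits with driving the final dehydration forward.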
{ "domain": "chemistry.stackexchange", "id": 5072, "tags": "organic-chemistry" }
Testing a ViewModel in Android - MVVM with DataBinding
Question: I am using the MVVM pattern with data binding. I have written tests, but I want them reviewed. The test is a JUnit test on the ViewModel.

FeedViewModelTest.java

@RunWith(PowerMockRunner.class)
@PrepareForTest({Observable.class, AndroidSchedulers.class})
@PowerMockIgnore("javax.net.ssl.*")
public class FeedViewModelTest {

    FeedViewModel feedViewModel;
    DataManager dataManager;
    FeedViewModel.DataListener dataListener;
    FeedApi feedApi;

    @Before
    public void setUp() {
        dataListener = mock(FeedViewModel.DataListener.class);
        Context mMockContext = mock(Context.class);
        dataManager = mock(DataManager.class);
        feedApi = mock(FeedApi.class);
        feedViewModel = spy(new FeedViewModel(mMockContext, dataListener));
    }

    @Test
    public void testShouldScheduleLoadFromAPIOnBackgroundThread() {
        Observable<FeedResponse> observable = (Observable<FeedResponse>) mock(Observable.class);
        when(dataManager.fetchFeed()).thenReturn(observable);
        when(observable.subscribeOn(Schedulers.io())).thenReturn(observable);
        when(observable.observeOn(AndroidSchedulers.mainThread())).thenReturn(observable);

        // call test method
        feedViewModel.fetchFeed();
        verify(feedViewModel).fetchFeed();

        TestSubscriber<FeedResponse> testSubscriber = new TestSubscriber<>();
        observable.subscribeOn(Schedulers.io());
        observable.observeOn(AndroidSchedulers.mainThread());
        observable.subscribeWith(new DisposableObserver<FeedResponse>() {
            @Override
            public void onNext(FeedResponse value) {
                dataListener.onDataChanged(value.getData());
            }

            @Override
            public void onError(Throwable e) {
                e.printStackTrace();
                dataListener.onError();
            }

            @Override
            public void onComplete() {
            }
        });

        // verify that all methods in the chain are called with correct arguments
        verify(observable).subscribeOn(Schedulers.io());
        verify(observable).observeOn(AndroidSchedulers.mainThread());
        verify(observable).subscribeWith(Matchers.<DisposableObserver<FeedResponse>>any());
    }
}

FeedViewModel.java

public class FeedViewModel {

    public DataManager dataManager;
    private DataListener datalistener;
    private Context mContext;

    public FeedViewModel(Context context, DataListener datalistener) {
        this.datalistener = datalistener;
        mContext = context;
        dataManager = new DataManager();
    }

    public void fetchFeed() {
        dataManager.fetchFeed()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .subscribeWith(new DisposableObserver<FeedResponse>() {
                    @Override
                    public void onError(Throwable e) {
                        e.printStackTrace();
                        datalistener.onError();
                    }

                    @Override
                    public void onComplete() {
                    }

                    @Override
                    public void onNext(FeedResponse feedResponse) {
                        if (datalistener != null) {
                            datalistener.onDataChanged(feedResponse.getData());
                        }
                    }
                });
    }

    public interface DataListener {
        void onDataChanged(List<FeedModel> model);
        void onError();
    }
}

DataManager.java

public class DataManager {

    private FeedApi feedApi;
    private Observable<FeedResponse> feedResponseObservable;

    public FeedApi getFeedApi() {
        return feedApi;
    }

    public DataManager() {
        feedApi = new FeedApi();
    }

    public Observable<FeedResponse> fetchFeed() {
        feedResponseObservable = feedApi.fetchFeed(1);
        return feedResponseObservable;
    }
}

Testing libs used:

testCompile 'junit:junit:4.12'
testCompile 'org.mockito:mockito-core:1.9.5'
testCompile 'org.powermock:powermock-api-mockito:1.5.6'
testCompile 'org.powermock:powermock-module-junit4:1.6.2'

DataListener is an interface used as a callback to the activity. Some views in the activity are shown or hidden based on the data fetched from the server. Is my unit test for FeedViewModel correct? Note: the FeedViewModelTest test passes. I am using RxJava 2.

Answer: It's been a while and I haven't got a review for the question, so I am attempting to answer my own question. Do point out any mistakes; happy to take them.

The approach:

1. Create mocks.
2. Define return values for methods with Mockito's when.
3. Call the method under test.
4. Verify that all the methods in the chain are called with the correct arguments.

I changed my FeedViewModel constructor:

public FeedViewModel(Context context, DataListener datalistener, DataManager dataManager) {
    this.datalistener = datalistener;
    mContext = context;
    this.dataManager = dataManager;
}

With the above I can pass real arguments while fetching data and mocks while testing. Mockito's when requires mock objects, so with that in place I pass the mocks while setting up the test.

Mocks:

dataListener = mock(FeedViewModel.DataListener.class);
Context mMockContext = mock(Context.class);
dataManager = mock(DataManager.class);

Then:

feedViewModel = spy(new FeedViewModel(mMockContext, dataListener, dataManager));

My final unit test:

@Test
public void testShouldScheduleLoadFromAPIOnBackgroundThread() {
    Observable<FeedResponse> observable = (Observable<FeedResponse>) mock(Observable.class);
    when(dataManager.fetchFeed()).thenReturn(observable);
    when(observable.subscribeOn(Schedulers.io())).thenReturn(observable);
    when(observable.observeOn(AndroidSchedulers.mainThread())).thenReturn(observable);

    // call the method under test
    feedViewModel.fetchFeed();
    verify(feedViewModel).fetchFeed();

    // verify that all methods in the chain are called with correct arguments
    verify(observable).subscribeOn(Schedulers.io());
    verify(observable).observeOn(AndroidSchedulers.mainThread());
    verify(observable).subscribeWith(Matchers.<DisposableObserver<FeedResponse>>any());
}

To be sure my code is correct, I checked some repositories on GitHub. Although I used the MVVM pattern, the test case is similar when you use MVP. To be testable you need a clean architecture: with clear separation you can test your view model with JUnit and the UI with Espresso. I referred to this GitHub repository to make sure my test code is correct.
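The pivotal change in the self-answer is constructor injection: FeedViewModel originally did dataManager = new DataManager() internally, so no test could substitute a mock. A minimal, framework-free sketch of that seam (hypothetical Fetcher/ViewModel/Demo names, plain Java with a hand-rolled fake standing in for Mockito's mock(), so it runs standalone):

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for DataManager: the only behaviour the view model needs.
interface Fetcher {
    List<String> fetchFeed();
}

// Stand-in for FeedViewModel: the collaborator is injected through the
// constructor, never constructed internally, so a test can swap in a fake.
class ViewModel {
    private final Fetcher fetcher;

    ViewModel(Fetcher fetcher) {
        this.fetcher = fetcher;
    }

    int feedSize() {
        return fetcher.fetchFeed().size();
    }
}

public class Demo {
    public static void main(String[] args) {
        // A hand-rolled fake plays the role mock(DataManager.class) plays
        // in the post: it returns canned data with no network involved.
        Fetcher fake = () -> Arrays.asList("a", "b");
        System.out.println(new ViewModel(fake).feedSize()); // prints 2
    }
}
```

With the original no-argument construction the fake could never be slotted in, which is exactly why the post's when(dataManager.fetchFeed()).thenReturn(observable) stubbing only becomes effective after the constructor change.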
{ "domain": "codereview.stackexchange", "id": 22511, "tags": "java, unit-testing, android, rx-java" }