Why is meta xylene formed when toluene undergoes alkylation in presence of excess AlCl3?
Question: My book says that in the presence of excess aluminium chloride, when toluene reacts with methyl iodide, meta-xylene is the major product. Why aren't the ortho and para products favored here? Edit: The book is "Higher Secondary Chemistry Second Paper", third edition (reprint: 2013) by Haradhan Nag. The reaction is given on page 292. Answer: Alkylation appears to be reversible under forcing conditions. To make a decent analysis, product ratios and reaction times would be informative. The initial formation of the ortho and para products is due to lower activation energies. If the products can equilibrate, meta is favored over ortho because of steric effects, and over para both because there are twice as many meta positions and because the electronic effect that stabilizes the transition state should actually raise the energy of the para ground state. Subtle, perhaps, but it does not take much of an energy difference to change close product ratios.
{ "domain": "chemistry.stackexchange", "id": 16103, "tags": "organic-chemistry" }
How does rate reduction work in a decimator with even valued gain?
Question: Cascaded integrator-comb (CIC) filters are used in decimators, and decimators with a gain that is a power of 2 are popular (at least at my workplace) because the gain can be normalized with a right shift of the output data. My understanding of decimators comes from Discrete Time Signal Processing by Alan Oppenheim, where decimators are described as a low pass anti-alias filter (AAF) followed by a rate reducer. The Rick Lyons article linked below states that a first order CIC decimator is equivalent to a moving average filter followed by a rate reducer. https://www.dsprelated.com/showarticle/1337.php How does rate reduction in a CIC decimator with even valued gain work if a moving average filter with even length impulse response has a non-integer group delay? For example, taking a decimation ratio of 5, the anti-alias filter's impulse response would be [1,1,1,1,1], and the rate reducer would select every 5th sample starting with the 3rd one because the AAF has a group delay of 2. But what if the decimation ratio was 4? The rate reducer would select every 4th sample, but would it start with the second sample or the third? The group delay is 1.5, so we should start with the 2.5th sample, but it doesn't exist. Answer: As the OP has noted and as detailed further here, the CIC decimator is mathematically identical to a moving average filter followed by a down sampler. In the structure of a moving average filter followed by a down sampler, the group delay does not dictate which sample you pick. In the OP's example of a 5 sample moving average, the group delay is indeed two samples as the OP has stated, so a sampled sine wave for example would appear with a two sample delay at the output of the moving average. 
However, regardless of this delay, we can still choose to down-sample starting with ANY sample, with the result that the down-sampled waveform will just be an additionally delayed copy of the waveform for any other starting sample selected (so we get, in this case, the choice of five possible delays for the same output waveform). I demonstrate this with a simple sinusoidal test waveform in the graphic below: In the upper graph we see the input signal and the signal after the 5 sample moving average, where the 2 sample delay occurs. The lower plot shows the options for the output waveform depending on which sample we start with: "start = 0" means we select the very first sample out of the moving average filter output (which would be the first sample at the input given $y[n]=x[n]+x[n-1]+x[n-2]+x[n-3]+x[n-4]$), and then every 5th sample after that. "start = 1" means we start with the second sample and then select every 5th sample after that, etc. Note that the resulting output is still the same waveform, just representing how it appears at a different delay in time. In this case each of those possible outputs already includes the filter group delay, and then we can optionally add more delay depending on which sample we start with (with reference to the output sampling rate), simply as samples of the delayed waveform. In case the fractional delay itself was a point of confusion for the OP: When we have a filter that includes a 1/2 sample delay, this means the samples at the output of the filter will be the interpolated values midway between the original input samples. The down-sampling of this will simply be samples selected from this delayed waveform. This is the same result as having delayed the analog waveform by 1.5 samples prior to sampling.
I demonstrate the point I am trying to make with the two plots below, here showing the result of a 4 sample moving average (thus group delay = 1.5 samples) of the same waveform used above, but also divided by 4 so that we can compare input vs output directly. First observe the zoomed-out plot, where we see the output delayed by 1.5 samples. I zoom in on the waveforms in the plot below to show that the output samples with the 1.5 sample delay are indeed the expected samples of the waveform 1.5 samples earlier, as values that did not originally exist--they are simply the interpolated values associated with that fractional delay. That said, the subsequent down-sampling we do will be on these new values regardless of what the group delay of the filter was: if we start sooner or later, it just adjusts the final delay of the output, and this is the same whether we have an integer or fractional group delay in the moving average filter. Consider a simple case of a moving average filter of 2 samples, where we down-sample by selecting every other sample: We can select either the even samples OR the odd samples at the output of the two sample moving average. Within the bandwidth of the decimation, the two waveforms are identical other than the difference in phase associated with the one sample offset (one sample offset at the input sampling rate). Further, prior to down-sampling, the delay at the output of the 2 sample moving average will be one half a sample at the input sampling rate: the group delay of any symmetric or antisymmetric filter (in this case with coefficients [1 1]) is $(N-1)/2$ where $N$ is the number of coefficients (here 2). So in this example, relative to the input sampling rate, the delay at the output of the two sample moving average is 0.5 samples. We then select either the even or the odd samples, which will have a 1 sample delay between them (at the input sampling rate).
A diagram depicting this is shown below, where the group delays shown are given in units of the input sampling rate. Assuming the down-samplers operate concurrently, selecting the "odd" samples is similar to a 1 sample delay of the output of the 2 sample moving average prior to down sampling. The two outputs shown will be identical with regards to signal content within the decimated spectrum other than a linear phase difference consistent with the 1 sample delay between them. In units of samples at the output sampling rate, the group delays shown at the output would therefore be half of what is shown or -.75 samples and -.25 samples. This is just the resulting delay of the filter depending on which output is chosen (which starting value is chosen for down sampling) but either will result in the resampled waveform at the lower rate with the appropriately interpolated values given by that fractional delay.
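The phase-invariance argument above can be checked numerically; here is a minimal sketch (my own script, not the answerer's plotting code), using an assumed 2 Hz test tone at a 100 Hz input rate and a 5-tap moving average:

```python
import numpy as np

# 2 Hz test tone at an assumed 100 Hz input rate, well inside the decimated band.
fs = 100.0
n = np.arange(400)
x = np.sin(2 * np.pi * 2.0 * n / fs)

# 5-sample moving average (equivalent to a first-order CIC with R = 5);
# group delay is (N-1)/2 = 2 input samples.
h = np.ones(5)
y = np.convolve(x, h)[:len(x)]

# All 5 possible down-sampling phases: each is the same waveform,
# just delayed by one additional input sample.
phases = [y[start::5] for start in range(5)]
```

Past the filter's startup transient, every phase matches the closed-form steady-state output $G\sin(\omega(n-2))$, where $G$ is the moving average's gain at the tone frequency, confirming that the choice of starting sample only shifts the overall delay.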
{ "domain": "dsp.stackexchange", "id": 11910, "tags": "finite-impulse-response, decimation, group-delay" }
How variable does a star have to be, to be a variable star?
Question: Variable stars are stars whose apparent magnitude varies. But there are so many phenomena that can cause a star to be variable that I would expect all stars to be variable. A rotating star has a starspot? It's variable! A planet is transiting? It's variable! A cloud of dust passes in front? It's variable! This makes me wonder what exactly distinguishes a variable from a non-variable star? Must the variations in brightness be larger than a given magnitude? Must the variations in brightness be periodic? Must dips (or peaks) in brightness be observed more than once? Answer: There is no lower limit, and as you say, all stars are somewhat variable. However, catalogues of variable stars exist, and they can record a wide range of levels of variability. For example, the General Catalogue of Variable Stars lists stars like Alpha Trianguli, with a variability of 0.01 magnitudes. Ultimately a variable star is a star which has had its variability measured, studied, and recorded in a catalogue.
{ "domain": "astronomy.stackexchange", "id": 4294, "tags": "star, variable-star, definition" }
What do you call mRNAs that translate to the same protein?
Question: For example, AUAACC and AUCACG in distinct mRNAs may both be translated to the same dipeptide Ile-Thr. Answer: This is by no means a consensus, but this paper from 2000 proposes a new term: isotranscript. According to the authors: Duplicated genes often encode distinct proteins that differ by only a few amino acids, such as the two mammalian S27 isoforms, S271 and S272, documented here and the yeast and Arabidopsis S27 isoforms identified previously. On the other hand, we have found that different S27 transcripts encode 100% identical proteins. This phenomenon has also been described previously for the histone subtype, H3.3, in which two different transcripts encode the same H3.3 amino acid sequence. We have chosen the terminology “isotranscripts” to describe such mRNAs. (emphasis mine) Source: Thomas, E., Alvarez, C. and Sutcliffe, J. (2000). Evolutionarily Distinct Classes of S27 Ribosomal Proteins with Differential mRNA Expression in Rat Hypothalamus. Journal of Neurochemistry, 74(6), pp. 2259-2267.
{ "domain": "biology.stackexchange", "id": 7879, "tags": "molecular-biology, proteins, terminology, rna" }
Convolution with RTD laminar flow
Question: I have laminar flow in a tube. Consider the tube to be 0.2 m long with an average velocity of 0.05 m/s. The analytical expression for my transfer function is $E(t)= \frac{\tau^2}{2t^3}$ for $t \geq \frac{\tau}{2}$ and $E(t)=0$ for $t < \frac{\tau}{2}$, where $\tau$ is the mean residence time. In this case: $\tau = \frac{0.2\ \text{m}}{0.05\ \text{m/s}} = 4\ \text{s}$. I want to convolve this with an exponential equation: $E_2(t)=1-\exp\left(\frac{-t}{2.55}\right)$. This equation describes the magnetization of a particle in a static magnetic field. I want to get the average magnetization of the particles at the outlet of the tube. So I thought I would do the following: $E_{out}(t)=E(t)*E_2(t)$. I would then take the value at $t = 4\ \text{s}$. I have the results from CFD, which gives the following: CFD result, the red line is the magnetic field and the green line is $E_{out}(t)$. I'm interested in the value at $x = 0.2\ \text{m}$, which is 0.7942. My results of the convolution are totally different: Convolution result. What am I doing wrong? I'm a little bit confused. Does anyone have an idea of how to approach this problem? Best regards, Gesetzt Answer: The average magnetization exiting the tube in steady state should just be the product, not the convolution: $$\int_0^\infty E(t)\, E_2(t) \, dt$$ $$\int_{\frac{\tau}2}^\infty \left(1-e^{-\frac{t}{k}}\right)\, \frac{\tau^2}{2\,t^3} \, dt$$ $$1 - e^{-\frac{\tau}{2 k}} + \frac{\tau \, e^{-\frac{\tau}{2 k}}}{2 k} + \frac{\tau^2\, Ei\left(-\frac{\tau}{2 k}\right)}{4 k^2}$$ where $Ei$ is the exponential integral. This comes out to a bit less than your CFD results: the x axis in this case is the average residence time, which should be proportional to the length along the tube, making our plots comparable.
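As a quick numerical cross-check of the product-then-integrate approach (my own script, not from the thread; the closed form below is written with the signs arranged so that the result is positive):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

tau, k = 4.0, 2.55  # mean residence time [s] and magnetization time constant [s]

def E(t):   # laminar-flow RTD, valid for t >= tau/2
    return tau**2 / (2 * t**3)

def E2(t):  # magnetization buildup
    return 1 - np.exp(-t / k)

# Steady-state outlet magnetization: weight the buildup curve by the RTD.
numeric, _ = quad(lambda t: E(t) * E2(t), tau / 2, np.inf)

# Closed form in terms of the exponential integral Ei.
s = tau / (2 * k)
closed = 1 - np.exp(-s) + s * np.exp(-s) + tau**2 * expi(-s) / (4 * k**2)

print(numeric)  # ~0.705, a bit below the CFD value of 0.7942
```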
{ "domain": "engineering.stackexchange", "id": 2151, "tags": "chemical-engineering, cfd, matlab" }
How do I calculate the range of a fixed-point number with $a$ integer bits and $b $ fractional bits?
Question: What does the range of a fixed-point number represent? Why do we use the formula $2^a - 2^{-b}$; why minus $2^{-b}$? Here $a$ is the number of integer bits, and $b$ is the number of fractional bits. If we have for example $a = 8$ and $b = 2$, don't we have the possibility to represent the number $2^a + 2^{-b}$, and wouldn't that represent the range? Answer: Let's assume that we are dealing with unsigned number types. If you used all $a+b$ bits for the integer part, then the set of possible numbers would be: $$\left\{0, 1, 2, 3, \dots, 2^{a+b}-1\right\}.$$ These numbers can be divided by $2^b$ (or multiplied by $2^{-b}$) to make use of $b$ bits for the fractional part, resulting in this set of possible numbers: $$\left\{0, 1\times2^{-b}, 2\times2^{-b}, 3\times2^{-b},\dots, \left(2^{a+b}-1\right)\times2^{-b}\right\} \\=\left\{0, 2^{-b}, 2\times2^{-b}, 3\times2^{-b},\dots, \underline{\underline{2^a-2^{-b}}}\right\}.$$ So your formula gives the largest number that can be represented (double underlined). The formula can also be understood as going a step of $2^{-b}$, or one least significant bit (LSB), backwards from $2^a$, which is the first number that has the same truncated binary string representation as 0, the first number in the system. In a similar way, in the 8-bit unsigned integer system we take an LSB-sized step backwards from 256 (1 0000 0000 binary) to obtain 255 (1111 1111 binary), which is the largest representable number in that system. I just found out that range has an established meaning in arithmetic: largest value minus smallest value. The smallest value happens to be zero, so in this case the largest value equals the range.
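The answer's construction is easy to verify by brute force for a small unsigned format; a quick sketch with assumed parameters a = 3 and b = 2:

```python
a, b = 3, 2  # 3 integer bits, 2 fractional bits, unsigned

# Every (a+b)-bit pattern is an integer k; its fixed-point value is k * 2**-b.
values = [k * 2**-b for k in range(2**(a + b))]

print(min(values))  # 0.0
print(max(values))  # 7.75, i.e. 2**a - 2**-b: one LSB short of 2**a
```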
{ "domain": "dsp.stackexchange", "id": 3814, "tags": "fixed-point" }
Does the Lieb-Robinson bound constrain the speed of entanglement information transmission?
Question: I just learned of the existence of the theoretical Lieb-Robinson bound, which indicates that the speed at which information can propagate in non-relativistic quantum systems cannot exceed an upper limit. If this is true, does it mean that entanglement is not transmitting information? And what does it mean for protocols like BB84 (quantum key distribution)? Answer: Entanglement does not transmit information, as follows from the no-communication theorem. The Lieb-Robinson bound is a limit on the speed at which a perturbation propagates via short-range interactions, for example in a spin lattice. I doubt it means anything for protocols such as BB84; you can transmit quantum information by sending polarized photons, and photons move at the speed of light; or you can transmit quantum information using the quantum teleportation protocol, which is also limited by the speed of light.
{ "domain": "quantumcomputing.stackexchange", "id": 1189, "tags": "communication, bb84, faster-than-light" }
Puzzle game solver via Backtracking
Question: I'm relatively new to Haskell and I'd like to get feedback on the style of my program. Specifically: Coding style: Can any parts be written in a more concise or more readable way? Misuse: Are there any parts that should generally be avoided, or any typical beginner mistakes? (Basically the do's and don'ts.) The game The program solves a puzzle that is also known as binoxxo: There is a 2n x 2n square grid that is partially filled with X and O. The goal is to fill each cell of the grid with an X or O such that the result satisfies three conditions: In each column and each row, the same symbol cannot occur more than twice in a row. (I.e. OXXOXO is ok, however OXXXOO is not.) Each row and each column has exactly n Xs and n Os. No two rows are equal and no two columns are equal. (An online version can be found here.) The program The following module consists of three parts: First we define the new type to describe the grid, with the type constructors X, O and finally E (for an empty cell). Then we have two functions: isValid checks whether all three rules hold so far, and backtrack, which actually solves the puzzle. It does so by placing an X and an O in the first E spot of the given puzzle, checking the validity of the new grids, and if that succeeds, recursing a step deeper. This function finds all the possible solutions, but due to lazy evaluation we can use take 1 $ backtrack myBoard to get only one and finish a little bit quicker.
module Binoxxo (isValid, backtrack, exampleBoard, Cell (..), Board) where

import Control.Applicative
import Data.List

data Cell = E | X | O -- E = empty
  deriving (Eq, Show)

type Board = [[Cell]]

exampleBoard = -- for solving: call "backtrack exampleBoard"
  [[X,X,E,E],
   [E,E,E,E],
   [O,E,E,X],
   [O,O,E,X]]

-- backtracking
backtrack :: Board -> [Board]
backtrack b
  | isFull b  = [b] -- board has no more empty cells
  | otherwise = nub $ concat $ map backtrack validBoards
  where
    isFull b    = not $ E `elem` concat b
    newBoards   = generateAllBoards b :: [Board]
    validBoards = filter isValid newBoards

generateAllBoards :: Board -> [Board] -- adds one new X/O in the position of an E
generateAllBoards b = concat $ map assembleBoards (prefixRowSuffix b)
  where
    prefixRowSuffix :: [a] -> [([a],a,[a])]
    -- [1,2,3,4] -> [([],1,[2,3,4]), ([1],2,[3,4]), ([1,2],3,[4]), ([1,2,3],4,[])]
    prefixRowSuffix b = zip3 (inits b) b (drop 1 $ tails b)
    assembleBoards :: ([[Cell]],[Cell],[[Cell]]) -> [Board]
    assembleBoards (front,m,back) =
      take 2 -- we only need to place X and O in the first occurrence of E, because one of them MUST be correct
        [front ++ [f ++ [x] ++ b] ++ back | (f,E,b) <- prefixRowSuffix m, x <- [X,O]]

-- validity check (implements the three rules)
isValid :: Board -> Bool
isValid b = and $ [and . map checkNeighbours, checkDupli, and . map checkCount]
              -- each of these gets applied to both the normal and the transposed board
              <*> [rows, cols]
  where
    rows = b :: Board
    cols = transpose b :: Board

-- we cannot have three consecutive X or O
checkNeighbours :: [Cell] -> Bool
checkNeighbours (a:b:c:xs) =
  let this = not $ any ((&& a==b && b==c) . (c==)) [X,O]
      rest = checkNeighbours (b:c:xs)
  in this && rest
checkNeighbours _ = True

-- we cannot have two equal rows/columns
checkDupli :: Board -> Bool
checkDupli b = check $ filter (all (/=E)) b -- only check full rows for duplicates
  where
    check (x:xs) = (not $ x `elem` xs) && check xs
    check []     = True

-- if a row is of length n, we can have at most n/2 of each of X and O
checkCount :: [Cell] -> Bool
checkCount xs = notTooMany O && notTooMany X
  where
    len = length xs
    notTooMany xo = len >= 2 * length (filter (==xo) xs)

Answer: isFull fires when assembleBoards returns an empty list; we'd expect to be able to ask that question only once. newBoards and validBoards do not deserve names - if you want the reader to be able to tell what the value means, comments are more appropriate. Most of the rest of backtrack is about descending into a nested data structure and changing a small part, which lens specializes in: traverse . traverse . filtered (==E) descends into the board, then each of its elements, then each of their E cells. holesOf gives you, roughly speaking, the positions of the targets in the original board - it separates the board into a Cell and a Cell -> Board for each target. peek lets you forget there was an E.

import Control.Comonad.Representable.Store (peek)
import Control.Lens (holesOf, filtered)
import Data.Maybe (listToMaybe)

backtrack :: Board -> [Board]
backtrack b = case setFirstE b of
  -- board has no more empty cells
  Nothing -> [b]
  -- we only need to place X and O in the first occurrence of E, because one of them MUST be correct
  Just setter -> nub $ concatMap backtrack $ filter isValid $ map setter [X,O]

-- Punches the first E out of the board, if any.
setFirstE :: Board -> Maybe (Cell -> Board)
setFirstE = listToMaybe . map (flip peek) . holesOf (traverse . traverse . filtered (==E))
{ "domain": "codereview.stackexchange", "id": 26263, "tags": "beginner, haskell, backtracking" }
Right and left derivations of a CFG from a parse tree?
Question: I just want to double-check what I concluded from this parse tree.

        A
       / \
      /   \
     B     C
    / \   / \
   0   D D   E
         |   |
         0   1

From the above tree I get the rightmost derivation to be A -> C, C -> D|E, E -> 1, D -> 0, and the leftmost derivation as A -> B, B -> 0|D, D -> 0. I just want someone to proof-check for me and tell me if I am wrong somewhere. Thanks! Answer: What you wrote are not derivations, they are grammars. The leftmost derivation in your case is $A \to BC \to 0DC \to 0DDE \to 0D0E \to 0D01$. I'll leave you to figure out the rightmost derivation.
{ "domain": "cs.stackexchange", "id": 8535, "tags": "automata, context-free" }
Regular vs. General Geodesic Equation
Question: I'm reading Wald, and I've just got up to the geodesic equation: $$T^a \nabla_a T^b = 0.\tag{1}$$ Right after, Wald says that "one might require only that the tangent vector to the curve point in the same direction as itself when parallel propagated, and not demand that it maintain the same length", which yields: $$T^a \nabla_a T^b = \alpha T^b.\tag{2}$$ How can we start from the second equation, assume that the tangent to the curve $T^a$ has constant length, and reach the first equation? I've tried something like the following: We have $T^a \nabla_a T^b = \alpha T^b$ and know that $T^a$ has a constant length, so we can say $g_{ab}T^aT^b = K$. In a coordinate system $\psi$, we can rewrite the first equation as $$\frac{dT^\mu}{dt} + \Gamma^\mu_{\sigma \nu}T^\sigma T^\nu = \alpha T^\mu$$ $$\Rightarrow \frac{dT^\mu}{dt} + (g^{\sigma \nu}g_{\sigma \nu})\Gamma^\mu_{\sigma \nu}T^\sigma T^\nu = \alpha T^\mu$$ $$\Rightarrow \frac{dT^\mu}{dt} + g^{\sigma \nu}\Gamma^\mu_{\sigma \nu}(g_{\sigma \nu}T^\sigma T^\nu) = \alpha T^\mu$$ $$\Rightarrow \frac{dT^\mu}{dt} + g^{\sigma \nu}\Gamma^\mu_{\sigma \nu}K = \alpha T^\mu$$ I'm stuck at this point. I've tried plugging in the Christoffel symbols in terms of the metric but didn't see any simplification. Any tips on how to proceed? Answer: The (squared) length of the tangent vector is given by $T^a T_a$. Its change along the curve is then $$ T^a \nabla_a ( T^b T_b) = 2 T_b \, T^a \nabla_a T^b = 2 \alpha\, T^b T_b. $$ Thus, if $\alpha=0$, the length remains unchanged along the geodesic. Conversely, if the length is constant (and nonzero), the left-hand side vanishes, so $2 \alpha\, T^b T_b = 0$ forces $\alpha = 0$, and (2) reduces to (1).
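For the direction from (2) back to (1) without assuming constant length, the standard move is a reparametrization; the following is my own sketch of the usual textbook argument, not Wald's text verbatim:

```latex
% Let t be the original parameter and define a new parameter s(t),
% with rescaled tangent S^a = (dt/ds) T^a. Using (2),
S^a \nabla_a S^b
  = \frac{dt}{ds}\, T^a \nabla_a\!\left(\frac{dt}{ds}\, T^b\right)
  = \left(\frac{dt}{ds}\right)^{\!2} \alpha\, T^b
    + \frac{dt}{ds}\, \frac{d}{dt}\!\left(\frac{dt}{ds}\right) T^b .
% This vanishes whenever t(s) solves the ordinary differential equation
\frac{d}{dt}\!\left(\frac{dt}{ds}\right) + \alpha \, \frac{dt}{ds} = 0 ,
% which always has a solution; s is then an affine parameter and S^a
% satisfies the geodesic equation (1).
```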
{ "domain": "physics.stackexchange", "id": 86514, "tags": "general-relativity, differential-geometry, metric-tensor, geodesics" }
Python 3 minesweeper tkinter game
Question: I have just finished my minesweeper game using tkinter and would like to know how I could improve my program.

from tkinter import *
from tkinter import messagebox
from random import randint

class setupwindow():
    def __init__(window): #window is the master object of the setup window
        window.root = Tk()
        window.root.title("Setup")
        window.root.grid()
        window.finish = "N"
        labels = ["Height:", "Width:", "Mines:"]
        window.label = ["","",""]
        window.entry = ["","",""]
        for i in range(3):
            window.label[i] = Label(text = labels[i])
            window.label[i].grid(row = i, column = 1)
            window.entry[i] = Entry()
            window.entry[i].grid(row = i, column = 2)
        window.startbutton = Button(text = "Start", command = lambda: setupwindow.onclick(window))
        window.startbutton.grid(column = 2)
        window.root.mainloop()

    def onclick(window):
        setupwindow.verification(window)
        if window.verf == "Y":
            window.finish = "Y"
            window.root.destroy()
        return window

    def verification(window):
        height = window.entry[0].get()
        width = window.entry[1].get()
        mines = window.entry[2].get()
        window.verf = "N"
        if height.isdigit() and width.isdigit() and mines.isdigit():
            height = int(height)
            width = int(width)
            mines = int(mines)
            if height > 0 and height <= 24:
                totalsquares = height * width
                if width > 0 and width <= 48:
                    if mines > 0:
                        if mines < totalsquares:
                            window.verf = "Y"
                            window.height = height
                            window.width = width
                            window.mines = mines
                        else:
                            messagebox.showerror("Invalid", "You cannot have more mines than squares!")
                    else:
                        messagebox.showerror("Invalid", "You can't play minesweeper without mines!")
                else:
                    messagebox.showerror("Invalid", "Width must be between 1 and 48 inclusive")
            else:
                messagebox.showerror("Invalid", "Height must be between 1 and 24 inclusive")
        else:
            messagebox.showerror("Invalid", "All values must be integers")

class gamewindow():
    def __init__(s, setup): #s is the master object of the main game
        s.height = setup.height
        s.width = setup.width
        s.mines = setup.mines
        s.root = Tk()
        s.root.title("Minesweeper")
        s.root.grid()
        s.finish = "N"
        s.maingrid = list()
        for i in range(s.height):
            s.maingrid.append([])
            for x in range(s.width):
                s.maingrid[i].append(" ")
                s.maingrid[i][x] = Button(height = 0, width = 3, font = "Calibri 15 bold", text = "", bg = "gray90", command = lambda i=i, x=x: gamewindow.onclick(s, i, x))
                s.maingrid[i][x].bind("<Button-3>", lambda event="<Button-3>", i=i, x=x: gamewindow.rightclick(event, s, i, x))
                s.maingrid[i][x].grid(row = i, column = x)
                s.maingrid[i][x].mine = "False"
        totalsquares = s.height * s.width
        s.scoreneeded = totalsquares - s.mines
        s.score = 0
        indexlist = list()
        for i in range(totalsquares):
            indexlist.append(i)
        spaceschosen = list() #where the mines are going to be
        for i in range(s.mines):
            chosenspace = randint(0, len(indexlist) - 1)
            spaceschosen.append(indexlist[chosenspace])
            del indexlist[chosenspace]
        for i in range(len(spaceschosen)):
            xvalue = int(spaceschosen[i] % s.width)
            ivalue = int(spaceschosen[i] / s.width)
            s.maingrid[ivalue][xvalue].mine = "True"
        s.root.mainloop()

    def onclick(s, i, x):
        colourlist = ["PlaceHolder", "Blue", "Green", "Red", "Purple", "Black", "Maroon", "Gray", "Turquoise"]
        if s.maingrid[i][x]["text"] != "F" and s.maingrid[i][x]["relief"] != "sunken":
            if s.maingrid[i][x].mine == "False":
                s.score += 1
                combinationsi = [1, -1, 0, 0, 1, 1, -1, -1]
                combinationsx = [0, 0, 1, -1, 1, -1, 1, -1] #All the surrounding spaces
                minecount = 0
                for combinations in range(len(combinationsi)):
                    tempi = i + combinationsi[combinations]
                    tempx = x + combinationsx[combinations]
                    if tempi < s.height and tempx < s.width and tempi >= 0 and tempx >= 0:
                        if s.maingrid[tempi][tempx].mine == "True":
                            minecount = minecount + 1
                if minecount == 0:
                    minecount = ""
                s.maingrid[i][x].configure(text = minecount, relief = "sunken", bg = "gray85")
                if str(minecount).isdigit():
                    s.maingrid[i][x].configure(fg = colourlist[minecount])
                if minecount == "":
                    for z in range(len(combinationsi)):
                        if s.finish == "N":
                            ivalue = i + int(combinationsi[z])
                            xvalue = x + int(combinationsx[z])
                            if ivalue >= 0 and ivalue < s.height and xvalue >= 0 and xvalue < s.width:
                                if s.maingrid[ivalue][xvalue]["relief"] != "sunken":
                                    gamewindow.onclick(s, ivalue, xvalue)
                if s.score == s.scoreneeded and s.finish == "N":
                    messagebox.showinfo("Congratulations", "A winner is you!")
                    s.finish = "Y"
                    s.root.destroy()
            else:
                s.maingrid[i][x].configure(bg = "Red", text = "*")
                for a in range(len(s.maingrid)):
                    for b in range(len(s.maingrid[a])):
                        if s.maingrid[a][b].mine == "True":
                            if s.maingrid[a][b]["text"] == "F":
                                s.maingrid[a][b].configure(bg = "Green")
                            elif s.maingrid[a][b]["bg"] != "Red":
                                s.maingrid[a][b].configure(bg = "Pink", text = "*")
                        elif s.maingrid[a][b]["text"] == "F":
                            s.maingrid[a][b].configure(bg = "Yellow")
                messagebox.showinfo("GAME OVER", "You have lost")
                s.root.destroy()

    def rightclick(event, s, i, x):
        if s.maingrid[i][x]["relief"] != "sunken":
            if s.maingrid[i][x]["text"] == "":
                s.maingrid[i][x].config(text = "F")
            elif s.maingrid[i][x]["text"] == "F":
                s.maingrid[i][x].config(text = "?")
            else:
                s.maingrid[i][x].config(text = "")

if __name__ == "__main__":
    setup = setupwindow()
    if setup.finish == "Y":
        game = gamewindow(setup)
    quit()

Answer: PEP-8: Class names should normally use the CapWords convention. #class-names Don't use spaces around the = sign when used to indicate a keyword argument, or when used to indicate a default value for an unannotated function parameter. #whitespaces Always use self for the first argument to instance methods. #function-and-method-arguments There are other PEP-8 violations; read the complete document. Guard clause: What are guard clauses and how to use them? Do not use for i in range(len(list)) unless you really need the index. Even if you do need it, use enumerate instead. Loop Like A Native https://stackoverflow.com/questions/11901081/only-index-needed-enumerate-or-xrange Convert a range to a list:

indexlist = list()
for i in range(totalsquares):
    indexlist.append(i)

is equivalent to

indexlist = list(range(totalsquares))
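To make the range(len(...)) advice concrete for the mine-placement loop: the index i is never actually needed there, so the loop can iterate directly over the chosen spaces, and divmod can replace the separate / and % computations. A standalone sketch (a toy grid of dicts stands in for the tkinter Buttons):

```python
width, height = 4, 3
grid = [[{"mine": False} for _ in range(width)] for _ in range(height)]

spaceschosen = [0, 5, 10]  # flat indices where mines go

for space in spaceschosen:            # no index variable needed at all
    row, col = divmod(space, width)   # replaces int(space / width) and int(space % width)
    grid[row][col]["mine"] = True
```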
{ "domain": "codereview.stackexchange", "id": 33598, "tags": "python, tkinter, minesweeper" }
How Do I Do This Integral?
Question: I am trying to derive a boson coherent-state path integral and one part of the derivation is to evaluate/prove $$ \int d\Psi(\tau)\, d\Psi^*(\tau)\, |\Psi(\tau)|^{2n} \exp(-|\Psi(\tau)|^2) = n!\, \pi. $$ This is what I tried to do: $$ \int d\Psi(\tau) d\Psi^*(\tau) (\Psi)^n(\Psi^*)^n \exp(-\Psi(\tau) \Psi(\tau)^*) = \int d \Psi(\tau) \Psi^n\int(\Psi^*)^n \exp(-\Psi \Psi^*) d\Psi^* = \int d \Psi(\tau) \Psi^n\bigg[ n! (1/\Psi)^{n+1}\bigg] $$ I was using $\int_0^\infty x^n e^{-x/a} dx = n!\, a^{n+1}$. How do I derive this integral? Answer: Make the substitution $\Psi=\rho\exp(i\phi)$, $\Psi^*=\rho\exp(-i\phi)$; with $d\Psi\, d\Psi^*$ understood as $d(\mathrm{Re}\,\Psi)\, d(\mathrm{Im}\,\Psi)$, the measure becomes $\rho\, d\rho\, d\phi$. Integration over $\phi$ is trivial and gives a factor of $2\pi$; to calculate the remaining integral over $\rho$, substitute $u=\rho^2$ and use integration by parts and induction.
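A numerical sanity check of the target identity (my own script; it integrates in Cartesian coordinates with $\Psi = x + iy$, over a cutoff square large enough that the Gaussian tail is negligible):

```python
import numpy as np
from math import factorial, pi
from scipy.integrate import dblquad

def gaussian_moment(n):
    """Integral of |Psi|^(2n) exp(-|Psi|^2) over the complex plane,
    with Psi = x + i*y, truncated to the square [-8, 8]^2."""
    f = lambda y, x: (x**2 + y**2)**n * np.exp(-(x**2 + y**2))
    val, _ = dblquad(f, -8, 8, lambda x: -8, lambda x: 8)
    return val

for n in range(4):
    print(n, gaussian_moment(n), pi * factorial(n))  # the two columns agree
```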
{ "domain": "physics.stackexchange", "id": 86249, "tags": "quantum-mechanics, homework-and-exercises, integration, coherent-states" }
tf frames in different time epoch from ros time?
Question: I am trying to tackle the following warning when running Navigation: [ WARN] [1368081077.201722266]: Could not get robot pose, cancelling reconfiguration [ WARN] [1368081077.401579990]: Costmap2DROS transform timeout. Current time: 1368081077.4015, global_pose stamp: 136.4000, tolerance: 0.3000 The transform timed out because the time & the pose stamp are extremely different. Are they in different time epochs? If so, how can that be solved? Thanks, Ernest Originally posted by Ernest on ROS Answers with karma: 341 on 2013-05-08 Post score: 2 Answer: You're probably running Gazebo and missing the use_sim_time parameter. Try the following: roscore rosparam set use_sim_time true Now start launching the rest of your system. The explanation is that the navigation stack was probably launched before the use_sim_time parameter was set (so it uses wall time: 1368081077.4015), and the thing publishing the TF transforms was launched afterwards (so it uses sim time: 136.4000). Originally posted by Martin Günther with karma: 11816 on 2013-05-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Ernest on 2013-05-09: spot on! that solved the problem. Thank you very much.
{ "domain": "robotics.stackexchange", "id": 14121, "tags": "navigation, transform" }
Inexact differential-path function
Question: From Wikipedia: an inexact differential cannot be expressed in terms of its antiderivative for the purpose of integral calculations. What's the mathematical reason that an inexact differential cannot be expressed in terms of its antiderivative? Not enough information? Too complicated? $\mathrm{d}U$ is an exact differential. If you integrate $\mathrm{d}U$, what will you get then? Answer: I think your answer comes from the part of the sentence that you left off the quote from Wikipedia: ... i.e. its value can't be inferred by looking just at the initial and final states of the given system. Inexact differentials are the result of integrating non-state functions (functions which aren't path-independent). The fundamental theorem of calculus, which allows the integration of a function over a range by evaluating the antiderivative at the end points of that range, requires path independence in order to hold. So it won't work for inexact differentials, which are path-dependent. As for integrating $\mathrm{d}U$: $U$ is a state function, so you can apply the fundamental theorem and solve the integral using the antiderivative at the end points, giving $\Delta U = U_{\text{final}} - U_{\text{initial}}$.
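The path-dependence argument can be made concrete with a small symbolic computation (my own sketch): integrate the exact differential $d(xy) = y\,dx + x\,dy$ and the inexact differential $y\,dx$ along two different paths from $(0,0)$ to $(1,1)$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

def line_integral(P, Q, x_t, y_t):
    """Integrate P dx + Q dy along the path (x(t), y(t)), t from 0 to 1."""
    integrand = (P.subs({x: x_t, y: y_t}) * sp.diff(x_t, t)
                 + Q.subs({x: x_t, y: y_t}) * sp.diff(y_t, t))
    return sp.integrate(integrand, (t, 0, 1))

# Exact differential d(xy): both paths give the endpoint value of xy, i.e. 1.
exact = [line_integral(y, x, t, t), line_integral(y, x, t, t**2)]

# Inexact differential y dx: the value depends on the path taken.
inexact = [line_integral(y, sp.Integer(0), t, t), line_integral(y, sp.Integer(0), t, t**2)]

print(exact)    # [1, 1]
print(inexact)  # [1/2, 1/3]
```

The exact differential integrates to the difference of its state function xy at the endpoints regardless of path, exactly as the fundamental theorem promises; the inexact one has no such state function, so its integral remembers the path.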
{ "domain": "chemistry.stackexchange", "id": 11168, "tags": "thermodynamics" }
Can Homotopy Type Theory be used to derive more efficient algorithms on more efficient data representations from less efficient ones?
Question: I've read here that in HoTT, compilers could swap out less efficient representations of data for more efficient ones, and I'm wondering whether my interpretation of that statement is correct. Say we have two different ways of representing the natural numbers, unary (zero and successor) and binary. Here is a function that checks evenness on the former representation:

even : UnaryNat -> Bool
even zero = true
even (succ zero) = false
even (succ (succ n)) = even n

If we then have an isomorphism between the unary and binary representations, we trivially get an evenness function for the binary representation "for free", simply by converting a given binary natural number to a unary one, applying the even function, and converting the result back to the binary representation. Obviously, this is not very efficient, and we also don't need HoTT for this. A better way to check whether a binary natural number is even would be to check if its least significant digit is a zero. My question is: Could we derive this more efficient algorithm for binary natural numbers from our definition of evenness for unary natural numbers using HoTT? If so, would this also be possible for other data types? I haven't studied any HoTT yet, and since it appears to be a pretty complex subject, I would like to find out whether it's as exciting as I think it is. Thanks! Answer: You ask if we could derive a more efficient way to compute even? Yes, we could of course. The point, however, is that compilers could not. Having a compiler automatically perform very fancy optimization techniques is a hard problem. In fact, if you ask for too much, the problem may become undecidable, and for almost all interesting cases it is at least extremely hard. Even just asking whether two simple types are isomorphic quickly leads to open research questions.
What we have gained from HoTT is not a magic wand, but rather a formalism that provides a very good and structured way of thinking and reasoning about isomorphism, equivalence and equality in general. One can think easily enough about a "small scale" notion of equivalence, for example when we focus just on one particular data structure, such as a dictionary, but it's a whole other matter to be able to talk about all notions of equivalence at the same time. In your particular case it is easy to see what it would mean to have an equivalence between two implementations of natural numbers. But how about equivalence of large software components? Or equivalence of implementations of partial differential equation solvers? How do we even begin to think about it? HoTT has an answer.
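To make the question's contrast concrete, here is a small Python sketch of the two representations and the two evenness checks (the encodings and helper names are my own, purely illustrative — no HoTT machinery is involved):

```python
# Unary naturals encoded as nesting depth: zero = (), succ n = (n,)
def unary(n):
    u = ()
    for _ in range(n):
        u = (u,)
    return u

def even_unary(u):
    """The question's recursive definition: O(n) recursive steps."""
    if u == ():                   # even zero = true
        return True
    if u[0] == ():                # even (succ zero) = false
        return False
    return even_unary(u[0][0])    # even (succ (succ n)) = even n

def even_binary(bits):
    """The 'derived' efficient check on a binary numeral: O(1)."""
    return bits.endswith('0')

def even_binary_via_unary(bits):
    """Transport along the binary<->unary isomorphism: correct but slow."""
    return even_unary(unary(int(bits, 2)))
```

Both versions agree on every input; the question is whether the O(1) version can be *derived* automatically rather than hand-written.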
{ "domain": "cs.stackexchange", "id": 16395, "tags": "optimization, homotopy-type-theory" }
Transparency and visibility of light in tyndall effect
Question: Oil and water are both transparent; however, they lose their transparency once they are mixed together. What is the reason for this? The size of the molecules is still the same, so why does the mixture become cloudy? Answer: Although the size of the molecules is still the same, the oil (nonpolar) cannot dissolve into the water (polar), so oil drops form inside the water, and these drops are much larger than the molecules themselves. Those drops (an emulsion) scatter light and cause the Tyndall effect.
{ "domain": "physics.stackexchange", "id": 55715, "tags": "visible-light, molecules" }
Why does a wetted cloth get structurally reinforced while paper doesn't?
Question: In the Hollywood movie Shanghai Knights (pardon me for the non-scientific citation), Jackie Chan states that 'wet cloth' does not break easily. How true is this? We know from common experience that paper tends to tear easily when wet. So what is the difference in the breaking stress of wet paper vs wet cloth? What is happening here at the molecular level? Does wetness induce structural strain due to cohesive forces which tends to degrade the fibre strength of paper but reinforce that of cloth? Thanks. (I will be happy to edit out the movie reference in case it violates any SE clause; I merely wanted to bring some context into the question.) Answer: From what I've learnt, this relative strengthening of cloth when wet, and weakening of paper, is due to two factors. First up, the cloth: look up the fibre structure of cloth and you will notice that cloth fibres are uniformly but relatively loosely interlocked, and the addition of water causes further attraction between them via hydrogen bonding. Paper, by contrast, has a more tightly interlocked fibre structure, but when water is added to it, water molecules tend to displace some of the interlocked fibres. Even though hydrogen bonding takes place, the loss of the initial inter-fibre bonds is far too great to be compensated for by hydrogen bonding. Hence paper becomes relatively weak in terms of its structural strength.
{ "domain": "physics.stackexchange", "id": 30790, "tags": "newtonian-mechanics, everyday-life, material-science, stress-strain" }
Do electrons in multi-electron atoms really have definite angular momenta?
Question: Since the mutual repulsion term between electrons orbiting the same nucleus does not commute with either electron's angular momentum operator (but only with their sum), I'd assume that the electrons don't really have a well-defined angular momentum (i.e., they do not occupy a pure $\left|lm\right>$ state). I would assume that the actual wavefunction is dominated by one such state compared to others, so it is approximately pure, but is there really a point in enumerating electrons according to their momenta, like the s, p, d, f and so on sub-shells? Answer: It's not often that dmckee and I differ (mainly because he's usually right :-) but we differ on this one. Or at least we differ if I've correctly understood what you're asking. In a hydrogen atom the 1s, 2s, etc wavefunctions are (subject to various approximations) good descriptions of the single electron and have well defined angular momenta. In multielectron atoms it's convenient to think of electrons populating successive 1s, 2s, etc levels, but this is only a conceptual model and not an accurate representation. You're quite correct that while there is a well defined angular momentum for the whole atom, you cannot define the angular momentum of individual electrons. In the old days (maybe it's still done) we'd calculate atomic structure using a Hartree-Fock method with individual electron wavefunctions as the basis, and as dmckee points out, atoms have spectral lines that can often be approximately thought of as exciting a specific electron between individual electron wavefunctions. However what you're really doing is labelling the whole atom as an $l,m$ state and not an individual electron.
{ "domain": "physics.stackexchange", "id": 6128, "tags": "quantum-mechanics, electrons, physical-chemistry, atoms, atomic-physics" }
Fourier Transform of exponential
Question: I am working through Example 4.1 of Signals and Systems by Alan Oppenheim. Example 4.1 is: $$ x(t)=e^{-at}u(t), a>0$$ and the transform I get is: $$ X(j\omega)=\frac{1}{a+j\omega}, a>0$$ The problem is understanding the sketch: the sketch is drawn for both the values $+a$ and $-a$. I don't understand why it shows $-a$ when $a>0$ is defined. Answer: The plot is of $$\mid X\left(j\omega\right) \mid = \sqrt{\left(\frac{1}{a+j\omega}\right)\left(\frac{1}{a-j\omega}\right)} = \frac{1}{\sqrt{a^2 + \omega^2}}$$ against $\omega$. In particular $\omega$ can be equal to $-a$. This checks out with Wolfram Alpha
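As a numerical sanity check (a sketch of mine, not from the book), the magnitude of $X(j\omega)$ is an even function of $\omega$, which is why the sketch naturally extends to negative frequencies such as $\omega=-a$:

```python
import math

def X_mag(w, a):
    """|X(jw)| for X(jw) = 1/(a + jw)."""
    return abs(1 / (a + 1j * w))
```

At $\omega=\pm a$ the magnitude has dropped to $1/(a\sqrt{2})$ — the half-power points, which is why marking $\pm a$ on the frequency axis is useful.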
{ "domain": "dsp.stackexchange", "id": 3308, "tags": "fourier-transform, transform, fourier" }
Why do stars flicker?
Question: Why do stars flicker and planets don't? At least this is what I've read online and seen on the night sky. I've heard that it has to do something with the fact that stars emit light and planets reflect it. But I don't get it, isn't this light, just "light"? What happens to the reflected light that it doesn't flicker anymore? I was thinking that it has to do something with Earth's atmosphere, different temperatures or something (if this has any role at all). Answer: Here is a nice answer, taken from http://www.enchantedlearning.com/subjects/astronomy/stars/twinkle.shtml The scientific name for the twinkling of stars is stellar scintillation (or astronomical scintillation). Stars twinkle when we see them from the Earth's surface because we are viewing them through thick layers of turbulent (moving) air in the Earth's atmosphere. Stars (except for the Sun) appear as tiny dots in the sky; as their light travels through the many layers of the Earth's atmosphere, the light of the star is bent (refracted) many times and in random directions (light is bent when it hits a change in density - like a pocket of cold air or hot air). This random refraction results in the star winking out (it looks as though the star moves a bit, and our eye interprets this as twinkling). Stars closer to the horizon appear to twinkle more than stars that are overhead - this is because the light of stars near the horizon has to travel through more air than the light of stars overhead and so is subject to more refraction. Also, planets do not usually twinkle, because they are so close to us; they appear big enough that the twinkling is not noticeable (except when the air is extremely turbulent). Stars would not appear to twinkle if we viewed them from outer space (or from a planet/moon that didn't have an atmosphere).
{ "domain": "physics.stackexchange", "id": 55178, "tags": "visible-light, stars, observational-astronomy" }
Relation between "syntax" and "grammar" in CS
Question: I am sure that "grammar" and "syntax" are two different things in CS; e.g., the syntax of the Java language is defined by a context-free grammar. My questions are: What is the difference between the definitions of "grammar" and "syntax" in CS? What is the relation between them, and can we describe it using set theory? Answer: Your quote has the following operational meaning for syntax in the context of programming languages: The syntax of a programming language is the set of all syntactically valid programs. The syntax only describes what the valid programs look like; semantics gives them meaning – tells you how to execute them. The set of all valid programs is an example of a formal language; a formal language is just a collection of strings. There are many ways of describing formal languages. One of them is through formal grammars, which you can think of as a type of formula that describes a formal language. Context-free grammars are a restricted type of formal grammars. As such, they cannot describe all formal languages, but only context-free languages. (General grammars also describe only some of the formal languages; this is due to the fact that there are countably many grammars but uncountably many formal languages.) Summarizing: A grammar is one way to specify a formal language. Finally, let me mention that the set of syntactically valid Java programs actually cannot be described by a context-free grammar; only some features can be described. Context-free grammars cannot enforce, for example, that a variable not be defined twice in the same context. However, once we abstract away such details (for example, by replacing variable names with generic placeholders), the resulting set of syntactically valid Java programs does lend itself (perhaps) to being captured by a context-free grammar. Parsing (in our context) is the process of reading the source code of a Java program and constructing its parse tree, which is its structure as seen from the perspective of the context-free grammar. 
The parse tree captures the semantics of the program in a form amenable to further manipulation in the compilation process. Abstracting away details such as variable names and values of constants relegates this information into the form of parameters, but doesn't obstruct the control structure of the program.
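As a toy illustration of "a grammar specifies a formal language" (my own example, not tied to Java), take the context-free grammar S → a S b | ε, whose language is { aⁿbⁿ : n ≥ 0 }; a membership test is a direct transcription of the two productions:

```python
def in_language(s):
    """Decide membership in L(G) for G: S -> 'a' S 'b' | epsilon."""
    if s == "":                                   # S -> epsilon
        return True
    if len(s) >= 2 and s[0] == "a" and s[-1] == "b":
        return in_language(s[1:-1])               # S -> 'a' S 'b'
    return False
```

The grammar is the finite description; the (infinite) set of strings accepted by `in_language` is the formal language it specifies.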
{ "domain": "cs.stackexchange", "id": 7221, "tags": "terminology, context-free, formal-grammars, programming-languages, syntax" }
Analysis of Dijkstra algorithm's (Lazy) running time
Question: I'm trying to figure out the running time for Dijkstra's algorithm. All the sources I have read say that the running time is O(E * log(E)) for a lazy implementation. But when we do the math we get O(E * (log(E) + E * log(E))). Since E isn't a constant, I don't see how someone could reduce this to O(E * log(E)). Are we analyzing this wrong, or is it possible to reduce it? while (!minPQ.isEmpty()) { <=== O(E) Node min = minPQ.poll(); <=== O(log(E)) for (Edge edge : graph.adj(min)) { <=== O(E) if (min.getId() == target.getId()) { // Source and Target = Same edge if (edgeTo.size() == 0) edgeTo.put(target, edge); return; } relax(edge, min, vehicle); <=== O(log(E)) (because of the add method on the PQ) } } Answer: First off, you can make some of the bounds a little tighter and replace some $E$s with $V$s. The while loop at the beginning will only run $O(|V|)$ iterations (you visit every node only once), and the for (Edge edge : graph.adj(min)) loop will run only $O(|V|)$ iterations at most (a node can have at most $O(|V|)$ adjacent edges). Same with the log factors, although in that case it doesn't matter as much since $O(\log |V|) = O(\log |E|)$ (if the graph is connected). Via simple multiplication this gives you $O(|V| \cdot (\log |V| + |V| \cdot \log |V|)) = O(|V|^2 \cdot \log |V|)$. In a dense graph, this is already the desired complexity, since a dense graph has $O(|V|^2) = O(|E|)$ edges. However in a sparse graph, e.g. when $O(|E|) = O(|V|)$, you can still do a lot better. The problem you are facing is that multiplying the upper bounds can lead to overestimation. Look at the following example: for (i = 1 to N) { limit = N if i == 1 else 1 for (j = 1 to limit) { constant_work() } } The outer loop clearly runs $O(N)$ times, and the inner loop also runs $O(N)$ times (because in the worst case it does). You can say that in total the complexity is $O(N^2)$. But this is just an upper bound. Most of the time the inner function actually does almost no work. 
In reality if you count the number of times you run the function constant_work(), you will get $$N + 1 + 1 + \cdots + 1 = 2N - 1 = O(N)$$ $N$ iterations for i == 1 and otherwise only $1$ iteration. So the code runs in $O(N)$ time. The same effect happens when you loop over edges next to a node: for (Edge edge : graph.adj(min)). Yes, in the worst case you have $O(|V|)$ edges, but in a sparse graph, most of the time you have a lot less. You can count them from a different angle. If you fixate an edge $(u, v)$, how often will you touch that edge, and move into the body of the loop? Only twice! Once when min == u, and once when min == v. Therefore the inner part of the loop, with runtime $O(\log |V|)$, will run only $O(2 |E|) = O(|E|)$ times. Which means that the whole thing runs in $O(|E| \log |V|)$.
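The answer's counting argument for the nested-loop example can be checked directly (a sketch; constant_work() is replaced by a counter):

```python
def total_inner_iterations(N):
    """Count how often constant_work() runs in the answer's example."""
    count = 0
    for i in range(1, N + 1):
        limit = N if i == 1 else 1
        for _ in range(limit):
            count += 1    # stands in for constant_work()
    return count
```

N iterations for i == 1 plus one for each of the remaining N − 1 values of i gives 2N − 1 = O(N) total iterations, not the O(N²) the naive bound suggests.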
{ "domain": "cs.stackexchange", "id": 16267, "tags": "algorithms, big-o-notation, dijkstras-algorithm" }
Search object literal and return first key/value pair that matches regex
Question: This does what I want: function test(data){ for(var key in data){ if(/_is$/.test(key)){ var obj = {} obj[key] = data[key]; return obj; } } } test( {a: 'a', user_id_is: '1', b: 'b'} ) > Object {user_id_is: "1"} But 8 lines of code for such a simple task looks inefficient. Is there a more concise way to achieve such a basic task in JavaScript, something like how Ruby does it? data.detect {|item| item =~ /_is$/} Answer: To be fair, the entire thing in Ruby would be a few more method calls: {:a => 1, :b => 2}.detect { |i| i != :b }.each_slice(2).to_h Can we do this in a similar number of lines in JavaScript? First, we can shorten the 8 line original down to 5 lines if we use ES2015 syntax: function test(data) { for (const key of Object.keys(data)) { if(/_is$/.test(key)) return {[key]: data[key]}; } } (note that for...of iterates arrays and other iterables, not plain objects, hence the Object.keys call). Adding in ES2015 methods we can use reduce and an arrow function to get rid of the return at the cost of making this always loop over all the keys in the object (now down to 3 lines): const test = data => Object.keys(data).reduce((obj, key) => obj || (/_is$/.test(key) ? {[key]: data[key]} : undefined), undefined ); (the parentheses around the conditional matter: || binds more tightly than ?:, so without them an already-found obj would be replaced by later matching keys). If we're willing to use ES2017, we can use Object.entries and a helper function to get the line count down to 2 at the expense of a little legibility: const toObj = ([key, value] = []) => key ? {[key]: value} : undefined; const test = data => toObj(Object.entries(data).find(([key, value]) => /_is$/.test(key)));
{ "domain": "codereview.stackexchange", "id": 20268, "tags": "javascript, regex" }
Why are sound waves adiabatic?
Question: I want to know why we can treat sound waves as an adiabatic process. Precisely, I know that pressure and density vibrations occur so fast that molecules have no time to exchange energy (I might be wrong). But I would like a deeper explanation, not using a mathematical argument, but maybe a physical and numerical one (I haven't found any useful data to help me argue this fact). Answer: For starters, to quote Allan Pierce in Acoustics, The often stated explanation, that oscillations in a sound wave are too rapid to allow appreciable conduction of heat, is wrong. That one surprised me when I learned it myself. In fact, sound is not an adiabatic process for all frequencies. For any medium there is a thermal conduction frequency, $$ f_{\mathrm{TC}} =\frac{\rho c_{p} c^{2}}{2 \pi \kappa}. $$ Frequencies much lower than this value will be well-approximated as adiabatic. However, increasing the frequency through and above this point will transition the process from adiabatic to isothermal. For air, this frequency is $\sim 10^{9} \, \mathrm{Hz}$, well above the range of human hearing, so we almost always treat sound as adiabatic. The physical reason this occurs is that heat transfer due to conduction is proportional to the temperature gradient. This is just a statement of Fourier's law for heat conduction. Consider what happens as the frequency of a harmonic wave decreases: The wavelength increases, and the slope of the oscillating waveform decreases as it is "stretched out." Assuming equal amplitudes, lower frequency waves will therefore set up smaller temperature gradients, which will conduct heat less effectively. If the heat conduction is negligible, then the entropy is conserved by the process. So, in summary, the thermal gradients set up by sound waves for typical frequencies of interest are small enough to be neglected, hence sound is a (very nearly) adiabatic process. 
However, as Thomas pointed out below, in reality frequencies that cross into the potentially-isothermal regime are almost always affected by attenuation first, and the principal effects from conduction and viscosity are actually to damp out the sound wave. In case you decide you do want to see some math, the energy equation is $$ \rho T \frac{d s}{d t} = \kappa \nabla^{2} T.$$ The previous arguments can be seen mathematically by linearizing about a quiescent base state and assuming harmonic wave solutions for $s$ and $T$. The equation can be rewritten as $$ - i \omega \rho_{0} T_{0} \hat{s} = - \kappa \frac{\omega^{2}}{c^{2}} \hat{T}, $$ $$ \hat{s} = - i \frac{\kappa \omega}{\rho_{0} T_{0} c^{2}} \hat{T}. $$ As the angular frequency $\omega = 2 \pi f \rightarrow 0,$ so must the amplitude of the entropy oscillation, $\hat{s}$. As with the quote at the beginning, much of my answer draws from Acoustics by Allan Pierce.
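Plugging rough room-temperature values for air into the thermal conduction frequency (the property values below are my own approximations, not from the answer) reproduces the quoted $\sim 10^{9}\,\mathrm{Hz}$:

```python
import math

def thermal_conduction_frequency(rho, c_p, c, kappa):
    """f_TC = rho * c_p * c^2 / (2 * pi * kappa)."""
    return rho * c_p * c**2 / (2 * math.pi * kappa)

# Approximate properties of air near 20 C (assumed values):
f_tc_air = thermal_conduction_frequency(
    rho=1.2,      # density, kg/m^3
    c_p=1005.0,   # isobaric specific heat, J/(kg K)
    c=343.0,      # speed of sound, m/s
    kappa=0.026,  # thermal conductivity, W/(m K)
)
```

This lands near $9 \times 10^{8}\,\mathrm{Hz}$, several orders of magnitude above the $\sim 2 \times 10^{4}\,\mathrm{Hz}$ upper limit of human hearing, so audible sound sits deep in the adiabatic regime.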
{ "domain": "physics.stackexchange", "id": 29380, "tags": "thermodynamics, waves, acoustics, adiabatic" }
I would like to open a ticket on the robot-state-publisher
Question: I would like to open a ticket on the robot-state-publisher. But there is no link on the wiki page, nor do I see how to do this on the GitHub source page. Can someone please tell me how to do this. Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-09-28 Post score: 0 Answer: GitHub shows a link to Issues at the top right. Originally posted by dornhege with karma: 31395 on 2013-09-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by rnunziata on 2013-09-28: was not sure if issues was the place for tickets, i.e. to report bugs. thanks
{ "domain": "robotics.stackexchange", "id": 15694, "tags": "ros" }
Why would we need to ground an AC source
Question: I'm new to this field, hence this weird question. Why would we need to ground an AC source? Why wouldn't it be enough to have just one pole to get an AC current going? I understand why it wouldn't work in the DC case, where current is flowing in one direction. However, in the case of an AC source, where the current is not flowing anywhere but rather just oscillating back and forth, it's not clear to me why connecting the load to only one pole wouldn't work. Thanks. Answer: There are two related issues here, and I'm not sure which is your actual question. In household AC circuits with three-wire cables, the ground is primarily a safety system. Somewhere near where the utility power reaches your electric meter, your ground wire is actually connected to ... the ground, via a stake in the dirt or a connection to your plumbing. The earth can source or sink a great deal of charge without harm, and so a short between a live wire and a grounded case is much less dangerous than a short between a live wire and a loved one. If you look closely at wire runs through metal conduit (usually outdoors at home, more common in industrial settings) you should find that both ends of the conduit are connected to the ground wire; this shields the power cable somewhat against acting like a radio antenna. You can absolutely operate an AC circuit without a ground wire; that was generally how systems were installed in the several decades before grounded systems became standard, and you still find them sometimes in old houses. I think it's more likely that you're asking about whether you can operate an AC circuit without a return path for the current, since the current is just sloshing around. That's possible too, though the details for the circuit are a little different. Here's a photo of one:
The electrical energy is carried not by the charges, but by the fields created by the moving charges, and the energy flow goes in the direction determined by the Poynting vector. This is even the case for a DC circuit, which has this sort of field between its "source" and "return" lines. What would a one-wire, contact transmission system be like? The voltages in the one wire would have to be regulated relative to ground (not the ground wire, since we've eliminated that, but relative to a stake in the dirt somewhere). Essentially you'd be using the earth as a "return line." Since the return flow would presumably spread out over a large volume under the earth, you'd effectively have a transmission line with a very large gap between the source and return currents, which would give a tiny capacitance and a huge impedance. It could be made to work, but it would be much more finicky and unpredictable than the system we have now.
{ "domain": "physics.stackexchange", "id": 14538, "tags": "electric-circuits" }
What are the tallest hydrothermal vents in the world?
Question: Hydrothermal vents are commonly found near active volcanic areas and provide a rich and complex environment for many biological organisms (archaea, bacteria, tube worms...). Many new hydrothermal vent systems are being discovered and many more are likely to be discovered in the future. My questions are: which hydrothermal vent system is currently considered to be the tallest? What contributes to making that system larger? How old is the vent system? Are there any peculiar organisms associated with the system that haven't been found anywhere else? Answer: The tallest hydrothermal vent is the 60 m (about 200 ft) tall carbonate Poseidon vent, among the Lost City hydrothermal vent system on top of the ~4 km high Atlantis Massif, located 20 km west of the Mid-Atlantic Ridge at 30°N. The vents emit pH 9-11 metal-poor fluids at about 90°C. The composition of the hydrothermal vents is aragonite, calcite (both polymorphs of calcium carbonate, $\ce{CaCO3}$), and brucite ($\ce{Mg(OH)2}$), giving the vents their white colour; the pinnacle of the Poseidon vent is shown below: NOAA Ocean Explorer The Lost City hydrothermal vents are discussed in detail in the University of Washington's Lost City Research and NOAA's Formation of Carbonate Chimneys at the Lost City Hydrothermal Field page; from the websites, in answer to your questions: Age: presumed to be over 30,000 years old. What contributes to its height: Aside from the age of the vents, which allows them to build up, the vents are formed on a base of thick fossiliferous pelagic limestone, which forms an impervious lid trapping the hydrothermal fluids and geothermal heat. This limestone overlies a sedimentary breccia, which overlies a fractured and faulted basement of highly deformed serpentinites and exposed Mg-rich mantle rocks (peridotites). 
Biomass: It is believed that 58% of the fauna are endemic to the Lost City environment, including the Lost City Methanosarcinales inhabiting the anoxic interior zones of the vents, which are described in Methane- and Sulfur-Metabolizing Microbial Communities Dominate the Lost City Hydrothermal Field Ecosystem (Brazelton et al. 2006). Observations suggest that $\ce{CH4}$ and $\ce{S}$ cycling dominate the ecological processes in the Lost City vent environment.
{ "domain": "earthscience.stackexchange", "id": 358, "tags": "ocean, volcanology, geothermal-heat, hydrothermal-vents" }
Does the definition of stable system contradict itself?
Question: A system is said to be stable when any of its poles are <0. However I don't get why that is the case. Negative poles mean negative angular frequency, and negative angular frequency is equal to positive angular frequency, so the definition contradicts itself. What am I missing? Answer: A system is said to be stable when any of its poles are <0. I'm not sure who said that, but they're wrong or they've been misquoted. A Linear Time Invariant system that can be described by ordinary differential equations is stable if the real part of every blessed one of its poles is less than zero. This is because for any pole $a$ in a system transfer function, nearly any response of the system to an input will have an element of the response that is proportional to $e^{a t}$. For any $a$ such that $\mathcal R (a) < 0$, $\lim_{t \to \infty} e^{a t} = 0$. But for any $a$ such that $\mathcal R (a) > 0$, $\lim_{t \to \infty} \left | e^{a t} \right | = \infty$. So even one pole with a positive real part will cause the whole system to be unstable. Negative poles mean negative angular frequency That is incorrect. In the Laplace domain, any sinusoidal component to a signal is caused by components with $s = a + j\omega,\ \omega \ne 0$. Your "angular frequency" is the $j\omega$ part, and it is orthogonal to the real number line. and negative angular frequency is equal to positive angular frequency That is also incorrect. You are probably starting with $\cos j \omega = \cos -j \omega$ and jumping to a conclusion, bolstered by the fact that in a system with all real-valued gains, the poles and zeros will all be purely real or will occur in complex-conjugate pairs. However, while I've never seen it used in a control system, it is common in communications systems to convert a signal centered on a carrier down to baseband as an inphase/quadrature pair -- and an inphase/quadrature pair acts just like a complex number. 
In this context (and in some hypothetical control systems context where it makes sense to have complex-valued gains), negative frequencies do have meaning, and a negative frequency does not "equal" a positive frequency -- because $e^{j\omega t} \ne e^{-j \omega t}\ \forall \ t$.
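A quick numerical illustration (a sketch of mine, not from the answer): the envelope $|e^{at}|$ of a modal response depends only on the real part of the pole $a$; the imaginary part contributes only oscillation, not growth or decay.

```python
import cmath

def envelope(pole, t):
    """Magnitude of the modal response e^{pole * t} at time t."""
    return abs(cmath.exp(pole * t))

stable_pole   = complex(-0.5, 10.0)   # Re < 0: decays despite oscillating
unstable_pole = complex(0.5, 10.0)    # Re > 0: grows despite oscillating
```

A complex-conjugate pair gives exactly the same envelope, which is why only the sign of the real part decides stability, not the sign of the frequency.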
{ "domain": "dsp.stackexchange", "id": 11463, "tags": "linear-systems, transfer-function, laplace-transform" }
How to prove that this is NP complete?
Question: I'm trying to prove that if P = NP, then {⟨a, b, c⟩ : a + b = c} (addition over N) is NP-complete. I think I managed to prove that it is in NP, but I'm not sure what a good NP-complete problem to reduce from would be, or what algorithm to use. Any ideas? Answer: Your language is in P; try to figure out why. If P = NP, all non-trivial languages in P are NP-complete, and in particular this one.
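For reference, a decider for this language runs in time polynomial in the input length (school-book addition of n-digit numbers is O(n) work), which is what puts it in P — a sketch of the trivial decider:

```python
def in_lang(a, b, c):
    """Decide whether <a, b, c> is in {<a, b, c> : a + b = c} over N.
    Addition and comparison of n-digit naturals take poly(n) time,
    so this is a polynomial-time decider."""
    return a >= 0 and b >= 0 and a + b == c
```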
{ "domain": "cs.stackexchange", "id": 3669, "tags": "complexity-theory, formal-languages" }
Hydrogenation of pent-4-en-2-one
Question: If one equivalent of $\ce{H2}/\ce{Pt}$ is made to react with one equivalent of pent-4-en-2-one, what will be the product formed? Answer: You have not specified the full reaction conditions - temperature and pressure can make a considerable difference - however this quote from Chemistry Libre Texts here (in turn quoting Basic Principles of Organic Chemistry by Roberts & Caserio) gives us a clear indication of the relative ease of hydrogenation of ketones vs double bonds. Hydrogenation of aldehyde and ketone carbonyl groups is much slower than of carbon-carbon double bonds so more strenuous conditions are required. This is not surprising, because hydrogenation of carbonyl groups is calculated to be less exothermic than that of carbon-carbon double bonds....It follows that it is generally difficult to reduce a carbonyl group in the presence of a carbon-carbon double bond by hydrogenation without also saturating the double bond. Other reducing agents are more selective. So with a single equivalent of $\ce{H2}$ over $\ce{Pt}$, the carbon-carbon double bond is reduced preferentially and the expected product is pentan-2-one.
{ "domain": "chemistry.stackexchange", "id": 16077, "tags": "organic-reduction" }
Why do we get runny noses in the cold?
Question: The most annoying thing for me about being cold is a runny nose. Is there an advantage to having a runny nose when cold? What does having a runny nose achieve? Answer: There are two reasons for this: Nasal mucus helps warm inhaled air before it reaches the lungs. In cold weather, the mucus tends to dry out, so the membranes increase their production. At the same time, exhaled air is warmer than the surrounding air, so it contains more moisture than the outside air can hold. This moisture condenses around the tip of the nose. Explanation found here. So there's no particular advantage to getting a runny nose; it's just a normal reaction occurring in extreme conditions.
{ "domain": "biology.stackexchange", "id": 7629, "tags": "human-biology, physiology" }
Reference request: Path accuracy algorithm in the joint angle space
Question: I am currently reviewing a path accuracy algorithm. The measured data are points in the 7-dimensional joint space (the robot under test is a 7-axis robot, but this is not of importance for the question). As far as I know, path accuracy is measured and assessed in configuration (3D) space. Therefore I am wondering if a path accuracy definition in joint angle space has any practical value. Sure, if one looks at the joint angle space as a 7-dimensional vector space in the example (with a Euclidean distance measure), one can formally do the math. But this seems very odd to me. For instance, an angle discrepancy between measured and expected for the lowest axis is of much more significance than a discrepancy for an axis near the end effector. So here is my question: Can anyone point me to references where path accuracy in joint space and/or algorithms for its calculation are discussed? (I am not quite sure what tags to use. Sorry if I misused some.) Answer: The forward kinematics of the manipulator will correctly identify the larger displacements of the end effector for small rotations of the proximal joints, as opposed to the smaller displacements of the end effector for small rotations of the distal joints. When these motions are due to errors - all real mechanical systems have them - the established process for relating joint errors to task-space errors involves analytical perturbation analysis, and/or physical calibration of the system. A good starting point would be Siciliano and Khatib, Handbook of Robotics. Check out the end of Chapter 14 (they only hit upon the topic but the references will certainly help). You can also look at the papers which describe 3D sensors for robot calibration. Those papers frequently derive the perturbation analysis, then show how the new sensor allowed the end effector errors to be reduced after calibration. I recommend many of the editions of Lenarcic's Advances in Robot Kinematics. 
The 2000 edition with Stanisic has a paper by Khalil et al regarding calibration techniques. Or a web search will find many such papers, e.g., http://www.columbia.edu/~yly1/PDFs2/wu%20recursive.pdf http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=535825&fileOId=625590 http://math.loyola.edu/~mili/Calibration/index.html (follow the references in this one). Hope this helps.
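The point about proximal vs distal joint errors can be illustrated with a planar 2-link arm (a toy example of my own, not from the references above): the same small joint error displaces the end effector more when applied to the joint nearer the base.

```python
import math

def fk_2link(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: end-effector (x, y)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def ee_displacement(dq1, dq2, q=(0.3, 0.4)):
    """End-effector displacement caused by joint errors (dq1, dq2)."""
    x0, y0 = fk_2link(q[0], q[1])
    x1, y1 = fk_2link(q[0] + dq1, q[1] + dq2)
    return math.hypot(x1 - x0, y1 - y0)
```

With equal small errors, perturbing the base joint sweeps the whole arm while perturbing the elbow only sweeps the forearm — so a plain Euclidean metric in joint space weights both errors identically even though their task-space effects differ, which is exactly the questioner's objection.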
{ "domain": "robotics.stackexchange", "id": 966, "tags": "algorithm, industrial-robot, joint" }
libEGL.so missing while building melodic on ARM ubuntu 18.04
Question: Got the errors below after running: rosdep install --from-paths src --ignore-src --rosdistro melodic -y I am not sure how to fix or what is wrong: CMake Error at /usr/lib/aarch64-linux-gnu/cmake/Qt5Gui/Qt5GuiConfig.cmake:27 (message): The imported target "Qt5::Gui" references the file "/usr/lib/aarch64-linux-gnu/libEGL.so" but this file does not exist. Possible reasons include: * The file was deleted, renamed, or moved to another location. * An install or uninstall procedure did not complete successfully. * The installation package was faulty and contained "/usr/lib/aarch64-linux-gnu/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake" but not all the files it references. Call Stack (most recent call first): /usr/lib/aarch64-linux-gnu/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake:32 (_qt5_Gui_check_file_exists) /usr/lib/aarch64-linux-gnu/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake:54 (_qt5gui_find_extra_libs) /usr/lib/aarch64-linux-gnu/cmake/Qt5Gui/Qt5GuiConfig.cmake:184 (include) /usr/lib/aarch64-linux-gnu/cmake/Qt5Widgets/Qt5WidgetsConfig.cmake:101 (find_package) src/qt_gui_cpp/CMakeLists.txt:3 (find_package) -- Configuring incomplete, errors occurred! See also "/home/rock64/ros_catkin_ws/build_isolated/qt_gui_cpp/CMakeFiles/CMakeOutput.log". See also "/home/rock64/ros_catkin_ws/build_isolated/qt_gui_cpp/CMakeFiles/CMakeError.log". 
<== Failed to process package 'qt_gui_cpp': Command '['/home/rock64/ros_catkin_ws/install_isolated/env.sh', 'cmake', '/home/rock64/ros_catkin_ws/src/qt_gui_core/qt_gui_cpp', '-DCATKIN_DEVEL_PREFIX=/home/rock64/ros_catkin_ws/devel_isolated/qt_gui_cpp', '-DCMAKE_INSTALL_PREFIX=/home/rock64/ros_catkin_ws/install_isolated', '-DCMAKE_BUILD_TYPE=Release', '-G', 'Unix Makefiles']' returned non-zero exit status 1 Reproduce this error by running: ==> cd /home/rock64/ros_catkin_ws/build_isolated/qt_gui_cpp && /home/rock64/ros_catkin_ws/install_isolated/env.sh cmake /home/rock64/ros_catkin_ws/src/qt_gui_core/qt_gui_cpp -DCATKIN_DEVEL_PREFIX=/home/rock64/ros_catkin_ws/devel_isolated/qt_gui_cpp -DCMAKE_INSTALL_PREFIX=/home/rock64/ros_catkin_ws/install_isolated -DCMAKE_BUILD_TYPE=Release -G 'Unix Makefiles' Command failed, exiting. Originally posted by lukewd on ROS Answers with karma: 116 on 2018-11-22 Post score: 0 Answer: I resolved this by first going into the /usr/lib/aarch64-linux-gnu directory and observing there was a libEGL.so symlink to a nonexistent file in that same directory. There were a few other libEGL.so type files so I updated the symlink. In my case on a rockpro64 using Ubuntu 18.04 I did:
cd /usr/lib/aarch64-linux-gnu
sudo ln -s libEGL_mesa.so.0.0.0 libEGL.so
First I removed the bogus symlink, not sure if that was necessary or not. If you have another file similar but not identical to libEGL_mesa.so.0.0.0, I assume you would use that instead. After doing this the build completed normally. UPDATE: Trying this again with latest sources led to several more missing files, including: libglesv2.so -> I did a "sudo find / -name libGLESv2.so"; it was in a chromium-browser folder. I copied it to the expected location. Compiling ran out of memory while compiling pcl_ros/segmentation/extract_clusters.cpp.o. I added 4 GB of swap and set swappiness to 25. (Not sure if that was an optimal setting but it worked to get past that point).
For reference, I started with this linux image http://wiki.pine64.org/index.php/ROCKPro64_Software_Release#LXDE_Desktop_aarch64_.5BmicroSD_.2F_eMMC_Boot.5D_.5B0.7.9.5D Originally posted by lukewd with karma: 116 on 2018-11-22 This answer was ACCEPTED on the original site Post score: 0
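The symlink repair described above can be rehearsed safely in a scratch directory before touching the real one. The paths below are illustrative stand-ins (on the real board the directory is /usr/lib/aarch64-linux-gnu and the ln -s needs sudo); the steps are the same: remove the dangling link, then recreate it pointing at the library file that actually exists.

```python
import os
import tempfile

# Sandbox rehearsal of the fix, using throwaway files instead of system paths.
d = tempfile.mkdtemp()
os.chdir(d)
open("libEGL_mesa.so.0.0.0", "w").close()         # the library that really exists
os.symlink("no-such-target", "libEGL.so")         # reproduce the bogus dangling symlink
os.remove("libEGL.so")                            # step 1: remove the dangling link
os.symlink("libEGL_mesa.so.0.0.0", "libEGL.so")   # step 2: relink to the real file
print(os.readlink("libEGL.so"))                   # libEGL_mesa.so.0.0.0
```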
{ "domain": "robotics.stackexchange", "id": 32087, "tags": "ros-melodic" }
What Snake is this? (Found at my door in Raebareli, North India)
Question: Found this snake at my door in Raebareli, North India. We of course allowed it to go away peacefully. Could someone please identify it for me? Answer: It is the venomous Common Krait (Bungarus caeruleus), one of the Big Four Indian Snakes, responsible for the highest number of snake bites in India.
{ "domain": "biology.stackexchange", "id": 11456, "tags": "species-identification, species, herpetology, snake" }
Using sinusoids to represent sound waves
Question: Having some free time, I've begun a mini-project attempting to understand how music works in more detail, which has so far only involved music theory and a bit of set theory. I'm now trying to get my head around the physics of sound waves, and in particular, I'm trying to understand how any sound can be analysed as a sum of sine waves. I'm studying for a mathematics degree, so I'm familiar with Fourier transforms, trigonometric functions etc., but I've never really needed to apply them to sound waves in any detail. In this question I'm not really asking how the mathematics of Fourier transforms work, I'm more concerned with the specifics of how a sine (or cosine) function can model a sound wave. Apologies in advance for the long question, it was originally almost twice as long but I've condensed it as much as I could. I know that a sound wave is a longitudinal oscillation of particles in a medium, and that a wave might be generated by something moving back and forth repeatedly such as a guitar string, or a piece of cardboard inside a speaker. I've had a bit of trouble understanding how these oscillations are passed on, so as context to my questions I'm going to try and explain my current understanding of it (and hopefully this will make it easier to highlight where I've gone wrong). My conceptual understanding so far: (Having read this question) I'm imagining the first oscillation of a speaker membrane (i.e. a piece of cardboard) in air, but it could be a vocal cord, a guitar string etc. The membrane imparts a certain initial velocity to the particles it hits, but this velocity is quickly absorbed by the surrounding particles of air due to inefficient collisions occurring at angles different to the direction of wave propagation.
The nearby particles are pushed very close together, and due to the nature of gases/liquids, spread out, but this causes a gathering of particles somewhere further down the line, and the chain continues until the movement of particles reaches my ear. When the speaker membrane moves backwards, an area of space is freed up for the particles to move into; the particles begin to move in the reverse direction. Since this movement is not at all governed by the initial velocity imparted to the particles, the speed of sound is constant (within a given medium). If what I have said above is correct, then (as far as I can tell) to define the wavelength $\lambda$ we have to stop time for a moment, then measure the distance between two points where the particles are maximally dense. The frequency $f$ of the speaker membrane is defined to be how many times it oscillates back and forth per unit time, so that if the time taken for one oscillation is $T$, we have $f=\frac{1}{T}$. Clearly the speed of the wave will be $v=\frac{\lambda}{T}=f\lambda$. However we know $v$ is a constant, so increasing the frequency of the speaker membrane means decreasing the wavelength of the wave. Every time the membrane oscillates it sends another chain of dense particles on its way, so that the frequency of the membrane is equal to that of the wave. Question 1: Does there exist a velocity such that the membrane moves too quickly for the air particles? In order for the above model to work, the air must fill the gap left by the speaker membrane as it oscillates almost instantaneously, else the membrane will have no particles to move on its return journey. The speed of sound in air is $343$ $m/s$. If the membrane moves at say twice this speed, then it will have returned before the air particles have returned to their original positions. 
Obviously this puts no limit on the frequency, as you can make the velocity of the membrane arbitrarily small by reducing the distance it has to travel, but am I right in saying that this will occur? Question 2: In which cases can we model sound with a sine function, and why? This is my main problem; I don't understand how a sine function corresponds to a sound wave in reality. Suppose $f(x)=\sin(x)$ where $x$ is the axis of wave propagation. Does $\sin(x)$ (the amplitude) represent the displacement of the speaker membrane from its equilibrium position, or does it represent air pressure (for a given moment)? I'm not quite sure how we even know that air pressure varies in accordance with a sinusoid, I can't seem to find an explanation anywhere. Having googled this quite a lot, I find most courses either just state that the sinusoid will represent air pressure or some other quantity, or worse still they neglect to comment and don't label the axes etc. Also, the function $f(x)=\sin(x)$ is not time dependent, and only represents variation in one axis. If this is the case, how can it fully model the propagation of a wave in space. When you look at wave forms on programs like audacity, they only graph time and amplitude, so a simple pure tone would be a function like $A(t)=\sin(t)$. This function is completely position independent, so I'm assuming it is created by detecting pressure variations at a point and graphing it. It doesn't fully describe the wave though! So I'm wondering what kind of function will represent a pure tone in three dimensions? Answer: To expand on Xcheckr's answer: The full equation for a single-frequency traveling wave is $$f(x,t) = A \sin(2\pi ft - \frac{2\pi}{\lambda}x).$$ where $f$ is the frequency, $t$ is time, $\lambda$ is the wavelength, $A$ is the amplitude, and $x$ is position. This is often written as $$f(x,t) = A \sin(\omega t - kx)$$ with $\omega = 2\pi f$ and $k = \frac{2\pi}{\lambda}$. 
If you look at a single point in space (hold $x$ constant), you see that the signal oscillates up and down in time. If you freeze time (hold $t$ constant), you see the signal oscillates up and down as you move along it in space. If you pick a point on the wave and follow it as time goes forward (hold the phase $\omega t - kx$ constant as $t$ increases), you have to move in the positive $x$ direction to keep up with the point on the wave. This only describes a wave of a single frequency. In general, anything of the form $$f(x,t) = w(\omega t - kx),$$ where $w$ is any function, describes a traveling wave. Sinusoids turn up very often because the vibrating sources of the disturbances that give rise to sound waves are often well-described by $$\frac{\partial^2 s}{\partial t^2} = -a^2 s.$$ In this case, $s$ is the distance from some equilibrium position and $a$ is some constant. This describes the motion of a mass on a spring, which is a good model for guitar strings, speaker cones, drum membranes, saxophone reeds, vocal cords, and on and on. The general solution to that equation is $$s(t) = A\cos(a t) + B\sin(a t).$$ In this equation, one can see that $a$ is the frequency $\omega$ in the traveling wave equations by setting $x$ to a constant value (since the source isn't moving (unless you want to consider Doppler effects)). For objects more complicated than a mass on a spring, there are multiple $a$ values, so that object can vibrate at multiple frequencies at the same time (think harmonics on a guitar). Figuring out the contributions of each of these frequencies is the purpose of a Fourier transform.
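That co-moving picture can be verified numerically. The snippet below is a quick sanity check with values I chose for illustration (a 440 Hz tone at the speed of sound in air): shifting position by v*dt while advancing time by dt leaves f(x, t) = A sin(wt - kx) unchanged, i.e. the wave travels at the phase speed v = w/k = f*lambda.

```python
import numpy as np

# Traveling-wave sanity check: moving dx = (w/k)*dt in +x while time advances
# by dt leaves the waveform value unchanged, so the wave's speed is v = w/k.
A, f_hz = 1.0, 440.0            # amplitude and frequency (illustrative)
lam = 343.0 / 440.0             # wavelength of a 440 Hz tone in air (v = 343 m/s)
w = 2 * np.pi * f_hz            # angular frequency
k = 2 * np.pi / lam             # wavenumber
v = w / k                       # phase speed, should equal f*lam = 343 m/s

def wave(x, t):
    return A * np.sin(w * t - k * x)

x = np.linspace(0.0, 2.0, 1000)
t, dt = 0.003, 1e-4
print(np.allclose(wave(x + v * dt, t + dt), wave(x, t)))  # True
```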
{ "domain": "physics.stackexchange", "id": 75328, "tags": "waves, acoustics, shock-waves" }
Angular momentum and torque of an oscillating cylindrical rod
Question: Imagine a homogeneous uniform cylinder whose cross section is not necessarily a circle (it can be an ellipse) but some other figure with area $A$. The cylinder oscillates like a see-saw around its midpoint so its angular velocity is $\dot{\theta}=\frac{d\theta}{dt}$, where $\theta$ is the angle between the cylinder and the horizontal. We have the following relations between its volume $V$, its density $\rho$, its length $L$ and the area $A$ of its cross section $$m=(m_{L}+m_{R})=\rho V=\rho LA$$ where $m_{R}$ is the mass of the right half of the cylinder and $m_{L}$ is the mass of the left half of the cylinder (left and right of the midpoint around which it is oscillating). In general $m_{R}\neq m_{L}$ since there is a slight asymmetry between the two sides of the cylinder. What are expressions for the angular momentum and torque of the gravity force in terms of $m_{L},m_{R},\dot{\theta},\rho$ and $A$ assuming the center of mass of each half is located approximately a quarter of the length $L$ of the cylinder? For the torque I get using the definition and assuming the center of mass at $L/4$, $$\tau=m_{L}g\cos\theta\frac{L}{4}-m_{R}g\cos\theta\frac{L}{4}=\frac{g\cos\theta (m_{L}^{2}-m_{R}^{2})}{4\rho A}$$ How would I compute the angular momentum? I assume $v=\dot{\theta}L/2$? Answer: Divide mentally the cylinder into two halves: the half to the left of the oscillation midpoint and the half to the right of the oscillation midpoint. Let's call the length of the left part of the cylinder $L_{l}$ and the length of the right part $L_{r}$. The total length $L=L_{r}+L_{l}$. We know that the mass of the left half is $$m_{l}=\rho V_{l}=\rho AL_{l}$$, hence $$L_{l}=\frac{m_{l}}{\rho A}\label{1}\tag{1}$$ Similarly for the right side $$m_{r}=\rho V_{r}=\rho AL_{r}$$ and $$L_{r}=\frac{m_{r}}{\rho A}\label{2}\tag{2}$$ $A$ denotes the area of the cross section of the cylinder (same on left and right halves of the cylinder).
The angular momentum is given by the product of the moment of inertia with the angular velocity of each cylinder half, so $$AM=I_{l}\dot{\theta}+I_{r}\dot{\theta}$$ Since $I_{r}=\frac{m_{r}L_{r}^{2}}{3}, I_{l}=\frac{m_{l}L_{l}^{2}}{3}$ the angular momentum becomes $$AM=\frac{m_{r}^{3}+m_{l}^{3}}{3\rho^{2} A^{2}}\dot{\theta}$$ where I used (\ref{1}) and (\ref{2}) to replace $L_{l},L_{r}$. To derive the total torque due to the gravity force, add the torque from the left half to the torque from the right half of the cylinder, taking into account that the gravity force acts at the midpoint. Since torque is $$\tau=m_{l}\vec{r_{l}}\times \vec{F}+m_{r}\vec{r_{r}}\times\vec{F}$$ we get $$\tau=g\cos\theta(m_{l}L_{l}/2-m_{r}L_{r}/2)=\frac{m_{l}^{2}-m_{r}^{2}}{2\rho A}g\cos\theta$$ where I assume the center of mass of each side of the cylinder lies at the middle of it so at $L_{r}/2$ or $L_{l}/2$ and hence $\vec{r_{r}}=L_{r}/2(\cos\theta,\sin\theta,0),\vec{r_{l}}=L_{l}/2(-\cos\theta,-\sin\theta,0)$ and then I used again (\ref{1}) and (\ref{2}).
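As a quick numeric cross-check (all values below chosen arbitrarily for illustration), substituting L = m/(rho*A) into I = m*L^2/3 and into the torque expression reproduces the closed forms derived above:

```python
import math

# Illustrative values; any positive numbers work for this identity check.
rho, A = 2.0, 0.5
m_l, m_r = 3.0, 4.0
L_l, L_r = m_l / (rho * A), m_r / (rho * A)   # eqs. (1) and (2)

# Moment of inertia: direct sum vs. the closed form (m_r^3 + m_l^3)/(3 rho^2 A^2)
I_total = m_l * L_l**2 / 3 + m_r * L_r**2 / 3
I_closed = (m_r**3 + m_l**3) / (3 * rho**2 * A**2)
print(abs(I_total - I_closed) < 1e-12)  # True

# Torque: direct lever-arm sum vs. the closed form (m_l^2 - m_r^2)/(2 rho A) g cos(theta)
g, theta = 9.81, 0.3
tau_direct = g * math.cos(theta) * (m_l * L_l / 2 - m_r * L_r / 2)
tau_closed = (m_l**2 - m_r**2) / (2 * rho * A) * g * math.cos(theta)
print(abs(tau_direct - tau_closed) < 1e-12)  # True
```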
{ "domain": "physics.stackexchange", "id": 33984, "tags": "homework-and-exercises, newtonian-mechanics, angular-momentum, rotational-dynamics, torque" }
How to calculate the state given by two qubits?
Question: Let's say two qubits are both in the $|+\rangle$ state. We need to find $a_1$, $a_2$, $a_3$, and $a_4$ in $|\phi\rangle = a_1|00\rangle + a_2|01\rangle + a_3|10\rangle + a_4|11\rangle$, how do we find these amplitudes? How do we do it in the general case, when each of the qubits is not necessarily in the $|+\rangle$ state, but in some $|?\rangle$ state? Answer: Start by writing out what the $|+\rangle$ state actually is: $$ |+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle) $$ So, two qubits in the state $|+\rangle$ are in the state $$ |+\rangle|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\otimes \frac{1}{\sqrt{2}}(|0\rangle+|1\rangle) $$ You then need to expand this out, making use of the distributivity of the tensor product, and match up with the specified form of $|\phi\rangle$. Since this sounds a bit like a homework problem, I'm not going to do that for you explicitly. (If you've tried something and got stuck, show us what you tried!) The general case is absolutely equivalent, you just replace $|+\rangle|+\rangle$ with something like $$ (b_0|0\rangle+b_1|1\rangle)\otimes (b_2|0\rangle+b_3|1\rangle) $$
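For the general case, the tensor-product expansion can be checked numerically with NumPy's Kronecker product. The amplitude values below are illustrative choices of my own, not from the question:

```python
import numpy as np

# Composing two single-qubit states into a two-qubit state via np.kron,
# which matches the ordering |00>, |01>, |10>, |11>.
b0, b1 = 0.6, 0.8                      # first qubit  b0|0> + b1|1>  (0.36 + 0.64 = 1)
b2, b3 = 1 / np.sqrt(2), 1 / np.sqrt(2)  # second qubit
q1 = np.array([b0, b1])
q2 = np.array([b2, b3])

phi = np.kron(q1, q2)   # amplitudes (a1, a2, a3, a4) = (b0*b2, b0*b3, b1*b2, b1*b3)
print(phi)
print(np.isclose(np.sum(np.abs(phi) ** 2), 1.0))  # product of normalized states stays normalized: True
```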
{ "domain": "quantumcomputing.stackexchange", "id": 192, "tags": "quantum-state" }
Do soundwaves produce heat?
Question: Sound dies down after a while. The energy emitted by the speakers must go somewhere. Where is this energy dissipated? Is it in heat? Theoretically, if I play music in a room, would it get hotter in the room? Answer: The sound wave energy is dissipated by viscous dissipation in the room air. This increases the temperature of the room air, but only extremely slightly. There just isn't enough energy in the sound waves in air to do much more.
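A back-of-envelope estimate (all numbers below are my own illustrative values, not from the answer) shows how slight: even a painfully loud 100 dB sound field carries so little acoustic energy density that fully thermalizing it would warm the air by only tens of nanokelvin.

```python
# Energy density of a 100 dB sound field vs. the heat capacity of air.
I = 1e-2            # W/m^2: intensity at 100 dB SPL (reference 1e-12 W/m^2)
c = 343.0           # m/s: speed of sound in air
u = I / c           # J/m^3: acoustic energy density of the traveling wave
cv_vol = 1.2 * 718.0  # J/(m^3 K): air density (kg/m^3) times specific heat c_v

dT = u / cv_vol     # temperature rise if all that energy became heat at once
print(f"{dT:.1e} K")  # on the order of 3e-08 K
```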
{ "domain": "physics.stackexchange", "id": 89299, "tags": "thermodynamics, energy, acoustics, air, vibrations" }
Single author papers against my advisor's will?
Question: I am a third-year PhD student in an area of theoretical CS, and I would like advice about a difficult situation with my advisor. My advisor is not involved in my research projects at all. In particular, I have come up with all of my paper ideas, and have executed the papers alone. However, she always insists on adding her name as a co-author. This has started to increasingly bother me, as I work very hard (alone) on my research and believe I should get credit for that. In addition, she is a bully and treats me quite badly, so it makes it even harder for me to benefit her in this way. For my most recent paper, I brought up how I didn't believe she was meeting the IEEE or ACM guidelines for authorship, and told her that I believed I should be sole author on my paper. She agreed that she shouldn't be an author, although she was visibly angry. She said that I was a "weirdo" for doing this, and said that everybody already knows that advisors take credit for their students' work and that publishing with your advisor is the same as publishing alone. But most importantly, she told me that she would not approve my proposal/dissertation if I did not add her name to several more top-tier papers because then I "have no ties to the university" since I am not working with a professor, and therefore cannot receive my PhD. Obviously, I need a new advisor. However, there is really no one in my department in my research area. Switching research areas or departments is not an option. So the remaining options are the following: (1) Add her name to several more papers. I do not like this idea because it is unethical, and there is no guarantee that anything is even gained in this option. She could simply refuse to recommend me in the end after I got her a bunch of papers. (2) Ignore her threats, and force my way to finishing my PhD while publishing single author papers.
I do not believe she could stop me from graduating since I already have a decent publication record, and presumably will continue getting my work out. I have a fellowship, so she can't control my funding. Clearly, I will not have a letter of recommendation in this case. On the other hand, I will have a bunch of single author papers. (3) Try to convince a professor in an unrelated research area to be my advisor, emphasizing that I am independent and can do my work alone. There are a few theory professors in my dept, although they are totally different areas. I have no idea the chance of this working out. (4) Go to the department chair and tell him the whole story, ask what to do. What do you think I should do? Answer: As a department chair, I can say you aren't alone. These situations come up all too often. Please do reach out to your department chair, graduate program director or grad student ombudsperson if your institution has one. We want to know when our faculty are behaving badly and often we can help.
{ "domain": "cstheory.stackexchange", "id": 4688, "tags": "soft-question, advice-request" }
is FIND WORDS in P?
Question: FIND WORDS is the following decision problem: Given a list of words L and a matrix M, are all words in L also in M? The words in M can be written up to down, down to up, left to right, right to left, diagonal-left-up, diagonal-left-down, diagonal-right-up and diagonal-right-down. To be specific, this is the classic word-search game: FIND WORDS. Now this decision problem clearly is in NP because, given a certificate with the positions of the words in the matrix (indexes), a verifier can check it in polynomial time. My question is this: are we aware of Turing Machines that decide this language in polynomial time? Answer: Your language is in P. Suppose that the matrix is $n\times n$ and that the words have total length $\ell$. Each word can start at at most $n^2$ positions and be written in $O(1)$ many orientations, for a total of $O(n^2)$ possible placements. Checking each one costs at most $O(m)$, where $m$ is the length of the word. In total, we obtain an algorithm whose running time is $O(\ell n^2)$, which is quadratic in the input length. Using a trie or similar data structure, you can probably improve this to linear time $O(n^2 + \ell)$.
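A minimal sketch of the brute-force polynomial-time algorithm from the answer: try every start cell and each of the 8 orientations for every word. The sample grid is my own illustrative input.

```python
# The 8 orientations: up/down, left/right, and the four diagonals.
DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def word_in_matrix(M, w):
    """True if word w appears in grid M along any of the 8 directions."""
    n, m = len(M), len(M[0])
    for r in range(n):
        for c in range(m):
            for dr, dc in DIRS:
                rr, cc, k = r, c, 0
                while k < len(w) and 0 <= rr < n and 0 <= cc < m and M[rr][cc] == w[k]:
                    rr, cc, k = rr + dr, cc + dc, k + 1
                if k == len(w):
                    return True
    return False

def all_words_in_matrix(M, L):
    """The FIND WORDS decision problem: are all words of L in M?"""
    return all(word_in_matrix(M, w) for w in L)

M = ["cat",
     "oxo",
     "dgt"]
print(all_words_in_matrix(M, ["cat", "cod", "tot"]))  # True ("tot" reads bottom-up)
```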
{ "domain": "cs.stackexchange", "id": 14402, "tags": "np, decision-problem" }
Lyonization vs Genetic Imprinting
Question: Lyonization is the process in which there is inactivation of an X chromosome in females. This process is implicated in mosaic forms of Turner's syndrome (in this case the altered X chromosome is Lyonised, thus allowing the normal X chromosome to express itself). Is Lyonization considered a form of genetic imprinting? What is the difference between the two (if not)? Lyonization is a random process (to the extent of my knowledge, which X chromosome is inactivated is not predictable). Is genetic imprinting also random? Is there any known aberration of Lyonization in males? (meaning the single X chromosome or Y chromosome gets silenced) Answer: I use the term "X-inactivation" instead of "lyonization". Anyway, X-inactivation is a very specific process by which an entire X chromosome (or equivalent sex chromosome) in a female mammal is completely silenced. "Imprinting" is typically applied to single genes, wherein genes inherited from a specific parent are always epigenetically silenced. The genes are usually autosomal. Thus, we're talking about scales: an entire sex chromosome composed of thousands of genes (X-inactivation) vs. single genes (imprinting). The choice for which X-chromosome is inactivated is sporadic and different in different cells within the body. It is controlled by opposing expression of different non-coding RNAs (XIST on the inactive X and TSIX on the active X). Imprinting is very specific to the allele that is inherited from a specific parent (for example, some genes are paternally imprinted, while others can be maternally imprinted). Males with aberrant X-inactivation will lack any expression of genes carried on their single X-chromosome, a condition that will not result in a viable fetus. However, whether there are aberrant X-inactivations in single cells or tissues of ageing adult males... perhaps likely yes, given all the other aberrant somatic mutation events that occur over the course of life.
{ "domain": "biology.stackexchange", "id": 3504, "tags": "genetics, reproduction" }
What are the advantages of forgetting?
Question: How is forgetting things helpful for the brain or the human body, biologically? This web page After some moment of being rude, selfish, or weak, either we are able to put it behind us, or the person who suffered at the result of our imperfection moves on. The reason for this is our ability to forget about it. We forget not because we have an imperfect hippocampus (our brain’s memory organ); it's actually an evolved solution. The ability to lose information allows new information to come in that is more relevant, more pertinent to an ongoing reality. Forgetting allows us to update. and this Huffington post article According to a study in Nature, our awareness is limited to only three or four objects at any given time. To be able to think at your highest level, you therefore must be very efficient at filtering out all of the background noise: Your racing thoughts, the ringing phone, your neighbor’s barking dog, and the list goes on. The Nature study found that when participants were asked to “hold in mind” certain objects while ignoring others, there are significant variations in how well each of us can keep irrelevant objects out of our awareness. The researchers concluded that our memory capacity is therefore not simply about storage space, but rather “how efficiently irrelevant information is excluded from using up vital storage capacity.” provide some background. Answer: Short answer It has been shown that loss of long-term memories may enhance the retrieval of others. Short-term working memory is explicitly designed to be volatile and non-lasting. However, there are many other types of memories where memory loss may not be beneficial at all, or may even be outright debilitating, such as in the case of Alzheimer's or stroke. Background First of all, there are many types of memories, including sensory memory, motor memory, short-term (working) memory, long-term memory, explicit & implicit memory, declarative & procedural memory and so on.
Hence, because the question is quite broad, I will focus on long-term memory, short-term memory and sensory memory to discuss that memory loss can be beneficial, neutral, or detrimental. Beneficial effects of losing memories Long-term memory is probably what you are after, and there are studies in that field that have linked the loss of memories to enhanced processing of other memories. More specifically, there are adaptive benefits of forgetting, namely a reduced demand on cognitive controls during future acts of remembering other stored information. Even more specifically: retrieval of memories after forgetting others is thought to reduce the necessary engagement of functionally coupled cognitive control mechanisms that detect (anterior cingulate cortex) and resolve (dorsolateral and ventrolateral prefrontal cortex) mnemonic competition (Kuhl et al., 2007). The improvement of particular memory processes by forgetting others may be linked to them being closely related. Indeed, more remote motor tasks do not benefit much from forgetting unrelated ones (Shea & Right, 1991). Inherently volatile memory Short-term working memory is explicitly designed to aid in on-demand task performance. Short-term memory is for example used to remember a set of components (e.g., colors) and use that information to deal with a certain task at hand (which objects depicted here match the colors you just saw?). If all these memories were retained, tasks dependent on working memory would not be possible. Sensory memory is an ultra-short-term memory that is kept only for very short amounts of time, allowing people to, e.g., track a light and make a symbol or letter out of it before the information is funneled to the short-term memory.
Neutral effects of loss of neural function: a side track off memory lane However, forgetting may be simply another example of the use-it-or-lose-it principle that applies to pretty much everything in the human body; when you don't walk, the bones in the legs will weaken along with the musculature used for locomotion. Similarly, when the inner ear or the retina becomes dysfunctional and degenerates, the deafferented auditory nerve and optic nerve start to degenerate, respectively. The associated deafferented sensory cortices will slowly be taken over by other adjacent cortical areas due to the plasticity of the cortex. In blind folks, for example, the tactile and auditory cortices have been shown to take over the primary visual cortex. Given that the visual cortex is huge compared to the tactile and auditory cortex, one would expect a substantial increase in performance on tactile and acoustic tasks in blind folks. Yet, this is debated (Stronks et al., 2015). In fact, normally sighted folks can learn braille as well as their blind peers, provided they get an equal amount of practice. In other words, practice is the key, not enhanced areas of cortex being available per se. Hence, 'forgetting' to see or 'forgetting' to hear is, as far as my knowledge goes, not associated with any benefits whatsoever, barring a minority of studies that showed a slight benefit of being blind in auditory tasks (Stronks et al., 2015). Pathological memory loss - not so good: However, forgetting of memories may also be pathological; think of the impaired short-term memory of Alzheimer's patients, or amnesia due to stroke. Forgetting is not always beneficial. References - Kuhl et al., Nature Neurosci (2007); 10: 908-14 - Shea & Right, Res Quarterly Exercise Sport (1991); 62(3) - Stronks et al., Brain Res (2015); 1624: 140–52
{ "domain": "biology.stackexchange", "id": 6792, "tags": "neuroscience, neurophysiology, memory, cognition" }
Why is it always planets orbiting stars?
Question: In our solar system, there are 8 planets orbiting a star, the Sun. And I understand that there are about 500 confirmed solar systems out there. But why is it always planets orbiting stars? Why can't it be several stars orbiting a planet, or a star orbiting a star? Why is a star by definition stationary as opposed to planets, which are moving? Of course, in that case, it wouldn't make much sense to call it a solar system, but still. Answer: "Why is a star by definition stationary as opposed to planets, which are moving?" This just isn't true. Both star and planets orbit a point in the system known as the barycenter (a.k.a. the center of mass of the system). Because stars are much more massive, this barycenter is much closer to the center of the star than it is to the planets. Hence it appears to the casual observer that the less massive object orbits around the more massive object. In fact both objects orbit the barycenter. A diagram on the Wikipedia article on the barycenter shows the motion of the solar system barycenter with respect to the center of the Sun. Note that the barycenter is close to the center of the Sun, but often spends time outside the visible surface of the Sun. It follows this complicated path, mainly because of Jupiter, but all the other planets make smaller contributions too.
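A worked example with rounded values I am supplying here makes the "often outside the visible surface" point concrete: for a two-body Sun-Jupiter system, the barycenter distance from the Sun's center is r = a * m_jup / (m_sun + m_jup), which comes out just beyond the solar radius.

```python
# Sun-Jupiter barycenter, two-body approximation (rounded textbook values).
m_sun = 1.989e30        # kg
m_jup = 1.898e27        # kg
a = 7.785e11            # m, Jupiter's semi-major axis (about 5.2 AU)
r_sun_radius = 6.957e8  # m, solar radius

r_bary = a * m_jup / (m_sun + m_jup)   # barycenter distance from Sun's center
print(round(r_bary / r_sun_radius, 2))  # about 1.07: just outside the photosphere
```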
{ "domain": "astronomy.stackexchange", "id": 1257, "tags": "star, planet" }
I am studying fragmentation warheads and this formula of 'numbers of fragments hitting the target' is just not making sense
Question: What is parameter 'p' in the formula? If somebody knows a better source for studying this part, suggestions are welcome. Thanks a lot. Probable Number of Fragments Hitting the Target It can be proven that the fragments from a typical warhead are generally lethal at long range, far in excess of the lethal effects from blast weapons of equivalent size. Drag reduces the energy slowly. For example, fragments from a hand-grenade can be dangerous to a range of about 100 m. However, the likelihood of being struck by a fragment at 100 m is small. There are only so many fragments that are distributed in all directions. The average number striking a target will reduce proportionally to 1/R^2, where R is the range. We can express this in the following formula: N_hits = A(N_o/4pR^2) where: N_hits is the expected number of fragments hitting the target; N_o is the initial number of fragments from the warhead; A is the frontal area of the target presented to the warhead; and R is the range of the target to the warhead. You can follow the link and search for 'fragments hitting the target' here https://fas.org/man/dod-101/navy/docs/es310/dam_crit/dam_crit.htm Answer: The $p$ is a crude stand-in for $\pi$. The site probably existed before Greek letters were available in web fonts. The expected fraction of fragments that hit is simply a ratio of two areas: the target's frontal area $A$ divided by the area $4\pi R^2$ of the sphere over which the fragments have spread at range $R$.
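A quick worked example of N_hits = A * N_o / (4*pi*R^2), with numbers chosen for illustration rather than taken from the linked page:

```python
import math

# Expected fragment hits: target frontal area over the surface area of the
# sphere the fragments have spread across at range R.
N_o = 1000.0   # initial number of fragments (illustrative)
A = 0.5        # m^2, frontal area of the target (illustrative)
R = 20.0       # m, range

N_hits = A * N_o / (4 * math.pi * R**2)
print(round(N_hits, 3))  # 0.099 expected hits: a hit at this range is unlikely
```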
{ "domain": "physics.stackexchange", "id": 66887, "tags": "statistics" }
Signal-to-Noise Ratio (SNR) needed to discern two superimposed signals
Question: Say I am trying to measure an exponentially decaying signal ($S_y$) that constitutes a portion $\gamma$ (about 5%, so $\gamma=\frac{1}{20}$) of the total signal measured ($S=S_y+S_b$ and $S_b=S_0 \beta e^{\frac{-t}{C_b}}$ and $S_y=S_0 \gamma e^{\frac{-t}{C_y}}$), where $\gamma+\beta=1$ and $C_y$ is about $\frac{1}{4}$ of $C_b$. I want to determine the least Signal-to-Noise Ratio (SNR) needed to be able to discern that signal. The noise in our signal is additive white noise. Should the SNR be $>\gamma=\frac{1}{20}$ or is there a better, more formal way to decide the SNR limit? $$S(t)=S_0(\gamma e^{\frac{-t}{C_y}}+\beta e^{\frac{-t}{C_b}}) + n(t)$$ So my question is how do I determine the minimal SNR when I measure signal $S$ over time $t$, but the thing I am interested in is fluctuation (max 10%) in $S_y$. All constants are known to me, however I only know the mean of $C_y$, as it fluctuates and so $S_y$ does too. Thanks for any help. If I need to clarify any details please tell me and I will do so. Answer: This is a work-in-progress until more information is added to the question. First, I'm assuming that you know the magnitudes of your decaying exponentials, but you don't know their time constants (either $C_x$ or $C_y$) or your signal level $S_0$. Second, you say that there is Gaussian noise, but not whether it's additive. I'm going to assume it's additive. That means your signal is really: $$ S(t)=S_0(\frac{1}{20}e^{\frac{-t}{C_y}}+\frac{19}{20}e^{\frac{-t}{C_x}}) + n(t) $$ where $n(t)$ is additive, white, Gaussian noise with zero mean and variance $\sigma^2$.
Then, to set up the problem, we need to postulate the estimated values of $\hat{C}_x$, $\hat{C}_y$, $\hat{S}_0$, and $\sigma^2$ in $\hat{S}(t)$: $$ \hat{S}(t; \hat{C}_x, \hat{C}_y, \hat{S}_0)=\hat{S}_0(\frac{1}{20}e^{\frac{-t}{\hat{C}_y}}+\frac{19}{20}e^{\frac{-t}{\hat{C}_x}}) $$ Then a least squares approach to solving it would define the error, $E$, as: $$ E(\hat{C}_x, \hat{C}_y, \hat{S}_0) = \int_{I} \left| S(t) - \hat{S}(t)\right|^2 dt $$ where $I$ is your time interval of interest, and minimize this with respect to $\hat{C}_x$, $\hat{C}_y$, and $\hat{S}_0$. I interpret your question as asking: what value of $\frac{S_0^2}{\sigma^2}$ does this work over? The answer is: it depends on what you mean by "work". You can get an estimate for any noise level. The question is, what error can you tolerate in that estimate. Also, it will depend somewhat on your integration length ($I$). EDIT: OK, I see you've changed notation so it's $C_b$ and $C_y$ now. I've written a short Scilab script to try to see how things change with noise level. I've assumed EVERYTHING is precisely known except the $C_y$ parameter. The error in estimating the parameter versus the value of $\sigma$ is shown in the plot below. Code below generates it.
S0 = 1;
Cb = 201;
Cy = 100;
sigma = 0.1;   // initial value; overwritten by the loop over sigmaRange below
mb = 1/20;
my = 19/20;
T = 1000;
t = [0:T-1];
Sb = mb*exp(-t/Cb);
Sy = my*exp(-t/Cy);
S = S0*(Sb + Sy);   // noiseless signal, so the reference plot below is defined
clf
subplot(311)
plot(Sb)
plot(Sy,'g')
plot(S,'r')
ERRORS = [];
sigmaRange = [0.0 0.1 0.2 0.5 1 2 5 10];
for sigma = sigmaRange,
  CyhatRange = 90:.01:110;
  estimates = [];
  NRuns = 100;
  for n = 1:NRuns
    S = S0*(Sb + Sy) + sigma*rand(1,T,'normal');
    ERR = [];
    for Cyhat = CyhatRange,
      S_hat = S0*(Sb + my*exp(-t/Cyhat));
      ERR = [ERR; sum((S-S_hat).^2)];
    end
    [mx,ix] = min(ERR);
    CyhatEst = CyhatRange(ix);
    //disp(CyhatEst);
    estimates = [estimates; CyhatEst];
    subplot(312); plot(CyhatRange,ERR)
  end
  disp(mean((estimates-Cy).^2))
  ERRORS = [ERRORS; mean((estimates-Cy).^2)];
end
subplot(313)
plot(sigmaRange,ERRORS);
{ "domain": "dsp.stackexchange", "id": 1129, "tags": "homework, noise" }
Boundary Pressure of a system in the Grand Canonical Ensemble
Question: Consider a system of fixed volume $V$ in equilibrium with a reservoir of both heat and particles (hence we may describe the system using the Grand Canonical Ensemble). While I was trying to derive the "Density Fluctuations Relation" I stumbled upon the fact that I need to use the canonical expression for the boundary pressure (so that I could differentiate it twice by $\mu$): $$ P = - \left( \frac{\partial A}{\partial V} \right)_{N,T} $$ however, I noticed that if I don't specify the value of $N$ then I get an expression for $\sigma^2[N]$ which depends on $N$ itself, which is impossible (the variance of a random variable cannot depend on the random variable itself). When I asked my teacher about this fact he told me that the expression above should be evaluated at $\langle N \rangle$ because we are assuming the system to be at equilibrium and we are describing it using the Canonical Ensemble, so in the thermodynamic limit it contains $\langle N \rangle$ particles. I would like to know if this reasoning is correct or if there is something missing. I would also like to know if I am right in saying that books like Huang's are a little sloppy when dealing with these derivations (indeed, this specific derivation is carried out on pages 152-153, and Huang never mentions the problem; he just changes $N$ to its mean value $\bar N$ after "minor rewritings"). Any comment or answer is much appreciated, and let me know if I can explain myself more clearly! Answer: I agree that Kerson Huang's textbook sometimes has sloppy derivations. In this case, however, the book complicates a quite simple derivation. Indeed, his formula for the variance of the number variable $N$ is (formula 7.38 in the book) $$ \langle N^2 \rangle - \langle N \rangle^2 = z \frac{\partial }{\partial z} z \frac{\partial }{\partial z} \log {\mathcal Q }(z,V,T)= kTV \frac{\partial^2 P }{\partial \mu^2}. 
\tag{1} $$ where ${\mathcal Q }$ is the grand canonical partition function, $P$ the pressure, $T$ the temperature, $k$ Boltzmann's constant, $V$ the volume, $\mu$ the chemical potential, and $z$ the activity ($z=e^{\beta \mu}$). By using the expression for the average number of molecules (formula 7.36): $$ \langle N \rangle =z \frac{\partial }{\partial z} \log {\mathcal Q }(z,V,T) $$ formula $(1)$ can be written as $$ \langle N^2 \rangle - \langle N \rangle^2 = z \frac{\partial }{\partial z}\langle N \rangle= zV\frac{\partial }{\partial z} \rho = VkT \frac{\partial }{\partial \mu} \rho\tag{2} $$ where $\rho = \langle N \rangle /V$ and the partial derivatives are taken at constant $T$. The reduction of the partial derivative at the end of equation $(2)$ is immediate, without passing through the Helmholtz free energy, if one remembers that, as a consequence of the homogeneity of degree $1$ of the internal energy, we have the Gibbs-Duhem equation $$ d\mu = -s dT + \frac{1}{\rho} dP. $$ At constant $T$, it immediately gives us $$ \left. \frac{\partial \mu}{\partial \rho}\right|_T = \frac{1}{\rho}\left.\frac{\partial P}{\partial \rho}\right|_T $$ This is equivalent to formula $7.42$ in the book, which can easily be recast in terms of the isothermal compressibility. It is clear that, apart from a few formulas necessary for context, the actual derivation reduces to the last two equations.
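As a quick sanity check of formula $(2)$ (this toy model is mine, not from the book): for a lattice gas of $M$ independent sites, ${\mathcal Q}(z) = (1+z)^M$, each site is occupied with probability $p=z/(1+z)$, so $N$ is binomial and the fluctuation formula can be verified with finite differences:

```python
import math

# Toy check of  <N^2> - <N>^2 = z d<N>/dz  (formula (2)) for a lattice gas
# of M independent sites, where Q(z) = (1 + z)^M.  Each site is occupied
# with probability p = z/(1+z), so N is Binomial(M, p).  The numbers M and
# z below are arbitrary.
M, z = 10, 0.7

def logQ(zz):
    return M * math.log(1.0 + zz)

def mean_N(zz, h=1e-6):
    # <N> = z d(log Q)/dz, via a central difference
    return zz * (logQ(zz + h) - logQ(zz - h)) / (2 * h)

def var_N(zz, h=1e-4):
    # <N^2> - <N>^2 = z d<N>/dz, again via a central difference
    return zz * (mean_N(zz + h) - mean_N(zz - h)) / (2 * h)

p = z / (1.0 + z)
print(mean_N(z), M * p)               # both ~ M z/(1+z)
print(var_N(z), M * p * (1.0 - p))    # both ~ M z/(1+z)^2, binomial variance
```

The variance computed from derivatives of $\log {\mathcal Q}$ matches the direct binomial variance $Mp(1-p)$, as it must.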
{ "domain": "physics.stackexchange", "id": 98447, "tags": "thermodynamics, statistical-mechanics, probability" }
If galaxies are moving away from each other then why are the Milky Way and Andromeda galaxy coming towards each other?
Question: The Andromeda Galaxy is approaching the Milky Way at about 684,000 mi/hour, making it one of the few blueshifted galaxies. The Andromeda Galaxy and the Milky Way are thus expected to collide in roughly 3.75 to 4.5 billion years. Why are some galaxies moving away while our galaxy and Andromeda are coming towards each other? Answer: Here is my answer to a similar question posted on the physics stack exchange website. Hubble's law (the law that deals with the expansion of the universe) applies to the expansion of space itself, i.e., if two objects stationary with respect to each other, with no force between them, were left alone, the distance between them would increase with time because space itself is expanding. This is what Hubble's law addresses. In the case of the Milky Way and Andromeda galaxies (and all galaxies for that matter) there is a force between them: gravity. The gravitational force between the Milky Way and Andromeda galaxies has produced an acceleration that is causing the two galaxies to move towards each other faster than the space between them is expanding as calculated by Hubble's law. However, the vast majority of galaxies lie far enough away from the Milky Way that the gravitational force between us and them is small compared to the Hubble expansion, and Hubble's law dominates. In short, Hubble's law applies throughout the universe, but localized systems may have enough gravitational attraction between them that the gravitational effects dominate.
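The competition described above can be put in rough numbers. The values below are round illustrative figures (not from the question): $H_0 \approx 70$ km/s/Mpc, Andromeda at about 0.78 Mpc, and an approach speed of about 110 km/s relative to the Milky Way's center (the question's 684,000 mi/hour figure is the larger heliocentric value).

```python
# Back-of-envelope: Hubble recession at Andromeda's distance vs. its
# measured approach speed.  All numbers are rough, illustrative values.
H0 = 70.0             # km/s per Mpc (approximate Hubble constant)
d_andromeda = 0.78    # Mpc (approximate distance to M31)

v_hubble = H0 * d_andromeda   # recession speed if only expansion acted
v_approach = 110.0            # km/s, M31's approach speed w.r.t. our Galaxy

print(f"Hubble-flow velocity at M31's distance: {v_hubble:.0f} km/s")
print(f"Measured approach speed: {v_approach:.0f} km/s")
# Gravity-driven infall more than cancels the ~55 km/s expansion term,
# which is why M31 is one of the few blueshifted galaxies.
```

For a galaxy hundreds of Mpc away, by contrast, the Hubble term is tens of thousands of km/s and completely swamps any peculiar motion.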
{ "domain": "astronomy.stackexchange", "id": 5818, "tags": "galaxy, milky-way, formation" }
How is the $j=1/2$ representation, $U(R(\theta,\hat{\bf n}))=e^{i{\sigma}\cdot{\hat {\bf n}}\theta/2}$, a projective representation of ${\rm SO}(3)$?
Question: A projective unitary representation of ${\rm SO(3)}$ satisfies $$U(R_1)U(R_2)=e^{i\phi(R_1,R_2)}U(R_1R_2)\tag{1}$$ where $R_1,R_2\in {\rm SO(3)}$. How can one show that the $j=1/2$ representation, $U(R(\theta,\hat{\bf n}))=e^{i{\sigma}\cdot{\hat {\bf n}}\theta/2}$, is a projective representation of ${\rm SO}(3)$, i.e., satisfies condition $(1)$? To do this, one has to show that $$R_1R_2=R_3\Rightarrow U(R_1)U(R_2)=e^{i\phi}U(R_3).\tag{3}$$ Any suggestions on how to show this, or at least check it? Answer: OP describes projective representations in terms of a 2-cocycle, see section 3 below. An alternative description is in terms of a quotient $$PSU(2)~:=~ SU(2)/\mathbb{Z}_2~\cong~SO(3),\tag{A}$$ where $SU(2)$ denotes the 2-dimensional $j=1/2$ non-projective defining/fundamental/spinor representation and $$\mathbb{Z}_{2}~\cong~\{\pm {\bf 1}_{2 \times 2}\}.\tag{B}$$ In other words, in this latter description the 2-dimensional representation of $SO(3)$ is double-valued, i.e. there are 2 branches: $\pm U$ represent the same $SO(3)$ rotation. Let $\vec{\alpha}=\theta\hat{\bf n}$ be a rotation-vector in the axis-angle representation $(\hat{\bf n},\theta)$. The opposite branch is given by the axis-angle representation $(-\hat{\bf n},2\pi\!-\!\theta)$. To describe a general $SO(3)$-element ($SU(2)$-element) it is enough to consider a rotation-vector $\vec{\alpha}\in \mathbb{R}^3$ with length $|\vec{\alpha}|\leq \pi$ ($|\vec{\alpha}|\leq 2\pi$), respectively. Note that the $4\pi$-periodicity of $SU(2)$ becomes the familiar $2\pi$-periodicity of $SO(3)$. See also e.g. this & this related Phys.SE posts. From the non-projective defining representation of $SU(2)$, we have $$U(\vec{\gamma})~=~U(\vec{\alpha})U(\vec{\beta}) ,\tag{C}$$ cf. e.g. this Phys.SE post. As mentioned before, we may assume that $|\vec{\alpha}|,|\vec{\beta}|,|\vec{\gamma}| \leq 2\pi$. 
However, if we only want to use rotation-vectors with lengths $\leq \pi$ (corresponding to $SO(3)$-rotations), we might have to use the opposite branch. Such a transition costs a non-trivial 2-cocycle factor in eq. (C). References: G. 't Hooft, Introduction to Lie Groups in Physics, lecture notes; chapters 3 + 6. The pdf file is available here.
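The double-valuedness is easy to check numerically with $2\times 2$ matrices. Below is a small self-contained Python sketch (my own illustration, using only the standard library) of $U(\theta,\hat{\bf n}) = \cos(\theta/2)\,{\bf 1} + i \sin(\theta/2)\, \hat{\bf n}\cdot\sigma$, i.e. the convention $U = e^{i\sigma\cdot\hat{\bf n}\theta/2}$ from the question: a $2\pi$ rotation is the identity in $SO(3)$ but gives $U = -{\bf 1}$, and two half-turns about the same axis compose to the identity rotation only up to the cocycle phase $e^{i\phi} = -1$.

```python
import math

# Pauli matrices as nested complex lists, and the spin-1/2 rotation
# U(theta, n) = cos(theta/2) 1 + i sin(theta/2) n.sigma
# (the question's convention U = exp(i sigma.n theta/2)).
sig = {
    "x": [[0, 1], [1, 0]],
    "y": [[0, -1j], [1j, 0]],
    "z": [[1, 0], [0, -1]],
}

def U(theta, axis):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    n = sig[axis]
    return [[c * (i == j) + 1j * s * n[i][j] for j in range(2)]
            for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A 2*pi rotation: trivial in SO(3), but U = -1 in the spin-1/2 rep.
U2pi = U(2 * math.pi, "z")
print(U2pi[0][0], U2pi[1][1])          # both ~ -1

# Two half-turns about z: R1 R2 = identity in SO(3), yet
# U(R1) U(R2) = -1 = e^{i phi} U(identity), with phase e^{i phi} = -1.
prod = matmul(U(math.pi, "z"), U(math.pi, "z"))
print(prod[0][0], prod[1][1])          # both ~ -1
```

This is exactly the $4\pi$-periodicity of $SU(2)$ versus the $2\pi$-periodicity of $SO(3)$ noted in the answer.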
{ "domain": "physics.stackexchange", "id": 79754, "tags": "group-theory, representation-theory, rotation, spinors" }
How to find mean, max, min in constant time?
Question: I was asked to be able to find the minimum, maximum and mean of a large array in constant time. I used 3 variables to track these statistics and updated them on every insert operation. I don't feel like it is the correct answer, or maybe there is more nuance. Given that nothing was said about extra space, is this the simplest (and correct) way to do it? Answer: If you are just given an array and asked to find these statistics, then there is no way to do that (unless the size of the array is upper bounded by a constant). An easy way to convince yourself that this is true is noticing that in $O(1)$ time you can only access $O(1)$ entries of the array. Then the maximum, minimum, or mean could depend on the unread entries (notice that you can safely assume that any $O(1)$-time algorithm always accesses the returned entry). You talk about "insert operations", which makes me think that you were actually asked to design a data structure that supports some set of operations, among which are insertions and reporting the above statistics. If this is the case then the answer depends on what the other operations are and how much time you want to spend on those. If the only updates to the array are insertions, then it suffices to keep track of the number of elements, their sum, and the maximum and minimum element after each insertion. This will require $O(1)$ time per insertion and $O(1)$ time to report the statistics.
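A minimal sketch of the insertion-only data structure the answer describes (class and method names are illustrative): track count, sum, minimum and maximum on every insert, so each statistic is an $O(1)$ read. Note that supporting deletions would break the $O(1)$ min/max tracking, which is the answer's point about the operation set mattering.

```python
# Running statistics over an insert-only stream: every query is O(1).
class RunningStats:
    def __init__(self):
        self.count = 0
        self.total = 0
        self.lo = None   # current minimum
        self.hi = None   # current maximum

    def insert(self, x):                     # O(1) per insertion
        self.count += 1
        self.total += x
        self.lo = x if self.lo is None else min(self.lo, x)
        self.hi = x if self.hi is None else max(self.hi, x)

    def minimum(self):                       # O(1)
        return self.lo

    def maximum(self):                       # O(1)
        return self.hi

    def mean(self):                          # O(1)
        return self.total / self.count

s = RunningStats()
for x in [5, 3, 9, 1, 7]:
    s.insert(x)
print(s.minimum(), s.maximum(), s.mean())    # 1 9 5.0
```
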
{ "domain": "cs.stackexchange", "id": 17386, "tags": "algorithm-analysis, runtime-analysis, space-complexity" }
Software driver for digital inputs expander communicating over SPI
Question: On this platform I have been developing a software driver for two digital input expanders communicating over SPI. My C++ code is based on the SPI driver which accompanies the SDK offered by the MCU manufacturer. From the timing perspective the driver has been conceived as non-blocking, with periodic updates exploiting a callback function. From the architectural point of view the driver has been designed in the following manner: the top level layer of the driver consists of the DigitalInputsDriver class, which provides the interface for the application layer of the software; the digital input expander is modeled by the MAX22190 class; the expander is basically a set of registers, which is reflected in the design by the fact that the MAX22190 class contains an array of registers (instances of the Register class). Those instances are "mirrors" of the real hardware registers in the MAX22190 chips. The content of those mirrors is held in a consistent state with their hardware counterparts based on the algorithms for configuration and refreshing encapsulated in the Configurator and Refresher classes. Based on stimuli coming from the Configurator and Refresher, the Register objects communicate with the MAX22190 chips over SPI via messages which are instances of the WriteRegRequestMsg, ReadRegRequestMsg, WriteRegResponseMsg and ReadRegResponseMsg classes, all sharing the interface defined by the Message abstract class. Each MAX22190 can be richly configured, so a DigitalInputsDriverCfg class has been defined which manages the configuration. 
DigitalInputsDriver #include "DigitalInputsDriverCfg.h" #include "MAX22190.h" #include "Transceiver.h" #include <cstdint> class DigitalInputsDriver { public: enum class Input{ kDi_00, kDi_01, kDi_02, kDi_03, kDi_04, kDi_05, kDi_06, kDi_07, kDi_08, kDi_09, kDi_10, kDi_11, kDi_12, kDi_13, kDi_14, kDi_15, kNoDigitalInputs }; enum class State{kLow, kHigh}; enum class Fault1{ kWireBreakDevice0, k24VMDevice0, k24VLDevice0, kOverTemperature1Device0, kOverTemperature2Device0, kFault2Device0, kPorDevice0, kCrcDevice0, kWireBreakDevice1, k24VMDevice1, k24VLDevice1, kOverTemperature1Device1, kOverTemperature2Device1, kFault2Device1, kPorDevice1, kCrcDevice1 }; enum class Fault2{ kREFWBShortDevice0, kREFWBOpenDevice0, kREFDIShortDevice0, kREFDIOpenDevice0, kOverTempShutdownDevice0, kFault8ClkDevice0, kREFWBShortDevice1 = 8, kREFWBOpenDevice1 = 9, kREFDIShortDevice1 = 10, kREFDIOpenDevice1 = 11, kOverTempShutdownDevice1 = 12, kFault8ClkDevice1 = 13, }; DigitalInputsDriver(DigitalInputsDriverCfg *_dig_in_cfg, uint16_t _spi_device_id); void update(void); void initialize(void); bool isReady(void); State getInputState(Input input); bool isFault1Active(Fault1 fault); bool isFault2Active(Fault2 fault); void handleFpgaProtection(void); void handleSpiEndOfTransactionInterrupt(void); private: Transceiver transceiver; MAX22190 device0; MAX22190 device1; MAX22190 *devices[static_cast<uint8_t>(DigitalInputsDriverCfg::Device::kNoMAX22190Devices)]; }; #include "DigitalInputsDriver.h" DigitalInputsDriver::DigitalInputsDriver(DigitalInputsDriverCfg *_dig_in_cfg, uint16_t _spi_device_id) : transceiver(_spi_device_id), device0(DigitalInputsDriverCfg::Device::kMAX22190Device_0, _dig_in_cfg, &transceiver), device1(DigitalInputsDriverCfg::Device::kMAX22190Device_1, _dig_in_cfg, &transceiver) { devices[static_cast<uint8_t>(DigitalInputsDriverCfg::Device::kMAX22190Device_0)] = &device0; devices[static_cast<uint8_t>(DigitalInputsDriverCfg::Device::kMAX22190Device_1)] = &device1; } void 
DigitalInputsDriver::initialize(void) { transceiver.initialize(); for(MAX22190 *device : devices){ device->initialize(); } } void DigitalInputsDriver::update(void) { for(MAX22190 *device : devices){ device->update(); } } bool DigitalInputsDriver::isReady(void) { bool retval = true; for(MAX22190 *device : devices){ if(device->isReady() == false){ retval = false; break; } } return retval; } DigitalInputsDriver::State DigitalInputsDriver::getInputState(Input input) { return static_cast<DigitalInputsDriver::State>(devices[(static_cast<uint8_t>(input) >> 3)]->isInputActive(static_cast<DigitalInputsDriverCfg::Input>(static_cast<uint8_t>(input) - ((static_cast<uint8_t>(input) >> 3) << 3)))); } bool DigitalInputsDriver::isFault1Active(Fault1 fault) { return devices[(static_cast<uint8_t>(fault) >> 3)]->isFault1Active(static_cast<MAX22190::Fault1>(static_cast<uint8_t>(fault) - ((static_cast<uint8_t>(fault) >> 3) << 3))); } bool DigitalInputsDriver::isFault2Active(Fault2 fault) { return devices[(static_cast<uint8_t>(fault) >> 3)]->isFault2Active(static_cast<MAX22190::Fault2>(static_cast<uint8_t>(fault) - ((static_cast<uint8_t>(fault) >> 3) << 3))); } void DigitalInputsDriver::handleFpgaProtection(void) { for(MAX22190 *device : devices){ device->handleFaultPinActivation(); } } void DigitalInputsDriver::handleSpiEndOfTransactionInterrupt(void) { for(MAX22190 *device : devices){ device->notifyEndOfTransaction(); } } MAX22190 #include "Register.h" #include "DigitalInputsDriverCfg.h" #include "Configurator.h" #include "TransactionEndListener.h" #include "Transceiver.h" #include "Refresher.h" class MAX22190 : public TransactionEndListener { friend class Configurator; friend class Refresher; public: enum class Fault1{ kWireBreak, k24VM, k24VL, kOverTemperature1, kOverTemperature2, kFault2, kPor, kCrc }; enum class Fault2{ kREFWBShort, kREFWBOpen, kREFDIShort, kREFDIOpen, kOvertempShd, kFault8Clk }; MAX22190(DigitalInputsDriverCfg::Device _device, DigitalInputsDriverCfg 
*_configuration, Transceiver *_transceiver); void initialize(void); void update(void); bool isReady(void); bool isInputActive(DigitalInputsDriverCfg::Input input); bool isFault1Active(Fault1 fault); bool isFault2Active(Fault2 fault); void handleFaultPinActivation(void); void activateLatch(void); uint8_t getDeviceId(void); void notifyEndOfTransaction(void); private: enum class ConfigurationState{ kConfigInEnReg, kConfigFlt1Reg, kConfigFlt2Reg, kConfigFlt3Reg, kConfigFlt4Reg, kConfigFlt5Reg, kConfigFlt6Reg, kConfigFlt7Reg, kConfigFlt8Reg, kConfigFault2EnReg, kConfigFault1EnReg, kConfigCfgReg, kConfigGpoReg, kConfigurationEnd }; enum class RefreshState{ kRefreshFault1Reg, kRefreshFault2Reg, kRefreshDiReg, kRefreshWbReg }; static constexpr uint8_t wb_reg_addr = 0x00; static constexpr uint8_t di_reg_addr = 0x02; static constexpr uint8_t fault1_reg_addr = 0x04; static constexpr uint8_t flt1_reg_addr = 0x06; static constexpr uint8_t flt2_reg_addr = 0x08; static constexpr uint8_t flt3_reg_addr = 0x0A; static constexpr uint8_t flt4_reg_addr = 0x0C; static constexpr uint8_t flt5_reg_addr = 0x0E; static constexpr uint8_t flt6_reg_addr = 0x10; static constexpr uint8_t flt7_reg_addr = 0x12; static constexpr uint8_t flt8_reg_addr = 0x14; static constexpr uint8_t cfg_reg_addr = 0x18; static constexpr uint8_t inen_reg_addr = 0x1A; static constexpr uint8_t fault2_reg_addr = 0x1C; static constexpr uint8_t fault2en_reg_addr = 0x1E; static constexpr uint8_t gpo_reg_addr = 0x22; static constexpr uint8_t fault1en_reg_addr = 0x24; static constexpr uint8_t nop_reg_addr = 0x26; static constexpr uint8_t no_regs = 18; Register wb_reg; Register di_reg; Register fault1_reg; Register flt1_reg; Register flt2_reg; Register flt3_reg; Register flt4_reg; Register flt5_reg; Register flt6_reg; Register flt7_reg; Register flt8_reg; Register cfg_reg; Register inen_reg; Register fault2_reg; Register fault2en_reg; Register gpo_reg; Register fault1en_reg; Register nop_reg; Register* register_map[no_regs]; 
Configurator configurator; Refresher refresher; DigitalInputsDriverCfg::Device device; DigitalInputsDriverCfg *configuration; Transceiver *transceiver; bool device_ready; bool device_configured; bool read_faults; ConfigurationState state_config; RefreshState state_refresh; bool configure(void); void configureInEnReg(void); void configureFlt1Reg(void); void configureFlt2Reg(void); void configureFlt3Reg(void); void configureFlt4Reg(void); void configureFlt5Reg(void); void configureFlt6Reg(void); void configureFlt7Reg(void); void configureFlt8Reg(void); void configureFault2EnReg(void); void configureFault1EnReg(void); void configureCfgReg(void); void configureGpoReg(void); void refresh(void); void refreshDiReg(void); void refreshWbReg(void); void refreshFault1Reg(void); void refreshFault2Reg(void); }; #include "MAX22190.h" MAX22190::MAX22190(DigitalInputsDriverCfg::Device _device, DigitalInputsDriverCfg *_configuration, Transceiver *_transceiver) : wb_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kWb, wb_reg_addr, _transceiver), di_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kDi, di_reg_addr, _transceiver), fault1_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFault1, fault1_reg_addr, _transceiver), flt1_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt1, flt1_reg_addr, _transceiver), flt2_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt2, flt2_reg_addr, _transceiver), flt3_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt3, flt3_reg_addr, _transceiver), flt4_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt4, flt4_reg_addr, _transceiver), flt5_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt5, flt5_reg_addr, _transceiver), flt6_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt6, flt6_reg_addr, _transceiver), flt7_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFlt7, flt7_reg_addr, _transceiver), flt8_reg(_device, 
DigitalInputsDriverCfg::MAX22190RegisterType::kFlt8, flt8_reg_addr, _transceiver), cfg_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kCfg, cfg_reg_addr, _transceiver), inen_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kInEn, inen_reg_addr, _transceiver), fault2_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFault2, fault2_reg_addr, _transceiver), fault2en_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFault2En, fault2en_reg_addr, _transceiver), gpo_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kGpo, gpo_reg_addr, _transceiver), fault1en_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kFault1En, fault1en_reg_addr, _transceiver), nop_reg(_device, DigitalInputsDriverCfg::MAX22190RegisterType::kNop, nop_reg_addr, _transceiver), configurator(_configuration), refresher() { register_map[0] = &wb_reg; register_map[1] = &di_reg; register_map[2] = &fault1_reg; register_map[3] = &flt1_reg; register_map[4] = &flt2_reg; register_map[5] = &flt3_reg; register_map[6] = &flt4_reg; register_map[7] = &flt5_reg; register_map[8] = &flt6_reg; register_map[9] = &flt7_reg; register_map[10]= &flt8_reg; register_map[11]= &cfg_reg; register_map[12]= &inen_reg; register_map[13]= &fault2_reg; register_map[14]= &fault2en_reg; register_map[15]= &gpo_reg; register_map[16]= &fault1en_reg; register_map[17]= &nop_reg; device = _device; device_ready = false; device_configured = false; read_faults = false; configuration = _configuration; state_config = ConfigurationState::kConfigInEnReg; state_refresh = RefreshState::kRefreshDiReg; transceiver = _transceiver; } void MAX22190::initialize(void){ transceiver->registerListener(this); device_ready = true; } void MAX22190::update(void) { if(device_ready){ if(!device_configured){ device_configured = configure(); }else{ refresh(); } } } bool MAX22190::isReady(void) { return (device_ready && device_configured); } bool MAX22190::isInputActive(DigitalInputsDriverCfg::Input input) { 
uint8_t reg_data = di_reg.getData(); return ((reg_data & (1 << static_cast<uint8_t>(input))) != 0); } bool MAX22190::isFault1Active(Fault1 fault) { uint8_t reg_data = fault1_reg.getData(); return ((reg_data & (1 << static_cast<uint8_t>(fault))) != 0); } bool MAX22190::isFault2Active(Fault2 fault) { uint8_t reg_data = fault2_reg.getData(); return ((reg_data & (1 << static_cast<uint8_t>(fault))) != 0); } void MAX22190::handleFaultPinActivation(void) { read_faults = true; } void MAX22190::activateLatch(void){} void MAX22190::notifyEndOfTransaction(void){ for(Register* reg : register_map){ if(reg->isPending()){ reg->handleEndOfTransaction(); break; } } } uint8_t MAX22190::getDeviceId(void){ return static_cast<uint8_t>(device); } bool MAX22190::configure(void) { bool configuration_finished = false; switch(state_config){ case ConfigurationState::kConfigInEnReg: configureInEnReg(); break; case ConfigurationState::kConfigFlt1Reg: configureFlt1Reg(); break; case ConfigurationState::kConfigFlt2Reg: configureFlt2Reg(); break; case ConfigurationState::kConfigFlt3Reg: configureFlt3Reg(); break; case ConfigurationState::kConfigFlt4Reg: configureFlt4Reg(); break; case ConfigurationState::kConfigFlt5Reg: configureFlt5Reg(); break; case ConfigurationState::kConfigFlt6Reg: configureFlt6Reg(); break; case ConfigurationState::kConfigFlt7Reg: configureFlt7Reg(); break; case ConfigurationState::kConfigFlt8Reg: configureFlt8Reg(); break; case ConfigurationState::kConfigFault2EnReg: configureFault2EnReg(); break; case ConfigurationState::kConfigFault1EnReg: configureFault1EnReg(); break; case ConfigurationState::kConfigCfgReg: configureCfgReg(); break; case ConfigurationState::kConfigGpoReg: configureGpoReg(); break; case ConfigurationState::kConfigurationEnd: configuration_finished = true; break; } return configuration_finished; } void MAX22190::refresh(void) { switch(state_refresh){ case RefreshState::kRefreshDiReg: refreshDiReg(); break; case RefreshState::kRefreshWbReg: 
refreshWbReg(); break; case RefreshState::kRefreshFault1Reg: refreshFault1Reg(); break; case RefreshState::kRefreshFault2Reg: refreshFault2Reg(); break; } } void MAX22190::configureInEnReg(void) { if(configurator.configure(inen_reg)){ state_config = ConfigurationState::kConfigFlt1Reg; } } void MAX22190::configureFlt1Reg(void) { if(configurator.configure(flt1_reg)){ state_config = ConfigurationState::kConfigFlt2Reg; } } void MAX22190::configureFlt2Reg(void) { if(configurator.configure(flt2_reg)){ state_config = ConfigurationState::kConfigFlt3Reg; } } void MAX22190::configureFlt3Reg(void) { if(configurator.configure(flt3_reg)){ state_config = ConfigurationState::kConfigFlt4Reg; } } void MAX22190::configureFlt4Reg(void) { if(configurator.configure(flt4_reg)){ state_config = ConfigurationState::kConfigFlt5Reg; } } void MAX22190::configureFlt5Reg(void) { if(configurator.configure(flt5_reg)){ state_config = ConfigurationState::kConfigFlt6Reg; } } void MAX22190::configureFlt6Reg(void) { if(configurator.configure(flt6_reg)){ state_config = ConfigurationState::kConfigFlt7Reg; } } void MAX22190::configureFlt7Reg(void) { if(configurator.configure(flt7_reg)){ state_config = ConfigurationState::kConfigFlt8Reg; } } void MAX22190::configureFlt8Reg(void) { if(configurator.configure(flt8_reg)){ state_config = ConfigurationState::kConfigFault2EnReg; } } void MAX22190::configureFault2EnReg(void) { if(configurator.configure(fault2en_reg)){ state_config = ConfigurationState::kConfigFault1EnReg; } } void MAX22190::configureFault1EnReg(void) { if(configurator.configure(fault1en_reg)){ state_config = ConfigurationState::kConfigCfgReg; } } void MAX22190::configureCfgReg(void) { if(configurator.configure(cfg_reg)){ state_config = ConfigurationState::kConfigGpoReg; } } void MAX22190::configureGpoReg(void) { if(configurator.configure(gpo_reg)){ state_config = ConfigurationState::kConfigurationEnd; } } void MAX22190::refreshDiReg(void) { if(refresher.refresh(di_reg)){ state_refresh = 
RefreshState::kRefreshWbReg; } } void MAX22190::refreshWbReg(void) { if(refresher.refresh(wb_reg)){ if(read_faults) { state_refresh = RefreshState::kRefreshFault1Reg; }else{ state_refresh = RefreshState::kRefreshDiReg; } } } void MAX22190::refreshFault1Reg(void) { if(refresher.refresh(fault1_reg)){ state_refresh = RefreshState::kRefreshFault2Reg; } } void MAX22190::refreshFault2Reg(void) { if(refresher.refresh(fault2_reg)){ state_refresh = RefreshState::kRefreshDiReg; read_faults = false; } } Register #include "DigitalInputsDriverCfg.h" #include "Transceiver.h" #include "Message.h" #include "WriteRegRequestMsg.h" #include "WriteRegResponseMsg.h" #include "ReadRegRequestMsg.h" #include "ReadRegResponseMsg.h" #include <cstdint> class Register { public: Register(DigitalInputsDriverCfg::Device _device, DigitalInputsDriverCfg::MAX22190RegisterType _type, uint8_t _address, Transceiver *_transceiver); bool read(void); bool write(uint8_t data); bool isConfigured(void); void setConfigured(void); bool isRefreshed(void); void setUnRefreshed(void); uint8_t getData(void); uint8_t getAddress(void); DigitalInputsDriverCfg::MAX22190RegisterType getType(void); DigitalInputsDriverCfg::Device getDevice(void); bool isPending(void); void handleEndOfTransaction(void); private: DigitalInputsDriverCfg::Device device; DigitalInputsDriverCfg::MAX22190RegisterType type; uint8_t address; uint8_t data; bool configured; bool pending; bool refreshed; Message::MessageType sent_msg; Transceiver *transceiver; WriteRegRequestMsg write_reg_request; ReadRegRequestMsg read_reg_request; WriteRegResponseMsg write_reg_response; ReadRegResponseMsg read_reg_response; void setData(uint8_t data); void handleWriteRegResponseMsg(void); void handleReadRegResponseMsg(void); }; #include <new> #include "Register.h" #include "Bits.h" Register::Register(DigitalInputsDriverCfg::Device _device, DigitalInputsDriverCfg::MAX22190RegisterType _type, uint8_t _address, Transceiver *_transceiver) { type = _type; address = 
_address; device = _device; configured = false; refreshed = false; pending = false; transceiver = _transceiver; } bool Register::read(void) { if (!transceiver->isBusy()){ new (&read_reg_request) ReadRegRequestMsg(device, address, transceiver); read_reg_request.write(); sent_msg = Message::MessageType::kReadRegRequest; pending = true; return true; }else{ return false; } } bool Register::write(uint8_t data) { if(!transceiver->isBusy()){ new (&write_reg_request) WriteRegRequestMsg(device, address, data, transceiver); write_reg_request.write(); sent_msg = Message::MessageType::kWriteRegRequest; pending = true; return true; }else{ return false; } } bool Register::isConfigured(void) { return configured; } void Register::setConfigured(void) { configured = true; } bool Register::isRefreshed(void) { return (refreshed != false); } void Register::setUnRefreshed(void) { refreshed = false; } uint8_t Register::getData(void) { return data; } uint8_t Register::getAddress(void) { return address; } DigitalInputsDriverCfg::MAX22190RegisterType Register::getType(void) { return type; } DigitalInputsDriverCfg::Device Register::getDevice(void) { return device; } bool Register::isPending(void) { return pending; } void Register::handleEndOfTransaction(void) { switch(sent_msg){ case Message::MessageType::kWriteRegRequest: handleWriteRegResponseMsg(); break; case Message::MessageType::kReadRegRequest: handleReadRegResponseMsg(); break; case Message::MessageType::kWriteRegResponse: break; case Message::MessageType::kReadRegResponse: break; } } void Register::setData(uint8_t read_data) { data = read_data; } void Register::handleWriteRegResponseMsg(void) { pending = false; new (&write_reg_response) WriteRegResponseMsg(transceiver); write_reg_response.read(); } void Register::handleReadRegResponseMsg(void) { uint8_t read_data; pending = false; new (&read_reg_response) ReadRegResponseMsg(transceiver); read_reg_response.read(); if(read_reg_response.isValid()){ read_data = 
read_reg_response.getRegisterData(); setData(read_data); refreshed = true; } } Configurator #include "DigitalInputsDriverCfg.h" #include "Register.h" class Configurator { public: Configurator(DigitalInputsDriverCfg *_configuration); bool configure(Register &reg); private: enum class State {kInit, kWrite, kRead, kCheck}; State state; DigitalInputsDriverCfg *configuration; uint8_t data; }; #include "Configurator.h" Configurator::Configurator(DigitalInputsDriverCfg *_configuration) { configuration = _configuration; state = State::kInit; } bool Configurator::configure(Register &reg) { bool reg_configured = false; switch(state){ case State::kInit: data = configuration->getRegisterConfigData(reg.getDevice(), reg.getType()); state = State::kWrite; break; case State::kWrite: if(reg.write(data)){ state = State::kRead; } break; case State::kRead: if(reg.read()){ state = State::kCheck; } break; case State::kCheck: if(!reg.isPending()){ if(data == reg.getData()){ reg_configured = true; reg.setConfigured(); state = State::kInit; }else{ state = State::kWrite; } } break; } return reg_configured; } Refresher #include "Register.h" class Refresher { public: Refresher(); bool refresh(Register &reg); private: enum class State {kRead, kCheck}; State state; }; #include "Refresher.h" Refresher::Refresher() { state = State::kRead; } bool Refresher::refresh(Register &reg) { bool reg_refreshed = false; switch(state){ case State::kRead: if(reg.read()){ reg.setUnRefreshed(); state = State::kCheck; } break; case State::kCheck: if(!reg.isPending()){ if(reg.isRefreshed()){ reg_refreshed = true; } state = State::kRead; } break; } return reg_refreshed; } Message class Message { public: enum class MessageType { kWriteRegRequest, kReadRegRequest, kWriteRegResponse, kReadRegResponse }; virtual void read(void) = 0; virtual void write(void) = 0; }; WriteRegRequest #include "Message.h" #include "DigitalInputsDriverCfg.h" #include "Transceiver.h" class WriteRegRequestMsg : public Message { public: 
    WriteRegRequestMsg();
    WriteRegRequestMsg(DigitalInputsDriverCfg::Device _device, uint8_t _address,
                       uint8_t _data, Transceiver *_transceiver);
    void read(void);
    void write(void);

private:
    union MsgData{
        struct {
            uint32_t crc      : 5;
            uint32_t fill     : 3;
            uint32_t data     : 8;
            uint32_t reg_addr : 7;
            uint32_t msb      : 1;
        }bits;
        uint8_t bytes[3];
    };

    DigitalInputsDriverCfg::Device device;
    uint8_t address;
    uint8_t data;
    MsgData msg;
    uint8_t length;
    Transceiver *transceiver;
};

#include "WriteRegRequestMsg.h"

WriteRegRequestMsg::WriteRegRequestMsg(){}

WriteRegRequestMsg::WriteRegRequestMsg(DigitalInputsDriverCfg::Device _device,
                                       uint8_t _address, uint8_t _data,
                                       Transceiver *_transceiver)
{
    device = _device;
    address = _address;
    data = _data;
    transceiver = _transceiver;
}

void WriteRegRequestMsg::read(void){}

void WriteRegRequestMsg::write(void)
{
    msg.bits.msb = 1;
    msg.bits.reg_addr = (address & 0x7F);
    msg.bits.data = data;
    msg.bits.fill = 0;
    uint8_t crc = crcMAX22190(msg.bytes[2], msg.bytes[1], msg.bytes[0]);
    msg.bits.crc = crc;
    transceiver->putBytes(msg.bytes);
    transceiver->startTransaction(device);
}

ReadRegRequest

#include "Message.h"
#include "DigitalInputsDriverCfg.h"
#include "Transceiver.h"

class ReadRegRequestMsg : public Message {
public:
    ReadRegRequestMsg();
    ReadRegRequestMsg(DigitalInputsDriverCfg::Device _device, uint8_t _address,
                      Transceiver *_transceiver);
    void read(void);
    void write(void);

private:
    union MsgData{
        struct {
            uint32_t crc      : 5;
            uint32_t fill     : 11;
            uint32_t reg_addr : 7;
            uint32_t msb      : 1;
        }bits;
        uint8_t bytes[3];
    };

    DigitalInputsDriverCfg::Device device;
    uint8_t address;
    MsgData msg;
    Transceiver *transceiver;
};

#include "ReadRegRequestMsg.h"

ReadRegRequestMsg::ReadRegRequestMsg(){}

ReadRegRequestMsg::ReadRegRequestMsg(DigitalInputsDriverCfg::Device _device,
                                     uint8_t _address, Transceiver *_transceiver)
{
    device = _device;
    address = _address;
    transceiver = _transceiver;
}

void ReadRegRequestMsg::read(void){}

void ReadRegRequestMsg::write(void)
{
    msg.bits.msb = 0;
    msg.bits.reg_addr = address;
    msg.bits.fill = 0;
    uint8_t crc = crcMAX22190(msg.bytes[2], msg.bytes[1], msg.bytes[0]);
    msg.bits.crc = crc;
    transceiver->putBytes(msg.bytes);
    transceiver->startTransaction(device);
}

WriteRegResponse

#include "Message.h"
#include "Transceiver.h"

class WriteRegResponseMsg : public Message {
public:
    WriteRegResponseMsg();
    WriteRegResponseMsg(Transceiver *_transceiver);
    void read(void);
    void write(void);
    bool isValid(void);
    uint32_t getData(void);

private:
    union MsgData{
        struct {
            uint32_t crc          : 5;
            uint32_t flags        : 3;
            uint32_t wb_reg_state : 8;
            uint32_t inputs_state : 8;
        }bits;
        uint8_t bytes[3];
    };

    MsgData msg;
    Transceiver *transceiver;
};

#include "WriteRegResponseMsg.h"

WriteRegResponseMsg::WriteRegResponseMsg(){}

WriteRegResponseMsg::WriteRegResponseMsg(Transceiver *_transceiver)
{
    transceiver = _transceiver;
}

void WriteRegResponseMsg::read(void)
{
    transceiver->getBytes(msg.bytes);
}

void WriteRegResponseMsg::write(void){}

bool WriteRegResponseMsg::isValid(void)
{
    uint8_t crc = crcMAX22190(msg.bytes[2], msg.bytes[1], msg.bytes[0]);
    if(crc == 0){
        return true;
    }else{
        return false;
    }
}

uint32_t WriteRegResponseMsg::getData(void)
{
    uint32_t data = 0;
    data = ((static_cast<uint32_t>(msg.bytes[0]) << 16) |
            (static_cast<uint32_t>(msg.bytes[1]) << 8) |
            (static_cast<uint32_t>(msg.bytes[2]) & 0xE0));
    return data;
}

ReadRegResponse

#include "Message.h"
#include "Transceiver.h"

class ReadRegResponseMsg : public Message {
public:
    ReadRegResponseMsg();
    ReadRegResponseMsg(Transceiver *_transceiver);
    void read(void);
    void write(void);
    bool isValid(void);
    uint32_t getData(void);
    uint8_t getRegisterData(void);

private:
    union MsgData{
        struct {
            uint32_t crc          : 5;
            uint32_t flags        : 3;
            uint32_t reg_state    : 8;
            uint32_t inputs_state : 8;
        }bits;
        uint8_t bytes[3];
    };

    MsgData msg;
    Transceiver *transceiver;
};

#include "ReadRegResponseMsg.h"

ReadRegResponseMsg::ReadRegResponseMsg() {}

ReadRegResponseMsg::ReadRegResponseMsg(Transceiver *_transceiver)
{
    transceiver = _transceiver;
}

void ReadRegResponseMsg::read(void)
{
    transceiver->getBytes(msg.bytes);
}

void ReadRegResponseMsg::write(void){}

bool ReadRegResponseMsg::isValid(void)
{
    uint8_t crc = crcMAX22190(msg.bytes[2], msg.bytes[1], msg.bytes[0]);
    if(crc == 0){
        return true;
    }else{
        return false;
    }
}

uint32_t ReadRegResponseMsg::getData(void)
{
    uint32_t data = 0;
    data = ((static_cast<uint32_t>(msg.bytes[0]) << 16) |
            (static_cast<uint32_t>(msg.bytes[1]) << 8) |
            (static_cast<uint32_t>(msg.bytes[2]) & 0xE0));
    return data;
}

uint8_t ReadRegResponseMsg::getRegisterData(void)
{
    return msg.bytes[1];
}

DigitalInputsDriverCfg

#include <cstdint>

class DigitalInputsDriverCfg {
public:
    enum class Device {
        kMAX22190Device_0,
        kMAX22190Device_1,
        kNoMAX22190Devices
    };
    enum class Input{
        kInput_01, kInput_02, kInput_03, kInput_04,
        kInput_05, kInput_06, kInput_07, kInput_08,
        kNoInputs
    };
    enum class MAX22190RegisterType{
        kWb, kDi, kFault1,
        kFlt1, kFlt2, kFlt3, kFlt4, kFlt5, kFlt6, kFlt7, kFlt8,
        kCfg, kInEn, kFault2, kFault2En, kGpo, kFault1En, kNop
    };
    enum class WireBreakDetection{
        kWireBreakDetectionDisabled,
        kWireBreakDetectionEnabled
    };
    enum class ProgrammableFilter{
        kProgrammableFilterUsed,
        kProgrammableFilterBypassed
    };
    enum class FilterDelay{
        kInputFilterDelay50us,
        kInputFilterDelay100us,
        kInputFilterDelay400us,
        kInputFilterDelay800us,
        kInputFilterDelay1_6ms,
        kInputFilterDelay3_2ms,
        kInputFilterDelay12_8ms,
        kInputFilterDelay20ms
    };
    enum class Flags24VClearMethod{
        k24VFlagClearedBySpiTransactionFault1RegReading,
        k24VFlagClearedByFault1RegReading
    };
    enum class FiltersOperation{
        kFiltersOperationNormal,
        kFiltersOperationFixed
    };
    enum class ShortCircuitDetection{
        kShortCircuitDetectionDisabled,
        kShortCircuitDetectionEnabled
    };
    enum class InputEnable{kInputDisabled, kInputEnabled};
    enum class Fault2SrcInFault1Reg{
        kFault8CkeInFault1Reg,
        kOtShdnInFault1Reg,
        kPinREFDIOpenInFault1Reg,
        kPinREFDIShortInFault1Reg,
        kPinREFWBOpenInFault1Reg,
        kPinREFWBShortInFault1Reg,
        kNoFault2Src
    };
    enum class Fault2SrcUsage{kFault2SrcNotUsed, kFault2SrcUsed};
    struct Fault2BitInFault1RegCfg {
        Fault2SrcInFault1Reg fault2;
        Fault2SrcUsage usage;
    };
    enum class FaultPinCfg{kFaultPinNotSticky, kFaultPinSticky};
    enum class FaultPinActivationSrc{
        kFaultPinActivationCrc,
        kFaultPinActivationPor,
        kFaultPinActivationFault2,
        kFaultPinActivationAlarmT2,
        kFaultPinActivationAlarmT1,
        kFaultPinActivation24VL,
        kFaultPinActivation24VM,
        kFaultPinActivationWireBreak,
        kNoFaultPinActivationSrc
    };
    enum class FaultPinActivationSrcUsage{kNotUsed, kUsed};
    struct FaultPinActivationCfg {
        FaultPinActivationSrc src;
        FaultPinActivationSrcUsage usage;
    };
    struct InputCfg {
        Input input;
        InputEnable input_enable;
        WireBreakDetection wire_break_detection_enable;
        ProgrammableFilter filter_enable;
        FilterDelay filter_delay;
    };
    struct MAX22190Config {
        Device device;
        InputCfg inputs_cfg[static_cast<uint8_t>(Input::kNoInputs)];
        Flags24VClearMethod flags_clear_method;
        FiltersOperation filters_operation;
        ShortCircuitDetection short_circuit_detection;
        Fault2BitInFault1RegCfg fault2_src_cfg[static_cast<uint8_t>(
            Fault2SrcInFault1Reg::kNoFault2Src)];
        FaultPinCfg fault_pin_cfg;
        FaultPinActivationCfg fault_pin_activation_cfg[static_cast<uint8_t>(
            FaultPinActivationSrc::kNoFaultPinActivationSrc)];
    };

    DigitalInputsDriverCfg(const MAX22190Config *_configuration);
    uint8_t getRegisterConfigData(Device device, MAX22190RegisterType reg);

private:
    union Fault1RegBitMap{
        struct {
            uint32_t wbg_bit     : 1;
            uint32_t _24vm_bit   : 1;
            uint32_t _24vl_bit   : 1;
            uint32_t alarmt1_bit : 1;
            uint32_t alarmt2_bit : 1;
            uint32_t fault2_bit  : 1;
            uint32_t por_bit     : 1;
            uint32_t crc_bit     : 1;
        }bits;
        uint8_t byte;
    };
    union FltxRegBitMap{
        struct {
            uint32_t delay   : 3;
            uint32_t fbp_bit : 1;
            uint32_t wbe_bit : 1;
            uint32_t reserve : 3;
        }bits;
        uint8_t byte;
    };
    union CfgRegBitMap{
        struct {
            uint32_t refdi_sh_ena_bit : 1;
            uint32_t reserve_01       : 2;
            uint32_t clrf_bit         : 1;
            uint32_t _24vf_bit        : 1;
            uint32_t reserve_02       : 3;
        }bits;
        uint8_t byte;
    };
    union Fault2RegBitMap{
        struct {
            uint32_t rfwbs_bit    : 1;
            uint32_t rfwbo_bit    : 1;
            uint32_t rfdis_bit    : 1;
            uint32_t rfdio_bit    : 1;
            uint32_t otshdn_bit   : 1;
            uint32_t fault8ck_bit : 1;
            uint32_t reserve      : 2;
        }bits;
        uint8_t byte;
    };
    union Fault2EnRegBitMap{
        struct {
            uint32_t rfwbse_bit    : 1;
            uint32_t rfwboe_bit    : 1;
            uint32_t rfdise_bit    : 1;
            uint32_t rfdioe_bit    : 1;
            uint32_t otshdne_bit   : 1;
            uint32_t fault8cke_bit : 1;
            uint32_t reserve       : 2;
        }bits;
        uint8_t byte;
    };
    union GpoRegBitMap{
        struct {
            uint32_t reserve : 7;
            uint32_t stk_bit : 1;
        }bits;
        uint8_t byte;
    };
    union Fault1EnRegBitMap{
        struct {
            uint32_t wbge_bit     : 1;
            uint32_t _24vme_bit   : 1;
            uint32_t _24vle_bit   : 1;
            uint32_t alarmt1e_bit : 1;
            uint32_t alarmt2e_bit : 1;
            uint32_t fault2e_bit  : 1;
            uint32_t pore_bit     : 1;
            uint32_t crce_bit     : 1;
        }bits;
        uint8_t byte;
    };

    const MAX22190Config *configuration;

    uint8_t readInEnRegCfgData(Device device);
    uint8_t readFltxRegCfgData(Device device, Input input);
    uint8_t readFault2EnRegCfgData(Device device);
    uint8_t readFault1EnRegCfgData(Device device);
    uint8_t readCfgRegCfgData(Device device);
    uint8_t readGpoRegCfgData(Device device);
};

#include "DigitalInputsDriverCfg.h"

DigitalInputsDriverCfg::DigitalInputsDriverCfg(const MAX22190Config* _configuration)
{
    configuration = _configuration;
}

uint8_t DigitalInputsDriverCfg::getRegisterConfigData(Device device,
                                                      MAX22190RegisterType reg)
{
    uint8_t data;
    switch(reg)
    {
        case MAX22190RegisterType::kWb:
            data = 0;
            break;
        case MAX22190RegisterType::kDi:
            data = 0;
            break;
        case MAX22190RegisterType::kFault1:
            data = 0;
            break;
        case MAX22190RegisterType::kFlt1:
            data = readFltxRegCfgData(device, Input::kInput_01);
            break;
        case MAX22190RegisterType::kFlt2:
            data = readFltxRegCfgData(device, Input::kInput_02);
            break;
        case MAX22190RegisterType::kFlt3:
            data = readFltxRegCfgData(device, Input::kInput_03);
            break;
        case MAX22190RegisterType::kFlt4:
            data = readFltxRegCfgData(device, Input::kInput_04);
            break;
        case MAX22190RegisterType::kFlt5:
            data = readFltxRegCfgData(device, Input::kInput_05);
            break;
        case MAX22190RegisterType::kFlt6:
            data = readFltxRegCfgData(device, Input::kInput_06);
            break;
        case MAX22190RegisterType::kFlt7:
            data = readFltxRegCfgData(device, Input::kInput_07);
            break;
        case MAX22190RegisterType::kFlt8:
            data = readFltxRegCfgData(device, Input::kInput_08);
            break;
        case MAX22190RegisterType::kCfg:
            data = readCfgRegCfgData(device);
            break;
        case MAX22190RegisterType::kInEn:
            data = readInEnRegCfgData(device);
            break;
        case MAX22190RegisterType::kFault2:
            data = 0;
            break;
        case MAX22190RegisterType::kFault2En:
            data = readFault2EnRegCfgData(device);
            break;
        case MAX22190RegisterType::kGpo:
            data = readGpoRegCfgData(device);
            break;
        case MAX22190RegisterType::kFault1En:
            data = readFault1EnRegCfgData(device);
            break;
        case MAX22190RegisterType::kNop:
            data = 0;
            break;
    }
    return data;
}

uint8_t DigitalInputsDriverCfg::readInEnRegCfgData(Device device)
{
    uint8_t reg_data = 0;
    for(uint8_t cur_input = 0;
        cur_input < static_cast<uint8_t>(Input::kNoInputs);
        cur_input++){
        if(configuration[static_cast<uint8_t>(device)]
               .inputs_cfg[cur_input]
               .input_enable == InputEnable::kInputEnabled){
            reg_data |= (1 << cur_input);
        }
    }
    return reg_data;
}

uint8_t DigitalInputsDriverCfg::readFltxRegCfgData(Device device, Input input)
{
    FltxRegBitMap reg_data;
    reg_data.byte = 0;
    if(configuration[static_cast<uint8_t>(device)]
           .inputs_cfg[static_cast<uint8_t>(input)]
           .wire_break_detection_enable ==
       WireBreakDetection::kWireBreakDetectionEnabled){
        reg_data.bits.wbe_bit = 1;
    }
    if(configuration[static_cast<uint8_t>(device)]
           .inputs_cfg[static_cast<uint8_t>(input)]
           .filter_enable == ProgrammableFilter::kProgrammableFilterBypassed){
        reg_data.bits.fbp_bit = 1;
    }
    reg_data.bits.delay =
        static_cast<uint8_t>(configuration[static_cast<uint8_t>(device)]
                                 .inputs_cfg[static_cast<uint8_t>(input)]
                                 .filter_delay);
    return reg_data.byte;
}

uint8_t DigitalInputsDriverCfg::readFault2EnRegCfgData(Device device)
{
    Fault2EnRegBitMap reg_data;
    reg_data.byte = 0;
    for(uint8_t cur_record = 0;
        cur_record < static_cast<uint8_t>(Fault2SrcInFault1Reg::kNoFault2Src);
        cur_record++){
        switch (configuration[static_cast<uint8_t>(device)]
                    .fault2_src_cfg[cur_record]
                    .fault2){
            case Fault2SrcInFault1Reg::kFault8CkeInFault1Reg:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault2_src_cfg[cur_record]
                       .usage == Fault2SrcUsage::kFault2SrcUsed){
                    reg_data.bits.fault8cke_bit = 1;
                }
                break;
            case Fault2SrcInFault1Reg::kOtShdnInFault1Reg:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault2_src_cfg[cur_record]
                       .usage == Fault2SrcUsage::kFault2SrcUsed){
                    reg_data.bits.otshdne_bit = 1;
                }
                break;
            case Fault2SrcInFault1Reg::kPinREFDIOpenInFault1Reg:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault2_src_cfg[cur_record]
                       .usage == Fault2SrcUsage::kFault2SrcUsed){
                    reg_data.bits.rfdioe_bit = 1;
                }
                break;
            case Fault2SrcInFault1Reg::kPinREFDIShortInFault1Reg:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault2_src_cfg[cur_record]
                       .usage == Fault2SrcUsage::kFault2SrcUsed){
                    reg_data.bits.rfdise_bit = 1;
                }
                break;
            case Fault2SrcInFault1Reg::kPinREFWBOpenInFault1Reg:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault2_src_cfg[cur_record]
                       .usage == Fault2SrcUsage::kFault2SrcUsed){
                    reg_data.bits.rfwboe_bit = 1;
                }
                break;
            case Fault2SrcInFault1Reg::kPinREFWBShortInFault1Reg:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault2_src_cfg[cur_record]
                       .usage == Fault2SrcUsage::kFault2SrcUsed){
                    reg_data.bits.rfwbse_bit = 1;
                }
                break;
        }
    }
    return reg_data.byte;
}

uint8_t DigitalInputsDriverCfg::readFault1EnRegCfgData(Device device)
{
    Fault1EnRegBitMap reg_data;
    reg_data.byte = 0;
    for(uint8_t cur_record = 0;
        cur_record < static_cast<uint8_t>(
            FaultPinActivationSrc::kNoFaultPinActivationSrc);
        cur_record++){
        switch(configuration[static_cast<uint8_t>(device)]
                   .fault_pin_activation_cfg[cur_record]
                   .src){
            case FaultPinActivationSrc::kFaultPinActivationCrc:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits.crce_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivationPor:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits.pore_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivationFault2:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits.fault2e_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivationAlarmT2:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits.alarmt2e_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivationAlarmT1:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits.alarmt1e_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivation24VL:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits._24vle_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivation24VM:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits._24vme_bit = 1;
                }
                break;
            case FaultPinActivationSrc::kFaultPinActivationWireBreak:
                if(configuration[static_cast<uint8_t>(device)]
                       .fault_pin_activation_cfg[cur_record]
                       .usage == FaultPinActivationSrcUsage::kUsed){
                    reg_data.bits.wbge_bit = 1;
                }
                break;
        }
    }
    return reg_data.byte;
}

uint8_t DigitalInputsDriverCfg::readCfgRegCfgData(Device device)
{
    CfgRegBitMap reg_data;
    reg_data.byte = 0;
    if(configuration[static_cast<uint8_t>(device)].short_circuit_detection ==
       ShortCircuitDetection::kShortCircuitDetectionEnabled){
        reg_data.bits.refdi_sh_ena_bit = 1;
    }
    if(configuration[static_cast<uint8_t>(device)].filters_operation ==
       FiltersOperation::kFiltersOperationFixed){
        reg_data.bits.clrf_bit = 1;
    }
    if(configuration[static_cast<uint8_t>(device)].flags_clear_method ==
       Flags24VClearMethod::k24VFlagClearedByFault1RegReading){
        reg_data.bits._24vf_bit = 1;
    }
    return reg_data.byte;
}

uint8_t DigitalInputsDriverCfg::readGpoRegCfgData(Device device)
{
    GpoRegBitMap reg_data;
    reg_data.byte = 0;
    if(configuration[static_cast<uint8_t>(device)].fault_pin_cfg ==
       FaultPinCfg::kFaultPinSticky){
        reg_data.bits.stk_bit = 1;
    }
    return reg_data.byte;
}

The intended usage of the driver is as follows:

- all the configuration information is in the FpgaDriversCfg struct
- the instance of the DigitalInputsDriver sits, along with the instances of the other drivers, in the "container" Hal

FpgaDriversCfg

#include "DigitalInputsDriverCfg.h"

struct FpgaDriversCfg {
    DigitalInputsDriverCfg::MAX22190Config di_devices_cfg[static_cast<uint8_t>(
        DigitalInputsDriverCfg::Device::kNoMAX22190Devices)]={
      {DigitalInputsDriverCfg::Device::kMAX22190Device_0,
       {{DigitalInputsDriverCfg::Input::kInput_01,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_02,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_03,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_04,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_05,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_06,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_07,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_08,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms}},
       DigitalInputsDriverCfg::Flags24VClearMethod::
           k24VFlagClearedBySpiTransactionFault1RegReading,
       DigitalInputsDriverCfg::FiltersOperation::kFiltersOperationNormal,
       DigitalInputsDriverCfg::ShortCircuitDetection::kShortCircuitDetectionEnabled,
       {{DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kFault8CkeInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kOtShdnInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFDIOpenInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFDIShortInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFWBOpenInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFWBShortInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed}},
       DigitalInputsDriverCfg::FaultPinCfg::kFaultPinSticky,
       {{DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationCrc,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationPor,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationFault2,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationAlarmT2,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationAlarmT1,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivation24VL,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivation24VM,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationWireBreak,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed}}},
      {DigitalInputsDriverCfg::Device::kMAX22190Device_1,
       {{DigitalInputsDriverCfg::Input::kInput_01,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_02,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_03,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_04,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_05,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_06,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_07,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms},
        {DigitalInputsDriverCfg::Input::kInput_08,
         DigitalInputsDriverCfg::InputEnable::kInputEnabled,
         DigitalInputsDriverCfg::WireBreakDetection::kWireBreakDetectionEnabled,
         DigitalInputsDriverCfg::ProgrammableFilter::kProgrammableFilterUsed,
         DigitalInputsDriverCfg::FilterDelay::kInputFilterDelay20ms}},
       DigitalInputsDriverCfg::Flags24VClearMethod::
           k24VFlagClearedBySpiTransactionFault1RegReading,
       DigitalInputsDriverCfg::FiltersOperation::kFiltersOperationNormal,
       DigitalInputsDriverCfg::ShortCircuitDetection::kShortCircuitDetectionEnabled,
       {{DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kFault8CkeInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kOtShdnInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFDIOpenInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFDIShortInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFWBOpenInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed},
        {DigitalInputsDriverCfg::Fault2SrcInFault1Reg::kPinREFWBShortInFault1Reg,
         DigitalInputsDriverCfg::Fault2SrcUsage::kFault2SrcUsed}},
       DigitalInputsDriverCfg::FaultPinCfg::kFaultPinSticky,
       {{DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationCrc,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationPor,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationFault2,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationAlarmT2,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationAlarmT1,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivation24VL,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivation24VM,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed},
        {DigitalInputsDriverCfg::FaultPinActivationSrc::kFaultPinActivationWireBreak,
         DigitalInputsDriverCfg::FaultPinActivationSrcUsage::kUsed}}}};

    DigitalInputsDriverCfg dig_inputs_cfg;

    FpgaDriversCfg():dig_inputs_cfg(di_devices_cfg){}
};

Hal

#include "FpgaDriversCfg.h"
#include "DigitalInputsDriver.h"

class Hal{
public:
    DigitalInputsDriver dig_in;
    Hal();
    void initialize(void);
private:
    FpgaDriversCfg fpga_configuration;
};

#include "Hal.h"
#include "FpgaDriversCfg.h"
#include "Registers.h"

Hal::Hal():
    dig_in(&(fpga_configuration.dig_inputs_cfg), 1){}

void Hal::initialize(void){
    dig_in.initialize();
}

main

#include <cstdlib>
#include "Hal.h"

int main(int argc, char** argv){
    Hal hal;
    hal.initialize();
    while(1){
        hal.dig_in.update();
        // the methods getInputState, isFault1Active, isFault2Active are intended
        // to be called from different tasks of the RTOS
    }
}

I would like to ask you mainly for an assessment of the whole driver design, i.e. how the driver is divided into the C++ classes and how those classes interact. Further, I have been thinking about the possibility of using some design pattern for working with the Register objects. Thank you in advance for any notes.

Answer:

Add include guards

I am not seeing any include guards in your code. It is common practice to add those to header files; this allows them to depend on other header files without having to worry about loops and duplicates. The simplest solution is to add the following to the top of each header file:

    #pragma once

This is supported by most compilers, however it is not standard C++. The standards-compliant way is to write a header file like so:

    #ifndef HEADERFILENAME_H
    #define HEADERFILENAME_H

    // Contents of header file come here
    ...

    #endif

However, you need to replace HEADERFILENAME with something unique for each header file; typically the header's filename is used for that.
Use of enums

In principle, using enum class for constants gives you excellent type safety; however, there are some drawbacks, in particular the need to cast them whenever you need to get their integer value. I would keep using enum classes for things where you have really distinct names for each possible value, like the fault codes. However, I would not use them for pin numbers if you are just going to give the pin numbers names that have a one-to-one correspondence to their value, like in:

    enum class Input {
        kDi_00,
        kDi_01,
        ...
    };

Also, I see you prefixed all the enum names with a k. This is probably a leftover from a coding style using Hungarian notation, but since you don't use Hungarian notation for anything else, it doesn't make much sense. I would remove the ks; they just add visual noise and don't do anything to make the code safer.

Avoid starting identifiers with underscores

There are certain rules about using underscores in identifiers. In particular, starting with an underscore or using a double underscore is reserved in some situations. Your use is fine according to those rules, but if you were not aware of this to begin with, I would advise you to not start any names with underscores.

I see you use those always in constructors, but they are almost never necessary. You only need them if you would shadow a member variable, and need to access both the member variable and the parameter at the same time in the body. However, note that you can use the same name as a member variable if you only use it in a constructor's initializer list, like so:

    Register::Register(DigitalInputsDriverCfg::Device device,
                       DigitalInputsDriverCfg::MAX22190RegisterType type,
                       uint8_t address,
                       Transceiver *transceiver)
        : device{device},
          type{type},
          address{address},
          transceiver{transceiver}
    {...}

Avoid the array of pointers in DigitalInputsDriver

There is no need to have both device0, device1 and an array devices[].
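On the cast noise that comes with enum class: it can be reduced with a small helper in the spirit of C++23's std::to_underlying. A sketch (the helper and the shortened Input enum here are illustrative, not from the post):

```cpp
#include <cstdint>
#include <type_traits>

// Generic helper: converts any scoped-enum value to its underlying integer type,
// replacing the repeated static_cast<uint8_t>(...) spelled out at every use site.
template <typename E>
constexpr auto to_underlying(E e) noexcept
{
    return static_cast<std::underlying_type_t<E>>(e);
}

// Illustrative enum with an explicit underlying type, as in the post.
enum class Input : uint8_t { kInput_01, kInput_02, kNoInputs };
```

With this, an index expression like `inputs_cfg[to_underlying(input)]` reads noticeably better than `inputs_cfg[static_cast<uint8_t>(input)]` while keeping the same type safety.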
I assume the reason you have this is because you want to be able to initialize device0 and device1 in the constructor's initializer list, but wanted to be able to access them as an array. You can remove the redundancy and get the best of both worlds though, by writing it as follows:

    class DigitalInputsDriver {
        ...
        MAX22190 devices[2];
    };

    DigitalInputsDriver::DigitalInputsDriver(DigitalInputsDriverCfg *dig_in_cfg,
                                             uint16_t spi_device_id)
        : transceiver{spi_device_id},
          devices{
              {DigitalInputsDriverCfg::Device::kMAX22190Device_0, dig_in_cfg, &transceiver},
              {DigitalInputsDriverCfg::Device::kMAX22190Device_1, dig_in_cfg, &transceiver},
          }
    {}

You'll have to update all the code that uses devices[] now that it no longer contains pointers but values, for example:

    void DigitalInputsDriver::update(void)
    {
        for (MAX22190 &device : devices) {
            device.update();
        }
    }

Make member functions that do not modify state const

Functions that do not modify state like member variables should be marked const, so the compiler can generate more optimal code. It will also make the compiler generate an error if you do accidentally write to a member variable from a const function. For example:

    class MAX22190 : public TransactionEndListener {
        ...
        bool isReady(void) const;
        ...
    };

    bool MAX22190::isReady(void) const
    {
        return device_ready && device_configured;
    }

Prefer member variable initialization where appropriate

Instead of initializing member variables in the constructor, you can sometimes also initialize them at the place where the member variables are declared. For example:

    class Register {
        ...
        bool configured = false; // or bool configured{false}; or bool configured{};
        bool pending = false;
        bool refreshed = false;
        ...
    };

This is especially useful if you have multiple constructors in the class, since it then avoids a lot of repetition.
Use of placement new

I was a bit surprised to see the following code:

    bool Register::read(void)
    {
        if (!transceiver->isBusy()) {
            new (&read_reg_request) ReadRegRequestMsg(device, address, transceiver);
            ...

Placement new is something you do in rather rare circumstances where you either want to avoid default construction or where copy assignment is not possible. There is no problem copy-assigning a ReadRegRequestMsg, and since read_reg_request was already default constructed, there is no performance gain here. I would replace this line of code with:

    read_reg_request = ReadRegRequestMsg(device, address, transceiver);

But upon further inspection, why store the request in a member variable in the first place? It is not used outside Register::read(), so it could just have been a local variable. But the only thing you do is call write() on it, so you don't even need to store the result in a local variable, and can just write:

    bool Register::read(void)
    {
        if (!transceiver->isBusy()) {
            ReadRegRequestMsg(device, address, transceiver).write();
            ...

But then that brings me to:

Don't overcomplicate things

Why is ReadRegRequestMsg a class, why does it inherit from class Message which itself doesn't do anything useful, when the only thing you need is the function that writes a message to a device? It could be a stand-alone function:

    void writeReadRegRequestMsg(DigitalInputsDriverCfg::Device device,
                                uint8_t address, Transceiver *transceiver)
    {
        union {
            struct {
                uint32_t crc      : 5;
                uint32_t fill     : 11;
                uint32_t reg_addr : 7;
                uint32_t msb      : 1;
            } bits;
            uint8_t bytes[3];
        } msg;

        msg.bits.msb = 0;
        msg.bits.reg_addr = address;
        msg.bits.fill = 0;
        uint8_t crc = crcMAX22190(msg.bytes[2], msg.bytes[1], msg.bytes[0]);
        msg.bits.crc = crc;
        transceiver->putBytes(msg.bytes);
        transceiver->startTransaction(device);
    }

Or perhaps better is to make this a member function of Register.
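One more thing worth flagging about the message unions themselves: overlaying a bit-field struct with a byte array gives a byte layout that is implementation-defined (bit-field packing order and the target's endianness both matter), so the same union can serialize differently under another compiler or on another MCU. A portable alternative is to pack the frame with explicit shifts and masks. This is a sketch, not from the post; the byte order here (command byte first, then data, then the flags/CRC byte) and the function name are assumptions for illustration:

```cpp
#include <cstdint>

// Packs a 24-bit write-request frame with explicit arithmetic instead of a
// union of bit-fields, so the result is identical on every compiler/target.
// Assumed layout: MSB of byte 0 is the R/W flag (1 = write), the remaining
// 7 bits are the register address; byte 1 is the data; the low 5 bits of
// byte 2 carry the CRC, the upper 3 bits stay zero.
void packWriteRequest(uint8_t address, uint8_t data, uint8_t crc, uint8_t out[3])
{
    out[0] = static_cast<uint8_t>(0x80u | (address & 0x7Fu)); // R/W=1, 7-bit address
    out[1] = data;                                            // payload byte
    out[2] = static_cast<uint8_t>(crc & 0x1Fu);               // 5-bit CRC, fill = 0
}
```

The cost is a little more arithmetic at the call site; the gain is that the on-the-wire frame no longer depends on how the compiler happens to lay out bit-fields.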
For-case loops

There are several parts of the code that look suspiciously like for-case-loops, like MAX22190::configure(), Configure::configure(), and Refresher::refresh(). Is this because you have modelled the system as a state machine, and want to advance the state one step at a time? The problem is that this makes the flow of the code hard to read. I would rather expect something like:

    bool MAX22190::configure(void)
    {
        for (auto reg: register_map) {
            configurator.configure(*reg);
        }
        return true;
    }

And in turn have configurator.configure() do all the steps necessary to configure the register by itself.
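If the one-step-per-call behaviour of the for-case loops is actually intentional (for example, to start one non-blocking SPI transaction per update() tick), an explicit cursor stored in the object keeps that property while staying readable. A minimal sketch with illustrative names, not the post's actual classes:

```cpp
#include <cstddef>
#include <cstdint>

// Non-blocking "configure one register per call": a plain cursor replaces the
// for-case pattern, and the caller just polls step() until it reports done.
class Configurator {
public:
    // Configures the next register (if any) and returns true once all are done.
    bool step()
    {
        if (next_ < kNumRegs) {
            configureRegister(next_);  // would start one SPI transaction here
            ++next_;
        }
        return next_ == kNumRegs;
    }

private:
    static constexpr std::size_t kNumRegs = 4;  // illustrative register count
    std::size_t next_ = 0;                      // index of next register to configure
    uint8_t configured_[kNumRegs] = {};         // stand-in for real per-register work

    void configureRegister(std::size_t i) { configured_[i] = 1; }
};
```

Each call does exactly one unit of work, the state lives in one named variable instead of being encoded as a loop-variable-plus-switch, and the "all done" condition is a single comparison.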
{ "domain": "codereview.stackexchange", "id": 40210, "tags": "c++, object-oriented, embedded, device-driver" }
Electrostatic energy of capacitors and point charges
Question: A number of questions have previously been asked about electrostatic energy for a system of conductors. But I am still confused about this topic. Suppose we have an ideal capacitor consisting of two spheres both of radius a. One sphere has the positive charge $Q_A$ = Q and its center is located at point A. The other sphere has the negative charge $Q_B$ = -Q and its center is located at point B. Whatever the size of the radius a, the electrostatic energy W for this system is given by $W =\frac{QV}{2}$ where $V = \phi_A - \phi_B$, the potential difference between the conductors. $\phi_A$ is larger than $\phi_B$ so W is positive. Now we let a tend to $0$ until we have two point charges Q and -Q. We conclude that W for the two point charges is positive. But that is wrong. Having two point charges $Q_A$ = Q and $Q_B$ = -Q we can still use $W =\frac{QV}{2}$ if we define $\phi_A$ as the potential at point A due to $Q_B$ with $Q_A$ absent and $\phi_B$ as the potential at point B due to $Q_A$ with $Q_B$ absent. Then we get $W = -\frac{1}{4\pi\epsilon_0}\frac{Q^2}{r}$ where r is the distance between $Q_A$ and $Q_B$. This formula is correct as it follows immediately from the definition of W. So W is actually negative for a system consisting of two point charges Q and -Q. Now I wonder, what went wrong in the first argument? Answer: You are taking limits without sufficient caution. In the first calculation $\phi_A$ is the potential relative to infinity and it tends to infinity in the limit. So the result here is undefined. Also you are not comparing like with like. In the second calculation $\phi$ is differently defined: you have introduced the concept of the potential due to only the other charge. If we use instead the potential relative to infinity then again the answer is undefined.
{ "domain": "physics.stackexchange", "id": 89946, "tags": "electrostatics, charge, potential-energy, capacitance" }
Relativistic velocity of the center of momentum frame
Question: In Newtonian mechanics the expression for the velocity of the center of momentum frame is $v_{CM}=\frac{\sum_i m_i v_i}{\sum_i m_i}$, where $v_i$ is the velocity of the particle $i$ in the lab frame. Is there any expression similar to this one in special relativity? Answer: $\frac{\vec v_{CM}}{c}=\frac{ \sum \vec p_{i}c }{ \sum E_i }=\frac{\vec P_{SYS}c}{E_{SYS}}$. On an energy-momentum diagram, this says add up all of the 4-momenta to get the 4-momentum of the system. The spatial velocity of the system 4-momentum is essentially the slope of that 4-momentum vector: the ratio of the spatial-components (the vector sum of relativistic-momenta) and the temporal-components (the sum of the relativistic energies). For massive particles, this becomes $\frac{ v_{CM}}{c}=\frac{\sum m_i c^2\sinh\theta_i }{\sum m_i c^2\cosh\theta_i}=\frac{\sum \gamma_i m_i v_i c}{\sum \gamma_i m_i c^2}$ or $\vec v_{CM}= \frac{\sum \gamma_i m_i \vec v_i }{\sum \gamma_i m_i}$
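The massive-particle formula can be checked with a minimal numeric sketch (Python, 1D, units with $c = 1$; the function name and input format are mine, not from the answer):

```python
from math import sqrt

def v_cm(particles):
    """Relativistic centre-of-momentum velocity in 1D, in units with c = 1.

    particles: iterable of (mass, velocity) pairs with |velocity| < 1.
    Implements v_cm = sum(p_i) / sum(E_i), i.e. P_sys / E_sys.
    """
    p_tot = e_tot = 0.0
    for m, v in particles:
        gamma = 1.0 / sqrt(1.0 - v * v)
        p_tot += gamma * m * v   # relativistic momentum (c = 1)
        e_tot += gamma * m       # relativistic energy  (c = 1)
    return p_tot / e_tot
```

Two equal masses moving head-on at the same speed give $v_{CM} = 0$, and in the low-velocity limit the expression reduces to the Newtonian $\sum m_i v_i / \sum m_i$.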
{ "domain": "physics.stackexchange", "id": 69486, "tags": "special-relativity, kinematics, reference-frames" }
RAII style API wrapper for PyMongo
Question: I just watched Raymond Hettinger's talk on making Python more Pythonic and realized I should be putting a lot of his ideas into practice, particularly wrapping APIs in a class that makes everything simpler and easy to use. Here's what I'm doing to wrap PyMongo: from pymongo import MongoClient class MongoDB(object): """Provides a RAII wrapper for PyMongo db connections. Available collection functions limited to those in attributes_to_pass. Number of simultaneous connection users also tracked. """ attributes_to_pass = ["update", "insert", "count"] client = None num_users = 0 def __init__(self, db, collection): MongoDB.client = MongoDB.client or MongoClient() self.collection = MongoDB.client[db][collection] MongoDB.num_users += 1 def __enter__(self, *args): return self def __exit__(self, type, value, traceback): MongoDB.num_users -= 1 if MongoDB.num_users is 0: self.client.close() def __getattr__(self, attr): if attr in MongoDB.attributes_to_pass: return getattr(self.collection, attr) else: return getattr(self, attr) def main(): with MongoDB(db = "db1", collection = "c1") as m: print(m.count()) m.update({"jello":5} , {"hello":"you"}, upsert = True) print(m.count()) m.insert({"joe":6}) with MongoDB(db ='db1', collection = 'c2') as j: j.insert({"joe":6}) print(j.count()) if __name__ == "__main__": main() I would really appreciate all suggestions on how to make this better. Answer: Great that you provided a docstring! You made one minor formatting mistake, though: there should be a blank line between your brief single-line summary and the rest of the docstring. It's recommended in the style guide partially for readability and partially for script parsers. class MongoDB(object): """Provides a RAII wrapper for PyMongo db connections. Available collection functions limited to those in attributes_to_pass. Number of simultaneous connection users also tracked. """ It seems like attributes_to_pass is actually a constant.
If that is the case, it should be UPPER_SNAKE_CASE to make clear that it is, especially when mixed with attributes that often change like client and num_users. It should also be a tuple, as tuples are immutable so will not change unless updated in the source code. ATTRIBUTES_TO_PASS = ("update", "insert", "count") Also, when you call on these constants you refer to the class name. While that's possible, I'd personally say it's less readable. For a second, I thought you were calling on a class you had imported. You can just use self instead of the class name. def __getattr__(self, attr): if attr in self.ATTRIBUTES_TO_PASS: return getattr(self.collection, attr) else: return getattr(self, attr)
{ "domain": "codereview.stackexchange", "id": 16100, "tags": "python, wrapper, pymongo, raii" }
is/can the probabilistic nature of quantum mechanics be explained by general relativity?
Question: It seems that as you drop down to smaller and smaller distance scales you'd have greater and greater relative discrepancies of occurrences in the universe when you consider everything to be relative. Can these discrepancies explain, if not produce, the weird probabilistic phenomena of the quantum realm? If not, what don't I understand? Why is this intuition wrong? EDIT: To try to be more clear, allow me to introduce an analogy. If you're familiar with blockchain technology you'll know that each computer can be thought of as a node in the network. They share messages back and forth in a broadcast manner. Sometimes a computer will get a message that others don't, and if it wins the race to determine the consensus of what has happened in the most recent past the other computers will adopt its view of the history of the network. In other words, the longest coherent chain wins. Applying this analogy to an elementary view of quantum physics, we could see a computer as a particle, or point, messages between them as forces between particles, and consensus as the coherence of matter and energy on a macro scale. In the blockchain network, a computer gets messages A, B, C in that order, but another computer gets the messages in the B, C, A order. Eventually, they will agree when they adopt the longer chain (the larger spacetime) and all will arrive at a consensus. Is this what is happening at the quantum scale? One particle, traveling at a high speed in a certain direction, sees the universe, especially its local universe, in a very particular order, whereas other particles' points of view disagree with its as they travel in different directions at high rates of speed? Could those "discrepancies in views of the universe" (the order in which every particle feels forces) create the probabilistic nature of the quantum world? Answer: There is no evidence at all to suggest that quantum mechanics supervenes on general relativity.
General relativity, on a small scale, predicts none of the effects that we see in quantum mechanics. If it did, we wouldn't need quantum mechanics. There's nothing probabilistic about GR, nor does GR form any of the structures that arise in QM, nor does GR have anything to say about "spooky action at a distance." It does, however, provide an interesting take on magnetism.
{ "domain": "physics.stackexchange", "id": 41387, "tags": "quantum-mechanics, general-relativity, speed-of-light, probability, causality" }
How many peaks would there be in H1 NMR of ethene?
Question: Since the bonds can't rotate, would there be 2 peaks? Likewise, how many peaks would there be in trans-but-2-ene and cis-but-2-ene? Answer: You need to consider whether the protons in ethene are chemically equivalent, magnetically equivalent, or both. The Lewis structure of ethene: Because of the symmetry of the planar ethene molecule, all H atoms are chemically and magnetically equivalent. The spin system is therefore $\mathrm{A}_4$. The signal is not split because the environment is the same for all H atoms. The signal would therefore be a single peak. Compared to an "average" methyl group, the ethylene protons are less shielded, and would appear at a higher chemical shift than methyl groups.
{ "domain": "chemistry.stackexchange", "id": 4426, "tags": "nmr-spectroscopy, symmetry" }
Get the most frequent number and least frequent duplicate in array
Question: This code gets the most frequent number and the least frequent duplicated number in an array. How can I optimize it and improve its performance? public class X { public static String findMin(String[] numbers, int counter) { int count = 0; String elements = ""; for (String tempElement: numbers) { int tempCount = 0; for (int n = 0; n < numbers.length; n++) { if (numbers[n].equals(tempElement)) { tempCount++; if (tempCount > counter) { count = 0; break; } if (tempCount > count) { elements = tempElement; // System.out.println(elements); count = tempCount; } } } if (count == counter) { return elements; } } if (count < counter) { return ""; } return elements; } public static void main(String[] args) { String[] numbers = "756655874075297346".split(""); String elements = ""; int count = 0; for (String tempElement: numbers) { int tempCount = 0; for (int n = 0; n < numbers.length; n++) { if (numbers[n].equals(tempElement)) { tempCount++; if (tempCount > count) { elements = tempElement; // System.out.println(elements); count = tempCount; } } } } String x = ""; int c = 2; do { x = findMin(numbers, c++); } while (x == ""); System.out.println("Frequent number is: " + elements + " It appeared " + count + " times"); System.out.println("Min Frequent number is: " + x + " It appeared " + (c - 1) + " times"); } } Answer: First, you should care about indentation. Why split the numbers string when you can access individual characters with numbers.charAt(n)? It forces you to work with String instead of char. You don't check the validity of the string. Here it's hard-coded, but if you have to get it from an unknown source, a good habit is to validate it before using it. Also, you don't check whether there are duplicates or not. If numbers is "1234567890" your program goes into an infinite loop. You compute the "Max" count in main, and the "Min" in a function; try to be consistent.
For both computations, you make $n \cdot n$ iterations (where $n$ is the length of numbers), giving a complexity of $O(n^2)$ for both. Instead, try to construct a table of occurrences by traversing the string once. Then simply search this table for the min and max values in one traversal. Here's a very naive implementation, but I think it's a lot simpler to understand than yours, and surely more efficient. public class X { public static void main(String[] args) { String data = "756655874075297346"; int[] counts = new int[10]; for (int i = 0; i < data.length(); i++) { char n = data.charAt(i); if (n >= '0' && n <= '9') { counts[n-'0']++; } } int min_index = 0; int max_index = 0; int min_count = Integer.MAX_VALUE; int max_count = 0; for (int i = 0; i < 10; i++) { if (counts[i] >= max_count) { max_index = i; max_count = counts[i]; } if (counts[i] > 1 && counts[i] < min_count) { min_index = i; min_count = counts[i]; } } System.out.println("Frequent number is: " + (char)(max_index + '0') + " It appeared " + max_count + " times"); if (min_count < Integer.MAX_VALUE) { System.out.println("Min Frequent number is: " + (char)(min_index + '0') + " It appeared " + min_count + " times"); } else { System.out.println("There's no duplicates!"); } } } Here, the code prints the highest digit with the highest occurrence count (in case of multiple digits with the maximal count). If instead you want the lowest, change if (counts[i] >= max_count) to if (counts[i] > max_count). Conversely, it prints the lowest duplicated digit with the lowest count; to get the highest duplicated digit with the lowest count, change counts[i] < min_count to counts[i] <= min_count.
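The same single-traversal occurrence-table idea, sketched in Python for comparison (a hypothetical helper using collections.Counter, not part of the original review):

```python
from collections import Counter

def digit_extremes(data):
    """One-pass occurrence table, as the answer suggests (this helper and
    its return format are mine, not part of the original review).

    Returns ((max_digit, max_count), (min_dup_digit, min_dup_count) or None),
    where the second element is the least frequent *duplicated* digit.
    Assumes data contains at least one digit.
    """
    counts = Counter(ch for ch in data if ch.isdigit())
    max_digit, max_count = max(counts.items(), key=lambda kv: kv[1])
    dups = {d: c for d, c in counts.items() if c > 1}
    if not dups:
        return (max_digit, max_count), None  # no duplicates at all
    min_digit, min_count = min(dups.items(), key=lambda kv: kv[1])
    return (max_digit, max_count), (min_digit, min_count)
```

Like the Java version, this makes a single counting pass followed by a single scan of the (at most ten-entry) table, so it runs in O(n).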
{ "domain": "codereview.stackexchange", "id": 32728, "tags": "java, performance, array, statistics" }
For a diatomic molecule, what is the specific heat per mole at constant pressure/volume?
Question: At high temperatures, the specific heat at constant volume $\text{C}_{v}$ has three degrees of freedom from rotation, two from translation, and two from vibration. That means $\text{C}_{v}=\frac{7}{2}\text{R}$ by the Equipartition Theorem. However, I recall the Mayer formula, which states $\text{C}_{p}=\text{C}_{v}+\text{R}$. The ratio of specific heats for a diatomic molecule is usually $\gamma=\text{C}_{p}/\text{C}_{v}=7/5$. What is then the specific heat at constant pressure? Normally this value is $7/5$ for diatomic molecules? Answer: "At high temperatures, the specific heat at constant volume $C_v$ has three degrees of freedom from rotation, two from translation, and two from vibration." I can't understand this line. $C_v$ is a physical quantity, not a dynamical system. So how can it have degrees of freedom? You can speak of the degrees of freedom of an atom or molecule, but it is wrong to speak of the degrees of freedom of a physical quantity (like temperature, specific heat, etc.). Degrees of freedom is the number of independent coordinates necessary for specifying the position and configuration in space of a dynamical system. Now to answer your question, we know that the energy per mole of the system is $\frac{1}{2} fRT$, where $f$ is the number of degrees of freedom of the gas. $\therefore$ molar heat capacity, $C_v=(\frac{dE}{dT})_v=\frac{d}{dT}(\frac{1}{2}fRT)_v=\frac{1}{2}fR$ Now, $C_p=C_v+R=\frac{1}{2}fR+R=R(1+ \frac{f}{2})$ $\therefore$ $\gamma=1+ \frac{2}{f}$ Now for a diatomic gas: a diatomic gas has three translational (along the x, y, z axes) and two rotational (about the y and z axes) degrees of freedom, i.e. the total number of degrees of freedom is $5$. Hence $C_v=\frac{1}{2}fR=\frac{5}{2}R$ and $C_p=R(1+ \frac{f}{2})=R(1+ \frac{5}{2})=\frac{7}{2}R$
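A quick numeric sketch of $C_v=\frac{1}{2}fR$, $C_p=C_v+R$ and $\gamma=1+2/f$ (Python; the function and variable names are mine):

```python
R = 8.314  # molar gas constant, J mol^-1 K^-1

def heat_capacities(f):
    """Molar heat capacities from the equipartition result C_v = f R / 2.

    f: number of active degrees of freedom per molecule.
    Returns (C_v, C_p, gamma) with C_p = C_v + R (Mayer's relation).
    """
    c_v = 0.5 * f * R
    c_p = c_v + R            # Mayer formula
    return c_v, c_p, c_p / c_v

# Diatomic gas with translation + rotation only (f = 5): gamma = 7/5.
cv, cp, gamma = heat_capacities(5)
```

With vibration active ($f = 7$) the same function gives $C_p = \frac{9}{2}R$ and $\gamma = 9/7$, matching $\gamma = 1 + 2/f$.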
{ "domain": "physics.stackexchange", "id": 25699, "tags": "thermodynamics, degrees-of-freedom" }
Complexity of "destroying" the graph's minimum spanning tree weight
Question: Assume we have a connected input graph $G=(V,E)$ and a weight function $w:E\to\mathbb N$. Denote by $w(G)$ the weight of a minimum spanning tree for a graph $G$. For this purpose, define $w(G')$ as $\infty$ for any graph $G'$ that is not connected. Consider the following problem: Given an integer $k\in\mathbb N$, decide whether there exists an edge set $E'\subseteq E$, such that $|E'|=k$ and $w((V,E\setminus E')) > w(G)$? What is the complexity of the above problem? Answer: EDIT As noted in comments below, I originally read the question incorrectly. I thought the goal was to determine if removing $k$ edges could increase the MST weight of $G$ above some given threshold $t$. This problem is often known as "$k$ Most Vital Edges (for MST)", simply $k$-MVE (or sometimes $k$-MVE-MST to distinguish from other variations), as cited in my original answer. However, the asker poses instead the question of whether or not removing $k$ edges could increase the MST weight of $G$ by any amount. Let's call this problem "$k$ Any Vital Edges" or $k$-AVE. We will show that $k$-AVE is in P. Let $G=(V,E)$ be an edge-weighted graph with weight function $w:E\rightarrow \mathbb{N}$. The goal of $k$-AVE is to find a subset of edges $S \subseteq E$ of size $k$ such that the MST weight of $G$ is strictly less than the MST weight of $G\setminus S \triangleq (V, E\setminus S)$. We'll call such an $S$ a valid $k$-AVE set. We will proceed by outlining necessary and sufficient conditions for $S$ to be a valid $k$-AVE set. Let $T$ be an MST of $G$. For every edge $e \in T$, we may associate a partition of $V$ into two parts (i.e.: a cut) based on how $e$ splits $T$ (i.e.: each part consists of the vertices reachable from either endpoint of $e$ using only edges in $T \setminus \{e\}$). In particular, we will denote by $C_e$ the cut-set of this associated partition (i.e.: set of edges in $G$ that straddle this partition, including $e$ itself).
Now, by the Cut Property of MSTs, we know that each $e$ has minimum weight among all edges in $C_e$; there may be more than one edge in $C_e$ with that same minimum weight. Let these sets be denoted $M_e \triangleq \{e' \in C_e | w(e') = w(e)\}$. Suppose that, for some $e$, $M_e\subseteq S$. Then, we conclude that $G\setminus S$ has larger MST weight than $G$. Otherwise, we would have some MST $T'$ in $G$ that did not use any of the minimum weight edges across the cut-set $C_e$, a contradiction of the Cut Property. This shows that a sufficient condition for $S$ to be a valid $k$-AVE set is for it to contain $M_e$ for some $e$. Next, we will show that this condition is also necessary. Suppose to the contrary that, for all $e\in T$, $S$ excludes some $e' \in M_e$ (of course allowing that $e'$ may equal $e$). In general, removing $S$ from $G$ splits $T$ into a forest. We will reconstruct a new minimum spanning tree $T'$ from $T\setminus S$ by using the edges spared by $S$, as follows: Let $T' = T\setminus S$ For each $e$ in $T$ do $\qquad$If no edge in $T'$ spans $C_e$ then let $T' = T' \cup \{e'\}$ Return $T'$ We leave as an exercise to the reader to prove that this procedure reconstructs a spanning tree $T'$ with the same weight as $T$. (Outline: each edge added in step 3 preserves acyclicity, and no components of $T\setminus S$ can remain separated.) These necessary and sufficient criteria naturally suggest a poly-time algorithm for determining if $G$ has a valid $k$-AVE set. First, construct an MST $T$. For each $e\in T$, we compute the cut-set $C_e$ and the set of minimum weight edges in $C_e$, $M_e$. If for some $e$, $|M_e| \leq k$, then we can simply take any $S \supseteq M_e$ with $|S| = k$ as our valid $k$-AVE set. Otherwise, if all $M_e$ sets are larger than size $k$, we conclude that $G$ has no $k$-AVE set. Original Answer Below Shen, Hong. "Finding the k most vital edges with respect to minimum spanning tree." Acta Informatica 36.5 (1999): 405-424.
For a connected, undirected and weighted graph $G = (V,E)$, the problem of finding the $k$ most vital edges of $G$ with respect to minimum spanning tree is to find $k$ edges in $G$ whose removal will cause greatest weight increase in the minimum spanning tree of the remaining graph. This problem is known to be NP-hard for arbitrary $k$.
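The answer's polynomial-time test can be sketched in Python as follows (a deliberately naive illustration, assuming $k \le |E|$; all names are mine, not the author's):

```python
def mst_kruskal(n, edges):
    """Plain Kruskal's algorithm with union-find.

    edges: list of (weight, u, v) with vertices 0..n-1; the graph is
    assumed connected. Returns the list of MST edges.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree


def has_k_ave_set(n, edges, k):
    """The poly-time k-AVE test from the answer: True iff some tree edge e
    has |M_e| <= k (so deleting M_e, padded to size k, raises the MST weight).
    """
    tree = mst_kruskal(n, edges)
    for w, u, v in tree:
        # Vertices on u's side of the cut induced by removing (u, v) from T.
        adj = {i: [] for i in range(n)}
        for _, a, b in tree:
            if (a, b) != (u, v):
                adj[a].append(b)
                adj[b].append(a)
        side, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in side:
                    side.add(y)
                    stack.append(y)
        # C_e: edges straddling the cut; M_e: its minimum-weight members.
        cut = [wc for wc, a, b in edges if (a in side) != (b in side)]
        if sum(1 for wc in cut if wc == min(cut)) <= k:
            return True
    return False


# Unit triangle: every cut contains two weight-1 edges, so |M_e| = 2 for
# every tree edge: one deletion can never raise the MST weight, two can.
triangle = [(1, 0, 1), (1, 1, 2), (1, 0, 2)]
```

On the triangle, has_k_ave_set(3, triangle, 1) is False while has_k_ave_set(3, triangle, 2) is True, matching the $|M_e| \le k$ criterion.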
{ "domain": "cstheory.stackexchange", "id": 3718, "tags": "cc.complexity-theory, ds.algorithms, graph-theory, graph-algorithms, tree" }
Vaccines against bacterial endotoxins
Question: Today in class, there was a discussion going on about what part of pathogens (which can act as antigens) can be used to make vaccines. There was this point where our teacher said that bacterial exotoxins can be used for making vaccines. My questions are: Can bacterial endotoxins be used in the making of vaccines? If not, then why is it so? What are the problems we face? Answer: There are several papers that use alkaline hydrolysis to delipidate or "detoxify" the endotoxin for use as a vaccine (1, 2, 3). The point of many of these papers is to develop a detoxified LPS (dLPS) vaccine to protect against sepsis in the pathogenesis and treatment of gram-negative infections. The detoxification is important, as the lipid-A portion of the LPS contributes significantly to sepsis. In this case you would be targeting the O-antigen or core oligosaccharide. So one of the problems that I can see is that there are many bacterial serotypes, comprised of many different O- and core groups. I'm not aware of any vaccines with that sort of breadth. Issue two is you get an infection and you need to administer these vaccines quite fast, if not prophylactically, as adaptive immunity takes a number of days to come up to speed. It's also not clear to me whether a prophylactic vaccine to E. coli is useful to the populace at large, epidemiologically (just as an example). The field also seems to lack clinical data on tolerability and efficacy, which is what I'd be more interested in, e.g. is it safe for humans and does it work?
{ "domain": "biology.stackexchange", "id": 7938, "tags": "vaccination, antigen, immunity, bacterial-toxins" }
Do time crystals require quantum entanglement?
Question: In this video (at this time in the video) the vlogger seems to make a statement that the only way time crystals can be achieved is if some of the atoms (at a periodic distance from each other) are entangled. But I can't tell if I trust him considering his background. In fact, even he states earlier in the video that he might be wrong. So is he right? Are time crystals only achieved via entanglement? https://youtu.be/ucwmGZ51X7E?t=126 Answer: A quantum time crystal does not, from my reading of Wilczek and others, appear to require entanglement, but the idea is interesting. It does seem plausible that a quantum time crystal model could be developed with entanglement. A quantum time crystal is just a periodicity in a system in time that has a lattice structure. An elementary quantum time crystal is then just a chain that is periodic in time. This chain would then be a measure of some periodicity in a system. Wilczek's time crystal is a curious system then that exhibits dynamics in the ground state. Normally a ground state is where the Hamiltonian acts trivially. However if time translation symmetry is violated then some type of motion in the ground state is possible. This comes very close to being a form of perpetual motion machine. Breaking time symmetry may though be involved with the arrow of time. The Wilczek time crystal assumes a charge $q$ confined to a ring of unit radius that contains a magnetic flux $2\pi\alpha/q$, with the gauge covariant momentum $\pi_\phi~=~\dot\phi~+~\alpha$, for $\phi$ an angle around the ring and $-i\partial/\partial\phi$ a generator of angular momentum. The Lagrangian is then $$ L~=~\frac{1}{2}\dot\phi^2~+~\alpha\dot\phi $$ and the Hamiltonian $$ H~=~\frac{1}{2}(\pi_\phi~-~\alpha)^2.
$$ For states defined as $|\ell\rangle~=~|e^{i\ell\phi}\rangle$ it is not hard to see that $\langle\dot\phi\rangle~=~\ell~-~\alpha$, and even for the ground state with $\ell~=~0$ there is the expectation $\langle\ell_0|\dot\phi|\ell_0\rangle~=~-\alpha$. The Page-Wootters model has two Hilbert spaces $H_1$ and $H_2$ with an entanglement of their states given by the product states of these Hamiltonians $H_1\otimes I_2~+~ I_1\otimes H_2$. A state then of the form $$ |\Psi\rangle~ =~\sum_{ij}c_{ij}(|\psi_i\rangle|\phi_j\rangle~+~ |\psi_j\rangle|\phi_i\rangle), $$ $|\psi_i\rangle~\in~H_1$ and $|\phi_j\rangle~\in~H_2$, is an entanglement of states from these two Hamiltonians. Now we take an arbitrary state of the form $|\chi\rangle~=~ \sum_i a_i|\psi_i\rangle~\in~H_1$ and project onto $|\Psi\rangle$ with $$ \langle\chi|\Psi\rangle~=~\sum_{ijk}a^*_k c_{ij}(\langle\psi_k|\psi_i\rangle|\phi_j\rangle~+~\langle\psi_k|\psi_j\rangle|\phi_i\rangle), $$ $$ ~=~\sum_{ijk}a^*_k c_{ij}(\delta_{ik}|\phi_j\rangle~+~\delta_{jk}|\phi_i\rangle) $$ $$ ~=~\sum_{ij}(a^*_ic_{ij}|\phi_j\rangle~+~a^*_jc_{ij}|\phi_i\rangle). $$ Since $c_{ij}~=~a^*_ia_j$ this is then written as $$ \langle\chi|\Psi\rangle ~=~ \sum_{ij}(|a_i|^2a_j|\phi_j\rangle ~+~ |a_j|^2a_i|\phi_i\rangle) ~=~ 2\sum_{ij}|a_i|^2a_j|\phi_j\rangle $$ The matrix element $c_{ij} ~=~ a^*_ia_j$ is a relative phase term $c_{ij} ~=~ e^{i\theta_i}e^{-i\theta'_j}$ and the difference in this relative phase is $\theta_i ~-~\theta'_j~=~\omega t$. This projection is then a way of measuring the phase of one system relative to another. This relative phase definition of time holds for a system with different ground states for $H_1$ and $H_2$. We then have something analogous to a time crystal. The main interest with the Page-Wootters model is to define time within the Wheeler-DeWitt equation $H\Psi[g]~=~0$. The occurrence of time may then be a relative phase with entangled states, which has analogues to a time crystal.
It has though been shown that time crystals are defined on an approximate vacuum, and hold for a Floquet oscillator. This means they are quasi-stable. This is certainly an interesting area to study, and I offer here only cursory observations. F. Wilczek, "Quantum Time Crystals," PRL $\bf 109$ 16 (2012). https://arxiv.org/abs/1202.2539v2 D. V. Else, B. Bauer, C. Nayak, "Floquet Time Crystals," Phys. Rev. Lett. $\bf 117$, 090402 (2016). https://arxiv.org/abs/1603.08001
{ "domain": "physics.stackexchange", "id": 51115, "tags": "quantum-entanglement, time-crystals" }
Doubt about Faraday's Law
Question: Faraday's law states that the circulation of the electric field E around a closed loop is equal to the rate of change of the magnetic flux through the area enclosed by the loop In integral form: $$\varepsilon =\oint \vec E \cdot d \vec l = - \frac{d\phi}{dt}$$ $\varepsilon$ is defined as the electromotive force, that is the work received by a unit charge as it goes around the closed circuit once. The electric field is a function of both time and space, $\vec E\,(\vec s, t)$. Given the above definition of e.m.f., as a unit charge travels around the closed circuit time passes and the electric field varies as a consequence of this, since it is a function of time. As far as my understanding goes, instead the line integral $ \oint \vec E \cdot d\vec l $ treats the electric field as constant with respect to time all along the path. So, to me, it seems like this isn't the real work received by a unit charge travelling around the closed loop. Shouldn't this time-dependency be taken into account when computing the line integral? Answer: So, to me, it seems like this isn't the real work received by a unit charge travelling around the closed loop. Yes, that's right - unless of course the emf is constant in time. Shouldn't this time-dependency be taken into account when computing the line integral? If you're trying to calculate the work done on an actual charge which is physically traveling around the loop, then yes. If you're making a statement about the relationship between the circulation of the electric field and the rate of change of the magnetic flux at a particular instant of time, then no. Faraday's law is ultimately the statement that the circulation of the electric field at some point $\mathbf x$ and some time $t$ is equal to (minus) the rate of change of the magnetic field at $(\mathbf x,t)$. 
In differential form, this reads $$\bigg(\nabla \times \mathbf E\bigg)(\mathbf r,t) = -\frac{\partial \mathbf B}{\partial t}(\mathbf r,t)$$ This is true at every point $\mathbf r$ and time $t$. Alternatively, this expression can be integrated over some spatial surface $S$ which is bounded by a loop $\partial S$, with the result being $$\oint_{\partial S} \mathbf E(\mathbf r,t) \cdot d\mathbf r = -\frac{d}{dt} \int_S\mathbf B(\mathbf r,t) \cdot d\mathbf S$$ Both the line integral and the time derivative of the surface integral are computed at fixed time $t$. The electrical work done on a point charge actually moving around the loop would be $$\oint \mathbf E\bigg(\mathbf r(t),t\bigg) \cdot \frac{d\mathbf r}{dt} dt$$ Contrast this with how you'd evaluate the integral at fixed time: $$\oint \mathbf E\bigg(\mathbf r(\lambda),t\bigg) \cdot \frac{d\mathbf r}{d\lambda} d\lambda$$ where we've parameterized our loop with parameter $\lambda$. In the former case, the time argument of the electric field changes, but in the latter case it remains fixed. If $\mathbf E$ is constant in time, then the two expressions of course coincide.
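The fixed-time versus moving-charge distinction can be made concrete with a numeric toy model (Python; the spatially uniform tangential field is an assumption for illustration, roughly what a growing solenoidal flux would produce on a circular loop):

```python
from math import pi

def circulation_fixed_time(f, t, steps=10000):
    """Line integral of E around a unit circle at one frozen instant t,
    for a purely tangential field of magnitude f(t), uniform in angle."""
    dtheta = 2 * pi / steps
    return sum(f(t) * dtheta for _ in range(steps))

def work_moving_charge(f, T, steps=10000):
    """Work per unit charge on a charge carried once around the same loop
    in total time T: the field is evaluated at the time the charge
    actually passes each element, so the time argument advances."""
    dt = T / steps
    dtheta = 2 * pi / steps        # uniform angular speed
    return sum(f((i + 0.5) * dt) * dtheta for i in range(steps))

# Toy field whose magnitude grows linearly in time.
f = lambda t: 1.0 + t
# At t = 0 the frozen-time circulation is 2*pi*f(0) = 2*pi, but a charge
# taking T = 1 to go around receives 2*pi*(1 + 1/2) = 3*pi of work.
```

For a field constant in time the two integrals coincide, exactly as the answer states; the difference appears only because the time argument advances during the transit.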
{ "domain": "physics.stackexchange", "id": 72974, "tags": "electromagnetism, electric-fields, electric-current" }
Pi molecular orbitals of polyenes
Question: As an organic chemist, I'm comfortable deriving the pi molecular orbitals of linearly conjugated systems to give the following result: In a qualitative sense, these molecular orbitals are easily arrived at. For a polyene of n atoms, the lowest energy combination will have 0 nodes (all in-phase) and the highest energy combination will have n-1 nodes (in, out, in, out...). Is there a way of arriving at the same result using a more rigorous, formal molecular orbital approach? Answer: These pi-type MOs are most commonly derived using Hückel MO theory. I've never been a fan of Wikipedia's technical articles, so for further reading I'd suggest using the QMUL resources online, which are very comprehensive. $\require{begingroup}\begingroup \newcommand{\ket}[1]{\left|#1\right>} \newcommand{\bra}[1]{\left<#1\right|} \newcommand{\braket}[1]{\left< #1 \right>}$ Theory To begin with, MOs are obtained by solving the secular equations: $$\mathbf{H}\mathbf{c} = E\mathbf{S}\mathbf{c}$$ The derivation of this will not be covered here (it can be found in e.g. Atkins' Molecular Quantum Mechanics), but it is worth mentioning what these things are. If we consider a polyene with $n$ atoms (e.g. for ethene, $n = 2$), then $\mathbf{H}$ is the Hamiltonian matrix, an $n \times n$ matrix whose elements $H_{ij}$ refer to the $i$-th and $j$-th p-orbitals: $H_{ij} = \braket{\mathrm{p}_i|H|\mathrm{p}_j}$. $\mathbf{c}$ is an $n \times 1$ column vector of coefficients: the element $c_i$ corresponds to the coefficient of the $i$-th p-orbital. That is to say, the MO $|\psi\rangle$ corresponding to the vector $\mathbf{c}$ is given by $$|\psi\rangle = c_1 |\mathrm{p}_1\rangle + c_2 |\mathrm{p}_2\rangle + \cdots + c_n |\mathrm{p}_n\rangle.$$ By solving the secular equations, we obtain a series of permitted vectors $\mathbf{c}$: each of these solutions corresponds to one MO. In general, if there are $n$ p-orbitals used as inputs, then we will obtain $n$ permitted MOs.
Each solution $\mathbf{c}$ is associated with a particular energy $E$, which is a scalar. $\mathbf{S}$ is the overlap matrix, whose elements are given by $S_{ij} = \braket{\mathrm{p}_i|\mathrm{p}_j}$. Hereafter, we will simplify the notation and use $|i\rangle$ in place of $|\mathrm{p}_i\rangle$ to denote the p-orbitals. Now, simple Hückel theory makes some key assumptions about the forms of the MOs as well as some of the integrals involved in a typical quantum chemical calculation: (1) π-type MOs are linear combinations of p-orbitals, and no other orbitals contribute (so-called "sigma-pi separation"); (2) the value of $\braket{a|b}$ (where $\ket{a},\ket{b}$ are p-type AOs on atoms $a$ and $b$) is $1$ if $a = b$ and $0$ otherwise; (3) the value of $\braket{a|H|b}$ is $$H_{ab} = \braket{a|H|b} = \begin{cases}\alpha \text{ if } a = b \\ \beta \text{ if atom }a\text{ is bonded to atom } b \\ 0 \text{ otherwise.}\end{cases}$$ Here, $\alpha$ and $\beta$ are just some constants. We don't have exact values for them (yet), but we can say that they are both negative. With these simplifications the secular equations can be readily solved. In fact, in simple Hückel theory the matrix $\mathbf{S}$ is simply equal to the identity matrix (see rule 2 above), so it can be completely ignored in the secular equations: $$\mathbf{Hc} = E\mathbf{c}.$$ You may recognise this as an eigenvalue equation for $\mathbf{H}$ (the original form, where $\mathbf{S}$ is not necessarily equal to the identity matrix, is called a generalised eigenvalue equation). As mentioned previously, for $n$ atoms there will be $n$ vectors $\mathbf{c}$ that will satisfy the secular equations, and $n$ corresponding values of the energy $E$. Thus, for the allyl cation (for example), there will be three column vectors $\mathbf{c}^{(1)}, \mathbf{c}^{(2)}, \mathbf{c}^{(3)}$ and three associated energies $E^{(1)}, E^{(2)}, E^{(3)}$.
Note here that $\mathbf{c}^{(1)}$ refers to the first possible solution for $\mathbf{c}$, whereas $c_1$ refers to the first component of $\mathbf{c}$ (i.e. the coefficient of p-orbital number one). The allyl cation Using the atom numbering scheme as shown above, and the assumptions mentioned in the previous section, we find that the Hamiltonian matrix $\mathbf{H}$ is: $$\mathbf{H} = \begin{pmatrix} \alpha & \beta & 0 \\ \beta & \alpha & \beta \\ 0 & \beta & \alpha \end{pmatrix}.$$ For example, the entry $\mathbf{H}_{12} = \beta$ because atoms 1 and 2 are bonded to each other, and the entry $\mathbf{H}_{13} = 0$ because atoms 1 and 3 are not bonded to each other. The secular equations then take the following form: $$\begin{pmatrix} \alpha & \beta & 0 \\ \beta & \alpha & \beta \\ 0 & \beta & \alpha \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = E\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix},$$ and the challenge is then to find the permissible vectors $\mathbf{c} = (c_1, c_2, c_3)$ which obey this equation, which are called eigenvectors. This process is outlined in many maths books, and is thus not covered here in full detail. Briefly, however, the idea is that we move the RHS over: $$\begin{pmatrix} \alpha - E & \beta & 0 \\ \beta & \alpha - E & \beta \\ 0 & \beta & \alpha - E \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \mathbf{0}, $$ and there can only be nontrivial solutions to this problem (i.e. solutions where $\mathbf{c} \neq \mathbf{0}$) if the secular determinant (i.e. the determinant of the matrix on the LHS) is zero: $$\begin{vmatrix} \alpha - E & \beta & 0 \\ \beta & \alpha - E & \beta \\ 0 & \beta & \alpha - E \end{vmatrix} = 0$$ Expanding this out allows you to obtain the eigenvalues. 
In increasing energy (recall that $\beta < 0$), they are $$\begin{align}E^{(1)} &= \alpha + \sqrt{2}\beta & E^{(2)} &= \alpha & E^{(3)} &= \alpha - \sqrt{2}\beta \end{align},$$ and the corresponding eigenvectors are $$\begin{align} \mathbf{c}^{(1)} &= \begin{pmatrix} 1/2 \\ 1/\sqrt{2} \\ 1/2 \end{pmatrix} & \mathbf{c}^{(2)} &= \begin{pmatrix} 1/\sqrt{2} \\ 0 \\ -1/\sqrt{2} \end{pmatrix} & \mathbf{c}^{(3)} &= \begin{pmatrix} 1/2 \\ -1/\sqrt{2} \\ 1/2 \end{pmatrix}. \end{align}$$ In other words, the lowest-energy pi-type MO, $\mathbf{c}^{(1)}$, is expressed as $$\ket{\psi^{(1)}} = \frac{1}{2}\ket{1} + \frac{1}{\sqrt{2}}\ket{2} + \frac{1}{2}\ket{3},$$ where $\ket{i}$ is the 2p atomic orbital on carbon $i$. Note that the coefficients all have the same sign, indicating that the p orbitals are all in phase with each other. However, the coefficient for carbon 2 (the middle carbon) is larger than the others. Hence, a sketch of this MO should technically show that the middle p-orbital makes a larger contribution than the outer two p-orbitals. This is what TAR86 meant by their comment on your question: Hückel MO theory will also give you different absolute values for the linear combination coefficients, whereas your drawing suggests the same coefficients throughout. Anyway, likewise for the LUMO we have $$\ket{\psi^{(2)}} = \frac{1}{\sqrt{2}}\ket{1} - \frac{1}{\sqrt{2}}\ket{3}$$ Note that: (1) the p-orbitals have different phases (different sign of coefficient), and (2) the p-orbital on carbon 2 does not contribute to the MO. This leads to the node observed in the second MO. This is why the allyl cation is electrophilic on C-1 and C-3, but not on C-2: the LUMO is $\ket{\psi^{(2)}}$, which has zero coefficient on C-2. All in all, the MO diagram for the allyl cation is as follows: Results for other / larger polyenes may be obtained in a similar fashion, albeit perhaps with more complications in finding the eigenvalues and eigenvectors. 
It is recommended to use software which can perform symbolic maths: for example, if you type eigenvalues of [[a, b, 0], [b, a, b], [0, b, a]] into WolframAlpha, then it tells you the eigenvalues (i.e. the energies) and the eigenvectors (i.e. the MOs), which is the same as what we found above, except that the solution above is normalised (i.e. every eigenvector is divided through by a constant such that $c_1^2 + c_2^2 + \cdots + c_n^2 = 1$).
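The same diagonalisation can also be done numerically — a sketch with NumPy (not part of the original answer), taking $\alpha = 0$ and $\beta = -1$ so that energies read as $\alpha + x\beta$ in units of $|\beta|$:

```python
import numpy as np

# Hückel Hamiltonian for the allyl system, with alpha = 0 and beta = -1
alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta,  0.0 ],
              [beta,  alpha, beta],
              [0.0,   beta,  alpha]])

# eigh returns eigenvalues in ascending order, with normalised eigenvectors
energies, coeffs = np.linalg.eigh(H)

# energies: alpha + sqrt(2) beta, alpha, alpha - sqrt(2) beta
# coeffs[:, 0]: the bonding MO (1/2, 1/sqrt(2), 1/2), up to an overall sign
```

With $\beta$ negative, the first column of coeffs is the all-in-phase combination with the larger coefficient on the central carbon, and the second column has zero coefficient on C-2, exactly as described above.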
{ "domain": "chemistry.stackexchange", "id": 8453, "tags": "molecular-orbital-theory" }
Verifying a solution vs. finding one
Question: There is an algorithmic problem $A(n)$, where $n$ is the size of the problem. It is known that, for every candidate solution S, the time it takes to verify whether it is a correct solution to $A(n)$ is $T(n)$. Does this imply a lower bound on the time it takes to find a correct solution to $A(n)$ (e.g. that it takes at least time $T(n)$)? This seems obvious to me, but obvious and true do not always coincide. Since $P \subseteq NP$, the implication probably holds if $T(n)$ means "polynomial time". But does it hold in general, for arbitrary complexity classes? Does this imply any upper bound on the time it takes to find a correct solution to $A(n)$ (e.g. that it can be done in time $2^{T(n)}$)?

Answer: You should define carefully what it means to "find a correct solution". In trivial problems, this may be different from verifying. For example, consider the problem of deciding whether a graph is colorable in some number of colors. This problem is trivial, since every graph is colorable in $|V|$ colors. Moreover, it is easy to define such a coloring - give a distinct color to each vertex. However, verifying that a coloring is correct takes quadratic time. As for an upper bound - if verifying a solution takes time $T(n)$, then going over all solutions will take time $2^{O(T(n))}$, so not exactly $2^{T(n)}$, but close.
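The upper-bound direction — search by running the verifier on every candidate — can be sketched concretely. Subset sum is used here purely as an illustration (it is not from the original post): the verifier is linear-time, and exhaustive search invokes it on all $2^n$ candidate subsets.

```python
from itertools import product

def verify(nums, target, mask):
    """Verifier: check one candidate solution (a subset given as a boolean mask)."""
    return sum(x for x, keep in zip(nums, mask) if keep) == target

def solve_by_search(nums, target):
    """Find a solution by running the verifier on all 2^n candidates.

    If a single verification costs T(n), this search costs 2^n * T(n),
    i.e. the 2^(O(T(n))) upper bound mentioned in the answer."""
    for mask in product([False, True], repeat=len(nums)):
        if verify(nums, target, mask):
            return mask
    return None

solution = solve_by_search([3, 9, 8, 4], 12)   # finds a subset summing to 12
```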
{ "domain": "cs.stackexchange", "id": 4112, "tags": "complexity-theory, runtime-analysis" }
Can we find the value of impulsive tension?
Question: Suppose a ball is attached to a string of length $l$ and the string is attached to the ceiling. Now when I keep the ball close to the point of suspension and release it, it will travel a distance $l$ and then impart an impulsive tension to the string. Is it possible to calculate this value of the tension developed in the string by knowing the final velocity, $(2gl)^{1/2}$, and the mass of the ball $m$? I thought the impulse is $2mv$ since the ball immediately goes back up with the same speed. But to find the force I also need the time interval for the velocity change. Now how can I find that without physical measurement?

Answer: This requires some background on accelerations in circular motion etc., but I will show every formula. Here it is:

Mass of the ball: $m$. Length of the rope: $l$. Velocity of the ball: $v$.

$$F_n = \dfrac{m}{l}\cdot v^2$$ $$F_t = \text{not needed here}$$ $$F_g = m \cdot g$$

Furthermore, you need to calculate $v$. You can simply use conservation of energy: $$v = \sqrt{2 \cdot g \cdot h}$$ with $h$ being the height difference from where you started. Assuming that the string starts with zero velocity in a horizontal state: $$h = \cos(\phi) \cdot l$$ Just do a "simple" force equilibrium in the direction of the string and you get: $$F = \dfrac{m}{l} \cdot (2 g \cdot \cos(\phi) \cdot l) + \cos(\phi) \cdot m g$$ $$\Rightarrow F = 3 \cdot m \cdot g \cdot \cos(\phi)$$ Hope this helps.
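To attach numbers to the answer's final formula (a small sketch; the specific mass and length are made-up examples, and it follows the answer's assumption of a taut string released from the horizontal):

```python
import math

def string_tension(m, l, phi, g=9.81):
    """Tension from the answer's force balance along the string:
    F = m*v^2/l + m*g*cos(phi), with v^2 = 2*g*l*cos(phi) from energy conservation."""
    v_squared = 2 * g * l * math.cos(phi)     # speed^2 after falling h = l*cos(phi)
    return m * v_squared / l + m * g * math.cos(phi)

m, l = 0.5, 1.2                # hypothetical mass (kg) and string length (m)
F_bottom = string_tension(m, l, phi=0.0)
# at the lowest point (phi = 0) this reduces to F = 3*m*g
```

Note that the string length $l$ cancels out of the result, as it must for $F = 3mg\cos(\phi)$ to hold.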
{ "domain": "physics.stackexchange", "id": 58823, "tags": "newtonian-mechanics, momentum, energy-conservation" }
Why does solving integer factorization by using Shor’s algorithm surpass the conventional computation
Question: What is the reason that solving integer factorization by using Shor's algorithm surpasses conventional computation? Can these quantum characteristics be used for different problems? I would like to understand why in this specific case the quantum solution surpasses the conventional computing solution, whereas in many other cases quantum computing has not yet shown better solutions. What is so unique about Shor's algorithm that enables the quantum computer to work better?

Answer: This question is actually related to an open problem in computer science: is there any theoretically provable quantum advantage? I don't think people have solved that problem yet. However, from my perspective, there is a simple answer to the question: all quantum algorithms that are believed to be exponentially faster than all classical counterparts are equivalent to "finding a hidden subgroup" in some Abelian group. In other words, the quantum algorithm is finding the non-trivial period in some discrete structure, if there is any.

Let's take a look, first, at Shor's algorithm. The most important observation for solving factoring is that we only need to find an even order $d=2r$, by randomly choosing a base $a$, in the multiplicative group modulo the number $N$ that we want to factor (the order $d$ is defined to be the minimum integer such that $a^d \equiv 1 \mod N$; $a$ has to be coprime with $N$), which is defined as: $$a^d=a^{2r} \equiv 1 \mod N$$ After factoring the above equation, we get $$(a^r-1)(a^r+1) \equiv 0 \mod N \qquad$$ This implies two possibilities, for any prime factor $p$ of $N$: 1. $p$ is a factor of $(a^r+1)$; 2. $p$ is a factor of $(a^r-1)$. We are happy now because we can use the famous Euclid's algorithm (GCD) on both $\gcd(N,a^r+1)$ and $\gcd(N,a^r-1)$ to get $p$. (There are bad cases, however, when $N$ is a factor of $(a^r+1)$. But the probability of such a case is bounded.)
Now we can come back to the most important conclusion: the quantum computer, by its nature, is pretty good at finding such $r$, because $r$ is the period of the subgroup generated by $a$ in the Abelian multiplicative group modulo $N$. The "trick" used by the quantum computer is called the Quantum Fourier Transform (QFT). First, we create a quantum circuit which does the modular exponentiation with respect to $a$ modulo $N$. Then we prepare a uniform superposition of all $n$-qubit states. We can prove that, after applying the inverse QFT, we will finally measure a good approximation of $1/r$, with high probability.

Take a look at the example of $N=15$ and $a=7$. The right answer for the even order should be $7^4 \equiv 1 \mod 15$. And we can take $\gcd(7^2-1,15)=3$ as the answer of our factoring. The hidden subgroup here is $\{1, 7, 7^2 \bmod 15 = 4, 7^3 \bmod 15 = 13\}=\{1,7,4,13\}$. We can check that this is a subgroup of the multiplicative group modulo $15$ because every element has an inverse. The goal of Shor's algorithm is exactly finding the period of this hidden subgroup! $(1 \rightarrow 7 \rightarrow 4 \rightarrow 13 \rightarrow 1 \rightarrow 7 \cdots)$

We can even take a look at the quantum circuit for solving the problem with $N=15$ and $a=7$. The measurement result of 20000 shots of the above circuit shows that, with high probability, we will measure $010$, which means $1/r \approx 0.010 = 1/4$ (in binary). So Shor's circuit, in this case, indeed helps us find the non-trivial even order $4$!

The magic behind the period finding is that there is a period in the quantum phase of a qubit. We can carefully match the period of our problem to the period of the "phase" of a quantum state that we can construct. The trick of phase manipulation is also used in all other quantum algorithms, such as Grover's algorithm. But the idea behind Grover's algorithm is rotating the solution vector in the phase space, so it has no exponential speedup.
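Everything in this example except the period finding itself is classical arithmetic, and can be checked in a few lines — a sketch in which the order is found by brute force, i.e. exactly the step the QFT speeds up:

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force --
    the step Shor's algorithm replaces with quantum period finding."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = order(a, N)                       # 4, matching the hidden subgroup {1, 7, 4, 13}
factor = gcd(pow(a, r // 2) - 1, N)   # gcd(7^2 - 1, 15) = gcd(48, 15) = 3
```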
{ "domain": "quantumcomputing.stackexchange", "id": 5361, "tags": "shors-algorithm" }
When does this absolute of classical mechanics hold in quantum mechanics?
Question: Background + Question In classical mechanics we all know: $$ \dot U =\frac{dU}{dt} = \sum_i \frac{\partial U}{\partial x_i} \frac{dx_i}{dt} = \sum_i v_i \frac{\partial U}{\partial x_i} = v.\nabla U $$ Hence, $$ \dot U - v. \nabla U = 0 $$ But when would something similar hold in quantum mechanics? Assuming $U$ is a function of $x$, i.e they both commute. And simplifying to a $1$-d case: $$ \left\langle \dot U \right\rangle - \left\langle \frac{ \frac{\partial U}{\partial x}}{m} \right\rangle \left\langle p \right\rangle = 0 $$ (Note: In the Heisenberg picture the velocity operator is merely the momentum operator divided by mass) My Attempt Using the Heisenberg picture: $$\implies \left\langle \,[ \frac{\frac{\hat p^2}{2m}, \hat U \,]}{-i \hbar} \right\rangle - \left\langle \frac{ U_x}{m} \right\rangle \left\langle \hat p \right\rangle = 0 $$ $$\implies \left\langle \,[ \frac{ \hat p^2, \hat U \,]}{-i \hbar} \right\rangle - 2 \left\langle U_x \right\rangle \left\langle \hat p \right\rangle = 0 $$ $$\implies \left\langle \hat p^2 \hat U - \hat U \hat p^2 \right\rangle + 2 i \hbar \left\langle U_x \right\rangle \left\langle p \right\rangle = 0 $$ Going to the explicit integrals: $$\implies -\hbar^2 \int_{-\infty}^{\infty} \bar{\psi} ( \frac{\partial^2 }{\partial x^2} (\hat U \psi) - \hat U \frac{\partial^2 \psi }{\partial x^2} ) dx + 2 i \hbar \left\langle U_x \right\rangle \left\langle p \right\rangle = 0 $$ $$\implies -\hbar^2 \int_{-\infty}^{\infty} \bar{\psi} ( \frac{\partial^2 }{\partial x^2} (\hat U \psi) - \hat U \frac{\partial^2 \psi }{\partial x^2} )dx + 2 i \hbar \left\langle U_x \right\rangle \int_{-\infty}^{\infty} \bar{\psi} (-i \hbar)\frac{\partial }{\partial x} \psi = 0 $$ $$ \implies -\hbar^2 \int_{-\infty}^{\infty} \bar{\psi} ( \frac{\partial^2 }{\partial x^2} (\hat U \psi) - \hat U \frac{\partial^2 \psi }{\partial x^2} )dx + 2 \hbar^2 \left\langle U_x \right\rangle \int_{-\infty}^{\infty} \bar{\psi} \frac{\partial }{\partial x} \psi = 0 $$ 
$$ \implies \int_{-\infty}^{\infty} \bar{\psi} ( \frac{\partial^2 }{\partial x^2} (\hat U \psi) - \hat U \frac{\partial^2 \psi }{\partial x^2} )dx - 2 \left\langle U_x \right\rangle \int_{-\infty}^{\infty} \bar{\psi} \frac{\partial }{\partial x} \psi = 0 $$ $$ \implies \int_{-\infty}^{\infty} \bar{\psi}( U_{xx} + 2(U_{x} - \left\langle U_x \right\rangle ) \frac{\partial }{\partial x })\psi dx = 0 $$ And now I'm stuck ... Answer: You shouldn't expect a theorem like that to hold. Consider a particle in a potential $\frac{1}{2}kx^2$. You could imagine $|\psi(x)|^2$ as a symmetric function, with one wave packet moving fast and to the left, and one moving fast and to the right. Then $\frac{d}{dt}\langle U\rangle$ should be large and positive, while $\langle U_x\rangle=\langle k x\rangle=0$ (the expectation value of $x$ is zero because the particle is moving symmetrically outwards), and $\langle p\rangle=0$ (one part of the wavepacket is moving to the left and fast, and one to the right.) In fact, you can use Ehrenfest's theorem to get the time rate of change of $\langle U\rangle$: \begin{align*} \frac{d}{dt} \langle U\rangle&=\frac{1}{i\hbar}\langle[U,H]\rangle\\ &=\frac{1}{i\hbar}\langle U \frac{p^2}{2m}-\frac{p^2}{2m} U\rangle\\ &=\frac{-\hbar^2}{i\hbar 2 m}\langle U\partial_x^2-\partial_x^2U\rangle\\ &=\frac{i\hbar}{2 m}\langle U\partial_x^2-\partial_x(U_x+U\partial_x)\rangle\\ &=\frac{i\hbar}{2 m}\langle U\partial_x^2-U_{xx}-U_x\partial_x-U_x\partial_x-U\partial_x^2\rangle\\ &=\frac{i\hbar}{2 m}\langle -U_{xx}-2 U_x \partial_x\rangle\\ &=\frac{i\hbar}{2 m}\langle -U_{xx}-\frac{2}{-i \hbar} U_x p\rangle\\ &=-\frac{i\hbar}{2 m}\langle U_{xx}\rangle+\frac{1}{m}\langle U_x p\rangle \end{align*} As $\hbar\to 0$, the first term disappears while the second term (which is similar to the one you were searching for) stays. In general, you can't always simplify things to nice expressions in terms of $\langle x\rangle$ and $\langle p\rangle$. There will be cross-terms! 
The lesson is that Ehrenfest's theorem hints towards classical behavior at the macroscopic scale, but doesn't prove it. A better hint is the path integral - it sounds scary but is really just the identity $e^{-i H t}=e^{-i H \delta t}e^{-i H \delta t}\cdots e^{-i H \delta t}$ - which makes the classical mechanics principle of stationary action obvious. For even better "proof" of classical behavior for large numbers of particles, you need the phenomenon of decoherence. It would be nice if everything was solved by taking expectation values, but that isn't the case.
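The operator identity used in the middle of the derivation above, $[p^2, U]\,\psi = -\hbar^2 (U_{xx}\psi + 2U_x \psi_x)$, can be checked symbolically — a sketch using SymPy (not part of the original answer):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
U = sp.Function('U')(x)       # arbitrary potential
psi = sp.Function('psi')(x)   # arbitrary wavefunction

def p_squared(f):
    """Apply p^2 = -hbar^2 d^2/dx^2 to a wavefunction."""
    return -hbar**2 * sp.diff(f, x, 2)

# [p^2, U] psi = p^2 (U psi) - U p^2 psi
commutator = sp.expand(p_squared(U * psi) - U * p_squared(psi))
expected = sp.expand(-hbar**2 * (sp.diff(U, x, 2) * psi
                                 + 2 * sp.diff(U, x) * sp.diff(psi, x)))
identity_holds = sp.simplify(commutator - expected) == 0
```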
{ "domain": "physics.stackexchange", "id": 33574, "tags": "quantum-mechanics, classical-mechanics" }
Function to remove set of trailing characters from string
Question: I needed the ability to truncate the trailing end of path strings in a routine that builds new search paths as it recursively searches directories. After not finding what I was looking for, I created the function below. The expected behavior is that the function remove_trailing_chars() will update in to remove any occurrences of chars contained in rem, iff they exist contiguously at the very end of the original version of in. Once a character in the in string becomes the trailing char and it is not included in rem, the function updates in with the latest version and returns. It has been tested for several variations of input char arrays in and rem, including these:

char in[] = "this is a string with \\ *\\*";//edit this string as needed to test
char rem[] = "\\* ";//edit this string as needed to test

results in "this is a string with" without following space

char in[] = "this is a string with *\\*";//edit this string as needed to test
char rem[] = "\\*";//edit this string as needed to test

results in "this is a string with " includes following space

I am interested in suggestions for efficiency improvements in speed, and readability improvements. (Suggestions on more idiomatic methods are welcome.) I do not believe memory should be an issue with this for my usage, but if there are thoughts on any pitfalls in that area, please include them as well. Here is the code, including one usage case... (Compiler command line and its disassembly are included further down as well.)

#include <stdbool.h>//bool
#include <string.h>//strlen, strcpy
#include <stdlib.h>

//prototypes
void remove_trailing_chars(char *in, const char *rem);

/// demonstrate removing all chars in 'rem' if trailing in 'in'.
int main(void)
{
    char in[] = "this is a string with \\ *\\*";//edit this string as needed to test
    char rem[] = "\\* ";//edit this string as needed to test
    remove_trailing_chars(in, rem);
    return 0;
}

/// remove all occurrences of chars in 'rem' from end of 'in'
void remove_trailing_chars(char *in, const char *rem)
{
    bool found = true;//when false, last char of 'in' found no matches in 'rem'
    int len = strlen(in);
    char in_dup[len+1];
    strcpy(in_dup, in);
    while(found)
    {
        found = false;//for this element of rem
        len = strlen(in_dup);
        int i = 0;
        while(rem[i])
        {
            if(in_dup[len-1] == rem[i])
            {
                in_dup[len - 1] = 0;
                found = true;
                break;
            }
            else
            {
                i++;
            }
        }
    }
    strcpy(in, in_dup);
}

Using GCC, the build was done with:

Release target: mingw32-gcc.exe -Wall -O2 -Wall -std=c99 -g -c C:\tempExtract\remove_trainling_chars\main.c -o obj\Release\main.o

Debug target: (to allow viewing disassembly) gcc.exe -Wall -g -Wall -std=c99 -g -c C:\tempExtract\remove_trainling_chars\main.c -o obj\Debug\main.o

Answer: Accessing array out of bounds

The code here may try to access in_dup[-1] in some cases:

len = strlen(in_dup);
int i = 0;
while(rem[i])
{
    if(in_dup[len-1] == rem[i])
       ^^^^^^^^^^^^^

That is, when the input string is empty, or when the entire input string is made of characters in rem, then in_dup will become empty, len becomes 0, and len - 1 will be an illegal access on in_dup. In short, the code is missing a check on reaching the beginning of the input.

Avoid unnecessary copying

The code copies in to in_dup, works with in_dup, then copies back from it to in. This is unnecessary; you could work directly with in.

Avoid unnecessary computations

len = strlen(in_dup) is executed every time after some characters are removed from the end. This is inefficient, because strlen needs to loop over the entire string. Instead, you could count the number of characters removed, and then you'll know exactly the end of the input string.
Simplify algorithm

Consider this simpler algorithm:

- Loop from the end of the input, going backwards, until the beginning
- Loop over the characters in rem, check if it matches the last character of the input
- If there is a match, delete the last character and break out of this inner loop
- If there is no match, then we're done, break out of the outer loop

Implementation, including the other tips above applied as well:

void remove_trailing_chars(char *in, const char *rem)
{
    int remLength = strlen(rem);
    for (int i = strlen(in) - 1; i >= 0; i--)
    {
        int j = 0;
        while (j < remLength)
        {
            if (in[i] == rem[j])
            {
                in[i] = '\0';
                break;
            }
            j++;
        }
        if (j == remLength)
            break;
    }
}
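As an aside (not part of the review), the desired behaviour coincides with Python's built-in str.rstrip, which can serve as a handy reference oracle when testing the C function — rstrip treats its argument as a set of characters, just like rem here:

```python
def remove_trailing_chars(s, rem):
    """Reference model of the C function: strip trailing chars belonging to rem."""
    return s.rstrip(rem)

# the two test cases from the question, with their stated expected results
case1 = remove_trailing_chars("this is a string with \\ *\\*", "\\* ")
case2 = remove_trailing_chars("this is a string with *\\*", "\\*")
# case1 == "this is a string with"   (no trailing space)
# case2 == "this is a string with "  (trailing space kept)
```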
{ "domain": "codereview.stackexchange", "id": 42382, "tags": "algorithm, strings, c99" }
Why is the density of states required conceptually? Should it be seen as a mathematical trick related to Fourier series?
Question: [edit]: My misunderstanding is more precisely asked here: Density of states and boundary conditions: how the density of states is physical if it depends on box size: it was suggested to open a new post as the details of the question changed too much from this one.

My question is about the motivation for defining the density of states. This is a notion I have not used in a while, so I might be making silly mistakes here. I take a simple example: the density of states of a $1D$ gas of electrons. If I want to describe a function defined on $[0,L]$, I can use Fourier series, considering the wavevectors $k_n=n \frac{2 \pi}{L}, n \in \mathbb{Z}$. My function will then be $L$-periodic, but as long as the physics I am interested in is within $[0,L]$, I can encode an "arbitrary" shape by playing with the Fourier coefficients.

Now, the energy of a free electron of wavevector $k$ is $E=\frac{\hbar^2 k^2}{2m}$. The density of states in $|k|$ space is $\frac{L}{2\pi} \times 2 \times 2$, where the first factor of $2$ takes into account the two possible spins of the electron, and the second takes into account the two directions of propagation (as it is the modulus $|k|$ space). From this, I can deduce the density of states in $E$ space: $$\rho(E)=\frac{d|k|}{dE} \rho(k(E))=\sqrt{\frac{m}{2E}}\frac{1}{\hbar}\cdot\frac{L}{2\pi}\cdot 4=\sqrt{\frac{m}{2 E \hbar^2}}\frac{2 L}{\pi}$$ Then we use this density of states $\rho(E)$ in order to compute various quantities depending on $E$ by means of an integral: $$\int_0^{+\infty} dE\,\rho(E) f(E)$$

My questions

When using the density of states to compute quantities, an approximation is being made. Indeed we approximate a sum by an integral. This will be more and more accurate the bigger the length $L$ is. But for any fixed length $L$ we are making a "mistake" by integrating. Is that correct? The starting point of all this reasoning is a "mathematical trick". We want to use Fourier series to describe the physics.
This comes at the cost of describing the physics in a finite-length space $[0,L]$, but it has the advantage of being easier to handle: we can easily find the density of states, which then allows us to go to the continuum. Is that correct as well? Can we then say that the notion of density of states is just a mathematical tool that is introduced to go from an easy physical situation (the physics is discrete) to a more complicated scenario (the continuum)? But someone could start with a continuum description right from the beginning. Then they would never need to use any density of states.

Answer: TL; DR: DOS is converting summation over (multiple) quantum numbers into the (single) integral over energies

Normalization in a box

Normalization in a box $[0,L]$ is indeed a mathematical trick, but in the end one usually takes the limit $L\longrightarrow+\infty$. The confusion here may arise from the fact that taking this limit is rarely done explicitly, as $L$ often cancels out from the final result. One could perform calculations without using the mathematical trick, working directly with the states normalized to a delta function, and defining the density of states as an integral: $$ D(E)=\int dk\, \delta\left(E-\epsilon(k)\right) $$

DOS is a physical quantity

As the answer by @JMurray correctly points out, the density of states is a measurable physical quantity. In particular, it is measured in many kinds of spectroscopic experiments: those using electron spectroscopy, light spectroscopy or conductance measurements.

DOS as an object of study

The density of states is the central object of study in some physical theories, since it pretty much summarizes all the physics of the relevant phenomena. Notably, studies of localization and weak localization are often reduced to calculating the DOS.

Ballistic transport

Another insight into the meaning of the density of states is ballistic transport.
Transport through quantum dots is often interpreted as measuring the density of states in the quantum dot (see, e.g., the classic paper by Meir & Wingreen). Another important case is the conductance quantization in 1D channels, which results from the exact cancellation of the 1D density of states and the group velocity. This phenomenon is also behind the integer quantum Hall effect (which is also often analyzed in terms of the DOS).

More general view of DOS

The main point of the DOS is converting summation over (multiple) quantum numbers into a (single) integral over energies. This is extremely useful, since many physical quantities either depend only on energy (e.g., the partition function) or can be made dependent only on energy via integration (e.g., integrating a particle cross-section over angles). Thus, given quantum numbers $k_1, k_2, ..., k_n, s_1, s_2, ..., s_m$, where $k_j$ are continuous and $s_j$ are discrete, we can write: $$ \int dk_1\int dk_2...\int dk_n\sum_{s_1,s_2,...,s_m}f\left(\epsilon_{s_1, s_2,..., s_m}(k_1, k_2,..., k_n)\right) = \int dE \rho(E)f(E), $$ from which the definition of $\rho(E)$ immediately follows: $$ \rho(E) = \int dk_1\int dk_2...\int dk_n\sum_{s_1,s_2,...,s_m}\delta\left(E - \epsilon_{s_1, s_2,..., s_m}(k_1, k_2,..., k_n)\right). $$ If we had only one continuous quantum number $k$ and energy $\epsilon(k)$, the DOS would be simply the Jacobian: $$ \int dk f(\epsilon(k)) = \int dE f(E)\rho(E),\\ \rho(E) = 2\left|\left.\frac{d\epsilon(k)}{dk}\right|_{\epsilon(k)=E}\right|^{-1}, $$ where the factor $2$ in $\rho(E)$ appears if we have the symmetry $\epsilon(k)=\epsilon(-k)$ (in case of asymmetric branches, one has to sum over all of them).
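The question's point 1 — that replacing the sum over $k_n = 2\pi n/L$ by an integral is an approximation that improves with $L$ — can be made quantitative with a small numerical sketch (a made-up example in units with $\hbar = m = 1$, spin ignored, and $f(E) = e^{-E}$ chosen for convenience; the DOS integral then has the closed form $L/\sqrt{2\pi}$):

```python
import math

def discrete_sum(L, n_max=2000):
    """Sum f(E(k_n)) over the allowed wavevectors k_n = 2*pi*n/L,
    with E = k^2/2 and f(E) = exp(-E)."""
    total = 0.0
    for n in range(-n_max, n_max + 1):
        k = 2 * math.pi * n / L
        total += math.exp(-k * k / 2)
    return total

def dos_integral(L):
    """Closed form of int_0^inf rho(E) exp(-E) dE with
    rho(E) = (L / 2 pi) * 2 / sqrt(2 E), which equals L / sqrt(2 pi)."""
    return L / math.sqrt(2 * math.pi)

# the relative "mistake" made by integrating shrinks rapidly as L grows
errors = {L: abs(discrete_sum(L) - dos_integral(L)) / dos_integral(L)
          for L in (1.0, 10.0, 100.0)}
```

For $L = 1$ the integral is badly wrong, while already for $L = 10$ the two agree to many digits, illustrating why the box size drops out of final results in the large-$L$ limit.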
{ "domain": "physics.stackexchange", "id": 78825, "tags": "quantum-mechanics, condensed-matter, fourier-transform, density-of-states" }
What's in my box?
Question: Let's say that I have a box which is 100% empty. I fly into the vacuum of space, open the box and close it after a certain time. Then I go back to earth, and my question is: what's in my box? (particles/atoms/molecules?)

Answer: How big is your box? Where do you open the box? As the link to Wikipedia says: the interstellar medium is extremely dilute by terrestrial standards. In cool, dense regions of the ISM, matter is primarily in molecular form, and reaches number densities of 10^6 molecules cm−3. In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as 10^−4 ions cm−3. Compare this with a number density of roughly 10^22 cm−3 for liquid water. By mass, 99% of the ISM is gas in any form, and 1% is dust. Of the gas in the ISM, 89% of atoms are hydrogen and 9% are helium, with 2% of atoms being elements heavier than hydrogen or helium. You could have a wildly varying amount of "stuff" in your box. And are you talking about only baryonic matter? Because there are many, many things that can be considered "stuff" in the universe, which may already have been in your box. My friend Steve (Larian) answers a similar question here: How vacuous is intergalactic space?
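To put the quoted densities in perspective (simple arithmetic, not in the original answer, using a hypothetical 1-litre box):

```python
box_cm3 = 1000.0        # a 1-litre box is 1000 cm^3 (an assumed example size)
dense_ism = 1e6         # molecules per cm^3, cool dense ISM (from the quote)
diffuse_ism = 1e-4      # ions per cm^3, hot diffuse ISM (from the quote)

in_dense = dense_ism * box_cm3      # about 1e9 molecules
in_diffuse = diffuse_ism * box_cm3  # about 0.1 ions -- the box is usually empty
```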
{ "domain": "physics.stackexchange", "id": 3223, "tags": "space, vacuum, interstellar-matter" }
Changes to a Length of Physical Ruler Caused by Gravity vs Caused by Cosmological Expansion of Space
Question: I read here (Feynman Lectures, Lecture 42) that "Just as time scales change from place to place in a gravitational field, so do also the length scales. Rulers change lengths as you move around." (Rulers also change as you re-orient them; see footnote 2, in the link.) That reads to me like chemical bonds and other internal forces holding the physical ruler together do not stop it from changing length due to changes in space-time induced by the mass. However, in cosmology, where all of space is expanding, galaxies become further apart from each other, but an individual galaxy, itself, does not expand (due to the forces that hold it together), people do not expand (also due to the internal forces that hold us together), nor do physical rulers. That is, the lengths of physical rulers do not change, because of their internal forces. I believe distance in cosmology, as measured by the physical ruler, is called "proper distance", vs "co-moving distance", which does expand as the universe does. In the first paragraph, the change to space affects the length of the ruler, regardless of the ruler's internal forces; but in the second paragraph, the change to space does not affect the ruler, because of the ruler's internal forces. I am confused regarding why the physical ruler's internal forces do not prevent length change in the first paragraph, but do in the second. After all, in both cases, space is changing in a way that affects length or distance. Maybe the reason is that the type of change to space is different, since in one case it is caused by matter and in the other case it is caused by dark energy?

Answer: It's a good question and you are right, there seems to be an unresolved contradiction at the heart of cosmology. It can be solved by presuming that, as the universe expands, all objects - people, atoms, galaxies etc. - expand too. This leads to an alternative interpretation of redshift.
If the size of atoms and Planck's constant were lower in the past, then from $E=hf$, the energy of photons arriving from a distant star would be lower, hence the redshift. Here, https://vixra.org/abs/2006.0209 in figure 3 is a cosmology that doesn't have the problem you highlighted in your question. The alternative approach naturally predicts that the matter density will be measured as 0.25 or 1/3 depending on how it's measured. This seems to be the case. So perhaps the alternative theory answers your question.
{ "domain": "physics.stackexchange", "id": 77049, "tags": "general-relativity, cosmology, space-expansion" }
Generate String Combinations
Question: I have created an algorithm that can generate all possible combinations of a string given its length, and character set (it uses a custom numeral system to do this). How could I improve its performance and readability? using System; using System.Text; public class Program { public static void Main(string[] args) { Bruteforce brute = new Bruteforce(9, new Range(0, Helpers.Chars.Length - 1)); brute.PrintResults(); Console.ReadKey(true); } } public class Bruteforce { private int _Length; private Range _Range; public int Length { get { return _Length; } } public Range Range { get { return _Range; } } public Bruteforce(int length, Range range) { _Length = length; _Range = range; } public Bruteforce(int length, int min, int max) { _Length = length; _Range = new Range(min, max); } public void PrintResults() { int num = this.Range.Max; int[] arr = new int[this.Length]; for (int c = 0; c < arr.Length; c++) arr[c] = this.Range.Min; while (true) { arr[0]++; if (arr[0] > this.Range.Max) { for (int i = 1; i < arr.Length; i++) { if (arr[i - 1] > this.Range.Max) { arr[i - 1] = this.Range.Min; arr[i]++; } } } int num2 = arr[0]; for (int i = 1; i < arr.Length; i++) num2 += arr[i]; Console.WriteLine(Helpers.ArrayToString(arr)); if (num2 == this.Range.Max * arr.Length) break; } } private void PrintArray(int[] arr) { for (int i = 0; i < arr.Length; i++) { if (i == arr.Length - 1) { Console.WriteLine(arr[i]); } else { Console.Write(arr[i] + ", "); } } } } public class Helpers { public static char[] Chars = new char[] { 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z' }; public static string ArrayToString(int[] arr) { StringBuilder builder = new StringBuilder(); for (int i = 0; i < arr.Length; i++) { if (arr[i] >= 0) { builder.Append(Helpers.Chars[arr[i]]); } else { throw new Exception(); } } return builder.ToString(); } } public class Range { private int _Max, _Min; public int Max { get { return 
_Max; } } public int Min { get { return _Min; } } public Range(int min, int max) { _Max = max; _Min = min; } } Answer: string.Join(string, IEnumerable<T>) Your method ArrayToString can be replaced with a single line : public static string ArrayToString(int[] arr) { return string.Join("", arr.Select(x => Chars[x])); } Your method PrintArray is never used but I assume it's just there for testing purpose. Code style Auto properties You can use auto-properties to avoid explicitly creating a backing field for your variables : private int _Length; public int Length { get { return _Length; } } Can be simplified to a single property : public int Length { get; } You can do the same for all of your properties. Redundant this qualifier You don't have to write this. everywhere e.g this.Range.Max as there are no variables with the same names there, the compiler will guess that you are referring to the container class.
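The hand-rolled odometer loop above can be cross-checked against a library enumeration. A sketch in Python (used here just for brevity; the character set and length are shrunk to keep the output small):

```python
from itertools import product

# Hypothetical mini-alphabet; the C# code above uses 'a'..'z' with length 9,
# which would produce 26**9 strings, far too many to print.
chars = "abc"
length = 2

# A cartesian product enumerates exactly the strings the odometer loop
# produces (possibly in a different order, since that loop increments its
# leftmost digit first): alphabet_size ** length of them.
combos = ["".join(p) for p in product(chars, repeat=length)]
assert len(combos) == len(chars) ** length
assert combos[0] == "aa" and combos[-1] == "cc"
```

In the C# code the same effect comes from treating the int[] array as a base-26 counter, which is what the inner carry-propagation loop already does.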
{ "domain": "codereview.stackexchange", "id": 23661, "tags": "c#, strings, array" }
Understanding graph minor theorem
Question: This question is two-fold, and is mainly reference-oriented: Is there somewhere where the main intuitions for proving graph minor theorem are given, without going too much into the details? I know the proof is long and difficult, but surely there must be key ideas that can be communicated in an easier way. Are there other relations on graphs that can be shown to be well quasi-orders, maybe in a simpler way than for the minor relation? (obviously I am not interested in trivial results here, like comparing sizes). Directed graphs are also in the scope of the question. Answer: The following book covers some material related to the proof of the graph minor theorem (Chapter 12). Reinhard Diestel: Graph Theory, 4th edition, Graduate Texts in Mathematics 173. The author states: "[...] we have to be modest: of the actual proof of the minor theorem, this chapter will convey only a very rough impression. However, as with most truly fundamental results, the proof has sparked off the development of methods of quite independent interest and potential." An electronic version of the book can be viewed online. http://diestel-graph-theory.com/
{ "domain": "cstheory.stackexchange", "id": 3000, "tags": "reference-request, graph-theory, graph-minor" }
Why is pcl_ros::transformPointCloud changing the frame_id
Question: Hi, I've got the following code: cloud_out->header.frame_id = "<out_frame>"; tf_listener_->lookupTransform( frame_out, frame_in, stamp, transform); ROS_INFO("Frame before: %s", cloud_out->header.frame_id.c_str()); pcl_ros::transformPointCloud( *cloud_in, *cloud_out, transform ); ROS_INFO("Frame after : %s", cloud_out->header.frame_id.c_str()); The first output gives out_frame, but after the transformation the frame is set to the frame of cloud_in. For me this looks like an error; is there a reason to change the frame_id like this? Originally posted by Tobias Neumann on ROS Answers with karma: 179 on 2015-01-23 Post score: 2 Original comments Comment by paulbovbel on 2015-01-29: are these sensor_msgs Pointcloud2s or pcl::Pointclouds? Comment by Tobias Neumann on 2015-01-30: They are both pcl::PointCloud<pcl::PointXYZI>::Ptr Comment by paulbovbel on 2015-01-30: Which version of ROS? Comment by Tobias Neumann on 2015-01-30: Sorry, it's Ubuntu 12.04 with Hydro and pcl-1.7 Answer: So, this is not actually a bug. The cloud_out parameter passed in by reference is pretty much just a placeholder for allocation, and not expected to contain any useful data. These types of functions are everywhere in tf. The Transform object that you feed to ::transformPointCloud actually doesn't have any header information attached (it is not a StampedTransform), so this function is not expected to do anything useful with headers; it simply applies the transform to cloud_in and overwrites cloud_out. Anyways, the function call you're actually looking for is http://docs.ros.org/indigo/api/pcl_ros/html/namespacepcl__ros.html#aad1ce4ad90ab784aae6158419ad54d5f, where you supply the target frame and a listener. That call will actually chain to the one you're using, and update the header after the transformation is applied.
{ "domain": "robotics.stackexchange", "id": 20665, "tags": "ros, pcl, transform, frame-id, pointcloud" }
Are there two forces applied by the road to an accelerating car?
Question: A car is travelling straight in one direction. If the car is accelerating (a positive acceleration, so it becomes faster) on a non-frictionless road, then the ground exerts two forces on the car, each in the opposite direction, right? The first one is required to accelerate the car, and the second one is the static friction. The two forces are opposite in direction, but different in magnitude. The first force is greater (in magnitude) than the second one (the static frictional force) in order for $\sum F_x >0$, and thus the car is accelerating. Answer: then the ground exerts two forces on the car, each in the opposite direction, right? No. The ground only exerts one force, and that's the static friction force acting forward on the wheel. The other force is exerted by the wheel on the ground, acting backward. The two are equal and opposite per Newton's third law. The first one is required to accelerate the car, and the second one is the static friction. The force that accelerates the car is the static friction force. It is the only external force acting forward on the car and is therefore responsible for its acceleration per Newton's second law. That force is the equal and opposite reaction to the force the wheel exerts backward on the ground per Newton's third law. The force the wheel exerts on the ground is responsible for accelerating the earth, per Newton's second law, as discussed below. The first force is greater (in magnitude) than the second one (the static frictional force) in order for $\sum F_x >0$, and thus the car is accelerating. Again, the forces are equal and opposite per Newton's third law. You need to apply Newton's second law individually to the car and the ground to determine the acceleration of each. The static friction force is the only external force acting forward on the car.
Then, per Newton's second law (neglecting air resistance and rolling resistance), the static friction force causes the car to accelerate forward with an acceleration $a_{m}=F/m$, where $m$ is the mass of the car. Likewise, per Newton's second law, the force the wheel exerts backwards on the ground applies a torque to the Earth, giving it a backward angular acceleration $\alpha=F/(Mr)$, where $M$ is the mass of the Earth and $r$ is its radius. Since the product $Mr$ is so large, this angular acceleration is infinitesimal, and the Earth therefore appears stationary. Hope this helps.
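Plugging illustrative numbers into the two Newton's-second-law statements above (values assumed here, not taken from the answer) shows why the car accelerates visibly while the Earth's reaction is immeasurable:

```python
# Illustrative (assumed) values: a compact car pushing against the Earth
F = 3000.0        # static friction force on the drive wheels, N
m = 1000.0        # mass of the car, kg
M = 5.97e24       # mass of the Earth, kg
r = 6.37e6        # radius of the Earth, m

a_car = F / m               # Newton's second law applied to the car, m/s^2
alpha_earth = F / (M * r)   # backward angular acceleration of the Earth, rad/s^2

assert a_car == 3.0
assert 0 < alpha_earth < 1e-25   # effectively zero: the Earth appears stationary
```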
{ "domain": "physics.stackexchange", "id": 84226, "tags": "newtonian-mechanics, classical-mechanics, friction" }
Clean code for array comparison
Question: The following snippet reads a CSV file's line count using BinaryReader. Currently it checks \r and \n for line delimiters. private static int GetLineCount(string fileName) { BinaryReader reader = new BinaryReader(File.OpenRead(fileName)); int lineCount = 0; char lastChar = reader.ReadChar(); char newChar = new char(); do { newChar = reader.ReadChar(); if (lastChar == '\r' && newChar == '\n') { lineCount++; } lastChar = newChar; } while (reader.PeekChar() != -1); return lineCount; } I want to use the Environment.NewLine string and make it work on Windows/Unix. I want to refactor the above to find word occurrences and then match for the word Environment.NewLine. The issue is that I am not able to refactor the following for a word (more specifically, change lastChar, newChar into an array): do { newChar = reader.ReadChar(); if (lastChar == '\r' && newChar == '\n') { lineCount++; } lastChar = newChar; } while (reader.PeekChar() != -1); Answer: Environment.NewLine returns the current platform's line terminator (\r\n on Windows), so it won't help you in parsing different line endings. If your task is to count the number of lines then it would be much easier just to do something like: private static int GetLineCount(string fileName) { return File.ReadLines(fileName).Count(); } The ReadLines method automatically parses different line endings.
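For comparison, here is a sketch of ending-agnostic line counting in Python; File.ReadLines in the C# answer performs the equivalent normalization internally:

```python
# Mixed Windows (\r\n) and Unix (\n) line endings in one buffer
data = "a,1\r\nb,2\nc,3\r\n"

# str.splitlines recognizes \r\n, \n, and \r alike, so no per-character
# lastChar/newChar state machine is needed; a trailing terminator does
# not produce a phantom empty line.
lines = data.splitlines()
assert lines == ["a,1", "b,2", "c,3"]
assert len(lines) == 3
```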
{ "domain": "codereview.stackexchange", "id": 2977, "tags": "c#" }
String theory - OPE and primary operators
Question: First, a disclaimer: I am new to Physics SE, and I am primarily a mathematician, not a physicist. I apologise in advance for the possibly poor quality of the question, any and thank you for your patience. I am currently trying to understand some basics of String Theory, based on the script by D. Tong, available at: http://www.damtp.cam.ac.uk/user/tong/string.html . I am badly confused about the OPE and some related issues. (For definition, see the mentioned script, pages 69 onwards; I don't quite know enough to know what to mention here). I do understand that a number of "operators" $O(z)$ are supposed to be "inserted" at various points $z$ of the complex plane, and the underlying physics is somehow supposed to be encoded in the singular parts of expressions $O_1(z) O_2(w)$ with $w \simeq z$. It is not quite clear to me how it can be that the singular parts somehow seem to be the only thing that matters, but this is probably too philosophical. I would like for some explanation of the so called primary operators (page 76). Firstly, what is the intuition behind those? Is there some physical entity that they represent? The definition says that $O(z)$ is primary, if it has the OPE with the stress tensor $T(z)$ of the form: $$ T(z)O(w) ~=~ \frac{h}{(z-w)^2}O(w) + \frac{\partial O(w)}{z-w}+\ldots$$ At the same time, it says that this is just saying that the OPE terminates at the second order, so it would sound as if it is always the case that if OPE terminates at the second order, the OPE has this particular form. Is this the case? In particular, it would seem that if $O_1(z)$ is primary with $h_1$ and $O_2(z)$ is primary with $h_2$, then $(O_1+O_2)(z)$ has the pole of order at most $2$, but does not have OPE of this form (or am I getting it wrong?). Answer: If you consider the $T(z)O(0)$ OPE, you want to write down all the singular terms. First, there may be singular terms that are more singular than $1/z^2$. 
If they're there, it means that $O(0)$ isn't a "tensor field". For example, $T(z)$ itself isn't a tensor field in CFTs with $c\neq 0$ because there is a $c/z^4$ term in the OPE. However, even if $O(0)$ is a tensor field and the $1/z^2$ and $1/z$ terms are the only ones that appear in the OPE, it doesn't mean that $O(0)$ is a primary operator. Quite likely, it is not one. What the primary operator Ansatz requires is that the term going like $1/z^2$ is a multiple of the original operator $O(0)$, the same one! So the primary operator is an "eigenstate" of the stress-energy tensor, in a sense. Most general superpositions of primary operators won't be primary operators. If you translate the primary operator to a state in the Hilbert space by the state-operator correspondence, it will be an eigenstate of $L_0$ and the highest state vector (a vector in a representation of the Virasoro algebra with the minimum possible eigenvalue of $L_0$ among the vectors in the representation). The absence of $1/z^3$ and higher singularities is equivalent to the corresponding state's being annihilated by $L_n$ for positive values of $n$; and then there's the extra "eigenstate" condition under $L_0$, one that can be seen in the coefficient of the $1/z^2$ term of the OPE. In some sense, it is unnatural to combine primary operators with different dimensions $h$ into superpositions: it violates the "dimensional analysis" because these operators have the units of ${\rm mass}^h$.
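In state language (a standard restatement of the last paragraph via the state-operator correspondence, not an addition by the original answerer), the primary condition reads:

```latex
L_n \,\lvert h\rangle = 0 \quad (n > 0), \qquad L_0 \,\lvert h\rangle = h \,\lvert h\rangle .
```

A sum $O_1+O_2$ of primaries with $h_1\neq h_2$ still has only $1/z^2$ and $1/z$ singularities in its OPE with $T$, but it fails the second (eigenstate) condition, so it is not itself primary; this is exactly the situation raised at the end of the question.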
{ "domain": "physics.stackexchange", "id": 7001, "tags": "string-theory, conformal-field-theory, operators" }
How to express the linear growth equation in Cosmology in terms of $\partial_a$
Question: I am trying to understand the following claim from a professor, in the context of studying the evolution of the fluid that fills the universe according to Cosmology: If we take the linear growth equation: $$\partial^2_\tau\delta(\vec{k},\tau)+\mathcal{H}(\tau)\partial_\tau\delta(\vec{k},\tau)-\dfrac{3}{2}\Omega_m(\tau)\mathcal{H}^2(\tau)\delta(\vec{k},\tau)=0\ \ \ \ \ \ \ \ \ \ (1)$$ and rewrite it in terms of derivatives with respect to the scale factor $a$, we get: $$-a^2\mathcal{H}^2\partial^2_a\delta+\dfrac{3}{2}\mathcal{H}^2[\Omega_m(a)-2]a\partial_a\delta+\dfrac{3}{2}\Omega_m\mathcal{H}^2\delta=0\ \ \ \ \ \ \ \ \ \ \ \ \ (2)$$ where the notation used is the following: $\tau$ denotes conformal time, where $d\tau=dt/a$ with $t$ being coordinate time and $a$ the scale factor that quantifies the expansion of the universe. $\rho=\bar{\rho}(1+\delta)$ is the total density of the cosmological fluid, where $\bar{\rho}$ is the background density and $\delta$ the density contrast. We are considering $\rho\simeq\rho_m$, that is, a matter-dominated universe. $\mathcal{H}=\dfrac{\partial_\tau a}{a}$ is the conformal Hubble parameter. I understand very well where equation (1) comes from, but I am having trouble getting to (2). 
In order to prove (2) from (1), the first thing I have done is to write the first and second order partial derivatives with respect to conformal time $\tau$ in terms of the partial derivatives with respect to the scale factor $a$, obtaining (I omit here the boring calculations): $$\begin{cases}\partial_\tau=a\mathcal{H}\partial_a \\ \partial^2_\tau=(\partial^2_\tau a)\partial_a+a^2\mathcal{H}^2\partial^2_a\end{cases}$$ If I use the second expression in order to rewrite (1), what I get is: $$\partial^2_\tau\delta+\mathcal{H}\partial_\tau\delta-\dfrac{3}{2}\Omega_m\mathcal{H}^2\delta=0\ \ \Rightarrow\ \ [(\partial^2_\tau a)\partial_a+a^2\mathcal{H}^2\partial^2_a]\delta+a\mathcal{H}^2\partial_a\delta-\dfrac{3}{2}\Omega_m\mathcal{H}^2\delta=0\ \ \Rightarrow$$ $$\Rightarrow\ \ -a^2\mathcal{H}^2\partial^2_a\delta-[(\partial^2_\tau a)+a\mathcal{H}^2]\partial_a\delta+\dfrac{3}{2}\Omega_m\mathcal{H}^2\delta=0$$ The middle term seems to be the problematic one. In order to recover (2), I would need to prove that: $$-\bigg[\dfrac{1}{a}\partial^2_\tau a+\mathcal{H}^2\bigg]=\dfrac{3}{2}\mathcal{H}^2(\Omega_m-2)$$ But... how? I suspect I might need to use the second Friedmann equation, since its left hand side looks suspiciously similar to $(1/a)(\partial^2_\tau a)$, but with the derivative with respect to coordinate time instead of conformal time: $$\dfrac{1}{a}\partial^2_t a=-\dfrac{4\pi G}{3}\bigg(\rho+\dfrac{3P}{c^2}\bigg)$$ but I don't know how to proceed to get the correct result. Help, please? Edit: I am now quite sure that equation (2) does in fact not contain any typos, since its Fourier space version is used several times later on in the notes.
Answer: We have the linear growth equation $$\partial_\tau^2 \delta+ \mathcal H \partial_\tau \delta - \frac{3}{2}\Omega_m(\tau) \mathcal H^2 \delta = 0\tag{A}$$ And we want to rewrite this in terms of derivatives with respect to scale factor $a$ $$-a^2 \mathcal H^2 \partial_a^2 \delta + \frac{3}{2} \mathcal H^2 \left[ \Omega_m(a) -2\right] a \partial_a \delta + \frac{3}{2}\Omega_m \mathcal H^2 \delta = 0 $$ Where we have that $$d\tau = dt / a\tag{B}$$ $$\mathcal H = \frac{1}{a} \frac{ \partial a} { \partial \tau } = \frac{ \partial a} { \partial t } =a H \tag{C}$$ Assume we are in a universe with only matter and cosmological constant. We can start with the usual Friedmann equation $$\frac{H^2} {H_0^2} = \Omega_{m,0} a^{-3} + \Omega_{\Lambda,0}\tag{1}$$ Rearrange so that $$\begin{align} 1 &= \Omega_{m,0} a^{-3} \frac{H_0^2} {H^2} + \Omega_{\Lambda,0} \frac{H_0^2} {H^2} \\ &\equiv \Omega_m(a)+\Omega_\Lambda(a)\tag{2} \end{align} $$ Where in the second line I have introduced the definition of what people usually mean when they write $\Omega_m(a)$ and $\Omega_\Lambda(a)$ $$\Omega_m(a) = \frac{\Omega_{m,0}} {a^3} \frac{H_0^2} {H^2}\tag{3a}$$ $$\Omega_\Lambda(a) = \Omega_{\Lambda,0} \frac{H_0^2} {H^2} \tag{3b}$$ Something we will need is $\partial_\tau \mathcal H$ so lets compute that first $$\begin{align} \partial_\tau \mathcal H &= \frac{ \partial } { \partial \tau } (a H)\\ &= H \frac{ \partial a} { \partial \tau } + a \frac{ \partial H} { \partial \tau } \\ \textrm{(Use Eq. (B))} &= H a \frac{ \partial a} { \partial t } + a^2 \frac{ \partial H} { \partial t } \\ &= H^2 a^2 \left(1 + \frac{1}{H^2} \frac{ \partial H} { \partial t } \right)\\ \textrm{(Use Eq. (C))} &= \mathcal H^2\left( 1 + \frac{1}{H^2}\frac{ \partial H} { \partial t } \right) \tag{4} \end{align} $$ We can do some manipulations to get $\partial_t H$. Start by taking the time derivative of the Friedmann equation (Eq. 
(1)) and let $\dot A = \partial A / \partial t$ $$\begin{align} \frac{2 H \dot H} {H_0^2} &= -3a^{-4}\Omega_{m,0} \frac{ \partial a} { \partial t } \\ \Rightarrow \dot H &= H_0^2 \Omega_{m,0}\times \left(-\frac{3 a^{-4}}{2 H} \frac{ \partial a} { \partial t } \right)\\ \textrm{(Use Eq. (3a))} &= H^2 a^3 \Omega_m(a) \times \left( - \frac{3a^{-4}} {2H} \frac{ \partial a} { \partial t } \right)\\ \dot H&= -\frac{3}{2}\Omega_m(a) H^2 \tag{5} \end{align} $$ Plugging Eq.(5) into Eq.(4) gives us $$\partial_\tau \mathcal H = \mathcal H^2 \left( 1 - \frac{3}{2}\Omega_m(a) \right)\tag{6}$$ Now we can rewrite $\partial_\tau$ in terms of $\partial_a$ $$\begin{align} \partial_\tau &= a \mathcal H\partial_a\tag{7a}\\ \partial_\tau^2 &= a \mathcal H \partial_a \left( a \mathcal H \partial_a \right)\\ &= (a \mathcal H)^2 \partial_a^2 + a\mathcal H^2 \partial_a + a^2\mathcal H(\partial_a \mathcal H) (\partial_a)\\ \textrm{(Use Eq.(7a))} &= (a \mathcal H)^2\partial_a^2 +a \mathcal H^2 \partial_a + a^2 \mathcal H \left( \frac{1}{a \mathcal H} \partial_\tau \mathcal H \right) \partial_a\\ \textrm{(Use Eq.(6))} &=(a \mathcal H)^2\partial_a^2 +a \mathcal H^2 \partial_a + a^2 \mathcal H \left( \frac{1}{a \mathcal H} \times \mathcal H^2\left[ 1 - \frac{3}{2}\Omega_m(a) \right] \right) \partial_a\\ \partial_\tau^2&= (a \mathcal H)^2 \partial_a^2 + a \mathcal H^2 \partial_a + a \mathcal{H} ^2 \left[ 1 - \frac{3}{2}\Omega_m(a) \right]\partial_a\tag{7b} \end{align} $$ Plugging in Eq.(7) into the linear growth equation (Eq.(A)) then gives us what we want $$\begin{align} 0&=\color{red}{ \partial_\tau^2 \delta}+ \color{blue}{ \mathcal H\partial_\tau \delta} - \frac{3}{2}\Omega_m \mathcal H^2 \delta\\ &=\color{red}{ (a \mathcal H)^2 \partial_a^2 \delta + a \mathcal H^2\partial_a \delta + a \mathcal H^2 \left[1 - \frac{3}{2}\Omega_m(a) \right]\partial_a\delta} + \color{blue} { a\mathcal H^2\partial_a \delta} - \frac{3}{2}\Omega_m \mathcal H^2 \delta\\ &= (a \mathcal H)^2 \partial_a^2 \delta 
-\frac{3}{2} a\mathcal H^2 \left[ \Omega_m(a) -2\right]\partial_a\delta -\frac{3}{2}\Omega_m \mathcal H^2\delta \end{align} $$ $$\boxed{- a^2 \mathcal H^2 \partial_a^2 \delta + \frac{3}{2} a\mathcal H^2 \left[ \Omega_m(a) -2\right]\partial_a\delta + \frac{3}{2}\Omega_m \mathcal H^2\delta = 0}$$
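The key intermediate result, Eq. (5), can be sanity-checked numerically. The sketch below (illustrative parameter values, flat matter-plus-Lambda universe as assumed in Eq. (1)) compares a finite-difference dH/dt against the closed form:

```python
import math

H0, Om0 = 70.0, 0.3      # illustrative values; Omega_Lambda fixed by flatness
OL0 = 1.0 - Om0

def H(a):                # Friedmann equation, Eq. (1)
    return H0 * math.sqrt(Om0 / a**3 + OL0)

def Omega_m(a):          # Eq. (3a)
    return Om0 / a**3 * (H0 / H(a))**2

a, da = 0.5, 1e-7
# Chain rule: dH/dt = (dH/da) * (da/dt), with da/dt = a * H(a)
dH_dt = (H(a + da) - H(a - da)) / (2 * da) * a * H(a)

# Eq. (5): dH/dt = -(3/2) * Omega_m(a) * H(a)^2
assert abs(dH_dt + 1.5 * Omega_m(a) * H(a)**2) < 1e-3 * abs(dH_dt)
```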
{ "domain": "physics.stackexchange", "id": 97927, "tags": "cosmology, space-expansion" }
Problem when compiling OpenCV with ROS
Question: I am using rosbuild to build a project with OpenCV and ROS. The OpenCV version is 2.4.9, built and installed from source. The ROS version is indigo. When I include the nonfree module of OpenCV (it has SIFT and SURF) in my project files, I get a linking error something like "libopencv.so.2.4.9: undefined reference to ocl::integral()" after I build the ROS package using the CMakeLists.txt. I noticed that the compilation information in the terminal shows the compiler is also trying to link to some existing OpenCV 2.4.8 libs in the directory /lib/x86_64-linux-gnu/libopencv*.so.2.4.8. I realized that OpenCV 2.4.8 might come with the cv_bridge package when I install it. It seems that CMake is trying to link to some libs of 2.4.8 and also 2.4.9, which leads to some undefined references. Another observation is that without ROS, I am able to compile a project with OpenCV (using pkg-config). It seems the problem is the co-existence of two versions of OpenCV. How can I make it link only to the version I choose? I used find_package in the CMakeLists file, but the compiler always tries to link to both versions. Thank you very much. Rui Originally posted by RuiH on ROS Answers with karma: 1 on 2015-03-21 Post score: 0 Original comments Comment by gvdhoorn on 2015-03-21: Just to make sure: are you really using rosbuild or catkin? Comment by RuiH on 2015-03-21: I am using rosbuild. Comment by Dinl on 2015-08-31: Hi, I have the same problem when I use the nonfree module in ROS Indigo, have you found a solution for this error? Answer: My system info: Ubuntu 14.04.3 LTS, OpenCV 2.4.13, ROS Jade. I have the same link error but fixed it by switching the following content in CMakeLists.txt: ${OpenCV_LIBRARIES} ${catkin_LIBRARIES} in target_link_libraries, ${OpenCV_INCLUDE_DIRS} ${catkin_INCLUDE_DIRS} in include_directories. OpenCV first, then catkin. Hopefully this approach can resolve your problem.
Originally posted by willSapgreen with karma: 61 on 2016-05-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by michalszczepanski on 2017-04-14: I had the same problem, these procedure solve it! Thanks
{ "domain": "robotics.stackexchange", "id": 21191, "tags": "ros, opencv, linking, ros-indigo" }
Syncing Move Group joint states with robot in simulation
Question: Hi, I have been trying to use a physics engine to do experiments instead of my real robot. As I was connecting MoveIt with the physics engine, one problem that I encountered was that move group is not changing the joint states according to my robot in simulation. All joint states are published with correct names to the topic "/yumi/joint_states", and I have joint_state_publisher and robot_state_publisher launched as well. <node name="yumi_joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher"> <rosparam param="source_list">["/yumi/joint_states"]</rosparam> </node> <node name="yumi_robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/> Just in case that the move group did not subscribe to the joint states correctly, I echoed "/move_group/monitored_planning_scene/robot_state/joint_state". By manipulating the robot in the physics engine, I am affirmative that the joint states fed into MoveIt are from the physics engine. Here is a sample result of the echo command: header: seq: 0 stamp: secs: 1611415632 nsecs: 622175931 frame_id: "yumi_base_link" name: [yumi_joint_1_l, yumi_joint_2_l, yumi_joint_7_l, yumi_joint_3_l, yumi_joint_4_l, yumi_joint_5_l, yumi_joint_6_l, gripper_l_joint, gripper_l_joint_m, yumi_joint_1_r, yumi_joint_2_r, yumi_joint_7_r, yumi_joint_3_r, yumi_joint_4_r, yumi_joint_5_r, yumi_joint_6_r, gripper_r_joint, gripper_r_joint_m] position: [6.463619115493202e-07, -0.039591364562511444, 0.04112724959850311, 0.009112260304391384, 8.23559503260185e-07, 0.01217526476830244, -3.0866064548717986e-07, 0.0, 0.0, -3.3755272852431517e-06, -0.03959168866276741, -0.04112748056650162, 0.009136912412941456, 5.3803930200047034e-08, 0.012184608727693558, 3.047164298664029e-09, 0.0, 0.0] velocity: [] effort: [] However, the robot in RVIZ always returns to its default shape after executing trajectories. And I don't see robot moving in RVIZ when I move it in the physics engine as well. 
Has anybody worked on connecting MoveIt with real robots? Is there anything else I need to do to get the joint states updated in MoveIt? I am using ROS melodic on Ubuntu 18.04. Any idea is appreciated! Thanks! Originally posted by Henning Luo on ROS Answers with karma: 58 on 2021-01-23 Post score: 0 Original comments Comment by fvd on 2021-01-27: Please clarify what "echo command" you are using, how you are starting up MoveIt (which launch files?), and any other details that might be relevant from your setup. Comment by fvd on 2021-01-27: Also, you say "the robot in Rviz always returns to its default shape after executing trajectories". Can the trajectories be executed? Does the robot in your physics engine move when you do? Are there errors in your terminal when you execute a trajectory? Comment by gvdhoorn on 2021-02-02: out of curiosity: which "physics engine" are you using? Comment by Henning Luo on 2021-04-01: I was using Unity3D. Sorry for the late reply. Hope this helps. Answer: Just realised that the joint states from the physics engine are faulty: an angular unit conversion (deg to rad) was applied twice, and hence all the joint states were close to zero, which led to the weird behaviour. Originally posted by Henning Luo with karma: 58 on 2021-02-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by fvd on 2021-02-01: Thanks for updating your question.
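The failure mode is easy to reproduce: each unnecessary application of a degrees-to-radians factor shrinks an angle by pi/180 (about 1/57.3), so a doubly converted joint value collapses toward zero, consistent with the tiny positions in the sample output above. A sketch (the joint value is hypothetical):

```python
import math

true_angle_rad = 0.7854   # ~45 degrees, a plausible joint position in radians

# Correct pipeline: the value is already in radians, so it is passed through
ok = true_angle_rad

# Buggy pipeline: a degrees->radians factor applied to data that is already
# in radians (equivalently, deg->rad applied twice to a degree value)
bad = math.radians(true_angle_rad)

assert abs(ok - 0.7854) < 1e-9
assert 0 < bad < 0.02   # the joint value has collapsed toward zero
```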
{ "domain": "robotics.stackexchange", "id": 36002, "tags": "moveit, ros-melodic, joint-state-publisher" }
Is Biotin water-soluble or fat soluble?
Question: Vitamin $\ce{B_7}$ or Biotin is a member of the vitamin B complex. Members of the vitamin B complex are usually known to be water soluble. However, this webpage mentions that biotin is water soluble. Again, according to another webpage, biotin is neither fat soluble, nor water soluble. So, is biotin soluble in fat or water? As different sources are contradictory, please provide a reference. Answer: Water solubility The CRC Handbook of Chemistry and Physics, 97th ed., Physical Constants of Organic Compounds, p. 3-48, describes biotin solubility: soluble in water, ethanol; slightly soluble in diethyl ether and chloroform. Descriptive terms for solubility, p. 1-38 (solubility – parts of solvent required for 1 part of solute (recalculated to mass%)): very soluble – less than 1 (> 50%) freely soluble – 1 to 10 (50% to 9.1%) soluble – 10 to 30 (9.1% to 3.2%) sparingly soluble – 30 to 100 (3.2% to 0.99%) slightly soluble – 100 to 1000 (0.99% to 0.1%) very slightly soluble – 1000 to 10000 (0.1% to 0.01%) practically insoluble, or insoluble – >= 10000 (< 0.01%) Aqueous Solubility and Henry’s Law Constants of Organic Compounds, p. 5-141, states that biotin solubility is 0.35 g kg−1 H2O (0.035 mass%) at 25 °C. This solubility value would be described as very slightly soluble. I don't know how to explain this inconsistency. Octanol/water partition coefficient .. However, low water solubility does not necessarily mean high fat solubility. A standard measure of lipophilicity is octanol/water partition coefficient (or its decadic logarithm, log P). Positive means more soluble in octanol (fats), negative means more soluble in water. 
Experimental log P values for fat-soluble vitamins (literature values): vitamin A +5.68 vitamin D3 +10.24 vitamin E +12.18 vitamin K0 +2.20 Typical water-soluble vitamin: vitamin C −1.85 Some B vitamins: vitamin B1 −3.93 vitamin B2 −1.46 vitamin B4 −0.09 vitamin B7 +0.39 vitamin B8 −1.68 vitamin B13 −0.38 Biotin (vitamin B7) is interestingly slightly lipophilic, i.e. a little more fat-soluble than water-soluble. Biological systems Even though biotin solubility in water is quite low, it is still classified as water soluble. Biotin (originally named vitamin H) was discovered in extracts from egg yolk or liver, which are rather fatty tissues, in methyl ester form, practically insoluble in water. On the other hand, biotin was originally not found in egg white, because egg white contains the protein avidin, which binds biotin so strongly that it is one of the strongest non-covalent interactions known. Biotin is often bound in dietary sources to proteins as an amide on lysine. By digestion, it is metabolised to biocytin, then to free biotin. So the fat- or water-solubility view may be somewhat simplistic.
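The CRC descriptive terms quoted above can be expressed as a small classifier to check the 0.35 g/kg figure (a sketch; the mass% thresholds are taken directly from the table):

```python
def classify(mass_percent):
    """Map a solubility in mass% to the CRC descriptive term."""
    if mass_percent > 50:
        return "very soluble"
    if mass_percent > 9.1:
        return "freely soluble"
    if mass_percent > 3.2:
        return "soluble"
    if mass_percent > 0.99:
        return "sparingly soluble"
    if mass_percent > 0.1:
        return "slightly soluble"
    if mass_percent > 0.01:
        return "very slightly soluble"
    return "practically insoluble"

# Biotin: 0.35 g dissolves per 1000 g of water at 25 degrees C
mass_pct = 0.35 / (1000 + 0.35) * 100   # ~0.035 mass%
assert classify(mass_pct) == "very slightly soluble"
```

This reproduces the answer's conclusion that the measured solubility sits in the "very slightly soluble" band, not the "soluble" band the handbook's descriptive entry suggests.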
{ "domain": "chemistry.stackexchange", "id": 10589, "tags": "organic-chemistry, solubility, chemical-biology" }
Where can you find the quantities of each amino acid of a particular protein or food?
Question: Taking a potato as an example: if I wanted to know how many µg or what % of each of its amino acids there are in 1 gram of pure potato protein, where can I find this information? Is there a freely accessible database online with these statistics? Or one that contains something for each protein from which this could be derived? (And if so, how?) Answer: There's a fantastic database available from the United States Department of Agriculture that includes almost 9,000 common foods, including their nutritional information. This database is searchable and available from the USDA Agricultural Research Service. Here is a link for the online searchable database. Within the database you are able to search for a number of parameters, including food composition, water content, individual amino acid content, fatty acid and total fat content, vitamins, minerals, phytonutrients, and more! One of the nice things is that you can ask it to rank the foods in order of nutrient composition. For example, ranking all the foods in the database in order of protein content or a specific amino acid (or whatever else you're interested in). It's a great database, especially for researchers that need a database of trusted nutrient data. Within the database you can search by nutrient or by food, whichever you are curious about. For example, as you mention in the question, I entered potato and in the second screen shot below you can see the listing of all the nutrients in the potato. These are listed as % weight per 100g of potatoes - with or without the skin. To get to the amino acid content breakdown, click on Full Report (All Nutrients) at the top.
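Once per-100 g values are pulled from such a database, the conversion the question asks about is simple arithmetic. A sketch with made-up numbers (the real values would come from the USDA entry for potatoes):

```python
# Hypothetical per-100 g values in the style of the USDA database entries
protein_g_per_100g = 2.05    # assumed total potato protein content
leucine_g_per_100g = 0.123   # assumed leucine content

# Fraction of the pure protein that is leucine, and micrograms of leucine
# per gram of potato protein
frac = leucine_g_per_100g / protein_g_per_100g
ug_per_g_protein = frac * 1e6

assert abs(frac - 0.06) < 1e-9          # 6% of the protein, in this example
assert round(ug_per_g_protein) == 60000
```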
{ "domain": "biology.stackexchange", "id": 4951, "tags": "food, nutrition, database, amino-acids" }
Construct a context-free grammar for a given set of words
Question: A few years back I saw a nice and simple algorithm that, given a (finite) set of words in some alphabet, builds a context-free grammar for a language that includes these words and is in some sense "natural" (e.g., the grammar doesn't produce all words over the alphabet). The algorithm is very simple: it has something like 3-4 rules for grammar transformation attempted on each new word. Any help in finding it would be appreciated. Answer: I think you might be referring to Sequitur. Edit: It has been suggested by other commenters that I leave more information for posterity. Fair point. Sequitur is an algorithm by Craig Nevill-Manning and Ian Witten (of Managing Gigabytes fame). It's linear time in the size of the input sequences (although so is the memory usage), and satisfies the twin properties of parsimony (no redundant rules are derived) and utility (every rule is useful). However, it can't (IIRC) discover arbitrary nesting structure. So a prototypical expression grammar, where an expression can contain an expression, is too much for it. But it will discover word boundaries in English text, and repeat regions in DNA. It's also useful for finding dictionaries for data compression (which is one of Witten's major research interests).
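Sequitur itself maintains its digram-uniqueness and rule-utility invariants online in linear time, which needs more machinery than fits here. A simpler offline cousin, Re-Pair (repeatedly replace the most frequent adjacent pair with a fresh nonterminal), illustrates the same flavour of grammar discovery; this is a sketch, not the Sequitur algorithm:

```python
from collections import Counter

def repair(seq):
    """Offline grammar induction: replace the most frequent adjacent
    pair with a fresh nonterminal until no pair repeats (Re-Pair)."""
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = "R%d" % next_id
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):  # greedy left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Derive the terminal string generated by a symbol."""
    if sym in rules:
        return "".join(expand(s, rules) for s in rules[sym])
    return sym

start, rules = repair(list("abcabcabc"))
assert "".join(expand(s, rules) for s in start) == "abcabcabc"
assert len(rules) >= 2   # the repeated "abc" structure was factored out
```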
{ "domain": "cs.stackexchange", "id": 2124, "tags": "algorithms, formal-languages, reference-request, formal-grammars, machine-learning" }
What is the depth of an image in Convolutional Neural Network?
Question: I am learning cs231n Convolutional Neural Networks for Visual Recognition. The lecture notes introduce the concepts of width, height, and depth. For example, in CIFAR-10, images are only of size 32x32x3 (32 wide, 32 high, 3 color channels). However, in another example, a volume of size [55x55x96] has 96 depth slices, each of size [55x55]. What does a depth of 96 mean? Does it mean 96 color channels? Why can we have more than 3 color channels? Answer: It means that the number of filters (a.k.a. kernels, or feature detectors) in the previous convolutional layer is 96. You may want to watch the video of the lecture, and in particular this slide, which mentions that a filter is applied to the full depth of your previous layer.
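The 55x55x96 volume in the question is the first convolutional layer of an AlexNet-style network, and the arithmetic can be checked directly. Sizes are assumed here (a 227x227x3 input, 96 filters of 11x11 applied across the full input depth, stride 4):

```python
def conv_out(size, kernel, stride, pad=0):
    # Standard output-size formula for one spatial dimension
    return (size + 2 * pad - kernel) // stride + 1

in_h, in_w, in_depth = 227, 227, 3   # RGB image: depth 3 = color channels
num_filters = 96                     # each filter spans the full depth of 3

out_h = conv_out(in_h, 11, 4)
out_w = conv_out(in_w, 11, 4)

# Each of the 96 filters yields one 55x55 activation map ("depth slice"),
# so the output depth is the filter count, not a number of color channels.
assert (out_h, out_w, num_filters) == (55, 55, 96)
```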
{ "domain": "datascience.stackexchange", "id": 764, "tags": "neural-network, image-classification, convolutional-neural-network, computer-vision" }
Swift/iOS: Subclassing UILabel and setting properties
Question: I'm writing this subclass to add an icon to a UILabel. It works, but I'm wondering if this is the best/cleanest way to do it. Do you see any improvements? Should maybe text and image properties be set using a init() method? import UIKit class IconLabel: UILabel { override var text: String? { set { nameLabel.text = newValue } get { return nameLabel.text } } var image: UIImage? { didSet { icon.image = image?.withRenderingMode(.alwaysTemplate) } } private lazy var icon: UIImageView = { let imageView = UIImageView() imageView.tintColor = .white imageView.translatesAutoresizingMaskIntoConstraints = false return imageView }() private lazy var nameLabel: UILabel = { let label = UILabel() label.translatesAutoresizingMaskIntoConstraints = false label.font = Theme.regular(size: .tiny) return label }() override init(frame: CGRect) { super.init(frame: frame) self.backgroundColor = Theme.supportLightGrayColor self.layer.cornerRadius = 5.0 self.clipsToBounds = true self.addSubviewsAndConstraints() } required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } private func addSubviewsAndConstraints() { self.addSubview(icon) self.addSubview(nameLabel) self.directionalLayoutMargins = NSDirectionalEdgeInsets(top: 5.0, leading: 5.0, bottom: 5.0, trailing: 5.0) icon.leftAnchor.constraint(equalTo: self.layoutMarginsGuide.leftAnchor).isActive = true icon.bottomAnchor.constraint(equalTo: self.layoutMarginsGuide.bottomAnchor).isActive = true icon.heightAnchor.constraint(equalTo: self.layoutMarginsGuide.heightAnchor).isActive = true icon.widthAnchor.constraint(equalTo: icon.heightAnchor).isActive = true nameLabel.leftAnchor.constraint(equalTo: icon.rightAnchor, constant: 5.0).isActive = true nameLabel.centerYAnchor.constraint(equalTo: icon.centerYAnchor).isActive = true nameLabel.rightAnchor.constraint(equalTo: self.layoutMarginsGuide.rightAnchor).isActive = true } } Answer: You are using IconLabel to act as a container, it contains another 
UILabel and UIImageView. There is no need for IconLabel to be a UILabel; you can just have your label nameLabel and your image view icon in a UIView container. It means that you need to change: class IconLabel: UILabel { to something like class IconLabelView: UIView { Another note: to keep your custom view generic, it's better to set up the background color outside the class, like: let customView = IconLabelView() customView.backgroundColor = Theme.supportLightGrayColor Also, I think it's better to have two methods for the subview setup and the constraints instead of one: private func addSubviews() and private func addConstraints()
{ "domain": "codereview.stackexchange", "id": 29184, "tags": "object-oriented, swift, ios" }
Real image with converging lens?
Question: I read in an AQA GCSE book that: A real image is formed by the converging lens if the object is further away from the principal focus/focal point. I did this experiment in class: Here is my table of results: However, from the quotation above, u needs to be larger than v. So why in my experiment did I get that u is 15 cm and v is 30 cm? Here is what I did in my experiment: A crossed wire object was placed in front of the ray box. After that, the converging lens was placed 15 cm in front of the ray box. The screen was moved until a clear, real image was visible. The distance from the lens to the screen was measured and recorded. This process was repeated with the following measurements (from the ray box to the lens): 20 cm, 25 cm, 30 cm & 50 cm. Thanks in advance Answer: The equation you need to get the distance of the image from the object and focal length is: $$ \frac{1}{u} + \frac{1}{v} = \frac{1}{f} $$ where $f$ is what your question describes as the principal focus/focal point. A bit of playing around with this should convince you that if $u = 2f$ then $v = 2f$ i.e. $u$ and $v$ are identical. If $u > 2f$ then $v < 2f$ and $u$ is greater than $v$. Conversely if $u < 2f$ then $v > 2f$ and $u$ is less than $v$. So, as your data shows, $v$ can be greater or less than $u$. However for the image to be real $u$ must be greater than $f$. If you put in a value of $u$ less than $f$ you'll find $v$ comes out negative, which with the sign convention I've used means the image is to the left of the lens i.e. a virtual image.
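A quick numeric check of the answer's point, assuming a focal length of f = 10 cm (an assumption consistent with the reported u = 15 cm, v = 30 cm pair):

```python
# Thin-lens check of the experiment: 1/u + 1/v = 1/f.
def image_distance(u, f):
    """Solve 1/u + 1/v = 1/f for v; a negative v means a virtual image."""
    return 1.0 / (1.0 / f - 1.0 / u)

f = 10.0  # cm, assumed
for u in (15, 20, 25, 30, 50):
    v = image_distance(u, f)
    print(f"u = {u:2d} cm -> v = {v:5.1f} cm  ({'v > u' if v > u else 'v <= u'})")
```

With u = 15 cm (between f and 2f) the image lands at v = 30 cm, so v > u there is exactly what the lens equation predicts; the crossover happens at u = 2f = 20 cm.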
{ "domain": "physics.stackexchange", "id": 32065, "tags": "visible-light, refraction" }
The correct place to implode array - MVC and Repositories concept
Question: I am writing an application with the repository concept in PHP with the Laravel framework. In my controller I have this method: /** * Update the specified resource in storage. * PUT /locais/{id} * * @param int $id * @return Response */ public function update($id) { $data = Input::all(); $data['str_categories'] = implode(',', $data['categories']); $this->place->update($id, $data); return Redirect::route('locais.index'); } I am concerned about this line in the controller: $data['str_categories'] = implode(',', $data['categories']); Can this line be in the controller method? Or is this a responsibility of the places repository? Answer: Logic or data transformation should usually be moved out of the controller. Your places repository seems like a better choice for the implode. After all, one of the major purposes of repositories is to decouple data handling and storage as much as possible from other parts of the application, including controllers.
{ "domain": "codereview.stackexchange", "id": 14979, "tags": "php, mvc, repository, laravel" }
Identify an insect (beetle?)
Question: Body size: 30mm Location: Poland, in the middle of a large meadow It was flying and landed directly on my cap :-) After consulting wikipedia it look similar to some of Tenebrionoidea but can't find an exact match. Help! Answer: This is a Buprestid beetle, and I think the species is Chalcophora mariana. It doesn't look all that coppery in your photos, but the large size and shape is perfect. Their larvae live mostly on pines, so perhaps there were pines close to your meadow? (Image from Wikipedia)
{ "domain": "biology.stackexchange", "id": 7287, "tags": "species-identification, entomology" }
Several spring coupled: can such a movement happen or is it only theoretical?
Question: We have 6 particles. We couple them 2 by 2 with a spring of strength $K$ (as in the picture below). We then have 3 harmonic oscillators. Then we couple each oscillator to the next by a spring of strength $S\ll K$ (i.e. the strength is much smaller than $K$... We call it a weak coupling). But I think it's not important for my question. Anyway, the situation is shown in picture 1 below. My problem is that I really can't imagine such a movement. For example, in picture 2 (where the springs are attached to a wall), I really can imagine the movement, but when the system is not connected to a wall and is free, as in picture 1, I can't see what the movement would look like. Does anyone have an example from nature? Or a simulation? Or can anyone tell me where such a movement happens in nature? Picture 1 Picture 2 Answer: I was interested in this question, so I built a model using Mathematica 11.3. Here is an example of the movement of 12 particles of mass $m=1$ connected with springs of different strength coefficients $k_1=990,k_2=10$. The particles are initially located on a circle. Then, in the process of movement, a hexagonal structure is formed. The numbers correspond to particles that were numbered in the initial state. The numbers above the pictures correspond to the time. After several periods of oscillation, the hexagonal structure is transformed into a less symmetrical one, and the movement becomes nearly chaotic. The movement of the system is shown below. The case of ten particles is also interesting. A pentagram is formed from 10 particles in the process of movement.
{ "domain": "physics.stackexchange", "id": 55674, "tags": "newtonian-mechanics, harmonic-oscillator, spring, coupled-oscillators" }
Why couldn't the decay $\pi^- \to e^- + \bar\nu_e$ occur if electrons were massless?
Question: If we assume that electrons (just like neutrinos) are massless, why can’t the decay $\pi^- \rightarrow e^- + \bar{\nu}_e$ occur under the weak interaction? Answer: Since the spin of the charged $\pi$ is $0$, the spins of the daughter particles need to add up to $0$ as well, i.e., their spins need to be anti-parallel. That's nothing else than the conservation of angular momentum. Assuming the anti-neutrino to be massless, it is always right-handed. Right-handed means that the momentum vector and the spin vector are parallel, while left-handed means that the momentum vector and the spin vector are anti-parallel. This is well defined for a massless particle, since it travels at the speed of light, and there is no inertial frame in which the momentum vector would switch direction. Since the anti-neutrino is right-handed, the electron would need to be right-handed as well to conserve linear momentum and angular momentum (spin). But the decay of the $\pi$ happens via the weak interaction, i.e. via the $W$ boson. Since the $W$ boson is known to couple only to left-handed particles, the decay would be forbidden. Since the electron is not massless, it has a small left-handed component. The decay is suppressed, but not forbidden. The heavier muon has a larger left-handed component, and its decay is less suppressed. Hence, pions usually decay into muons, although they have less phase space available.
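The suppression can be made quantitative. The tree-level ratio of the two decay widths is a standard result, and the $(m_\ell)^2$ prefactor is exactly the "left-handed component" of the answer: it vanishes for a massless electron, forbidding the decay outright.

```python
# Helicity suppression of pi -> e nu relative to pi -> mu nu:
#   Gamma(e)/Gamma(mu) = (m_e/m_mu)^2 * ((m_pi^2 - m_e^2)/(m_pi^2 - m_mu^2))^2
m_pi, m_mu, m_e = 139.57, 105.66, 0.511   # masses in MeV

ratio = (m_e / m_mu) ** 2 * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2
print(f"Gamma(pi->e)/Gamma(pi->mu) ~ {ratio:.2e}")   # ~1.28e-4
```

So although the electron channel has far more phase space, it is suppressed by roughly four orders of magnitude, which is why pions overwhelmingly decay to muons.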
{ "domain": "physics.stackexchange", "id": 28022, "tags": "particle-physics, elementary-particles, weak-interaction" }
Fractional Distillation Column Temperature
Question: The fractional distillation column becomes cooler with height (because the heat source is at the bottom) and has different points at which substances are tapped, but how do we know that the temperature at position Y is going to be X? For example, fractional distillation of crude oil taps out several different substances, e.g. methane, gasoline, naphtha, etc. How do we know the right points in the column at which the temperature is going to match their boiling points for them to condense there? Answer: That's where mathematical models of distillation columns come in. These models solve the non-linear mass balance equations, energy balance equations, and phase equilibrium equations for both the liquid and vapor streams to and from each and every tray of the column, and for the reboiler and the condenser. They include feed stream addition on indicated trays, and product removal from the condenser, reboiler, and indicated trays. They also take into account reflux from the condenser to the top tray. In the end, the models predict the temperature profile and species concentration profiles in both the liquid and vapor streams as a function of tray number. Developing and applying such models is one way in which we Chemical Engineers earn our keep. This falls within the realm of Chemical Process Modeling.
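As a toy illustration of what such models compute (the relative volatility and starting composition below are illustrative numbers, not real crude-oil data): for a binary mixture at total reflux with constant relative volatility, the phase-equilibrium relation alone already shows how composition — and hence the local boiling temperature — shifts tray by tray up the column.

```python
# Toy tray-by-tray calculation: at total reflux the liquid on each tray has
# the composition of the vapor rising from the tray below, so stepping the
# equilibrium relation y = alpha*x / (1 + (alpha-1)*x) climbs the column.
alpha = 2.5   # relative volatility of the light component (illustrative)
x = 0.10      # liquid mole fraction of light component in the reboiler

def equilibrium_vapor(x, alpha):
    """Vapor composition in equilibrium with liquid x (constant alpha)."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

for tray in range(1, 8):
    y = equilibrium_vapor(x, alpha)
    print(f"tray {tray}: liquid x = {x:.3f} -> vapor y = {y:.3f}")
    x = y     # total reflux: liquid above equals the vapor from below
```

The light fraction climbs monotonically toward 1, so the local boiling temperature falls with height; each product is tapped from the tray whose temperature sits in its boiling range. Real columns replace this toy relation with the full tray-by-tray mass, energy, and equilibrium balances the answer describes.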
{ "domain": "physics.stackexchange", "id": 51432, "tags": "thermodynamics" }
Are proteins a different shape in space?
Question: Is the shape of a protein affected by gravity? In space, will the shape of a protein be different from what it is on Earth? If the structure and shape are in fact affected, would it be enough of a change to cause a change in protein function? I've found studies from NASA saying that when protein crystals are grown in space, they are larger and form more perfect crystals than on Earth, but I want to know about the effect of gravity on proteins that have already been formed. Thanks! Answer: Proteins are not built to be one way up or the other as they flow around and surround cells, so the sea-level-to-space gravity gradient will not be a major cause of change to proteins at different altitudes. Pressure is more of a factor for proteins, and it has been studied extensively, both for protein shape and for protein interactions with ions and other molecules. http://www.annualreviews.org/doi/abs/10.1146/annurev.pc.44.100193.000513?journalCode=physchem People can be born, raised, and live in excellent health at altitudes of 4000 meters; there are even people living at around 4800 meters, where the atmospheric pressure is roughly halfway between sea level and the vacuum of space. A depth of 100 m stresses the body more through nitrogen toxicity and other severe effects than through changes to proteins, but deep-sea fish do have specially adapted proteins, because depths of 10000 meters require different proteins than the surface.
{ "domain": "biology.stackexchange", "id": 7825, "tags": "proteins, protein-structure, protein-folding, microgravity" }
Matrix chain multiplication and exponentiation
Question: If I have two matrices $A$ and $B$, of dimensions $1000\times2$ and $2\times1000$, respectively, and want to compute $(AB)^{5000}$, it's more efficient to first rewrite the expression as $A(BA)^{4999}B$ and only then evaluate numerically, because $AB$ is of dimension $1000\times1000$ but $BA$ is of dimension $2\times2$. I want to solve a generalized version of this problem. Is there a reasonably efficient algorithm (not brute force) to optimize an expression containing: Free matrix variables of known dimensions Products of arbitrary subexpressions Arbitrary subexpressions raised to natural power ... so that it takes the least amount of work to evaluate numerically, after substituting the free matrix variables with concrete matrix values? The matrix chain multiplication problem is a special case of my problem. Edit: This is a tentative answer. It seems intuitively right to me, but I have no proof that it's correct. If it turns out to be correct, I'm still interested in the proof. (If it's not correct, of course, please do correct me.) For every product raised to a power, say, $(A_1 A_2 \ldots A_k)^n$, consider every cyclic permutation of the factors: $(A_1 A_2 \ldots A_k)^n$ $A_1 (A_2 \ldots A_k A_1)^{n-1} A_2 \ldots A_k$ $A_1 A_2 (A_3 \ldots A_k A_1 A_2)^{n-1} A_3 \ldots A_k$ ... $A_1 A_2 \ldots A_{k-1} (A_k A_1 A_2 \ldots A_{k-1})^{n-1} A_k$ ... recursively. Each power is to be calculated using exponentiation by squaring (obviously), and all other products are to be calculated using the optimal order returned by the matrix chain multiplication algorithm. Edit: The idea outlined in my previous edit is still somewhat nonoptimal. The exponentiation by squaring algorithm actually evaluates expressions of the form $K A^n$ or $A^n K$, where $K$ isn't necessarily the identity matrix. But my algorithm doesn't consider the possibility of using the exponentiation by squaring algorithm with $K$ not equal to the identity matrix. 
Answer: Disclaimer: The following method has not been rigorously proven to be optimal. An informal proof is provided. The problem reduces to finding the most efficient ordering when considering the square of the product. For example, when looking at, e.g., $(ABC)^{50}$, we only need to optimally solve $(ABC)^2$ since this expands to $ABCABC$. No useful ordering information is added by concatenating $ABC$ again. The intuition here is that since the problem of optimal ordering can be solved bottom-up, higher orderings consisting of more elements using the same matrices are irrelevant. Finding the best ordering of $ABCABC$ reduces to the Matrix Chain Multiplication problem. After finding an optimal ordering, apply exponentiation to the triplet (n-tuple generally) in the ordering. As an example, if the optimal ordering for the square is $A(B(CA))BC$, the solution to the initial problem is $A(B(CA))^{49}BC$. In summary: 1) The first step in solving $(A_1 A_2 \cdots A_n)^m$ is to solve $(A_1 A_2 \cdots A_n)^2$. 2) Solving $(A_1 A_2 \cdots A_n)^2$ is best approached as an instance of the Matrix Chain Multiplication problem. 3) Using the n-tuple ordering $G$ from the solution in (2) will give us the solution to (1) as some flavor of $A_1 \cdot A_2 \cdot G^{m-1} \cdot A_n$ (note that any other groupings from solving (2) should be applied as well). Informal proof Considering the simplest case using two matrices, $(AB)^n$, we note that $A$ and $B$ have dimension $X \times Y$ and $Y \times X$ respectively. Any product using $A$ and $B$ has one of the following dimensions: $X \times Y$ $Y \times X$ $Y \times Y$ $X \times X$ We have either $X < Y$ or $Y ≤ X$. Assumption 1a): $X < Y$ $AB$ has dimension $X \times X$, and this ordering is guaranteed to be optimal from a bottom-up approach. Any other configuration of $A$ and $B$ is either equally good, or worse. Thus, the problem is optimally solved as $(AB)^n$. Assumption 1b): $Y ≤ X$ $BA$ has dimension $Y \times Y$.
This is the optimal ordering for all products involving $A$ and $B$. Thus, the solution is optimally found as $A(BA)^{n-1}B$. This concludes the proof, and we have only looked at the two orderings found in $ABAB$, the square problem. Using more matrices, the argument is similar. Perhaps an inductive proof is possible? The general idea is that solving the MCM for the square will find the optimal size for the operations with all involved matrices considered. Case study: julia> a=rand(1000,2); julia> b=rand(2,1000); julia> c=rand(1000,100); julia> d=rand(100,1000); julia> e=rand(1000,1000); julia> @time (a*b*c*d*e)^30; 0.395549 seconds (26 allocations: 77.058 MB, 1.58% gc time) # Here I use an MCM solver to find out the optimal ordering for the square problem julia> using MatrixChainMultiply julia> matrixchainmultiply("SOLVE_SQUARED", a,b,c,d,e,a,b,c,d,e) Operation: SOLVE_SQUARED(A...) = begin # none, line 1: A[1] * (((((A[2] * A[3]) * (A[4] * (A[5] * A[6]))) * (A[7] * A[8])) * A[9]) * A[10]) end Cost: 6800800 # Use the ordering found, note that exponentiation is applied to the group of 5 elements julia> @time a*(((((b*c)*(d*(e*a)))^29*(b*c))*d)*e); 0.009990 seconds (21 allocations: 7.684 MB) # I also tried using the MCM for solving the problem directly julia> @time matrixchainmultiply([30 instances of a,b,c,d,e]); 0.094490 seconds (4.02 k allocations: 9.073 MB)
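As an independent numeric check of the question's original two-matrix example (with n reduced from 5000 to 50 so the entries of the naive form stay within double precision), the two associations give the same matrix, but the reordered one only ever exponentiates a 2x2 matrix:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((1000, 2))
B = rng.random((2, 1000))
n = 50   # kept small: entries of (AB)^n grow too fast for float64 otherwise

t0 = time.perf_counter()
naive = np.linalg.matrix_power(A @ B, n)               # powers of 1000x1000
t1 = time.perf_counter()
clever = A @ np.linalg.matrix_power(B @ A, n - 1) @ B  # powers of 2x2
t2 = time.perf_counter()

print(f"naive {t1 - t0:.3f}s vs reordered {t2 - t1:.6f}s")
print("same result:", np.allclose(naive, clever))
```

The reordered form is faster by several orders of magnitude while agreeing numerically, mirroring the Julia case study above.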
{ "domain": "cs.stackexchange", "id": 9074, "tags": "optimization, dynamic-programming, linear-algebra" }
If the spacetime singularity at a black hole's event horizon is an artificial one then how can it be real?
Question: If the metric belonging to the spacetime of a black hole is expressed as the Schwarzschild metric, it turns out that there is a coordinate singularity at the event horizon. If we use an alternative metric (for example one of these) then the expression of spacetime is regular. How can this be if time stops at the event horizon only in Schwarzschild coordinates? One thing I don't fully understand either is why the volume of a black hole, when calculated in the Schwarzschild metric, is zero, while it's non-zero when calculated in another coordinate system. How can this be? Answer: As a fairly close analogy for what happens with Schwarzschild coordinates, suppose you replace Cartesian $(x,y)$ coordinates for the Euclidean plane with $(x,z)$, where $z=y/x$. There is a bijection between these coordinate systems for all points with $x\ne 0$, but the points with $x=0, y\ne 0$ are not covered by $(x,z)$ coordinates at all, while the origin is covered redundantly (all $(0,z)$ with $z\in\mathbb R$ map to the origin). If you plot in $(x,z)$ coordinates a line (geodesic) that passes above or below the origin, it will appear to go off to infinity at $x=0$. It has to, because there's a "barrier" at $x=0$ in $(x,z)$ coordinates. The barrier represents the origin, the line doesn't go through the origin, so it has to go around the barrier, but the barrier is of infinite length, so the line has to go to infinity. In Schwarzschild coordinates, ignoring the angular coordinates, the line $r=r_s$ maps redundantly to the origin $X=T=0$ of Kruskal-Szekeres coordinates. When you take $r\to r_s$ while keeping $t$ finite, you do not approach the event horizon (which is the ray $X=T>0$), but rather the origin.* Geodesics that approach $r=r_s$ go off to infinity because they are heading toward the real event horizon, which is at infinity in these coordinates. It's not due to time stopping at the horizon. 
It's entirely due to the $t$ coordinate exhibiting pathological behavior near the horizon. I've never heard the claim that the interior volume in Schwarzschild coordinates is zero. I found this paper which says it's zero, but their argument involves, no joke, integrating from $r_s$ to $r_s$. I would rather say that the interior volume is just undefined, because the constant-$t$ interior surface isn't spacelike. Not long ago I read a paper arguing that the interior volume should rather be taken to be the derivative with respect to $Δt$ of an interior four-volume of thickness $Δt$. This has the nice property of giving the same result (namely $\frac43\pi r_s^3$) in all static coordinate systems, including Schwarzschild. Unfortunately, I now can't find the paper. * $X=T=0$ is part of the event horizon of the maximally entended Schwarzschild vacuum solution, but it's not a part of the physically relevant part of it, which is the half "plane" $X+T>0$. The origin is more or less the sphere at $t=-\infty,r=r_s$ in Gullstrand–Painlevé and similar coordinate systems, which realistically is long before the black hole formed.
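A quick numeric check of the first point, using the standard exterior ($r > r_s$) Kruskal–Szekeres transformation in units where $r_s = 1$:

```python
import math

def kruskal(r, t, rs=1.0):
    """Exterior Kruskal-Szekeres coordinates (X, T) from Schwarzschild (r, t)."""
    f = math.sqrt(r / rs - 1.0) * math.exp(r / (2.0 * rs))
    return f * math.cosh(t / (2.0 * rs)), f * math.sinh(t / (2.0 * rs))

t = 5.0                                 # any FIXED Schwarzschild time
for r in (2.0, 1.1, 1.01, 1.0001):
    X, T = kruskal(r, t)
    print(f"r = {r:<7} ->  X = {X:.4f}, T = {T:.4f}")
# Both X and T shrink toward (0, 0) as r -> rs at finite t, i.e. toward the
# bifurcation point of the horizon, not toward the horizon ray X = T > 0.
```

This is the statement made above in numbers: taking $r \to r_s$ while holding $t$ finite approaches the Kruskal origin, and reaching the actual horizon ray requires $t \to \infty$.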
{ "domain": "astronomy.stackexchange", "id": 5613, "tags": "black-hole" }
Why is it that neither silver nor copper react with a strong acid?
Question: $\ce{Fe}$, $\ce{Mg}$, $\ce{Ni}$, $\ce{Pb}$, $\ce{Sn}$, and $\ce{Zn}$ all react when they (in solid form) are submerged in an acid solution with the presence of a strong acid like $\ce{HCl}$, but silver and copper do not. I have examined the electronegativity values and both $\ce{Ag}$'s and $\ce{Cu}$'s electronegativity value is only different from $\ce{Ni}$ by $\mathrm{0.1}$ on the Pauling scale. I don't see a pattern; the only thing I see which is common between copper and silver is that they both may have an oxidation number of +1. So, why is it that $\ce{Ag}$ and $\ce{Cu}$ are immune to the H cations in the acidic solution? Answer: This will involve some degree of hand-waving. Let me first limit the analysis to the transition metals so that more effective comparisons can be made. The reaction between a metal and an acid requires the metal to be oxidized, i.e. lose electrons to the acid, creating positively-charged metal ions in solution. The general idea is that as you go along the transition metals in a period from left to right, there is an increase in effective nuclear charge felt by the valence $ns$ electrons (due to the additional $(n-1)d$ subshell electrons poorly shielding the $ns$ electrons from the nucleus) making the valence electrons harder to remove. This trend can be seen to some extent in the increase in ionization energies of the elements, for example, and also in the standard reduction potentials for metal cations of same charge (they become more positive, in other words making oxidation and hence attack by acid more difficult). Thus, metals at the left of the transition metals tend to be more easily attacked by acids than the metals at the right. 
There is a rather interesting exception though: zinc strongly breaks the trend, being far easier to oxidize than copper, the element before it, even though zinc has a much higher ionization energy, clearly indicating its electrons are held tighter (cadmium also exhibits this anomaly, to a milder extent). What gives? The thing is that the tendency for a metal to oxidize is the result of several combined factors, one of which is also how strongly bound the solid metal is in the first place. If a metal contains atoms which are strongly bonded to each other, then oxidation tends to be more unfavourable as it would require these bonds to break. When comparing copper and zinc, it is clear that the latter has far weaker metallic bonding in the solid (their melting/boiling points are $1360\ \rm{K}$ / $2835\ \rm{K}$ and $695\ \rm{K}$ / $1180\ \rm{K}$, respectively). Thus, if you include the energy necessary to separate the atoms from the solid before ionizing them, it turns out that the process is easier for zinc than copper. Comparing different rows of the transition metals is a bit less clear, in part because not all elements have ions of the same charge which can be directly compared. The general tendency is that the metals become tougher to oxidize as you go down the rows. For the transition metals in the sixth and seventh period this is probably a consequence of lanthanide/actinide contraction and relativistic effects, which decrease the energy of the valence $ns$ orbitals. Now for some other elements. Magnesium is very reactive towards acids because it is both a metal containing relatively weak bonds and because its ionization energy is comparatively low, being an alkaline earth element. For tin, the metallic bonding is not weak, but its ionization energy is not too high, so it will oxidize in acids, though not as strongly. 
Lead is somewhat less reactive to acids, possibly because its oxidation tends to stop at +2 instead of tin's +4, creating a significantly softer cation which isn't as well solvated by water.
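The overall trend can be read off one table. These are standard reduction potentials quoted from memory (rounded; $\ce{M^2+/M}$ couples except $\ce{Ag+/Ag}$): a metal displaces hydrogen from a non-oxidizing acid like $\ce{HCl}$ roughly when its potential sits below $E^\circ(\ce{H+/H2}) = 0\ \mathrm{V}$.

```python
# Standard reduction potentials in volts vs SHE (rounded textbook values):
E0 = {
    "Mg": -2.37, "Zn": -0.76, "Fe": -0.44, "Ni": -0.26,
    "Sn": -0.14, "Pb": -0.13, "Cu": +0.34, "Ag": +0.80,
}

# M dissolves in HCl only if reducing H+ (E0 = 0 V) by M is downhill:
for metal, e in sorted(E0.items(), key=lambda kv: kv[1]):
    verdict = "dissolves in HCl" if e < 0 else "untouched by HCl"
    print(f"{metal:2s}  {e:+.2f} V  {verdict}")
```

Exactly the six metals listed in the question fall below 0 V, while copper and silver sit above it — the net result of the ionization, atomization, and solvation factors discussed above.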
{ "domain": "chemistry.stackexchange", "id": 8505, "tags": "acid-base, reaction-mechanism, redox" }
Bloom release failure
Question: Hi, I can't understand what is wrong when I try to release with bloom: In my repo https://github.com/lagadic/vision_visp I have done for groovy branch: $ catkin_generate_changelog $ catkin_tag_changelog $ catkin_prepare_release that has created the new tag groovy-0.7.4 Then to release this new version to https://github.com/lagadic/vision_visp-release I did: $ bloom-release vision_visp --track groovy --rosdistro groovy ... Everything up-to-date ==> Releasing 'vision_visp' using release track 'groovy' ==> git-bloom-release groovy Processing release track settings for 'groovy' Checking upstream devel branch for package.xml(s) Cloning into '/tmp/tmpePvEhb/upstream'... remote: Reusing existing pack: 3388, done. remote: Counting objects: 191, done. remote: Compressing objects: 100% (191/191), done. remote: Total 3579 (delta 119), reused 3 (delta 0) Receiving objects: 100% (3579/3579), 1.35 MiB | 527.00 KiB/s, done. Resolving deltas: 100% (2161/2161), done. Checking connectivity... done. Looking for packages in 'groovy' branch... found 6 packages. Detected version '0.7.4' from package(s): ['visp_camera_calibration', 'visp_hand2eye_calibration', 'vision_visp', 'visp_auto_tracker', 'visp_tracker', 'visp_bridge'] Executing release track 'groovy' ==> bloom-export-upstream /tmp/tmpePvEhb/upstream git --tag groovy-0.7.4 --display-uri https://github.com/lagadic/vision_visp.git --name upstream --output-dir /tmp/tmpruKz1m Checking out repository at 'https://github.com/lagadic/vision_visp.git' to reference 'groovy-0.7.4'. Exporting to archive: '/tmp/tmpruKz1m/upstream-groovy-0.7.4.tar.gz' md5: 746ef2c87d8cd72fc7031992d6e05604 ==> git-bloom-import-upstream /tmp/tmpruKz1m/upstream-groovy-0.7.4.tar.gz --release-version 0.7.4 --replace The latest upstream tag in the release repository is 'upstream/0.7.3'. Importing archive into upstream branch... The package(s) in upstream are version '0.7.3', but the version to be released is '0.7.4', aborting. 
<== Error running command '['/usr/bin/git-bloom-import-upstream', '/tmp/tmpruKz1m/upstream-groovy-0.7.4.tar.gz', '--release-version', '0.7.4', '--replace']' Release failed, exiting. Many thanks if you have some ideas ? Best regards Fabien Originally posted by Fabien Spindler on ROS Answers with karma: 126 on 2014-07-02 Post score: 1 Answer: The error is: The package(s) in upstream are version '0.7.3', but the version to be released is '0.7.4', aborting. This means that version in the package.xml(s) in the "development branch" do not match the version in the package.xml(s) of the tag it is trying to release. From you output I can see that the development branch is groovy, which looking at your repository has version 0.7.4: https://github.com/lagadic/vision_visp/blob/groovy/vision_visp/package.xml#L3 Then looking at the output I can see that it is trying to release from the tag groovy-0.7.4 which has version 0.7.3 in it's package.xml(s): https://github.com/lagadic/vision_visp/blob/groovy-0.7.4/vision_visp/package.xml#L3 So somehow you have gotten version 0.7.3 tagged as groovy-0.7.4. Originally posted by William with karma: 17335 on 2014-07-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18484, "tags": "bloom-release" }
Bird in Taipei knocking on the window glass - juvenile hawk? white throat, white beak(?) striped breast, black head(?) brownish body(?), not small
Question: I received these photos from a friend today - a slightly larger than average size bird tapping on the window glass; maybe a brownish body with a white throat and white beak and black head. The tail is spread in a fairly wide fan, maybe to scare "the other bird" reflected in the window? There's a front view and one with its head turned, but it's a little tricky to be sure of the colors since the sun is (roughly) behind the bird. I suppose it's possible that the beak may not be white, but just smooth and shiny and reflecting the sunshine. Taken on a sunny, hot afternoon in July in Taipei. Coat hangers for scale suggest it's a bit larger than the typical bird. It reminds us somewhat of a Chinese Bulbul, but those distinct vertical stripes between its throat and breast are very distinctive. It looks scary/predatory, could it be a juvenile hawk? Chinese Bulbul photos (mostly from Hong Kong): https://nhillgarth.com/2013/10/25/more-birds-in-taiwan/ https://cms.hkbws.org.hk/cms/en/hkbws/work/resarch/hk-bird-atlas-2020-en http://birdingtaiwan.blogspot.com/p/idiots-guide.html Light-Vented Bulbul (also Hong Kong) https://theworldsrarestbirds.com/birds/birds-in-hong-kong/ Answer: The light-vented bulbul looks quite promising, but I think this bird looks less song-birdy - the size and the large beak make me think probably not. I think the Taiwan scimitar-babbler (Pomatorhinus musicus) is the most promising. You can check out this gallery: https://media.ebird.org/catalog?taxonCode=taiscb1&sort=rating_rank_desc&mediaType=photo This bird shares the black stripe over the eyes, the thick and slightly curved beak, the breast feathers, the white stripe of feathers just above the eyes, the black feathers on the top of the head, and I think even the orange shoulder feathers if you look closely in the first image.
{ "domain": "biology.stackexchange", "id": 12329, "tags": "species-identification, ornithology" }
What's the difference between the Roche lobe and Roche sphere?
Question: I am just beginning to look into this topic, so apologies if there are any striking misconceptions in the following. From Wikipedia, the Roche lobe is "the region around a star in a binary system within which orbiting material is gravitationally bound to that star". So, to my understanding, this is the region around the star where matter is able to "remain near" the star and not get "sucked away", for example by the other star in the system. Roche lobe overflow seems to be the term used to describe the case where matter falls out of the Roche lobe of one star and hence is "lost" by the star. On the other hand, the Roche sphere (or Hill sphere) seems to be a confusingly similar concept! Wikipedia again states, "to be retained by a planet, a moon must have an orbit that lies within the planet's Hill sphere". But isn't this the same as the interpretation of the Roche lobe? The Wikipedia article also mentions stability in the face of perturbations from the gravity of a larger body, but I don't really understand what this means. Could someone explain in simple terms the difference between the two? Thanks in advance. (Note: I have seen the question What is the difference between Sphere of Influence and Hill sphere?, but I think that is a distinct question since the SoI is not the same as the Roche lobe; please correct me if I am mistaken!) Answer: The Roche lobe is a gravitational potential well of a two-body configuration. Technically, it is the potential energy per unit test mass which is orbiting the center-of-mass of a binary-star system at the same rate as the two stars. It is often depicted with equipotential surfaces and the 5 Lagrange points. The Roche lobe includes gravitational and centripetal energy, so its derivative is the acceleration on the test mass. Thus, it is useful, among other things, for determining when a particle is gravitationally bound to a star in a binary system.
You can think of the boundary of the Roche lobe of a body in a binary as the limit beyond which particles are no longer "part" of that body. The Hill sphere is an approximation of the perturbations of the larger body in a binary system onto the smaller (i.e. less massive) body in the binary system. So, you can see how it is distinct, necessarily, from the Roche lobe. You can consider the Hill sphere as the limit where gravitational perturbations from a more massive body become weak compared to the self-gravity of a less massive body. Here is a figure from the wiki article about the Hill sphere that nicely depicts the difference between these boundaries: The sphere of influence is an approximation where you only consider gravity of nearby bodies, and not of more massive ones that are further away. As shown by the figure below, the SoI is a decent approximation to the Hill sphere for Solar System bodies that are not too close to the Sun.
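To make the Hill-sphere idea concrete, here is a quick numerical sketch using the standard approximation r_H ≈ a (m / 3M)^(1/3). The Earth-Sun values below are rough reference numbers I am assuming for illustration; they are not taken from the answer above.

```python
# Back-of-the-envelope Hill sphere: the approximate radius within which a body
# of mass m, orbiting a much larger mass M at distance a, can retain satellites.
def hill_radius(a, m, M):
    return a * (m / (3.0 * M)) ** (1.0 / 3.0)

a_earth = 1.496e11   # Earth-Sun distance in metres (assumed textbook value)
m_earth = 5.97e24    # Earth mass, kg
m_sun   = 1.989e30   # Sun mass, kg

r_h = hill_radius(a_earth, m_earth, m_sun)   # roughly 1.5e9 m, ~1.5 million km

moon_orbit = 3.84e8  # Moon's orbital radius, m -- well inside Earth's Hill sphere
```

The Moon's orbit sits at only about a quarter of Earth's Hill radius, which is consistent with the Wikipedia statement quoted in the question that a retained moon must orbit within the planet's Hill sphere.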
{ "domain": "astronomy.stackexchange", "id": 6094, "tags": "orbit, gravity, astrophysics, orbital-mechanics" }
Sidebar with slide-out feature
Question: I made a sidebar that has a slide-out feature. When you click on it, it shows more items. I made it with JS and HTML but I think the code can be done more efficiently. How can I clean the code up? $('.slide-out').on('click', function() { let data = 'slideout-item'; if($(this).data(data)) { let item = $('.slide-out-' + $(this).data(data)); if(item.hasClass('slideout-active')) { item.hide("slide", { direction: "left" }, 250); item.removeClass('slideout-active'); $('.slide-out-overlay').fadeOut() } else { item.show("slide", { direction: "left" }, 250); item.addClass('slideout-active'); $('.slide-out-overlay').fadeIn() } } }); $('.slide-out-overlay').on('click', function () { let items = ['content', 'system', 'account', 'other']; let item = '.slide-out-'; for (i = 0; i < items.length; i++) { if($(item + items[i]).hasClass('slideout-active')) { $(item + items[i]).hide("slide", { direction: "left"}, 250); $(item + items[i]).removeClass('slideout-active'); $('.slide-out-overlay').fadeOut(); } } }); <ul class="list-unstyled"> <li class="sidebar-item title">{{ __('Information') }}</li> <li class="sidebar-item"><a href="{{ route('home') }}" class="sidebar-link {{ Route::currentRouteNamed('home') ? 
'sidebar-active' : '' }}">{{ __('Dashboard') }}</a></li> <li class="sidebar-item title">{{ __('System') }}</li> <li class="sidebar-item"><a href="#" class="sidebar-link slide-out" data-slideout-item="content">Content<i class="far fa-chevron-right" style="position:absolute; right: 0;"></i></a></li> <li class="sidebar-item"><a href="#" class="sidebar-link slide-out" data-slideout-item="system">System<i class="far fa-chevron-right" style="position:absolute; right: 0;"></i></a></li> <li class="sidebar-item"><a href="#" class="sidebar-link slide-out" data-slideout-item="account">Account<i class="far fa-chevron-right" style="position:absolute; right: 0;"></i></a></li> <li class="sidebar-item"><a href="#" class="sidebar-link slide-out" data-slideout-item="other">Other<i class="far fa-chevron-right" style="position:absolute; right: 0;"></i></a></li> </ul> <div class="slide-out-block shadow slide-out-content" id="slide-out"> <nav class="sidebar-slideout"> <ul class="list-unstyled"> <li class="sidebar-item title">{{ __('Content') }}</li> <li class="sidebar-item"><a href="{{ route('pages') }}" class="sidebar-link {{ Route::currentRouteNamed('pages') ? 'sidebar-active' : '' }}">{{ __('Pages') }}</a></li> <li class="sidebar-item"><a href="{{ route('blocks') }}" class="sidebar-link {{ Route::currentRouteNamed('blocks') ? 'sidebar-active' : '' }}">{{ __('Blocks') }}</a></li> <li class="sidebar-item"><a href="{{ route('layouts') }}" class="sidebar-link {{ Route::currentRouteNamed('layouts') ? 'sidebar-active' : '' }}">{{ __('Layouts') }}</a></li> </ul> </nav> </div> Answer: It has been a while since I worked with jQuery so I may not get specific APIs correct. You are repeating ("slide", { direction: "left"}, 250); if you ever changed one of the properties I would assume you'd want them to be consistent with each other. You can put them in an array and spread it when calling show or hide. 
You can also apply this idea to your strings, such as slide-out-overlay and slideout-active: if you find yourself typing the same strings out, you generally want to put them in a variable (ideally a const). // OLD $('.slide-out').on('click', function() { ... item.hide("slide", { direction: "left" }, 250); ... }); $('.slide-out-overlay').on('click', function () { ... item.hide("slide", { direction: "left" }, 250); }); // NEW const toggleArguments = ["slide", { direction: "left" }, 250]; $('.slide-out').on('click', function() { ... item.hide(...toggleArguments); ... }); $('.slide-out-overlay').on('click', function () { ... item.hide(...toggleArguments); }); It's common practice to use guard clauses to reduce nesting. Closely related are early returns; you should rarely need an else statement. // OLD $('.slide-out').on('click', function() { let data = 'slideout-item'; if($(this).data(data)) { let item = $('.slide-out-' + $(this).data(data)); if (something) { ... } else { ... } } }); // NEW $('.slide-out').on('click', function() { let data = 'slideout-item'; if (!$(this).data(data)) { return; } const $item = $('.slide-out-' + $(this).data(data)); if ($item.hasClass('slideout-active')) { ... return; } // do code for the inactive case }); It is sometimes clearer to put a $ in front of variables that are jQuery objects. // OLD let item = $('.slide-out-' + $(this).data(data)); // NEW let $item = $('.slide-out-' + $(this).data(data)); In your .slide-out-overlay click function, you are finding .slide-out-overlay again within the function block, when you already have the element in this. // OLD $('.slide-out-overlay').on('click', function () { ... $('.slide-out-overlay')... }); // NEW $('.slide-out-overlay').on('click', function() { $(this)... }); In your .slide-out-overlay click function you are looping over an array of strings and then checking if the relevant element has class slideout-active. Could you not just look for elements that have slideout-active? 
// OLD $('.slide-out-overlay').on('click', function () { let items = ['content', 'system', 'account', 'other']; let item = '.slide-out-'; for (i = 0; i < items.length; i++) { if($(item + items[i]).hasClass('slideout-active')) { ... }); // NEW $('.slide-out-overlay').on('click', function () { const $items = $('.slideout-active'); $items.hide("slide", { direction: "left"}, 250); // .hide works on an array of elements, as do most jQuery methods ... }); You seem to be using let where you can use const. Only ever use let if you are modifying the value later on; otherwise use const. String interpolation is also an option to concatenate your strings, although with jQuery it may look a little confusing. // OLD let item = $('.slide-out-' + $(this).data(data)); // NEW let item = $(`.slide-out-${$(this).data(data)}`); I would also challenge you to write this without jQuery and use vanilla javascript and CSS transitions on the classes. Hopefully that helps!
{ "domain": "codereview.stackexchange", "id": 37109, "tags": "javascript, jquery, html" }
What happens when a slow wave reaches lower hybrid resonance?
Question: Lower hybrid resonance occurs when $n_{\perp}^2$ goes to infinity, and it occurs only for the slow wave solution, not the fast wave. Since $n_{\perp}$ is proportional to $k_{\perp}$, and $k = \frac{2 \pi}{\lambda}$, it means that the wavelength of the wave goes to zero. But what physically happens when the slow wave reaches the lower hybrid resonance? I should mention that I'm talking about the cold plasma model, where the fast and slow wave modes have meaning. Answer: Well, I am not sure if your statements are entirely accurate because the fast mode can approach the lower hybrid resonance. In fact, in this regime, it becomes effectively indistinguishable from an electrostatic whistler mode. At low frequency and oblique angles, the fast (or magnetosonic) modes are right-hand polarized (with respect to $\mathbf{B}_{o}$) electromagnetic waves, which happen to be on the same branch of the dispersion relation as whistler mode waves. In any case, in the limit as $\mathbf{k}_{\parallel} \rightarrow 0$ the wave will become electrostatic and, for all intents and purposes, be a form of ion-acoustic wave (not to be confused with the much higher frequency electrostatic version that has $\mathbf{k}_{\perp} \sim 0$) or lower-hybrid wave. At such oblique angles, the only important things are the wave frequency and that it is electrostatic. The name given to the mode is really just semantics. Edits/Additions: I realized after the fact that I had forgotten to say that $\mathbf{k}$, in addition to knowledge of $\omega$ and polarization (e.g., electrostatic), are the only important things for these modes. The reason is that these properties let you know how and with what particles these modes interact. At the lower hybrid resonance, an electrostatic wave can simultaneously couple and exchange energy/momentum with both electrons and ions. This is why the waves are so popular in current dissipation theories, since $\mathbf{j} = \sum_{s} n_{s} q_{s} \mathbf{V}_{s}$.
If the waves can transfer energy/momentum from(to) electrons to(from) ions, then they have the capacity to limit $\mathbf{j}$. If the interaction is stochastic in nature, then the result can be an irreversible form of energy transformation (i.e., energy dissipation).
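To see numerically why the mode can couple to both species at once, here is a sketch using the standard cold-plasma expression $1/\omega_{LH}^2 = 1/(\omega_{ci}\,\omega_{ce}) + 1/(\omega_{pi}^2 + \omega_{ci}^2)$: in a dense plasma the first term dominates, so $\omega_{LH}$ sits near the geometric mean of the ion and electron cyclotron frequencies, i.e. between the two species' gyration scales. The magnetic field and density below are assumed, roughly solar-wind-like, values and are not from the answer.

```python
import math

# Assumed SI constants and plasma parameters (proton plasma)
e, m_e, m_p, eps0 = 1.602e-19, 9.109e-31, 1.673e-27, 8.854e-12
B = 1e-8   # magnetic field, 10 nT
n = 1e7    # number density, m^-3

w_ce = e * B / m_e                            # electron cyclotron frequency, rad/s
w_ci = e * B / m_p                            # proton cyclotron frequency, rad/s
w_pi = math.sqrt(n * e**2 / (eps0 * m_p))     # ion plasma frequency, rad/s

# Cold-plasma lower hybrid frequency
w_lh = 1.0 / math.sqrt(1.0 / (w_ci * w_ce) + 1.0 / (w_pi**2 + w_ci**2))
```

For these parameters $\omega_{pi} \gg \omega_{ce}$, so $\omega_{LH} \approx \sqrt{\omega_{ci}\,\omega_{ce}}$ to well under a percent, sitting logarithmically midway between the two cyclotron frequencies.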
{ "domain": "physics.stackexchange", "id": 16710, "tags": "waves, plasma-physics" }
Why is learning rate causing my neural network's weights to skyrocket?
Question: I am using tensorflow to write simple neural networks for a bit of research and I have had many problems with 'nan' weights while training. I tried many different solutions like changing the optimizer, changing the loss, the data size, etc. but to no avail. Finally, I noticed that a change in the learning rate made an unbelievable difference in my weights. Using a learning rate of .001 (which I thought was pretty conservative), the minimize function would actually exponentially raise the loss. After one epoch the loss could jump from a number in the thousands to a trillion and then to infinity ('nan'). When I lowered the learning rate to .0001, everything worked fine. 1) Why does a single order of magnitude have such an effect? 2) Why does the minimize function literally perform the opposite of its function and maximize the loss? It seems to me that that shouldn't occur, no matter the learning rate. Answer: You might find Chapter 8 of Deep Learning helpful. In it, the authors discuss training of neural network models. It's very intricate, so I'm not surprised you're having difficulties. One possibility (besides user error) is that your problem is highly ill-conditioned. Gradient descent methods use only the first derivative (gradient) information when computing an update. This can cause problems when the second derivative (the Hessian) is ill-conditioned. Quoting from the authors: Some challenges arise even when optimizing convex functions. Of these, the most prominent is ill-conditioning of the Hessian matrix $H$. This is a very general problem in most numerical optimization, convex or otherwise, and is described in more detail in section 4.3.1. The ill-conditioning problem is generally believed to be present in neural network training problems. Ill-conditioning can manifest by causing SGD to get “stuck” in the sense that even very small steps increase the cost function. 
[my emphasis added] The authors provide a simple derivation to show that this can be the case. Using gradient descent, the cost function should change (to second order) by \begin{equation} \frac{\varepsilon^2}{2} g^{T} H g - \varepsilon g^{T} g \end{equation} where $g$ is the gradient, $H$ is the Hessian, and $\varepsilon$ is the learning rate. Clearly, if the second derivatives are large, then the first term can swamp the second, and the cost function will increase, not decrease. Since the first and second terms scale differently with $\varepsilon$, one way to alleviate this problem is to reduce $\varepsilon$ (although, of course, this can result in learning too slowly).
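This behavior is easy to reproduce in a few lines. The sketch below (my own toy example, not from the book) runs plain gradient descent on a quadratic whose Hessian has condition number 2500, and exhibits the same one-order-of-magnitude cliff in the learning rate described in the question: the update is stable only for $\varepsilon < 2/\lambda_{max}$.

```python
# f(x) = 0.5 * (h1 * x1^2 + h2 * x2^2); its Hessian is diag(h1, h2).
# With eigenvalues 1 and 2500, the gradient step x <- x - lr * grad is stable
# only for lr < 2 / 2500 = 8e-4, so lr = 1e-3 makes the loss explode while
# lr = 1e-4 shrinks it -- mirroring the .001 vs .0001 observation above.
H = (1.0, 2500.0)  # Hessian eigenvalues; condition number 2500

def run_gd(lr, steps=200):
    x = [1.0, 1.0]
    for _ in range(steps):
        # gradient of f is (h1*x1, h2*x2)
        x = [xi - lr * h * xi for xi, h in zip(x, H)]
    # return the final loss
    return 0.5 * sum(h * xi * xi for xi, h in zip(x, H))

loss_big = run_gd(1e-3)    # per-step multiplier on the steep axis: 1 - 2.5 = -1.5, so |x| grows
loss_small = run_gd(1e-4)  # multipliers 0.9999 and 0.75: the loss decreases
```

The starting loss is 1250.5; with the larger rate the iterate oscillates with growing amplitude along the steep eigendirection (exactly the $\frac{\varepsilon^2}{2} g^{T} H g$ term dominating), while the smaller rate decreases the loss monotonically, just much more slowly along the shallow direction.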
{ "domain": "datascience.stackexchange", "id": 1383, "tags": "machine-learning, python, tensorflow, optimization, gradient-descent" }