There are several things which affect the time to first fix (TTFF).

1. Getting the almanac and ephemeris. These two are technically a little different from each other, but for our purposes we'll treat them as the same: they are the locations of the satellites, and you need to know where they are in order to work out your own position. Each satellite transmits the whole lot roughly once every 12 minutes, so from a completely cold start with a one-channel receiver and a decent signal, TTFF will be at least 12 minutes. You can speed things up by:

   - downloading from the internet instead, generally a good choice for phones. Downloading the almanac and ephemeris this way is known as MSB assisted GPS;
   - remembering the almanac from last time (it's good for many weeks) and only downloading the ephemeris;
   - having more than one receiving channel in the device so you can listen to more than one satellite at once. The transmissions are staggered to make this work, and with some care you can use the ephemeris without an almanac, which saves a lot of time. The vast majority of modules on the market these days have multiple channels, so it would be rare to find one which still needs 12 minutes.

2. Identifying satellites. You need to listen to at least three satellites, preferably more, to get a good fix, but each receiver channel (known as a correlator) can only be tuned to one satellite at a time. If you know roughly where you are, what time it is, and have an almanac already, then you can guess which satellites you can see. Phones tend to know roughly where they are from recognising Wi-Fi or Bluetooth signals, knowing which cell tower they are using, and other sources. They regularly get very accurate time updates too, so they can usually go straight for the correct satellites. Both phones and larger modules can also remember when and where they were last used, and use that as a starting point.

3. Number of correlators. Due to the very low signal-to-noise ratio of GPS signals, you need a special bit of hardware to receive them. Some receivers only have one, and need to rotate round the satellites. Others have more, and can listen to more at once. So even if you already have the almanac and ephemeris and know roughly where you are, more correlators will still help you fix quicker. You might think more is always better, but more also increases cost and power consumption. Some phones and modules have more than others.

4. Signal and antennas. The correlators will do their job faster if you have a good signal-to-noise ratio going into them; very poor signals might not work at all. A good antenna design, amplifier, sky view, and good PCB layout can make all the difference. Some modules may work OK out of the box, and much better with an antenna plugged in.

5. Number of usable satellites. There are actually two large constellations of satellites up there, GPS (run by the USA) and GLONASS (run by Russia). There are also more under construction, Galileo (EU) and BeiDou-2 (China), and some with local coverage like India's NavIC or BeiDou-1. A receiver which can work with satellites from more than one constellation has more satellites to choose from, and will get a quicker and more accurate fix.

6. Quality of correlators. New hardware designs are better than old ones, and will be better able to pick out fragments of the GPS message from a noisy signal. Another trick phones can do is to capture fragments of signal and pass them over the internet to a server with a very good software correlator and a complete almanac and ephemeris to examine. This is known as MSA assisted GPS.

Some phones (and even a few modules) might also use some slightly sneaky tricks to avoid or hide a long TTFF. Since they are on all the time, they might briefly switch on the GPS without telling the user in order to keep the location and ephemeris roughly up to date. Others might display a recent position while still waiting for a real fix, which looks like a good TTFF most of the time, but looks bad if it turns out the position is very wrong.

Point 1 above is the thing that makes the most difference, and is usually the key thing that differs between basic modules, more advanced modules, and phones. The others usually make a smaller difference, but it can actually become a very complicated subject.

If you want to read more, then "GPS time to first fix" is the term to search for.
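The ~12 minute figure above can be sanity-checked from the structure of the legacy GPS navigation message (a 1500-bit frame sent at 50 bit/s, with the almanac paged across 25 frames); a minimal sketch:

```python
# Rough timing of the legacy GPS L1 navigation message, showing where the
# "at least ~12 minutes" cold-start figure comes from.

BITS_PER_FRAME = 1500   # one frame = 5 subframes of 300 bits each
BIT_RATE_BPS = 50       # legacy navigation message data rate

frame_seconds = BITS_PER_FRAME / BIT_RATE_BPS      # 30 s to receive one frame
almanac_frames = 25                                # the almanac is paged over 25 frames
almanac_seconds = almanac_frames * frame_seconds

print(frame_seconds)         # 30.0
print(almanac_seconds / 60)  # 12.5 (minutes), matching "roughly once every 12 minutes"
```

The ephemeris, by contrast, is repeated in every frame, which is why a receiver that already has (or skips) the almanac can acquire a satellite far faster.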
Via GENCODE and BEDOPS convert2bed:

    $ wget -qO- ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_28/gencode.v28.annotation.gff3.gz \
        | gunzip --stdout - \
        | awk '$3 == "gene"' - \
        | convert2bed -i gff - \
        > genes.bed

You can modify the awk statement to get exons, by replacing gene with exon.

To get HGNC symbol names in the ID field, you can add the --attribute-key="gene_name" option to v2.4.40 or later of convert2bed. This slight modification extracts the gene_name attribute from the annotation record and puts it in the fourth (ID) column:

    $ wget -qO- ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_28/gencode.v28.annotation.gff3.gz \
        | gunzip --stdout - \
        | awk '$3 == "gene"' - \
        | convert2bed -i gff --attribute-key="gene_name" - \
        > genes.bed

This is based on an answer I wrote on Biostars, which includes a Perl script for generating a BED file of introns from gene and exon annotations:
This question is usually posed as the length of the diagonal of a unit square. You start going from one corner to the opposite one following the perimeter and observe the length is $2$; then you take shorter and shorter stair-steps, and the length is still $2$, but your path approaches the diagonal. So $\sqrt{2} = 2$?

In both cases, you are approaching the area under the path, but not the path length. You can make this more rigorous by breaking the path into increments and following the proof of the Riemann sum. The difference in area between the two curves goes nicely to zero, but the difference in arc length stays constant.

Edit: making the square case more explicit. Imagine dividing the diagonal into $n$ segments and making a stairstep approximation. Each triangle has sides $(\frac{1}{n}, \frac{1}{n}, \frac{\sqrt{2}}{n})$. So the area between the stairsteps and the diagonal is $n \cdot \frac{1}{2n^2} = \frac{1}{2n}$, which converges to $0$. The path length, however, is $n \cdot \frac{2}{n}$, which stays exactly $2$ for every $n$.
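The two limits are easy to check numerically; a small sketch (the function names are mine):

```python
import math

def stairstep_length(n):
    # n horizontal runs of 1/n plus n vertical rises of 1/n
    return n * (2 / n)

def area_between(n):
    # n right triangles with legs 1/n each: n * (1/2) * (1/n)^2
    return n * 0.5 / n**2

for n in (1, 10, 1000):
    print(n, stairstep_length(n), area_between(n))
# the area shrinks like 1/(2n), but the path length never budges from 2,
# while the true diagonal length is:
print(math.sqrt(2))
```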
(I assume for the purposes of this answer that the data has been preprocessed to have zero mean.)

Simply put, the PCA viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product $\frac{1}{n-1}\mathbf X\mathbf X^\top$, where $\mathbf X$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:

$$\frac{1}{n-1}\mathbf X\mathbf X^\top = \frac{1}{n-1}\mathbf W\mathbf D\mathbf W^\top$$

On the other hand, applying SVD to the data matrix $\mathbf X$ gives

$$\mathbf X = \mathbf U\mathbf\Sigma\mathbf V^\top$$

and attempting to construct the covariance matrix from this decomposition gives

$$\frac{1}{n-1}\mathbf X\mathbf X^\top = \frac{1}{n-1}(\mathbf U\mathbf\Sigma\mathbf V^\top)(\mathbf U\mathbf\Sigma\mathbf V^\top)^\top = \frac{1}{n-1}(\mathbf U\mathbf\Sigma\mathbf V^\top)(\mathbf V\mathbf\Sigma\mathbf U^\top)$$

and since $\mathbf V$ is an orthogonal matrix ($\mathbf V^\top\mathbf V = \mathbf I$),

$$\frac{1}{n-1}\mathbf X\mathbf X^\top = \frac{1}{n-1}\mathbf U\mathbf\Sigma^2\mathbf U^\top$$

and the correspondence is easily seen: the square roots of the eigenvalues of $\mathbf X\mathbf X^\top$ are the singular values of $\mathbf X$, etc.

In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of $\mathbf X\mathbf X^\top$ can cause loss of precision. This is detailed in books on numerical linear algebra, but I'll leave you with an example of a matrix that can be stably SVD'd, but where forming $\mathbf X\mathbf X^\top$ can be disastrous, the Läuchli matrix:

$$\begin{pmatrix}1 & 1 & 1 \\ \epsilon & 0 & 0 \\ 0 & \epsilon & 0 \\ 0 & 0 & \epsilon\end{pmatrix}^\top,$$

where $\epsilon$ is a tiny number.
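A quick numerical check of this correspondence, sketched in NumPy (with variables in rows and observations in columns, matching the $\frac{1}{n-1}\mathbf X\mathbf X^\top$ convention above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((3, n))        # 3 variables, n observations
X -= X.mean(axis=1, keepdims=True)     # center the data, as assumed above

# PCA route: eigenvalues of the covariance matrix
C = X @ X.T / (n - 1)
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]

# SVD route: squared singular values of X, scaled by 1/(n-1)
s = np.linalg.svd(X, compute_uv=False)
eigvals_from_svd = s**2 / (n - 1)

# the two routes agree to numerical precision
print(np.allclose(eigvals, eigvals_from_svd))  # True
```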
Practically, what limits CPU speed is both the heat generated and the gate delays, but usually the heat becomes a far greater issue before the latter kicks in.

Recent processors are manufactured using CMOS technology. Every time there is a clock cycle, power is dissipated, so higher processor speeds mean more heat dissipation. Here are some figures:

    Core i7-860   (45 nm)  2.8 GHz   95 W
    Core i7-965   (45 nm)  3.2 GHz  130 W
    Core i7-3970X (32 nm)  3.5 GHz  150 W

You can really see how the CPU power climbs much faster than the clock speed. Also, there are some quantum effects which kick in as the size of transistors shrinks: at nanometer scales, transistor gates actually become "leaky". I won't get into how this technology works here, but I'm sure you can use Google to look up these topics.

Okay, now for the transmission delays. Each "wire" inside the CPU acts as a small capacitor. Also, the base of a transistor or the gate of a MOSFET acts as a small capacitor. In order to change the voltage on a connection, you must either charge the wire or remove the charge, and as transistors shrink, it becomes more difficult to do that. This is why SRAM needs amplification transistors: the actual memory array transistors are too small and weak to drive things themselves. In typical IC designs, where density is very important, the bit-cells have very small transistors. Additionally, they are typically built into large arrays, which have very large bit-line capacitances. This results in a very slow (relatively) discharge of the bit-line by the bit-cell. (From: How to implement SRAM sense amplifier?)

Basically, the point is that it is harder for small transistors to drive the interconnects. There are also gate delays: modern CPUs have more than ten pipeline stages, perhaps up to twenty (see Performance issues in pipelining). And there are inductive effects, which become quite significant at microwave frequencies; you can look up crosstalk and that kind of thing.

Now, even if you do manage to get a 3265810 THz processor working, another practical limit is how fast the rest of the system can support it. You either must have RAM, storage, glue logic, and other interconnects that perform just as fast, or you need an immense cache.
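The CMOS dissipation mentioned above is usually summarized as dynamic power $P \approx \alpha C V^2 f$; here is a back-of-the-envelope sketch (the capacitance and voltage values are made-up illustrative numbers, not specs of any real chip):

```python
def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    # classic CMOS switching-power estimate: P = activity * C * V^2 * f
    return activity * c_farads * v_volts**2 * f_hz

c = 1e-9          # hypothetical total switched capacitance (1 nF)
v = 1.2           # hypothetical supply voltage
f1, f2 = 2.8e9, 3.5e9

p1 = dynamic_power(c, v, f1)
p2 = dynamic_power(c, v, f2)
print(p1, p2)     # at fixed voltage, power scales only linearly with clock...
print(p2 / p1)    # ...a ratio of ~1.25 here
# ...but raising f in practice also requires raising V, and the V^2 term is
# why real power figures climb much faster than the clock speed does.
```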
Combustion is a gas-phase reaction: the heat of the flame vapourises the substrate, and it's the vapour that reacts with the air. That's why heat is needed to get combustion started. Wood contains lots of relatively volatile compounds, so it's not too hard to get combustion started, and once it has started, the heat of the flame keeps the reaction going. However, sugar dehydrates when you heat it, emitting water. Water isn't flammable (obviously), so there's no way to get combustion started. The dehydration leaves behind nearly pure carbon, which is non-volatile, so again there's no way to get this to burn. Carbon will burn, of course, but you need a high temperature to get it going.
I always hesitate to jump into a thread with as many excellent responses as this, but it strikes me that few of the answers provide any reason to prefer the logarithm to some other transformation that "squashes" the data, such as a root or reciprocal.

Before getting to that, let's recapitulate the wisdom in the existing answers in a more general way. Some non-linear re-expression of the dependent variable is indicated when any of the following apply:

- The residuals have a skewed distribution. The purpose of a transformation is to obtain residuals that are approximately symmetrically distributed (about zero, of course).
- The spread of the residuals changes systematically with the values of the dependent variable ("heteroscedasticity"). The purpose of the transformation is to remove that systematic change in spread, achieving approximate "homoscedasticity."
- To linearize a relationship.
- When scientific theory indicates. For example, chemistry often suggests expressing concentrations as logarithms (giving activities or even the well-known pH).
- When a more nebulous statistical theory suggests the residuals reflect "random errors" that do not accumulate additively.
- To simplify a model. For example, sometimes a logarithm can simplify the number and complexity of "interaction" terms.

(These indications can conflict with one another; in such cases, judgment is needed.)

So, when is a logarithm specifically indicated instead of some other transformation?

- The residuals have a "strongly" positively skewed distribution. In his book on EDA, John Tukey provides quantitative ways to estimate the transformation (within the family of Box-Cox, or power, transformations) based on rank statistics of the residuals. It really comes down to the fact that if taking the log symmetrizes the residuals, it was probably the right form of re-expression; otherwise, some other re-expression is needed.
- When the SD of the residuals is directly proportional to the fitted values (and not to some power of the fitted values).
- When the relationship is close to exponential.
- When residuals are believed to reflect multiplicatively accumulating errors.
- You really want a model in which marginal changes in the explanatory variables are interpreted in terms of multiplicative (percentage) changes in the dependent variable.

Finally, some non-reasons to use a re-expression:

- Making outliers not look like outliers. An outlier is a datum that does not fit some parsimonious, relatively simple description of the data. Changing one's description in order to make outliers look better is usually an incorrect reversal of priorities: first obtain a scientifically valid, statistically good description of the data, and then explore any outliers. Don't let the occasional outlier determine how to describe the rest of the data!
- Because the software automatically did it. (Enough said!)
- Because all the data are positive. (Positivity often implies positive skewness, but it does not have to. Furthermore, other transformations can work better. For example, a root often works best with counted data.)
- To make "bad" data (perhaps of low quality) appear well behaved.
- To be able to plot the data. (If a transformation is needed to be able to plot the data, it's probably needed for one or more good reasons already mentioned. If the only reason for the transformation truly is for plotting, go ahead and do it, but only to plot the data. Leave the data untransformed for analysis.)
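The "multiplicatively accumulating errors" indication is easy to demonstrate; a small sketch on purely synthetic data (generated here only for illustration):

```python
import math
import random
import statistics

random.seed(42)
# multiplicative errors: y = mu * exp(eps) produces a right-skewed sample...
sample = [10.0 * math.exp(random.gauss(0, 1)) for _ in range(10_000)]
# ...whose logarithm is symmetric (normal) again
logs = [math.log(y) for y in sample]

def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

print(skewness(sample) > 1)       # True: strongly right-skewed on the raw scale
print(abs(skewness(logs)) < 0.2)  # True: roughly symmetric after taking logs
```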
I'm a physicist, so apologies if the answer below is in a foreign language; but this was too interesting a problem to pass up. I'm going to focus on a particular question: if we have oxygen and nothing else in a box, how strong does the magnetic field need to be to concentrate the gas in a region? The tl;dr is that thermal effects are going to make this idea basically impossible.

The force on a magnetic dipole $\vec m$ is $\vec F = \vec\nabla(\vec m\cdot\vec B)$, where $\vec B$ is the magnetic field. Let us assume that the dipole moment of the oxygen molecule is proportional to the magnetic field at that point: $\vec m = \alpha\vec B$, where $\alpha$ is what we might call the "molecular magnetic susceptibility." Then we have $\vec F = \vec\nabla(\alpha\vec B\cdot\vec B)$. But force is related to potential energy by $\vec F = -\vec\nabla U$, which implies that an oxygen molecule moving in a magnetic field acts as though it has a potential energy $U(\vec r) = -\alpha B^2$.

Now, if we're talking about a sample of gas at a temperature $T$, then the density of the oxygen molecules in equilibrium will be proportional to the Boltzmann factor:

$$\rho(\vec r) \propto \mathrm e^{-U(\vec r)/kT} = \mathrm e^{\alpha B^2/kT}$$

In the limit where $kT \gg \alpha B^2$, this exponent will be close to zero, and the density will not vary significantly from point to point in the sample. To get a significant difference in the density of oxygen from point to point, we have to have $\alpha B^2 \gtrsim kT$; in other words, the magnetic potential energy must be comparable to (or greater than) the thermal energy of the molecules, or otherwise random thermal motions will cause the oxygen to diffuse out of the region of higher magnetic field.

So how high does this have to be? The $\alpha$ we have defined above is approximately related to the molar magnetic susceptibility by $\chi_\text{mol} \approx \mu_0 N_\mathrm A \alpha$; and so we have¹

$$\chi_\text{mol} B^2 \gtrsim \mu_0 R T$$

and so we must have

$$B \gtrsim \sqrt{\frac{\mu_0 R T}{\chi_\text{mol}}}.$$

If you believe Wikipedia, the molar susceptibility of oxygen gas is $4.3\times10^{-8}\ \text{m}^3/\text{mol}$; and plugging in the numbers, we get a requirement for a magnetic field of

$$B \gtrsim \pu{258 T}.$$

This is over five times stronger than the strongest continuous magnetic fields ever produced, and 25–100 times stronger than most MRI machines. Even at $\pu{91 K}$ (just above the boiling point of oxygen), you would need a magnetic field of almost $\pu{150 T}$, still well out of range.

¹ I'm making an assumption here that the gas is sufficiently diffuse that we can ignore the magnetic interactions between the molecules. A better approximation could be found by using a magnetic analog of the Clausius-Mossotti relation; and if the gas gets sufficiently dense, then all bets are off.
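Plugging in the numbers; a minimal sketch (using $T = 273\ \mathrm K$, which reproduces the 258 T figure quoted above):

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T m / A
R = 8.314                  # gas constant, J / (mol K)
chi_mol = 4.3e-8           # molar susceptibility of O2, m^3 / mol (the Wikipedia value)

def b_required(temp_kelvin):
    # the condition chi_mol * B^2 >~ mu0 * R * T, solved for B
    return math.sqrt(mu0 * R * temp_kelvin / chi_mol)

print(round(b_required(273)))  # 258 (tesla), as quoted above
print(round(b_required(91)))   # 149, i.e. "almost 150 T" just above O2's boiling point
```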
Yes, it has a lot to do with mass. Since deuterium has a higher mass than protium, simple Bohr theory tells us that the deuterium 1s electron will have a smaller orbital radius than the 1s electron orbiting the protium nucleus (see the note below for more detail on this point). The smaller orbital radius for the deuterium electron translates into a shorter (and stronger) $\ce{C-D}$ bond length.

A shorter bond has less volume over which to spread the electron density (of the one electron contributed by $\ce{H}$ or $\ce{D}$), resulting in a higher electron density throughout the bond and, consequently, more electron density at the carbon end of the bond. Therefore, the shorter $\ce{C-D}$ bond will have more electron density around the carbon end of the bond than the longer $\ce{C-H}$ bond. The net effect is that the shorter bond with deuterium increases the electron density at carbon; that is, deuterium is inductively more electron-donating than protium towards carbon. Similar arguments can be applied to tritium: its even shorter $\ce{C-T}$ bond should be even more inductively electron-donating towards carbon than deuterium's.

Note: Bohr radius detail. Most introductory physics texts show the radius of the $n^\text{th}$ Bohr orbit to be given by

$$r_n = \frac{n^2\hbar^2}{Zk_\mathrm{c}e^2 m_\mathrm{e}}$$

where $Z$ is the atom's atomic number, $k_\mathrm{c}$ is Coulomb's constant, $e$ is the electron charge, and $m_\mathrm{e}$ is the mass of the electron. However, this derivation assumes that the electron orbits a stationary nucleus. Given the mass difference between the electron and the nucleus, this is generally a reasonable assumption; in reality, however, the nucleus moves too. It is relatively straightforward to remove this assumption and make the equation more accurate by replacing $m_\mathrm{e}$ with the electron's reduced mass, $\mu_\mathrm{e}$:

$$\mu_\mathrm{e} = \frac{m_\mathrm{e}\times m_\text{nucleus}}{m_\mathrm{e} + m_\text{nucleus}}$$

Now the equation for the Bohr radius becomes

$$r_n = \frac{n^2\hbar^2}{Zk_\mathrm{c}e^2\mu_\mathrm{e}}$$

Since the reduced mass of an electron orbiting a heavy nucleus is always larger than the reduced mass of an electron orbiting a lighter nucleus,

$$r_\text{heavy} \lt r_\text{light}$$

and consequently an electron will orbit closer to a deuterium nucleus than to a protium nucleus.
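The size of this reduced-mass effect is easy to put a number on; a sketch using approximate particle masses in atomic mass units:

```python
m_e = 5.485799e-4   # electron mass, u
m_p = 1.007276      # proton (protium nucleus) mass, u
m_d = 2.013553      # deuteron mass, u

def reduced_mass(m_nucleus):
    return m_e * m_nucleus / (m_e + m_nucleus)

# the Bohr radius is inversely proportional to the reduced mass,
# so r_D / r_H is the inverse ratio of the reduced masses:
ratio = reduced_mass(m_p) / reduced_mass(m_d)
print(ratio < 1)   # True: the deuterium electron sits slightly closer in
print(1 - ratio)   # the shrinkage is only a few parts in 10^4
```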
The two primary factors that describe a window function are:

- Width of the main lobe (i.e., at what frequency bin the power is half that of the maximum response).
- Attenuation of the side lobes (i.e., how far down the side lobes are from the main lobe). This tells you about the spectral leakage in the window.

Another, not so frequently considered, factor is the rate of attenuation of the side lobes, i.e., how fast the side lobes die down.

Here's a quick comparison for four well-known window functions: rectangular, Blackman, Blackman-Harris and Hamming. The curves below are 2048-point FFTs of 64-point windows. You can see that the rectangular function has a very narrow main lobe, but the side lobes are quite high, only ~13 dB down. The other windows have significantly fatter main lobes, but fare much better in side lobe suppression. In the end, it's all a trade-off: you can't have both, you have to pick one.

So, that said, your choice of window function is highly dependent on your specific needs. For instance, if you're trying to separate/identify two signals that are fairly close in frequency but similar in strength, then you should choose the rectangular window, because it will give you the best resolution. On the other hand, if you're trying to do the same with two signals of differing strengths and differing frequencies, you can easily see how energy from one can leak in through the high side lobes. In this case, you wouldn't mind one of the fatter main lobes and would trade a slight loss in resolution to be able to estimate their powers more accurately.

In seismics and geophysics, it is common to use Slepian windows (also known as discrete prolate spheroidal sequences, the eigenfunctions of a sinc kernel) to maximize the energy concentrated in the main lobe.
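The main-lobe/side-lobe trade-off is easy to reproduce numerically; a sketch using 64-point windows zero-padded to a 2048-point FFT as above (the peak-sidelobe estimator here is my own crude helper, not a standard routine):

```python
import numpy as np

N, NFFT = 64, 2048
windows = {
    "rectangular": np.ones(N),
    "hamming": np.hamming(N),
    "blackman": np.blackman(N),
}

def peak_sidelobe_db(w):
    mag = np.abs(np.fft.rfft(w, NFFT))
    db = 20 * np.log10(mag / mag.max() + 1e-16)
    i = 1
    while mag[i + 1] < mag[i]:   # walk down the main lobe to its first null
        i += 1
    return db[i:].max()          # highest side lobe, in dB relative to the peak

for name, w in windows.items():
    print(name, round(peak_sidelobe_db(w), 1))
# rectangular comes out near -13 dB; Hamming and Blackman trade a wider
# main lobe for side lobes roughly 30-45 dB lower.
```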
"I'd like to find genes that were not expressed in a group of samples and were expressed in another group."

This is, fundamentally, a differential expression analysis, with a twist. To solve this, you'd first use a differential expression library of your choice (e.g. DESeq2) and perform a one-tailed test of differential expression. Briefly, you'd perform the normal setup and then use

    results(dds, altHypothesis = 'greater')

to perform a one-tailed test. This will give you only those genes that are significantly upregulated in one group. Check chapter 3.9 of the vignette for details.

Of course this won't tell you that the genes are unexpressed in the other group. Unfortunately I don't know of a good value to threshold the results; I would start by plotting a histogram of the (variance stabilised) expression values in your first group, and then visually choose an expression threshold that cleanly separates genes that are clearly expressed from zeros:

    vst_counts = assay(vst(dds))
    dens = density(vst_counts[, replicate])
    plot(dens, log = 'y')

(This merges the replicates in the group, which should be fine.) Counts follow a multimodal distribution, with one mode for unexpressed genes and one or more for expressed genes. The expression threshold can be set somewhere between the clearly unexpressed and expressed peaks. Here I used identify(dens) to pick the threshold interactively, but you could also use an analytical method:

    threshold = identify(dens)
    quantile = sum(dens$x < dens$x[threshold]) / length(dens$x)

    # Using just one replicate here; more robust would be to use a mean value.
    nonzero_counts = counts(dds, normalized = TRUE)[, replicates[1]]
    nonzero_counts = nonzero_counts[nonzero_counts > 0]
    (expression_threshold = quantile(nonzero_counts, probs = quantile))
    ## 26.5625%
    ## 4.112033
It's simpler than you think. When we discretize frequencies, we get frequency bins. When you discretize your Fourier transform,

$$e^{-j\omega} \rightarrow e^{-j2\pi k/N},$$

our continuous frequencies become $N$ discrete bins. This is exactly why the following is true:

$$n^{th}\,\text{bin} = n\cdot\frac{\text{sampleFreq}}{\text{NFFT}}$$

where $\text{NFFT}$ is the length of the DFT.

Note that the FFT covers frequencies from $0$ up to $\text{sampleFreq}$ Hz, but if $\text{NFFT} = N$, your bin index spans only $0$ through $N-1$. The frequencies generated will therefore be (0:N-1) * sampleFreq/NFFT, and you won't get the $N\cdot\text{sampleFreq}/N = \text{sampleFreq}$ bin; that unrepresented bin aliases onto, and is summed with, the $0$ bin.

In other words, if sampling 10 times per second, and sampling for 1 second, our frequency bins will be 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 Hz. Notice that the 10 Hz bin is not there.
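The 10-sample example above can be reproduced directly; a minimal sketch:

```python
import numpy as np

fs = 10   # sampling frequency, Hz
N = 10    # number of samples, i.e. one second of data

bins_hz = np.arange(N) * fs / N
print(bins_hz.tolist())   # [0.0, 1.0, ..., 9.0]: no 10 Hz bin

# a 10 Hz tone is sampled at exactly one point per cycle, so it is
# indistinguishable from DC and aliases onto (sums into) bin 0:
t = np.arange(N) / fs
spectrum = np.fft.fft(np.cos(2 * np.pi * 10 * t))
print(int(np.argmax(np.abs(spectrum))))   # 0
```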
First, a note on spelling: both "ortholog" and "orthologue" are correct; one is the American and the other the British spelling. The same is true for homolog/homologue and paralog/paralogue.

On to the biology. Homology is the blanket term; both ortho- and paralogs are homologs. So, when in doubt, use "homologs". However:

- Orthologs are homologous genes that are the result of a speciation event.
- Paralogs are homologous genes that are the result of a duplication event.

The following image, adapted (slightly) from [1], illustrates the differences.

Part (a) of the diagram shows a hypothetical evolutionary history of a gene. The ancestral genome had two copies of this gene (A and B) which were paralogs. At some point, the ancestral species split into two daughter species, each of whose genomes contains two copies of the ancestral duplicated gene (A1, A2 and B1, B2). These genes are all homologous to one another, but are they paralogs or orthologs? Since the duplication event that created genes A and B occurred before the speciation event that created species 1 and 2, A genes will be paralogs of B genes and 1 genes will be orthologs of 2 genes:

- A1 and B1 are paralogs.
- A1 and B2 are paralogs.
- A2 and B1 are paralogs.
- A2 and B2 are paralogs.
- A1 and A2 are orthologs.
- B1 and B2 are orthologs.

This, however, is a very simple case. What happens when a duplication occurs after a speciation event? In part (b) of the diagram, the ancestral gene was duplicated only in species 2's lineage. Therefore, in (b):

- A2 and B2 are orthologs of A1.
- A2 and B2 are paralogs of each other.

A common misconception is that paralogous genes are those homologous genes that are in the same genome, while orthologous genes are those that are in different genomes. As you can see in the example above, this is absolutely not true. While it can happen that way, ortho- vs paralogy depends exclusively on the evolutionary history of the genes involved. If you do not know whether a particular homology relationship is the result of a gene duplication or a speciation event, then you cannot know whether it is a case of paralogy or orthology.

References

[1] Jensen RA: Orthologs and paralogs - we need to get it right. Genome Biology 2001, 2(8).

Suggested reading: I highly recommend the Jensen article referenced above. I read it when I was first starting to work on comparative genomics and evolution, and it is a wonderfully clear and succinct explanation of the terms. Some of the articles referenced therein are also worth a read:

- Koonin EV: An apology for orthologs - or brave new memes. Genome Biol 2001, 2:comment1005.1-1005.2.
- Petsko GA: Homologuephobia. Genome Biol 2001, 2:comment1002.1-1002.2.
- Fitch WM: Distinguishing homologous from analogous proteins. Syst Zool 1970, 19:99-113. (Of historical interest; the terms were first used here.)
- Fitch WM: Homology: a personal view on some of the problems. Trends Genet 2000, 16:227-31.
FTensor is a lightweight, header-only, fully templated library that includes ergonomic summation notation. It has been tested extensively in 2, 3, and 4 dimensions, but should work fine for any number of dimensions.
a standard linear model ( e. g., a simple regression model ) can be thought of as having two'parts '. these are called the structural component and the random component. for example : $ $ y = \ beta _ 0 + \ beta _ 1x + \ varepsilon \ \ \ text { where } \ varepsilon \ sim \ mathcal { n } ( 0, \ sigma ^ 2 ) $ $ the first two terms ( that is, $ \ beta _ 0 + \ beta _ 1x $ ) constitute the structural component, and the $ \ varepsilon $ ( which indicates a normally distributed error term ) is the random component. when the response variable is not normally distributed ( for example, if your response variable is binary ) this approach may no longer be valid. the generalized linear model ( glim ) was developed to address such cases, and logit and probit models are special cases of glims that are appropriate for binary variables ( or multi - category response variables with some adaptations to the process ). a glim has three parts, a structural component, a link function, and a response distribution. for example : $ $ g ( \ mu ) = \ beta _ 0 + \ beta _ 1x $ $ here $ \ beta _ 0 + \ beta _ 1x $ is again the structural component, $ g ( ) $ is the link function, and $ \ mu $ is a mean of a conditional response distribution at a given point in the covariate space. the way we think about the structural component here doesn't really differ from how we think about it with standard linear models ; in fact, that's one of the great advantages of glims. because for many distributions the variance is a function of the mean, having fit a conditional mean ( and given that you stipulated a response distribution ), you have automatically accounted for the analog of the random component in a linear model ( n. b. : this can be more complicated in practice ). the link function is the key to glims : since the distribution of the response variable is non - normal, it's what lets us connect the structural component to the response - - it'links'them ( hence the name ). 
it's also the key to your question, since the logit and probit are links ( as @ vinux explained ), and understanding link functions will allow us to intelligently choose when to use which one. although there can be many link functions
https://api.stackexchange.com
that can be acceptable, often there is one that is special. without wanting to get too far into the weeds ( this can get very technical ) the predicted mean, $ \ mu $, will not necessarily be mathematically the same as the response distribution's canonical location parameter ; the link function that does equate them is the canonical link function. the advantage of this " is that a minimal sufficient statistic for $ \ beta $ exists " ( german rodriguez ). the canonical link for binary response data ( more specifically, the binomial distribution ) is the logit. however, there are lots of functions that can map the structural component onto the interval $ ( 0, 1 ) $, and thus be acceptable ; the probit is also popular, but there are yet other options that are sometimes used ( such as the complementary log log, $ \ ln ( - \ ln ( 1 - \ mu ) ) $, often called'cloglog'). thus, there are lots of possible link functions and the choice of link function can be very important. the choice should be made based on some combination of : knowledge of the response distribution, theoretical considerations, and empirical fit to the data. having covered a little of conceptual background needed to understand these ideas more clearly ( forgive me ), i will explain how these considerations can be used to guide your choice of link. ( let me note that i think @ david's comment accurately captures why different links are chosen in practice. ) to start with, if your response variable is the outcome of a bernoulli trial ( that is, $ 0 $ or $ 1 $ ), your response distribution will be binomial, and what you are actually modeling is the probability of an observation being a $ 1 $ ( that is, $ \ pi ( y = 1 ) $ ). as a result, any function that maps the real number line, $ ( - \ infty, + \ infty ) $, to the interval $ ( 0, 1 ) $ will work. 
from the point of view of your substantive theory, if you are thinking of your covariates as directly connected to the probability of success, then you would typically choose logistic regression because it is the canonical link. however, consider the following example : you are asked to model high _ blood _ pressure as a function of some covariates. blood pressure itself is normally distributed in the population ( i don't actually know that, but it seems reasonable prima facie
). Nonetheless, clinicians dichotomized it during the study (that is, they only recorded 'high-BP' or 'normal'). In this case, probit would be preferable a priori for theoretical reasons. This is what @Elvis meant by "your binary outcome depends on a hidden Gaussian variable". Another consideration is that both logit and probit are symmetrical; if you believe that the probability of success rises slowly from zero, but then tapers off more quickly as it approaches one, the cloglog is called for, etc. Lastly, note that the empirical fit of the model to the data is unlikely to be of assistance in selecting a link, unless the shapes of the link functions in question differ substantially (of which, the logit and probit do not). For instance, consider the following simulation:

set.seed(1)
probLower = vector(length = 1000)
for (i in 1:1000) {
    x = rnorm(1000)
    y = rbinom(n = 1000, size = 1, prob = pnorm(x))
    logitModel  = glm(y ~ x, family = binomial(link = "logit"))
    probitModel = glm(y ~ x, family = binomial(link = "probit"))
    probLower[i] = deviance(probitModel) < deviance(logitModel)
}
sum(probLower) / 1000
[1] 0.695

Even when we know the data were generated by a probit model, and we have 1000 data points, the probit model only yields a better fit 70% of the time, and even then, often by only a trivial amount. Consider the last iteration:

deviance(probitModel)
[1] 1025.759
deviance(logitModel)
[1] 1026.366
deviance(logitModel) - deviance(probitModel)
[1] 0.6076806

The reason for this is simply that the logit and probit link functions yield very similar outputs when given the same inputs. The logit and probit functions are practically identical, except that the logit is slightly further from the bounds when they 'turn the corner', as @vinux stated. (Note that to get the logit and the probit to align optimally, the logit's $\beta_1$ must be $\approx 1.7$ times the corresponding slope value for the probit. In addition, I could have shifted the cloglog over slightly so that they would lie on top of each other more, but I left it to the side to keep the figure more readable.) Notice that the cloglog is asymmetrical whereas the others are not; it starts pulling away from 0 earlier, but more slowly, and approaches close to 1 and then turns sharply. A couple more things can be said about link functions. First, considering the identity function ($g(\eta) = \eta$) as a link function allows us to understand the standard linear model as a special case of the generalized linear model (that is, the response distribution is normal, and the link is the identity function). It's also important to recognize that whatever transformation the link instantiates is properly applied to the parameter governing the response distribution (that is, $\mu$), not the actual response data. Finally, because in practice we never have the underlying parameter to transform, in discussions of these models, often what is considered to be the actual link is left implicit and the model is represented by the inverse of the link function applied to the structural component instead. That is: $$\mu = g^{-1}(\beta_0 + \beta_1 x)$$ For instance, logistic regression is usually represented: $$\pi(Y) = \frac{\exp(\beta_0 + \beta_1 x)}{1 + \exp(\beta_0 + \beta_1 x)}$$ instead of: $$\ln\left(\frac{\pi(Y)}{1 - \pi(Y)}\right) = \beta_0 + \beta_1 x$$ For a quick and clear, but solid, overview of the generalized linear model, see chapter 10 of Fitzmaurice, Laird, & Ware (2004), on which I leaned for parts of this answer, although since this is my own adaptation of that and other material, any mistakes would be my own.
For how to fit these models in R, check out the documentation for the function ?glm in the base package. (One final note, added later:) I occasionally hear people say
that you shouldn't use the probit, because it can't be interpreted. this is not true, although the interpretation of the betas is less intuitive. with logistic regression, a one unit change in $ x _ 1 $ is associated with a $ \ beta _ 1 $ change in the log odds of'success'( alternatively, an $ \ exp ( \ beta _ 1 ) $ - fold change in the odds ), all else being equal. with a probit, this would be a change of $ \ beta _ 1 \ text { } z $'s. ( think of two observations in a dataset with $ z $ - scores of 1 and 2, for example. ) to convert these into predicted probabilities, you can pass them through the normal cdf, or look them up on a $ z $ - table. ( + 1 to both @ vinux and @ elvis. here i have tried to provide a broader framework within which to think about these things and then using that to address the choice between logit and probit. )
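The coefficient interpretation described above can be sketched in a few lines of Python. The coefficient values here are made up for illustration, and the normal CDF is computed from the error function rather than looked up in a $z$-table:

```python
from math import erf, sqrt, exp

def normal_cdf(z):
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def inv_logit(eta):
    return 1.0 / (1.0 + exp(-eta))

# Hypothetical probit coefficients (not from any fitted model above).
beta0, beta1 = -0.5, 0.8

# A one-unit change in x shifts the linear predictor by beta1 z-scores;
# passing the linear predictor through the normal CDF gives a probability.
p_at_0 = normal_cdf(beta0 + beta1 * 0)
p_at_1 = normal_cdf(beta0 + beta1 * 1)
print(p_at_0, p_at_1)

# For comparison, a logistic curve whose slope is ~1.7x the probit slope
# produces a nearly identical predicted probability.
p_logit = inv_logit(1.7 * (beta0 + beta1 * 1))
print(p_logit)
```

This also illustrates the earlier point about the two links: with the $\approx 1.7$ rescaling, the logit and probit predictions differ only slightly.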
The central issue is the length of the critical path $C$ relative to the total amount of computation $T$. If $C$ is proportional to $T$, then parallelism offers at best a constant speed-up. If $C$ is asymptotically smaller than $T$, there is room for more parallelism as the problem size increases. For algorithms in which $T$ is polynomial in the input size $N$, the best case is $C \sim \log T$ because very few useful quantities can be computed in less than logarithmic time.

Examples

- $C = T$ for a tridiagonal solve using the standard algorithm. Every operation is dependent on the previous operation completing, so there is no opportunity for parallelism. Tridiagonal problems can be solved in logarithmic time on a parallel computer using a nested dissection direct solve, multilevel domain decomposition, or multigrid with basis functions constructed using harmonic extension (these three algorithms are distinct in multiple dimensions, but can exactly coincide in 1D).
- A dense lower-triangular solve with an $m \times m$ matrix has $T = N = \mathcal{O}(m^2)$, but the critical path is only $C = m = \sqrt{T}$, so some parallelism can be beneficial.
- Multigrid and FMM both have $T = N$, with a critical path of length $C = \log T$.
- Explicit wave propagation for a time $\tau$ on a regular mesh of the domain $(0,1)^d$ requires $k = \tau / \Delta t \sim \tau N^{1/d}$ time steps (for stability), therefore the critical path is at least $C = k$. The total amount of work is $T = k N = \tau N^{(d+1)/d}$. The maximum useful number of processors is $P = T/C = N$; the remaining factor $N^{1/d}$ cannot be recovered by increased parallelism.

Formal complexity

The NC complexity class characterizes those problems that can be solved efficiently in parallel (i.e., in polylogarithmic time). It is unknown whether $NC = P$, but it is widely hypothesized to be false.
if this is indeed the case, then p - complete characterizes those problems that are " inherently
sequential " and cannot be sped up significantly by parallelism.
let $ v $ be a vector space ( over any field, but we can take it to be $ \ mathbb r $ if you like, and for concreteness i will take the field to be $ \ mathbb r $ from now on ; everything is just as interesting in that case ). certainly one of the interesting concepts in linear algebra is that of a hyperplane in $ v $. for example, if $ v = \ mathbb r ^ n $, then a hyperplane is just the solution set to an equation of the form $ $ a _ 1 x _ 1 + \ cdots + a _ n x _ n = b, $ $ for some $ a _ i $ not all zero and some $ b $. recall that solving such equations ( or simultaneous sets of such equations ) is one of the basic motivations for developing linear algebra. now remember that when a vector space is not given to you as $ \ mathbb r ^ n $, it doesn't normally have a canonical basis, so we don't have a canonical way to write its elements down via coordinates, and so we can't describe hyperplanes by explicit equations like above. ( or better, we can, but only after choosing coordinates, and this is not canonical. ) how can we canonically describe hyperplanes in $ v $? for this we need a conceptual interpretation of the above equation. and here linear functionals come to the rescue. more precisely, the map $ $ \ begin { align * } \ ell : \ mathbb { r } ^ n & \ rightarrow \ mathbb { r } \ \ ( x _ 1, \ ldots, x _ n ) & \ mapsto a _ 1 x _ 1 + \ cdots + a _ n x _ n \ end { align * } $ $ is a linear functional on $ \ mathbb r ^ n $, and so the above equation for the hyperplane can be written as $ $ \ ell ( v ) = b, $ $ where $ v = ( x _ 1, \ ldots, x _ n ). $ more generally, if $ v $ is any vector space, and $ \ ell : v \ to \ mathbb r $ is any non - zero linear functional ( i. e. non - zero element of the dual space ), then for any $ b \ in \ mathbb r, $ the set $ $ \ { v \, | \, \ ell ( v
) = b \ } $ $ is a hyperplane in $ v $, and all hyperplanes in $ v $ arise this way. so this gives a reasonable justification for introducing the elements of the dual space to $ v $ ; they generalize the notion of linear equation in several variables from the case of $ \ mathbb r ^ n $ to the case of an arbitrary vector space. now you might ask : why do we make them a vector space themselves? why do we want to add them to one another, or multiply them by scalars? there are lots of reasons for this ; here is one : remember how important it is, when you solve systems of linear equations, to add equations together, or to multiply them by scalars ( here i am referring to all the steps you typically make when performing gaussian elimination on a collection of simultaneous linear equations )? well, under the dictionary above between linear equations and linear functionals, these processes correspond precisely to adding together linear functionals, or multiplying them by scalars. if you ponder this for a bit, you can hopefully convince yourself that making the set of linear functionals a vector space is a pretty natural thing to do. summary : just as concrete vectors $ ( x _ 1, \ ldots, x _ n ) \ in \ mathbb r ^ n $ are naturally generalized to elements of vector spaces, concrete linear expressions $ a _ 1 x _ 1 + \ ldots + a _ n x _ n $ in $ x _ 1, \ ldots, x _ n $ are naturally generalized to linear functionals.
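The dictionary above can be made concrete with a small sketch (illustrative numbers): a linear functional is represented by its coefficient tuple, the hyperplane is its level set, and the row operations of Gaussian elimination are exactly addition and scaling of functionals:

```python
# A linear functional on R^n, represented by its coefficients (a_1, ..., a_n).
def functional(coeffs):
    return lambda v: sum(a * x for a, x in zip(coeffs, v))

# ell(v) = 2*x1 + 3*x2; the hyperplane ell(v) = 6 is a line in R^2.
ell = functional((2.0, 3.0))
on_plane = [(0.0, 2.0), (3.0, 0.0)]          # both satisfy ell(v) = 6
values = [ell(v) for v in on_plane]
print(values)                                 # [6.0, 6.0]

# Adding functionals / scaling them corresponds to row operations on equations.
def add(c1, c2):
    return tuple(a + b for a, b in zip(c1, c2))

def scale(t, c):
    return tuple(t * a for a in c)

# (2*x1 + 3*x2) + 2*(x1 - x2) = 4*x1 + x2
combined = add((2.0, 3.0), scale(2.0, (1.0, -1.0)))
print(combined)                               # (4.0, 1.0)
```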
If you want to normalize your data, you can do so as you suggest and simply calculate the following: $$z_i = \frac{x_i - \min(x)}{\max(x) - \min(x)}$$ where $x = (x_1, ..., x_n)$ and $z_i$ is now your $i^{th}$ normalized data point. As a proof of concept (although you did not ask for it) here is some R code and an accompanying graph to illustrate this point:

# Example data
x = sample(-100:100, 50)

# Normalized data
normalized = (x - min(x)) / (max(x) - min(x))

# Histogram of example data and normalized data
par(mfrow = c(1, 2))
hist(x, breaks = 10, xlab = "Data", col = "lightblue", main = "")
hist(normalized, breaks = 10, xlab = "Normalized Data", col = "lightblue", main = "")
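For readers not using R, the same min-max rescaling can be sketched in Python (the sample data below are made up):

```python
def min_max_normalize(xs):
    """Rescale xs linearly so its minimum maps to 0 and its maximum to 1."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        raise ValueError("all values identical; range is zero")
    return [(x - lo) / (hi - lo) for x in xs]

data = [-100, -40, 0, 25, 100]
normalized = min_max_normalize(data)
print(normalized)   # [0.0, 0.3, 0.5, 0.625, 1.0]
```

Note that, unlike standardization to mean 0 and variance 1, this transformation guarantees every value lands in $[0, 1]$.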
I think your question raises another question (which is also mentioned in some comments here), namely: why are all energy eigenvalues of states with a different angular momentum quantum number $\ell$ but with the same principal quantum number $n$ (e.g., $\mathrm{3s}$, $\mathrm{3p}$, $\mathrm{3d}$) degenerate in the hydrogen atom but non-degenerate in multi-electron atoms? Although Acidflask already gave a good answer (mostly on the non-degeneracy part) I will try to elaborate on it from my point of view and give some additional information. I will split my answer into three parts: the first will address the $\ell$-degeneracy in the hydrogen atom, in the second I will try to explain why this degeneracy is lifted, and in the third I will try to reason why $\mathrm{3s}$ states are lower in energy than $\mathrm{3p}$ states (which are in turn lower in energy than $\mathrm{3d}$ states).

$\ell$-degeneracy of the hydrogen atom's energy eigenvalues

The non-relativistic electron in a hydrogen atom experiences a potential that is analogous to the Kepler problem known from classical mechanics. This potential (aka Kepler potential) has the form $\frac{\kappa}{r}$, where $r$ is the distance between the nucleus and the electron, and $\kappa$ is a proportionality constant. Now, it is known from physics that symmetries of a system lead to conserved quantities (Noether's theorem). For example, from the rotational symmetry of the Kepler potential follows the conservation of angular momentum, which is characterized by $\ell$. But while the length of the angular momentum vector is fixed by $\ell$, there are still different possibilities for the orientation of its $z$-component, characterized by the magnetic quantum number $m$, which are all energetically equivalent as long as the system maintains its rotational symmetry.
So, the rotational symmetry leads to the $m$-degeneracy of the energy eigenvalues for the hydrogen atom. Analogously, the $\ell$-degeneracy of the hydrogen atom's energy eigenvalues can also be traced
back to a symmetry, the $ so ( 4 ) $ symmetry. the system's $ so ( 4 ) $ symmetry is not a geometric symmetry like the one explored before but a so called dynamical symmetry which follows from the form of the schroedinger equation for the kepler potential. ( it corresponds to rotations in a four - dimensional cartesian space. note that these rotations do not operate in some physical space. ) this dynamical symmetry conserves the laplace - runge - lenz vector $ \ hat { \ vec { m } } $ and it can be shown that this conserved quantity leads to the $ \ ell $ - independent energy spectrum with $ e \ propto \ frac { 1 } { n ^ 2 } $. ( a detailed derivation, though in german, can be found here. ) why is the $ \ ell $ - degeneracy of the energy eigenvalues lifted in multi - electron atoms? as the $ m $ - degeneracy of the hydrogen atom's energy eigenvalues can be broken by destroying the system's spherical symmetry, e. g., by applying a magnetic field, the $ \ ell $ degeneracy is lifted as soon as the potential appearing in the hamilton operator deviates from the pure $ \ frac { \ kappa } { r } $ form. this is certainly the case for multielectron atoms since the outer electrons are screened from the nuclear coulomb attraction by the inner electrons and the strength of the screening depends on their distance from the nucleus. ( other factors, like spin and relativistic effects, also lead to a lifting of the $ \ ell $ - degeneracy even in the hydrogen atom. ) why do states with the same $ n $ but lower $ \ ell $ values have lower energy eigenvalues? two effects are important here : the centrifugal force puts an " energy penalty " onto states with higher angular momentum. $ { } ^ { 1 } $ so, a higher $ \ ell $ value implies a stronger centrifugal force, that pushes electrons away from the nucleus. 
The concept of centrifugal force can be seen in the radial Schroedinger equation for the radial part $R(r)$ of the wave function $\psi(r, \theta, \varphi) = R(r) Y_{\ell, m}(\theta, \varphi)$:

\begin{equation} \bigg( \frac{-\hbar^{2}}{2 m_{\mathrm{e}}} \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} + \underbrace{\frac{\hbar^{2}}{2 m_{\mathrm{e}}} \frac{\ell(\ell + 1)}{r^{2}}}_{= \, V^{\ell}_{\mathrm{cf}}(r)} - \frac{Z e^{2}}{r} - E \bigg) R(r) = 0 \end{equation}

The radial part experiences an additional $\ell$-dependent potential $V^{\ell}_{\mathrm{cf}}(r)$ that pushes the electrons away from the nucleus. Core repulsion (Pauli repulsion), on the other hand, puts an "energy penalty" on states with a lower angular momentum. That is because the core repulsion acts only between electrons with the same angular momentum${}^{1}$, so it acts more strongly on the low-angular-momentum states, since there are more core shells with lower angular momentum. Core repulsion is due to the condition that the wave functions must be orthogonal, which in turn is a consequence of the Pauli principle. Because states with different $\ell$ values are already orthogonal by their angular motion, there is no Pauli repulsion between those states. However, states with the same $\ell$ value feel an additional effect from core orthogonalization. The "accidental" $\ell$-degeneracy of the hydrogen atom can be described as a balance between centrifugal force and core repulsion, which both act against the nuclear Coulomb attraction. In the real atom the balance between centrifugal force and core repulsion is broken; the core electrons are contracted compared to the outer electrons because there are fewer inner electron shells screening the nuclear attraction from the core shells than from the valence electrons. Since the inner electron shells are more contracted
than the outer ones, the core repulsion is weakened whereas the effects due to the centrifugal force remain unchanged. the reduced core repulsion in turn stabilizes the states with lower angular momenta, i. e. lower $ \ ell $ values. so, $ \ mathrm { 3s } $ states are lower in energy than $ \ mathrm { 3p } $ states which are in turn lower in energy than $ \ mathrm { 3d } $ states. of course, one has to be careful when using results of the hydrogen atom to describe effects in multielectron atoms as acidflask mentioned. but since only a qualitative description is needed this might be justifiable. i hope this somewhat lengthy answer is helpful. if something is wrong with my arguments i'm happy to discuss those points.
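A tiny numeric sketch of the centrifugal term from the radial equation, evaluated in atomic units (an assumption for convenience, so that $\hbar = m_{\mathrm{e}} = 1$ and $V^{\ell}_{\mathrm{cf}}(r) = \frac{\ell(\ell+1)}{2 r^{2}}$): the penalty vanishes for s states and grows as $\ell(\ell+1)$:

```python
# Centrifugal potential V_cf(r) = l*(l+1) / (2*r**2) in atomic units
# (hbar = m_e = 1); illustrative, not part of the answer above.

def centrifugal_potential(l, r):
    return l * (l + 1) / (2.0 * r ** 2)

r = 1.0                  # one Bohr radius
for l in (0, 1, 2):      # s, p, d
    print(l, centrifugal_potential(l, r))
# l = 0 feels no centrifugal penalty; higher l is pushed outward harder.
```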
I'm not sure about the existence of molecules with bridges through rings. However, there are several publications on the synthesis of molecules mimicking wheels and axles ([2]rotaxanes; the "[2]" refers to the number of interlocked components), such as the one shown below (ref. 1): (the diagram is from reference 1) This specific molecule (8; an "impossible" [2]rotaxane) consists of a macrocycle with a straight-chain molecule bearing bulky end groups going through its center. The two bulky end groups prevent the straight-chain molecule from leaving the macrocycle (mechanically interlocked), as depicted in the diagram (see ref. 2 for the total synthesis of the molecule). Note that ref. 1 also cites articles on the synthesis of [2]catenanes, which contain two interlocked rings (instead of one axle and one macrocycle). Keep in mind that more advanced catenanes and rotaxanes also exist (e.g., [3]catenanes and [3]rotaxanes). (The structures are from reference 1)

References:

1. Edward A. Neal, Stephen M. Goldup, "Chemical consequences of mechanical bonding in catenanes and rotaxanes: isomerism, modification, catalysis and molecular machines for synthesis," Chem. Commun. 2014, 50 (40), 5128-5142.
2. Jeffrey S. Hannam, Stephen M. Lacy, David A. Leigh, Carlos G. Saiz, Alexandra M. Z. Slawin, Sheila G. Stitchell, "Controlled submolecular translational motion in synthesis: a mechanically interlocking auxiliary," Angew. Chem. Int. Ed. 2004, 43 (25), 3260-3264.
i think the ( first order ) right thing to do is look at the ratio of flops to bytes needed in the algorithm, which i call $ \ beta $. let $ f _ { \ mathrm { max } } $ be the maximum flop rate of the processor, and $ b _ { \ mathrm { max } } $ the maximum bandwidth. if $ \ frac { f _ { \ mathrm { max } } } { \ beta } > b _ { \ mathrm { max } } $, then the algorithm will be bandwidth limited. if $ b _ { \ mathrm { max } } \ beta > f _ { \ mathrm { max } } $, the algorithm is flop limited. i think counting memory accesses is mandatory, but we should also be thinking about : how much local memory is required how much possible concurrency we have then you can start to analyze algorithms for modern hardware.
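The criterion above can be sketched directly. The hardware numbers below are made up for illustration, not taken from the answer:

```python
# Roofline-style classification.
#   beta  = flops per byte required by the algorithm
#   f_max = peak flop rate of the processor
#   b_max = peak memory bandwidth
# Bandwidth limited iff f_max / beta > b_max (equivalently beta * b_max < f_max).

def classify(beta, f_max, b_max):
    if f_max / beta > b_max:
        return "bandwidth limited"
    return "flop limited"

# Assumed figures: 100 Gflop/s peak, 10 GB/s bandwidth.
f_max, b_max = 100e9, 10e9

# Sparse matrix-vector-like kernel, ~0.25 flops/byte: starved for bandwidth.
print(classify(0.25, f_max, b_max))   # bandwidth limited
# Large dense matrix-matrix multiply, ~50 flops/byte: compute bound.
print(classify(50.0, f_max, b_max))   # flop limited
```

The crossover point $\beta = f_{\mathrm{max}} / b_{\mathrm{max}}$ (here 10 flops/byte) is sometimes called the machine balance.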
$ $ x ^ 2 = \ underbrace { x + x + \ cdots + x } _ { ( x \ text { times } ) } $ $ $ $ \ frac { d } { dx } x ^ 2 = \ frac { d } { dx } [ \ underbrace { x + x + \ cdots + x } _ { ( x \ text { times } ) } ] $ $ $ $ 2x = 1 + 1 + \ cdots + 1 = x $ $ $ $ 2 = 1 $ $
Short answer: no, you don't have to do integration by parts for certain FEMs, but in your case you do. Long answer: Let's say $u_h$ is the finite element solution. If you choose a piecewise linear polynomial as your basis, then taking $\Delta$ of it will give you an order-1 distribution (think of taking the derivative of a Heaviside step function), and the integration of $-\Delta u_h \in H^{-1}$ multiplied with $v$ will only make sense when you take it as a duality pairing rather than an $L^2$-inner product. Nor will you get a null matrix: the Riesz representation theorem says that there is an element $\varphi_{-\Delta u_h} \in H^1_0$ that characterizes the duality pairing via the inner product in $H^1$: $$\langle -\Delta u_h, v \rangle_{H^{-1}, H^1_0} = \underbrace{\int_{\Omega} \nabla \varphi_{-\Delta u_h} \cdot \nabla v}_{\text{inner product in } H^1}.$$ Integrating by parts element by element for $u_h$ sheds light on this duality pairing: for $T$ an element in the triangulation, $$\int_{\Omega} \nabla u_h \cdot \nabla v = \sum_{T} \left( -\int_{T} \Delta u_h \, v + \int_{\partial T} \frac{\partial u_h}{\partial n} v \, ds \right).$$ This tells you that $-\Delta u_h$ should include the inter-element flux jump in its duality-pairing representation; notice that the integration on the boundary of each element is also a duality pairing, between $H^{1/2}$ and $H^{-1/2}$. Even if you use a quadratic basis, which has a non-vanishing $\Delta$ on each element, you still can't write $(\Delta u, v)$ as an inner product, because of this inter-element flux jump's presence.
Integration by parts can be traced back to the Sobolev theory for elliptic PDE using smooth functions, where the $W^{k,p}$-spaces are all closures of smooth functions under the $W^{k,p}$ type of integral norm. One then asks what the minimum regularity is under which we can perform the inner product, also bearing in mind that an $H^1$-regular weak solution is, under certain conditions, the $H^2$-strong solution (elliptic regularity). But a piecewise continuous linear polynomial is not $H^2$; from this point of view, it doesn't make any sense to take an inner product using $\Delta u_h$ either. For certain FEMs, you don't have to do integration by parts. For example, the least-squares finite element method. Write the second-order PDE as a first-order system: $$\begin{cases} \boldsymbol{\sigma} = -\nabla u, \\ \nabla \cdot \boldsymbol{\sigma} = f, \end{cases}$$ then minimize the least-squares functional: $$\mathcal{J}(\boldsymbol{\sigma}, u) = \|\boldsymbol{\sigma} + \nabla u\|_{L^2(\Omega)}^2 + \|\nabla \cdot \boldsymbol{\sigma} - f\|_{L^2(\Omega)}^2.$$ In the same spirit as the Ritz-Galerkin functional, the finite element formulation of minimizing the above functional in a finite element space does not require integration by parts.
It depends a lot on the size of your matrix, in the large-scale case also on whether it is sparse, and on the accuracy you want to achieve. If your matrix is too large to allow a single factorization, and you need high accuracy, the Lanczos algorithm is probably the fastest way. In the nonsymmetric case, the Arnoldi algorithm is needed, which is numerically unstable, so an implementation needs to address this (it is somewhat awkward to cure). If this is not the case in your problem, give more specific information in your question, then add a comment to this answer, and I'll update it.

Edit: [This was for the old version of the question, asking for the largest eigenvalue.] As your matrix is small and apparently dense, I'd do Arnoldi iteration on $B = (I - A)^{-1}$, using an initial permuted triangular factorization of $I - A$ to have cheap multiplication by $B$. (Or compute an explicit inverse, but this costs 3 times as much as the factorization.) You want to test whether $B$ has a negative eigenvalue. Working with $B$ in place of $A$, negative eigenvalues are much better separated, so if there is one, you should converge rapidly. But I am curious about where your problem comes from. Nonsymmetric matrices usually have complex eigenvalues, so "largest" isn't even well-defined. Thus you must know more about your problem, which might help in suggesting how to solve it even faster and/or more reliably.

Edit 2: It is difficult to get a particular subset of interest with Arnoldi. To get the absolutely largest eigenvalues reliably, you'd do subspace iteration using the original matrix, with a subspace size matching or exceeding the number of eigenvalues expected to be close to 1 or larger in magnitude. On small matrices this will be slower than the QR algorithm, but on large matrices it will be much faster.
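As a minimal sketch of the underlying idea of iterating with a matrix to extract its dominant eigenvalue, here is plain power iteration on a small dense matrix. The matrix is made up, and a production code would use Lanczos/Arnoldi (or subspace iteration) as discussed above:

```python
# Power iteration: repeatedly apply A and normalize; the normalization
# factor converges to the eigenvalue of largest magnitude, assuming a
# simple dominant eigenvalue and a generic starting vector.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=200):
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(w, key=abs)       # infinity-norm estimate of the eigenvalue
        v = [x / lam for x in w]    # normalize to avoid overflow
    return lam

A = [[2.0, 1.0],
     [1.0, 2.0]]                    # eigenvalues 3 and 1
print(power_iteration(A))           # ~3.0
```

Shift-and-invert (iterating with $B = (I - A)^{-1}$ as in the answer) is the same loop with the matrix-vector product replaced by a triangular solve, which is what improves the separation of the eigenvalues near 1.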
it is a result from the insecticide you are using. from this excerpt from the 10th edition of the mallis handbook on pest control : neurotoxic insecticides cause tremors and muscle spasms, flipping the cockroach on its back. a healthy cockroach can easily right itself, but without muscle coordination, the cockroach dies on its back. cockroaches exposed to slow - acting insecticides that target respiration ( energy production ) also can die β€œ face - down, ” as they run out of energy without experiencing muscle spasms. here's also a website from umass describing it in more detail : most of these insecticides are organophosphate nerve poisons. the nerve poison often inhibits cholinesterase, an enzyme that breaks down acetyl choline ( ach ), a neurotransmitter. with extra ach in the nervous system, the cockroach has muscular spasms which often result in the cockroach flipping on its back. without muscular coordination the cockroach cannot right itself and eventually dies in its upside down - position. and an entomology professor even answered this for maxim : most insecticides are poisons that target a bug ’ s nervous system. when you spray a roach, those neurotoxins cause tremors and muscle spasms, which flip it onto its back, and without muscle coordination, that ’ s the position it dies in
the odour threshold for hydrogen cyanide $ ( \ ce { hcn } ) $ is in fact quite a bit lower than the lethal toxicity threshold. data for $ \ ce { hcn } $ can be found in many places, but here and here are a couple of good references. that subset of the human population that can detect bitter almonds do so at a threshold of $ 0. 58 $ to $ \ pu { 5 ppm } $. the lethal exposure dose is upwards of $ \ pu { 135 ppm } $. that's a whole $ \ pu { 100 ppm } $ range in which to detect and report the fragrant properties.
Was xkcd, so time for Dilbert: [Dilbert comic strip; the image and its source link were not preserved in this text]
if you are not well - acquainted with special relativity, there is no way to truly explain this phenomenon. the best one could do is give you rules steeped in esoteric ideas like " electromagnetic field " and " lorentz invariance. " of course, this is not what you're after, and rightly so, since physics should never be about accepting rules handed down from on high without justification. the fact is, magnetism is nothing more than electrostatics combined with special relativity. unfortunately, you won't find many books explaining this - either the authors mistakenly believe maxwell's equations have no justification and must be accepted on faith, or they are too mired in their own esoteric notation to pause to consider what it is they are saying. the only book i know of that treats the topic correctly is purcell's electricity and magnetism, which was recently re - released in a third edition. ( the second edition works just fine if you can find a copy. ) a brief, heuristic outline of the idea is as follows. suppose there is a line of positive charges moving along the $ z $ - axis in the positive direction - a current. consider a positive charge $ q $ located at $ ( x, y, z ) = ( 1, 0, 0 ) $, moving in the negative $ z $ - direction. we can see that there will be some electrostatic force on $ q $ due to all those charges. but let's try something crazy - let's slip into $ q $'s frame of reference. after all, the laws of physics had better hold for all points of view. clearly the charges constituting the current will be moving faster in this frame. but that doesn't do much, since after all the coulomb force clearly doesn't care about the velocity of the charges, only on their separation. but special relativity tells us something else. it says the current charges will appear closer together. 
if they were spaced apart by intervals $ \ delta z $ in the original frame, then in this new frame they will have a spacing $ \ delta z \ sqrt { 1 - v ^ 2 / c ^ 2 } $, where $ v $ is $ q $'s speed in the original frame. this is the famous length contraction predicted by special relativity. if the current charges appear closer together, then clearly $ q $ will feel a larger electrostatic force from the $ z $ - axis as a whole. it will experience an additional
force in the positive $ x $ - direction, away from the axis, over and above what we would have predicted from just sitting in the lab frame. basically, coulomb's law is the only force law acting on a charge, but only the charge's rest frame is valid for using this law to determine what force the charge feels. rather than constantly transforming back and forth between frames, we invent the magnetic field as a mathematical device that accomplishes the same thing. if defined properly, it will entirely account for this anomalous force seemingly experienced by the charge when we are observing it not in its own rest frame. in the example i just went through, the right - hand rule tells you we should ascribe a magnetic field to the current circling around the $ z $ - axis such that it is pointing in the positive $ y $ - direction at the location of $ q $. the velocity of the charge is in the negative $ z $ - direction, and so $ q \ vec { v } \ times \ vec { b } $ points in the positive $ x $ - direction, just as we learned from changing reference frames.
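The length-contraction bookkeeping in the example above can be checked numerically. The spacing and speed below are illustrative values, not from the answer: contracting the spacing by $\sqrt{1 - v^2/c^2}$ raises the linear charge density by the factor $\gamma$, which is the source of the extra force:

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

# Charges spaced dz apart in the lab frame appear dz * sqrt(1 - v^2/c^2)
# apart in the frame of a charge moving at speed v (illustrative values).
dz = 1.0        # meters, assumed spacing
v = 0.6 * C     # assumed speed of the test charge q

dz_moving = dz * sqrt(1.0 - (v / C) ** 2)
density_ratio = dz / dz_moving   # linear charge density increases by gamma

print(dz_moving, density_ratio)
```

At 60% of light speed the spacing contracts to 0.8 of its lab value, so the line of charge looks 1.25 times denser to the moving charge.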
you tend to use the covariance matrix when the variable scales are similar and the correlation matrix when variables are on different scales. using the correlation matrix is equivalent to standardizing each of the variables ( to mean 0 and standard deviation 1 ). in general, pca with and without standardizing will give different results. especially when the scales are different. as an example, take a look at this r heptathlon data set. some of the variables have an average value of about 1. 8 ( the high jump ), whereas other variables ( run 800m ) are around 120. library ( hsaur ) heptathlon [, - 8 ] # look at heptathlon data ( excluding'score'variable ) this outputs : hurdles highjump shot run200m longjump javelin run800m joyner - kersee ( usa ) 12. 69 1. 86 15. 80 22. 56 7. 27 45. 66 128. 51 john ( gdr ) 12. 85 1. 80 16. 23 23. 65 6. 71 42. 56 126. 12 behmer ( gdr ) 13. 20 1. 83 14. 20 23. 10 6. 68 44. 54 124. 20 sablovskaite ( urs ) 13. 61 1. 80 15. 23 23. 92 6. 25 42. 78 132. 24 choubenkova ( urs ) 13. 51 1. 74 14. 76 23. 93 6. 32 47. 46 127. 90... now let's do pca on covariance and on correlation : # scale = t bases the pca on the correlation matrix hep. pc. cor = prcomp ( heptathlon [, - 8 ], scale = true ) hep. pc. cov = prcomp ( heptathlon [, - 8 ], scale = false ) biplot ( hep. pc. cov ) biplot ( hep. pc. cor ) notice that pca on covariance is dominated by run800m and javelin : pc1 is almost equal to run800m ( and explains $ 82 \ % $ of the variance ) and pc2 is almost equal to javelin ( together they explain $ 97 \ % $ ). pca on correlation is much more informative and reveals some structure in the data and relationships between variables ( but note that the explained variances drop to $ 64 \ % $ and $ 71 \
% $ ). notice also that the outlying individuals ( in this data set ) are outliers regardless of whether the covariance or correlation matrix is used.
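the effect of standardizing can be reproduced without r or the heptathlon data. below is a self-contained python sketch (my own toy construction): two perfectly correlated variables where one is scaled by 100, with pca done by hand via the leading eigenvector of the 2x2 covariance or correlation matrix.

```python
import math

# Toy illustration: two perfectly correlated variables on very
# different scales, y = 100 * x (not the heptathlon data).
x = [float(i) for i in range(10)]
y = [100.0 * v for v in x]

def var(a):
    m = sum(a) / len(a)
    return sum((v - m) ** 2 for v in a) / (len(a) - 1)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

def leading_pc(a, b, c):
    """Unit leading eigenvector of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b ** 2)) / 2
    vx, vy = b, lam - a
    n = math.hypot(vx, vy)
    return vx / n, vy / n

# PCA on the covariance matrix: PC1 is dominated by the large-scale variable.
pc_cov = leading_pc(var(x), cov(x, y), var(y))

# PCA on the correlation matrix (i.e. after standardizing): balanced loadings.
r = cov(x, y) / math.sqrt(var(x) * var(y))
pc_cor = leading_pc(1.0, r, 1.0)

print(pc_cov)  # roughly (0.01, 0.9999): y dominates PC1
print(pc_cor)  # roughly (0.707, 0.707): equal weight on both variables
```

the covariance-based pc1 puts essentially all its weight on the large-scale variable, while the correlation-based pc1 weights both equally, mirroring the run800m/javelin domination above.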
this is a really interesting question. it turns out that your body is reasonably conductive ( think salt water, more on that in the answer to this question ), and that it can couple to rf sources capacitively. referring to the wikipedia article on keyless entry systems ; they typically operate at an rf frequency of $ 315 \ text { mhz } $, the wavelength of which is about $ 1 \ text { m } $. effective antennas ( ignoring fractal antennas ) typically have a length of $ \ frac { \ lambda } { 2 } = \ frac { 1 } { 2 } \ text { m } \ approx1. 5 \ text { ft } $. so, the effect is probably caused by one or more of the cavities in your body ( maybe your head or chest cavity ) acting as a resonance chamber for the rf signal from your wireless remote. for another example of how a resonance chamber can amplify waves think about the hollow area below the strings of a guitar. without the hollow cavity the sound from the guitar would be almost imperceptible. edit : as elucidated in the comments, a cavity doesn't necessarily need to be an empty space ; just a bounded area which partially reflects electromagnetic waves at the boundaries. the area occupied by your brain satisfies these conditions. edit 2 : as pointed out in the comments, a string instrument is significantly louder with just a sounding board behind the strings, so my analogy, though true, is a bit misleading. edit 3 : as promised in the comments, i made some more careful measurements of the effect in question, using a number of different orientations of remote position and pointing. i've posted these as a separate answer to this question.
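the arithmetic behind the half-wavelength figure is easy to check; this quick python sketch computes the wavelength of a 315 mhz carrier and its half in metres and feet.

```python
# Rough numbers behind the answer: a 315 MHz key-fob carrier and its
# half-wavelength, converted to feet.
c = 2.998e8          # speed of light, m/s
f = 315e6            # keyless-entry carrier frequency, Hz

wavelength = c / f   # about 0.95 m
half = wavelength / 2

print(round(wavelength, 2), round(half, 2), round(half / 0.3048, 2))
# about 0.95 m full wavelength, 0.48 m half-wavelength, i.e. roughly 1.5 ft
```

a half-wavelength of about half a metre is indeed on the scale of a human head or chest cavity.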
// gcc impredictivepropositionallogic1.c -o impredictivepropositionallogic1.exe -std=c99 -Wall -O3

/* which answer in this list is the correct answer to this question?
   (a) all of the below.
   (b) none of the below.
   (c) all of the above.
   (d) one of the above.
   (e) none of the above.
   (f) none of the above. */

#include <stdio.h>

#define iff(x, y) ((x) == (y))

int main() {
    printf("a b c d e f\n");
    for (int a = 0; a <= 1; a++)
    for (int b = 0; b <= 1; b++)
    for (int c = 0; c <= 1; c++)
    for (int d = 0; d <= 1; d++)
    for (int e = 0; e <= 1; e++)
    for (int f = 0; f <= 1; f++) {
        int ra = iff(a, b && c && d && e && f);
        int rb = iff(b, !c && !d && !e && !f);
        int rc = iff(c, a && b);
        int rd = iff(d, (a && !b && !c) || (!a && b && !c) || (!a && !b && c));
        int re = iff(e, !a && !b && !c && !d);
        int rf = iff(f, !a && !b && !c && !d && !e);
        int r = ra && rb && rc && rd && re && rf;
        if (r) printf("%d %d %d %d %d %d\n", a, b, c, d, e, f);
    }
    return 0;
}

this outputs:

a b c d e f
0 0 0 0 1 0

the main point i'd like to get across is that you cannot assume at the outset that there is only 1
satisfying assignment. for example, consider the question:

which of the following is true?
(a) both of these
(b) both of these

you might be tempted to say that both (a) and (b) are true, but it is also consistent that both (a) and (b) are false. the tendency to assume a unique solution from definitions isn't correct when the definitions are impredicative (i.e. self-referential).
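the two-answer example can be brute-forced in the same spirit as the c program above; this little python sketch (mine, not from the original answer) enumerates the assignments.

```python
# Brute-force check of the two-answer puzzle:
#   (a) "both of these"   (b) "both of these"
models = []
for a in (False, True):
    for b in (False, True):
        # each statement is true exactly when both statements are true
        if a == (a and b) and b == (a and b):
            models.append((a, b))

print(models)  # [(False, False), (True, True)]: two consistent assignments
```

two satisfying assignments survive, so neither "both are true" nor "both are false" is forced.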
error estimates usually have the form $$\|u - u_h\| \leq c(h),$$ where $u$ is the exact solution you are interested in, $u_h$ is a computed approximate solution, $h$ is an approximation parameter you can control, and $c(h)$ is some function of $h$ (among other things). in finite element methods, $u$ is the solution of a partial differential equation and $u_h$ would be the finite element solution for a mesh with mesh size $h$, but you have the same structure in inverse problems (with the regularization parameter $\alpha$ in place of $h$) or iterative methods for solving equations or optimization problems (with the iteration index $k$ -- or rather $1/k$ -- in place of $h$). the point of such an estimate is to help answer the question "if i want to get within, say, $10^{-3}$ of the exact solution, how small do i have to choose $h$?" the difference between a priori and a posteriori estimates is in the form of the right-hand side $c(h)$: in a priori estimates, the right-hand side depends on $h$ (usually explicitly) and $u$, but not on $u_h$. for example, a typical a priori estimate for the finite element approximation of poisson's equation $-\Delta u = f$ would have the form $$\|u - u_h\|_{L^2} \leq c h^2 |u|_{H^2},$$ with a constant $c$ depending on the geometry of the domain and the mesh. in principle, the right-hand side can be evaluated prior to computing $u_h$ (hence the name), so you'd be able to choose $h$ before solving anything. in practice, neither $c$ nor $|u|_{H^2}$ is known ($u$ is what you're looking for in the first place), but you can sometimes get order-of-magnitude estimates for $c$ by carefully going through the proofs and for $|u|$ using the data $f$ (which is known). the main use is as a qualitative estimate -- it tells you that if
you want to make the error smaller by a factor of four, you need to halve $h$. in a posteriori estimates, the right-hand side depends on $h$ and $u_h$, but not on $u$. a simple residual-based a posteriori estimate for poisson's equation would be $$\|u - u_h\|_{L^2} \leq c h \|f + \Delta u_h\|_{H^{-1}},$$ which could in theory be evaluated after computing $u_h$. in practice, the $H^{-1}$ norm is problematic to compute, so you'd further manipulate the right-hand side to get an element-wise bound $$\|u - u_h\|_{L^2} \leq c \left( \sum_{K} h_K^2 \|f + \Delta u_h\|_{L^2(K)} + \sum_{F} h_K^{3/2} \|J(\nabla u_h)\|_{L^2(F)} \right),$$ where the first sum is over the elements $K$ of the triangulation, $h_K$ is the size of $K$, the second sum is over all element boundaries $F$, and $J(\nabla u_h)$ denotes the jump of the normal derivative of $u_h$ across $F$. this is now fully computable after obtaining $u_h$, except for the constant $c$. so again the use is mainly qualitative -- it tells you which elements give a larger error contribution than others, so instead of reducing $h$ uniformly, you just select some elements with large error contributions and make those smaller by subdividing them. this is the basis of adaptive finite element methods.
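a cheap way to see an a priori rate of the form $c h^2$ in action, without a full finite element solver (this is my own stand-in example, not from the answer): the central second difference approximates $u''$ with an $O(h^2)$ error, so halving $h$ should cut the error by about four.

```python
import math

# Illustration of an a priori estimate of the form  error <= c h^2 :
# the central difference (u(x+h) - 2u(x) + u(x-h)) / h^2 approximates
# u''(x) to second order, so halving h should quarter the error.
u = math.sin
x = 1.0
exact = -math.sin(x)          # u''(x) for u = sin

def fd2(h):
    return (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2

e1 = abs(fd2(0.1) - exact)
e2 = abs(fd2(0.05) - exact)
print(e1 / e2)  # close to 4, confirming the O(h^2) rate
```

this is exactly the "qualitative" use described above: the estimate tells you how the error scales with $h$ even though the constant is unknown.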
i used to implement everything myself, but lately have begun using libraries much more. i think there are several very important advantages of using a library, beyond just the issue of whether you have to write a routine yourself or not. if you use a library, you get:

- code that has been tested by hundreds/thousands/more users
- code that will continue to be updated and improved in the future, without any work on your part
- optimized code that is more efficient and perhaps more scalable than what you would write in a first attempt
- depending on the library, by establishing an interface to it in your code you may get access to many algorithms that you currently don't use but may want to in the future

in the last bullet point above, i'm thinking of large libraries like trilinos or petsc. i can reinforce this with a couple of concrete personal examples from the development of pyclaw. although it would have been straightforward to parallelize clawpack with mpi calls, we chose to use petsc. this allowed us to limit the parallel code in the package to less than 300 lines of python, but even better, by putting our data in petsc's format we gained immediate access to petsc's implicit solvers, enabling current work on an implicit solver in pyclaw. as a second example, pyclaw initially included hand-coded fifth-order weno reconstruction, but we eventually decided to rely on the pyweno package for this. this was a huge gain, since pyweno can automatically generate weno routines of any order in several languages. finally, if you use libraries, you can contribute back by developing improvements or finding bugs, which will benefit many other people, whereas debugging or improving your own code only benefits you.
pedagogical dimension due to its simplicity lomuto's partitioning method might be easier to implement. there is a nice anecdote in jon bentley's programming pearl on sorting : β€œ most discussions of quicksort use a partitioning scheme based on two approaching indices [... ] [ i. e. hoare's ]. although the basic idea of that scheme is straightforward, i have always found the details tricky - i once spent the better part of two days chasing down a bug hiding in a short partitioning loop. a reader of a preliminary draft complained that the standard two - index method is in fact simpler than lomuto's and sketched some code to make his point ; i stopped looking after i found two bugs. ” performance dimension for practical use, ease of implementation might be sacrificed for the sake of efficiency. on a theoretical basis, we can determine the number of element comparisons and swaps to compare performance. additionally, actual running time will be influenced by other factors, such as caching performance and branch mispredictions. as shown below, the algorithms behave very similar on random permutations except for the number of swaps. there lomuto needs thrice as many as hoare! number of comparisons both methods can be implemented using $ n - 1 $ comparisons to partition an array of length $ n $. this is essentially optimal, since we need to compare every element to the pivot for deciding where to put it. number of swaps the number of swaps is random for both algorithms, depending on the elements in the array. if we assume random permutations, i. e. all elements are distinct and every permutation of the elements is equally likely, we can analyze the expected number of swaps. as only relative order counts, we assume that the elements are the numbers $ 1, \ ldots, n $. that makes the discussion below easier since the rank of an element and its value coincide. 
lomuto's method the index variable $ j $ scans the whole array and whenever we find an element $ a [ j ] $ smaller than pivot $ x $, we do a swap. among the elements $ 1, \ ldots, n $, exactly $ x - 1 $ ones are smaller than $ x $, so we get $ x - 1 $ swaps if the pivot is $ x $. the overall expectation then results by averaging over all pivots. each value in $ \ { 1, \
ldots, n \ } $ is equally likely to become pivot ( namely with prob. $ \ frac1n $ ), so we have $ $ \ frac1n \ sum _ { x = 1 } ^ n ( x - 1 ) = \ frac n2 - \ frac12 \ ;. $ $ swaps on average to partition an array of length $ n $ with lomuto's method. hoare's method here, the analysis is slightly more tricky : even fixing pivot $ x $, the number of swaps remains random. more precisely : the indices $ i $ and $ j $ run towards each other until they cross, which always happens at $ x $ ( by correctness of hoare's partitioning algorithm! ). this effectively divides the array into two parts : a left part which is scanned by $ i $ and a right part scanned by $ j $. now, a swap is done exactly for every pair of β€œ misplaced ” elements, i. e. a large element ( larger than $ x $, thus belonging in the right partition ) which is currently located in the left part and a small element located in the right part. note that this pair forming always works out, i. e. there the number of small elements initially in the right part equals the number of large elements in the left part. one can show that the number of these pairs is hypergeometrically $ \ mathrm { hyp } ( n - 1, n - x, x - 1 ) $ distributed : for the $ n - x $ large elements we randomly draw their positions in the array and have $ x - 1 $ positions in the left part. accordingly, the expected number of pairs is $ ( n - x ) ( x - 1 ) / ( n - 1 ) $ given that the pivot is $ x $. finally, we average again over all pivot values to obtain the overall expected number of swaps for hoare's partitioning : $ $ \ frac1n \ sum _ { x = 1 } ^ n \ frac { ( n - x ) ( x - 1 ) } { n - 1 } = \ frac n6 - \ frac13 \ ;. $ $ ( a more detailed description can be found in my master's thesis, page 29. ) memory access pattern both algorithms use two pointers into the array that scan it sequentially. therefore both behave almost optimal w. r
. t. caching.

equal elements and already sorted lists

as already mentioned by wandering logic, the performance of the algorithms differs more drastically for lists that are not random permutations. on an array that is already sorted, hoare's method never swaps, as there are no misplaced pairs (see above), whereas lomuto's method still does its roughly $n/2$ swaps! the presence of equal elements requires special care in quicksort. (i stepped into this trap myself; see my master's thesis, page 36, for a β€œtale on premature optimization”.) consider as an extreme example an array which is filled with $0$s. on such an array, hoare's method performs a swap for every pair of elements - which is the worst case for hoare's partitioning - but $i$ and $j$ always meet in the middle of the array. thus, we have optimal partitioning and the total running time remains in $\mathcal{O}(n \log n)$. lomuto's method behaves much more stupidly on the all-$0$ array: the comparison a[j] <= x will always be true, so we do a swap for every single element! but even worse: after the loop, we always have $i = n$, so we observe the worst-case partitioning, making the overall performance degrade to $\Theta(n^2)$!

conclusion

lomuto's method is simple and easier to implement, but should not be used for implementing a library sorting method.

clarification

in this answer, i explained why a good implementation of the β€œcrossing-pointer scheme” from hoare's partitioning method is superior to the simpler scheme of lomuto's method, and i stand by everything i said on that topic. alas, this is strictly speaking not what the op was asking! the pseudocode for hoare-partition as given above does not have the desirable properties i lengthily praised, since it fails to exclude the pivot element from the partitioning range.
as a consequence, the pivot is β€œlost” in the swapping and cannot be put into its final position after partitioning, and hence cannot be excluded from the recursive calls. (that means the recursive calls no longer fulfill the same randomness assumptions and the whole analysis seems to break down! robert sedgewick's phd dissertation discusses this issue in detail.)
for pseudocode of the desirable implementation analyzed above, see my master's thesis, algorithm 1. ( that code is due to robert sedgewick ).
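for concreteness, here are runnable python sketches of the two schemes (my own illustration, not the thesis version): first-element pivot, with the pivot excluded from the scanned range and swapped into its final place at the end, instrumented to count swaps. on random permutations the averages land near the $n/2$ and $n/6$ figures derived above.

```python
import random

def lomuto(a, lo, hi):
    """Lomuto partition of a[lo..hi] around pivot a[lo]; returns (pivot index, swaps)."""
    x = a[lo]
    i, swaps = lo, 0
    for j in range(lo + 1, hi + 1):
        if a[j] < x:
            i += 1
            a[i], a[j] = a[j], a[i]
            swaps += 1
    a[lo], a[i] = a[i], a[lo]          # put pivot into its final place
    return i, swaps

def hoare(a, lo, hi):
    """Crossing-pointer partition of a[lo..hi] around pivot a[lo]."""
    x = a[lo]
    i, j, swaps = lo, hi + 1, 0
    while True:
        i += 1
        while i <= hi and a[i] < x:    # scan right for a large element
            i += 1
        j -= 1
        while a[j] > x:                # scan left for a small element
            j -= 1
        if i >= j:
            a[lo], a[j] = a[j], a[lo]  # pivot into its final position
            return j, swaps
        a[i], a[j] = a[j], a[i]        # fix one misplaced pair
        swaps += 1

random.seed(1)
n, trials = 1001, 100
ls = hs = 0
for _ in range(trials):
    perm = random.sample(range(n), n)
    ls += lomuto(perm[:], 0, n - 1)[1]
    hs += hoare(perm[:], 0, n - 1)[1]

print(ls / trials, hs / trials)  # roughly n/2 = 500 and n/6 = 167 swaps
```

the roughly threefold gap in average swap counts matches the $\frac{n}{2}$ versus $\frac{n}{6}$ analysis above.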
i can't tell you which to learn, but here's some contrasting points ( from a very vhdl - centric user, but i've tried to be as fair as possible! ), which may help you make a choice based on your own preferences in terms of development style : and keep in mind the famous quote which goes along the lines of " i prefer whichever of the two i'm not currently using " ( sorry, i can't recall who actually wrote this - possibly janick bergeron? ) vhdl strongly - typed more verbose very deterministic non - c - like syntax ( and mindset ) lots of compilation errors to start with, but then mostly works how you expect. this can lead to a very steep feeling learning curve ( along with the unfamiliar syntax ) verilog weakly - typed more concise only deterministic if you follow some rules carefully more c - like syntax ( and mindset ) errors are found later in simulation - the learning curve to " feeling like getting something done " is shallower, but goes on longer ( if that's the right metaphor? ) also in verilog's favour is that high - end verification is leaning more and more to systemverilog which is a huge extension to verilog. but the high - end tools can also combine vhdl synthesis code with systemverilog verification code. for another approach entirely : myhdl - you get all the power of python as a verification language with a set of synthesis extensions from which you can generate either vhdl or verilog. or cocotb - all the power of python as a verification language, with your synthesisable code still written in whichever hdl you decided to learn ( ie vhdl or verilog ). systemc is also a good option for an hdl. systemc supports both system level and register transfer level ( rtl ) design. you need only a c + + compiler to simulate it. high - level synthesis tools will then convert systemc code to verilog or vhdl for logic synthesis.
for many years i was under the misapprehension that i didn't have enough time to write unit tests for my code. when i did write tests, they were bloated, heavy things which only encouraged me to think that i should only ever write unit tests when i knew they were needed. then i started to use test driven development and i found it to be a complete revelation. i'm now firmly convinced that i don't have the time not to write unit - tests. in my experience, by developing with testing in mind you end up with cleaner interfaces, more focussed classes & modules and generally more solid, testable code. every time i work with legacy code which doesn't have unit tests and have to manually test something, i keep thinking " this would be so much quicker if this code already had unit tests ". every time i have to try and add unit test functionality to code with high coupling, i keep thinking " this would be so much easier if it had been written in a de - coupled way ". comparing and contrasting the two experimental stations that i support. one has been around for a while and has a great deal of legacy code, while the other is relatively new. when adding functionality to the old lab, it is often a case of getting down to the lab and spending many hours working through the implications of the functionality they need and how i can add that functionality without affecting any of the other functionality. the code is simply not set up to allow off - line testing, so pretty much everything has to be developed on - line. if i did try to develop off - line then i would end up with more mock objects than would be reasonable. in the newer lab, i can usually add functionality by developing it off - line at my desk, mocking out only those things which are immediately required, and then only spending a short time in the lab, ironing out any remaining problems not picked up off - line. for clarity, and since @ naught101 asked... 
i tend to work on experimental control and data acquisition software, with some ad hoc data analysis, so the combination of tdd with revision control helps to document both changes in the underlying experiment hardware and as well as changes in data collection requirements over time. even in the situation of developing exploratory code however, i could see a significant benefit from having assumptions codified, along with the ability to see how those assumptions evolve over time.
a great question, and since a textbook could probably be written to answer it, there's probably not going to be any single answer. i want to provide a general answer tailored to hobbyists, and hope that people more knowledgeable can come in and tie up specifics. summary solder is basically metal wire with a " low " melting point, where low for our purposes means low enough to be melted with a soldering iron. for electronics, it is traditionally a mix of tin and lead. tin has a lower melting point than lead, so more tin means a lower melting point. most common lead - based solder you'll find at the gadget store will be 60sn / 40pb ( for 60 % tin, 40 % lead ). there's some other minor variations you're likely to see, such as 63sn / 37pb, but for general hobbyist purposes i have used 60 / 40 for years with no issue. science content now, molten metal is a tricky beast, because it behaves a bit like water : of particular interest is its surface tension. molten metal will ball up if it doesn't find something to " stick " to. that's why solder masks work to keep jumpers from forming, and why you see surface - mount soldering tricks. in general, metal likes to stick to metal, but doesn't like to stick to oils or oxidized metals. by simply being exposed to air, our parts and boards start to oxidize, and through handling they get exposed to grime ( such as oils from our skin ). the solution to this is to clean the parts and boards first. that's where flux cores come in to solder. flux cores melt at a lower temperature than the solder, and coat the area to be soldered. the flux cleans the surfaces, and if they're not too dirty the flux is sufficient to make a good strong solder joint ( makes it " sticky " enough ). flux cores there are two common types of flux cores : acid and rosin. acid is for plumbing, and should not be used in electronics ( it is likely to eat your components or boards ). 
you do need to keep an eye out for that, but in general if it's in the electronics section of a gadget store it's good, if it's in the plumbing section of a home supply / home improvement store, it's bad. in general, for hobbyist use,
as long as you keep your parts clean and don't let them sit around too long, a flux core isn't necessary. however, if you are looking for solder then you probably should pick up something with a rosin core. the only reason you wouldn't use a flux core solder as a hobbyist is if you knew exactly why you didn't need the flux in the first place, but again, if you have some solder without flux you can probably use it for hobbyist purposes without issue. lead free that's pretty much all a hobbyist needs to know, but it doesn't hurt to know about lead - free solder since things are going that way. the eu now requires pretty much all commercially - available electronics ( with exceptions for the health and aerospace industries, as i recall ) to use lead - free components, including solder. this is catching on, and while you can still find lead - based solder it can lead to confusion. the purpose of lead - free solder is exactly the same : it's an evolution in the product meant to be more environmentally friendly. the issue is that lead ( which is used to reduce melting point of the solder ) is very toxic, so now different metals are used instead which aren't as effective at controlling melting point. in general, you can use lead - free and lead - based solder interchangeably for hobbyist uses, but lead - free solder is a bit harder to work with because it doesn't flow as nicely or at as low a temperature as its lead - based equivalent. it's nothing that will stop you from successfully soldering something, and in general lead - free and lead - based solders are pretty interchangeable to the hobbyist. tutorials there are plenty of soldering videos on youtube, just plugging in " soldering " to the search should turn up plenty. nasa has some old instructional videos that are great, because they deal with a lot of through - hole components. some of these are relevant because they discuss the techniques and how the solder types relate. 
in general, if you got it at the electronics hobby shop, it's good to use for hobbyist purposes.
a common error i think is to use greedy algorithms, which is not always the correct approach, but might work in most test cases. example : coin denominations, $ d _ 1, \ dots, d _ k $ and a number $ n $, express $ n $ as a sum of $ d _ i $ : s with as few coins as possible. a naive approach is to use the largest possible coin first, and greedily produce such a sum. for instance, the coins with value $ 6 $, $ 5 $ and $ 1 $ will give correct answers with greedy for all numbers between $ 1 $ and $ 14 $ except for the number $ 10 = 6 + 1 + 1 + 1 + 1 = 5 + 5 $.
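the coin example is easy to verify in a few lines of python (my sketch): greedy against a dynamic-programming optimum for the denominations $6$, $5$ and $1$.

```python
coins = (6, 5, 1)

def greedy(n):
    """Largest-coin-first; not always optimal."""
    count = 0
    for d in sorted(coins, reverse=True):
        count += n // d
        n %= d
    return count

def optimal(n):
    """Minimum number of coins by dynamic programming."""
    best = [0] + [None] * n
    for v in range(1, n + 1):
        best[v] = 1 + min(best[v - d] for d in coins if d <= v)
    return best[n]

# greedy matches the optimum for 1..14 except n = 10:
mismatches = [n for n in range(1, 15) if greedy(n) != optimal(n)]
print(mismatches)               # [10]
print(greedy(10), optimal(10))  # 5 coins (6+1+1+1+1) vs 2 coins (5+5)
```

a greedy submission would therefore pass every test case up to 14 except the one containing $10$, which is exactly why such bugs survive weak test sets.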
you're attempting to take a limit. $ $ x _ { n + 1 } = 1 - \ frac { 1 } { x _ n } $ $ this recurrence actually never converges, from any real starting point. indeed, $ $ x _ 2 = 1 - \ frac { 1 } { x _ 1 } ; \ \ x _ 3 = 1 - \ frac { 1 } { 1 - 1 / x _ 1 } = 1 - \ frac { x _ 1 } { x _ 1 - 1 } = \ frac { 1 } { 1 - x _ 1 } ; \ \ x _ 4 = x _ 1 $ $ so the sequence is periodic with period 3. therefore it converges if and only if it is constant ; but the only way it could be constant is, as you say, if $ x _ 1 $ is one of the two complex numbers you found. therefore, what you have is actually basically a proof by contradiction that the sequence doesn't converge when you consider it over the reals. however, you have found exactly the two values for which the iteration does converge ; that is their significance. alternatively viewed, the map $ $ z \ mapsto 1 - \ frac { 1 } { z } $ $ is a certain transformation of the complex plane, which has precisely two fixed points. you might find it an interesting exercise to work out what that map does to the complex plane, and examine in particular what it does to points on the real line.
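the period-3 behavior is easy to verify with exact rational arithmetic; here is a short python sketch (the starting point is chosen arbitrarily).

```python
from fractions import Fraction

# The map x -> 1 - 1/x applied three times returns to the start
# (for any real x whose iterates never hit 0).
def step(x):
    return 1 - 1 / x

x1 = Fraction(3, 7)   # arbitrary rational starting point
x2 = step(x1)
x3 = step(x2)
x4 = step(x3)

print(x1, x2, x3, x4)  # 3/7 -4/3 7/4 3/7
assert x4 == x1        # period 3, so no convergence over the reals
```

since every orbit has period 3, the sequence converges only if it is constant, i.e. only from the two complex fixed points.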
the reason is simple: chocolate contains cocoa which contains theobromine. the darker the chocolate is (meaning the more cocoa it contains) the more theobromine it contains. this is a bitter alkaloid which is toxic to dogs (and also cats), but can be tolerated by humans. the reason for this is that these animals metabolize theobromine much more slowly (there are reports of poisonings of dogs, cats, birds, rabbits and even bear cubs), so the toxic effect can occur. depending on the size of the dog, something between 50 and 400 g of milk chocolate can be fatal. as mentioned by @anongoodnurse, the cocoa content in milk chocolate is the lowest and is much higher the darker the chocolate gets. the poisoning comes from the theobromine itself, which has different mechanisms of action: first, it is an unselective antagonist of the adenosine receptor, which is a subclass of g-protein coupled receptors on the cell surface which usually bind adenosine as a ligand. this influences cellular signalling. then it is a competitive nonselective phosphodiesterase inhibitor, which prevents the breakdown of cyclic amp in the cell. camp is an important second messenger in the cell, playing an important role in the mediation of signals from the outside of the cell via receptors to a reaction of the cell to changing conditions. the levels of camp are tightly controlled and the half-life of the molecule is generally short. elevated levels lead to activation of protein kinase a, inhibition of tnf-alpha and leukotriene synthesis, and reduced inflammation and innate immunity. for references see here. the ld50 for theobromine is very different among species (table from here), with ld50 as the lethal dose killing 50% of the individuals and tdlo the lowest published toxic dose. the ld50 also differs between different breeds of dogs, so there are online calculators available to estimate whether there is a problem or not. you can find them for example here and here.
the selective toxicity makes it even an interesting poison for pest control of coyotes, see reference 4 for some details. references : chocolate - veterinary manual chocolate intoxication the poisonous chemistry of chocolate evaluation of cocoa - and coffee - derived methylxanthines as toxicants for the control of pest coyotes.
can i predict the products of any chemical reaction? in theory, yes! every substance has characteristic reactivity behavior. likewise pairs and sets of substances have characteristic behavior. for example, the following combinations of substances only have one likely outcome each : $ $ \ ce { hcl + naoh - > nacl + h2o } \ \ [ 2ex ] \ ce { ch3ch2ch2oh - > [ $ 1. $ ( cocl ) 2, ( ch3 ) 2so ] [ $ 2. $ et3n ] ch3ch2cho } $ $ however, it is not a problem suited to brute force or exhaustive approaches there are millions or perhaps billions of known or possible substances. let's take the lower estimate of 1 million substances. there are $ 999 \, 999 \, 000 \, 000 $ possible pairwise combinations. any brute force method ( in other words a database that has an answer for all possible combinations ) would be large and potentially resource prohibitive. likewise you would not want to memorize the nearly 1 trillion combinations. if more substances are given, the combination space gets bigger. in the second example reaction above, there are four substances combined : $ \ ce { ch3ch2ch2oh } $, $ \ ce { ( cocl ) 2 } $, $ \ ce { ( ch3 ) 2so } $, and $ \ ce { et3n } $. pulling four substances at random from the substance space generates a reaction space on the order of $ 1 \ times 10 ^ { 24 } $ possible combinations. and that does not factor in order of addition. in the second reaction above, there is an implied order of addition : $ \ ce { ch3ch2ch2oh } $ $ \ ce { ( cocl ) 2 } $, $ \ ce { ( ch3 ) 2so } $ $ \ ce { et3n } $ however, there are $ 4! = 24 $ different orders of addition for four substances, some of which might not generate the same result. our reaction space is up to $ 24 \ times 10 ^ { 24 } $, a bewildering number of combinations. and this space does not include other variables, like time, temperature, irradiation, agitation, concentration, pressure, control of environment, etc. 
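the counting above is easy to reproduce; a quick python sketch of the pairwise and four-component figures, using the same one-million-substance estimate (my reading of the "order of $10^{24}$" figure is ordered draws of four distinct substances, i.e. sets times the $4!$ orders of addition).

```python
from math import comb

substances = 10**6  # the answer's lower estimate of known substances

# ordered pairs of two distinct substances
pairs = substances * (substances - 1)
print(pairs)  # 999999000000, the figure quoted above

# four distinct substances: unordered sets, and orders of addition per set
four_sets = comb(substances, 4)
orders = 24                         # 4! orders of addition
print(f"{four_sets:.1e}")           # about 4.2e+22 unordered sets
print(f"{four_sets * orders:.1e}")  # about 1.0e+24 ordered sequences
```

even before folding in time, temperature and the other variables, the space is already far beyond anything enumerable.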
if each reaction in the space could somehow be stored in as little as 100 bytes of memory, then the whole space of combinations up to 4 substances would require $2
.4 \times 10^{27}$ bytes of data, or $2.4 \times 10^6$ zb (zettabytes) or $2.4 \times 10^3$ trillion terabytes. the total digital data generated by the human species was estimated recently (nov. 2015) to be 4.4 zb. we need $5.5 \times 10^5$ times more data in the world to hold such a database. and that does not even count the program written to search it, the humans needed to populate it, the bandwidth required to access it, or the time investment of any of these steps. in practice, it can be manageable! even though the reaction space is bewilderingly huge, chemistry is an orderly, predictable business. folks in the natural product total synthesis world do not resort to random combinations and alchemical mumbo jumbo. they can predict with some certainty what type of reactions do what to which substances and then act on that prediction. when we learn chemistry, we are taught to recognize if a molecule belongs to a certain class with characteristic behavior. in the first example above, we can identify $\ce{HCl}$ as an acid and $\ce{NaOH}$ as a base, and then predict an outcome that is common to all acid-base reactions. in the second example above, we are taught to recognize $\ce{CH3CH2CH2OH}$ as a primary alcohol and the reagents given as an oxidant. the outcome is an aldehyde. these examples are simple ones in which the molecules easily fit into one class predominantly. more complex molecules may belong to many categories. organic chemistry calls these categories β€œfunctional groups”. the ability to predict synthetic outcomes then begins and ends with identifying functional groups within a compound's structure. for example, even though the following compound has a more complex structure, it contains a primary alcohol, which will be oxidized to an aldehyde using the same reagents presented above. we can also be reasonably confident that no unpleasant side reactions will occur.
if the reagents in the previous reaction had been $\ce{LiAlH4}$ followed by $\ce{H3O+}$, then more than one outcome is possible, since more than one functional group in the starting compound will react. controlling the reaction to give one of the possible outcomes is possible, but requires further careful thought. there are rules, but they are not few in number. there are too many classes of compounds to list here. likewise, even one class, like primary alcohols (a hydroxyl group at the end of a hydrocarbon chain), has too many characteristic reactions to list here. if there are 30 classes of compounds (an underestimate) and 30 types of reactions (an underestimate), then there are 900 reaction types (an underestimate). the number of viable reaction types is more manageable than the total reaction space, but would still be difficult to commit to memory quickly. and new reaction types are being discovered all the time. folks who learn how to analyze combinations of compounds spend years taking courses and reading books and research articles to accumulate the necessary knowledge and wisdom. it can be done. computer programs can be (and have been) designed to do the same analysis, but they were designed by people who learned all of the characteristic combinations. there is no shortcut.
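the scale of the estimates above is easy to reproduce as a back-of-envelope calculation. this is only an illustrative sketch: the catalog size below is an assumption (roughly the order of known compounds in large registries), not the figure the original estimate was based on.

```python
import math

# illustrative back-of-envelope for the combinatorial explosion above.
# CATALOG is an assumed number of known substances; the original answer's
# exact inputs are not given here.
CATALOG = 10**8
BYTES_PER_REACTION = 100 * 1024  # "as little as 100 kb" per stored reaction

# ways to pick 2, 3, or 4 substances from the catalog
combos = sum(math.comb(CATALOG, k) for k in range(2, 5))
total_bytes = combos * BYTES_PER_REACTION

# the class-based estimate from the text is far smaller:
reaction_types = 30 * 30  # 30 classes x 30 reaction types = 900

print(f"{combos:.3e} combinations, {total_bytes:.3e} bytes, {reaction_types} types")
```

with these assumed inputs the storage estimate lands in the same "more bytes than humanity has ever produced" regime described above, which is the point of the exercise.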
https://api.stackexchange.com
the correct answer is: because the ethernet specification requires it. although you didn't ask, others may wonder why this method of connection was chosen for that type of ethernet. keep in mind that this applies only to the point-to-point ethernet varieties, like 10base-t and 100base-t, not to the original ethernet or to thinlan ethernet. the problem is that ethernet can support fairly long runs, such that equipment on different ends can be powered from distant branches of the power distribution network within a building, or even in different buildings. this means there can be significant ground offset between ethernet nodes. this is a problem with ground-referenced communication schemes, like rs-232. there are several ways of dealing with ground offsets in communication lines, the two most common being opto-isolation and transformer coupling. transformer coupling was the right choice for ethernet, given the tradeoffs between the methods and what ethernet was trying to accomplish. even the earliest version of ethernet that used transformer coupling runs at 10 mbit/s. this means, at the very least, the overall channel has to support 10 mhz digital signals, although in practice, with the encoding scheme used, it actually needs twice that. even a 10 mhz square wave has levels lasting only 50 ns. that is very fast for opto-couplers. there are light-transmission means that go much, much faster than that, but they are not cheap or simple at each end the way ethernet pulse transformers are. one disadvantage of transformer coupling is that dc is lost. that's actually not that hard to deal with: you make sure all information is carried by modulation fast enough to make it through the transformers. if you look at the ethernet signalling, you will see how this was considered. there are nice advantages to transformers too, like very good common-mode rejection. a transformer only "sees" the voltage across its windings, not the common voltage both ends of the winding are driven to simultaneously. you get a differential front end without a deliberate circuit, just basic physics. once transformer coupling was decided on, it was easy to specify a high isolation voltage without creating much of a burden. making a transformer that insulates the primary from the secondary by a few hundred volts pretty much happens unless you try not to. making it good to 1000 v isn't much harder or much more expensive. given that, ethernet can be used to communicate between two nodes actively driven to significantly different voltages, not just to deal with a few volts of ground offset. for example, it is perfectly fine and within the standard to have one node riding on a power-line phase with the other referenced to neutral.
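the timing numbers in the argument above are simple arithmetic and worth checking. a minimal sketch (the manchester-encoding doubling is stated in the answer; the rest is just unit conversion):

```python
# timing arithmetic behind the transformer-vs-optocoupler argument.
bit_rate = 10e6              # 10 mbit/s ethernet
# the encoding scheme (manchester) can require up to two line
# transitions per bit, hence "twice that" in the text
worst_case_line_rate = 2 * bit_rate  # 20e6 transitions per second

square_wave_hz = 10e6        # a 10 mhz square wave...
level_time_s = 1 / (2 * square_wave_hz)  # ...holds each level half a period

print(worst_case_line_rate, level_time_s * 1e9)  # 20000000.0, 50.0 (ns)
```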
https://api.stackexchange.com
adapted from an answer to a different question (as mentioned in a comment) in the hope that this question will not get thrown up repeatedly by community wiki as one of the top questions.... there is no "flipping" of the impulse response by a linear (time-invariant) system. the output of a linear time-invariant system is the sum of scaled and time-delayed versions of the impulse response, not the "flipped" impulse response. we break down the input signal $x$ into a sum of scaled unit pulse signals. the system response to the unit pulse signal $\cdots, ~0, ~0, ~1, ~0, ~0, \cdots$ is the impulse response or pulse response $$h[0], ~h[1], \cdots, ~h[n], \cdots$$ and so by the scaling property the single input value $x[0]$, or, if you prefer, $$x[0](\cdots, ~0, ~0, ~1, ~0, ~0, \cdots) = \cdots, ~0, ~0, ~x[0], ~0, ~0, \cdots$$ creates a response $$x[0]h[0], ~~x[0]h[1], \cdots, ~~x[0]h[n], \cdots$$ similarly, the single input value $x[1]$ or $$x[1](\cdots, ~0, ~0, ~0, ~1, ~0, \cdots) = \cdots, ~0, ~0, ~0, ~x[1], ~0, \cdots$$ creates a response $$0, ~x[1]h[0], ~~x[1]h[1], \cdots, ~~x[1]h[n-1], ~x[1]h[n], \cdots$$ notice the delay in the response to $x[1]$. we can continue further in this vein, but it is best to switch to a more tabular form and show the various outputs aligned properly in time. we have $$\begin{array}{l|l|l|l|l|l|l|l} \text{time} \to & 0 & 1 & 2 & \cdots & n & n+1 & \cdots \\ \hline x[0] & x[0]h[0] & x[0]h[1] & x[0]h[2] & \cdots & x[0]h[n] & x[0]h[n+1] & \cdots \\ \hline x[1] & 0 & x[1]h[0] & x[1]h[1] & \cdots & x[1]h[n-1] & x[1]h[n] & \cdots \\ \hline x[2] & 0 & 0 & x[2]h[0] & \cdots & x[2]h[n-2] & x[2]h[n-1] & \cdots \\ \hline \vdots & \vdots & \vdots & \vdots & \ddots \\ \hline x[m] & 0 & 0 & 0 & \cdots & x[m]h[n-m] & x[m]h[n-m+1] & \cdots \\ \hline \vdots & \vdots & \vdots & \vdots & \ddots \end{array}$$ the rows in the above array are precisely the scaled and delayed versions of the impulse response that add up to the response $y$ to input signal $x$. but if you ask a more specific question, such as: what is the output at time $n$? then you can get the answer by summing the $n$-th column to get $$\begin{align*} y[n] &= x[0]h[n] + x[1]h[n-1] + x[2]h[n-2] + \cdots + x[m]h[n-m] + \cdots \\ &= \sum_{m=0}^{\infty} x[m]h[n-m], \end{align*}$$ the beloved convolution formula that befuddles generations of students because the impulse response seems to be "flipped over" or running backwards in time. but what people seem to forget is that instead we could have written $$\begin{align*} y[n] &= x[n]h[0] + x[n-1]h[1] + x[n-2]h[2] + \cdots + x[0]h[n] + \cdots \\ &= \sum_{m=0}^{\infty} x[n-m]h[m], \end{align*}$$ so that it is the input that seems "flipped over" or running backwards in time! in other words, it is human beings who flip the impulse response (or the input) over when computing the response at time $n$ using the convolution formula, but the system itself does nothing of the sort.
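the two summation orders above are just a change of variable, so they must give identical outputs. a minimal numeric check with illustrative finite-length signals:

```python
# check that "flipping h" and "flipping x" give the same convolution output.
def conv_flip_h(x, h, n):
    # y[n] = sum_m x[m] h[n-m]
    return sum(x[m] * h[n - m]
               for m in range(len(x)) if 0 <= n - m < len(h))

def conv_flip_x(x, h, n):
    # y[n] = sum_m x[n-m] h[m]
    return sum(x[n - m] * h[m]
               for m in range(len(h)) if 0 <= n - m < len(x))

x = [1, 2, 3]         # illustrative input signal
h = [4, 5, 6, 7]      # illustrative impulse response
y1 = [conv_flip_h(x, h, n) for n in range(len(x) + len(h) - 1)]
y2 = [conv_flip_x(x, h, n) for n in range(len(x) + len(h) - 1)]
print(y1)  # [4, 13, 28, 34, 32, 21], identical either way
assert y1 == y2
```

neither function "runs the system backwards"; the index `n - m` is just bookkeeping for which delayed copy of the impulse response contributes at time `n`.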
https://api.stackexchange.com
historical perspective: it is really impossible to say what the new paradigms will look like in the future. for a good historical perspective i suggest reading ken kennedy's rise and fall of hpf. kennedy gives an account of two emerging patterns, mpi versus a smart compiler, and details how mpi had the right amount of early adopters and flexibility to dominate. hpf eventually fixed its problems, but it was too late. in many ways, several paradigms, such as pgas and openmp, are following that same hpf trend. the early codes have not been flexible enough to use well and leave a lot of performance on the table. but the promise of not having to write every iota of the parallel algorithm is an attractive goal, so new models are always being pursued. clear trends in hardware: the success of mpi has often been attributed to how closely it models the hardware it runs on. roughly, each node has a small number of processes, and passing messages point-to-point locally or through coordinated collective operations is easily done in the cluster space. because of this, i don't trust anyone who proposes a paradigm that doesn't follow new hardware trends closely; i was convinced of this opinion by the work of vivek sarkar. in keeping with that, here are three trends that are clearly making headway in new architectures. and let me be clear: there are now twelve different architectures being marketed in hpc, up from only x86 less than 5 years ago, so the coming days will see lots of opportunities for using hardware in different and interesting ways. special-purpose chips: think large vector units like accelerators (view espoused by bill dally of nvidia). low-power chips: arm-based clusters (to accommodate power budgets). tiling of chips: think tiling of chips with different specifications (work of anant agarwal). current models: the current model is actually 3 levels deep. while there are many codes using two of these levels well, not many have emerged using all three. i believe that to get to exascale one first needs to determine whether your code can run at all three levels. this is probably the safest path for iterating well with the current trends. let me go through the models and how they will need to change given the predicted new hardware. distributed: the players at the distributed level largely fall into mpi and pgas languages. mpi is a clear winner right now, but pgas languages such as upc and chapel are making headway into the space. one good indication is the hpc benchmark challenge: pgas languages give very elegant implementations of the benchmarks. the most interesting point here is that while this model currently only works at the node level, it will be an important model inside a node for tiled architectures. one indication is the intel scc chip, which fundamentally acted like a distributed system. the scc team created their own mpi implementation, and many teams were successful at porting community libraries to this architecture. but to be honest, pgas really has a good story for stepping into this space. do you really want to program mpi internode and then have to do the same trick intranode? one big deal with these tiled architectures is that they will have different clock speeds on the chips and major differences in bandwidth to memory, so performant codes must take this into account. on-node shared memory: here we see mpi often being "good enough", but pthreads (and libraries deriving from pthreads, such as intel parallel building blocks) and openmp are still used often. the common view is that there will be a time when there are enough shared-memory threads that mpi's socket model will break down for rpc, or you will need a lighter-weight process running on the core. already you can see indications of ibm bluegene systems having problems with shared-memory mpi. as matt comments, the largest performance boost for compute-intensive codes is vectorization of the serial code. while many people assume this is only true of accelerators, it is critical for on-node machines as well. i believe westmere has a 4-wide fpu, so one can only get a quarter of the flops without vectorization. while i don't see current openmp stepping into this space well, there is a place for low-power or tiled chips to use more lightweight threads.
openmp has difficulty describing how the data flow works, and as more threads are used i only see this trend becoming more exaggerated; just look at the examples of what one has to do to get proper prefetching with openmp. both openmp and pthreads, at a coarse enough level, can take advantage of the vectorization necessary to get a good percentage of peak, but doing so requires breaking down your algorithms in a way that makes vectorization natural. co-processor: finally, the co-processor (gpu, mic, cell accelerators) has taken hold. it is becoming clear that no path to exascale will be complete without them. at sc11, every bell prize contestant used them very effectively to get to the low petaflops. while cuda and opencl currently dominate the market, i have hopes for openacc and pgas compilers entering the space. one proposal for getting to exascale is to couple low-power chips to lots of co-processors. this will pretty well kill off the middle layer of the current stack and use codes that manage decision problems on the main chip and shuffle work off to the co-processors. this means that for code to work effectively, a person must rethink their algorithms in terms of kernels (or codelets), that is, branchless, instruction-level-parallel snippets. as far as i know, a solution to this evolution is pretty wide open. how this affects the app developer: now to get to your question. if you want to protect yourself from the oncoming complexities of exascale machines, you should do a few things: develop your algorithms to fit at least three levels of parallel hierarchy; design your algorithms in terms of kernels that can be moved between levels of the hierarchy; relax your need for any sequential processes, since all of these effects will happen asynchronously (synchronous execution is just not possible). if you want to be performant today, mpi + cuda/opencl is good enough, but upc is getting there, so it's not a bad idea to take a few days and learn it. openmp gets you started but leads to problems once the code needs to be refactored. pthreads requires completely rewriting your code to its style. that makes mpi + cuda/opencl the current best model. what is not discussed here: while all this talk of exascale is nice, something not really discussed here is getting data onto and off of the machines. while there have been many advances in memory systems, we don't see them in commodity clusters (just too darned expensive).
now that data-intensive computing is becoming a large focus of all the supercomputing conferences, there is bound to be a bigger movement into the high-memory-bandwidth space. this brings up the other trend that might happen (if the right funding agencies get involved): machines are going to become more and more specialized for the type of computing required. we already see "data-intensive" machines being funded by the nsf, but these machines are on a different track than the 2019 exascale grand challenge. this became longer than expected; ask for references where you need them in the comments.
https://api.stackexchange.com
this is really a footnote to the accepted answer. light cannot escape from an event horizon. but how can you check that light can never escape? you can watch the surface for some time $t$, but all you have proved is that light can't escape in the time $t$. this is what we mean by an apparent horizon, i.e. a surface from which light can't escape within a time $t$. to prove the surface really was an event horizon you would have to watch it for an infinite time. the problem is that hawking radiation means no event horizon can exist for an infinite time. the conclusion is that only apparent horizons can exist, though the time $t$ associated with them can be exceedingly long, e.g. many times longer than the current age of the universe. a point worth mentioning because it's easy to overlook: when you start learning about black holes you'll start with a solution to einstein's equations called the schwarzschild metric, and this has a true horizon. however, the schwarzschild metric is time-independent, so it would only describe a real black hole if that black hole had existed for an infinite time and would continue to exist for an infinite time. neither of these is possible in the real universe. so the schwarzschild metric is only an approximate description of a real black hole, though we expect it to be a very good approximation.
https://api.stackexchange.com
$\text{error} = \text{bias}^2 + \text{variance}$ (for squared error). boosting is based on weak learners (high bias, low variance). in terms of decision trees, weak learners are shallow trees, sometimes even as small as decision stumps (trees with two leaves). boosting reduces error mainly by reducing bias (and also, to some extent, variance, by aggregating the output from many models). on the other hand, random forest uses, as you said, fully grown decision trees (low bias, high variance). it tackles the error-reduction task in the opposite way: by reducing variance. the trees are made uncorrelated to maximize the decrease in variance, but the algorithm cannot reduce bias (which is slightly higher than the bias of an individual tree in the forest). hence the need for large, unpruned trees, so that the bias is initially as low as possible. please note that, unlike boosting (which is sequential), rf grows trees in parallel. the term iterative that you used is thus inappropriate.
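the squared-error decomposition this answer leans on can be demonstrated with a toy simulation, using only the standard library. this is an illustrative sketch, not either ensemble method: it just shows that averaging more independent draws (the variance-reduction mechanism random forest relies on) shrinks variance while leaving bias alone, and that mse splits exactly into bias² + variance.

```python
import random
import statistics

random.seed(0)
true_mean = 5.0

def estimate(n):
    # mean of n noisy draws: an unbiased estimator whose variance
    # shrinks roughly as 1/n as we average more draws
    return statistics.fmean(random.gauss(true_mean, 2.0) for _ in range(n))

for n in (1, 25):
    ests = [estimate(n) for _ in range(4000)]
    bias = statistics.fmean(ests) - true_mean
    var = statistics.pvariance(ests)
    mse = statistics.fmean((e - true_mean) ** 2 for e in ests)
    # mse and bias**2 + var agree (the decomposition is an exact identity)
    print(n, round(mse, 3), round(bias**2 + var, 3))
```

averaging (n = 25) cuts the variance term dramatically; a bias term, by contrast, would survive no matter how many draws were averaged, which is why random forest needs low-bias trees to start from.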
https://api.stackexchange.com
i can tell you why i don't believe in it. i think my reasons are different from most physicists' reasons, however. regular quantum mechanics implies the existence of quantum computation. if you believe in the difficulty of factoring (and of a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of the following. there is a classical polynomial-time algorithm for factoring and the other problems which can be solved on a quantum computer. the deterministic underpinnings of quantum mechanics require $2^n$ resources for a system of size $o(n)$. quantum computation doesn't actually work in practice. none of these seems at all likely to me. for the first, it is quite conceivable that there is a polynomial-time algorithm for factoring, but quantum computation can solve lots of similar periodicity problems, and you can argue that there can't be a single algorithm that solves all of them on a classical computer, so you would have to have a different classical algorithm for each classical problem that a quantum computer can solve by period finding. for the second, deterministic underpinnings of quantum mechanics that require $2^n$ resources for a system of size $o(n)$ are really unsatisfactory (but maybe quite possible... after all, the theory that the universe is a simulation on a classical computer falls into this class of theories, and while truly unsatisfactory, can't be ruled out by this argument). for the third, i haven't seen any reasonable way to make quantum computation impossible while still maintaining consistency with current experimental results.
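to make "period finding" concrete: the core subroutine in shor's factoring algorithm finds the multiplicative order of $a$ modulo $n$. classically the obvious approach is brute-force search, sketched below; a quantum computer finds the same period in polynomial time, which is the asymmetry the answer is pointing at.

```python
# brute-force classical period finding: the smallest r > 0 with
# a^r = 1 (mod n), for a coprime to n. worst case this walks through
# order-of-n values, i.e. exponentially many in the bit length of n.
def multiplicative_order(a, n):
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

print(multiplicative_order(2, 15))  # 4, since 2^4 = 16 = 1 (mod 15)
```

finding the order of 2 mod 15 is how one factors 15 with shor's algorithm: from the even period 4, gcd(2² ± 1, 15) yields the factors 3 and 5.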
https://api.stackexchange.com
i believe this can also be solved using double integrals. it is possible (if i remember correctly) to justify switching the order of integration to give the equality: $$\int_{0}^{\infty} \bigg( \int_{0}^{\infty} e^{-xy} \sin x \, dy \bigg) \, dx = \int_{0}^{\infty} \bigg( \int_{0}^{\infty} e^{-xy} \sin x \, dx \bigg) \, dy$$ notice that $$\int_{0}^{\infty} e^{-xy} \sin x \, dy = \frac{\sin x}{x}$$ this leads us to $$\int_{0}^{\infty} \frac{\sin x}{x} \, dx = \int_{0}^{\infty} \bigg( \int_{0}^{\infty} e^{-xy} \sin x \, dx \bigg) \, dy$$ now the right-hand side can be found easily, using integration by parts. $$\begin{align*} i &= \int e^{-xy} \sin x \, dx = -e^{-xy} \cos x - y \int e^{-xy} \cos x \, dx \\ &= -e^{-xy} \cos x - y \big( e^{-xy} \sin x + y \int e^{-xy} \sin x \, dx \big) \\ &= \frac{-y e^{-xy} \sin x - e^{-xy} \cos x}{1 + y^2}. \end{align*}$$ thus $$\int_{0}^{\infty} e^{-xy} \sin x \, dx = \frac{1}{1 + y^2}$$ thus $$\int_{0}^{\infty} \frac{\sin x}{x} \, dx = \int_{0}^{\infty} \frac{1}{1 + y^2} \, dy = \frac{\pi}{2}.$$
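the final step is easy to sanity-check numerically: $\int_0^Y \frac{1}{1+y^2}\,dy = \arctan Y$, which tends to $\pi/2$ as $Y$ grows. a crude midpoint-rule sketch, stdlib only:

```python
import math

# midpoint-rule approximation of the integral of 1/(1+y^2) on [0, upper]
def integral_inv_1py2(upper, steps=200_000):
    h = upper / steps
    return h * sum(1.0 / (1.0 + (h * (i + 0.5)) ** 2) for i in range(steps))

approx = integral_inv_2 = integral_inv_1py2(1000.0)
# approx matches atan(1000); it sits about 0.001 below pi/2, the
# truncated tail of the improper integral
print(approx, math.atan(1000.0), math.pi / 2)
```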
https://api.stackexchange.com
all three are so-called "meta-algorithms": approaches to combine several machine learning techniques into one predictive model in order to decrease the variance (bagging), decrease the bias (boosting), or improve the predictive force (stacking, alias ensembling). every algorithm consists of two steps: producing a distribution of simple ml models on subsets of the original data, then combining the distribution into one "aggregated" model. here is a short description of all three methods: bagging (stands for bootstrap aggregating) is a way to decrease the variance of your prediction by generating additional data for training from your original dataset, using combinations with repetitions to produce multisets of the same cardinality/size as your original data. by increasing the size of your training set this way you can't improve the model's predictive force, but you can decrease the variance, narrowly tuning the prediction to the expected outcome. boosting is a two-step approach, where one first uses subsets of the original data to produce a series of averagely performing models and then "boosts" their performance by combining them together using a particular cost function (= majority vote). unlike bagging, in classical boosting the subset creation is not random and depends upon the performance of the previous models: every new subset contains the elements that were (likely to be) misclassified by previous models. stacking is similar to boosting: you also apply several models to your original data. the difference here, however, is that you don't have just an empirical formula for your weight function; rather, you introduce a meta-level and use another model/approach to estimate the input together with the outputs of every model, so as to estimate the weights or, in other words, to determine which models perform well and which badly given these input data. here is a comparison table: as you see, these are all different approaches to combine several models into a better one, and there is no single winner: everything depends upon your domain and what you're going to do. you can still treat stacking as a sort of more advanced boosting; however, the difficulty of finding a good approach for your meta-level makes it difficult to apply this approach in practice. short examples of each: bagging: ozone data. boosting: is used to improve optical character recognition (ocr) accuracy. stacking: is used in classification of cancer microarrays in medicine.
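the bootstrap-resampling step of bagging is simple enough to sketch in a few lines. this is only an illustration with the simplest possible "model" (the resample mean standing in for a fitted learner), not a full bagged classifier:

```python
import random
import statistics

random.seed(1)
data = [random.gauss(10.0, 3.0) for _ in range(200)]  # illustrative dataset

def bootstrap_estimates(data, n_models=500):
    ests = []
    for _ in range(n_models):
        # sampling with replacement at the original cardinality:
        # the "multiset of the same size" described above
        resample = random.choices(data, k=len(data))
        ests.append(statistics.fmean(resample))  # the per-resample "model"
    return ests

ests = bootstrap_estimates(data)
bagged = statistics.fmean(ests)  # the aggregated model
# individual resample estimates scatter; their aggregate is far steadier
print(round(bagged, 2), round(statistics.pstdev(ests), 2))
```

the spread of the per-resample estimates is what bagging averages away; the aggregate tracks the original sample's estimate closely while being much less sensitive to any single resample.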
https://api.stackexchange.com
essentially, os is slightly more efficient since it does not require the addition of the overlapping transients. however, you may want to use oa if you need to reuse the ffts with zero-padding rather than repeated samples. here is a quick overview from an article i wrote a while ago. fast convolution refers to the blockwise use of circular convolution to accomplish linear convolution. fast convolution can be accomplished by oa or os methods. os is also known as "overlap-scrap". in oa filtering, each signal data block contains only as many samples as allows circular convolution to be equivalent to linear convolution. the signal data block is zero-padded prior to the fft to prevent the filter impulse response from "wrapping around" the end of the sequence. oa filtering adds the input-on transient from one block to the input-off transient from the previous block. in os filtering, shown in figure 1, no zero-padding is performed on the input data, so the circular convolution is not equivalent to linear convolution. the portions that "wrap around" are useless and discarded. to compensate for this, the last part of the previous input block is used as the beginning of the next block. os requires no addition of transients, making it faster than oa.
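the overlap-add structure is easy to see even without the fft. in this sketch a direct linear convolution stands in for the fft-based circular convolution of each zero-padded block; what it shows is exactly the transient addition that os avoids: each block's output runs past the block boundary, and those tails are summed into the next block's output.

```python
# direct (slow) linear convolution, standing in for the per-block fft step
def direct_conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# overlap-add: filter each block independently, then add the
# overlapping output transients at the block boundaries
def overlap_add(x, h, block=4):
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = direct_conv(x[start:start + block], h)
        for k, v in enumerate(seg):
            y[start + k] += v  # the tail of one block overlaps the next
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]  # illustrative signal
h = [1.0, -1.0, 0.5]                      # illustrative filter
assert overlap_add(x, h) == direct_conv(x, h)
```

replacing `direct_conv` with an fft of each zero-padded block gives the real oa method; os instead feeds overlapped *input* blocks to the fft and discards the wrapped-around output samples, so no addition is needed.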
https://api.stackexchange.com
i think possibly the problem here is the way you're approaching the issue. you're considering improvement as anything that increases the abilities or complexity of the organism, but that isn't necessarily what an improvement is. the outcome of natural selection is that the organism best equipped to survive and reproduce in a certain environment is the most successful. so, for example, thermophilic archaea do much better in 60 °c-plus pools of water than humans do. our capacity to process information, use tools, etc. doesn't actually confer much advantage in that situation. and there can be downsides to that kind of complexity as well, such as requiring more energy and longer developmental periods. so, natural selection in 60 °c-plus pools of water gives you archaea, and in (presumably) the plains of east africa, it gives you humans. the comment you quote mentions sickle-cell anaemia, which is a different example. while there is little benefit to having the sickle-cell anaemia allele in a temperate region, in those regions where malaria is endemic, heterozygosity can provide a survival advantage, and so the allele is maintained in the population. if you're someone living in a malaria-endemic region, and you don't have access to antimalarials, heterozygosity for the sickle-cell anaemia allele is arguably an improvement. it depends entirely on how you define the word. the fundamental principle of natural selection is that it favours the organism most suited to a particular environment. but that isn't always the most complex organism. it's important not to confuse human-like with better. it isn't the universal endpoint of evolution to produce an organism similar to us, just the organism most suited to the environment in question. also, to briefly address the previous question you asked: you asserted that we must be missing something from the process of evolution because we were unable to simulate it. you also pointed out that (in your opinion) we have sufficient computing power to simulate the kinds of organisms you're referring to. but natural selection is intrinsically linked to the environment it occurs in, so the simulation wouldn't just have to accurately simulate the biological processes of the organism, but also all of the external pressures the organism faces. i'd imagine that, in simulating evolution, that would be the real obstacle.
https://api.stackexchange.com
gravitational waves are qualitatively different from other detections. as much as we have tested gr before, it's still reassuring to find a completely different test that works just as well. the most notable tests so far have been the shifting of mercury's orbit, the correct deflection of light by massive objects, and the redshifting of light moving against gravity. in these cases, spacetime is taken to be static (unchanging in time, with no time-space cross terms in the metric). gravitational waves, on the other hand, involve a time-varying spacetime. gravitational waves provide a probe of strong-field gravity. the tests so far have all been done in weak situations, where you have to measure things pretty closely to see the difference between gr and newtonian gravity. while gravitational waves themselves are a prediction of linearized gravity and are the very essence of small perturbations, their sources are going to be very extreme environments: merging black holes, exploding stars, etc. now, a lot of things can go wrong between our models of these extreme phenomena and our recording of a gravitational wave signal, but if the signal agrees with our predictions, that's a sign that not only are we right about the waves themselves, but also about the sources. gravitational waves are a new frontier in astrophysics. this point is often forgotten when we get so distracted with just finding any signal. finding the first gravitational waves is only the beginning for astronomical observations. with just two detectors, ligo for instance cannot pinpoint sources on the sky any better than "somewhere out there, roughly." eventually, as more detectors come online, the hope is to be able to localize signals better, so we can simultaneously observe electromagnetic counterparts. that is, if the event causing the waves is the merger of two neutron stars, one might expect plenty of light to be released as well. by combining both types of information, we can gain quite a bit more knowledge about the system. gravitational waves are also good at probing the physics of the innermost, most obscured regions in cataclysmic events. for most explosions in space, all we see now is the afterglow (the hot, radioactive shell of material left behind), and we can only infer indirectly what processes were happening at the core. gravitational waves provide a new way to gain insight in this respect.
https://api.stackexchange.com
sometimes, especially in introductory courses, the instructor will try to keep things "focused" in order to promote learning. still, it's unfortunate that the instructor couldn't respond in a more positive and stimulating way to your question. these reactions do occur at $\ce{sp^2}$-hybridized carbon atoms; they are often just energetically more costly, and therefore somewhat less common. consider: when a nucleophile reacts with a carbonyl compound, the nucleophile attacks the carbonyl carbon atom in an $\ce{S_N2}$ manner. the electrons in the c-o $\pi$-bond can be considered the leaving group, and a tetrahedral intermediate is formed with a negative charge on oxygen. it is harder to do this with a carbon-carbon double bond (energetically more costly) because you would wind up with a negative charge on carbon (instead of oxygen), which is energetically less desirable (because of the relative electronegativities of carbon and oxygen). if you look at the michael addition reaction, the 1,4-addition of a nucleophile to the carbon-carbon double bond in an $\alpha,\beta$-unsaturated carbonyl system, this could be viewed as an $\ce{S_N2}$ attack on a carbon-carbon double bond, but again, it is favored (lower in energy) because you create an intermediate with a negative charge on oxygen. $\ce{S_N1}$ reactions at $\ce{sp^2}$ carbon are well documented. solvolysis of vinyl halides in very acidic media is an example. the resultant vinylic carbocations are actually stable enough to be observed using nmr spectroscopy. the picture below helps explain why this reaction is so much more difficult (energetically more costly) than the more common solvolysis of an alkyl halide. in the solvolysis of the alkyl halide, we produce a traditional carbocation with an empty p orbital. in the solvolysis of the vinyl halide, we produce a carbocation with the positive charge residing in an $\ce{sp^2}$ orbital. placing positive charge in an $\ce{sp^2}$ orbital is a higher-energy situation compared to placing it in a p orbital (electrons prefer to be in orbitals with higher s density; it stabilizes them because the more s character an orbital has, the lower its energy; conversely, in the absence of electrons, an orbital prefers to have high p character and mix the remaining s character into other bonding orbitals that do contain electrons, in order to lower their energy).
https://api.stackexchange.com
A schematic is a visual representation of a circuit. As such, its purpose is to communicate a circuit to someone else. A schematic in a special computer program for that purpose is also a machine-readable description of the circuit. This use is easy to judge in absolute terms: either the proper formal rules for describing the circuit are followed and the circuit is correctly defined, or it isn't. Since there are hard rules for that and the result can be judged by machine, it isn't the point of the discussion here. This discussion is about rules, guidelines, and suggestions for good schematics for the first purpose: communicating a circuit to a human. Good and bad will be judged here in that context.

Since a schematic exists to communicate information, a good schematic does this quickly, clearly, and with a low chance of misunderstanding. It is necessary, but far from sufficient, for a schematic to be correct. If a schematic is likely to mislead a human observer, it is a bad schematic, whether or not you can eventually show that, after due deciphering, it was in fact correct. The point is clarity. A technically correct but obfuscated schematic is still a bad schematic. Some people have their own silly-ass opinions, but here are the rules (actually, you'll probably notice broad agreement among experienced people on most of the important points):

Use component designators

This is pretty much automatic with any schematic capture program, but we still often see schematics here without them. If you draw your schematic on a napkin and then scan it, make sure to add component designators. These make the circuit much easier to talk about. I have skipped over questions when the schematics didn't have component designators because I didn't feel like bothering with "the second 10 kΩ resistor from the left by the top pushbutton". It's a lot easier to say R1, R5, Q7, etc.
Clean up text placement

Schematic programs generally plunk down part names and values based on a generic part definition. This means they often end up in inconvenient places in the schematic when other parts are placed nearby. Fix it; that's part of the job of drawing a schematic. Some schematic capture programs make this easier than others. In Eagle, for example, there can unfortunately be only one symbol per part. Some parts
are commonly placed in different orientations: horizontal and vertical in the case of resistors, for example. Diodes can be placed in at least four orientations, since they have direction too. The placement of text around a part, like the component designator and value, probably won't work in orientations other than the one it was originally drawn in. If you rotate a stock part, move the text around afterward so that it is easily readable, clearly belongs to that part, and doesn't collide with other parts of the drawing. Vertical text looks stupid and makes the schematic hard to read.

I make separate redundant parts in Eagle that differ only in symbol orientation, and therefore in text placement. That's more work up front, but it makes things easier when drawing a schematic. However, it doesn't matter how you achieve a neat and clear end result, only that you do. There is no excuse. Sometimes we hear whines like "but CircuitBarf 0.1 doesn't let me do that." So get something that does. Besides, CircuitBarf 0.1 probably does let you do it; you were just too lazy to read the manual to learn how and too sloppy to care. Draw it (neatly!) on paper and scan it if you have to. Again, there is no excuse. For example, here are some parts at different orientations. Note how the text is in different places relative to the parts to make things neat and clear. Don't let this happen to you: yes, this is actually a small snippet of what someone dumped on us here.

Basic layout and flow

In general, it is good to put higher voltages toward the top, lower voltages toward the bottom, and logical flow left to right. That's clearly not possible all the time, but even a general effort to do this will greatly illuminate the circuit to those reading your schematic. One notable exception is feedback signals: by their very nature, they feed "back" from downstream to upstream, so they should be shown sending information against the main flow.
Power connections should go up to positive voltages and down to negative voltages. Don't do this: there wasn't room to show the line going down to ground because other stuff was already there. Move it. You made the mess; you can unmake it. There is always a way. Following these rules causes common subcircuits to be drawn
similarly most of the time. Once you get more experience looking at schematics, these will pop out at you, and you will appreciate this. If stuff is drawn every which way, these common circuits will look visually different every time, and it will take others longer to understand your schematic. What's this mess, for example? After some deciphering, you realize "Oh, it's a common emitter amplifier. Why didn't that #%&^$@#$% just draw it like one in the first place!?"

Draw pins according to function

Show the pins of ICs in positions relevant to their function, not how they happen to stick out of the chip. Try to put positive power pins at the top, negative power pins (usually grounds) at the bottom, inputs at left, and outputs at right. Note that this fits with the general schematic layout described above. Of course, this isn't always reasonable or possible. General-purpose parts like microcontrollers and FPGAs have pins that can be input or output depending on use, and can even vary at run time. At least you can put the dedicated power and ground pins at top and bottom, and possibly group together closely related pins with dedicated functions, like crystal driver connections.

ICs drawn with pins in physical pin order are difficult to understand. Some people use the excuse that this aids in debugging, but with a little thought you can see that's not true. When you want to look at something with a scope, which question is more common: "I want to look at the clock; what pin is that?" or "I want to look at pin 5; what function is that?" In some rare cases you might want to go around an IC and look at all the pins, but the first question is by far the more common. Physical-pin-order layouts obfuscate the circuit and make debugging more difficult. Don't do it.

Direct connections, within reason

Spend some time with placement, reducing wire crossings and the like. The recurring theme here is clarity.
Of course, drawing a direct connection line isn't always possible or reasonable. Obviously it can't be done across multiple sheets, and a messy rat's nest of wires is worse than a few carefully chosen "air wires". It is impossible to come up with a universal rule here, but if you constantly think of the mythical person looking over your shoulder trying to understand the circuit from
the schematic you are drawing, you'll probably do all right. You should be trying to help people understand the circuit easily, not make them figure it out despite the schematic.

Design for regular-size paper

The days of electrical engineers having drafting tables and being set up to work with D-size drawings are long gone. Most people only have access to regular page-size printers, like for 8 1/2 x 11 inch paper here in the US. The exact size is a little different around the world, but they are all roughly what you can easily hold in front of you or place on your desk. There is a reason this size evolved as a standard. Handling larger paper is a hassle: there isn't room on the desk, it ends up overlapping the keyboard, it pushes things off your desk when you move it, etc. The point is to design your schematic so that individual sheets are nicely readable on a single normal page, and on screen at about the same size. Currently, the largest common screen size is 1920 x 1080. Having to scroll a page at that resolution to see necessary detail is annoying. If that means using more pages, go ahead. You can flip pages back and forth with a single button press in Acrobat Reader; flipping pages is preferable to panning a large drawing or dealing with outsized paper.

I also find that one normal page at reasonable detail is a good size to show a subcircuit. Think of pages in schematics like paragraphs in a narrative. Breaking a schematic into individually labeled sections by pages can actually help readability if done right. For example, you might have a page for the power input section, the immediate microcontroller connections, the analog inputs, the H-bridge drive power outputs, the Ethernet interface, etc. It's actually useful to break up the schematic this way even if it has nothing to do with drawing size. Here is a small section of a schematic I received.
This is from a screenshot displaying a single page of the schematic maximized in Acrobat Reader on a 1920 x 1200 screen. In this case I was being paid in part to look at this schematic, so I put up with it, although I probably used more time, and therefore charged the customer more money, than if the schematic had been easier to work with. If this had come from someone looking for free help, like on this website, I would have thought
to myself, "screw this," and gone on to answer someone else's question.

Label key nets

Schematic capture programs generally let you give nets nicely readable names. All nets probably have names inside the software; they just default to some gobbledygook unless you explicitly set them. If a net is broken up into visually unconnected segments, then you absolutely have to let people know that the seemingly disconnected nets are really the same. Different packages have different built-in ways to show that. Use whatever works with the software you have, but in any case give the net a name and show that name at each separately drawn segment. Think of that as the lowest common denominator, or as using "air wires" in a schematic. If your software supports it and you think it helps with clarity, by all means use little "jump point" markers or whatever. Sometimes these even give you the sheet and coordinates of one or more corresponding jump points. That's all great, but label any such net anyway.

The important point is that the little name strings for these nets are derived automatically from the internal net name by the software. Never draw them manually as arbitrary text that the software doesn't understand as the net name. If separate sections of the net ever get disconnected or separately renamed by accident, the software will show this automatically, since the name shown comes from the actual net name, not something you typed in separately. This is a lot like a variable in a computer language: you know that multiple uses of the variable symbol refer to the same variable.

Another good reason for net names is short comments. I sometimes name and then show the names of nets only to give a quick idea of that net's purpose. For example, seeing that a net is called "5V" or "MISO" could help a lot in understanding the circuit. Many short nets don't need a name or clarification, and adding names would hurt more from clutter than they would illuminate.
Again, the whole point is clarity: show a meaningful net name when it helps in understanding the circuit, and don't when it would be more distracting than useful.

Keep names reasonably short

Just because your software lets you enter 32- or 64-character net names doesn't mean you should. Again, the point is clarity. No names is no information, but lots of long names are clutter, which then decreases clarity. Somewhere in between is a good tradeoff. Don't get silly and write
" 8 mhz clock to my pic ", when simply " clock ", " clk ", or " 8mhz " would convey the same information. see this ansi / ieee standard for recommended pin name abbreviations. upper case symbol names use all caps for net names and pin names. pin names are almost always shown upper case in datasheets and schematics. various schematic programs, eagle included, don't even allow for lower case names. one advantage of this, which is also helped when the names aren't too long, is that they stick out in the regular text. if you do write real comments in the schematic, always write them in mixed case but make sure to upper case symbol names to make it clear they are symbol names and not part of your narrative. for example, " the input signal test1 goes high to turn on q1, which resets the processor by driving mclr low. ". in this case, it is obvious that test1, q1, and mclr refer to names in the schematic and aren't part of the words you are using in the description. show decoupling caps by the part decoupling caps must be physically close to the part they are decoupling due to their purpose and basic physics. show them that way. sometimes i've seen schematics with a bunch of decoupling caps off in a corner. of course, these can be placed anywhere in the layout, but by placing them by their ic you at least show the intent of each cap. this makes it much easier to see that proper decoupling was at least thought about, more likely a mistake is caught in a design review, and more likely the cap actually ends up where intended when the layout is done. dots connect, crosses don't draw a dot at every junction. that's the convention. don't be lazy. any competent software will enforce this any way, but surprisingly we still see schematics without junction dots here occasionally. it's a rule. we don't care whether you think it's silly or not. that's how it's done. sort of related, try to keep junctions to ts, not 4 - way crosses. this isn't as hard a rule, but stuff happens. 
With two lines crossing, one vertical and the other horizontal, the only way to know whether they are connected is whether the little junction dot is present. In past days, when schematics were routinely photocopied or otherwise optically reproduced, junction dots could disappear after a few generations, or could sometimes even appear at crosses where they weren't there originally. This is less important now that schematics generally live in a computer, but it's not a bad idea to be extra careful. The way to do that is to never have a 4-way junction. If two lines cross, then they are never connected, even if after some reproduction or compression artifacts it looks like there may be a dot there. Ideally, connections and crossovers would be unambiguous without junction dots, but in reality you want as little chance of misunderstanding as possible. Make all junctions Ts with dots, and all crossing lines are therefore different nets without dots.

Look back and you can see that the point of all these rules is to make it as easy as possible for someone else to understand the circuit from the schematic, and to maximize the chance that the understanding is correct. Good schematics show you the circuit; bad schematics make you decipher them.

There is another human point to this too. A sloppy schematic shows a lack of attention to detail and is irritating and insulting to anyone you ask to look at it. Think about it: it says to others, "Your aggravation with this schematic isn't worth my time to clean it up," which is basically saying, "I'm more important than you are." That's not a smart thing to say in many cases, like when you are asking for free help here, or showing your schematic to a customer, teacher, etc. Neatness and presentation count. A lot. You are judged by your presentation quality every time you present something, whether you think that's how it should be or not. In most cases, people won't bother to tell you either. They'll just go on to answer a different question, not look for the good points that might make the grade one notch higher, hire someone else, etc.
When you give someone a sloppy schematic (or any other sloppy work), the first thing they're going to think is "What a jerk." Everything else they think of you and your work will be colored by that initial impression. Don't be that loser.
https://api.stackexchange.com
This method has worked well for me (but what works well for one person won't necessarily work well for everyone). I take it in several passes:

Read 0: Don't read the book; read the Wikipedia article or ask a friend what the subject is about. Learn about the big questions asked in the subject and the basics of the theorems that answer them. Often the most important ideas are those that can be stated concisely, so you should be able to remember them once you are engaging the book.

Read 1: Let your eyes jump from definition to lemma to theorem without reading the proofs in between, unless something grabs your attention or bothers you. If the book has exercises, see if you can do the first one of each chapter or section as you go.

Read 2: Read the book, but this time read the proofs. Don't worry if you don't get all the details. If some logical jump doesn't make complete sense, feel free to ignore it at your discretion, as long as you understand the overall flow of reasoning.

Read 3: Read through the lens of a skeptic. Work through all of the proofs with a fine-toothed comb, and ask yourself every question you think of. You should never have to ask yourself "why" you are proving what you are proving at this point, but you have a chance to get the details down.

This approach is well suited to many math textbooks, which seem to be written to read well to people who already understand the subject. Most of the "classic" textbooks are labeled as such because they are comprehensive or well organized, not because they present challenging abstract ideas well to the uninitiated. (Steps 1-3 are based on a three-step heuristic method for writing proofs: convince yourself, convince a friend, convince a skeptic.)
https://api.stackexchange.com
If you can assign the total electron geometry (the geometry of all electron domains, not just bonding domains) on the central atom using VSEPR, then you can always automatically assign hybridization. Hybridization was invented to make quantum mechanical bonding theories work better with known empirical geometries. If you know one, then you always know the other.

Linear - $\ce{sp}$ - hybridization of one s and one p orbital produces two hybrid orbitals oriented $180^\circ$ apart.

Trigonal planar - $\ce{sp^2}$ - hybridization of one s and two p orbitals produces three hybrid orbitals oriented $120^\circ$ from each other, all in the same plane.

Tetrahedral - $\ce{sp^3}$ - hybridization of one s and three p orbitals produces four hybrid orbitals oriented toward the points of a regular tetrahedron, $109.5^\circ$ apart.

Trigonal bipyramidal - $\ce{dsp^3}$ or $\ce{sp^3d}$ - hybridization of one s, three p, and one d orbital produces five hybrid orbitals in this weird shape: three equatorial hybrid orbitals oriented $120^\circ$ from each other in one plane, and two axial orbitals oriented $180^\circ$ apart, orthogonal to the equatorial orbitals.

Octahedral - $\ce{d^2sp^3}$ or $\ce{sp^3d^2}$ - hybridization of one s, three p, and two d orbitals produces six hybrid orbitals oriented toward the points of a regular octahedron, $90^\circ$ apart.

I assume you haven't learned any of the geometries above steric number 6 (since they are rare), but they each correspond to a specific hybridization as well.

$\ce{NH3}$: which category above does it fit in? Remember to count the lone pair as
an electron domain when determining total electron geometry. Since the sample question says $\ce{NH3}$ is $\ce{sp^3}$, $\ce{NH3}$ must be tetrahedral. Make sure you can figure out how $\ce{NH3}$ has tetrahedral electron geometry.

For $\ce{H2CO}$, start by drawing the Lewis structure. The least electronegative atom that is not a hydrogen goes in the center (unless you have been given the structural arrangement). Determine the number of electron domains on the central atom. Determine the electron geometry using VSEPR. Correlate the geometry with the hybridization. Practice until you can do this quickly.
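Because the geometry-to-hybridization mapping is a fixed lookup, the whole procedure can be sketched in a few lines of code (my own sketch; the function and table names are made up for illustration, and the double bond in $\ce{H2CO}$ counts as a single electron domain):

```python
# Map steric number (bonding domains + lone pairs on the central atom)
# to (electron geometry, hybridization), per the table above.
GEOMETRY = {
    2: ("linear", "sp"),
    3: ("trigonal planar", "sp2"),
    4: ("tetrahedral", "sp3"),
    5: ("trigonal bipyramidal", "sp3d"),
    6: ("octahedral", "sp3d2"),
}

def hybridization(bonding_domains, lone_pairs):
    """The steric number fixes the electron geometry, which fixes the
    hybridization -- knowing one always gives you the other."""
    return GEOMETRY[bonding_domains + lone_pairs]

# NH3: three N-H bonds plus one lone pair -> tetrahedral, sp3
print(hybridization(3, 1))   # ('tetrahedral', 'sp3')
# H2CO: the carbonyl carbon has three electron domains
# (the C=O double bond counts as one domain) -> trigonal planar, sp2
print(hybridization(3, 0))   # ('trigonal planar', 'sp2')
```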
https://api.stackexchange.com
The fundamental difference between discriminative models and generative models is:

Discriminative models learn the (hard or soft) boundary between classes.
Generative models model the distribution of the individual classes.

To answer your direct questions: SVMs (support vector machines) and DTs (decision trees) are discriminative because they learn explicit boundaries between classes. An SVM is a maximal-margin classifier, meaning that, given a kernel, it learns a decision boundary that maximizes the distance between samples of the two classes. The distance between a sample and the learned decision boundary can be used to make the SVM a "soft" classifier. DTs learn the decision boundary by recursively partitioning the space in a manner that maximizes the information gain (or another criterion).

It is possible to make a generative form of logistic regression in this manner. Note that you are not using the full generative model to make classification decisions, though.

Generative models offer a number of advantages, depending on the application. Say you are dealing with non-stationary distributions, where the online test data may be generated by underlying distributions different from those of the training data. It is typically more straightforward to detect distribution changes and update a generative model accordingly than to do this for a decision boundary in an SVM, especially if the online updates need to be unsupervised. Discriminative models also do not generally work for outlier detection, though generative models generally do. What's best for a specific application should, of course, be evaluated based on the application.

(This quote is convoluted, but this is what I think it's trying to say:) Generative models are typically specified as probabilistic graphical models, which offer rich representations of the independence relations in the dataset.
Discriminative models do not offer such clear representations of the relations between features and classes in the dataset. Instead of using resources to fully model each class, they focus on richly modeling the boundary between classes. Given the same amount of capacity (say, bits in a computer program executing the model), a discriminative model may thus yield more complex representations of this boundary than a generative model.
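To make "modeling the distribution of individual classes" concrete, here is a minimal generative classifier sketch of my own (class and function names are made up, and the 1-D Gaussian class-conditional assumption is mine): it fits a Gaussian per class and classifies via Bayes' rule. Because it models $p(x \mid c)$ itself, the same fit could also score how unlikely a point is under every class, which is what enables outlier detection.

```python
import math
import random

class GaussianGenerativeClassifier:
    """Fit a 1-D Gaussian per class, then classify with Bayes' rule:
    pick the class c maximizing p(x | c) p(c)."""

    def fit(self, xs, ys):
        self.params = {}
        for c in set(ys):
            pts = [x for x, y in zip(xs, ys) if y == c]
            mu = sum(pts) / len(pts)
            var = sum((p - mu) ** 2 for p in pts) / len(pts)
            self.params[c] = (mu, var, len(pts) / len(xs))  # mean, variance, prior
        return self

    def _log_posterior(self, x, c):
        mu, var, prior = self.params[c]
        # log p(c) + log N(x; mu, var), dropping nothing (1-D Gaussian density)
        return (math.log(prior)
                - 0.5 * math.log(2 * math.pi * var)
                - (x - mu) ** 2 / (2 * var))

    def predict(self, x):
        return max(self.params, key=lambda c: self._log_posterior(x, c))

# Two well-separated synthetic classes: N(0, 1) labeled 0 and N(4, 1) labeled 1.
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(4, 1) for _ in range(200)]
ys = [0] * 200 + [1] * 200
clf = GaussianGenerativeClassifier().fit(xs, ys)
print(clf.predict(-0.5), clf.predict(4.2))  # -> 0 1
```

A discriminative model would instead learn only the threshold between the two clumps, saying nothing about how the data within each class is distributed.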
https://api.stackexchange.com
What we're seeing is arithmetic progressions of primes (not prime-producing polynomials), combined with a classical phenomenon about rational approximations. When the integers (or any subset of them) are represented by the polar points $(n, n)$, integers that are close to a multiple of $2\pi$ apart from each other will of course lie close to the same central ray. Figuring out when integers are close to a multiple of $2\pi$ apart is a perfect job for continued fractions. The continued fraction of $2\pi$ is $\langle 6; 3, 1, 1, 7, 2, 146, \dots \rangle$, giving the convergents
$$\left\{ 6, \frac{19}{3}, \frac{25}{4}, \frac{44}{7}, \frac{333}{53}, \frac{710}{113}, \frac{103993}{16551}, \dots \right\},$$
which are the rational approximations of $2\pi$ that dominate the picture on different scales. For example, if you plot the polar points $(n, n)$ for $1 \le n \le 25000$, you will notice the points aligning themselves into $44$ spirals: jumping ahead from $n$ to $n + 44$ is almost the same as going around the circle $7$ times (note the convergent $\frac{44}{7}$ showing up), while moving from $n$ to $n + 1$ jumps ahead $7$ spirals. Each spiral corresponds to an arithmetic progression $a \pmod{44}$; going from one spiral to the next counterclockwise corresponds to changing the arithmetic progression from $a \pmod{44}$ to $a + 19 \pmod{44}$ (note that $19 \equiv 7^{-1} \pmod{44}$). If instead you plot only the primes $(p, p)$, you will get reasonable representation in the $\phi(44) = 20$ spirals corresponding to arithmetic progressions $a \pmod{44}$ with $\gcd(a, 44) = 1$, and no primes in the other $24$ spirals. That's
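The continued-fraction terms and convergents quoted above are easy to verify numerically. This is a sketch of my own (function names are made up); double-precision floating point is accurate enough for the first several terms of the expansion:

```python
from math import tau  # tau = 2*pi

def continued_fraction(x, n):
    """Return up to n continued-fraction terms of x (floating-point, so
    only the first several terms are trustworthy)."""
    terms = []
    for _ in range(n):
        a = int(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

def convergents(terms):
    """Yield (numerator, denominator) convergents via the standard
    recurrence h_k = a_k*h_{k-1} + h_{k-2}, k_k = a_k*k_{k-1} + k_{k-2}."""
    h_prev, h = 1, terms[0]
    k_prev, k = 0, 1
    yield h, k
    for a in terms[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield h, k

terms = continued_fraction(tau, 7)
print(terms)                     # [6, 3, 1, 1, 7, 2, 146]
print(list(convergents(terms)))  # includes (44, 7) and (710, 113)
```

The convergent $44/7$ explains the $44$ spirals (with $7$ turns per step of $44$), and $710/113$ takes over at larger scales.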
https://api.stackexchange.com