0|0|0|0|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0;AC_AFR=0;AC_AMR=0;AC_ASJ=0;AC_EAS=0;AC_FIN=1;AC_NFE=0;AC_OTH=0;AC_SAS=0;AC_Male=1;AC_Female=0;AN_AFR=11994;AN_AMR=31324;AN_ASJ=7806;AN_EAS=13112;AN_FIN=20076;AN_NFE=94516;AN_OTH=4656;AN_SAS=25696;AN_Male=114366;AN_Female=94814;AF_AFR=0.00000e+00;AF_AMR=0.00000e+00;AF_ASJ=0.00000e+00;AF_EAS=0.00000e+00;AF_FIN=4.98107e-05;AF_NFE=0.00000e+00;AF_OTH=0.00000e+00;AF_SAS=0.00000e+00;AF_Male=8.74386e-06;AF_Female=0.00000e+00;GC_AFR=5997,0,0;GC_AMR=15662,0,0;GC_ASJ=3903,0,0;GC_EAS=6556,0,0;GC_FIN=10037,1,0;GC_NFE=47258,0,0;GC_OTH=2328,0,0;GC_SAS=12848,0,0;GC_Male=57182,1,0;GC_Female=47407,0,0;AC_raw=1;AN_raw=216642;AF_raw=4.61591e-06;GC_raw=108320,1,0;GC=104589,1,0;Hom_AFR=0;Hom_AMR=0;Hom_ASJ=0;Hom_EAS=0;Hom_FIN=0;Hom_NFE=0;Hom_OTH=0;Hom_SAS=0;Hom_Male=0;Hom_Female=0;Hom_raw=0;Hom=0;POPMAX=FIN;AC_POPMAX=1;AN_POPMAX=20076;AF_POPMAX=4.98107e-05;DP_MEDIAN=58;DREF_MEDIAN=5.01187e-84;GQ_MEDIAN=99;AB_MEDIAN=6.03448e-01;AS_RF=9.18451e-01;AS_FilterStatus=PASS;CSQ=T|missense_variant|MODERATE|XKR3|ENSG00000172967|Transcript|ENST00000331428|protein_coding|4/4||ENST00000331428.5:c.707T>A|ENSP00000331704.5:p.Phe236Tyr|810|707|236|F/Y|tTc/tAc||1||-1||SNV|1|HGNC|28778|YES|||CCDS42975.1|ENSP00000331704|Q5GH77||UPI000013EFAE||deleterious(0)|benign(0.055)|hmmpanther:PTHR14297&hmmpanther:PTHR14297:SF7&Pfam_domain:PF09815||||||||||||||||||||||||||||||,T|regulatory_region_variant|MODIFIER|||RegulatoryFeature|ENSR00000672806|TF_binding_site|||||||||||1||||SNV|1||||||||||||||||||||||||||||||||||||||||||||,T|regulatory_region_variant|MODIFIER|||RegulatoryFeature|ENSR00001729562|CTCF_binding_site|||||||||||1||||SNV|1||||||||||||||||||||||||||||||||||||||||||||
Update 2019: the current server for gnomAD doesn't support byte-range requests.
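Whether a server honours byte ranges can be checked directly: send a request with a `Range` header and see whether the reply is 206 Partial Content (or advertises `Accept-Ranges: bytes`). The helper below is an illustrative sketch of that interpretation, not part of any gnomAD tooling:

```python
def supports_byte_ranges(status_code, headers):
    """Interpret a response to a request sent with 'Range: bytes=0-0'.

    A server honouring byte ranges replies 206 Partial Content;
    'Accept-Ranges: bytes' on a 200 also advertises support.
    (Illustrative helper only.)
    """
    if status_code == 206:
        return True
    return headers.get("Accept-Ranges", "none").lower() == "bytes"

# Wiring it up with the standard library would look like:
# import urllib.request
# req = urllib.request.Request(some_url, headers={"Range": "bytes=0-0"})
# with urllib.request.urlopen(req) as resp:
#     print(supports_byte_ranges(resp.status, dict(resp.headers)))
```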
Say you are sequencing to 2x coverage. Suppose at a site, sample S has one reference base and one alternate base; it is hard to tell whether this is a sequencing error or a heterozygote. Now suppose you have 1000 other samples, all at 2x read depth. One of them has two alt bases; 10 of them have one ref and one alt. It is usually improbable that all these samples have the same sequencing error, so you can assert that sample S carries a het. Multi-sample calling helps to increase sensitivity for not-so-rare SNPs. Note that what matters here is the assumption of error independence; ancestry has only a tiny, indirect effect. Multi-sample calling penalizes very rare SNPs, in particular singletons. When you care about variants only, this is a good thing: naively combining single-sample calls yields a higher error rate. Multi-sample calling also helps variant filtering at a later stage. For example, for a sample sequenced to 30x coverage, you would not know whether a site at 45x depth is caused by a potential CNV/mismapping or by statistical fluctuation; when you see 1000 30x samples all at 45x depth, you can easily tell you are looking at a CNV or systematic mismapping. Multiple samples enhance most statistical signals. Older methods pool all BAMs when calling variants. This is necessary because a single low-coverage sample does not have enough data to recover hidden indels. However, this strategy is not easy to parallelize massively, and adding a new sample triggers re-calling, which is very expensive as well. As we are mostly doing high-coverage sequencing these days, the old problem with indel calling no longer matters. GATK has a newer single-sample calling pipeline where you combine per-sample gVCFs at a later stage; such a sample-combining strategy is perhaps the only sensible solution when you are dealing with 100k samples. The so-called haplotype-based variant calling is a separate question.
This type of approach helps to call indels, but is not of much relevance to multi-sample calling. Also, of the three variant callers in your question, only GATK (and scalpel, which you have not mentioned) uses assembly at large; FreeBayes does not, and Platypus does only to a limited extent and does not work well in practice. I guess what you really want to talk about is imputation-based calling. This approach further improves sensitivity with LD. With enough samples, you can measure the LD between two positions. Suppose at position 1000 you see one ref read and no alt reads, while at position 1500 you see one ref read and two alt reads. You would not call any SNPs at position 1000, even given multiple samples. However, when you know the two positions are strongly linked and the dominant haplotypes are ref-ref and alt-alt, you know the sample under investigation is likely to have a missed alt allele. LD transfers signals across sites and enhances the power to make correct genotyping calls. Nonetheless, as we are mostly doing high-coverage sequencing nowadays, imputation-based methods have only a minor effect and are rarely applied.
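The error-independence argument above can be put into numbers with a toy binomial model. This is only an illustrative sketch; the function and the 1% error rate are my own assumptions, not taken from any actual caller:

```python
from math import comb, log10

def log10_prob_alt_if_error(n_alt, n_reads, err=0.01):
    """log10 probability of seeing n_alt non-reference bases in n_reads
    at a homozygous-reference site, if alt bases arise only from
    independent sequencing errors at rate err (a toy model)."""
    p = comb(n_reads, n_alt) * err**n_alt * (1 - err)**(n_reads - n_alt)
    return log10(p)

# One 2x sample with 1 ref + 1 alt read: quite plausible as an error.
single = log10_prob_alt_if_error(1, 2)   # about -1.7
# Eleven independent samples all showing alt reads at the same site:
# under the error-only hypothesis the log-probabilities add up.
joint = 11 * single                      # about -18.7, i.e. implausible
```

Under the shared-variant hypothesis the same data are unremarkable, which is why the joint evidence lets you call the het in sample S.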
I believe it is for this reason: the female body plan is the default one; males are a variation upon it, in humans at least. Nipples are part of the basic body plan. For a man not to have them, he would need to actively evolve something that prevents nipples from developing. There is no selective pressure for the development of such a thing, so it hasn't happened. Keep in mind that the code for the general body plan is shared between males and females; the Y chromosome modifies the development of that body plan so that the person becomes male.
Answering my own question based on the comments: tert-butyl hydroperoxide is at least one such chemical. As stated in this MSDS from a government website, it is a 4-4-4, with the additional special warning of being a strong oxidizer. The only thing it does not do that could make the 704 diamond any worse is react strongly with water. It is in fact water-soluble, though marginally, preferring to float on top (so traditional water-based fire suppression is ineffective, but foam/CO2 will work). If anyone else can find a chemical that, in a form used in the lab or industrially, is a 4-4-4, a strong oxidizer, and reacts strongly with water, that's pretty much as bad as it gets, and they'll get the check.
To Xi'an's first point: when you're talking about $\sigma$-algebras, you're asking about measurable sets, so unfortunately any answer must focus on measure theory. I'll try to build up to that gently, though.

A theory of probability admitting all subsets of uncountable sets will break mathematics

Consider this example. Suppose you have a unit square in $\mathbb{R}^2$, and you're interested in the probability of randomly selecting a point that is a member of a specific set in the unit square. In lots of circumstances, this can be readily answered by comparing the areas of the different sets. For example, we can draw some circles, measure their areas, and then take the probability as the fraction of the square falling in the circle. Very simple. But what if the area of the set of interest is not well-defined? If the area is not well-defined, then we can reason to two different but completely valid (in some sense) conclusions about what the area is. So we could have $P(A)=1$ on the one hand and $P(A)=0$ on the other hand, which implies $0=1$. This breaks all of math beyond repair. You can now prove $5<0$ and a number of other preposterous things. Clearly this isn't too useful.

$\boldsymbol{\sigma}$-algebras are the patch that fixes math

What is a $\sigma$-algebra, precisely? It's actually not that frightening. It's just a definition of which sets may be considered as events. Elements not in $\mathscr{F}$ simply have no defined probability measure. Basically, $\sigma$-algebras are the "patch" that lets us avoid some pathological behaviors of mathematics, namely non-measurable sets. The three requirements of a $\sigma$-field can be considered as consequences of what we would like to do with probability: a $\sigma$-field is a set that has three properties: closure under countable unions, closure under countable intersections, and closure under complements.
The countable-union and countable-intersection requirements are direct consequences of the non-measurable-set issue. Closure under complements is a consequence of the Kolmogorov axioms: if $P(A) = 2/3$, then $P(A^c)$ ought to be $1/3$. But without (3), it could happen that $P(A^c)$ is undefined. That would be strange. Closure under complements and the Kolmogorov axioms let us say things like $P(A \cup A^c) = P(A) + 1 - P(A) = 1$. Finally, since we are considering events in relation to $\Omega$, we further require that $\Omega \in \mathscr{F}$.

Good news: $\boldsymbol{\sigma}$-algebras are only strictly necessary for uncountable sets

But! There's good news here, also; or, at least, a way to skirt the issue. We only need $\sigma$-algebras if we're working in a set with uncountable cardinality. If we restrict ourselves to countable sets, then we can take $\mathscr{F} = 2^\Omega$, the power set of $\Omega$, and we won't have any of these problems, because for countable $\Omega$, $2^\Omega$ consists only of measurable sets. (This is alluded to in Xi'an's second comment.) You'll notice that some textbooks actually commit a subtle sleight-of-hand here, and only consider countable sets when discussing probability spaces. Additionally, in geometric problems in $\mathbb{R}^n$, it's perfectly sufficient to consider only $\sigma$-algebras composed of sets for which the $\mathcal{L}^n$ measure is defined. To ground this somewhat more firmly, $\mathcal{L}^n$ for $n = 1, 2, 3$ corresponds to the usual notions of length, area, and volume. So what I'm saying in the previous example is that the set needs to have a well-defined area for it to have a geometric probability assigned to it. And the reason is this: if we admit non-measurable sets, then we can end up in situations where we can assign probability 1 to some event based on one proof, and probability 0 to the same event based on another proof. But don't let the connection to uncountable sets confuse you! A common misconception is that $\sigma$-algebras are countable sets. In fact, they may be countable or uncountable. Consider this illustration: as before, we have a unit square. Define $$\mathscr{F} = \text{all subsets of the unit square with defined $\mathcal{L}^2$ measure}.$$ You can draw a square $B$ with side length $s$, for all $s \in (0,1)$, and with one corner at $(0,0)$. It should be clear that this square is a subset of the unit square. Moreover, all of these squares have defined area, so these squares are elements of $\mathscr{F}$. But it should also be clear that there are uncountably many squares $B$: the number of such squares is uncountable, and each square has defined Lebesgue measure. So as a practical matter, simply observing that you only consider Lebesgue-measurable sets is often enough to gain headway against the problem of interest.

But wait, what's a non-measurable set?

I'm afraid I can only shed a little bit of light on this myself. But the Banach-Tarski paradox (sometimes the "sun and pea" paradox) can help us some: given a solid ball in 3-dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them, without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces. A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the sun" and called the "pea and the sun paradox".
So if you're working with probabilities in $\mathbb{R}^3$, using the geometric probability measure (the ratio of volumes), and you want to work out the probability of some event, you'll struggle to define that probability precisely, because you can rearrange the sets of your space to change volumes! If probability depends on volume, and you can change the volume of the set to be the size of the sun or the size of a pea, then the probability will also change. So no event will have a single probability ascribed to it. Even worse, you can rearrange $S \subseteq \Omega$ such that the volume of $S$ satisfies $V(S) > V(\Omega)$, which implies that the geometric probability measure reports a probability $P(S) > 1$, in flagrant violation of the Kolmogorov axioms, which require that the total probability measure is 1. To resolve this paradox, one could make one of four concessions:

1. The volume of a set might change when it is rotated.
2. The volume of the union of two disjoint sets might be different from the sum of their volumes.
3. The axioms of Zermelo-Fraenkel set theory with the axiom of choice (ZFC) might have to be altered.
4. Some sets might be tagged "non-measurable", and one would need to check whether a set is "measurable" before talking about its volume.

Option (1) doesn't help us define probabilities, so it's out. Option (2) violates the second Kolmogorov axiom, so it's out. Option (3) seems like a terrible idea because ZFC fixes so many more problems than it creates. But option (4) seems attractive: if we develop a theory of what is and is not measurable, then we will have well-defined probabilities in this problem! This brings us back to measure theory, and our friend the $\sigma$-algebra.
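For a finite sample space, the closure requirements can be checked mechanically. Here is an illustrative sketch (the helper names are mine; note that for a finite $\Omega$, closure under finite unions and intersections is all the countable conditions can demand):

```python
from itertools import chain, combinations

def powerset(omega):
    """All subsets of a finite sample space, as frozensets."""
    s = list(omega)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

def is_sigma_algebra(F, omega):
    """Check the closure properties on a finite collection F of frozensets."""
    omega = frozenset(omega)
    if omega not in F:                            # omega itself must be an event
        return False
    for a in F:
        if omega - a not in F:                    # closure under complements
            return False
        for b in F:
            if a | b not in F or a & b not in F:  # unions and intersections
                return False
    return True

F = powerset({1, 2, 3})
print(is_sigma_algebra(F, {1, 2, 3}))         # True: 2^omega always qualifies
F_broken = F - {frozenset({1})}
print(is_sigma_algebra(F_broken, {1, 2, 3}))  # False: {2, 3} lost its complement
```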
Proximity to NCBI may not necessarily give you the fastest transfer speed. AWS may be deliberately throttling the internet connection to limit the likelihood that people will use it for undesirable things. There's a chance that a home network might be faster, but you're likely to get the fastest connection to NCBI by using an academic system that is linked to NCBI via a research network. Another possibility is using Aspera for downloads. This is unlikely to help if bandwidth is being throttled, but it might help if there's a bit of congestion in the regular methods. NCBI also has an online book about best practices for downloading data from their servers. On a related note, just in case someone sees this and EBI/ENA is an option, there's a great guide on the EBI web site for doing file transfers using Aspera. Your command should look similar to this on Unix:

ascp -QT -l 300m -i <Aspera Connect installation directory>/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:<file or files to download> <download location>

In my case, I've just started downloading some files from a MinION sequencing run. The estimated completion time via standard FTP was 12 hours for about 32 GB of data; ascp has reduced that estimated download time to about an hour. Here's the command I used for downloading:

ascp -QT -l 300m -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/ERA932/ERA932268/OxfordNanopore_native/20160804_Mock.tar.gz .
The CART tool lets you upload a set of names and map them (optionally in a fuzzy way) to STITCH 4 identifiers, and then use those to map to ATC codes (using the chemical sources download file). It's a bit indirect, and I'm not sure what CART will do with the dosage info you mention.
One publication for you: "Negative pH Does Exist", K. F. Lim, J. Chem. Educ. 2006, 83, 1465. Quoting the abstract in full: "The misconception that pH lies between 0 and 14 has been perpetuated in popular-science books, textbooks, revision guides, and reference books." The article text provides some counterexamples: for example, commercially available concentrated HCl solution (37% by mass) has $\mathrm{pH} \approx -1.1$, while saturated NaOH solution has $\mathrm{pH} \approx 15.0$.
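The quoted value for concentrated HCl can be roughly reproduced from the definition $\mathrm{pH} = -\log_{10} a(\ce{H+})$ if one crudely approximates activity by molar concentration (about 12 mol/L for 37% HCl). Both the approximation and the helper name below are mine; real concentrated solutions deviate strongly from ideality:

```python
from math import log10

def ph_ideal(h_molar):
    """pH assuming activity ~ molar concentration (a crude idealisation)."""
    return -log10(h_molar)

print(round(ph_ideal(12.0), 2))  # -1.08, close to the quoted pH of about -1.1
print(round(ph_ideal(1e-7), 2))  # 7.0, the familiar neutral-water value
```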
The short version is that the beta distribution can be understood as representing a distribution of probabilities; that is, it represents all the possible values of a probability when we don't know what that probability is. Here is my favorite intuitive explanation of this. Anyone who follows baseball is familiar with batting averages: simply the number of times a player gets a base hit divided by the number of times he goes up at bat (so it's just a percentage between 0 and 1). A .266 average is in general considered average, while .300 is considered an excellent one. Imagine we have a baseball player, and we want to predict what his season-long batting average will be. You might say we can just use his batting average so far, but this will be a very poor measure at the start of a season! If a player goes up to bat once and gets a single, his batting average is briefly 1.000, while if he strikes out, his batting average is 0.000. It doesn't get much better if you go up to bat five or six times: you could get a lucky streak and get an average of 1.000, or an unlucky streak and get an average of 0, neither of which is a remotely good predictor of how you will bat that season. Why is your batting average in the first few hits not a good predictor of your eventual batting average? When a player's first at-bat is a strikeout, why does no one predict that he'll never get a hit all season? Because we're going in with prior expectations. We know that, historically, most batting averages over a season have hovered between something like .215 and .360, with some extremely rare exceptions on either side. We know that if a player gets a few strikeouts in a row at the start, that might indicate he'll end up a bit worse than average, but we know he probably won't deviate from that range.
Given our batting average problem, which can be represented with a binomial distribution (a series of successes and failures), the best way to represent these prior expectations (what we in statistics just call a prior) is with the beta distribution. It's saying, before we've seen the player take his first swing, what we roughly expect his batting average to be. The domain of the beta distribution is (0, 1), just like a probability, so we already know we're on the right track, but the appropriateness of the beta for this task goes far beyond that. We expect that the player's season-long batting average will most likely be around .27, but that it could reasonably range from .21 to .35. This can be represented with a beta distribution with parameters $\alpha = 81$ and $\beta = 219$:

curve(dbeta(x, 81, 219))

I came up with these parameters for two reasons: the mean is $\frac{\alpha}{\alpha+\beta} = \frac{81}{81+219} = .270$, and, as you can see in the plot, this distribution lies almost entirely within (.2, .35), the reasonable range for a batting average. You asked what the x-axis represents in a beta distribution density plot; here it represents his batting average. Thus notice that in this case, not only is the y-axis a probability (or more precisely a probability density), but the x-axis is as well (batting average is just a probability of a hit, after all)! The beta distribution is representing a probability distribution of probabilities. But here's why the beta distribution is so appropriate. Imagine the player gets a single hit. His record for the season is now 1 hit; 1 at bat. We then have to update our probabilities: we want to shift this entire curve over just a bit to reflect our new information. While the math for proving this is a bit involved (it's shown here), the result is very simple. The new beta distribution will be:

$\mbox{Beta}(\alpha_0 + \mbox{hits}, \beta_0 + \mbox{misses})$

where $\alpha_0$ and $\beta_0$ are the parameters we started with, that is, 81 and 219. Thus, in this case, $\alpha$ has increased by 1 (his one hit), while $\beta$ has not increased at all (no misses yet). That means our new distribution is $\mbox{Beta}(81+1, 219)$, or:

curve(dbeta(x, 82, 219))

Notice that it has barely changed at all; the change is indeed invisible to the naked eye!
(That's because one hit doesn't really mean anything.) However, the more the player hits over the course of the season, the more the curve will shift to accommodate the new evidence, and furthermore the more it will narrow, reflecting the fact that we have more proof. Let's say halfway through the season he has been up to bat 300 times, hitting 100 of them. The new distribution would be $\mbox{Beta}(81+100, 219+200)$, or:

curve(dbeta(x, 81 + 100, 219 + 200))

Notice the curve is now both thinner and shifted to the right (higher batting average) than it used to be; we have a better sense of what the player's batting average is. One of the most interesting outputs of this formula is the expected value of the resulting beta distribution, which is basically your new estimate. Recall that the expected value of the beta distribution is $\frac{\alpha}{\alpha+\beta}$. Thus, after 100 hits of 300 real at-bats, the expected value of the new beta distribution is $\frac{81+100}{81+100+219+200} \approx .302$; notice that it is lower than the naive estimate of $\frac{100}{100+200} = .333$, but higher than the estimate you started the season with ($\frac{81}{81+219} = .270$). You might notice that this formula is equivalent to adding a "head start" to the number of hits and non-hits of a player: you're saying "start him off in the season with 81 hits and 219 non-hits on his record". Thus, the beta distribution is best for representing a probabilistic distribution of probabilities: the case where we don't know what a probability is in advance, but we have some reasonable guesses.
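The conjugate update and the resulting shrinkage can be checked in a few lines of plain Python (the function names are mine; no stats library is needed for the mean):

```python
def beta_mean(alpha, beta):
    # expected value of a Beta(alpha, beta) distribution
    return alpha / (alpha + beta)

def update(alpha, beta, hits, misses):
    # conjugate update for binomial data: Beta(alpha + hits, beta + misses)
    return alpha + hits, beta + misses

prior = (81, 219)
posterior = update(*prior, hits=100, misses=200)

print(round(beta_mean(*prior), 3))      # 0.27  (the preseason expectation)
print(round(beta_mean(*posterior), 3))  # 0.302 (pulled from 0.333 toward 0.270)
```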
As so often, the choice depends on (1) the problem you are trying to solve, (2) the skills you have, and (3) the people you work with (unless it's a solo project). I'll leave (3) aside for the moment because it depends on everyone's individual situation. Problem dependence: Fortran excels at array processing. If your problem can be described in terms of simple data structures, and in particular arrays, Fortran is well adapted. Fortran programmers end up using arrays even in non-obvious cases (e.g. for representing graphs). C++ is better suited for complex and highly dynamic data structures. Skill dependence: it takes a lot more programming experience to write good C++ programs than to write good Fortran programs. If you start out with little programming experience and only have so much time to learn that aspect of your job, you will probably get a better return on investment learning Fortran than learning C++, assuming, of course, that your problem is suited to Fortran. However, there's more to programming than just Fortran and C++. I'd recommend that anyone going into computational science start with a dynamic high-level language such as Python. Always remember that your time is more valuable than CPU time!
introduction the bonding situation in $ \ ce { ( alcl3 ) 2 } $ and $ \ ce { ( bcl3 ) 2 } $ is nothing trivial and the reason why aluminium chloride forms dimers, while boron trichloride does not, cannot only be attributed to size. in order to understand this phenomenon we need to look at both, the monomers and the dimers, and compare them to each other. understanding the respective bonding situation of the monomers, is key to understand which deficiencies lead to dimerisations. computational details since i was unable to find any compelling literature on the subject, i ran some calculations of my own. i used the df - m06l / def2 - tzvpp for geometry optimisations. each structure has been optimised to a local minimum in their respective symmetry restrictions, i. e. $ d _ \ mathrm { 3h } $ for the monomers and $ c _ \ mathrm { 2v } $ for the dimers. analyses with the natural bond orbital model ( nbo6 program ) and the quantum theory of atoms in molecules ( qtaim, multiwfn ) have been run on single point energy calculations at the m06 / def2 - qzvpp / / df - m06 - l / def2 - tzvpp level of theory. a rudimentary energy decomposition analysis has been done on that level, too. energy decomposition analysis the dissociation energy of the dimers $ \ ce { ( xy3 ) 2 } $ to the monomers $ \ ce { xy3 } $ is defined as the difference of the energy of the dimer $ e _ \ mathrm { opt } [ \ ce { ( xy3 ) 2 } ] $ and double the energy of the monomer $ e _ \ mathrm { opt } [ \ ce { xy3 } ] $ at their optimised ( relaxed ) geometries $ \ eqref { e - diss - def } $. the interaction energy is defined as the difference of energy of the relaxed dimer and double the energy of the monomers in the geometry of the dimer $ e _ \ mathrm { frag } [ \ ce { ( xy3 ) ^ { \ neq } } ] $ $ \ eqref { e - int - def } $. that basically means breaking the molecule in two parts, but keeping
https://api.stackexchange.com
these fragments in the same geometry. the deformation energy ( or preparation energy ) is defined as the difference of the energy of the optimised and the non - optimised monomer $ \ eqref { e - def - def } $. this is the energy required to distort the monomer ( in its ground state ) to the configuration it will have in the dimer. $ $ \ begin { align } e _ \ mathrm { diss } & = e _ \ mathrm { opt } [ \ ce { ( xy3 ) 2 } ] - 2e _ \ mathrm { opt } [ \ ce { xy3 } ] \ tag1 \ label { e - diss - def } \ \ e _ \ mathrm { int } & = e _ \ mathrm { opt } [ \ ce { ( xy3 ) 2 } ] - 2e _ \ mathrm { frag } [ \ ce { ( xy3 ) ^ { \ neq } } ] % \ ddag not implemented \ tag2 \ label { e - int - def } \ \ e _ \ mathrm { def } & = e _ \ mathrm { frag } [ \ ce { ( xy3 ) ^ { \ neq } } ] - e _ \ mathrm { opt } [ \ ce { xy3 } ] \ tag3 \ label { e - def - def } \ \ e _ \ mathrm { diss } & = e _ \ mathrm { int } + 2e _ \ mathrm { def } \ tag { 1'} \ end { align } $ $ results & discussion the monomers $ \ ce { xcl3 ; x { = } \ { b, al \ } } $. let's just get the obvious out of the way : boron is ( vdw - radius 205 pm ) smaller than aluminium ( vdw - radius 240 pm ). for comparison chlorine has a vdw - radius of 205 pm, too. that is pretty much reflected in the bond lengths and the chlorine - chlorine distance. \ begin { array } { llrrr } \ hline & \ ce { x { = } } & \ ce { al } & \ ce { b } & \ ce { cl } \ \ \ hline \ mathbf { d } ( \ ce { x - cl } ) & / \ pu { pm } &
https://api.stackexchange.com
206. 0 & 173. 6 & - - \ \ \ mathbf { d } ( \ ce { cl \ bond { ~ } cl'} ) & / \ pu { pm } & 356. 8 & 300. 6 & - - \ \ \ hline \ mathbf { r } _ \ mathrm { vdw } & / \ pu { pm } & 240 & 205 & 205 \ \ \ mathbf { r } _ \ mathrm { sing } & / \ pu { pm } & 126 & 85 & 99 \ \ \ mathbf { r } _ \ mathrm { doub } & / \ pu { pm } & 113 & 78 & 95 \ \ \ hline \ end { array } from this data we can draw certain conclusions without further looking. the boron monomer is much more compact than the aluminium monomer. when we compare the bond lengths to the covalent radii ( pyykko and atsumi ) we find that the boron chloride bond is about the length that we would expect from a double bond ( $ \ mathbf { r } _ \ mathrm { doub } ( \ ce { b } ) + \ mathbf { r } _ \ mathrm { doub } ( \ ce { cl } ) = 173 ~ \ pu { pm } $ ). while the aluminium chloride bond is still significantly shorter than a single bond ( $ \ mathbf { r } _ \ mathrm { sing } ( \ ce { al } ) + \ mathbf { r } _ \ mathrm { sing } ( \ ce { cl } ) = 225 ~ \ pu { pm } $ ), it is still also much longer than a double bond ( $ \ mathbf { r } _ \ mathrm { doub } ( \ ce { al } ) + \ mathbf { r } _ \ mathrm { doub } ( \ ce { cl } ) = 191 ~ \ pu { pm } $ ). this itself offers compelling evidence, that there is more Ο€ - backbonding in $ \ ce { bcl3 } $ than in $ \ ce { alcl3 } $. molecular orbital theory offers more evidence for this. in both compounds is a doubly occupied Ο€ orbital. the following pictures are for a contour value of 0. 05 ; aluminium ( left / top ) and boron ( right / bottom ) in numbers, the main contributions are as follows ( this is just a representation, not
https://api.stackexchange.com
the actual formula ) : $ $ \ begin { align } \ pi ( \ ce { bcl3 } ) & = 21 \ % ~ \ ce { p _ { $ z $ } - b } + \ sum _ { i = 1 } ^ 3 26 \ % ~ \ ce { p _ { $ z $ } - cl ^ { $ ( i ) $ } } \ \ \ pi ( \ ce { alcl3 } ) & = 13 \ % ~ \ ce { p _ { $ z $ } - al } + \ sum _ { i = 1 } ^ 3 29 \ % ~ \ ce { p _ { $ z $ } - cl ^ { $ ( i ) $ } } \ end { align } $ $ there is still some more evidence. the natural atomic charges ( npa of nbo6 ) fairly well agree with that assesment ; aluminium is far more positive than boron. $ $ \ begin { array } { lrr } & \ ce { alcl3 } & \ ce { bcl3 } \ \ \ hline \ mathbf { q } ( \ ce { x } ) ~ \ text { [ npa ] } & + 1. 4 & + 0. 3 \ \ \ mathbf { q } ( \ ce { cl } ) ~ \ text { [ npa ] } & - 0. 5 & - 0. 1 \ \ \ hline % \ mathbf { q } ( \ ce { x } ) ~ \ text { [ qtaim ] } & + 2. 4 & + 2. 0 \ \ % \ mathbf { q } ( \ ce { cl } ) ~ \ text { [ qtaim ] } & - 0. 8 & - 0. 7 \ \ \ hline \ end { array } $ $ the analysis in terms of qtaim also shows that the bonds in $ \ ce { alcl3 } $ they are predominantly ionic ( left / top ) while in $ \ ce { bcl3 } $ are predominantly covalent ( right / bottom ). one final thought on the bonding can be supplied with a natural resonance theory analysis ( nbo6 ). i have chosen the following starting configurations and let the program calculate their contribution. the overall structures in terms of resonance are the same for both cases, that is if you force resonance treatment of the aluminium monomer. structure a does not contribute, while the others contribute to about 31 %. however, when not forced into
https://api.stackexchange.com
resonance, structure a is the best approximation of the bonding situation for $\ce{AlCl3}$. in the case of $\ce{BCl3}$ the algorithm finds a hyperbond between the chlorine atoms, a strongly delocalised bond between multiple centres. in this case these are 3-centre-4-electron bonds between the chlorine atoms, resulting from the higher lying degenerate Ο€ orbitals. all this is quite good evidence that the monomer of boron chloride should be more stable towards dimerisation than the monomer of aluminium chloride.

the dimers $\ce{(XCl3)2; X{=}\{B, Al\}}$

the obvious change is that the coordination of the central elements goes from trigonal planar to distorted tetrahedral. a look at the geometries will give us something to talk about. $$\begin{array}{llrrr}
\hline
 & \ce{X{=}} & \ce{Al} & \ce{B} & \ce{Cl} \\ \hline
\mathbf{d}(\ce{X-Cl}) & /\pu{pm} & 206.7 & 175.9 & -- \\
\mathbf{d}(\ce{X-{\mu}Cl}) & /\pu{pm} & 226.1 & 198.7 & -- \\
\mathbf{d}(\ce{Cl\bond{~}{\mu}Cl}) & /\pu{pm} & 354.1 & 308.0 & -- \\
\mathbf{d}(\ce{{\mu}Cl\bond{~}{\mu}Cl'}) & /\pu{pm} & 323.6 & 287.3 & -- \\
\mathbf{d}(\ce{X\bond{~}X'}) & /\pu{pm} & 315.7 & 274.7 & -- \\ \hline
\mathbf{r}_\mathrm{vdw} & /\pu{pm} & 240 & 205 & 205 \\
\mathbf{r}_\mathrm{sing} & /\pu{pm} & 126 & 85 & 99 \\
\mathbf{r}_\mathrm{doub} & /\pu{pm} & 113 & 78 & 95 \\ \hline
\end{array}$$ in principle nothing much changes other than the expected elongation of the bonds that are now bridging. in the case of aluminium the stretch is just below 10 % and for boron it is slightly above 14 %, having a bit more impact. in the boron dimer the terminal bonds are also slightly (> +1 %) affected, while for aluminium there is almost no change. the charges are not really a reliable tool, especially when they are as close to zero as they are for boron. in both cases one can see that charge density is transferred from the bridging chlorine to the central $\ce{X}$. $$\begin{array}{lrr}
 & \ce{(AlCl3)2} & \ce{(BCl3)2} \\ \hline
\mathbf{q}(\ce{X})~\text{[npa]} & +1.3 & +0.2 \\
\mathbf{q}(\ce{Cl})~\text{[npa]} & -0.5 & -0.1 \\ \hline
\mathbf{q}(\ce{{\mu}Cl})~\text{[npa]} & -0.4 & +0.1 \\ \hline
\end{array}$$ a look at the central four-membered ring in terms of qtaim shows that the overall bonding does not change. for aluminium the bonds get a little more ionic, while for boron they stay largely covalent. the nbo analysis offers a maybe quite surprising result: there are no hyperbonds in either of the dimers. while a description in these terms is certainly possible (after all it is just an interpretation tool), it is completely unnecessary. so after all we have two kinds of bonds in the dimers: four terminal $\ce{X-Cl}$ bonds and four bridging $\ce{X-{\mu}Cl}$ bonds. therefore the most accurate description is with formal charges (it is also the simplest). the notation with the arrows is not wrong, but it does not represent the fact that the bonds are equal for symmetry reasons alone. to make this clear: there are no hyperbonds in $\ce{(
XCl3)2; X{=}\{B, Al\}}$; this includes three-centre-two-electron bonds and three-centre-four-electron bonds. a deeper insight into those will be offered on another day. the differentiation between a dative bond and some other form of bond does not make sense, as the bonds are equal; the distinction is only introduced by a deficiency of the description model used. a natural resonance theory analysis for $\ce{(BCl3)2}$ gives an overall contribution of 46 % for the main (all single bonds) structure; while all other structures do contribute, there are too many of them and each contributes too little (< 5 %). i did not run this analysis for the aluminium case as i did not expect any more insight and did not want to waste calculation time.

dimerisation - yes or no?

the energies offer us a clear trend: aluminium likes to dimerise, boron does not. however, there are still some things to discuss. i am going to argue for the reaction $$\ce{2 XCl3 -> (XCl3)2} \qquad \delta e_\mathrm{diss}/e_\mathrm{o}/h/g;$$ therefore, if reaction energies are negative, the dimerisation is favoured. the following table includes all calculated energies, including the energy decomposition analysis mentioned at the beginning. all energies are given in $\pu{kJ mol^-1}$. $$\begin{array}{lrcrcrcrr}
\delta & e_\mathrm{diss} & ( & e_\mathrm{int} & +2\times & e_\mathrm{def} & ) & e_\mathrm{o} & h & g \\ \hline
\ce{Al} & -113.5 & ( & -224.2 & +2\times & 55.4 & ) & -114.7 & -60.4 & -230.4 \\
\ce{B} & 76.4 & ( & -111.2 & +2\times & 93.8 & ) & 82.6 & -47.1 & 152.5 \\ \hline
\end{array}$$ the result is fairly obvious at first: the association for aluminium is strongly exergonic, while for boron it is strongly endergonic. while both reactions should be exothermic, more strongly so for aluminium, the trend in the observed electronic energies ($e_\mathrm{o}$, including the zero-point energy correction) and the (electronic) dissociation energies reflects the overall trend of the gibbs enthalpies. while it is fairly surprising how strongly entropy favours the association of $\ce{AlCl3}$, it is also surprising how strongly it disfavours it for $\ce{BCl3}$. a look at the decomposed electronic energy offers great insight into the reasons why one dimer is stable and the other is not (at room temperature). the interaction energy of the fragments is twice as large for aluminium as it is for boron. this can be traced back to the very large difference in the atomic partial charges; one could expect the electrostatic energy to be a lot more attractive for aluminium than for boron. the deformation energy on the other hand clearly reflects the changes in the geometry discussed above. for aluminium there is a smaller penalty resulting from the elongation of the $\ce{Al-Cl}$ bond and pyramidalisation. for boron on the other hand this has a roughly 1.7 times larger effect. the distortion also weakens the Ο€-backbonding, which the additional bonding would need to compensate. the four-membered ring is certainly not an ideal geometry and the bridging chlorine atoms come dangerously close.

conclusion, summary and tl;dr:

the distortion of the geometry of the monomer $\ce{BCl3}$ cannot be compensated by the additional bonding between the two fragments. therefore the monomers are more stable than the dimer. additionally, entropy considerations at room temperature favour the monomer, too. on the other hand, the distortion of the molecular geometry in $\ce{AlCl3}$ is less severe. the gain in interaction energy of the two fragments more than compensates for the change. entropy also favours the dimerisation.
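the entropy claim can be checked by simple arithmetic against the table above: since $\delta g = \delta h - t\delta s$, the entropic contribution is $t\delta s = \delta h - \delta g$. a minimal sketch in python, using only the $h$ and $g$ columns from the table:

```python
# back out t*delta_s from the table above (values in kJ/mol);
# delta_g = delta_h - t*delta_s, hence t*delta_s = delta_h - delta_g
table = {"al": (-60.4, -230.4), "b": (-47.1, 152.5)}  # (h, g)

entropic = {}
for x, (h, g) in table.items():
    # positive t*delta_s means entropy favours the dimerisation
    entropic[x] = round(h - g, 1)

print(entropic)  # {'al': 170.0, 'b': -199.6}
```

a positive value for aluminium and a negative one for boron is exactly the "surprising" pattern described in the text.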
while the size of the central atom is certainly the distinguishing factor, its impact is only severe on the electronic structure. steric crowding would not be a problem if the interaction energy compensated for it. this is quite evident because $\ce{BCl3}$ is still a very good lewis acid and forms stable compounds with moieties much larger than itself.

references

the van der waals radii used
were taken from s. s. batsanov, inorg. mater. 2001, 37 (9), 871-885, and the covalent radii from p. pyykkΓΆ and m. atsumi, chem. eur. j. 2009, 15, 12770-12779. computations were carried out using gaussian 09 rev. d.01 with nbo 6.0. additional analyses were performed with multiwfn 3.3.8. orbital pictures were generated with the incredible chemcraft.
there are some good answers here already but i hope this is a nice short summary : electromagnetic radiation cannot escape a black hole, because it travels at the speed of light. similarly, gravitational radiation cannot escape a black hole either, because it too travels at the speed of light. if gravitational radiation could escape, you could theoretically use it to send a signal from the inside of the black hole to the outside, which is forbidden. a black hole, however, can have an electric charge, which means there is an electric field around it. this is not a paradox because a static electric field is different from electromagnetic radiation. similarly, a black hole has a mass, so it has a gravitational field around it. this is not a paradox either because a gravitational field is different from gravitational radiation. you say the gravitational field carries information about the amount of mass ( actually energy ) inside, but that does not give a way for someone inside to send a signal to the outside, because to do so they would have to create or destroy energy, which is impossible. thus there is no paradox.
the hilbert transform is used to calculate the "analytic" signal. see for example. if your signal is a sine wave or a modulated sine wave, the magnitude of the analytic signal will indeed look like the envelope. however, the computation of the hilbert transform is not trivial: technically it requires a non-causal fir filter of considerable length, so it will require a fair amount of mips, memory and latency. for a broad-band signal, it really depends on how you define "envelope" for your specific application. for your application of dynamic range compression you want a metric that is well correlated with the perception of loudness over time. the hilbert transform is not the right tool for that. a better option would be to apply an a-weighting filter and then a lossy peak or lossy rms detector. this will correlate fairly well with perceived loudness over time and is relatively cheap to do.
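the "lossy peak detector" mentioned above can be sketched as a one-pole attack/release follower. this is a minimal illustration, not production dsp code; the attack/release time constants and sample rate are my own illustrative assumptions:

```python
import math

# one-pole "lossy peak" envelope follower: a cheap alternative to the
# hilbert transform for tracking loudness over time
def envelope(samples, sample_rate=48000, attack_ms=5.0, release_ms=50.0):
    a = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))   # fast rise
    r = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))  # slow decay
    env, out = 0.0, []
    for s in samples:
        x = abs(s)  # rectify
        coeff = a if x > env else r
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# a 10 ms burst followed by 10 ms of silence: the envelope rises
# quickly during the burst and decays slowly afterwards
sig = [1.0] * 480 + [0.0] * 480
env = envelope(sig)
```

the asymmetric attack/release behaviour is what makes such a detector "lossy": it deliberately smooths over individual cycles instead of tracking the instantaneous magnitude the way the analytic signal does.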
this is an interesting question; for a long time it was thought that bacteria do not age. in the meantime there are some new papers which say that bacteria do indeed age. aging can be defined as the accumulation of non-genetic damage (for example oxidative damage to proteins) over time. if too much of this damage accumulates, the cell will eventually die. for bacteria there seems to be an interesting way around this. the second paper cited below found that bacteria do not divide symmetrically into two equivalent daughter cells, but seem to split into one cell which receives more damage and one which receives less. the latter can be called rejuvenated, and this seems to ensure that the bacteria can seemingly divide forever. this strategy limits the non-genetic damage to relatively few cells (if you consider the doubling mechanism), which can eventually die to save the others. have a look at the following publications, which go into detail (the first is a summary of the second but worth reading):

- do bacteria age? biologists discover the answer follows simple economics
- temporal dynamics of bacterial aging and rejuvenation
- aging and death in an organism that reproduces by morphologically symmetric division
i'd say the culprit is the contact area between the two surfaces relative to the deformation. when there are other pieces of paper below it, all the paper is able to deform when you push down, because paper is a fairly soft, deformable fibre. if there is more soft deformable paper below it, the layers are able to bend and stretch more. (a simplified example of this is springs in series, where the overall stiffness decreases when you stack up multiple deformable bodies in a row.) this deformation creates the little indents on the page (and on the pages below it; you can often see on the next page the indents for the words you wrote on the page above). the deeper these indents are, the more of the ballpoint is able to make contact with the surface. if there is barely any deformation, then the flat surface doesn't get to make good contact with the page. this makes it hard for the tip of the pen to actually roll, which is what moves the ink from the cartridge to the tip. it would also make a thinner line due to less contact area. here is an amazing exaggerated illustration i made in microsoft paint: the top one has more pages, the bottom one has fewer. i've obviously exaggerated how much the pages deform, but the idea is that having more pages below will make the indent larger, leading to increased surface area on the pen tip. note that this doesn't really apply to other types of pens. pens that use other ways to get the ink out have less of an issue writing with a solid surface behind; but ballpoint pens are usually less expensive and more common.
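the springs-in-series analogy above can be made quantitative: for stiffnesses $k_1, \dots, k_n$ in series, $1/k_\mathrm{eff} = \sum_i 1/k_i$, so identical sheets stacked $n$ deep are $n$ times softer. a minimal sketch (the per-sheet stiffness and pen force are made-up illustrative numbers, not measurements):

```python
# effective stiffness of springs (sheets) in series:
# 1/k_eff = sum(1/k_i), so n identical sheets are n times softer
def series_stiffness(ks):
    return 1.0 / sum(1.0 / k for k in ks)

k_sheet = 1000.0  # illustrative stiffness of one sheet (N/m)
force = 2.0       # illustrative pen force (N)

for n in (1, 10, 50):
    k_eff = series_stiffness([k_sheet] * n)
    # indentation depth grows linearly with the number of sheets below
    print(n, force / k_eff)
```

this is the whole point of the answer in one line: the same pen force produces a deeper dent, and hence a larger contact area, when more compliant layers sit underneath.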
the best algorithm that is known is to express the factorial as a product of prime powers. one can quickly determine the primes as well as the right power for each prime using a sieve approach. computing each power can be done efficiently using repeated squaring, and then the factors are multiplied together. this was described by peter b. borwein, "on the complexity of calculating factorials", journal of algorithms 6, 376-380, 1985. (pdf) in short, $n!$ can be computed in $o(n (\log n)^3 \log\log n)$ time, compared to the $\omega(n^2 \log n)$ time required when using the definition. what the textbook perhaps meant was the divide-and-conquer method: one can reduce the $n - 1$ multiplications by using the regular pattern of the product. let $n?$ denote $1 \cdot 3 \cdot 5 \dotsm (2n - 1)$ as a convenient notation. rearrange the factors of $(2n)! = 1 \cdot 2 \cdot 3 \dotsm (2n)$ as $$(2n)! = n! \cdot 2^n \cdot 3 \cdot 5 \cdot 7 \dotsm (2n - 1).$$ now suppose $n = 2^k$ for some integer $k > 0$. (this is a useful assumption to avoid complications in the following discussion, and the idea can be extended to general $n$.) then $(2^k)! = (2^{k-1})! \, 2^{2^{k-1}} (2^{k-1})?$ and by expanding this recurrence, $$(2^k)! = \left(2^{2^{k-1} + 2^{k-2} + \dots + 2^0}\right) \prod_{i=0}^{k-1} (2^i)? = \left(2^{2^k - 1}\right) \prod_{i=1}^{k-1} (2^i)?.$$ computing $(2^{k-1})?$ and multiplying the partial products at each stage takes $(k - 2) +
2^{k-1} - 2$ multiplications. this is an improvement by a factor of nearly $2$ over the $2^k - 2$ multiplications needed when just using the definition. some additional operations are required to compute the power of $2$, but in binary arithmetic this can be done cheaply (depending on what precisely is required, it may just require appending $2^k - 1$ zeroes). the following ruby code implements a simplified version of this. it does not avoid recomputing $n?$ even where it could do so:

def oddprod(l, h)
  p = 1
  ml = (l % 2 > 0) ? l : (l + 1)
  mh = (h % 2 > 0) ? h : (h - 1)
  while ml <= mh do
    p *= ml
    ml += 2
  end
  p
end

def fact(k)
  f = 1
  for i in 1..k - 1
    f *= oddprod(3, 2**(i + 1) - 1)
  end
  2**(2**k - 1) * f
end

print fact(15)

even this first-pass code improves on the trivial

f = 1; (1..32768).map { |i| f *= i }; print f

by about 20 % in my testing. with a bit of work, this can be improved further, also removing the requirement that $n$ be a power of $2$ (see the extensive discussion).
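the prime-power approach described at the top of this answer can also be sketched directly. this is a hedged illustration in python rather than a faithful implementation of borwein's algorithm; the function name and the naive sieve are my own, but the two ingredients are the ones named in the text: a sieve for the primes, legendre's formula for each exponent, and repeated squaring (python's built-in `pow`) for the powers:

```python
# n! as a product of prime powers: sieve the primes up to n, find each
# prime's exponent in n! via legendre's formula, compute powers by
# repeated squaring, and multiply the factors together
def factorial_prime_powers(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    result = 1
    for p in (i for i, is_prime in enumerate(sieve) if is_prime):
        # legendre: exponent of p in n! is sum of floor(n / p^j)
        e, q = 0, n
        while q:
            q //= p
            e += q
        result *= pow(p, e)  # repeated squaring under the hood
    return result

print(factorial_prime_powers(10))  # 3628800
```

the win comes from the fact that most of the work is shifted into a few large-power computations, which repeated squaring handles in logarithmically many multiplications each.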
your trouble with determinants is pretty common. they're a hard thing to teach well, too, for two main reasons that i can see: the formulas you learn for computing them are messy and complicated, and there's no "natural" way to interpret the value of the determinant, the way it's easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. it's hard to believe things like the invertibility condition you've stated when it's not even clear what the numbers mean and where they come from. rather than show that the many usual definitions are all the same by comparing them to each other, i'm going to state some general properties of the determinant that i claim are enough to specify uniquely what number you should get when you put in a given matrix. then it's not too bad to check that all of the definitions for determinant that you've seen satisfy those properties. the first thing to think about if you want an "abstract" definition of the determinant to unify all those others is that it's not an array of numbers with bars on the side. what we're really looking for is a function that takes n vectors (the n columns of the matrix) and returns a number. let's assume we're working with real numbers for now. remember how those operations you mentioned change the value of the determinant?

1. switching two rows or columns changes the sign.
2. multiplying one row by a constant multiplies the whole determinant by that constant.
3. the general fact that number two draws from: the determinant is linear in each row. that is, if you think of it as a function $\det : \mathbb{R}^{n^2} \rightarrow \mathbb{R}$, then $$\det(a \vec v_1 + b \vec w_1, \vec v_2, \ldots, \vec v_n) = a \det(\vec v_1, \vec v_2, \ldots, \vec v_n) + b \det(\vec w_1, \vec v_2, \ldots, \vec v_n),$$ and the corresponding condition in each other slot.
4. the determinant of the identity matrix $i$
is $ 1 $. i claim that these facts are enough to define a unique function that takes in n vectors ( each of length n ) and returns a real number, the determinant of the matrix given by those vectors. i won ’ t prove that, but i ’ ll show you how it helps with some other interpretations of the determinant. in particular, there ’ s a nice geometric way to think of a determinant. consider the unit cube in n dimensional space : the set of n vectors of length 1 with coordinates 0 or 1 in each spot. the determinant of the linear transformation ( matrix ) t is the signed volume of the region gotten by applying t to the unit cube. ( don ’ t worry too much if you don ’ t know what the β€œ signed ” part means, for now ). how does that follow from our abstract definition? well, if you apply the identity to the unit cube, you get back the unit cube. and the volume of the unit cube is 1. if you stretch the cube by a constant factor in one direction only, the new volume is that constant. and if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes : this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors. finally, when you switch two of the vectors that define the unit cube, you flip the orientation. ( again, this is something to come back to later if you don ’ t know what that means ). so there are ways to think about the determinant that aren ’ t symbol - pushing. if you ’ ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants ( the jacobian ) pop up when we change coordinates doing integration. hint : a derivative is a linear approximation of the associated function, and consider a β€œ differential volume element ” in your starting coordinate system. 
it's not too much work to check that the area of the parallelogram formed by vectors $(a, b)$ and $(c, d)$ is $\left|\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right|$ either: you might try that to get a sense for things.
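the three defining properties above can be checked numerically in the 2x2 case, where the determinant reduces to $ad - bc$. a small sketch (the helper name `det2` is mine, just for illustration):

```python
# signed area of the parallelogram spanned by v = (a, b) and w = (c, d)
def det2(v, w):
    (a, b), (c, d) = v, w
    return a * d - b * c

# property 1: swapping the two vectors flips the sign
assert det2((2, 1), (0, 3)) == -det2((0, 3), (2, 1))
# property 2/3: the determinant is linear in each slot
assert det2((4, 2), (0, 3)) == 2 * det2((2, 1), (0, 3))
# property 4: the identity (the unit square) has determinant 1
assert det2((1, 0), (0, 1)) == 1
```

this is exactly the "signed volume of the image of the unit cube" picture in two dimensions: swapping vectors flips orientation, scaling a side scales the area, and the unit square has area 1.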
here's a graphic i use to explain the difference in my general chemistry courses: all electrons that have the same value for $n$ (the principal quantum number) are in the same shell. within a shell (same $n$), all electrons that share the same $l$ (the angular momentum quantum number, or orbital shape) are in the same sub-shell. when electrons share the same $n$, $l$, and $m_l$, we say they are in the same orbital (they have the same energy level, shape, and orientation). so to summarize:

same $n$ - shell
same $n$ and $l$ - sub-shell
same $n$, $l$, and $m_l$ - orbital

now, in the other answer, there is some discussion about spin-orbitals, meaning that each electron would exist in its own orbital. for practical purposes, you don't need to worry about that - by the time those sorts of distinctions matter to you, there won't be any confusion about what people mean by "shells" and "sub-shells." for you, for now, orbital means "place where up to two electrons can exist," and they will both share the same $n$, $l$, and $m_l$ values, but have opposite spins ($m_s$).
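the $n$ / $l$ / $m_l$ hierarchy above can be enumerated mechanically, since $l$ runs from $0$ to $n-1$ and $m_l$ from $-l$ to $+l$. a tiny sketch (the helper name is mine, purely illustrative):

```python
# enumerate the orbitals (n, l, m_l) belonging to a given shell n
def orbitals(n):
    return [(n, l, ml) for l in range(n) for ml in range(-l, l + 1)]

# for n = 2 there are two sub-shells (l = 0 and l = 1: "2s" and "2p")
# and four orbitals, holding up to eight electrons (two per orbital)
shell = orbitals(2)
print(shell)  # [(2, 0, 0), (2, 1, -1), (2, 1, 0), (2, 1, 1)]
```

counting pairs of electrons per orbital recovers the familiar shell capacities $2n^2$: 2 for $n=1$, 8 for $n=2$, 18 for $n=3$.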
it is possible to write most specific finite difference methods as petrov-galerkin finite element methods with some choice of local reconstruction and quadrature, and most finite element methods can also be shown to be algebraically equivalent to some finite difference method. therefore, we should choose a method based on which analysis framework we want to use, which terminology we like, which system for extensibility we like, and how we would like to structure software. the following generalizations hold true in the vast majority of variations in practical use, but many points can be circumvented.

finite difference pros:
- efficient quadrature-free implementation
- aspect ratio independence and local conservation for certain schemes (e.g. mac for incompressible flow)
- robust nonlinear methods for transport (e.g. eno/weno)
- m-matrix for some problems
- discrete maximum principle for some problems (e.g. mimetic finite differences)
- diagonal (usually identity) mass matrix
- inexpensive nodal residual permits efficient nonlinear multigrid (fas)
- cell-wise vanka smoothers give efficient matrix-free smoothers for incompressible flow

finite difference cons:
- more difficult to implement "physics"
- staggered grids are sometimes quite technical
- higher than second order on unstructured grids is difficult
- no galerkin orthogonality, so convergence may be more difficult to prove
- not a galerkin method, so discretization and adjoints do not commute (relevant to optimization and inverse problems)
- self-adjoint continuum problems often yield non-symmetric matrices
- solution is only defined pointwise, so reconstruction at arbitrary locations is not uniquely defined
- boundary conditions tend to be complicated to implement
- discontinuous coefficients usually make the methods first order
- stencil grows if physics includes "cross terms"

finite element pros:
- galerkin orthogonality (discrete solution to coercive problems is within a constant of the best solution in the space)
- simple
geometric flexibility
- discontinuous galerkin offers a robust transport algorithm, arbitrary order on unstructured grids
- cellwise entropy inequality guaranteeing $l^2$ stability holds independent of mesh, dimension, order of accuracy, and presence of discontinuous solutions, without needing nonlinear limiters
- ease of implementing boundary conditions
- can choose conservation statement by choosing test space
- discretization and adjoints commute (for galerkin methods)
- elegant foundation in functional analysis
- at high order, local kernels can exploit tensor product structure that is missing with fd
- lobatto quadrature can make methods energy-conserving (
assuming a symplectic time integrator)
- high order accuracy even with discontinuous coefficients, as long as you can align to boundaries
- discontinuous coefficients inside elements can be accommodated with xfem
- easy to handle multiple inf-sup conditions

finite element cons:
- many elements have trouble at high aspect ratio
- continuous fem has trouble with transport (supg is diffusive and oscillatory)
- dg usually has more degrees of freedom for same accuracy (though hdg is much better)
- continuous fem does not provide cheap nodal problems, so nonlinear smoothers have much poorer constants
- usually more nonzeros in assembled matrices
- have to choose between consistent mass matrix (some nice properties, but has full inverse, thus requiring an implicit solve per time step) and lumped mass matrix
apart from bitcoin and ethereum (if we are generous) there are no major and important uses today. it is important to notice that blockchains have some severe limitations, a couple of them being:

- it only really works for purely digital assets
- the digital asset under control needs to keep its value even if it's public
- all transactions need to be public
- a rather bad confirmation time
- smart contracts are scary

purely digital assets

if an asset is actually a physical asset with just a digital "twin" that is being traded, we risk that the local jurisdiction (i.e. your law enforcement) can have a different opinion of ownership than what is on the blockchain. to take an example: suppose that we are trading (real and physical) bikes on the blockchain, and that on the blockchain we put the bike's serial number. suppose further that i hack your computer and set the ownership of your bike to me. now, if you go to the police, you might be able to convince them that the real owner of the bike is you, and thus i have to give it back. however, there is no way of making me give you the digital twin back, thus there is a dissonance: the bike is owned by you, but the blockchain claims it's owned by me. there are many such proposed use cases (trading physical goods on a blockchain) out in the open: trading bikes, diamonds, and even oil.

the digital assets keep value even if public

there are many examples where people want to put assets on the blockchain, but are somehow under the impression that that gives some kind of control. for instance, musician imogen heap is creating a product in which all musicians should put their music on the blockchain and automatically be paid when a radio station plays their hit song. they are under the impression that this creates an automatic link between playing the song and paying for the song. the only thing it really does is to create a very large database of music which is probably quite easy to download.
there is currently no way around having to put the full asset visible on the chain. some people are talking about "encryption", "storing only the hash", etc., but in the end it all comes down to: publish the asset, or don't participate.

public transactions

in business it is often important to keep your cards close to your chest. you don't want real-time exposure of your daily operations. some people try to make solutions
where we put all the dairy farmers' production on the blockchain together with all the dairy stores' inventory. in this way we can easily send trucks to the correct places! however, this makes both farmers and traders liable for inflated prices if they are overproducing or under-stocked. other people want to put energy production (solar panels, wind farms) on the blockchain. however, no serious energy producer will have real-time production data out in the public. this has a major impact on the stock value, and that kind of information is the type you want to keep close to your chest. this also holds for so-called green certificates, where you ensure you only use "green energy". note: there are theoretical solutions that build on zero-knowledge proofs that would allow transactions to be secret. however, these are nowhere near practical yet, and time will show if this item can be fixed.

confirmation time

you can, like ethereum, make the block time as small as you would like. in bitcoin the block time is 10 minutes, and in ethereum it is less than a minute (i don't remember the specific figure). however, the smaller the block time, the higher the chance of long-lived forks. to ensure your transaction is confirmed you still have to wait quite a long time. there are currently no good solutions here either.

smart contracts are scary

smart contracts are difficult to write. they are computer programs that move assets from one account to another (or something more complicated). however, we want traders and "normal" people to be able to write these contracts, and not have to rely on expert programmers. you can't undo a transaction. this is a tough nut to crack! if you are doing high-value trading and end up writing a zero too much in the transaction (say \$10m instead of \$1m), you call your bank immediately! that fixes it. if not, let's hope you have insurance. in a blockchain setting, you have neither a bank nor insurance.
those \$9m are gone, and it was due to a typo in a smart contract or in a transaction. smart contracts are really playing with fire: it's too easy to empty all your assets in a single click. and it has happened, several times; people have lost hundreds of millions of dollars due to smart contract errors. source: i am working for an energy company doing wind and solar energy production as well as trading oil and gas
and have been working on blockchain solution projects.
let's start with a triviality: a deep neural network is simply a feedforward network with many hidden layers. this is more or less all there is to say about the definition. neural networks can be recurrent or feedforward; feedforward ones do not have any loops in their graph and can be organized in layers. if there are "many" layers, then we say that the network is deep. how many layers does a network have to have in order to qualify as deep? there is no definite answer to this (it's a bit like asking how many grains make a heap), but usually having two or more hidden layers counts as deep. in contrast, a network with only a single hidden layer is conventionally called "shallow". i suspect that there will be some inflation going on here, and in ten years people might think that anything with fewer than, say, ten layers is shallow and suitable only for kindergarten exercises. informally, "deep" suggests that the network is tough to handle. here is an illustration, adapted from here: but the real question you are asking is, of course, why would having many layers be beneficial? i think that the somewhat astonishing answer is that nobody really knows. there are some common explanations that i will briefly review below, but none of them has been convincingly demonstrated to be true, and one cannot even be sure that having many layers is really beneficial. i say that this is astonishing because deep learning is massively popular, is breaking all the records (from image recognition, to playing go, to automatic translation, etc.) every year, is being used in industry, etc. etc., and we are still not quite sure why it works so well. i base my discussion on the deep learning book by goodfellow, bengio, and courville, which came out in 2016 and is widely considered to be the book on deep learning. (it's freely available online.) the relevant section is 6.4.1, "universal approximation properties and depth".
you wrote that 10 years ago in class i learned that having several layers or one layer (not counting the input and output layers) was equivalent in terms of the functions a neural network is able to represent [...] you must be referring to the so-called universal approximation theorem, proved by cybenko in 1989 and generalized by various people in the 1990s. it basically says that a shallow neural network (with one hidden layer) can approximate any continuous function to arbitrary accuracy, i.e. can
in principle learn anything. this is true for various nonlinear activation functions, including the rectified linear units that most neural networks are using today (the textbook references leshno et al. 1993 for this result). if so, then why is everybody using deep nets? well, a naive answer is that they work better. here is a figure from the deep learning book showing that it helps to have more layers in one particular task, but the same phenomenon is often observed across various tasks and domains: we know that a shallow network could perform as well as the deeper ones, but it usually does not. the question is - why? possible answers:

1. maybe a shallow network would need more neurons than the deep one?
2. maybe a shallow network is more difficult to train with our current algorithms (e.g. it has more nasty local minima, or the convergence rate is slower, or whatever)?
3. maybe a shallow architecture does not fit the kind of problems we are usually trying to solve (e.g. object recognition is a quintessentially "deep", hierarchical process)?
4. something else?

the deep learning book argues for bullet points #1 and #3. first, it argues that the number of units in a shallow network grows exponentially with task complexity. so in order to be useful a shallow network might need to be very big, possibly much bigger than a deep network. this is based on a number of papers proving that shallow networks would in some cases need exponentially many neurons; but whether e.g. mnist classification or go playing are such cases is not really clear. second, the book says this: choosing a deep model encodes a very general belief that the function we want to learn should involve composition of several simpler functions.
This can be interpreted from a representation-learning point of view as saying that we believe the learning problem consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation. I think the current "consensus" is that it's a combination of bullet points #1 and #3: for real-world tasks deep architectures are often beneficial, and shallow architectures would be inefficient and require a lot more neurons for the same performance. But it's far from proven. Consider e.g. Zagoruyko and Komodakis, 2016, Wide Residual Networks. Residual networks with 150+ layers appeared in 2015 and won various image recognition contests. This was a big success
and looked like a compelling argument in favour of depth; here is one figure from a presentation by the first author of the residual network paper (note that time confusingly goes to the left here). But the paper linked above shows that a "wide" residual network with "only" 16 layers can outperform "deep" ones with 150+ layers. If this is true, then the whole point of the above figure breaks down. Or consider Ba and Caruana, 2014, Do Deep Nets Really Need to be Deep?: "In this paper we provide empirical evidence that shallow nets are capable of learning the same function as deep nets, and in some cases with the same number of parameters as the deep nets. We do this by first training a state-of-the-art deep model, and then training a shallow model to mimic the deep model. The mimic model is trained using the model compression scheme described in the next section. Remarkably, with model compression we are able to train shallow nets to be as accurate as some deep models, even though we are not able to train these shallow nets to be as accurate as the deep nets when the shallow nets are trained directly on the original labeled training data. If a shallow net with the same number of parameters as a deep net can learn to mimic a deep net with high fidelity, then it is clear that the function learned by that deep net does not really have to be deep." If true, this would mean that the correct explanation is rather my bullet #2, and not #1 or #3. As I said, nobody really knows for sure yet. Concluding remarks: the amount of progress achieved in deep learning over the last ~10 years is truly amazing, but most of this progress was achieved by trial and error, and we still lack very basic understanding of what exactly makes deep nets work so well. Even the list of things that people consider crucial for setting up an effective deep network seems to change every couple of years.
The deep learning renaissance started in 2006 when Geoffrey Hinton (who had been working on neural networks for 20+ years without much interest from anybody) published a couple of breakthrough papers offering an effective way to train deep networks (Science paper, Neural Computation paper). The trick was to use unsupervised pre-training before starting the gradient descent. These papers revolutionized the field, and for a couple of years people thought that unsupervised pre-training was the key. Then in 2010 Martens showed that deep
neural networks can be trained with second-order methods (so-called Hessian-free methods) and can outperform networks trained with pre-training: Deep Learning via Hessian-Free Optimization. Then in 2013 Sutskever et al. showed that stochastic gradient descent with some very clever tricks can outperform Hessian-free methods: On the Importance of Initialization and Momentum in Deep Learning. Also, around 2010 people realized that using rectified linear units instead of sigmoid units makes a huge difference for gradient descent. Dropout appeared in 2014. Residual networks appeared in 2015. People keep coming up with more and more effective ways to train deep networks, and what seemed like a key insight 10 years ago is often considered a nuisance today. All of that is largely driven by trial and error, and there is little understanding of what makes some things work so well and some other things not. Training deep networks is like a big bag of tricks; successful tricks are usually rationalized post factum. We don't even know why deep networks reach a performance plateau; just 10 years ago people used to blame local minima, but the current thinking is that this is not the point (when the performance plateaus, the gradients tend to stay large). This is such a basic question about deep networks, and we don't even know the answer. Update: this is more or less the subject of Ali Rahimi's NIPS 2017 talk on machine learning as alchemy. [This answer was entirely re-written in April 2017, so some of the comments below do not apply anymore.]
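As an aside, the universal approximation claim itself is easy to poke at numerically. Here is a toy sketch (my own illustration, not from the textbook or papers cited above): a single hidden layer of ReLU units with random, untrained hidden weights, where only the output layer is fit by linear least squares, already approximates a smooth target function closely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [-3, 3].
x = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(2 * x).ravel()

# One hidden layer of ReLU units with random weights and biases;
# only the output layer is fit (here by linear least squares).
n_hidden = 200
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.maximum(0.0, x @ W + b)          # hidden activations, shape (400, 200)

coef, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ coef

max_err = np.max(np.abs(y - y_hat))
print(f"max approximation error: {max_err:.4f}")
```

Of course this says nothing about how many units a shallow net needs on a hard task, which is exactly the open question above.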
Let me add the following graphic to the great answers already given, with the intention of giving a specific and clear answer to the question posed. The other answers detail what linear phase is; this details why it is important, in one graphic: when a filter has linear phase, then all the frequencies within that signal will be delayed the same amount in time (as described mathematically in fat32's answer). When a filter has non-linear phase, individual frequencies or bands of frequencies within the spectrum of the signal are delayed by different amounts in time. Any signal can be decomposed (via a Fourier series) into separate frequency components. When the signal gets delayed through any channel (such as a filter), as long as all of those frequency components get delayed the same amount, the same signal (the signal of interest, within the passband of the channel) will be recreated after the delay. Consider a square wave, which through the Fourier series expansion is shown to be made up of an infinite number of odd harmonic frequencies. In the graphic above I show the summation of the first three components. If these components are all delayed the same amount, the waveform of interest is intact when these components are summed. However, significant group delay distortion will result if each frequency component gets delayed a different amount in time. The following may help give additional intuitive insight for those with some RF or analog background. Consider an ideal lossless broadband delay line (such as approximated by a length of coaxial cable), which can pass wideband signals without distortion. The transfer function of such a cable is shown in the graphic below, having a magnitude of 1 for all frequencies (given it is lossless) and a phase negatively increasing in direct linear proportion to frequency. The longer the cable, the steeper the slope of the phase, but in all cases "linear phase".
This is also consistent with the equation for group delay, which is the negative derivative of phase with respect to frequency. This makes sense; the phase delay of a 1 Hz signal passing through a cable with a 1 second delay will be 360Β°, while a 2 Hz signal with the same delay will be 720Β°, etc. Bringing this back to the digital world, $z^{-1}$ is the z-transform of a one-sample delay (therefore a delay line), with a frequency response similar to what is shown, just in terms of $H(z)$: a constant magnitude of 1 and a phase that goes linearly from $0$ to $
-2\pi$ as $f$ goes from 0 Hz to $f_s$ (the sampling rate). The simplest mathematical explanation is that a phase that is linear in frequency and a constant delay are a Fourier transform pair. This is the shift property of the Fourier transform: a constant time delay of $\tau$ seconds results in a linear phase in frequency, $-\omega\tau$, where $\omega$ is the angular frequency axis in radians/sec:

$$\mathscr{F}\{g(t-\tau)\} = \int_{-\infty}^{\infty} g(t-\tau)\, e^{-j\omega t}\, dt$$

Substituting $u = t-\tau$:

$$= \int_{-\infty}^{\infty} g(u)\, e^{-j\omega (u+\tau)}\, du$$
$$= e^{-j\omega\tau} \int_{-\infty}^{\infty} g(u)\, e^{-j\omega u}\, du$$
$$= e^{-j\omega\tau}\, G(j\omega)$$

If this post was helpful, I provide more intuitive details such as this in online courses on DSP that are combined with live workshops. You can find more details on current course offerings here: dsp_coach.com
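The "constant delay = linear phase" relationship is easy to verify numerically with the discrete-time (circular) version of the shift theorem; here is a short numpy sketch (the signal and delay length are arbitrary choices):

```python
import numpy as np

N, k = 256, 5                      # FFT length and integer delay in samples
x = np.random.default_rng(1).normal(size=N)
xd = np.roll(x, k)                 # circularly delay x by k samples

X, Xd = np.fft.fft(x), np.fft.fft(xd)
f = np.fft.fftfreq(N)              # frequency in cycles/sample

# Shift theorem: Xd = X * exp(-j*2*pi*f*k), so the magnitude is
# unchanged and the phase picks up a term linear in frequency.
assert np.allclose(np.abs(X), np.abs(Xd))
assert np.allclose(Xd, X * np.exp(-2j * np.pi * f * k))
```

Every frequency component is delayed by the same k samples, which is exactly the linear-phase condition described above.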
Strategy. I would like to apply rational decision theory to the analysis, because that is one well-established way to attain rigor in solving a statistical decision problem. In trying to do so, one difficulty emerges as special: the alteration of SB's consciousness. Rational decision theory has no mechanism to handle altered mental states. In asking SB for her credence in the coin flip, we are simultaneously treating her in a somewhat self-referential manner both as subject (of the SB experiment) and experimenter (concerning the coin flip). Let's alter the experiment in an inessential way: instead of administering the memory-erasure drug, prepare a stable of Sleeping Beauty clones just before the experiment begins. (This is the key idea, because it helps us resist distracting, but ultimately irrelevant and misleading, philosophical issues.) The clones are like her in all respects, including memory and thought. SB is fully aware this will happen. We can clone, in principle. E. T. Jaynes replaces the question "how can we build a mathematical model of human common sense", something we need in order to think through the Sleeping Beauty problem, by "how could we build a machine which would carry out useful plausible reasoning, following clearly defined principles expressing an idealized common sense?" Thus, if you like, replace SB by Jaynes' thinking robot, and clone that. (There have been, and still are, controversies about "thinking" machines: "They will never make a machine to replace the human mind, it does many things which no machine could ever do." "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" -- J. von Neumann, 1948. Quoted by E. T. Jaynes in Probability Theory: The Logic of Science, p. 4.
) -- Rube Goldberg

The Sleeping Beauty experiment restated. Prepare $n \ge 2$ identical copies of SB (including SB herself) on Sunday evening. They all go to sleep at the same time, potentially for 100 years. Whenever you need to awaken SB during the experiment, randomly select a clone who has not yet been awakened. Any awakenings will occur on Monday and, if needed, on Tuesday. I claim that this version of the experiment creates exactly the same set of possible results, right down to SB's mental states and awareness, with exactly the same
probabilities. This potentially is one key point where philosophers might choose to attack my solution. I claim it's the last point at which they can attack it, because the remaining analysis is routine and rigorous. Now we apply the usual statistical machinery. Let's begin with the sample space (of possible experimental outcomes). Let $M$ mean "awakens Monday" and $T$ mean "awakens Tuesday." Similarly, let $H$ mean "heads" and $T$ mean "tails". Subscript the clones with integers $1, 2, \ldots, n$. Then the possible experimental outcomes can be written (in what I hope is a transparent, self-evident notation) as the set
$$\eqalign{\{ & HM_1, HM_2, \ldots, HM_n, \\ & (TM_1, TT_2), (TM_1, TT_3), \ldots, (TM_1, TT_n), \\ & (TM_2, TT_1), (TM_2, TT_3), \ldots, (TM_2, TT_n), \\ & \cdots, \\ & (TM_n, TT_1), (TM_n, TT_2), \ldots, (TM_n, TT_{n-1}) & \}. }$$
Monday probabilities. As one of the SB clones, you figure your chance of being awakened on Monday during a heads-up experiment is ($1/2$ chance of heads) times ($1/n$ chance I'm picked to be the clone who is awakened). In more technical terms: the set of heads outcomes is $H = \{HM_j, j = 1, 2, \ldots, n\}$. There are $n$ of them. The event where you are awakened with heads is $H(i) = \{HM_i\}$. The chance of any particular SB clone $i$ being awakened with the coin showing heads equals
$$\Pr[H(i)] = \Pr[H] \times \Pr[H(i) \mid H] = \frac{1}{2} \times \frac{1}{n} = \frac{1}
{2n}.$$
Tuesday probabilities. The set of tails outcomes is $T = \{(TM_j, TT_k) : j \ne k\}$. There are $n(n-1)$ of them. All are equally likely, by design. You, clone $i$, are awakened in $(n-1) + (n-1) = 2(n-1)$ of these cases; namely, the $n-1$ ways you can be awakened on Monday (there are $n-1$ remaining clones to be awakened Tuesday) plus the $n-1$ ways you can be awakened on Tuesday (there are $n-1$ possible Monday clones). Call this event $T(i)$. Your chance of being awakened during a tails-up experiment equals
$$\Pr[T(i)] = \Pr[T] \times \Pr[T(i) \mid T] = \frac{1}{2} \times \frac{2(n-1)}{n(n-1)} = \frac{1}{n}.$$
Bayes' theorem. Now that we have come this far, Bayes' theorem, a mathematical tautology beyond dispute, finishes the work. Any clone's chance of heads is therefore
$$\Pr[H \mid T(i) \cup H(i)] = \frac{\Pr[H]\Pr[H(i) \mid H]}{\Pr[H]\Pr[H(i) \mid H] + \Pr[T]\Pr[T(i) \mid T]} = \frac{1/(2n)}{1/(2n) + 1/n} = \frac{1}{3}.$$
Because SB is indistinguishable from her clones, even to herself, this is the answer she should give when asked for her degree of belief in heads. Interpretations. The question "what is the probability of heads" has two reasonable interpretations for this experiment: it can ask for the chance a fair coin lands heads, which is $\Pr[H] = 1/2$ (the halfer answer), or it can ask for the chance the coin lands heads, conditioned on the fact that you were the clone awakened. This is $\Pr[H
\mid T(i) \cup H(i)] = 1/3$ (the thirder answer). In the situation in which SB (or rather any one of a set of identically prepared Jaynes thinking machines) finds herself, this analysis, which many others have performed (but, I think, less convincingly, because they did not so clearly remove the philosophical distractions from the experimental descriptions), supports the thirder answer. The halfer answer is correct, but uninteresting, because it is not relevant to the situation in which SB finds herself. This resolves the paradox. This solution is developed within the context of a single well-defined experimental setup. Clarifying the experiment clarifies the question; a clear question leads to a clear answer. Comments. I guess that, following Elga (2000), you could legitimately characterize our conditional answer as "count[ing] your own temporal location as relevant to the truth of H," but that characterization adds no insight to the problem: it only detracts from the mathematical facts in evidence. To me it appears to be just an obscure way of asserting that the "clones" interpretation of the probability question is the correct one. This analysis suggests that the underlying philosophical issue is one of identity: what happens to the clones who are not awakened? What cognitive and noetic relationships hold among the clones? But that discussion is not a matter of statistical analysis; it belongs on a different forum.
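For anyone who prefers simulation to the algebra above, here is a small Monte Carlo sketch (my own addition, not part of the original argument). It simulates the standard protocol, one awakening on heads and two on tails, which the clone version matches in distribution, and counts the fraction of awakenings that occur with the coin showing heads:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 200_000

# One fair coin flip per run of the experiment.
heads = rng.random(trials) < 0.5

# Tally awakenings: each heads run contributes 1 awakening (with heads),
# each tails run contributes 2 awakenings (with tails).
awakenings_heads = np.sum(heads)
awakenings_total = np.sum(heads) + 2 * np.sum(~heads)

p_heads_given_awakened = awakenings_heads / awakenings_total
print(round(p_heads_given_awakened, 3))   # close to 1/3
```

The unconditional chance of heads stays at 1/2; it is conditioning on "I have just been awakened" that gives 1/3, exactly as the Bayes computation says.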
Those are isolated turtle bones: specifically, they are part of the carapace, or upper shell. The projections would articulate with the backbone. The "toothlike" structure at the other end projects down toward the margin of the shell. Based on the size, and the fact that you are in Missouri, I'm guessing they are snapping turtle bones. Here's a photo of the inside of a snapping turtle shell: they are a little hard to make out, but you can faintly see the marginal projections.
Converting full history to limited history. This is a first step in solving recurrences where the value at any integer depends on the values at all smaller integers. Consider, for example, the recurrence
$$T(n) = n + \frac{1}{n} \sum_{k=1}^n \big(T(k-1) + T(n-k)\big)$$
which arises in the analysis of randomized quicksort. (Here, $k$ is the rank of the randomly chosen pivot.) For any integer $n$, the value of $T(n)$ depends on all $T(k)$ with $k < n$. Recurrences of this form are called full history recurrences. To solve this recurrence, we can transform it into a limited history recurrence, where $T(n)$ depends on only a constant number of previous values. But first, it helps to simplify the recurrence a bit, to collect common terms and eliminate pesky fractions:
\begin{align*} n T(n) &= n^2 + 2 \sum_{k=1}^{n-1} T(k) \end{align*}
Now to convert to a limited-history recurrence, we write down the recurrence for $T(n-1)$, subtract, and regather terms:
\begin{align*} (n-1)\, T(n-1) &= (n-1)^2 + 2 \sum_{k=1}^{n-2} T(k) \\ \implies n T(n) - (n-1) T(n-1) &= (2n-1) + 2 T(n-1) \\[1ex] \implies n T(n) &= (2n-1) + (n+1) T(n-1) \\[1ex] \implies \frac{T(n)}{n+1} &= \frac{2n-1}{n(n+1)} + \frac{T(n-1)}{n} \end{align*}
Now if we define $t(n) = T(n)/(n+1)$ and replace the fraction
$\frac{2n-1}{n(n+1)}$ with the simpler asymptotic form $\Theta(1/n)$, we obtain the much simpler recurrence
$$t(n) = \Theta(1/n) + t(n-1).$$
Expanding this recurrence into a summation immediately gives us $t(n) = \Theta(H_n) = \Theta(\log n)$, where $H_n$ is the $n$th harmonic number. We conclude that $\boldsymbol{T(n) = \Theta(n \log n)}$.
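We can sanity-check this conclusion by iterating the original full-history recurrence numerically and watching the ratio $T(n)/(n \ln n)$ settle. (For this quicksort recurrence the ratio approaches the constant 2, though it does so quite slowly because of a $1/\ln n$ correction term.)

```python
import math

# Full-history recurrence from the quicksort analysis:
# T(n) = n + (1/n) * sum_{k=1..n} (T(k-1) + T(n-k)),  T(0) = 0,
# which simplifies to n*T(n) = n^2 + 2*sum_{k=1..n-1} T(k).
N = 20000
T = [0.0] * (N + 1)
running_sum = 0.0                 # maintains sum of T(1..n-1)
for n in range(1, N + 1):
    T[n] = (n * n + 2.0 * running_sum) / n
    running_sum += T[n]

# If T(n) = Theta(n log n), the ratio T(n) / (n ln n) should stabilize.
r1 = T[N // 2] / ((N // 2) * math.log(N // 2))
r2 = T[N] / (N * math.log(N))
print(r1, r2)                     # both near 2, slowly increasing
```

Keeping a running sum makes each step O(1), so the whole check is O(N) rather than the O(N^2) a naive evaluation of the full history would cost.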
In fact, the output vectors are not computed from the input using any mathematical operation. Instead, each input integer is used as the index to access a table that contains all possible vectors. That is the reason why you need to specify the size of the vocabulary as the first argument (so the table can be initialized). The most common application of this layer is for text processing. Let's see a simple example. Our training set consists of only two phrases: "hope to see you soon" and "nice to see you again". So we can encode these phrases by assigning each word a unique integer (by order of appearance in our training dataset, for example). Then our phrases could be rewritten as:

[0, 1, 2, 3, 4]
[5, 1, 2, 3, 6]

Now imagine we want to train a network whose first layer is an embedding layer. In this case, we should initialize it as follows:

Embedding(7, 2, input_length=5)

The first argument (7) is the number of distinct words in the training set. The second argument (2) indicates the size of the embedding vectors. The input_length argument, of course, determines the size of each input sequence. Once the network has been trained, we can get the weights of the embedding layer, which in this case will be of size (7, 2) and can be thought of as the table used to map integers to embedding vectors:

+-------+------------+
| index | embedding  |
+-------+------------+
|   0   | [1.2, 3.1] |
|   1   | [0.1, 4.2] |
|   2   | [1.0, 3.1] |
|   3   | [0.3, 2.1] |
|   4   | [2.2, 1.4] |
|   5   | [0.7, 1.7] |
|   6   | [4.1, 2.0] |
+-------+------------+

So according to these embeddings, our second training phrase will be represented as:

[[0.7, 1
.7], [0.1, 4.2], [1.0, 3.1], [0.3, 2.1], [4.1, 2.0]]

It might seem counterintuitive at first, but the underlying automatic differentiation engines (e.g., TensorFlow or Theano) manage to optimize these vectors associated with each input integer just like any other parameter of your model. For an intuition of how this table lookup can be implemented as a mathematical operation which can be handled by the automatic differentiation engines, consider the embeddings table from the example as a (7, 2) matrix. Then, for a given word, you create a one-hot vector based on its index and multiply it by the embeddings matrix, effectively replicating a lookup. For instance, for the word "soon" the index is 4, and the one-hot vector is [0, 0, 0, 0, 1, 0, 0]. If you multiply this (1, 7) matrix by the (7, 2) embeddings matrix you get the desired two-dimensional embedding, which in this case is [2.2, 1.4]. It is also interesting to use embeddings learned by other methods/people in different domains, as done in [1].

[1] Lopez-Sanchez, D., Herrero, J. R., Arrieta, A. G., & Corchado, J. M. Hybridizing metric learning and case-based reasoning for adaptable clickbait detection. Applied Intelligence, 1-16.
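Here is a quick numpy sketch of that one-hot equivalence, using the example table above:

```python
import numpy as np

# The (7, 2) embeddings table from the example above.
E = np.array([[1.2, 3.1],
              [0.1, 4.2],
              [1.0, 3.1],
              [0.3, 2.1],
              [2.2, 1.4],
              [0.7, 1.7],
              [4.1, 2.0]])

# Embedding lookup for the word "soon" (index 4), done two ways:
direct = E[4]                    # plain table lookup (what the layer does)

one_hot = np.zeros(7)
one_hot[4] = 1.0
via_matmul = one_hot @ E         # one-hot vector times embeddings matrix

assert np.allclose(direct, via_matmul)
print(via_matmul)                # [2.2 1.4]
```

In practice frameworks use the direct indexed lookup, since multiplying by a one-hot vector wastes memory and compute; the matrix product is just the conceptual justification for why gradients flow into the table.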
0th quartile = 0 quantile = 0th percentile
1st quartile = 0.25 quantile = 25th percentile
2nd quartile = 0.5 quantile = 50th percentile (median)
3rd quartile = 0.75 quantile = 75th percentile
4th quartile = 1 quantile = 100th percentile
Given the large eyes, the almost non-existent antennae, the humped back, the elongated abdomen and the wings, I'd say it is a robber fly. It is one of many insects known to prey on wasps. Note the description on the linked page: "This spindly piece of nastiness is a robber fly in the genus Diogmites. It seems that it's members of this particular genus that are adorned with the name hanging thief. You may remember that this was to denote their habit of dangling from a leg or two while the other limbs held onto prey, stabbed it to death with venom and then sucked out the insides." [Emphasis mine.]
For a rather simple version of dependent type theory, Gilles Dowek gave a proof of undecidability of typability in a non-empty context: Gilles Dowek, "The undecidability of typability in the $\lambda\Pi$-calculus", which can be found here. First let me clarify what is proven in that paper: he shows that in a dependent calculus without annotations on the abstractions, it is undecidable to show typability of a term in a non-empty context. Both of those hypotheses are necessary: in the empty context, typability reduces to that of the simply-typed $\lambda$-calculus (decidable by Hindley-Milner), and with the annotations on the abstractions, the usual type-directed algorithm applies. The idea is to encode a Post correspondence problem as a type conversion problem, and then carefully construct a term which is typable iff the two specific types are convertible. This uses knowledge of the shape of normal forms, which always exist in this calculus. The article is short and well-written, so I won't go into more detail here. Now in polymorphic calculi like System F, it would be nice to be able to infer the type abstractions and applications, and omit the annotations on $\lambda$s as above. This is also undecidable, but the proof is much harder and the question was open for quite some time. The matter was resolved by Wells: J. B. Wells, "Typability and type checking in System F are equivalent and undecidable". This can be found here. All I know about it is that it reduces the problem of semi-unification (which is unification modulo instantiation of universal quantifiers, and is undecidable) to type checking in System F. Finally, it is quite easy to show that inhabitation of dependent families is undecidable: simply encode a Post problem into the constructor indices. Here are some slides by Nicolas Oury that sketch the argument. As to whether there is a "limit", it much depends on what you are trying to do with your dependent types, and there are many approximations which try to be either decidable, or at least close enough to be usable. These questions are still very much part of active research, though. One possible avenue is the field of "refinement types", where the language
of expression of type dependencies is restricted to allow for decidable checking; see, e.g., Liquid Types. It's rare that full type inference is decidable even in these systems, though.
There are many, for different subjects:

- EFG's algorithm collection
- DSP forum
- Data compression
- About rendering
- For all research papers
- Resources on MP3 and audio
- Steve on Image Processing
- Image processing and retrieval
- Accelerated image processing
- The Digital Signal Processing blog
- Noise & Vibration Measurement blog
- Image processing with MATLAB, open blog
Two additional major benefits of ReLUs are sparsity and a reduced likelihood of vanishing gradient. First recall the definition of a ReLU: $h = \max(0, a)$ where $a = wx + b$. One major benefit is the reduced likelihood of the gradient vanishing. This arises when $a > 0$: in this regime the gradient has a constant value. In contrast, the gradient of sigmoids becomes increasingly small as the absolute value of $a$ increases. The constant gradient of ReLUs results in faster learning. The other benefit of ReLUs is sparsity. Sparsity arises when $a \le 0$: the more such units exist in a layer, the more sparse the resulting representation. Sigmoids, on the other hand, always generate some non-zero value, resulting in dense representations. Sparse representations seem to be more beneficial than dense representations.
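A few lines of numpy make both benefits concrete:

```python
import numpy as np

a = np.linspace(-6, 6, 13)                 # pre-activations on a grid

# ReLU gradient: constant 1 wherever a > 0, exactly 0 elsewhere.
relu_grad = (a > 0).astype(float)

# Sigmoid gradient: s*(1-s), which shrinks as |a| grows.
s = 1.0 / (1.0 + np.exp(-a))
sigmoid_grad = s * (1.0 - s)

# The ReLU gradient never decays on the active side...
assert np.all(relu_grad[a > 0] == 1.0)
# ...while the sigmoid gradient vanishes at both extremes.
assert sigmoid_grad[0] < 0.01 and sigmoid_grad[-1] < 0.01

# Sparsity: units with a <= 0 output exactly 0, not a small nonzero value.
h = np.maximum(0.0, a)
print(np.mean(h == 0.0))                   # fraction of inactive units
```

The exact zeros are the point: a sigmoid unit saturates near 0 but never reaches it, so its output (and gradient) remains a small dense signal, whereas a ReLU unit switches off completely.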
Not as far as I am aware. The Ray assembler used to (and possibly still does) store the k-mers as FASTA files where the header was the count of the sequence, which I thought was a pretty neat bastardisation of the FASTA file format. It looks like this format is also used by Jellyfish when reporting k-mer frequencies with the dump command (but its default output format is a custom binary format): "The dump subcommand outputs a list of all the k-mers in the file associated with their count. By default, the output is in FASTA format, where the header line contains the count of the k-mer and the sequence part is the sequence of the k-mer. This format has the advantage that the output contains the sequence of k-mers and can be directly fed into another program expecting the very common FASTA format. A more convenient column format (for human beings) is selected with the -c switch." Jellyfish changed their internal format between v1 and v2 (both not FASTA), because they changed to doing counts based on Bloom filters. Jellyfish 2 has an optional two-pass method that sets up a Bloom filter intermediate file to record k-mers, and multiple different final reporting formats. khmer also uses Bloom filters, but in a slightly different way. It also has been extended to be useful for partitioning and comparing datasets.
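To illustrate that count-in-the-header convention, here is a minimal Python sketch; the two records are invented for the example, not real Jellyfish output:

```python
# A minimal parser for the "count in the header" FASTA convention
# described above (as produced by e.g. `jellyfish dump`).
example = """>12
ACGTACGTACGTACGTACGTA
>3
TTGCATTGCATTGCATTGCAT
"""

def parse_counted_fasta(text):
    """Yield (kmer, count) pairs from count-header FASTA text."""
    count = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            count = int(line[1:])     # header carries the count, not an ID
        else:
            yield line, count

kmer_counts = dict(parse_counted_fasta(example))
print(kmer_counts)
```

The neat part, as the Jellyfish documentation notes, is that any downstream tool expecting plain FASTA will still read the sequences without modification.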
There are many more out there, all with different goals and views of the problems. It really depends on what you are trying to solve. Here is an incomplete list of packages; feel free to add more details.

Large distributed iterative solver packages:
- PETSc: packages focused around Krylov subspace methods and easy switching between linear solvers. Much lighter weight than others in this category.
- Trilinos: a large set of packages aimed at FEM applications.
- hypre: similar to the two above. Notable because of its very good multigrid solvers (which can be downloaded by PETSc).

Parallel direct solver packages:
- MUMPS
- SuperLU

Serial direct solver packages:
- SuiteSparse: UMFPACK is a really good solver, but many other special-purpose solvers exist here.
- Intel Math Kernel Library: high-quality library from Intel; also has a parallel iterative solver (but nothing massively parallel).
- Matrix Template Library: generics can sometimes make the code much faster.

Interactive environments (more for very small systems):
- MATLAB: industry standard.
- scipy.sparse: if you like Python.
- Mathematica: supports the manipulation of SparseArray[] objects.

Other lists:
- Jack Dongarra's list of freely available software for linear algebra.
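As a side note, most of the packages above consume matrices in compressed sparse row (CSR) layout or something close to it. Here is a tiny pure-numpy sketch of CSR storage and a matrix-vector product (the 3x3 matrix is just an illustration):

```python
import numpy as np

# Compressed sparse row (CSR): only nonzeros are stored, plus row offsets.
# Dense matrix for reference:
# [[4, 0, 0],
#  [0, 5, 2],
#  [1, 0, 3]]
data    = np.array([4.0, 5.0, 2.0, 1.0, 3.0])  # nonzero values, row by row
indices = np.array([0,   1,   2,   0,   2  ])  # column of each nonzero
indptr  = np.array([0, 1, 3, 5])               # row i lives in data[indptr[i]:indptr[i+1]]

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for a CSR matrix A."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, indices, indptr, x))   # [ 4. 16. 10.]
```

The real libraries, of course, do this in compiled code with blocking and parallelism; the sketch only shows what the data structure looks like.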
That is certainly an interesting question! First, to clarify definitions: to be considered venomous, the toxic substance must be produced in specialized glands or tissue. Often these are associated with some delivery apparatus (fangs, stinger, etc.), but not necessarily. To be poisonous, the toxins must be produced in non-specialized tissues and are only toxic after ingestion. Interestingly, many venoms are not poisonous if ingested. [1] I know of at least three species that produce both poison and venom. One is a snake (although not a rattlesnake, which are, in fact, edible): Rhabdophis tigrinus, which accumulates toxins in its tissues, but also delivers venom via fangs. [2] The other two are frogs: Corythomantis greeningi and Aparasphenodon brunoi, which have spines on their snouts that they use to deliver the venom. [3]

[1] Meier and White (eds.). 1995. Handbook of Clinical Toxicology of Animal Venoms and Poisons. Boca Raton, Fla.: CRC Press, 477p.
[2] Hutchinson et al. 2007. Dietary sequestration of defensive steroids in nuchal glands of the Asian snake Rhabdophis tigrinus. PNAS 104(7): 2265-2270.
[3] Jared et al. 2015. Venomous frogs use heads as weapons. Current Biology 25: 2166-2170.
The energy consumption doesn't vary that much between resting and performing tasks, as discussed in a review by Marcus Raichle and Mark A. Mintun: "In the average adult human, the brain represents approximately 2% of the total body weight but approximately 20% of the energy consumed (Clark & Sokoloff 1999), 10 times that predicted by its weight alone. Relative to this high rate of ongoing or 'basal' metabolism (usually measured while resting quietly awake with eyes closed), the amount dedicated to task-evoked regional imaging signals is remarkably small. The regional increases in absolute blood flow associated with imaging signals as measured with PET are rarely more than 5%-10% of the resting blood flow of the brain. These are modest modulations in ongoing circulatory activity that rarely affect the overall rate of brain blood flow during even the most arousing perceptual and vigorous motor activity (Fox et al. 1987, Friston et al. 1990, Lennox 1931, Madsen et al. 1995, Roland et al. 1987, Sokoloff et al. 1955). [...] From knowledge of these relationships, one can estimate that if blood flow and glucose utilization increase by 10%, but oxygen consumption does not, the local energy consumption increase owing to a typical task-related response could be as little as 1%." It becomes clear, then, that the brain continuously expends a considerable amount of energy even in the absence of a particular task (i.e., when a subject is awake and at rest). Techniques like fMRI measure relatively small differences, and the existence of those differences does not contradict the claim that the energy consumption of the brain doesn't change much between the resting state and performing an activity.

1. Raichle ME, Mintun MA. Brain work and brain imaging. Annual Review of Neuroscience 2006 Jul; 29(1): 449-476.
Negative frequency doesn't make much sense for sinusoids, but the Fourier transform doesn't break up a signal into sinusoids; it breaks it up into complex exponentials (also called "complex sinusoids" or "cisoids"):
$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, \color{red}{e^{-j\omega t}}\, dt$$
These are actually spirals, spinning around in the complex plane (source: Richard Lyons). Spirals can be either left-handed or right-handed (rotating clockwise or counterclockwise), which is where the concept of negative frequency comes from. You can also think of it as the phase angle going forward or backward in time. In the case of real signals, there are always two equal-amplitude complex exponentials rotating in opposite directions, so that their real parts combine and their imaginary parts cancel out, leaving only a real sinusoid as the result. This is why the spectrum of a sine wave always has two spikes, one at a positive frequency and one at a negative frequency. Depending on the phase of the two spirals, they could cancel out, leaving a purely real sine wave, or a real cosine wave, or a purely imaginary sine wave, etc. The negative and positive frequency components are both necessary to produce the real signal, but if you already know that it's a real signal, the other side of the spectrum doesn't provide any extra information, so it's often hand-waved and ignored. For the general case of complex signals, you need to know both sides of the frequency spectrum.
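Here is a small numpy sketch of the two-spike claim (the sample rate and tone frequency are arbitrary choices, picked so the tone lands exactly on an FFT bin):

```python
import numpy as np

fs, f0, n = 64, 5, 64                  # sample rate, tone frequency, length
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)         # a purely real sine wave

X = np.fft.fft(x)
freqs = np.fft.fftfreq(n, d=1/fs)

# A real sine produces exactly two spikes, at +f0 and -f0: two
# counter-rotating complex exponentials of equal magnitude.
mag = np.abs(X)
spikes = freqs[mag > 1.0]
print(sorted(spikes.tolist()))         # [-5.0, 5.0]

# Conjugate symmetry of a real signal: X(-f) = conj(X(f)).
assert np.allclose(X[1:], np.conj(X[1:][::-1]))
```

The conjugate-symmetry assertion at the end is the precise statement of "the other side of the spectrum carries no extra information for a real signal."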
https://api.stackexchange.com
I think that you are still missing something in your understanding of the purpose of cross-validation. Let's get some terminology straight. Generally, when we say 'a model' we refer to a particular method for describing how some input data relates to what we are trying to predict. We don't generally refer to particular instances of that method as different models. So you might say 'I have a linear regression model', but you wouldn't call two different sets of the trained coefficients different models, at least not in the context of model selection. So, when you do k-fold cross-validation, you are testing how well your model is able to get trained by some data and then predict data it hasn't seen. We use cross-validation for this because if you train using all the data you have, you have none left for testing. You could do this once, say by using 80% of the data to train and 20% to test, but what if the 20% you happened to pick to test happens to contain a bunch of points that are particularly easy (or particularly hard) to predict? We will not have come up with the best estimate possible of the model's ability to learn and predict. We want to use all of the data. So to continue the above example of an 80/20 split, we would do 5-fold cross-validation by training the model 5 times on 80% of the data and testing on 20%. We ensure that each data point ends up in the 20% test set exactly once. We've therefore used every data point we have to contribute to an understanding of how well our model performs the task of learning from some data and predicting some new data. But the purpose of cross-validation is not to come up with our final model. We don't use these 5 instances of our trained model to do any real prediction. For that we want to use all the data we have to come up with the best model possible. The purpose of cross-validation is model checking, not model building. Now, say we have two models, say a linear regression model and a neural network.
How can we say which model is better? We can do k-fold cross-validation and see which one proves better at predicting the test set points. But once we have used cross-validation to select the better-performing model, we train that model (whether it be the linear regression or the neural network) on all the data. We don't use the actual model instances we trained during cross-validation for our final predictive model. Note that there is a technique called bootstrap aggregation (usually shortened to 'bagging') that does, in a way, use model instances produced in a manner similar to cross-validation to build up an ensemble model, but that is an advanced technique beyond the scope of your question here.
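The checking-versus-building distinction can be sketched in a few lines of numpy (a toy least-squares "model", not from the answer): the five CV fits are only used to score the model, and the final model is refit on all the data.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * x[:, 0] + rng.normal(0, 0.1, size=100)   # true slope = 3

def fit(x, y):
    # tiny "model": least-squares line (slope, intercept)
    return np.linalg.lstsq(np.hstack([x, np.ones((len(x), 1))]), y, rcond=None)[0]

def predict(coef, x):
    return x[:, 0] * coef[0] + coef[1]

# 5-fold CV: every point lands in a test fold exactly once
idx = rng.permutation(len(x))
folds = np.array_split(idx, 5)
mse = []
for i in range(5):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(5) if j != i])
    coef = fit(x[train], y[train])
    mse.append(np.mean((predict(coef, x[test]) - y[test]) ** 2))

cv_score = np.mean(mse)     # model *checking*: how well does this method learn?
final_coef = fit(x, y)      # model *building*: refit on ALL the data
```

The five `coef` vectors computed inside the loop are thrown away; only `cv_score` (the check) and `final_coef` (the model trained on everything) survive.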
https://api.stackexchange.com
We can define a solution to this problem in the following way. Assume the input intervals are $I_a = [a_s, a_e]$ and $I_b = [b_s, b_e]$, while the output interval is $I_o = [o_s, o_e]$. We can find the intersection $I_o = I_a \cap I_b$ by doing the following: if ($b_s > a_e$ or $a_s > b_e$) { return $\emptyset$ } else { $o_s = \max(a_s, b_s)$; $o_e = \min(a_e, b_e)$; return $[o_s, o_e]$ }
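The pseudocode above translates directly into Python (a straightforward transcription; intervals are represented as `(start, end)` tuples, and `None` stands in for $\emptyset$):

```python
def intersect(a, b):
    """Intersection of closed intervals a = (a_s, a_e) and b = (b_s, b_e).

    Returns None when the intervals are disjoint, otherwise the
    (max of starts, min of ends) interval, exactly as in the pseudocode.
    """
    if b[0] > a[1] or a[0] > b[1]:
        return None
    return (max(a[0], b[0]), min(a[1], b[1]))

print(intersect((1, 5), (3, 8)))   # (3, 5)
print(intersect((1, 2), (3, 4)))   # None
```

Note that because the intervals are closed, intervals that merely touch, e.g. $[1,3]$ and $[3,5]$, intersect in the degenerate interval $[3,3]$.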
https://api.stackexchange.com
First of all, the definitions are different. Phase delay: (the negative of) phase divided by frequency. Group delay: (the negative of) the first derivative of phase vs. frequency. In words, that means: phase delay is the phase angle at this point in frequency; group delay is the rate of change of the phase around this point in frequency. When to use one or the other really depends on your application. The classical application for group delay is modulated sine waves, for example AM radio. The time that it takes for the modulation signal to get through the system is given by the group delay, not by the phase delay. Another audio example could be a kick drum: this is mostly a modulated sine wave, so if you want to determine how much the kick drum will be delayed (and potentially smeared out in time), the group delay is the way to look at it.
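The two definitions can be checked numerically (a numpy sketch, not from the answer). For a pure k-sample delay the phase is linear in frequency, so phase delay and group delay agree and both equal k; for systems with nonlinear phase they would differ.

```python
import numpy as np

# impulse response of a pure 3-sample delay
h = np.zeros(16)
h[3] = 1.0

H = np.fft.rfft(h, 512)
w = np.linspace(0, np.pi, len(H))       # frequency grid (rad/sample)
phase = np.unwrap(np.angle(H))          # phase is exactly -3*w here

phase_delay = -phase[1:] / w[1:]        # (negative of) phase divided by frequency
group_delay = -np.gradient(phase, w)    # (negative of) dphase/dfrequency

# for a pure delay, both are 3 samples at every frequency
```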
https://api.stackexchange.com
You may consider using RUVSeq. Here is an excerpt from the 2013 Nature Biotechnology publication: We evaluate the performance of the External RNA Control Consortium (ERCC) spike-in controls and investigate the possibility of using them directly for normalization. We show that the spike-ins are not reliable enough to be used in standard global-scaling or regression-based normalization procedures. We propose a normalization strategy, called remove unwanted variation (RUV), that adjusts for nuisance technical effects by performing factor analysis on suitable sets of control genes (e.g., ERCC spike-ins) or samples (e.g., replicate libraries). RUVSeq essentially fits a generalized linear model (GLM) to the expression data, where your expression matrix $Y$ is an $m \times n$ matrix, with $m$ the number of samples and $n$ the number of genes. The model boils down to $Y = X\beta + Z\gamma + W\alpha + \epsilon$, where $X$ describes the conditions of interest (e.g., treatment vs. control), $Z$ describes observed covariates (e.g., gender) and $W$ describes unobserved covariates (e.g., batch, temperature, lab). $\beta$, $\gamma$ and $\alpha$ are parameter matrices which record the contributions of $X$, $Z$ and $W$, and $\epsilon$ is random noise. For a subset of carefully selected genes (e.g., ERCC spike-ins, housekeeping genes, or technical replicates) we can assume that the contributions of $X$ and $Z$ are zero, and find $W$, the "unwanted variation" in your samples.
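The core idea, factor analysis restricted to control genes, can be sketched on synthetic data (a strong simplification of what RUVSeq actually does, using a plain SVD in place of its estimation procedure; all names here are made up for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 6, 50, 1                      # 6 samples, 50 genes, 1 unwanted factor

w_true = rng.normal(size=(m, k))        # unobserved nuisance factor (e.g. batch)
alpha = rng.normal(size=(k, n))         # its per-gene effect
# expression driven only by the nuisance factor plus noise (no treatment effect)
y = w_true @ alpha + 0.05 * rng.normal(size=(m, n))

controls = np.arange(20)                # pretend genes 0..19 are spike-ins

# on control genes the model reduces to Y ~ W @ alpha, so a factor
# analysis (here: SVD) of the control columns recovers W up to sign/scale
u, s, vt = np.linalg.svd(y[:, controls], full_matrices=False)
w_hat = u[:, :k]

corr = abs(np.corrcoef(w_hat[:, 0], w_true[:, 0])[0, 1])  # close to 1
```

The estimated `w_hat` would then be included as a covariate in the downstream GLM, which is the role $W$ plays in the model above.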
https://api.stackexchange.com
For people like me who study algorithms for a living, the 21st-century standard model of computation is the integer RAM. The model is intended to reflect the behavior of real computers more accurately than the Turing machine model. Real-world computers process multiple-bit integers in constant time using parallel hardware; not arbitrary integers, but (because word sizes grow steadily over time) not fixed-size integers, either. The model depends on a single parameter $w$, called the word size. Each memory address holds a single $w$-bit integer, or word. In this model, the input size $n$ is the number of words in the input, and the running time of an algorithm is the number of operations on words. Standard arithmetic operations (addition, subtraction, multiplication, integer division, remainder, comparison) and boolean operations (bitwise AND, OR, XOR, shift, rotate) on words require $O(1)$ time by definition. Formally, the word size $w$ is not a constant for purposes of analyzing algorithms in this model. To make the model consistent with intuition, we require $w \ge \log_2 n$, since otherwise we cannot even store the integer $n$ in a single word. Nevertheless, for most non-numerical algorithms, the running time is actually independent of $w$, because those algorithms don't care about the underlying binary representation of their input. Mergesort and heapsort both run in $O(n \log n)$ time; median-of-3 quicksort runs in $O(n^2)$ time in the worst case. One notable exception is binary radix sort, which runs in $O(nw)$ time. Setting $w = \Theta(\log n)$ gives us the traditional logarithmic-cost RAM model, but some integer RAM algorithms are designed for larger word sizes, like the linear-time integer sorting algorithm of Andersson et al., which requires $w = \Omega(\log^{2+\varepsilon} n)$.
For many algorithms that arise in practice, the word size $w$ is simply not an issue, and we can (and do) fall back on the far simpler uniform-cost RAM model. The only serious difficulty comes from nested multiplication, which can be used to build very large integers very quickly. If we could perform arithmetic on arbitrary integers in constant time, we could solve any problem in PSPACE in polynomial time. Update: I should also mention that there are exceptions to the "standard model", like Fürer's integer multiplication algorithm, which uses multitape Turing machines (or equivalently, the "bit RAM"), and most geometric algorithms, which are analyzed in a theoretically clean but idealized "real RAM" model. Yes, this is a can of worms.
https://api.stackexchange.com
You'd be surprised. This is actually a topic of ongoing research, and of several PhD dissertations. The question of which radar waveforms and algorithms can be used to mitigate interference is a long-fought-over one; in essence, however, this breaks down to the same problem that any ad-hoc communication system has. Different systems solve it differently. You can do coded radars, where you basically do the same as in CDMA systems and divide your spectrum by giving each car a collision-free code sequence. The trick is coordinating these codes, but an observation phase and collision detection might be sufficient here. More likely to succeed is collision detection and avoidance in time: simply observe the spectrum for radar bursts of your neighbors and (assuming some regularity) extrapolate when they won't be transmitting. Use that time. Notice that WiFi solves this problem inherently, much like described above, in a temporal fashion. In fact, you can double-use your WiFi packets as radar signals and do a radar estimation on their reflection. And since vehicular WiFi (802.11p) is a thing, and the data you'd send is known to you and also unique, you could benefit from the orthogonal correlation properties of a coded radar and the higher spectral density, and thus increased estimate quality, of time-exclusive transmission. There's a dissertation which IMHO aged well on that: Martin Braun, OFDM Radar Algorithms in Mobile Communication Networks, 2014.
https://api.stackexchange.com
Congratulations, you found an inverted-pyramid ice spike, sometimes called an ice vase! The Bally–Dorsey model of how it happens is that first the surface of the water freezes, sealing off the water below except for a small opening. If the freezing rate is high enough, the expansion of ice under the surface will increase the pressure (since the ice is less dense than the water and displaces more volume), and this forces water up through the opening, where it freezes around the rim. As the process goes on, a spike emerges. If the initial opening or the crystal planes near it are aligned in the right way, the result is a pyramid rather than a cylinder/spike. The process is affected by impurities; the water has to be fairly clean. It also requires fairly low temperatures so that freezing is fast enough (but not too fast).
https://api.stackexchange.com
The SIAM journals, especially SISC (Scientific Computing) and MMS (Multiscale Modeling and Simulation), are obvious established and high-quality choices.
https://api.stackexchange.com
Veins have several advantages over arteries. From a purely practical standpoint, veins are easier to access due to their superficial location, compared to the arteries, which are located deeper under the skin. They have thinner walls (much less smooth muscle surrounding them) than arteries and have less innervation, so piercing them with a needle requires less force and doesn't hurt as much. Venous pressure is also lower than arterial pressure, so there is less of a chance of blood seeping back out through the puncture point before it heals. Because of their thinner walls, veins tend to be larger than the corresponding artery in the area, so they hold more blood, making collection easier and faster. Finally, it is somewhat safer if a small embolism (bubble in the blood) is introduced into a vein rather than an artery. Blood flow in veins always goes to larger and larger vessels, so there is very little chance of a vessel being blocked by the embolism before the bubble reaches the heart/lungs and is hopefully destroyed. Blood flow in an artery, on the other hand, always moves into smaller and smaller vessels, eventually ending in capillaries, and there is a chance that a bubble introduced by a blood draw (generally rare) or, more commonly, an intravenous line (IV) could block a small blood vessel, potentially leading to hypoxia in the affected tissues.
https://api.stackexchange.com
Principal component analysis involves extracting linear composites of observed variables. Factor analysis is based on a formal model predicting observed variables from theoretical latent factors. In psychology these two techniques are often applied in the construction of multi-scale tests to determine which items load on which scales. They typically yield similar substantive conclusions (for a discussion see Comrey (1988), Factor-Analytic Methods of Scale Development in Personality and Clinical Psychology). This helps to explain why some statistics packages seem to bundle them together. I have also seen situations where "principal component analysis" is incorrectly labelled "factor analysis". In terms of a simple rule of thumb, I'd suggest that you: run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables; run principal component analysis if you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables.
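The "linear composites" view of PCA can be made concrete with a small numpy sketch (synthetic data invented for the illustration): data generated from two latent factors is reduced via SVD of the centered matrix, and the first two components soak up nearly all the variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 observations of 5 correlated variables driven by 2 latent factors
scores = rng.normal(size=(200, 2))       # latent factor scores
loadings = rng.normal(size=(2, 5))       # factor loadings
x = scores @ loadings + 0.1 * rng.normal(size=(200, 5))

# PCA = SVD of the centered data; the components are linear composites
xc = x - x.mean(axis=0)
u, s, vt = np.linalg.svd(xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # proportion of variance per component
```

Note that this recovers composites that summarize the correlation structure; it does not, by itself, test the latent-factor model the way factor analysis does.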
https://api.stackexchange.com
You can think of water as the ash from burning hydrogen: it's already given off as much energy as possible from reacting hydrogen with oxygen. You can, however, still burn it. You just need an even stronger oxidizer than oxygen. There aren't many of them, but fluorine will work, $$\ce{2F2 + 2H2O -> 4HF + O2}$$ as will chlorine trifluoride: $$\ce{ClF3 + 2H2O -> 3HF + HCl + O2}$$
https://api.stackexchange.com
There is a wide variety of techniques for the nonuniform FFT, and the most efficient ones are all meant for exactly your case: quasi-uniform samples. The basic idea is to smear the unevenly sampled sources onto a slightly finer ("oversampled") uniform grid through local convolutions against Gaussians. A standard FFT can then be run on the oversampled uniform grid, and then the convolution against the Gaussians can be undone. Good implementations are something like $c^d$ times more expensive than a standard FFT in $d$ dimensions, where $c$ is something close to 4 or 5. I recommend reading Accelerating the Nonuniform Fast Fourier Transform by Greengard and Lee. There also exist fast, i.e., $O(n^d \log n)$ or faster, techniques when the sources and/or evaluation points are sparse, and there are also generalizations to more general integral operators, e.g., Fourier integral operators. If you are interested in these techniques, I recommend Sparse Fourier Transform via Butterfly Algorithm and A Fast Butterfly Algorithm for the Computation of Fourier Integral Operators. The price paid in these techniques versus standard FFTs is a much higher coefficient. Disclaimer: my advisor wrote/cowrote those two papers, and I have spent a decent amount of time parallelizing those techniques. An important point is that all of the above techniques are approximations that can be made arbitrarily accurate at the expense of longer runtimes, whereas the standard FFT algorithm is exact.
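For reference, here is the quantity the fast algorithms approximate, written as a naive $O(n^2)$ sum (a sketch, not one of the fast methods above); on a uniform grid with the standard frequencies it reduces exactly to the FFT:

```python
import numpy as np

def nudft(f, t, w):
    """Naive O(n^2) nonuniform DFT: F_k = sum_j f_j * exp(-i * w_k * t_j),
    for arbitrary sample times t_j and evaluation frequencies w_k."""
    return np.exp(-1j * np.outer(w, t)) @ f

n = 32
rng = np.random.default_rng(3)
f = rng.normal(size=n)

# sanity check: uniform times + standard frequencies == np.fft.fft
t_uniform = np.arange(n)
w = 2 * np.pi * np.arange(n) / n
f_hat = nudft(f, t_uniform, w)

# the gridding-based NUFFTs described above approximate nudft() for
# non-uniform t in O(n log n) time instead of O(n^2)
```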
https://api.stackexchange.com
Let a quantum system with Hamiltonian $H$ be given. Suppose the system occupies a pure state $|\psi(t)\rangle$ determined by the Hamiltonian evolution. For any observable $\Omega$ we use the shorthand $$\langle\Omega\rangle = \langle\psi(t)|\Omega|\psi(t)\rangle.$$ One can show that (see eq. 3.72 in Griffiths QM) $$\sigma_H \sigma_\Omega \geq \frac{\hbar}{2}\left|\frac{d\langle\Omega\rangle}{dt}\right|$$ where $\sigma_H$ and $\sigma_\Omega$ are standard deviations $$\sigma_H^2 = \langle H^2\rangle - \langle H\rangle^2, \qquad \sigma_\Omega^2 = \langle\Omega^2\rangle - \langle\Omega\rangle^2$$ and angled brackets mean expectation in $|\psi(t)\rangle$. It follows that if we define $$\Delta E = \sigma_H, \qquad \Delta t = \frac{\sigma_\Omega}{|d\langle\Omega\rangle/dt|}$$ then we obtain the desired uncertainty relation $$\Delta E\,\Delta t \geq \frac{\hbar}{2}.$$ It remains to interpret the quantity $\Delta t$. It tells you the approximate amount of time it takes for the expectation value of an observable to change by a standard deviation, provided the system is in a pure state. To see this, note that if $\Delta t$ is small, then in a time $\Delta t$ we have $$|\Delta\langle\Omega\rangle| = \left|\int_t^{t+\Delta t} \frac{d\langle\Omega\rangle}{dt}\,dt\right| \approx \left|\frac{d\langle\Omega\rangle}{dt}\,\Delta t\right| = \left|\frac{d\langle\Omega\rangle}{dt}\right|\Delta t = \sigma_\Omega.$$
https://api.stackexchange.com
During the process of selection, individuals having disadvantageous traits are weeded out. If the selection pressure isn't strong enough, then mildly disadvantageous traits will continue to persist in the population. So the reasons why a trait has not evolved even though it may be advantageous to the organism are: There is no strong pressure against individuals not having that trait; in other words, lack of the trait is not strongly disadvantageous. The trait might have a tradeoff which essentially makes no change to the overall fitness. Not enough time has elapsed for an advantageous mutation to get fixed. This doesn't mean that the mutation has not happened yet; it means that the situation that rendered the mutation advantageous arose quite recently. Consider the example of a mutation that confers resistance against a disease. The mutation wouldn't be advantageous if there was no disease. When a population encounters the disease for the first time, the mutation would gain an advantage, but it will take some time to establish itself in the population. The rate for that specific mutation is low and therefore it has not yet happened. Mutation rates are not uniform across the genome, and certain regions acquire mutations faster than others. Irrespective of that, if the overall mutation rate is low then it would take a lot of time for a mutation to arise, and until then its effects cannot be seen. The specific trait is too genetically distant: it cannot be the result of a mutation in a single generation. It might, conceivably, develop after successive generations, each mutating farther, but if the intervening mutations are at too much of a disadvantage, they will not survive to reproduce and allow a new generation to mutate further away from the original population. The disadvantage from not having the trait normally arises only after the reproductive stage of the individual's lifecycle is mostly over. This is a special case of "no strong pressure", because evolution selects genes, not the organism; in other words, the beneficial mutation does not alter reproductive fitness. Koinophilia has made the trait unattractive to females. Since most mutations are detrimental, females don't want to mate with anyone with an obvious mutation, since there is a high chance it will be harmful to their child. Thus females instinctually find any obvious physical difference unattractive, even if it would have been beneficial. This tends to limit the rate at which physical differences can appear in a large and stable mating community. Evolution is not a directed process and it does not actively try to look for an optimum. The fitness of an individual does not have any meaning in the absence of the selection pressure. *If you have a relevant addition then please feel free to edit this answer.*
https://api.stackexchange.com
Since this is PhysicsSE, I am happy with an answer based purely on theoretical analysis of the forces involved. Oh boy, time to spend way too much time on a response. Let's assume the simple model of a peg that makes an angle $\alpha$ with the wall and ends in a circular cap of radius $r$. Then a towel of total length $l$ and linear mass density $\rho$ has three parts: one part that hangs vertically, one that curves over the circular cap, and one that rests on the inclined portion, as drawn. This is very simplistic, but it does encapsulate the basic physics. Also, we ignore the folds of the towel. Let $s$ be the length of the towel on the inclined portion of the peg. I will choose a generalized $x$-axis that follows the curve of the peg. Note that this model works for both the front-back direction and the side-side direction of the peg; in the side-side direction (denoted $z$), $\alpha$ is simply zero (totally vertical), and $\eta$ is the fraction of the towel on the right side of the picture. Then the total gravitational force $f_{g,x}$ will be: $$f_{g,x} = \rho g\bigl(l - r(\pi - \alpha) - s(1 + \cos\alpha)\bigr) - \int_{-\pi/2}^{\pi/2-\alpha} \rho g r \sin\theta\,\mathrm{d}\theta$$ $$f_{g,x} = \rho g\bigl(l + r(\sin\alpha - \pi + \alpha) - s(1 + \cos\alpha)\bigr)$$ The infinitesimal static frictional force will be $\mathrm{d}f_{s,x} = -\mu_s\,\mathrm{d}N$. $N$ is constant on the inclined part and varies with $\theta$ over the circular cap as $\mathrm{d}N = \rho g r \cos\theta\,\mathrm{d}\theta$. Then: $$f_s = -\mu_s \rho g s \sin\alpha - \int_{-\pi/2}^{\pi/2-\alpha} \mu_s \rho g r \cos\theta\,\mathrm{d}\theta$$ $$f_s = -\mu_s \rho g \bigl(s \sin\alpha + r(\cos\alpha + 1)\bigr)$$ Now we can set the frictional force equal to the gravitational force and solve for what values of $\mu_s$ will satisfy static equilibrium. You get: $$\mu_s = \frac{l + r(\sin\alpha + \alpha - \pi) - s(\cos\alpha + 1)}{r(\cos\alpha + 1) + s \sin\alpha}$$ $$\mu_s = \frac{1 + \gamma(\sin\alpha + \alpha - \pi) - \eta(\cos\alpha + 1)}{\gamma(\cos\alpha + 1) + \eta \sin\alpha}$$ where in the second line $\gamma = r/l$ and $\eta = s/l$ are the fractions of the towel on the peg's cap and incline, respectively. Thus $\mu_s$ depends on three factors: the angle of the peg, $\alpha$; the fraction of the towel past the cap of the peg, $\eta$; and the fraction of the towel on the circular cap, $\gamma$. Let's make some graphs: the first graph shows what $\mu_s$ would have to be with $\gamma = 0$ (no end cap, just a 1D stick). The second graph shows what $\mu_s$ would have to be with $\eta = 0$ (no stick, just a circular cap that the towel drapes over). The third graph shows what $\mu_s$ would have to be when the angle is fixed at $\alpha = \pi/4$ and the length of the peg ($\eta$) is varied. Summary: what all the graphs above should show you is that the coefficient of static friction has to be enormous ($\mu_s > 50$, while most real $\mu_s$ are close to 1) unless the fraction of the towel on the peg ($\eta$ and $\gamma$) is large, like over 50% combined. The large values for $\eta$ can only be accomplished when you put the towel at approximately position $\mathbf{a}$, whereas it's very difficult to hang a towel from position $\mathbf{b}$ because it reduces $\eta$ in both the $z$- and $x$-directions. 3) The towel has a center of mass below the peg: this isn't a sufficient condition for static equilibrium; a towel isn't a rigid object. As a counter-example, see an Atwood's machine. The block-rope system has a center of mass below the pulley, but that doesn't prevent motion of the blocks.
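Plugging numbers into the final boxed formula makes the summary concrete (a small sketch; the parameter values are just illustrative choices, not from the answer):

```python
import numpy as np

def mu_required(alpha, gamma, eta):
    """Required static friction coefficient from the formula
    mu = (1 + g*(sin a + a - pi) - e*(cos a + 1)) / (g*(cos a + 1) + e*sin a),
    where g = r/l (fraction on the cap) and e = s/l (fraction on the incline)."""
    num = 1 + gamma * (np.sin(alpha) + alpha - np.pi) - eta * (np.cos(alpha) + 1)
    den = gamma * (np.cos(alpha) + 1) + eta * np.sin(alpha)
    return num / den

a = np.pi / 4
print(mu_required(a, 0.05, 0.10))  # little towel on the peg -> implausibly large mu
print(mu_required(a, 0.05, 0.50))  # half the towel on the peg -> realistic mu
```

With only 10% of the towel on the incline the required coefficient is several times larger than any real towel-on-wood value, while at 50% it drops well below 1, matching the conclusion that position $\mathbf{a}$ works and position $\mathbf{b}$ doesn't.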
https://api.stackexchange.com
The brain, indeed, cannot feel pain, as it lacks pain receptors (nociceptors). However, what you feel when you have a headache is not your brain hurting: there are plenty of other areas in your head and neck that do have nociceptors, which can perceive pain, and these are what actually cause headaches. In particular, many types of headaches are generally thought to have a neurovascular background, with the responsible pain receptors associated with blood vessels. However, the pathophysiology of migraines and headaches is still poorly understood.
https://api.stackexchange.com
An excerpt from History of Lambda-calculus and Combinatory Logic by F. Cardone and J. R. Hindley (2006): By the way, why did Church choose the notation "$\lambda$"? In [Church, 1964, §2] he stated clearly that it came from the notation "$\hat{x}$" used for class-abstraction by Whitehead and Russell, by first modifying "$\hat{x}$" to "$\wedge x$" to distinguish function-abstraction from class-abstraction, and then changing "$\wedge$" to "$\lambda$" for ease of printing. This origin was also reported in [Rosser, 1984, p. 338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and "$\lambda$" just happened to be chosen.
https://api.stackexchange.com
If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e., in general how far each datum is from the mean), then we need a good method of defining how to measure that spread. The benefits of squaring include: squaring always gives a non-negative value, so the sum will always be zero or higher; and squaring emphasizes larger differences, a feature that turns out to be both good and bad (think of the effect outliers have). Squaring does, however, have a problem as a measure of spread, which is that the units are all squared, whereas we might prefer the spread to be in the same units as the original data (think of squared pounds, squared dollars, or squared apples). Hence the square root allows us to return to the original units. I suppose you could say that absolute difference assigns equal weight to the spread of data, whereas squaring emphasises the extremes. Technically, though, as others have pointed out, squaring makes the algebra much easier to work with and offers properties that the absolute method does not (for example, the variance is equal to the expected value of the square of the distribution minus the square of the mean of the distribution). It is important to note, however, that there's no reason you couldn't take the absolute difference if that is your preference for how you wish to view 'spread' (sort of how some people see 5% as some magical threshold for $p$-values, when in fact it is situation dependent). Indeed, there are in fact several competing methods for measuring spread. My view is to use the squared values because I like to think of how it relates to the Pythagorean theorem of statistics, $c = \sqrt{a^2 + b^2}$; this also helps me remember that when working with independent random variables, variances add, standard deviations don't. But that's just my personal subjective preference, which I mostly only use as a memory aid; feel free to ignore this paragraph.
An interesting analysis can be read here: Revisiting a 90-Year-Old Debate: The Advantages of the Mean Deviation, Stephen Gorard (Department of Educational Studies, University of York); paper presented at the British Educational Research Association Annual Conference, University of Manchester, 16-18 September 2004.
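The points above are easy to verify on a small data set (a numpy sketch with made-up numbers): the standard deviation comes back in the original units, it weights the outlier more heavily than the mean absolute deviation, and the $E[X^2] - (E[X])^2$ identity holds.

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean = data.mean()
variance = np.mean((data - mean) ** 2)   # squared units
sd = np.sqrt(variance)                   # square root returns to original units
mad = np.mean(np.abs(data - mean))       # mean absolute deviation

# squaring emphasizes the outlier (9.0), so sd >= mad here (and in general)
print(sd, mad)                           # 2.0 1.5

# variance = E[X^2] - (E[X])^2, the algebraic identity mentioned above
print(np.mean(data**2) - mean**2)        # 4.0
```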
https://api.stackexchange.com
Whenever you want to save space (this can be a substantial saving). Until quite recently (samtools/htslib 1.7), only CRAM supported long CIGAR strings. If you need to guarantee that any random obscure downstream program will be able to handle it, though, be aware that uptake of CRAM has been pretty slow. Java programs using htsjdk (e.g., Picard, IGV, and GATK) have only relatively recently added support for CRAM; if you need to use an old version of those for some very odd reason, then CRAM may not be supported. There are a lot of programs written in Python that use pysam to open BAM files, and these should, theoretically, support CRAM. The issue is that some of the functions may fail, and one can't assume that authors will always have written the code needed to handle this. I'll use deepTools as an example, since I'm one of its developers. One of the things about CRAM files is that they (by default) are made such that you require a reference genome in order to reconstruct the sequence field in each alignment. This works fine if you're using a standard genome (htslib, via pysam, can fetch many standard genomes from the web automatically), but if you're not, then you need to specify a FASTA file to use for decompression. Every tool, then, needs to add an option for this. With pysam 0.14 and htslib 1.7 this can be circumvented by not decompressing the sequence, but that behavior has to be explicitly requested. Another issue is that many tools will use features from the file index, such as the .mapped accessor, to get the number of mapped reads in a file. CRAM indices contain very little information, so this then fails. Consequently, tool authors need to check for CRAM files and both derive and propagate this information through their functions if it's needed. This can be a time-consuming task (e.g., it took me a couple of days to get this implemented in deepTools). Relatedly, samtools idxstats is useless on CRAM files, since there are no statistics stored in the index.
That having been said, it's likely that CRAM's slowly gaining acceptance will eventually make it the standard. It's already a convenient archival format; it's just a matter of time before users can assume that most analysis programs are written to handle it.
https://api.stackexchange.com
Consider the set of keys $K = \{0, 1, \ldots, 100\}$ and a hash table where the number of buckets is $m = 12$. Since $3$ is a factor of $12$, the keys that are multiples of $3$ will be hashed to buckets that are multiples of $3$: keys $\{0, 12, 24, 36, \ldots\}$ will be hashed to bucket $0$; keys $\{3, 15, 27, 39, \ldots\}$ will be hashed to bucket $3$; keys $\{6, 18, 30, 42, \ldots\}$ will be hashed to bucket $6$; keys $\{9, 21, 33, 45, \ldots\}$ will be hashed to bucket $9$. If $K$ is uniformly distributed (i.e., every key in $K$ is equally likely to occur), then the choice of $m$ is not so critical. But what happens if $K$ is not uniformly distributed? Imagine that the keys that are most likely to occur are the multiples of $3$. In this case, all of the buckets that are not multiples of $3$ will be empty with high probability (which is really bad in terms of hash table performance). This situation is more common than it may seem. Imagine, for instance, that you are keeping track of objects based on where they are stored in memory. If your computer's word size is four bytes, then you will be hashing keys that are multiples of $4$. Needless to say, choosing $m$ to be a multiple of $4$ would be a terrible choice: you would have $3m/4$ buckets completely empty, and all of your keys colliding in the remaining $m/4$ buckets. In general: every key in $K$ that shares a common factor with the number of buckets $m$ will be hashed to a bucket that is a multiple of this factor. Therefore, to minimize collisions, it is important to reduce the number of common factors between $m$ and the elements of $K$. How can this be achieved? By choosing $m$ to be a number that has very few factors: a prime number.
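The effect is easy to demonstrate (a small Python sketch of the example above): with skewed keys that are all multiples of 3, $m = 12$ uses only a quarter of its buckets, while the prime $m = 13$ spreads the same keys over every bucket.

```python
from collections import Counter

keys = [3 * i for i in range(34)]    # skewed keys: multiples of 3 up to 99

def bucket_counts(keys, m):
    """Histogram of bucket occupancy for hash function k mod m."""
    return Counter(k % m for k in keys)

used_12 = bucket_counts(keys, 12)    # m = 12 shares the factor 3 with every key
used_13 = bucket_counts(keys, 13)    # m = 13 is prime (coprime to 3)

print(sorted(used_12))               # [0, 3, 6, 9] -- only 4 of 12 buckets used
print(len(used_13))                  # 13 -- every bucket gets keys
```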
https://api.stackexchange.com
Good observation! The 3′ poly(A) tail is actually a very common feature of positive-strand RNA viruses, including coronaviruses and picornaviruses. For coronaviruses in particular, we know that the poly(A) tail is required for replication, functioning in conjunction with the 3′ untranslated region (UTR) as a cis-acting signal for negative-strand synthesis and attachment to the ribosome during translation. Mutants lacking the poly(A) tail are severely compromised in replication.

Jeannie Spagnolo and Brenda Hogue report:

The 3′ poly(A) tail plays an important, but as yet undefined role in coronavirus genome replication. To further examine the requirement for the coronavirus poly(A) tail, we created truncated poly(A) mutant defective interfering (DI) RNAs and observed the effects on replication. Bovine coronavirus (BCV) and mouse hepatitis coronavirus A59 (MHV-A59) DI RNAs with tails of 5 or 10 A residues were replicated, albeit at delayed kinetics as compared to DI RNAs with wild type tail lengths (>50 A residues). A BCV DI RNA lacking a poly(A) tail was unable to replicate; however, a MHV DI lacking a tail did replicate following multiple virus passages. Poly(A) tail extension/repair was concurrent with robust replication of the tail mutants. Binding of the host factor poly(A)-binding protein (PABP) appeared to correlate with the ability of DI RNAs to be replicated. Poly(A) tail mutants that were compromised for replication, or that were unable to replicate at all, exhibited less in vitro PABP interaction. The data support the importance of the poly(A) tail in coronavirus replication and further delineate the minimal requirements for viral genome propagation.

Spagnolo J.F., Hogue B.G. (2001) Requirement of the poly(A) tail in coronavirus genome replication. In: Lavi E., Weiss S.R., Hingley S.T. (eds) The Nidoviruses. Advances in Experimental Medicine and Biology, vol 494. Springer, Boston, MA.

Yu-Hui Peng et al. also report that the length of the poly(A) tail is regulated during infection:

Similar to eukaryotic mRNA, the positive-strand coronavirus genome of ~30 kilobases is 5′-capped and 3′-polyadenylated. It has been demonstrated that the length of the coronaviral poly(A) tail is not static but regulated during infection; however, little is known regarding the factors involved in coronaviral polyadenylation and its regulation. Here, we show that during infection, the level of coronavirus poly(A) tail lengthening depends on the initial length upon infection and that the minimum length to initiate lengthening may lie between 5 and 9 nucleotides. By mutagenesis analysis, it was found that (i) the hexamer AGUAAA and poly(A) tail are two important elements responsible for synthesis of the coronavirus poly(A) tail and may function in concert to accomplish polyadenylation and (ii) the function of the hexamer AGUAAA in coronaviral polyadenylation is position dependent. Based on these findings, we propose a process for how the coronaviral poly(A) tail is synthesized and undergoes variation. Our results provide the first genetic evidence to gain insight into coronaviral polyadenylation.

Peng Y-H, Lin C-H, Lin C-N, Lo C-Y, Tsai T-L, Wu H-Y (2016) Characterization of the role of hexamer AGUAAA and poly(A) tail in coronavirus polyadenylation. PLoS ONE 11(10): e0165077.

This builds upon prior work by Hung-Yi Wu et al., which showed that the coronaviral 3′ poly(A) tail is approximately 65 nucleotides in length in both genomic and sgmRNAs at peak viral RNA synthesis, and also observed that the precise length varied throughout infection. Most interestingly, they report:

Functional analyses of poly(A) tail length on specific viral RNA species, furthermore, revealed that translation, in vivo, of RNAs with the longer poly(A) tail was enhanced over those with the shorter poly(A). Although the mechanisms by which the tail lengths vary is unknown, experimental results together suggest that the length of the poly(A) and poly(U) tails is regulated. One potential function of regulated poly(A) tail length might be that for the coronavirus genome a longer poly(A) favors translation. The regulation of coronavirus translation by poly(A) tail length resembles that during embryonal development, suggesting there may be mechanistic parallels.

Wu HY, Ke TY, Liao WY, Chang NY. Regulation of coronaviral poly(A) tail length during infection. PLoS ONE. 2013; 8(7): e70548. Published 2013 Jul 29. doi:10.1371/journal.pone.0070548

It's also worth pointing out that poly(A) tails at the 3′ end of RNA are not an unusual feature of viruses. Eukaryotic mRNA almost always contains poly(A) tails, which are added post-transcriptionally in a process known as polyadenylation. It should not therefore be surprising that positive-strand RNA viruses would have poly(A) tails as well. In eukaryotic mRNA, the central sequence motif for identifying a polyadenylation region is AAUAAA, identified way back in the 1970s, with more recent research confirming its ubiquity. Proudfoot 2011 is a nice review article on poly(A) signals in eukaryotic mRNA.
https://api.stackexchange.com
You are right: the planetary model of the atom does not make sense when one considers the electromagnetic forces involved. The electron in an orbit is accelerating continuously and would thus radiate away its energy and fall into the nucleus. One of the reasons for "inventing" quantum mechanics was exactly this conundrum. The Bohr model was proposed to solve it, by stipulating that the orbits were closed and quantized and no energy could be lost while the electron was in orbit, thus creating the stability of the atom necessary to form solids and liquids. It also explained the lines observed in the spectra of excited atoms as transitions between orbits. If you study further into physics, you will learn about quantum mechanics and the axioms and postulates that form the equations whose solutions give exact numbers for what was, in the Bohr model, a first guess at a model of the atom. Quantum mechanics is accepted as the underlying level of all physical forces at the microscopic level, and sometimes quantum mechanics can be seen macroscopically, as with superconductivity, for example. Macroscopic forces, like those due to classical electric and magnetic fields, are limiting cases of the real forces which reign microscopically.
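As a concrete illustration of how the quantized orbits explain spectral lines (a sketch added here, not part of the original answer): in the Bohr model the hydrogen levels are E_n = −13.6 eV / n², and each spectral line corresponds to the photon energy released in a drop between two levels.

```python
RYDBERG_EV = 13.6  # approximate ionization energy of hydrogen, in eV

def energy_level(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV: E_n = -13.6 / n**2."""
    return -RYDBERG_EV / n**2

def transition_energy(n_hi, n_lo):
    """Photon energy (eV) emitted when the electron drops from n_hi to n_lo."""
    return energy_level(n_hi) - energy_level(n_lo)

# The Balmer series (transitions down to n = 2) lands in the visible range.
for n in range(3, 7):
    print(f"{n} -> 2: {transition_energy(n, 2):.2f} eV")
```

Running this reproduces the familiar visible hydrogen lines (about 1.9 eV for 3→2, the red H-alpha line, up to about 3.0 eV for 6→2), which is exactly the discreteness the classical planetary model could not account for.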
https://api.stackexchange.com
There are basically two major commercial choices out there: DDT from Allinea (which is what we use at TACC) and TotalView (as mentioned in the other comment). They have comparable features, are both actively developed, and are direct competitors. Eclipse has its Parallel Tools Platform, which should include MPI and OpenMP programming support and a parallel debugger.
https://api.stackexchange.com
Uranium and thorium in heavy rocks have a decay chain which includes a three-day isotope of radon. If a building has materials with some chemically insignificant mixture of uranium and thorium, such as concrete or granite, then the radon can diffuse out of the material into the air. This is part of your normal background radiation, unless you have accidentally built a concrete basement with granite countertops and poor air exchange with the outdoors, in which case the radon can accumulate.

When radon does decay, the decays leave behind ionized atoms of the heavy metals polonium, lead, and bismuth. These ions neutralize by reacting with the air. Here my chemistry is weak, but my assumption is that they are most likely to oxidize, and I assume further that the oxide molecules are electrically polarized, like the water molecule (the stable oxide of hydrogen) is polarized.

Polarized or polarizable objects are attracted to strong electric fields, even when the polarized object is electrically neutral. Imagine a static electric field around a positive charge. A dipole nearby will feel a torque until its negative end points towards the static positive charge. But because the field gets weaker away from the static charge, there's now more attractive force on the negative end of the dipole than there is repulsive force on the positive end, so the dipole accelerates towards the stronger field. If you used to have a cathode-ray television, you may remember the way the positively charged screen would attract dust much more than other nearby surfaces.

Clothes dryers are very effective at making statically charged surfaces. (Dryer sheets help.) So when radon and its temporary decay products are blown through the dryer, electrically polarized molecules tend to be attracted to the charged surfaces. The decay chain is:

Isotope   Half-life      Decay mode
222-Rn    3.8 days       alpha
218-Po    3.1 minutes    alpha
214-Pb    27 minutes     beta
214-Bi    20 minutes     beta
214-Po    microseconds   alpha
210-Pb    years          (irrelevant here)

If your Geiger counter is actually detecting radiation, it's almost certainly the half-hour lead and bismuth. Constructing a decay curve would make a neat home experiment (but challenging given what you've told us here).

True story: I was once prevented from leaving a neutron-science facility at Los Alamos after the seat of my pants set off a radiation alarm on exit. This was odd because the neutron beam had been off for weeks.
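For the decay-curve home experiment mentioned above, the expected shape is just exponential decay, A(t) = A₀ · 2^(−t/T½). A minimal sketch (my own illustration, not from the original answer), using the 27-minute half-life of 214-Pb; a real sample is a chain in which 214-Pb feeds 214-Bi, so a single exponential is only an approximation:

```python
def activity(a0, half_life_min, t_min):
    """Remaining activity after t_min minutes: A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2 ** (-t_min / half_life_min)

# 214-Pb, half-life ~27 minutes: counts should halve every 27 minutes.
for t in (0, 27, 54, 81):
    print(f"t = {t:3d} min: {activity(1000.0, 27.0, t):7.1f} counts/min")
```

Plotting your Geiger-counter readings on a log scale against time should give a roughly straight line whose slope yields the ~half-hour half-life, confirming that the activity is the lead/bismuth daughters rather than the radon itself.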
https://api.stackexchange.com