What we're seeing in the top two pictures. As the scale moves farther out, these particular spirals become more tightly wound and harder to see (go from the 1st picture to the 5th, then the 3rd, then the 4th), and the next convergent takes over. In this case, the convergent $\frac{710}{113}$ is an extremely good rational approximation to $2\pi$ (as we know from the large partial quotient $146$). Therefore the integer points $(n, n)$ will group themselves into $710$ spirals, but these spirals are so close to straight lines at the beginning that they almost don't look like spirals, and persist for a large interval of possible scales. Each ray thus corresponds to an arithmetic progression $a \pmod{710}$. When we plot only prime points $(p, p)$ (the 4th picture is best here), we will only see the $\phi(710) = 280$ arithmetic progressions $a \pmod{710}$ where $\gcd(a, 710) = 1$. The fact that the visible rays are mostly grouped in fours is a consequence of the fact that $5 \mid 710$ and so every fifth ray doesn't contain primes. Really, though, we are seeing four out of every ten rays rather than four out of every five; the arithmetic progressions $a \pmod{710}$ with $a$ even have no primes at all and are thus invisible. There are four exceptional groups containing only three rays instead of four; these correspond to the four arithmetic progressions $a \pmod{710}$ where $a$ is a multiple of $71$ but not a multiple of $2$ or $5$.
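The two numerical facts this explanation leans on, the convergent $\frac{710}{113} \approx 2\pi$ with its large next partial quotient $146$, and $\phi(710) = 280$, can be sanity-checked in a few lines of Python (my own illustration, not part of the original answer):

```python
import math

def contfrac(x, n):
    """First n partial quotients of the continued fraction of x (float precision)."""
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:  # stop before float noise takes over
            break
        x = 1.0 / frac
    return terms

def convergents(terms):
    """Convergents p/q built from the partial quotients."""
    p0, q0, p1, q1 = 1, 0, terms[0], 1
    out = [(p1, q1)]
    for a in terms[1:]:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append((p1, q1))
    return out

def phi(n):
    """Euler's totient by trial division."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

terms = contfrac(2 * math.pi, 7)
print(terms)                  # [6, 3, 1, 1, 7, 2, 146]
print(convergents(terms)[5])  # (710, 113)
print(phi(710))               # 280
```

The large quotient $146$ is exactly what makes $710/113$ such a persistent approximation before the next convergent takes over.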
|
https://api.stackexchange.com
|
I think the key will be whether or not libraries start being developed for Julia. It's all well and good to see toy examples (even if they are complicated toys) showing that Julia blows R out of the water at tasks R is bad at. But poorly done loops and hand-coded algorithms are not why many of the people I know who use R use R. They use it because for nearly any statistical task under the sun, someone has written R code for it. R is both a programming language and a statistics package; at present Julia is only the former. I think it's possible to get there, but there are much more established languages (Python) that still struggle with being usable statistical toolkits.
|
https://api.stackexchange.com
|
Good question. If you look at the spectral energy distribution in the accepted answer here, we see that photons with wavelengths less than ~300 nm are absorbed by species such as ozone. Much beyond ~750 nm, infrared radiation is largely absorbed by species such as water and carbon dioxide. Therefore the vast majority of solar photons reaching the surface have wavelengths that lie between these two extremes. I would thus suggest that surface organisms have adapted to use these wavelengths of light, whether in photoreceptors or in photosynthesis, since these are the wavelengths available; i.e., organisms have adapted to use these wavelengths of light, rather than these wavelengths being special per se (although in the specific case of photosynthesis there is a photon energy sweet spot). For example, this study suggests that some fungi might actually be able to utilize ionizing radiation in metabolism. This suggests that hypothetical organisms on a world bathed in ionizing radiation may evolve mechanisms to utilize this energy.
|
https://api.stackexchange.com
|
No, it is not meaningful. 25% is correct iff 50% is correct, and 50% is correct iff 25% is correct, so it can be neither of those two (because if both were correct, the only correct answer could be 75%, which is not even an option). But it cannot be 0% either, because then the correct answer would be 25%. So none of the answers are correct, so the answer must be 0%. But then it is 25%. And so forth. It's a multiple-choice variant (with bells and whistles) of the classical liar paradox, which asks whether the statement "this statement is false" is true or false. There are various more or less contrived "philosophical" attempts to resolve it, but by far the most common resolution is to deny that the statement means anything in the first place; therefore it is also meaningless to ask for its truth value.

Edited much later to add: there's a variant of this puzzle that's very popular on the internet at the moment, in which answer option (c) is 60% rather than 0%. In this variant it is at least internally consistent to claim that all of the answers are wrong, and so the probability of getting a right one by choosing randomly is 0%. Whether this actually resolves the variant puzzle is more a matter of taste and temperament than an objective mathematical question. It is not in general true for self-referencing questions that simply being internally consistent is enough for an answer to be unambiguously right; otherwise the question 'is "yes" the correct answer to this question?' would have two different "right" answers, because "yes" and "no" are both internally consistent. In the 60% variant of the puzzle it happens that the only internally consistent answer is "0%", but even so one might, as a matter of caution, still deny that such reasoning by elimination is valid for self-referential statements at all. If one adopts this stance, one would still consider the 60% variant meaningless.

One rationale for taking this strict position would be that we don't want to accept reasoning by elimination on: true or false? (1) The great pumpkin exists. (2) Both of these statements are false. Here the only internally consistent resolution is that the first statement is true and the second one is false. However, it appears to be unsound to conclude that the great pumpkin exists simply on the basis that the puzzle was posed. On the other hand, it is difficult to argue that there is no possible principle that will cordon off the great pumpkin example as meaningless while still allowing the 60% variant to be meaningful. In the end, though, these things are more matters of taste and philosophy than they are mathematics. In mathematics we generally prefer to play it safe and completely refuse to work with explicitly self-referential statements. This avoids the risk of paradox, and does not seem to hinder mathematical arguments about the things mathematicians are ordinarily interested in. So whatever one decides to do with the question-about-itself, what one does is not really mathematics.
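The internal-consistency check described above can be made mechanical. A small Python sketch (my own, not from the original answer; it assumes the option lists are a) 25%, b) 50%, c) 0% or 60%, d) 25%, consistent with the reasoning given) tests, for each candidate value v, whether "the chance of randomly picking an option equal to v is exactly v%" holds:

```python
from collections import Counter

def consistent_values(options):
    """Values v (in percent) such that the claim 'the correct answer is v%'
    is self-consistent: the chance of randomly picking an option equal to v
    is exactly v%. We also test v = 0, the 'no option is correct' claim."""
    counts = Counter(options)
    n = len(options)
    out = []
    for v in sorted(set(options) | {0}):
        chance = 100 * counts.get(v, 0) / n
        if chance == v:
            out.append(v)
    return out

classic = consistent_values([25, 50, 0, 25])   # (c) is 0%
variant = consistent_values([25, 50, 60, 25])  # (c) is 60%
print(classic)  # [] : no self-consistent answer, the paradox
print(variant)  # [0]: only "0%" is internally consistent
```

This mirrors the argument in the text: the 0% version admits no consistent answer at all, while the 60% variant admits exactly one, which is why reasoning by elimination is at least tempting there.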
|
https://api.stackexchange.com
|
I've written my own integrator, quadcc, which copes substantially better than the MATLAB integrators with singularities, and provides a more reliable error estimate. To use it for your problem, I did the following:

>> lambda = 0.00313; kappa = 0.00825; nu = 0.33;
>> x = 10;
>> e = @(r) r.^4 .* (lambda*sqrt(kappa^2 + r.^2)).^(-nu - 5/2) .* besselk(-nu - 5/2, lambda*sqrt(kappa^2 + r.^2));
>> sincp = @(x) cos(x)./x - sin(x)./x.^2;
>> f = @(r) sincp(x*r) .* r .* sqrt(e(r));

The function f is now your integrand. Note that I've just assigned any old value to x. In order to integrate on an infinite domain, I apply a substitution of variables:

>> g = @(x) f(tan(pi/2*x)) .* (1 + tan(pi*x/2).^2) * pi/2;

i.e. integrating g from 0 to 1 should be the same as integrating f from 0 to $\infty$. Different transforms may produce different quality results: mathematically all transforms should give the same result, but different transforms may produce smoother, or more easily integrable, gs. I then call my own integrator, quadcc, which can deal with the NaNs on both ends:

>> [int, err, npoints] = quadcc(g, 0, 1, 1e-6)
int = -1.9552e+06
err = 1.6933e+07
npoints = 20761

Note that the error estimate is huge, i.e. quadcc doesn't have much confidence in the result. Looking at the function, though, this is not surprising, as it oscillates at values three orders of magnitude above the actual integral. Again, using a different interval transform may produce better results. You may also want to look at more specific methods such as this. It's a bit more involved, but definitely the right method for this type of problem.
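The tan substitution itself is easy to sanity-check in isolation. Here is a small Python sketch (my own, not part of the original answer; quadcc is replaced by a plain composite Simpson rule) verifying that the map $r = \tan(\pi t/2)$ turns the known integral $\int_0^\infty e^{-r}\,dr = 1$ into an equivalent integral over $[0, 1]$:

```python
import math

def transform(f):
    """Map an integrand f on [0, inf) to g on [0, 1] via r = tan(pi*t/2),
    so that the integral of g over [0, 1] equals that of f over [0, inf)."""
    def g(t):
        r = math.tan(math.pi * t / 2)
        return f(r) * (1 + r * r) * math.pi / 2
    return g

def simpson(g, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

g = transform(lambda r: math.exp(-r))  # integral of e^{-r} over [0, inf) is 1
result = simpson(g, 0.0, 1.0, 2000)
print(result)  # close to 1.0
```

For the rapidly decaying exponential the transformed integrand vanishes smoothly at $t = 1$, so even a naive fixed-grid rule works; the oscillatory Bessel-type integrand in the question is exactly the kind of case where it would not, which is why an adaptive integrator with a trustworthy error estimate matters there.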
|
https://api.stackexchange.com
|
2017-10-27 update [note: my earlier notation-focused answer, unchanged, is below this update.]

Yes. While having an octet of valence electrons creates an exceptionally deep energy minimum for most atoms, it is only a minimum, not a fundamental requirement. If there are sufficiently strong compensating energy factors, even atoms that strongly prefer octets can form stable compounds with more (or less) than the 8 valence shell electrons. However, the same bonding mechanisms that enable the formation of greater-than-8 valence shells also enable alternative structural interpretations of such shells, depending mostly on whether such bonds are interpreted as ionic or covalent. Manishearth's excellent answer explores this issue in much greater detail than I do here.

Sulfur hexafluoride, $\ce{SF6}$, provides a delightful example of this ambiguity. As I described diagrammatically in my original answer, the central sulfur atom in $\ce{SF6}$ can be interpreted as either: (a) a sulfur atom in which all 6 of its valence electrons have been fully ionized away by six fluorine atoms, or (b) a sulfur atom with a stable, highly symmetric 12-electron valence shell that is both created and stabilized by six octahedrally located fluorine atoms, each of which covalently shares an electron pair with the central sulfur atom. While both of these interpretations are plausible from a purely structural perspective, the ionization interpretation has serious problems. The first and greatest problem is that fully ionizing all 6 of sulfur's valence electrons would require energy levels that are unrealistic ("astronomical" might be a more apt word). A second issue is that the stability and clean octahedral symmetry of $\ce{SF6}$ strongly suggest that the 12 electrons around the sulfur atom have reached a stable, well-defined energy minimum that is different from its usual octet structure.

Both points imply that the simpler and more energetically accurate interpretation of the sulfur valence shell in $\ce{SF6}$ is that it has 12 electrons in a stable, non-octet configuration. Notice also that for sulfur this 12-electron stable energy minimum is unrelated to the larger numbers of valence-related electrons seen in transition element shells, since sulfur simply does not have enough electrons to access those more complex orbitals. The 12-electron valence shell of $\ce{SF6}$ is instead a true bending of the rules for an atom that in nearly all other circumstances prefers to have an octet of valence electrons. That is why my overall answer to this question is simply "yes".

Question: why are octets special? The flip side of whether stable non-octet valence shells exist is this: why do octet shells provide an energy minimum that is so deep and universal that the entire periodic table is structured into rows that end (except for helium) with noble gases with octet valence shells? In a nutshell, the reason is that for any energy level above the special case of the $n = 1$ shell (helium), the "closed shell" orbital set $\{s, p_x, p_y, p_z\}$ is the only combination of orbitals whose angular momenta are (a) all mutually orthogonal, and (b) cover all such orthogonal possibilities for three-dimensional space. It is this unique orthogonal partitioning of angular momentum options in 3D space that makes the $\{s, p_x, p_y, p_z\}$ orbital octet both especially deep and relevant even in the highest energy shells. We see the physical evidence of this in the striking stability of the noble gases. The reason orthogonality of angular momentum states is so important at atomic scales is the Pauli exclusion principle, which requires that every electron have its own unique state. Having orthogonal angular momentum states provides a particularly clean and easy way to provide strong state separation between electron orbitals, and thus avoid the larger energy penalties imposed by Pauli exclusion. Pauli exclusion conversely makes incompletely orthogonal sets of orbitals substantially less attractive energetically. Because they force more orbitals to share the same spherical space as the fully orthogonal $p_x$, $p_y$, and $p_z$ orbitals of the octet, the $d$, $f$, and higher orbitals are increasingly less orthogonal, and thus subject to increasing Pauli exclusion energy penalties.
A final note: I may later add another addendum to explain angular momentum orthogonality in terms of classical, satellite-type circular orbits. If I do, I'll also add a bit of explanation as to why the $p$ orbitals have such bizarrely different dumbbell shapes. (A hint: if you have ever watched people create two loops in a single skip rope, the equations behind such double loops have unexpected similarities to the equations behind $p$ orbitals.)

Original 2014-ish answer (unchanged): This answer is intended to supplement Manishearth's earlier answer, rather than compete with it. My objective is to show how octet rules can be helpful even for molecules that contain more than the usual complement of eight electrons in their valence shell. I call it donation notation, and it dates back to my high school days, when none of the chemistry texts in my small-town library bothered to explain how those oxygen bonds worked in anions such as carbonate, chlorate, sulfate, nitrate, and phosphate. The idea behind this notation is simple. You begin with the electron dot notation, then add arrows that show whether and how other atoms are "borrowing" each electron. A dot with an arrow means that the electron "belongs" mainly to the atom at the base of the arrow, but is being used by another atom to help complete that atom's octet. A simple arrow without any dot indicates that the electron has effectively left the original atom. In that case, the electron is no longer attached to the arrow at all, but is instead shown as an increase in the number of valence electrons in the atoms at the end of the arrow. Here are examples using table salt (ionic) and oxygen (covalent): notice that the ionic bond of $\ce{NaCl}$ shows up simply as an arrow, indicating that sodium has "donated" its outermost electron and fallen back to its inner octet of electrons to satisfy its own completion priorities. (Such inner octets are never shown.) Covalent bonds happen when each atom contributes one electron to a bond. Donation notation shows both electrons, so doubly bonded oxygen winds up with four arrows between the atoms. Donation notation is not really needed for simple covalent bonds, however. It's intended more for showing how bonding works in anions.
Two closely related examples are calcium sulfate ($\ce{CaSO4}$, better known as gypsum) and calcium sulfite ($\ce{CaSO3}$, a common food preservative): in these examples the calcium donates via a mostly ionic bond, so its contribution becomes a pair of arrows that donate two electrons to the core of the anion, completing the octet of the sulfur atom. The oxygen atoms then attach to the sulfur and "borrow" entire electron pairs, without really contributing anything in return. This borrowing model is a major factor in why there can be more than one anion for elements such as sulfur (sulfates and sulfites) and nitrogen (nitrates and nitrites). Since the oxygen atoms are not needed for the central atom to establish a full octet, some of the pairs in the central octet can remain unattached. This results in less oxidized anions such as sulfites and nitrites.

Finally, a more ambiguous example is sulfur hexafluoride: the figure shows two options. Should $\ce{SF6}$ be modeled as if the sulfur is a metal that gives up all of its electrons to the hyper-aggressive fluorine atoms (option a), or as a case where the octet rule gives way to a weaker but still workable 12-electron rule (option b)? There is some controversy even today about how such cases should be handled. The donation notation shows how an octet perspective can still be applied to such cases, though it is never a good idea to rely on first-order approximation models for such extreme cases.

2014-04-04 update: Finally, if you are tired of dots and arrows and yearn for something closer to standard valence bond notation, these two equivalences come in handy: the upper straight-line equivalence is trivial, since the resulting line is identical in appearance and meaning to the standard covalent bond of organic chemistry. The second, u-bond notation, is the novel one. I invented it out of frustration in high school back in the 1970s (yes, I'm that old), but never did anything with it at the time. The main advantage of u-bond notation is that it lets you prototype and assess non-standard bonding relationships while using only standard atomic valences. As with the straight-line covalent bond, the line that forms the u-bond represents a single pair of electrons. However, in a u-bond, it is the atom at the bottom of the u that donates both electrons in the pair. That atom gets nothing out of the deal, so none of its bonding needs are changed or satisfied.
This lack of bond completion is represented by the absence of any line ends on that side of the u-bond. The beggar atom at the top of the u gets to use both of the electrons for free, which in turn means that two of its valence-bond needs are met. Notationally, this is reflected by the fact that both of the line ends of the u are next to that atom. Taken as a whole, the atom at the bottom of a u-bond is saying "I don't like it, but if you are that desperate for a pair of electrons, and if you promise to stay very close by, I'll let you latch onto a pair of electrons from my already-completed octet." Carbon monoxide, with its baffling "why does carbon suddenly have a valence of two?" structure, nicely demonstrates how u-bonds interpret such compounds in terms of more traditional bonding numbers: notice that two of carbon's four bonds are resolved by standard covalent bonds with oxygen, while the remaining two carbon bonds are resolved by the formation of a u-bond that lets the beggar carbon "share" one of the electron pairs from oxygen's already-full octet. Carbon ends up with four line ends, representing its four bonds, and oxygen ends up with two. Both atoms thus have their standard bonding numbers satisfied. Another more subtle insight from this figure is that since a u-bond represents a single pair of electrons, the combination of one u-bond and two traditional covalent bonds between the carbon and oxygen atoms involves a total of six electrons, and so should have similarities to the six-electron triple bond between two nitrogen atoms. This small prediction turns out to be correct: nitrogen and carbon monoxide molecules are in fact electron configuration homologs, one of the consequences of which is that they have nearly identical physical chemistry properties. Below are a few more examples of how u-bond notation can make anions, noble gas compounds, and odd organic compounds seem a bit less mysterious:
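The $\ce{N2}$/$\ce{CO}$ homolog claim is easy to verify by simple electron bookkeeping, and the same bookkeeping recovers the 12-electron count around sulfur under interpretation (b). A sketch of my own, not from the original answer:

```python
# Atomic numbers (total electron counts per atom) for a quick isoelectronic check.
Z = {"H": 1, "C": 6, "N": 7, "O": 8, "F": 9, "S": 16}

def electron_count(formula):
    """Total electrons in a molecule given as (element, count) pairs."""
    return sum(Z[el] * n for el, n in formula)

n2 = electron_count([("N", 2)])            # 14
co = electron_count([("C", 1), ("O", 1)])  # 14: N2 and CO are isoelectronic
print(n2, co, n2 == co)

# Valence electrons formally around sulfur in SF6 under the covalent
# interpretation (b): six S-F bonds, two shared electrons each.
s_valence = 6 * 2
print(s_valence)  # 12
```

The equal totals are exactly why the six-electron u-bond-plus-double-bond picture of $\ce{CO}$ parallels the triple bond of $\ce{N2}$.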
|
https://api.stackexchange.com
|
Both thawing and evaporation involve heat exchange between the stone tile, the water sitting atop the stone tile, any water that's been absorbed by the stone tile, and the air around. The basic reason that the center and the edges of the tile evaporate differently is that the gaps between the tiles change the way that heat is exchanged there. However, the details of how that works are a little more involved than I can get into at the moment, and would be lost on a three-year-old anyway. A good way to explain this phenomenon to a three-year-old would be to bake a batch of brownies in a square pan, and watch how the brownies get done from the outside of the pan inwards. Even after they are finished, you can still tell the difference between the super-crispy corner brownies, the medium-crispy edge brownies, and the gooey middle-of-the-pan brownies. The three-year-old would probably ask you to repeat this explanation many times. I think the shapes are not exactly circles, superellipses, or any other simple mathematical object (there's too much real life in the way), but they do become more circular as the remaining puddle gets farther from the edges. A related explanation.
|
https://api.stackexchange.com
|
Paper, especially when freshly cut, might appear to have smooth edges, but in reality its edges are serrated (i.e. jagged), making it more like a saw than a smooth blade. This enables the paper to tear through the skin fairly easily. The jagged edges greatly reduce the contact area, which makes the applied pressure rather high. Thus the skin can be easily punctured, and as the paper moves in a transverse direction, the jagged edge will tear the skin open.

Paper may bend easily, but it's very resistant to lateral compression (along its surface). Try squeezing a few sheets of paper in a direction parallel to their surface (preferably by placing them flat on a table and attempting to "compress" them laterally), and you will see what I mean. This is analogous to cutting skin with a metal saw versus a rubber one; the paper is more like the metal one in this case.

Paper is rather stiff in short lengths, such as a single piece of paper jutting out from a stack (which is what causes cuts a lot of the time). Most of the time, holding a single large piece of paper and pressing it against your skin won't do much more than bend the paper, but holding it such that only a small length is exposed will make it much harder to bend. The normal force from your skin and the downward force form what is known as a torque couple. There is a certain threshold torque before the paper gives way and bends instead. A shorter length of paper has a shorter lever arm, which greatly increases the tolerance for misalignment of the two forces. Holding the paper at a longer length decreases this threshold (i.e. you have to press down much more precisely over the contact point for the paper not to bend). This is also an important factor in determining whether the paper presses into your skin or simply bends.

Paper is made of short cellulose fibers / pulp, which are attached to each other through hydrogen bonding and possibly a finishing layer. When paper is bent or folded, fibers at the folding line separate and detach, making the paper much weaker. Even if we unfold the folded paper, those detached fibers do not re-attach to each other as before, so the folding line remains a mechanically weak region with decreased stiffness. This is why freshly made, unfolded paper is also more likely to cause cuts. Lastly, whether a piece of paper cuts skin easily of course depends on its stiffness. This is why office paper is much more likely to cut you than toilet paper. The paper's grammage (mass per unit area) has a direct influence on its stiffness.
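The pressure argument at the start can be put into rough numbers. The figures below are entirely hypothetical (my own illustration, not from the answer): the point is only that concentrating the same force onto serration tips multiplies the pressure by the ratio of contact areas.

```python
# Hypothetical numbers, just to illustrate pressure = force / area.
force = 1.0                        # newtons, an assumed pressing force
smooth_area = 20e-3 * 0.1e-3       # 20 mm edge x 0.1 mm thickness, in m^2
serrated_area = smooth_area / 50   # assume serration tips touch ~2% of the edge

p_smooth = force / smooth_area     # ~5e5 Pa
p_serrated = force / serrated_area # 50x higher for the same force
print(p_smooth, p_serrated, p_serrated / p_smooth)
```

Same force, fifty times the pressure at the points of contact, which is the sense in which a jagged edge behaves like a saw.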
|
https://api.stackexchange.com
|
There are two points relevant for the discussion: air itself carries a very small amount of thermal energy, and it is a very poor thermal conductor.

For the first point, I think it is interesting to consider the product $\text{density} \times \text{specific heat}$, that is, the amount of energy per unit volume that can be transferred for every $\text{K}$ of temperature difference. In order of magnitude, the specific heats are roughly comparable, but the density of air is $10^3$ times smaller than the density of a common metal; this means that in a given volume there are far fewer "molecules" of air that can store thermal energy than in a solid metal, and hence the air holds much less thermal energy: not enough to cause a dangerous rise in your temperature.

The second point is the rate at which energy is transferred to your hand, that is, the flow of heat from the other objects (air included) to your hand. In the same amount of time and over the same exposed surface, touching air or a solid object causes you to receive a very different amount of energy. The relevant quantity to consider is the thermal conductivity, that is, the energy transferred per unit time, surface, and temperature difference. I added this to give more visibility to his comment; my original answer follows.

Air is a very poor conductor of heat, the reason being that the molecules are less concentrated and interact less with each other, as you conjectured (this is not very precise, but in general situations this way of thinking works). By contrast, solids are in general better conductors: this is the reason why you should not touch anything inside the oven. Considering orders of magnitude, according to Wikipedia, air has a thermal conductivity $\lesssim 10^{-1}\ \text{W/(m K)}$, whereas for metals it is higher by at least two orders of magnitude.

I really thank zephyr and Chemical Engineer for the insight that they brought to my original answer, which was much poorer but got an unexpected fame.
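The density-times-specific-heat comparison can be made concrete with textbook values (the numbers below are standard order-of-magnitude figures I am supplying, not taken from the answer):

```python
# Volumetric heat capacity = density * specific heat, J/(m^3 K).
rho_air, c_air = 1.2, 1005     # kg/m^3, J/(kg K), air at room conditions
rho_iron, c_iron = 7870, 450   # kg/m^3, J/(kg K), iron

cv_air = rho_air * c_air       # ~1.2e3 J/(m^3 K)
cv_iron = rho_iron * c_iron    # ~3.5e6 J/(m^3 K)
ratio = cv_iron / cv_air
print(cv_air, cv_iron, ratio)  # the metal stores ~3000x more heat per volume
```

The specific heats are within a factor of ~2 of each other; essentially the whole three-orders-of-magnitude gap comes from the density, exactly as the answer argues.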
|
https://api.stackexchange.com
|
OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from THE BOOK" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9 (edit: ... which is actually the proof that I read in Aigner & Ziegler).

When $0 < x < \pi/2$ we have $0 < \sin x < x < \tan x$ and thus $$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$ Note that $1/\tan^2 x = 1/\sin^2 x - 1$. Split the interval $(0, \pi/2)$ into $2^n$ equal parts, and sum the inequality over the (inner) "gridpoints" $x_k = (\pi/2) \cdot (k/2^n)$: $$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$ Denoting the sum on the right-hand side by $S_n$, we can write this as $$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$

Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with, $$\frac{1}{\sin^2 x} + \frac{1}{\sin^2\left(\frac{\pi}{2} - x\right)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$ Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0, \pi/2)$ together with the point $\pi/2 - x_k$ in the right half), we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes $1/\sin^2(\pi/4) = 2$ to the sum. In short, $$S_n = 4 S_{n-1} + 2.$$ Since $S_1 = 2$, the solution of this recurrence is $$S_n = \frac{2(4^n - 1)}{3}.$$ (For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1 = (S_p)_1 + (S_h)_1 = 2$.) We now have $$\frac{2(4^n - 1)}{3} - (2^n - 1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n - 1)}{3}.$$ Multiply by $\pi^2 / 4^{n+1}$ and let $n \to \infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
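The squeeze can be watched numerically. A short Python check (my own addition) uses the closed form $S_n = 2(4^n - 1)/3$ to build the two bounds and confirms that the partial sum $\sum_{k=1}^{2^n-1} 1/k^2$ sits between them, with both bounds closing in on $\pi^2/6$:

```python
import math

def bounds(n):
    """Lower/upper bounds on sum_{k=1}^{2^n - 1} 1/k^2 from the squeeze:
    multiply the inequality by pi^2 / 4^(n+1)."""
    s = 2 * (4**n - 1) / 3            # closed form for S_n = 4 S_{n-1} + 2, S_1 = 2
    scale = math.pi**2 / 4**(n + 1)
    return scale * (s - (2**n - 1)), scale * s

n = 10
partial = sum(1 / k**2 for k in range(1, 2**n))
lo, hi = bounds(n)
print(lo, partial, hi)
print(hi - lo)  # the gap shrinks like 2^-n as n grows
```

At $n = 10$ both bounds already agree with $\pi^2/6 \approx 1.6449$ to about three decimal places.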
|
https://api.stackexchange.com
|
I reproduce a blog post I wrote some time ago: We tend not to use higher derivative theories. It turns out that there is a very good reason for this, but that reason is rarely discussed in textbooks. We will take, for concreteness, $L(q, \dot q, \ddot q)$, a Lagrangian which depends on the 2nd derivative in an essential manner. Inessential dependences are terms such as $q \ddot q$ which may be partially integrated to give ${\dot q}^2$. Mathematically, this is expressed through the necessity of being able to invert the expression $$p_2 = \frac{\partial L(q, \dot q, \ddot q)}{\partial \ddot q},$$ and get a closed form for $\ddot q(q, \dot q, p_2)$. Note that usually we also require a similar statement for $\dot q(q, p)$, and failure in this respect is a sign of having a constrained system, possibly with gauge degrees of freedom. In any case, the non-degeneracy leads to the Euler-Lagrange equations in the usual manner: $$\frac{\partial L}{\partial q} - \frac{d}{dt} \frac{\partial L}{\partial \dot q} + \frac{d^2}{dt^2} \frac{\partial L}{\partial \ddot q} = 0.$$ This is then fourth order in $t$, and so requires four initial conditions, such as $q$, $\dot q$, $\ddot q$, $q^{(3)}$. This is twice as many as usual, and so we can get a new pair of conjugate variables when we move into a Hamiltonian formalism. We follow the steps of Ostrogradski, and choose our canonical variables as $q_1 = q$, $q_2 = \dot q$, which leads to \begin{align} p_1 &= \frac{\partial L}{\partial \dot q} - \frac{d}{dt} \frac{\partial L}{\partial \ddot q}, \\ p_2 &= \frac{\partial L}{\partial \ddot q}. \end{align} Note that the non-degeneracy allows $\ddot q$ to be expressed in terms of $q_1$, $q_2$ and $p_2$ through the second equation, and the first one is only necessary to define $q^{(3)}$. We can then proceed in the usual fashion, and find the Hamiltonian through a Legendre transform: \begin{align} H &= \sum_i p_i \dot q_i - L \\ &= p_1 q_2 + p_2 \ddot q(q_1, q_2, p_2) - L(q_1, q_2, \ddot q). \end{align} Again, as usual, we can take the time derivative of the Hamiltonian to find that it is time independent if the Lagrangian does not depend on time explicitly, and thus can be identified as the energy of the system. However, we now have a problem: $H$ has only a linear dependence on $p_1$, and so can be arbitrarily negative. In an interacting system this means that we can excite positive energy modes by transferring energy from the negative energy modes, and in doing so we would increase the entropy; there would simply be more particles, and so a need to put them somewhere. Thus such a system could never reach equilibrium, exploding instantly in an orgy of particle creation. This problem is in fact completely general, and applies to even higher derivatives in a similar fashion.
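The linear-in-$p_1$ instability is easy to exhibit on a toy example. Take $L = \frac{1}{2}\ddot q^2$ (my own choice for illustration, not from the post). Then $p_2 = \ddot q$, and the Legendre transform above gives $H = p_1 q_2 + \frac{1}{2}p_2^2$, which a few lines of Python show is unbounded below:

```python
# Ostrogradski Hamiltonian for the toy Lagrangian L = (1/2) qddot^2:
# p2 = qddot, so H = p1*q2 + p2**2 / 2, linear in p1.
def H(q1, q2, p1, p2):
    return p1 * q2 + p2**2 / 2

# Fix q2 = 1 and push p1 toward -infinity: the energy has no floor.
print(H(0, 1, -10, 0))    # -10.0
print(H(0, 1, -1000, 0))  # -1000.0, arbitrarily negative
```

No choice of the remaining variables can compensate, which is the content of the Ostrogradski instability argument in the text.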
|
https://api.stackexchange.com
|
This is easy to check: you can download both specs in .tex format and diff them. Changes in v4.2 compared to v4.1: INFO field format: SOURCE and VERSION are added as recommended fields. An INFO field can have one value for each possible allele (code R). For all of the ##INFO, ##FORMAT, ##FILTER, and ##ALT meta-information lines, extra fields can be included after the default fields. The alternate base (ALT) can include *: missing due to an upstream deletion. Quality scores: a sentence was removed ("High QUAL scores indicate high confidence calls. Although traditionally people use integer phred scores, this field is permitted to be a floating point to enable higher resolution for low confidence calls if desired."). The examples changed a bit.
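Two of these changes, the Number=R counting and the `*` ALT allele, can be illustrated on a made-up record (the field name AD and all values below are my own illustration, not taken from either spec):

```python
# A minimal, made-up VCF-style data line: REF=A, two ALT alleles (G and the
# v4.2 '*' upstream-deletion allele), and an INFO tag with Number=R semantics,
# i.e. one value per allele with the reference allele included.
record = "1\t100\t.\tA\tG,*\t50.0\tPASS\tAD=10,5,2"

fields = record.split("\t")
alts = fields[4].split(",")              # ['G', '*']
ad = fields[7].split("=")[1].split(",")  # ['10', '5', '2']
print(alts)
print(len(ad) == len(alts) + 1)          # True: R means REF plus each ALT
```

So a Number=R field on a record with $k$ ALT alleles always carries $k + 1$ values, which is exactly what distinguishes it from the older Number=A (ALT alleles only).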
|
https://api.stackexchange.com
|
i don't know which algorithm google uses. but, since you wanted a best guess, let me give some ideas on how a similar system could be constructed. the whole field dealing with searching for images by image content is called content based image retrieval ( cbir ). the idea is somehow to construct an image representation ( not necessarily understandable by humans ) that contains the information about image content. two basic approaches exist : retrieval using low - level ( local ) features : color, texture, shape at specific parts of images ( an image is a collection of descriptors of local features ) semantic approaches where an image is, in some way, represented as a collection of objects and their relations the low - level local approach is very well researched. the best current approach extracts local features ( there's a choice of feature extraction algorithm involved here ) and uses their local descriptors ( again, a choice of descriptors ) to compare the images. in newer works, the local descriptors are clustered first and then the clusters are treated as visual words - - the technique is then very similar to google document search, but using visual words instead of letter - words. you can think of visual words as equivalents to word roots in language : for example, the words work, working, and worked all belong to the same word root. one of the drawbacks of these kinds of methods is that they usually under - perform on low - texture images. i've already given and seen a lot of answers detailing these approaches, so i'll just provide links to those answers : cbir : 1, 2 feature extraction / description : 1, 2, 3, 4 semantic approaches are typically based on hierarchical representations of the whole image. these approaches have not yet been perfected, especially for general image types. there is some success in applying these kinds of techniques to specific image domains. as i am currently in the middle of research on these approaches, i cannot draw any conclusions yet.
now, that said, i explained a general idea behind these techniques in this answer. once again, shortly : the general idea is to represent an image with a tree - shaped structure, where leaves contain the image details and objects can be found in the nodes closer to the root of such trees. then, somehow, you compare the sub - trees to identify the objects contained in different images. here are some references for different tree representations. i did not read all of them, and some of them use this kind of representations for segmentation instead of cbir
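the visual-words idea can be sketched with a toy example (entirely my own construction: random stand-in descriptors and a hand-rolled k-means; a real system would use sift/orb descriptors and a vocabulary of thousands of words):

```python
# toy bag-of-visual-words: cluster local descriptors into a small vocabulary,
# then represent each "image" as a normalized histogram of word counts.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=20):
    # minimal k-means; real pipelines use a library implementation
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def bag_of_words(descriptors, centers):
    # assign each descriptor to its nearest "visual word", then histogram
    words = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

all_desc = rng.normal(size=(200, 8))   # stand-in local descriptors
vocab = kmeans(all_desc, k=5)          # the "visual vocabulary"
h1 = bag_of_words(all_desc[:100], vocab)   # "image 1"
h2 = bag_of_words(all_desc[100:], vocab)   # "image 2"
similarity = np.minimum(h1, h2).sum()  # histogram intersection, in [0, 1]
print(similarity)
```

comparing word histograms is what makes this "very similar to google document search": the retrieval step reduces to comparing sparse term-frequency vectors.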
|
https://api.stackexchange.com
|
, but still, here they are : binary partition trees and mention of min / max trees : p. salembier, m. h. f. wilkinson : connected operators binary partition trees : v. vilaplana, f. marques, p. salembier : binary partition trees for object detection tree of shapes ( component tree ) : p. monasse, f. guichard : fast computation of contrast - invariant image representation, c. ballester, v. caselles, p. monasse : the tree of shapes of an image monotonic trees : y. song, a. zhang : analyzing scenery images by monotonic tree edit : further digging shows that tree of shapes and monotonic tree are equivalent, except processing the image in 4 - / 8 - ( tree of shapes ) or 6 - connectivity ( monotonic ) extrema - watershed tree : a. vichik, r. keshet, d. malah : self - dual morphology on tree semilattices and applications constrained connectivity, alpha - trees, ultrametric watersheds : p. soille, l. najman : on morphological hierarchical representations for image processing and spatial data clustering
|
https://api.stackexchange.com
|
no, d - is not ground. data is sent over a differential line, which means that d - is a mirror image of d +, so both data lines carry the signal. the receiver subtracts d - from d +. if some noise signal would be picked up by both wires, the subtraction will cancel it. so differential signalling helps suppressing noise. so does the type of wiring, namely twisted pair. if the wires ran just parallel they would form a ( narrow ) loop which could pick up magnetic interference. but thanks to the twists the orientation of the wires with respect to the field changes continuously. an induced current will be cancelled by a current with the opposite sign half a twist further. suppose you have a disturbance working vertically on the twisted wire. you could regard each half twist as a small loop picking up the disturbance. then it's easy to see that the next tiny loop sees the opposite field ( upside down, so to speak ), so that cancels the first field. this happens for each pair of half twists. a similar balancing effect occurs for capacitance to ground. in a straight pair one conductor shows a higher capacitance to ground than the other, while in a twisted pair each wire will show the same capacitance. edit cables with several twisted pairs like cat5 have a different twist length for each pair to minimize crosstalk.
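the subtraction step can be illustrated numerically (a small demo of my own, not part of the answer): noise coupled equally into both wires disappears when the receiver takes the difference.

```python
# common-mode noise hits d+ and d- equally; subtracting the lines
# cancels it and recovers the differential data exactly.
import numpy as np

rng = np.random.default_rng(1)
data = np.sign(rng.normal(size=100))      # stand-in bit stream (+1 / -1)
noise = rng.normal(scale=5.0, size=100)   # interference picked up by both wires

d_plus = data + noise
d_minus = -data + noise                   # d- is a mirror image of d+
received = (d_plus - d_minus) / 2         # subtraction cancels the common noise

print(np.allclose(received, data))  # -> True
```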
|
https://api.stackexchange.com
|
if you ran an electrical current through a material today, it would travel at the same speed as if you did it with the same material 50 years ago. with that in mind, how is it computers have become faster? what main area of processor design is it that has given these incredible speed increases? you get erroneous conclusions because your initial hypothesis is wrong : you think that cpu speed is equivalent to the speed of the electrons in the cpu. in fact, the cpu is some synchronous digital logic. the limit for its speed is that the output of a logical equation shall be stable within one clock period. with the logic implemented with transistors, the limit is mainly linked to the time required to make transistors switch. by reducing their channel size, we are able to make them switch faster. this is the main reason for improvement in max frequency of cpus for 50 years. today, we also modify the shape of the transistors to increase their switching speed, but, as far as i know, only intel, global foundries and tsmc are able to create finfets today. yet, there are some other ways to improve the maximum clock speed of a cpu : if you split your logical equation into several smaller ones, you can make each step faster, and have a higher clock speed. you also need more clock periods to perform the same action, but, using pipelining techniques, you can make the rate of instructions per second follow your clock rate. today, the speed of electrons has become a limit : at 10ghz, an electric signal can't be propagated on more than 3cm. this is roughly the size of current processors. to avoid this issue, you may have several independent synchronous domains in your chip, reducing the constraints on signal propagation. but this is only one limiting factor, amongst transistor switching speed, heat dissipation, emc, and probably others ( but i'm not in the silicon foundry industry ).
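the "3 cm at 10 ghz" figure is just one clock period of travel at (roughly) the speed of light; a back-of-envelope check:

```python
# signal travel distance in one clock period. using the vacuum speed of
# light as a rough upper bound -- on-chip propagation is notably slower,
# which only makes the constraint tighter.
c = 3.0e8        # m/s
f = 10.0e9       # 10 GHz clock
distance_cm = c / f * 100
print(distance_cm)  # about 3 cm, comparable to the size of a processor
```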
|
https://api.stackexchange.com
|
here's what's really going on with the dual problem. ( this is my attempt to answer my own question, over a year after originally asking it. ) ( a very nice presentation of this material is given in ekeland and temam. these ideas are also in rockafellar. ) let $ v $ be a finite dimensional normed vector space over $ \ mathbb r $. ( working in an inner product space or just in $ \ mathbb r ^ n $ risks concealing the fundamental role that the dual space plays in duality in convex optimization. ) the basic idea behind duality in convex analysis is to think of a convex set in terms of its supporting hyperplanes. ( a closed convex set $ \ omega $ can be " recovered " from its supporting hyperplanes by taking the intersection of all closed half spaces containing $ \ omega $. the set of all supporting hyperplanes to $ \ omega $ is sort of a " dual representation " of $ \ omega $. ) for a convex function $ f $ ( whose epigraph is a convex set ), this strategy leads us to think about $ f $ in terms of affine functions $ \ langle m ^ *, x \ rangle - \ alpha $ which are majorized by $ f $. ( here $ m ^ * \ in v ^ * $ and we are using the notation $ \ langle m ^ *, x \ rangle = m ^ * ( x ) $. ) for a given slope $ m ^ * \ in v ^ * $, we only need to consider the " best " choice of $ \ alpha $ - - the other affine minorants with slope $ m ^ * $ can be disregarded. \ begin { align * } & f ( x ) \ geq \ langle m ^ *, x \ rangle - \ alpha \ quad \ forall x \ in v \ \ \ iff & \ alpha \ geq \ langle m ^ *, x \ rangle - f ( x ) \ quad \ forall x \ in v \ \ \ iff & \ alpha \ geq \ sup _ { x \ in v } \ quad \ langle m ^ *, x \ rangle - f ( x ) \ end { align * } so the best choice of $ \ alpha $ is \ begin { equation } f ^ * ( m ^ * ) = \ sup _ { x \ in v } \ quad \ langle m
|
https://api.stackexchange.com
|
^ *, x \ rangle - f ( x ). \ end { equation } if this supremum is finite, then $ \ langle m ^ *, x \ rangle - f ^ * ( m ^ * ) $ is the best affine minorant of $ f $ with slope $ m ^ * $. if $ f ^ * ( m ^ * ) = \ infty $, then there is no affine minorant of $ f $ with slope $ m ^ * $. the function $ f ^ * $ is called the " conjugate " of $ f $. the definition and basic facts about $ f ^ * $ are all highly intuitive. for example, if $ f $ is a proper closed convex function then $ f $ can be recovered from $ f ^ * $, because any closed convex set ( in this case the epigraph of $ f $ ) is the intersection of all the closed half spaces containing it. ( i still think the fact that the " inversion formula " $ f = f ^ { * * } $ is so simple is a surprising and mathematically beautiful fact, but not hard to derive or prove with this intuition. ) because $ f ^ * $ is defined on the dual space, we see already the fundamental role played by the dual space in duality in convex optimization. given an optimization problem, we don't obtain a dual problem until we specify how to perturb the optimization problem. this is why equivalent formulations of an optimization problem can lead to different dual problems. by reformulating it we have in fact specified a different way to perturb it. as is typical in math, the ideas become clear when we work at an appropriate level of generality. assume that our optimization problem is \ begin { equation * } \ operatorname * { minimize } _ { x } \ quad \ phi ( x, 0 ). \ end { equation * } here $ \ phi : x \ times y \ to \ bar { \ mathbb r } $ is convex. standard convex optimization problems can be written in this form with an appropriate choice of $ \ phi $. the perturbed problems are \ begin { equation * } \ operatorname * { minimize } _ { x } \ quad \ phi ( x, y ) \ end { equation * } for nonzero values of $ y \ in y $. let $ h ( y ) = \ inf _ x \ phi ( x, y ) $. our optimization
|
https://api.stackexchange.com
|
problem is simply to evaluate $ h ( 0 ) $. from our knowledge of conjugate functions, we know that \ begin { equation * } h ( 0 ) \ geq h ^ { * * } ( 0 ) \ end { equation * } and that typically we have equality. for example, if $ h $ is subdifferentiable at $ 0 $ ( which is typical for a convex function ) then $ h ( 0 ) = h ^ { * * } ( 0 ) $. the dual problem is simply to evaluate $ h ^ { * * } ( 0 ) $. in other words, the dual problem is : \ begin { equation * } \ operatorname * { maximize } _ { y ^ * \ in y ^ * } \ quad - h ^ * ( y ^ * ). \ end { equation * } we see again the fundamental role that the dual space plays here. it is enlightening to express the dual problem in terms of $ \ phi $. it's easy to show that the dual problem is \ begin { equation * } \ operatorname * { maximize } _ { y ^ * \ in y ^ * } \ quad - \ phi ^ * ( 0, y ^ * ). \ end { equation * } so the primal problem is \ begin { equation * } \ operatorname * { minimize } _ { x \ in x } \ quad \ phi ( x, 0 ) \ end { equation * } and the dual problem ( slightly restated ) is \ begin { equation * } \ operatorname * { minimize } _ { y ^ * \ in y ^ * } \ quad \ phi ^ * ( 0, y ^ * ). \ end { equation * } the similarity between these two problems is mathematically beautiful, and we can see that if we perturb the dual problem in the obvious way, then the dual of the dual problem will be the primal problem ( assuming $ \ phi = \ phi ^ { * * } $ ). the natural isomorphism between $ v $ and $ v ^ { * * } $ is of fundamental importance here. the key facts about the dual problem - - strong duality, the optimality conditions, and the sensitivity interpretation of the optimal dual variables - - all become intuitively clear and even " obvious " from this viewpoint. an optimization problem in the form \ begin { align * } \ operatorname * { minimize } _ x & \ quad f ( x ) \ \ \ text { subject to
|
https://api.stackexchange.com
|
} & \ quad g ( x ) \ leq 0, \ end { align * } can be perturbed as follows : \ begin { align * } \ operatorname * { minimize } _ x & \ quad f ( x ) \ \ \ text { subject to } & \ quad g ( x ) + y \ leq 0. \ end { align * } this perturbed problem has the form given above with \ begin { equation * } \ phi ( x, y ) = \ begin { cases } f ( x ) \ quad \ text { if } g ( x ) + y \ leq 0 \ \ \ infty \ quad \ text { otherwise }. \ end { cases } \ end { equation * } to find the dual problem, we need to evaluate $ - \ phi ^ * ( 0, y ^ * ) $, which is a relatively straightforward calculation. \ begin { align * } - \ phi ^ * ( 0, y ^ * ) & = - \ sup _ { g ( x ) + y \ leq 0 } \ quad \ langle y ^ *, y \ rangle - f ( x ) \ \ & = - \ sup _ { \ substack { x \ \ q \ geq 0 } } \ quad \ langle y ^ *, - g ( x ) - q \ rangle - f ( x ) \ \ & = \ inf _ { \ substack { x \ \ q \ geq 0 } } \ quad f ( x ) + \ langle y ^ *, g ( x ) \ rangle + \ langle y ^ *, q \ rangle. \ end { align * } we can minimize first with respect to $ q $, and we will get $ - \ infty $ unless $ \ langle y ^ *, q \ rangle \ geq 0 $ for all $ q \ geq 0 $. in other words, we will get $ - \ infty $ unless $ y ^ * \ geq 0 $. the dual function is \ begin { equation * } - \ phi ^ * ( 0, y ^ * ) = \ begin { cases } \ inf _ x \ quad f ( x ) + \ langle y ^ *, g ( x ) \ rangle \ quad \ text { if } y ^ * \ geq 0 \ \ - \ infty \ quad \ text { otherwise }. \
|
https://api.stackexchange.com
|
end { cases } \ end { equation * } this is the expected result.
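the conjugate machinery used throughout can be checked numerically on a concrete function (my own toy example: f(x) = x², for which f*(y) = y²/4 and f** = f; the sup is approximated by brute force over a grid):

```python
# numeric fenchel conjugate: f*(y) = sup_x ( x*y - f(x) ), approximated
# on a grid. for f(x) = x**2 the conjugate is y**2/4, and conjugating
# twice recovers f, illustrating f = f** for a proper closed convex f.
import numpy as np

def conjugate(f_vals, grid, slope):
    return np.max(slope * grid - f_vals)

xs = np.linspace(-10, 10, 20001)          # step 0.001, contains the optima below
f_vals = xs ** 2
y = 3.0
fstar = conjugate(f_vals, xs, y)          # expect y**2/4 = 2.25

ys = np.linspace(-10, 10, 20001)
fstar_vals = ys ** 2 / 4                  # the exact conjugate
x = 1.5
fstarstar = conjugate(fstar_vals, ys, x)  # expect x**2 = 2.25

print(fstar, fstarstar)
```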
|
https://api.stackexchange.com
|
summary : yes, " polarised " aluminum " wet electrolytic " capacitors can legitimately be connected " back - to - back " ( ie in series with opposing polarities ) to form a non - polar capacitor. c1 and c2 are always equal in capacitance and voltage rating. ceffective = c1 / 2 = c2 / 2. veffective = vrating of c1 & c2. see " mechanism " at end for how this ( probably ) works. it is universally assumed that the two capacitors have identical capacitance when this is done. the resulting capacitor has half the capacitance of each individual capacitor. eg if two x 10 uf capacitors are placed in series the resulting capacitance will be 5 uf. i conclude that the resulting capacitor will have the same voltage rating as the individual capacitors. ( i may be wrong ). i have seen this method used on many occasions over many years and, more importantly, have seen the method described in application notes from a number of capacitor manufacturers. see at end for one such reference. understanding how the individual capacitors become correctly charged requires either faith in the capacitor manufacturers' statements ( " act as if they had been bypassed by diodes " ) or additional complexity, but understanding how the arrangement works once initiated is easier. imagine two back - to - back caps with cl fully charged and cr fully discharged. if a current is now passed through the series arrangement such that cl then discharges to zero charge then the reversed polarity of cr will cause it to be charged to full voltage. attempts to apply additional current and to further discharge cl so it assumes incorrect polarity would lead to cr being charged above its rated voltage. ie it could be attempted but would be outside spec for both devices. given the above, the specific questions can be answered : what are some reasons to connect capacitors in series? can create a bipolar cap from 2 x polar caps. or can double rated voltage as long as care is taken to balance voltage distribution.
parallel resistors are sometimes used to help achieve balance. " turns out that what might look like two ordinary electrolytics are not, in fact, two ordinary electrolytics. " this can be done with ordinary electrolytics. " no, do not do this. it will act as a capacitor also, but once you pass a few volts it will blow out the insulator. "
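the series-capacitance arithmetic from the summary is the ordinary series formula:

```python
# two equal capacitors in series halve the capacitance:
# 2 x 10 uF back-to-back -> 5 uF.
def series_capacitance(c1, c2):
    return c1 * c2 / (c1 + c2)

print(series_capacitance(10e-6, 10e-6))  # about 5e-06 F, ie 5 uF
```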
|
https://api.stackexchange.com
|
works ok if ratings are not exceeded. ' kind of like " you can't make a bjt from two diodes " ' the reason for the comparison is noted but it is not a valid one. each half capacitor is still subject to the same rules and demands as when standing alone. " it is a process that a tinkerer cannot do " a tinkerer can - it is entirely legitimate. so is a non - polar ( np ) electrolytic cap electrically identical to two electrolytic caps in reverse series, or not? it could be, but the manufacturers usually make a manufacturing change so that there are two anode foils ; the result is the same. does it not survive the same voltages? the voltage rating is that of a single cap. what happens to the reverse - biased cap when a large voltage is placed across the combination? under normal operation there is no reverse biased cap. each cap handles a full cycle of ac while effectively seeing half a cycle. see my explanation above. are there practical limitations other than physical size? no obvious limitation that i can think of. does it matter which polarity is on the outside? no. draw a picture of what each cap sees in isolation without reference to what is " outside " it. now change their order in the circuit. what they see is identical. i don't see what the difference is, but a lot of people seem to think there is one. you are correct. functionally from a " black box " point of view they are the same. manufacturer's example : in the document " application guide, aluminum electrolytic capacitors " by cornell dubilier, a competent and respected capacitor manufacturer, it says ( on pages 2.183 & 2.184 ) : if two, same - value, aluminum electrolytic capacitors are connected in series, back - to - back with the positive terminals or the negative terminals connected, the resulting single capacitor is a non - polar capacitor with half the capacitance. the two capacitors rectify the applied voltage and act as if they had been bypassed by diodes.
when voltage is applied, the correct - polarity capacitor gets the full voltage. in non - polar aluminum electrolytic capacitors and motor - start aluminum electrolytic capacitors a second anode foil substitutes for the cathode foil to achieve a non - polar capacitor in a single case. of relevance to understanding the overall action is this comment from page 2. 183.
|
https://api.stackexchange.com
|
while it may appear that the capacitance is between the two foils, actually the capacitance is between the anode foil and the electrolyte. the positive plate is the anode foil ; the dielectric is the insulating aluminum oxide on the anode foil ; the true negative plate is the conductive, liquid electrolyte, and the cathode foil merely connects to the electrolyte. this construction delivers colossal capacitance because etching the foils can increase surface area more than 100 times and the aluminum - oxide dielectric is less than a micrometer thick. thus the resulting capacitor has very large plate area and the plates are awfully close together. added : i intuitively feel as olin does that it should be necessary to provide a means of maintaining correct polarity. in practice it seems that the capacitors do a good job of accommodating the startup " boundary condition ". cornell dubilier's " acts like a diode " needs better understanding. mechanism : i think the following describes how the system works. as i described above, once one capacitor is fully charged at one extreme of the ac waveform and the other fully discharged, the system will operate correctly, with charge being passed into the outside " plate " of one cap, across from the inside plate of that cap to the other cap and " out the other end ". ie a body of charge transfers to and fro between the two capacitors and allows net charge flow to and from through the dual cap. no problem so far. a correctly biased capacitor has very low leakage. a reverse biased capacitor has higher leakage and possibly much higher. at startup one cap is reverse biased on each half cycle and leakage current flows. the charge flow is such as to drive the capacitors towards the properly balanced condition. this is the " diode action " referred to - not formal rectification per se but leakage under incorrect operating bias. after a number of cycles balance will be achieved.
the " leakier " the cap is in the reverse direction the quicker balance will be achieved. any imperfections or inequalities will be compensated for by this self adjusting mechanism. very neat.
|
https://api.stackexchange.com
|
most of the other answers focus on the example of unbalanced classes. yes, this is important. however, i argue that accuracy is problematic even with balanced classes. frank harrell has written about this on his blog : classification vs. prediction and damage caused by classification accuracy and other discontinuous improper accuracy scoring rules. essentially, his argument is that the statistical component of your exercise ends when you output a probability for each class of your new sample. mapping these predicted probabilities $ ( \ hat { p }, 1 - \ hat { p } ) $ to a 0 - 1 classification, by choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. it is part of the decision component. and here, you need the probabilistic output of your model - but also considerations like : what are the consequences of deciding to treat a new observation as class 1 vs. 0? do i then send out a cheap marketing mail to all 1s? or do i apply an invasive cancer treatment with big side effects? what are the consequences of treating a " true " 0 as 1, and vice versa? will i tick off a customer? subject someone to unnecessary medical treatment? are my " classes " truly discrete? or is there actually a continuum ( e. g., blood pressure ), where clinical thresholds are in reality just cognitive shortcuts? if so, how far beyond a threshold is the case i'm " classifying " right now? or does a low - but - positive probability to be class 1 actually mean " get more data ", " run another test "? depending on the consequences of your decision, you will use a different threshold to make the decision. if the action is invasive surgery, you will require a much higher probability for your classification of the patient as suffering from something than if the action is to recommend two aspirin. or you might even have three different decisions although there are only two classes ( sick vs. healthy ) : " go home and don't worry " vs. 
" run another test because the one we have is inconclusive " vs. " operate immediately ". the correct way of assessing predicted probabilities $ ( \ hat { p }, 1 - \ hat { p } ) $ is not to compare them to a threshold, map them to $ ( 0, 1 ) $ based on the threshold and then assess the transformed $ ( 0, 1 ) $ classification. instead, one should
|
https://api.stackexchange.com
|
use proper scoring - rules. these are loss functions that map predicted probabilities and corresponding observed outcomes to loss values, which are minimized in expectation by the true probabilities $ ( p, 1 - p ) $. the idea is that we take the average over the scoring rule evaluated on multiple ( best : many ) observed outcomes and the corresponding predicted class membership probabilities, as an estimate of the expectation of the scoring rule. note that " proper " here has a precisely defined meaning - there are improper scoring rules as well as proper scoring rules and finally strictly proper scoring rules. scoring rules as such are loss functions of predictive densities and outcomes. proper scoring rules are scoring rules that are minimized in expectation if the predictive density is the true density. strictly proper scoring rules are scoring rules that are only minimized in expectation if the predictive density is the true density. as frank harrell notes, accuracy is an improper scoring rule. ( more precisely, accuracy is not even a scoring rule at all : see my answer to is accuracy an improper scoring rule in a binary classification setting? ) this can be seen, e. g., if we have no predictors at all and just a flip of an unfair coin with probabilities $ ( 0. 6, 0. 4 ) $. accuracy is maximized if we classify everything as the first class and completely ignore the 40 % probability that any outcome might be in the second class. ( here we see that accuracy is problematic even for balanced classes. ) proper scoring - rules will prefer a $ ( 0. 6, 0. 4 ) $ prediction to the $ ( 1, 0 ) $ one in expectation. in particular, accuracy is discontinuous in the threshold : moving the threshold a tiny little bit may make one ( or multiple ) predictions change classes and change the entire accuracy by a discrete amount. this makes little sense. 
more information can be found at frank's two blog posts linked to above, as well as in chapter 10 of frank harrell's regression modeling strategies. ( this is shamelessly cribbed from an earlier answer of mine. ) edit. my answer to example when using accuracy as an outcome measure will lead to a wrong conclusion gives a hopefully illustrative example where maximizing accuracy can lead to wrong decisions even for balanced classes.
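the unfair-coin example above can be computed directly (my own numeric restatement): under true probabilities (0.6, 0.4), the honest prediction 0.6 beats the degenerate "everything is class 1" prediction in expected brier score, even though both yield the same accuracy at a 0.5 threshold.

```python
# expected brier score of a predicted probability p_hat for class 1,
# when the true class-1 probability is p_true:
# E[(outcome - p_hat)**2] = p_true*(1-p_hat)**2 + (1-p_true)*p_hat**2
def expected_brier(p_true, p_hat):
    return p_true * (1 - p_hat) ** 2 + (1 - p_true) * p_hat ** 2

honest = expected_brier(0.6, 0.6)   # about 0.24
hard = expected_brier(0.6, 1.0)     # 0.4 -- strictly worse
print(honest, hard)
```

the brier score is strictly proper, so it is minimized in expectation exactly at the true probability 0.6, which is what the output shows.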
|
https://api.stackexchange.com
|
these are not very strict terms and they are highly related. however : loss function is usually a function defined on a data point, prediction and label, and measures the penalty. for example : square loss : $ l ( f ( x _ i | \ theta ), y _ i ) = \ left ( f ( x _ i | \ theta ) - y _ i \ right ) ^ 2 $, used in linear regression hinge loss : $ l ( f ( x _ i | \ theta ), y _ i ) = \ max ( 0, 1 - f ( x _ i | \ theta ) y _ i ) $, used in svm 0 / 1 loss : $ l ( f ( x _ i | \ theta ), y _ i ) = 1 \ iff f ( x _ i | \ theta ) \ neq y _ i $, used in theoretical analysis and in the definition of accuracy cost function is usually more general. it might be a sum of loss functions over your training set plus some model complexity penalty ( regularization ). for example : mean squared error : $ mse ( \ theta ) = \ frac { 1 } { n } \ sum _ { i = 1 } ^ n \ left ( f ( x _ i | \ theta ) - y _ i \ right ) ^ 2 $ svm cost function : $ svm ( \ theta ) = \ | \ theta \ | ^ 2 + c \ sum _ { i = 1 } ^ n \ xi _ i $ ( there are additional constraints connecting $ \ xi _ i $ with $ c $ and with the training set ) objective function is the most general term for any function that you optimize during training. for example, the probability of generating the training set in the maximum likelihood approach is a well defined objective function, but it is not a loss function nor a cost function ( however you could define an equivalent cost function ). for example : mle is a type of objective function ( which you maximize ) divergence between classes can be an objective function but it is hardly a cost function, unless you define something artificial, like 1 - divergence, and name it a cost long story short, i would say that : a loss function is a part of a cost function which is a type of an objective function. all that being said, these terms are far from strict, and depending on context, research group, background, they can shift and be used with a different meaning. with the main ( only? ) common thing being " loss
|
https://api.stackexchange.com
|
" and " cost " functions being something that want wants to minimise, and objective function being something one wants to optimise ( which can be both maximisation or minimisation ).
|
https://api.stackexchange.com
|
the fact that the result is complex is to be expected. i want to point out a couple things : you are applying a brick - wall frequency - domain filter to the data, attempting to zero out all fft outputs that correspond to a frequency greater than 0. 005 hz, then inverse - transforming to get a time - domain signal again. in order for the result to be real, then the input to the inverse fft must be conjugate symmetric. this means that for a length - $ n $ fft, $ $ x [ k ] = x ^ * [ n - k ], k = 1, 2, \ ldots, \ frac { n } { 2 } - 1 \ ; \ ; \ ; \ ; \ ; \ ; \ ; ( n \ ; \ ; even ) $ $ $ $ x [ k ] = x ^ * [ n - k ], k = 1, 2, \ ldots, \ lfloor \ frac { n } { 2 } \ rfloor \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; ( n \ ; \ ; odd ) $ $ note that for $ n $ even, $ x [ 0 ] $ and $ x [ \ frac { n } { 2 } ] $ are not equal in general, but they are both real. for odd $ n $, $ x [ 0 ] $ must be real. i see that you attempted to do something like this in your code above, but it is not quite correct. if you enforce the above condition on the signal that you pass to the inverse fft, then you should get a real signal out. my second point is more of a philosophical one : what you're doing will work, in that it will suppress the frequency - domain content that you don't want. however, this is not typically the way a lowpass filter would be implemented in practice. as i mentioned before, what you're doing is essentially applying a filter that has a brick - wall ( i. e. perfectly rectangular ) magnitude response. the impulse response of such a filter has a $ sinc ( x ) $ shape. since multiplication in the frequency domain is equivalent to ( in the case of using the dft, circular ) convolution in the time domain, this operation is equivalent to convolving the time domain signal with a $ sinc $ function. why is this a problem? recall what the $ sinc $ function looks like
|
https://api.stackexchange.com
|
in the time domain ( below image shamelessly borrowed from wikipedia ) : the $ sinc $ function has very broad support in the time domain ; it decays very slowly as you move in time away from its main lobe. for many applications, this is not a desirable property ; when you convolve a signal with a $ sinc $, the effects of the slowly - decaying sidelobes will often be apparent in the time - domain form of the filtered output signal. this sort of effect is often referred to as ringing. if you know what you're doing, there are some instances where this type of filtering might be appropriate, but in the general case, it's not what you want. there are more practical means of applying lowpass filters, both in the time and frequency domains. finite impulse response and infinite impulse response filters can be applied directly using their difference equation representation. or, if your filter has a sufficiently - long impulse response, you can often obtain performance benefits using fast convolution techniques based on the fft ( applying the filter by multiplying in the frequency domain instead of convolution in the time domain ), like the overlap - save and overlap - add methods.
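the conjugate-symmetry bookkeeping from the first point can be avoided entirely by working with the real-input fft (a sketch of my own; the 0.005 hz cutoff is from the question, the sample rate and test signal are invented):

```python
# frequency-domain lowpass that stays real by construction: rfft/irfft
# only store the non-negative-frequency bins, so zeroing bins there keeps
# the implied full spectrum conjugate-symmetric automatically.
import numpy as np

fs = 1.0                          # assumed sample rate, Hz
n = 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 0.003 * t) + np.sin(2 * np.pi * 0.05 * t)

spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1 / fs)
spec[freqs > 0.005] = 0           # brick-wall cutoff at 0.005 Hz
y = np.fft.irfft(spec, n)         # real output, no symmetry fixing needed

print(np.isrealobj(y))  # -> True
```

note that this is still the brick-wall filter the answer warns about: the output is real, but the sinc-shaped ringing is unchanged.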
|
https://api.stackexchange.com
|
when the plug starts to slip out of the jack, very often it's the ground contact ( sleeve ) that breaks its connection first, leaving the two " hot " leads ( left and right, tip and ring ) still connected. with the ground open like this, both earpieces still get a signal, but now it's the " difference " signal between the left and right channels ; any signal that is in - phase in both channels cancels out. recording engineers tend to place the lead vocal signal right in the middle of the stereo image, so that's just one example of an in - phase signal that disappears when you're listening to the difference signal.
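a tiny numeric model of the broken-ground situation (my own illustration; the "vocal" and "guitar" signals are invented stand-ins): with the sleeve open, each earpiece sees left minus right, so anything panned dead center cancels.

```python
# with the ground contact open, the earpieces are driven by the
# difference signal: in-phase (center-panned) content cancels out.
import numpy as np

t = np.linspace(0, 1, 1000)
vocal = np.sin(2 * np.pi * 5 * t)    # mixed identically into both channels
guitar = np.sin(2 * np.pi * 9 * t)   # panned hard left

left = vocal + guitar
right = vocal
difference = left - right            # what you hear with the sleeve open

print(np.allclose(difference, guitar))  # -> True: the vocal is gone
```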
|
https://api.stackexchange.com
|
your wire is not quite round ( almost no wire is ), and consequently it has a different vibration frequency along its principal axes¹. you are exciting a mixture of the two modes of oscillation by displacing the wire along an axis that is not aligned with either of the principal axes. the subsequent motion, when analyzed along the axis of initial excitation, is exactly what you are showing. the first signal you show - which seems to " die " then come back to life, is exactly what you expect to see when you have two oscillations of slightly different frequency superposed ; in fact, from the time to the first minimum we can estimate the approximate difference in frequency : it takes 19 oscillations to reach a minimum, and since the two waves started out in phase, that means they will be in phase again after about 38 oscillations, for a 2.5 % difference in frequency. update here is the output of my little simulation. it took me a bit of time to tweak things, but with frequencies of 27 hz and 27.7 hz respectively and after adjusting the angle of excitation a little bit, and adding significant damping i was able to generate the following plots : which looks a lot like the output of your tracker. your wire is describing a lissajous figure. very cool experiment - well done capturing so much detail! here is an animation that i made, using a frequency difference of 0.5 hz and a small amount of damping, and that shows how the rotation changes from clockwise to counterclockwise : for your reference, here is the python code i used to generate the first pair of curves. not the prettiest code... i scale things twice. you can probably figure out how to reduce the number of variables needed to generate the same curve - in the end it's a linear superposition of two oscillations, observed at a certain angle to their principal axes. import numpy as np import matplotlib.pyplot as plt from math import pi, sin, cos f1 = 27.7 f2 = 27 theta = 25 * pi / 180.
# different amplitudes of excitation a1 = 2.0 a2 = 1.0 t = np.linspace(0, 1, 400) # damping factor k = 1.6 # raw oscillation along principal axes : a1 = a1 * np.cos(2 * pi * f1 * t
|
https://api.stackexchange.com
|
) * np. exp ( - k * t ) a2 = a2 * np. cos ( 2 * pi * f2 * t ) * np. exp ( - k * t ) # rotate the axes of detection y1 = cos ( theta ) * a1 - sin ( theta ) * a2 y2 = sin ( theta ) * a1 + cos ( theta ) * a2 plt. figure ( ) plt. subplot ( 2, 1, 1 ) plt. plot ( t, - 20 * y2 ) # needed additional scale factor plt. xlabel ('t') plt. ylabel ('x') plt. subplot ( 2, 1, 2 ) plt. plot ( t, - 50 * y1 ) # and a second scale factor plt. xlabel ('t') plt. ylabel ('y') plt. show ( ) 1. the frequency of a rigid beam is proportional to $ \ sqrt { \ frac { ei } { a \ rho } } $, where $ e $ is young's modulus, $ i $ is the second moment of area, $ a $ is the cross sectional area and $ \ rho $ is the density ( see section 4. 2 of " the vibration of continuous structures " ). for an elliptical cross section with semimajor axis $ a $ and $ b $, the second moment of area is proportional to $ a ^ 3 b $ ( for vibration along axis $ a $ ). the ratio of resonant frequencies along the two directions will be $ \ sqrt { \ frac { a ^ 3b } { ab ^ 3 } } = \ frac { a } { b } $. from this it follows that a 30 gage wire ( 0. 254 mm ) with a 2. 5 % difference in resonant frequency needs the perpendicular measurements of diameter to be different by just 6 µm to give the effect you observed. given the cost of a thickness gage with 1 µm resolution, this is really a very ( cost ) effective way to determine whether a wire is truly round.
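as a quick sanity check on the 2.5% estimate (a small sketch of my own, using the fitted frequencies from the simulation above): the beat envelope first vanishes after half a beat period, which corresponds to roughly 19 carrier oscillations - the count read off the tracker data.

```python
f1, f2 = 27.0, 27.7            # the two principal-axis frequencies (hz)
beat_period = 1.0 / (f2 - f1)  # the envelope repeats every 1/delta_f seconds
t_first_min = beat_period / 2  # components are in antiphase after half a beat
oscillations = f1 * t_first_min
print(round(oscillations))     # 19
```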
|
https://api.stackexchange.com
|
added mid 2022 : a lightly edited version of a comment by @ littlewhole in 2022 the world is moving towards the far more robust and convenient usb - c connector. while there are still issues with usb - c ( including even mechanical incompatibilities ), things are slowly being addressed ( i. e. usb4 standard on the protocol side ) and i have only ever encountered one usb - c cable that wouldn't plug into a usb - c receptacle in my life. adoption of usb - c is definitely picking up the pace - not just in consumer electronics, but a motor controller for my school's robotics club has even adopted usb - c _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a major flaw : a major factor in abandoning mini - usb is that it was fatally flawed mechanically. most people who have used a mini - usb device which requires many insertions will have experienced poor reliability after a significant but not vast number of uses. the original mini - usb had an extremely poor insertion lifetime - about 1000 insertions total claimed. that's about once a day for 3 years. or 3 times a day for one year. or... for some people that order of reliability may be acceptable and the problems may go unnoticed. for others it becomes a major issue. a photographer using a flash card reader may expend that lifetime in well under a year. the original mini - usb connector had sides which sloped as at present but they were reasonably straight. ( much the same as the sides on a micro - a connector ). these are now so rare that i couldn't find an image using a web search. this image is diagrammatic only but shows the basic shape with sloped but straight sides. efforts were made to address the low lifetime issues while maintaining backwards compatibility and the current " kinked sides " design was produced. both plug and socket were changed but the sockets ( " receptacle " ) will still accept the old straight sided plugs. this is the shape that we are all so used to that the old shape is largely forgotten. 
unfortunately, this alteration " only sort of worked ". insertion lifetime was increased to about 5, 000 cycles. this sounds high enough in theory but in practice the design was still walking wounded with respect to mechanical reliability. 5, 000 cycles is a very poor rating in the connector industry. while most users will not achieve that many insertion cycles, the actual reliability in heavy use is poor.
the micro - usb connector was designed with these past failings in mind and has a rated lifetime of about 10, 000 insertion cycles. this despite its apparent frailty and what may appear to be a less robust design. [ this still seems woefully low to me. time will tell ]. latching unlike mini usb, micro usb has a passive latching mechanism which increases retention force but which allows removal without active user action ( apart from pulling ). [ latching seems liable to reduce the plug " working " in the receptacle and may increase reliability ]. size matters : the micro and mini usb connectors are of similar width. but the micro connector is much thinner ( smaller vertical dimension ). some product designs were not able to accommodate the height of the mini receptacle and the new thinner receptacle will encourage and allow thinner products. a mini - usb socket would have been too tall for thin design. by way of example - a number of motorola's " razr " cellphones used micro - usb receptacles, thus allowing the designs to be thinner than would have been possible with a mini - usb receptacle. specific razr models which use micro - usb include razr2 v8, razr2 v9, razr2 v9m, razr2 v9x, droid razr, razr maxx & razr ve20. wikipedia on usb - see " durability ". connector manufacturer molex's micro usb page they say : micro - usb technology was developed by the usb implementers forum, inc. ( usb - if ), an independent nonprofit group that advances usb technology. molex's micro - usb connectors offer advantages of smaller size and increased durability compared with the mini - usb. micro - usb connectors allow manufacturers to push the limits of thinner and lighter mobile devices with sleeker designs and greater portability. micro - usb replaces a majority of mini - usb plugs and receptacles currently in use. 
the specification of the micro-usb supports the current usb on-the-go (otg) supplement and provides total mobile interconnectivity by enabling portable devices to communicate directly with each other without the need for a host computer.... other key features of the product include high durability of over 10,000 insertion cycles, and a passive latching mechanism that provides higher extraction forces without sacrificing the usb's ease-of-use when synchronizing and charging portable devices.

all change: once all can change, all tend to. a significant driver towards a common usb connector is the new usb charging standard which is being adopted by all cellphone makers (or all who wish to survive). the standard relates primarily to the electrical requirements for universal charging and chargers, but a common mechanical connection system using the various micro-usb components is part of the standard. whereas in the past it only really mattered that your 'whizzygig' could plug into its supplied power supply, it is now required that any whizzygig's power supply will fit any other device. a common plug and socket system is a necessary minimum for this to happen. while adapters can be used, this is an undesirable approach. as usb charging becomes widely accepted not only for cellphones but for xxxpods, xxxpads, pdas and stuff in general, the drive for a common connector accelerates. the exception may be manufacturers whose names begin with a, who consider themselves large enough and safe enough to actively pursue interconnect incompatibility in their products. once a new standard is widely adopted and attains 'critical mass', the economies of scale tend to drive the market very rapidly to the new standard. it becomes increasingly less cost-effective to manufacture, stock and handle parts which have a diminishing market share and which are incompatible with new facilities. i may add some more references to this if it appears there is interest - or ask mr gargoyle. large list of cellphones that use the micro-usb receptacle.

____________

a few more images allowing comparisons of a range of aspects including thickness, area of panel, overall volume (all being important independently of the others to some for various reasons) and retention means.
large google image samples, each linked to a web page with more useful discussion, and a brief history note. they say (and, as bailey s also notes) on why the micro types offer better durability: this is accomplished by moving the leaf-spring from the pcb receptacle to the plug, so the most-stressed part is now on the cable side of the connection; the inexpensive cable bears most of the wear instead of the µusb device. maybe useful: usb connector guide — guide to usb cables; usb connections compared; what is micro usb vs mini usb.
|
https://api.stackexchange.com
|
good observation! gene coding for the lactase gene lct mammals have a gene ( called lct c / t - 13910 ) coding for the lactase enzyme, a protein able to digest lactose. lactose is a disaccharide sugar found in milk. expression of lct in mammals, the gene lct is normally expressed ( see gene expression ) only early in development, when the baby feeds on his / her mother's milk. some human lineages have evolved the ability to express lct all life long, allowing them to drink milk and digest lactose at any age. today, the inability to digest lactose at all ages in humans is called lactose intolerance. evolution of lactose tolerance in human three independent mutations tishkoff et al. 2007 found that the ability to express lct at an old age has evolved at least three times independently. indeed, they found three different snps ( stands for single nucleotide polymorphism ; it is a common type of mutation ), two of them having high prevalence in africa ( and people of african descent ) and one having high prevalence in europe ( and people of european descent ). the three snps are g / c - 14010, t / g - 13915 and c / g - 13907. pastoralist populations lactose tolerance is much more common in people descending from pastoralist populations than in people descending from non - pastoralist populations, suggesting a strong selection for lactose tolerance durham 1991. selective sweep on top of that, tishkoff et al. 2007 focusing on the locus 14010 ( one of the three snp's mentioned above ) showed that there is a clear selective sweep ( which is a signature of past and present selection ) around this locus. they estimated the age of the allele allowing lactose tolerance at this locus ( allele c is derived, the ancestral being g ; see nucleotide ) at around 3, 000 to 7, 000 years ( with a 95 % confidence interval ranging from 1, 200 to 23, 200 years ) and a selection coefficient of 0. 04 - 0. 097 ( with a 95 % confidence interval ranging from 0. 01 to 0. 15 ). 
i recommend reading tishkoff et al. 2007. it is a classic, is short and is relatively easy to read, even for someone with only basic knowledge in evolutionary biology. are humans the only animal that is able to drink milk as adults? i don't really know... but i
would think so, yes! drink vs digest thoroughly: as @anongoodnurse rightly said in his/her answer, "drink" and "digest thoroughly" are two different things. pets: many dog health websites (such as this one, for example) claim that there is also variance among dogs, where some dogs are lactose tolerant and others are lactose intolerant. i could not find any paper on the underlying genetics of lactose intolerance in dogs or other pets. it is not impossible that our pets have also been under selection to be able to digest lactose, as we humans could have given milk to them. it is also possible that pets do not actually produce any lactase at adult age, but rather that some pets are just able to deal with having indigestible lactose in their guts! but then again, "drink" and "digest thoroughly" are two different things. tits and robins in 20th century england: a funny and famous case is that of blue tits and robins in 20th century england. at that time, in england, the milkman would bring the milk in the morning and leave glass bottles with a simple aluminum cap in front of people's homes. at some point, blue tits and robins learnt that by pecking through the aluminum they could get access to the milk. see this (non-peer-reviewed) article that tells the story. somewhat related: there are already a number of good posts on milk digestion in humans on biology.se. consider having a look at: what inactivates pepsin in infants? and seriously, do humans produce rennin? are there any non-mammalian species known that lactate? can an adult without genetic lactase persistence still develop a tolerance for dairy foods?
|
https://api.stackexchange.com
|
from the manual of velvet: it must be an odd number, to avoid palindromes. if you put in an even number, velvet will just decrement it and proceed. palindromes in biology are defined as reverse-complementary sequences. the problem with palindromes is explained in this review: palindromes induce paths that fold back on themselves. at least one assembler avoids these elegantly; velvet requires k, the length of a k-mer, to be odd. an odd-size k-mer cannot match its reverse complement. it is possible to construct a graph with palindromes, but then the interpretation will be harder. allowing only graphs of odd k-mers is simply an elegant way to avoid writing code for the interpretation of a more complicated graph.
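a tiny python sketch (my own, not from the velvet docs) makes the parity argument concrete: an even-length dna k-mer can equal its own reverse complement, but an odd-length one never can, because the middle base would have to be its own complement.

```python
from itertools import product

def revcomp(kmer):
    # reverse complement of a dna string
    comp = {'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}
    return ''.join(comp[b] for b in reversed(kmer))

# an even k-mer can be a palindrome in the biological sense:
print(revcomp('acgt'))  # acgt -- its own reverse complement

# no odd k-mer is its own reverse complement:
assert all(''.join(k) != revcomp(''.join(k)) for k in product('acgt', repeat=3))
```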
|
https://api.stackexchange.com
|
an epic question. unfortunately, the short answer is: no, there are no widely used solutions. for several thousand samples, bcf2, the binary representation of vcf, should work well. i don't see the need for new tools at this scale. for a larger sample size, the exac people are using spark-based hail. it keeps all per-sample annotations (like gl, gq and dp) in addition to genotypes. hail is at least something heavily used in practice, although mostly by a few groups so far. a simpler problem is to store genotypes only. this is sufficient for the majority of end users, and there are better approaches to store and query genotypes. gqt, developed by the gemini team, enables fast query of samples. it allows you to quickly pull samples under certain genotype configurations. as i remember, gqt is orders of magnitude faster than the google genomics api at doing pca. another tool is bgt. it produces a much smaller file and provides fast and convenient queries over sites. its paper talks about ~32k whole-genome samples. i am in the camp that believes specialized binary formats like gqt and bgt are faster than solutions built on top of generic databases. i would encourage you to have a look if you only want to query genotypes. intel's genomicdb approaches the problem from a different angle. it does not actually keep a "squared" multi-sample vcf internally; it instead keeps per-sample genotypes / annotations and generates a merged vcf on the fly (this is my understanding, which could be wrong). i don't have first-hand experience with genomicdb, but i think something along this line should be the ultimate solution in the era of 1m samples. i know gatk4 is using it at some step. as to the others in your list, gemini might not scale that well, i guess; that is partly why they work on gqt. last time i checked, bigquery did not query individual genotypes; it only queries over site statistics. the google genomics apis access individual genotypes, but i doubt they can be performant.
adam is worth trying. i have not tried, though.
|
https://api.stackexchange.com
|
crystallin proteins are found in the eye lens (where their main job is probably to define the refractive index of the medium); they are commonly considered to be non-regenerated. so, your crystallins are as old as you are! because of this absence of regeneration, they accumulate damage over time, including proteolysis, cross-linking etc., which is one of the main reasons why visual acuity decays after a certain age: that is where cataracts come from. the cloudy lens is the result of years of degradation events in a limited pool of non-renewed proteins. edit: a few references: this article shows that one can use 14c radiodating to determine the date of synthesis of lens proteins, because of their exceptionally low turnover: lynnerup, "radiocarbon dating of the human eye lens crystallines reveal proteins without carbon turnover throughout life", plos one (2008) 3:e1529. this excellent review suggested by iayork (thanks!) lists long-lived proteins (including crystallins) and how they were identified as such: toyama & hetzer, "protein homeostasis: live long, won't prosper", nat rev mol cell biol. (2013) 14:55-61.
|
https://api.stackexchange.com
|
i understand that covalent bonding is an equilibrium state between attractive and repulsive forces, but which one of the fundamental forces actually causes atoms to attract each other? the role of pauli exclusion in bonding it is an unfortunate accident of history that because chemistry has a very convenient and predictive set of approximations for understanding bonding, some of the details of why those bonds exist can become a bit hard to discern. it's not that they aren't there - - they most emphatically are! - - but you often have to dig a bit deeper to find them. they are found in physics, in particular in the concept of pauli exclusion. chemistry as avoiding black holes let's take your attraction question first. what causes that? well, in one sense that question is easy : it's electrostatic attraction, the interplay of pulls between positively charged nuclei and negatively charged electrons. but even in saying that, something is wrong. here's the question that points that out : if nothing else was involved except electrostatic attraction, what would be the most stable configuration of two or more atoms with a mix of positive and negative charges? the answer to that is a bit surprising. if the charges are balanced, the only stable, non - decaying answer for conventional ( classical ) particles is always the same : " a very, very small black hole. " of course, you could modify that a bit by assuming that the strong force is for some reason stable, in which case the answer becomes " a bigger atomic nucleus, " one with no electrons around it. or maybe atoms as get fuzzy? at this point, some of you reading this should be thinking loudly " now wait a minute! electrons don't behave like point particles in atoms, because quantum uncertainty makes them'fuzz out'as they get close to the nucleus. " and that is exactly correct - - i'm fond of quoting that point myself in other contexts! 
however, the issue here is a bit different, since even "fuzzed out" electrons provide a poor barrier for keeping other electrons away by electrostatic repulsion alone, precisely because their charge is so diffuse. the case of electrons that lack pauli exclusion is nicely captured by richard feynman in his lectures on physics, in volume iii, chapter 4, page 4-13, figure 4-11 at the top of the page. the outcome feynman describes is pretty boring, since atoms would remain simple, smoothly spherical, and about the same size as more and more protons and electrons get added in. while feynman does not get into how such atoms would interact, there's a problem there too. because the electron charges would be so diffuse in comparison to the nuclei, the atoms would pose no real barrier to each other until the nuclei themselves begin to repel each other. the result would be a very dense material that would have more in common with neutronium than with conventional matter. for now, i'll just forge ahead with a more classical description, and capture the idea of the electron cloud simply by asserting that each electron is selfish and likes to capture as much "address space" (see below) as possible. charge-only is boring! so, while you can finagle with funny configurations of charges that might prevent the inevitable for a while by pitting positive against positive and negative against negative, positively charged nuclei and negatively charged electrons with nothing much else in play will always wind up in the same bad spot: either as very puny black holes or as tiny boring atoms that lack anything resembling chemistry. a universe full of nothing but various sizes of black holes or simple homogeneous neutronium is not very interesting! preventing the collapse: so, to understand atomic electrostatic attraction properly, you must start with the inverse issue: what in the world is keeping these things from simply collapsing down to zero size - - that is, where is the repulsion coming from? and that is your next question: also, am i right to think that "repulsion occurs when atoms are too close together" comes from electrostatic interaction? no; that is simply wrong. in the absence of "something else," the charges will wiggle about and radiate until any temporary barrier posed by identical charges simply becomes irrelevant... meaning that once again you will wind up with those puny black holes.
what keeps atoms, bonds, and molecules stable is always something else entirely, a " force " that is not traditionally thought of as being a force at all, even though it is unbelievably powerful and can prevent even two nearby opposite electrical charges from merging. the electrostatic force is enormously powerful at the tiny separation distances within atoms, so anything that can stop charged particles from merging is impressive! the " repulsive force that is not a force " is the pauli exclusion i mentioned earlier. a simple way to think of pauli exclusion is that identical material particles ( electrons, protons, and neutrons in particular ) all insist on having completely unique " addresses " to tell them
apart from other particles of the same type. for an electron, this address includes: where the electron is located in space, how fast and in what direction it is moving (momentum), and one last item called spin, which can only have one of two values that are usually called "up" or "down." you can force such material particles (called fermions) into nearby addresses, but with the exception of that up-down spin part of the address, doing so always increases the energy of at least one of the electrons. that required increase in energy, in a nutshell, is why material objects push back when you try to squeeze them. squeezing them requires minutely reducing the available space of many of the electrons in the object, and those electrons respond by capturing the energy of the squeeze and using it to push right back at you. now, take that thought and bring it back to the question of where repulsion comes from when two atoms bond at a certain distance, but no closer. it is the same mechanism! that is, two atoms can "touch" (move so close, but no closer) only because they both have a lot of electrons that require separate space, velocity, and spin addresses. push them together and they start hissing like cats from two households who have suddenly been forced to share the same house. (if you own multiple cats, you'll know exactly what i mean by that.) so, what happens is that the overall set of plus-and-minus forces of the two atoms is trying really hard to crush all of the charges down into a single very tiny black hole - - not into some stable state! it is only the hissing and spitting of the overcrowded and very unhappy electrons that keeps this event from happening. orbitals as juggling acts: but just how does that work? it's sort of a juggling act, frankly.
electrons are allowed to " sort of " occupy many different spots, speeds, and spins ( mnemonic $ s ^ 3 $, and no, that is not standard, i'm just using it for convenience in this answer only ) at the same time, due to quantum uncertainty. however, it's not necessary to get into that here beyond recognizing that every electron tries to occupy as much of its local $ s ^ 3 $ address space as possible. juggling between spots and speeds requires energy. so, since only so much energy is available, this is the part of the juggling act that gives atoms size and shape.
when all the jockeying around wraps up, the lowest energy situations keep the electrons stationed in various ways around the nucleus, not quite touching each other. we call those special solutions to the crowding problem orbitals, and they are very convenient for understanding and estimating how atoms and molecules will combine. orbitals as specialized solutions however, it's still a good idea to keep in mind that orbitals are not exactly fundamental concepts, but rather outcomes of the much deeper interplay of pauli exclusion with the unique masses, charges, and configurations of nuclei and electrons. so, if you toss in some weird electron - like particle such as a muon or positron, standard orbital models have to be modified significantly, and applied only with great care. standard orbitals can also get pretty weird just from having unusual geometries of fully conventional atomic nuclei, with the unusual dual hydrogen bonding found in boron hydrides such as diborane probably being the best example. such bonding is odd if viewed in terms of conventional hydrogen bonds, but less so if viewed simply as the best possible " electron juggle " for these compact cases. " jake! the bond! " now on to the part that i find delightful, something that underlies the whole concept of chemical bonding. do you recall that it takes energy to squeeze electrons together in terms of the main two parts of their " addresses, " the spots ( locations ) and speeds ( momenta )? i also mentioned that spin is different in this way : the only energy cost for adding two electrons with different spin addresses is that of conventional electrostatic repulsion. that is, there is no " forcing them closer " pauli exclusion cost as you get for locations and velocities. now you might think, " but electrostatic repulsion is huge! ", and you would be exactly correct. 
however, compared to the pauli exclusion " non - force force " cost, the energy cost of this electrostatic repulsion is actually quite small - - so small that it can usually be ignored for small atoms. so when i say that pauli exclusion is powerful, i mean it, since it even makes the enormous repulsion of two electrons stuck inside the same tiny sector of a single atom look so insignificant that you can usually ignore its impact! but that's secondary because the real point is this : when two atoms approach each other closely, the electrons start fighting fierce energy - escalation battles that keep both atoms from collapsing all the way down into a black hole.
but there is one exception to that energetic infighting: spin! for spin and spin alone, it becomes possible to get significantly closer to that final point-like collapse that all the charges want to undergo. spin thus becomes a major "hole" - - the only such major hole - - in the ferocious armor of repulsion produced by pauli exclusion. if you interpret atomic repulsion due to pauli exclusion as the norm, then spin-pairing two electrons becomes another example of a "force that is not a force," or a pseudo force. in this case, however, the result is a net attraction. that is, spin-pairing allows two atoms (or an atom and an electron) to approach each other more closely than pauli exclusion would otherwise permit. the result is a significant release of electrostatic attraction energy. that release of energy in turn creates a stable bond, since it cannot be broken unless that same energy is returned. sharing (and stealing) is cheaper: so, if two atoms (e.g. two hydrogen atoms) each have an outer orbital that contains only one electron, those two electrons can sort of look each other over and say, "you know, if you spin downwards and i spin upwards, we could both share this space for almost no energy cost at all!" and so they do, with a net release of energy, producing a covalent bond if the resulting spin-pair cancels out positive nuclear charges equally on both atoms. however, in some cases, the "attractive force" of spin-pairing is so overwhelmingly greater for one of the two atoms that it can pretty much fully overcome (!) the powerful electrostatic attraction of the other atom for its own electron. when that happens, the electron is simply ripped away from the other atom. we call that an ionic bond, and we act as if it's no big deal. but it is truly an amazing thing, one that is possible only because of the pseudo force of spin-pairing. bottom line: pseudo forces are important!
my apologies for having given such a long answer, but you happened to ask a question that cannot be answered correctly without adding in some version of pauli " repulsion " and spin - pair " attraction. " for that matter, the size of an atom, the shape of its orbitals, and its ability to form bonds similarly all depend on pseudo forces.
|
https://api.stackexchange.com
|
intel support for ieee float16 storage format: intel has supported ieee half precision as a storage type in processors since ivy bridge (2013). storage type means you get a memory / cache capacity / bandwidth advantage, but the compute is done in single precision after converting to and from the ieee half-precision format. intel support for bfloat16: intel has announced support for bf16 in cooper lake and sapphire rapids. (the june 2020 update 319433-040 describes amx bf16.) i work for intel. i'm citing official sources and will not comment on rumors etc. it is good to be curious about the relative merits of ieee fp16 vs bf16; there is a lot of analysis of this topic. non-intel hardware support: the following is information on other processors; please verify with the vendors as necessary. one comparison lists the following hardware support: amd - mi5, mi8, mi25; arm - neon vfp fp16 in v8.2-a; nvidia - pascal and volta. nvidia ampere has fp16 support as well.
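to make the fp16 vs bf16 trade-off concrete, here is a small stdlib-only sketch of my own: `to_bf16` emulates bfloat16 by truncating a float32 to its top 16 bits (which is how the format is defined), while python's struct `'e'` format gives ieee half precision. fp16 has more mantissa bits but a much narrower exponent range, so values that are unremarkable in float32 can overflow it.

```python
import struct

def to_bf16(x):
    # emulate bfloat16 by keeping only the top 16 bits of the float32 encoding
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return struct.unpack('>f', struct.pack('>I', bits & 0xffff0000))[0]

def to_fp16(x):
    # round-trip through ieee half precision ('e' format, python >= 3.6)
    return struct.unpack('>e', struct.pack('>e', x))[0]

print(to_fp16(1.0009765625))   # fp16 resolves 2**-10 steps near 1.0
print(to_bf16(1.0009765625))   # 1.0 -- bf16 has only ~3 decimal digits of precision
try:
    to_fp16(70000.0)           # beyond fp16's max finite value (~65504)
except OverflowError:
    print('overflows fp16')
print(to_bf16(70000.0))        # bf16 keeps float32's exponent range, so it survives
```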
|
https://api.stackexchange.com
|
logical and : use the linear constraints $ y _ 1 \ ge x _ 1 + x _ 2 - 1 $, $ y _ 1 \ le x _ 1 $, $ y _ 1 \ le x _ 2 $, $ 0 \ le y _ 1 \ le 1 $, where $ y _ 1 $ is constrained to be an integer. this enforces the desired relationship. see also logical or : use the linear constraints $ y _ 2 \ le x _ 1 + x _ 2 $, $ y _ 2 \ ge x _ 1 $, $ y _ 2 \ ge x _ 2 $, $ 0 \ le y _ 2 \ le 1 $, where $ y _ 2 $ is constrained to be an integer. logical not : use $ y _ 3 = 1 - x _ 1 $. logical implication : to express $ y _ 4 = ( x _ 1 \ rightarrow x _ 2 ) $ ( i. e., $ y _ 4 = \ neg x _ 1 \ lor x _ 2 $ ), we can adapt the construction for logical or. in particular, use the linear constraints $ y _ 4 \ le 1 - x _ 1 + x _ 2 $, $ y _ 4 \ ge 1 - x _ 1 $, $ y _ 4 \ ge x _ 2 $, $ 0 \ le y _ 4 \ le 1 $, where $ y _ 4 $ is constrained to be an integer. forced logical implication : to express that $ x _ 1 \ rightarrow x _ 2 $ must hold, simply use the linear constraint $ x _ 1 \ le x _ 2 $ ( assuming that $ x _ 1 $ and $ x _ 2 $ are already constrained to boolean values ). xor : to express $ y _ 5 = x _ 1 \ oplus x _ 2 $ ( the exclusive - or of $ x _ 1 $ and $ x _ 2 $ ), use linear inequalities $ y _ 5 \ le x _ 1 + x _ 2 $, $ y _ 5 \ ge x _ 1 - x _ 2 $, $ y _ 5 \ ge x _ 2 - x _ 1 $, $ y _ 5 \ le 2 - x _ 1 - x _ 2 $, $ 0 \ le y _ 5 \ le 1 $, where $ y _ 5 $ is constrained to be an integer. another helpful technique for handling complex boolean formulas is to convert them to cnf, then apply the rules above for converting
and, or, and not. and, as a bonus, one more technique that often helps when formulating problems that contain a mixture of zero - one ( boolean ) variables and integer variables : cast to boolean ( version 1 ) : suppose you have an integer variable $ x $, and you want to define $ y $ so that $ y = 1 $ if $ x \ ne 0 $ and $ y = 0 $ if $ x = 0 $. if you additionally know that $ 0 \ le x \ le u $, then you can use the linear inequalities $ 0 \ le y \ le 1 $, $ y \ le x $, $ x \ le uy $ ; however, this only works if you know an upper and lower bound on $ x $. alternatively, if you know that $ | x | \ le u $ ( that is, $ - u \ le x \ le u $ ) for some constant $ u $, then you can use the method described here. this is only applicable if you know an upper bound on $ | x | $. cast to boolean ( version 2 ) : let's consider the same goal, but now we don't know an upper bound on $ x $. however, assume we do know that $ x \ ge 0 $. here's how you might be able to express that constraint in a linear system. first, introduce a new integer variable $ t $. add inequalities $ 0 \ le y \ le 1 $, $ y \ le x $, $ t = x - y $. then, choose the objective function so that you minimize $ t $. this only works if you didn't already have an objective function. if you have $ n $ non - negative integer variables $ x _ 1, \ dots, x _ n $ and you want to cast all of them to booleans, so that $ y _ i = 1 $ if $ x _ i \ ge 1 $ and $ y _ i = 0 $ if $ x _ i = 0 $, then you can introduce $ n $ variables $ t _ 1, \ dots, t _ n $ with inequalities $ 0 \ le y _ i \ le 1 $, $ y _ i \ le x _ i $, $ t _ i = x _ i - y _ i $ and define the objective function to minimize $ t _ 1 + \ dots + t _ n $. again, this only
works if nothing else needs to use the objective function ( i. e., if, apart from the casts to boolean, you were planning to just check the feasibility of the resulting ilp, not to minimize / maximize some function of the variables ). for some excellent practice problems and worked examples, i recommend formulating integer linear programs : a rogues' gallery.
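the and / or / xor encodings above can be sanity-checked by brute force over all 0/1 assignments; a small python sketch (the function names are mine, not standard):

```python
from itertools import product

def and_ok(x1, x2, y):
    # linear constraints encoding y = x1 AND x2
    return y >= x1 + x2 - 1 and y <= x1 and y <= x2 and 0 <= y <= 1

def or_ok(x1, x2, y):
    # linear constraints encoding y = x1 OR x2
    return y <= x1 + x2 and y >= x1 and y >= x2 and 0 <= y <= 1

def xor_ok(x1, x2, y):
    # linear constraints encoding y = x1 XOR x2
    return (y <= x1 + x2 and y >= x1 - x2 and
            y >= x2 - x1 and y <= 2 - x1 - x2 and 0 <= y <= 1)

# for every 0/1 assignment of x1, x2, exactly one y satisfies the
# constraints, and it equals the boolean function's value
for x1, x2 in product((0, 1), repeat=2):
    assert [y for y in (0, 1) if and_ok(x1, x2, y)] == [x1 & x2]
    assert [y for y in (0, 1) if or_ok(x1, x2, y)] == [x1 | x2]
    assert [y for y in (0, 1) if xor_ok(x1, x2, y)] == [x1 ^ x2]
print("all three encodings match their truth tables")
```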
one approach that i have used in the past is to maintain a phase accumulator which is used as an index into a waveform lookup table. a phase delta value is added to the accumulator at each sample interval : phase_index += phase_delta. to change frequency you change the phase delta that is added to the phase accumulator at each sample, e. g. phase_delta = n * freq / fs, where : phase_delta is the number of lut samples to increment, freq is the desired output frequency, fs is the sample rate, and n is the size of the lut. this guarantees that the output waveform is continuous even if you change phase_delta dynamically, e. g. for frequency changes, fm, etc. for smoother changes in frequency ( portamento ) you can ramp the phase_delta value between its old value and new value over a suitable number of sample intervals rather than changing it instantaneously. note that phase_index and phase_delta both have an integer and a fractional component, i. e. they need to be floating point or fixed point. the integer part of phase_index ( modulo table size ) is used as an index into the waveform lut, and the fractional part may optionally be used for interpolation between adjacent lut values for higher quality output and / or smaller lut size.
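a minimal sketch of this scheme in python (the table size n = 1024, the sine waveform, and linear interpolation are my illustrative choices, not part of the original description):

```python
import math

N = 1024                                            # LUT size
LUT = [math.sin(2 * math.pi * i / N) for i in range(N)]

def oscillator(freq, fs, num_samples):
    """Generate a sine at `freq` Hz via a phase accumulator into the LUT."""
    phase_index = 0.0
    phase_delta = N * freq / fs      # LUT samples to advance per output sample
    out = []
    for _ in range(num_samples):
        i = int(phase_index) % N                  # integer part: LUT index
        frac = phase_index - int(phase_index)     # fractional part: interpolation
        a, b = LUT[i], LUT[(i + 1) % N]
        out.append(a + frac * (b - a))            # linear interpolation
        phase_index += phase_delta
    return out

# 440 Hz at 48 kHz; tracks math.sin closely thanks to the interpolation
samples = oscillator(440.0, 48000.0, 100)
for n, s in enumerate(samples):
    assert abs(s - math.sin(2 * math.pi * 440.0 * n / 48000.0)) < 1e-4
```

ramping phase_delta between two values over a few hundred samples, instead of switching it in one step, gives the portamento effect mentioned above.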
haha! the student probably has a more reasonable interpretation of the question. of course, cutting one thing into two pieces requires only one cut! cutting something into three pieces requires two cuts!

---------------------------------   0 cuts / 1 piece  / 0 minutes
----------------|----------------   1 cut  / 2 pieces / 10 minutes
----------|----------|-----------   2 cuts / 3 pieces / 20 minutes

this is a variation of the " fence post " problem : how many posts do you need to build a 100 foot long fence with 10 foot sections between the posts? answer : 11. you have to draw the problem to get it... see below, and count the posts!

|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
0     10    20    30    40    50    60    70    80    90    100
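the common off-by-one behind both puzzles, as a two-line sanity check (function names are mine):

```python
# each cut adds exactly one piece, so pieces = cuts + 1
def cuts_needed(pieces):
    return pieces - 1

# one post per section boundary, plus one post to start the fence
def posts_needed(length_ft, section_ft):
    return length_ft // section_ft + 1

assert cuts_needed(2) == 1     # 2 pieces: one cut, 10 minutes
assert cuts_needed(3) == 2     # 3 pieces: two cuts, 20 minutes, not 15
assert posts_needed(100, 10) == 11
```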
three sentence version : each layer can apply any function you want to the previous layer ( usually a linear transformation followed by a squashing nonlinearity ). the hidden layers'job is to transform the inputs into something that the output layer can use. the output layer transforms the hidden layer activations into whatever scale you wanted your output to be on. like you're 5 : if you want a computer to tell you if there's a bus in a picture, the computer might have an easier time if it had the right tools. so your bus detector might be made of a wheel detector ( to help tell you it's a vehicle ) and a box detector ( since the bus is shaped like a big box ) and a size detector ( to tell you it's too big to be a car ). these are the three elements of your hidden layer : they're not part of the raw image, they're tools you designed to help you identify busses. if all three of those detectors turn on ( or perhaps if they're especially active ), then there's a good chance you have a bus in front of you. neural nets are useful because there are good tools ( like backpropagation ) for building lots of detectors and putting them together. like you're an adult a feed - forward neural network applies a series of functions to the data. the exact functions will depend on the neural network you're using : most frequently, these functions each compute a linear transformation of the previous layer, followed by a squashing nonlinearity. sometimes the functions will do something else ( like computing logical functions in your examples, or averaging over adjacent pixels in an image ). so the roles of the different layers could depend on what functions are being computed, but i'll try to be very general. let's call the input vector $ x $, the hidden layer activations $ h $, and the output activation $ y $. you have some function $ f $ that maps from $ x $ to $ h $ and another function $ g $ that maps from $ h $ to $ y $. 
so the hidden layer's activation is $ f ( x ) $ and the output of the network is $ g ( f ( x ) ) $. why have two functions ( $ f $ and $ g $ ) instead of just one? if the level of complexity per function is limited, then $ g ( f ( x ) ) $ can compute things that $ f $ and $ g $ can't
do individually. an example with logical functions : for example, if we only allow $ f $ and $ g $ to be simple logical operators like " and ", " or ", and " nand ", then you can't compute other functions like " xor " with just one of them. on the other hand, we could compute " xor " if we were willing to layer these functions on top of each other : first layer functions : make sure that at least one element is " true " ( using or ) make sure that they're not all " true " ( using nand ) second layer function : make sure that both of the first - layer criteria are satisfied ( using and ) the network's output is just the result of this second function. the first layer transforms the inputs into something that the second layer can use so that the whole network can perform xor. an example with images : slide 61 from this talk - - also available here as a single image - - shows ( one way to visualize ) what the different hidden layers in a particular neural network are looking for. the first layer looks for short pieces of edges in the image : these are very easy to find from raw pixel data, but they're not very useful by themselves for telling you if you're looking at a face or a bus or an elephant. the next layer composes the edges : if the edges from the bottom hidden layer fit together in a certain way, then one of the eye - detectors in the middle of left - most column might turn on. it would be hard to make a single layer that was so good at finding something so specific from the raw pixels : eye detectors are much easier to build out of edge detectors than out of raw pixels. the next layer up composes the eye detectors and the nose detectors into faces. in other words, these will light up when the eye detectors and nose detectors from the previous layer turn on with the right patterns. these are very good at looking for particular kinds of faces : if one or more of them lights up, then your output layer should report that a face is present. 
this is useful because face detectors are easy to build out of eye detectors and nose detectors, but really hard to build out of pixel intensities. so each layer gets you farther and farther from the raw pixels and closer to your ultimate goal ( e. g. face detection or bus detection ). answers to assorted other questions " why are some layers in the input layer connected to the hidden layer
and some are not? " the disconnected nodes in the network are called " bias " nodes. there's a really nice explanation here. the short answer is that they're like intercept terms in regression. " where do the " eye detector " pictures in the image example come from? " i haven't double - checked the specific images i linked to, but in general, these visualizations show the set of pixels in the input layer that maximize the activity of the corresponding neuron. so if we think of the neuron as an eye detector, this is the image that the neuron considers to be most eye - like. folks usually find these pixel sets with an optimization ( hill - climbing ) procedure. in this paper by some google folks with one of the world's largest neural nets, they show a " face detector " neuron and a " cat detector " neuron this way, as well as a second way : they also show the actual images that activate the neuron most strongly ( figure 3, figure 16 ). the second approach is nice because it shows how flexible and nonlinear the network is - - these high - level " detectors " are sensitive to all these images, even though they don't particularly look similar at the pixel level. let me know if anything here is unclear or if you have any more questions.
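the layered xor construction described in the logical-functions example above, written out literally (a toy sketch with hard logic gates rather than learned weights):

```python
# hidden layer: OR and NAND of the inputs; output layer: AND of those
def or_gate(a, b):   return int(a or b)
def nand_gate(a, b): return int(not (a and b))
def and_gate(a, b):  return int(a and b)

def xor_net(x1, x2):
    h1 = or_gate(x1, x2)      # hidden unit 1: at least one input is true
    h2 = nand_gate(x1, x2)    # hidden unit 2: not both inputs are true
    return and_gate(h1, h2)   # output: both first-layer criteria satisfied

# no single one of these gates computes xor, but the two-layer network does
for x1 in (0, 1):
    for x2 in (0, 1):
        assert xor_net(x1, x2) == (x1 ^ x2)
```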
you are correct that the fwt is better thought of as a " cousin " of the stft, rather than the ft. in fact, the fwt is just a discrete sampling of the cwt ( continuous wavelet transform ), as the fft / dft is a discrete sampling of the fourier transform. this may seem like a subtle point, but it is relevant when choosing how you discretize the transform. the cwt and stft are both redundant analyses of a signal. in other words, you have more " coefficients " ( in the discrete case ) than you need to fully represent a signal. however, a fourier transform ( or say a wavelet transform using only one scale ) integrate a signal from - infinity to + infinity. this is not very useful on real world signals, so we truncate ( i. e. window ) the transforms to shorter lengths. windowing of a signal changes the transform - - you multiply by the window in time / space, so in transform space you have the convolution of the transform of the window with the transform of the signal. in the case of the stft, the windows are ( usually ) the same length ( non - zero extent ) at all time, and are frequency agnostic ( you window a 10 hz signal the same width as a 10 khz signal ). so you get the rectangular grid spectrogram like you have drawn. the cwt has this windowing built in by the fact that the wavelets get shorter ( in time or space ) as the scale decreases ( like higher frequency ). thus for higher frequencies, the effective window is shorter in duration, and you end up with a scaleogram that looks like what you have drawn for the fwt. how you discretize the cwt is somewhat up to you, though i think there are minimum samplings in both shift and scale to fully represent a signal. typically ( at least how i've used them ), for lowest scale ( highest frequency ), you will sample at all shift locations ( time / space ). as you get higher in scale ( lower in frequency ), you can sample less often. 
the rationale is that low frequencies don't change that rapidly ( think of a cymbal crash vs. a bass guitar - - the cymbal crash has very short transients, whereas the bass guitar would take longer to change ). in fact, at the shortest scale ( assuming you sample at all shift locations ), you have the full representation
of a signal ( you can reconstruct it using only the coefficients at this scale ). i'm not so sure about the rationale of sampling the scale. i've seen this suggested as logarithmic, with ( i think ) closer spacing between shorter scales. i think this is because the wavelets at longer scales have a broader fourier transform ( therefore they " pick up " more frequencies ). i admit i do not fully understand the fwt. my hunch is that it is actually the minimum sampling in shift / scale, and is not a redundant representation. but then i think you lose the ability to analyze ( and mess with ) a signal in short time without introducing unwanted artifacts. i will read more about it and, if i learn anything useful, report back. hopefully others will like to comment.
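to make the " minimum sampling " idea concrete, here is a tiny haar fwt sketch in pure python — my own toy, not the general fwt: each coarser scale keeps half as many coefficients, the total number of coefficients equals the signal length (no redundancy), and the signal is still perfectly recoverable.

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and differences (detail), each half the input length."""
    s = 1 / math.sqrt(2)
    half = len(signal) // 2
    approx = [s * (signal[2*i] + signal[2*i + 1]) for i in range(half)]
    detail = [s * (signal[2*i] - signal[2*i + 1]) for i in range(half)]
    return approx, detail

def haar_inverse_step(approx, detail):
    """Invert one level exactly (Haar is orthogonal: s**2 == 1/2)."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a1, d1 = haar_step(x)     # 4 detail coefficients at the finest scale
a2, d2 = haar_step(a1)    # 2 at the next scale
a3, d3 = haar_step(a2)    # 1 at the coarsest scale: dyadic sampling

# critically sampled: total coefficients == signal length, and invertible
assert len(d1) + len(d2) + len(d3) + len(a3) == len(x)
rec = haar_inverse_step(haar_inverse_step(haar_inverse_step(a3, d3), d2), d1)
assert all(abs(r - v) < 1e-12 for r, v in zip(rec, x))
```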
this answer was accepted, but look at candiedorange's answer, which has the right explanation. see this document, page 21 : the second way in which reflection can interfere with a controller's vision is light sources within the cab ( or direct sunlight that enters the cab ), which can cause disturbing reflections during either day or night operations. the effects of these reflections can be a loss of contrast of the image being viewed, a masking effect of a competing image, or glare. the two ways to mitigate these effects are to reduce the reflection coefficient or to design the atct cab to reduce or eliminate the probability that any light source ( artificial or natural, direct or indirect ) can produce a reflection in the pathway of a controller's view out of the cab windows. it controls glare. whenever the sun hits a window, it reflects off of it. if the windows are vertical, it's pretty hard to control where that glint could go. when the sun is near the horizon, it could even be seen by other ships, but at the very least it can blind workers on your own ship. angling them doesn't prevent this from happening entirely, but it does substantially limit the places on the ship which can be hit by this glint to a small region around the bridge itself. this requirement appears in specifications such as these regulations from the uk : 1. 9 windows shall meet the following requirements : 1. 9. 1 to help avoid reflections, the bridge front windows shall be inclined from the vertical plane top out, at an angle of not less than 10° and not more than 25°.... these same rules are also applied to air traffic control towers at airports.
according to the fgsea preprint : we ran reference gsea with default parameters. the permutation number was set to 1000, which means that for each input gene set 1000 independent samples were generated. the run took 100 seconds and resulted in 79 gene sets with gsea-adjusted fdr q-value of less than $ 10 ^ { - 2 } $. all significant gene sets were in a positive mode. first, to get a similar nominal p-value accuracy we ran the fgsea algorithm on 1000 permutations. this took 2 seconds, but resulted in no significant hits after multiple testing correction ( with fdr ≤ 1 % ). thus, fgsea and gsea are not identical. and again in the conclusion : consequently, gene sets can be ranked more precisely in the results and, which is even more important, standard multiple testing correction methods can be applied instead of approximate ones as in [ gsea ]. the author argues that fgsea is more accurate, so it can't be equivalent. if you are interested specifically in the enrichment score, that was addressed by the author in the preprint comments : values of enrichment scores and normalized enrichment scores are the same for both the broad version and fgsea. so that part seems to be the same.
i know that you are referring to the commonly ribosome - translated l - proteins, but i can't help but add that there are some peptides, called nonribosomal peptides, which are not dependent on the mrna and can incorporate d - amino acids. they have very important pharmaceutical properties. i recommend this ( 1 ) review article if you are interested in the subject. it is also worth mentioning that d - alanine and d - glutamine are incorporated into the peptidoglycane of bacteria. i read several papers ( 2, 3, 4 ) that discuss the problem of chirality but all of them conclude that there is no apparent reason why we live in the l - world. the l - amino acids should not have chemical advantages over the d - amino acids, as biocs already pointed out. reasons for the occurrence of the twenty coded protein amino acids ( 2 ) has an informative and interesting outline. this is the paragraph on the topic of chirality : this is related to the question of the origin of optical activity in living organisms on which there is a very large literature ( bonner 1972 ; norden 1978 ; brack and spack 1980 ). we do not propose to deal with this question here, except to note that arguments presented in this paper would apply to organisms constructed from either d or l amino acids. it might be possible that both l and d lives were present ( l / d - amino acids, l / d - enzymes recognizing l / d - substrates ), but, by random chance the l - world outcompeted the d - world. i also found the same question in a forum where one of the answers seems intriguing. i cannot comment on the reliability of the answer, but hopefully someone will have the expertise to do so : one, our galaxy has a chiral spin and a magnetic orientation, which causes cosmic dust particles to polarize starlight as circularly polarized in one direction only. 
this circularly polarized light degrades d enantiomers of amino acids more than l enantiomers, and this effect is clear when analyzing the amino acids found on comets and meteors. this explains why, at least in the milky way, l enantiomers are preferred. two, although gravity, electromagnetism, and the strong nuclear force are achiral, the weak nuclear force ( radioactive decay ) is chiral. during beta decay, the emitted electrons preferentially favor one kind of spin.
that's right, the parity of the universe is not conserved in nuclear decay. these chiral electrons once again preferentially degrade d amino acids vs. l amino acids. thus due to the chirality of sunlight and the chirality of nuclear radiation, l amino acids are the more stable enantiomers and therefore are favored for abiogenesis.

1. biosynthesis of nonribosomal peptides
2. reasons for the occurrence of the twenty coded protein amino acids
3. molecular basis for chiral selection in rna aminoacylation
4. how nature deals with stereoisomers

also cited above : the adaptation of diastereomeric s-prolyl dipeptide derivatives to the quantitative estimation of r- and s-leucine enantiomers ( bonner wa, 1972 ) ; the asymmetry of life ( norden b, 1978 ) ; beta-structures of polypeptides with l- and d-residues, part iii : experimental evidences for enrichment in enantiomer ( brack a, spach g, 1980 )
to understand the difference between kinetic and thermodynamic stability, you first have to understand potential energy surfaces, and how they are related to the state of a system. a potential energy surface is a representation of the potential energy of a system as a function of one or more of the other dimensions of a system. most commonly, the other dimensions are spatial. potential energy surfaces for chemical systems are usually very complex and hard to draw and visualize. fortunately, we can make life easier by starting with simple 2 - d models, and then extend that understanding to the generalized n - d case. so, we will start with the easiest type of potential energy to understand : gravitational potential energy. this is easy for us because we live on earth and are affected by it every day. we have developed an intuitive sense that things tend to move from higher places to lower places, if given the opportunity. for example, if i show you this picture : you can guess that the rock is eventually going to roll downhill, and eventually come to rest at the bottom of the valley. however, you also intuitively know that it is not going to move unless something moves it. in other words, it needs some kinetic energy to get going. i could make it even harder for the rock to get moving by changing the surface a little bit : now it is really obvious that the rock isn't going anywhere until it gains enough kinetic energy to overcome the little hill between the valley it is in, and the deeper valley to the right. we call the first valley a local minimum in the potential energy surface. in mathematical terms, this means that the first derivative of potential energy with respect to position is zero : $ $ \ frac { \ mathrm de } { \ mathrm dx } = 0 $ $ and the second derivative is positive : $ $ \ frac { \ mathrm d ^ 2e } { \ mathrm dx ^ 2 } \ gt 0 $ $ in other words, the slope is zero and the shape is concave up ( or convex ). 
the deeper valley to the right is the global minimum ( at least as far as we can tell ). it has the same mathematical properties, but the magnitude of the energy is lower – the valley is deeper. if you put all of this together, ( and can tolerate a little anthropomorphization ) you could say that the rock wants to get to the global minimum, but whether or not it can get there is determined by the amount of kinetic energy it
has. it needs at least enough kinetic energy to overcome all of the local maxima along the path between its current local minimum and the global minimum. if it doesn't have enough kinetic energy to move out of its current position, we say that it is kinetically stable or kinetically trapped. if it has reached the global minimum, we say it is thermodynamically stable. to apply this concept to chemical systems, we have to change the potential energy that we use to describe the system. gravitational potential energy is too weak to play much of a role at the molecular level. for large systems of reacting molecules, we instead look at one of several thermodynamic potential energies. the one we choose depends on which state variables are constant. for macroscopic chemical reactions, there is usually a constant number of particles, constant temperature, and either constant pressure or volume ( npt or nvt ), and so we use the gibbs free energy ( $ g $ for npt systems ) or the helmholtz free energy ( $ a $ for nvt systems ). each of these is a thermodynamic potential under the appropriate conditions, which means that it does the same thing that gravitational potential energy does : it allows us to predict where the system will go, if it gets the opportunity to do so. for kinetic energy, we don't have to change much - the main difference between the kinetic energy of a rock on a hill and the kinetic energy of a large collection of molecules is how we measure it. for single particles, we can measure it using the velocity, but for large groups of molecules, we have to measure it using temperature. in other words, increasing the temperature increases the kinetic energy of all molecules in a system. if we can describe the thermodynamic potential energy of a system in different states, we can figure out whether a transition between two states is thermodynamically favorable – we can calculate whether the potential energy would increase, decrease, or stay the same. 
if we look at all accessible states and decide that the one we are in has the lowest thermodynamic potential energy, then we are in a thermodynamically stable state. in your example using methane gas, we can look at gibbs free energy for the reactants and products and decide that the products are more thermodynamically stable than the reactants, and therefore methane gas in the presence of oxygen at 1 atm and 298 k is thermodynamically unstable. however, you would have to wait a very long time for methane to react without some outside help. the reason is that the transition states along the lowest-energy reaction path have a much higher thermodynamic potential energy than the average kinetic energy of the reactants. the reactants are kinetically trapped - or stable just because they are stuck in a local minimum. the minimum amount of energy that you would need to provide in the form of heat ( a lit match ) to overcome that barrier is called the activation energy. we can apply this to lots of other systems as well. one of the most famous and still extensively researched examples is glasses. glasses are interesting because they are examples of kinetic stability in physical phases. usually, phase changes are governed by thermodynamic stability. in glassy solids, the molecules would have a lower potential energy if they were arranged in a crystalline structure, but because they don't have the energy needed to get out of the local minimum, they are " stuck " with a liquid-like disordered structure, even though the phase is a solid.
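the rock-in-a-valley picture is easy to reproduce numerically; a toy sketch in python (the double-well potential here is my own invented example, not a real chemical system): it finds the local (kinetically trapped) minimum, the global minimum, and the barrier height between them — the activation energy.

```python
# toy potential with a shallow well (kinetic trap) and a deeper global well
def E(x):
    return (x**2 - 1) ** 2 - 0.3 * x    # wells near x = -1 and x = +1

xs = [i / 1000 for i in range(-2000, 2001)]
values = [E(x) for x in xs]

# grid minima: points lower than both neighbours (dE/dx = 0, d2E/dx2 > 0)
minima = [xs[i] for i in range(1, len(xs) - 1)
          if values[i] < values[i - 1] and values[i] < values[i + 1]]
local, global_ = minima[0], minima[-1]
assert E(global_) < E(local)            # the right-hand well is deeper

# activation energy: barrier top between the wells, relative to the trap
barrier = max(E(x) for x in xs if local <= x <= global_)
activation_energy = barrier - E(local)
assert activation_energy > 0            # the trapped state needs a kick
```

a system sitting at `local` is kinetically stable; only at `global_` is it thermodynamically stable.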
i doubt that we will ever know the exact integral that vexed feynman. here is something similar to what he describes. suppose $f(z)$ is an analytic function on the unit disk. then, by cauchy's integral formula, $$\oint_\gamma \frac{f(z)}{z}\,dz = 2\pi i f(0),$$ where $\gamma$ traces out the unit circle in a counterclockwise manner. let $z = e^{i\phi}$. then $\int_0^{2\pi} f(e^{i\phi})\,d\phi = 2\pi f(0)$. taking the real part of each side we find $$\int_0^{2\pi} \mathrm{Re}(f(e^{i\phi}))\,d\phi = 2\pi\,\mathrm{Re}(f(0)). \tag{1}$$ ( we could just as well take the imaginary part. ) clearly we can build some terrible integrals by choosing $f$ appropriately. example 1. let $f(z) = \exp\frac{2+z}{3+z}$. this is a mild choice compared to what could be done... in any case, $f$ is analytic on the disk. applying (1), and after some manipulation of the integrand, we find $$\int_0^{2\pi} \exp\left(\frac{7+5\cos\phi}{10+6\cos\phi}\right) \cos\left(\frac{\sin\phi}{10+6\cos\phi}\right) d\phi = 2\pi e^{2/3}.$$ example 2. let $f(z) = \exp\exp\frac{2+z}{3+z}$. then \begin{align*} \int_0^{2\pi} &\exp\left(\exp\left(\frac{7+5\cos\phi}{10+6\cos\phi}\right)\cos\left(\frac{\sin\phi}{10+6\cos\phi}\right)\right) \\ &\times \cos\left(\exp\left(\frac{7+5\cos\phi}{10+6\cos\phi}\right)\sin\left(\frac{\sin\phi}{10+6\cos\phi}\right)\right) d\phi = 2\pi e^{e^{2/3}}. \end{align*}
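example 1 is easy to check numerically; a quick sketch in python using the midpoint rule (for a smooth periodic integrand it converges faster than any power of the step size):

```python
import math

# real part of exp((2 + z)/(3 + z)) on the unit circle, z = exp(i * phi)
def integrand(phi):
    a = (7 + 5 * math.cos(phi)) / (10 + 6 * math.cos(phi))  # Re of (2+z)/(3+z)
    b = math.sin(phi) / (10 + 6 * math.cos(phi))            # Im of (2+z)/(3+z)
    return math.exp(a) * math.cos(b)

N = 10_000
h = 2 * math.pi / N
integral = sum(integrand((k + 0.5) * h) for k in range(N)) * h

# matches the closed form 2*pi*e^(2/3) to essentially machine precision
assert abs(integral - 2 * math.pi * math.exp(2 / 3)) < 1e-9
```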
cross-correlation and convolution are closely related. in short, to do convolution with ffts, you

1. zero-pad the input signals a and b ( add zeros to the end of each ; the zero padding should fill the vectors until they reach a size of at least n = size(a) + size(b) - 1 )
2. take the fft of both signals
3. multiply the results together ( element-wise multiplication )
4. do the inverse fft

conv(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros))

you need to do the zero-padding because the fft method is actually circular cross-correlation, meaning the signal wraps around at the ends. so you add enough zeros to get rid of the overlap, to simulate a signal that is zero out to infinity. to get cross-correlation instead of convolution, you either need to time-reverse one of the signals before doing the fft, or take the complex conjugate of one of the signals after the fft :

corr(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros[reversed]))
corr(a, b) = ifft(fft(a_and_zeros) * conj(fft(b_and_zeros)))

whichever is easier with your hardware / software. for autocorrelation ( cross-correlation of a signal with itself ), it's better to do the complex conjugate, because then you only need to calculate the fft once. if the signals are real, you can use real ffts ( rfft / irfft ) and save half your computation time by only calculating half of the spectrum. also you can save computation time by padding to a larger size that the fft is optimized for ( such as a 5-smooth number for fftpack, a ~13-smooth number for fftw, or a power of 2 for a simple hardware implementation ). either way you get the cross-correlation function, which is a measure of similarity vs offset. to get the offset at which the waves are " lined up " with each other, look for the peak in the correlation function : the x value
of the peak is the offset, which could be negative or positive. i've only seen this used to find the offset between two waves. you can get a more precise estimate of the offset ( better than the resolution of your samples ) by using parabolic / quadratic interpolation on the peak. to get a similarity value between - 1 and 1 ( a negative value indicating one of the signals decreases as the other increases ) you'd need to scale the amplitude according to the length of the inputs, length of the fft, your particular fft implementation's scaling, etc. the autocorrelation of a wave with itself will give you the value of the maximum possible match. note that this will only work on waves that have the same shape. if they've been sampled on different hardware or have some noise added, but otherwise still have the same shape, this comparison will work, but if the wave shape has been changed by filtering or phase shifts, they may sound the same, but won't correlate as well.
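a self-contained sketch of the recipe above in pure python — a toy radix-2 fft for illustration only (in practice you would use numpy, fftpack, or fftw; all the names here are mine):

```python
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1 if inverse else -1
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(x):
    return [v / len(x) for v in fft(x, inverse=True)]

def fft_xcorr(a, b):
    """Cross-correlation: zero-pad, FFT, multiply by conjugate, inverse FFT."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2                                   # pad to a power of two
    fa = fft([complex(v) for v in a] + [0j] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0j] * (n - len(b)))
    return [v.real for v in ifft([p * q.conjugate() for p, q in zip(fa, fb)])]

def brute_xcorr(a, b, n):
    # direct circular cross-correlation of the zero-padded signals
    ap = a + [0.0] * (n - len(a))
    bp = b + [0.0] * (n - len(b))
    return [sum(ap[(m + k) % n] * bp[m] for m in range(n)) for k in range(n)]

a = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
b = [1.0, 2.0, 1.0, 0.0, 0.0, 0.0]   # same pulse, shifted by one sample
c = fft_xcorr(a, b)
assert all(abs(x - y) < 1e-9 for x, y in zip(c, brute_xcorr(a, b, len(c))))
# the peak of the correlation function gives the offset between the pulses
assert max(range(len(c)), key=lambda k: c[k]) == 1
```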
you are getting reflections from the front ( glass surface ) and back ( mirrored ) surface, including ( multiple ) internal reflections : it should be obvious from this diagram that the spots will be further apart as you move to a more glancing angle of incidence. depending on the polarization of the laser pointer, there is an angle ( the brewster angle ) where you can make the front ( glass ) surface reflection disappear completely. this takes some experimenting. the exact details of the intensity as a function of angle of incidence are described by the fresnel equations. from that wikipedia article, here is a diagram showing how the intensity of the ( front ) reflection changes with angle of incidence and polarization : this effect is independent of wavelength ( except inasmuch as the refractive index is a weak function of wavelength... so different colors of light will have a slightly different brewster angle ) ; the only way in which laser light is different from " ordinary " light in this case is the fact that laser light is typically linearly polarized, so that the reflection coefficient for a particular angle can be changed simply by rotating the laser pointer. as rainer p pointed out in a comment, if there is a coefficient of reflection $ c $ at the front face, then $ ( 1 - c ) $ of the intensity makes it to the back ; and if the coefficient of reflection at the inside of the glass / air interface is $ r $, then the successive reflected beams will have intensities that decrease geometrically : $ $ c, ( 1 - c ) ( 1 - r ), ( 1 - c ) ( 1 - r ) r, ( 1 - c ) ( 1 - r ) r ^ 2, ( 1 - c ) ( 1 - r ) r ^ 3,... $ $ of course the reciprocity theorem tells us that when we reverse the direction of a beam, we get the same reflectivity, so $ r = c $. this means the above can be simplified ; but i left it in this form to show better what interactions the rays undergo. 
the above also assumes perfect reflection at the silvered ( back ) face : it should be easy to see how you could add that term...
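a quick numerical check of this geometric series (assuming, as above, a lossless perfect back mirror): the spot intensities then sum to exactly the incident intensity, since $c + (1-c)(1-r)(1 + r + r^2 + \dots) = c + (1-c) = 1$.

```python
# successive reflected-spot intensities for front-surface coefficient c and
# internal glass/air coefficient r, with a perfect mirror on the back face
def beam_intensities(c, r, n_beams):
    beams = [c]                                   # direct front-surface spot
    for k in range(n_beams - 1):
        beams.append((1 - c) * (1 - r) * r**k)    # k internal bounces
    return beams

c = r = 0.04        # ~4%, typical for a glass surface at normal incidence
total = sum(beam_intensities(c, r, 200))
assert abs(total - 1.0) < 1e-12       # no absorption: everything comes back

# each successive spot after the second is dimmer by a factor r
b = beam_intensities(c, r, 5)
assert abs(b[3] / b[2] - r) < 1e-12
```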
the planet neptune's discovery was an example of something similar to this. it was known that newton's equations gave the wrong description of the motion of uranus and mercury. urbain le verrier sat down and tried to see what would happen if we assumed that the equations were right and the universe was wrong. he set up a complicated system of equations that incorporated a lot of ways contemporary knowledge of the universe could be wrong, including the number of planets, the location and mass of the planets, and the presence of forces other than gravity. he would eventually find a solution to the equations where the dominating error was the presence of another, as yet undetected, planet. his equations gave the distance from the sun and the mass of the planet correctly, as well as enough detail about the planet's location in the sky that it was found with only an hour of searching. mercury's orbital anomalies would eventually be explained by general relativity.
finite element : volumetric integrals, internal polynomial order classical finite element methods assume continuous or weakly continuous approximation spaces and ask for volumetric integrals of the weak form to be satisfied. the order of accuracy is increased by raising the approximation order within elements. the methods are not exactly conservative, thus often struggle with stability for discontinuous processes. finite volume : surface integrals, fluxes from discontinuous data, reconstruction order finite volume methods use piecewise constant approximation spaces and ask for integrals against piecewise constant test functions to be satisfied. this yields exact conservation statements. the volume integral is converted to a surface integral and the entire physics is specified in terms of fluxes in those surface integrals. for first - order hyperbolic problems, this is a riemann solve. second order / elliptic fluxes are more subtle. order of accuracy is increased by using neighbors to ( conservatively ) reconstruct higher order representations of the state inside elements ( slope reconstruction / limiting ) or by reconstructing fluxes ( flux limiting ). the reconstruction process is usually nonlinear to control oscillations around discontinuous features of the solution, see total variation diminishing ( tvd ) and essentially non - oscillatory ( eno / weno ) methods. a nonlinear discretization is necessary to simultaneously obtain both higher than first order accuracy in smooth regions and bounded total variation across discontinuities, see godunov's theorem. comments both fe and fv are easy to define up to second order accuracy on unstructured grids. fe is easier to go beyond second order on unstructured grids. fv handles non - conforming meshes more easily and robustly. combining fe and fv the methods can be married in multiple ways. 
discontinuous galerkin methods are finite element methods that use discontinuous basis functions, thus acquiring riemann solvers and more robustness for discontinuous processes ( especially hyperbolic ). dg methods can be used with nonlinear limiters ( usually with some reduction in accuracy ), but satisfy a cell - wise entropy inequality without limiting and can thus be used without limiting for some problems where other schemes require limiters. ( this is especially useful for adjoint - based optimization since it makes the discrete adjoint more representative of the continuous adjoint equations. ) mixed fe methods for elliptic problems use discontinuous basis functions and after some choices of quadrature, can be reinterpreted as standard
|
https://api.stackexchange.com
|
finite volume methods, see this answer for more. reconstruction dg methods ( aka. $ p _ n p _ m $ or " recovery dg " ) use both fv - like conservative reconstruction and internal order enrichment, and are thus a superset of fv and dg methods.
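to make the flux-based viewpoint concrete, here is a minimal sketch (my own illustration, not from the answer above) of a first-order finite volume upwind scheme for 1d advection $u_t + a u_x = 0$; the boundary conditions, cfl number, and initial pulse are arbitrary choices. each cell average changes only by the difference of fluxes through its faces, so total mass is conserved to roundoff:

```python
# minimal sketch of a first-order finite volume scheme for u_t + a*u_x = 0
# (illustrative only; periodic boundaries, cfl number 0.5 are my assumptions)
a = 1.0                 # advection speed
n = 100                 # number of cells on [0, 1]
dx = 1.0 / n
dt = 0.5 * dx / a       # time step from the cfl condition

# piecewise constant cell averages of a square pulse on [0.25, 0.5)
u = [1.0 if 0.25 <= (i + 0.5) * dx < 0.5 else 0.0 for i in range(n)]
mass0 = sum(u) * dx     # total "mass" before stepping

def step(u):
    # upwind flux through the left face of cell i: f[i] = a * u[i-1] (for a > 0)
    f = [a * u[i - 1] for i in range(n)]
    # conservative update: a cell average changes only by the flux difference
    return [u[i] - dt / dx * (f[(i + 1) % n] - f[i]) for i in range(n)]

for _ in range(200):
    u = step(u)

# exact conservation: total mass is unchanged up to roundoff
print(abs(sum(u) * dx - mass0) < 1e-12)
```

note how the physics enters only through the face fluxes `f`; replacing the upwind flux with a higher-order reconstructed flux is exactly the limiting/reconstruction story described above.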
|
https://api.stackexchange.com
|
another example : euler's sum of powers conjecture, a generalization of fermat's last theorem. it states : if the equation $ \ sum _ { i = 1 } ^ kx _ i ^ n = z ^ n $ has a solution in positive integers, then $ n \ leq k $ ( unless $ k = 1 $ ). fermat's last theorem is the $ k = 2 $ case of this conjecture. a counterexample for $ n = 5 $ was found in 1966 : it's $ $ 61917364224 = 27 ^ 5 + 84 ^ 5 + 110 ^ 5 + 133 ^ 5 = 144 ^ 5 $ $ the smallest counterexample for $ n = 4 $ was found in 1988 : $ $ 31858749840007945920321 = 95800 ^ 4 + 217519 ^ 4 + 414560 ^ 4 = 422481 ^ 4 $ $ this example used to be even more useful in the days before flt was proved, as an answer to the question " why do we need to prove flt if it has been verified for thousands of numbers? " : - )
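both counterexamples quoted above are easy to confirm with exact big-integer arithmetic, e.g.:

```python
# quick sanity check (exact integer arithmetic) of the two counterexamples above
lhs5 = 27 ** 5 + 84 ** 5 + 110 ** 5 + 133 ** 5
print(lhs5 == 144 ** 5 == 61917364224)                    # n = 5, found 1966

lhs4 = 95800 ** 4 + 217519 ** 4 + 414560 ** 4
print(lhs4 == 422481 ** 4 == 31858749840007945920321)     # n = 4, found 1988
```

both checks print `True`.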
|
https://api.stackexchange.com
|
the previous answers all restate the problem as " work is force dot / times distance ". but this is not really satisfying, because you could then ask " why is work force dot distance? " and the mystery is the same. the only way to answer questions like this is to rely on symmetry principles, since these are more fundamental than the laws of motion. using galilean invariance, the symmetry that says that the laws of physics look the same to you on a moving train, you can explain why energy must be proportional to the mass times the velocity squared. first, you need to define kinetic energy. i will define it as follows : the kinetic energy $ e ( m, v ) $ of a ball of clay of mass $ m $ moving with velocity $ v $ is the amount of calories of heat that it makes when it smacks into a wall. this definition does not make reference to any mechanical quantity, and it can be determined using thermometers. i will show that, assuming galilean invariance, $ e ( v ) $ must be the square of the velocity. $ e ( m, v ) $, if it is invariant, must be proportional to the mass, because you can smack two clay balls side by side and get twice the heating, so $ $ e ( m, v ) = m e ( v ) $ $ further, if you smack two identical clay balls of mass $ m $ moving with velocity $ v $ head - on into each other, both balls stop, by symmetry. the result is that each acts as a wall for the other, and you must get an amount of heating equal to $ 2m e ( v ) $. but now look at this in a train which is moving along with one of the balls before the collision. in this frame of reference, the first ball starts out stopped, the second ball hits it at $ 2v $, and the two - ball stuck system ends up moving with velocity $ v $. the kinetic energy of the second ball is $ me ( 2v ) $ at the start, and after the collision, you have $ 2me ( v ) $ kinetic energy stored in the combined ball. but the heating generated by the collision is the same as in the earlier case. 
so there are now two $ 2me ( v ) $ terms to consider : one representing the heat generated by the collision, which we saw earlier was $ 2me ( v ) $, and the other representing the energy stored in the
|
https://api.stackexchange.com
|
moving, double - mass ball, which is also $ 2me ( v ) $. due to conservation of energy, those two terms need to add up to the kinetic energy of the second ball before the collision : $ $ me ( 2v ) = 2me ( v ) + 2me ( v ) $ $ $ $ e ( 2v ) = 4 e ( v ) $ $ which implies that $ e $ is quadratic. non - circular force - times - distance here is the non - circular version of the force - times - distance argument that everyone seems to love so much, but is never done correctly. in order to argue that energy is quadratic in velocity, it is enough to establish two things : potential energy on the earth's surface is linear in height objects falling on the earth's surface have constant acceleration the result then follows. that the energy in a constant gravitational field is proportional to the height is established by statics. if you believe the law of the lever, an object will be in equilibrium with another object on a lever when the distances are inversely proportional to the masses ( there are simple geometric demonstrations of this that require nothing more than the fact that equal mass objects balance at equal center - of - mass distances ). then if you tilt the lever a little bit, the mass - times - height gained by 1 is equal to the mass - times - height gained by the other. this allows you to lift objects and lower them with very little effort, so long as the mass - times - height added over all the objects is constant before and after. this is archimedes'principle. another way of saying the same thing uses an elevator, consisting of two platforms connected by a chain through a pulley, so that when one goes up, the other goes down. you can lift an object up, if you lower an equal amount of mass down the same amount. you can lift two objects a certain distance in two steps, if you drop an object twice as far. 
this establishes that for all reversible motions of the elevator, the ones that do not require you to do any work ( in both the colloquial sense and the physics sense - - - the two notions coincide here ), the mass - times - height summed over all the objects is conserved. the " energy " can now be defined as that quantity of motion which is conserved when these objects are allowed to move with a non - infinitesimal velocity. this is feynman's version of archimedes
|
https://api.stackexchange.com
|
. so the mass - times - height is a measure of the effort required to lift something, and it is a conserved quantity in statics. this quantity should be conserved even if there is dynamics in intermediate stages. by this i mean that if you let two weights drop while suspended on a string, let them do an elastic collision, and catch the two objects when they stop moving again, you did no work. the objects should then go up to the same total mass - times - height. this is the original demonstration of the laws of elastic collisions by christian huygens, who argued that if you drop two masses on pendulums, and let them collide, their center of mass has to go up to the same height, if you catch the balls at their maximum point. from this, huygens generalized the law of conservation of potential energy implicit in archimedes to derive the law of conservation of square - velocity in elastic collisions. his principle that the center of mass cannot be raised by dynamic collisions is the first statement of conservation of energy. for completeness, the fact that an object accelerates in a constant gravitational field with uniform acceleration is a consequence of galilean invariance, and the assumption that a gravitational field is frame invariant to uniform motions up and down with a steady velocity. once you know that motion in constant gravity is constant acceleration, you know that $ $ mv ^ 2 / 2 + mgh = c $ $ so that huygens dynamical quantity which is additively conserved along with archimedes mass times height is the velocity squared.
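the closing relation $mv^2/2 + mgh = c$ is easy to check numerically. here is a small sketch (my own addition, using a semi-implicit euler integrator and an arbitrary starting height) showing that $v^2/2 + gh$ stays constant for constant-acceleration fall:

```python
# numerical sketch (my addition): with constant acceleration g, the quantity
# v**2/2 + g*h stays constant, i.e. e/m for e = m*v**2/2 + m*g*h
g = 9.81
h, v, dt = 100.0, 0.0, 1e-5
e0 = v ** 2 / 2 + g * h

for _ in range(200_000):        # integrate 2 seconds of free fall
    v += g * dt                 # dv/dt = g   (v is downward speed)
    h -= v * dt                 # dh/dt = -v  (semi-implicit euler)

e1 = v ** 2 / 2 + g * h
print(abs(e1 - e0) < 1e-2)      # conserved up to integrator error
```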
|
https://api.stackexchange.com
|
gpus are bad at doing one thing at a time. a modern high - end gpu may have several thousand cores, but these are organized into simd blocks of 16 or 32. if you want to compute 2 + 2, you might have 32 cores each compute an addition operation, and then discard 31 of the results. gpus are bad at doing individual things fast. gpus only recently topped the one - gigahertz mark, something that cpus did more than twenty years ago. if your task involves doing many things to one piece of data, rather than one thing to many pieces of data, a cpu is far better. gpus are bad at dealing with data non - locality. the hardware is optimized for working on contiguous blocks of data. if your task involves picking up individual pieces of data scattered around your data set, the gpu's incredible memory bandwidth is mostly wasted.
|
https://api.stackexchange.com
|
here are a few. the first one is included because it's not very well known and is not general, though the ones that follow are very general and very useful. a great but not very well known way to find the primitive of $f^{-1}$ in terms of $F$, the primitive of $f$, is (very easy to prove: just differentiate both sides and use the chain rule): $$\int f^{-1}(x)\,dx = x \cdot f^{-1}(x) - (F \circ f^{-1})(x) + c.$$ examples: $$\begin{aligned}\int \arcsin(x)\,dx &= x \cdot \arcsin(x) - (-\cos \circ \arcsin)(x) + c\\ &= x \cdot \arcsin(x) + \sqrt{1-x^2} + c.\end{aligned}$$ $$\begin{aligned}\int \log(x)\,dx &= x \cdot \log(x) - (\exp \circ \log)(x) + c\\ &= x \cdot \left(\log(x) - 1\right) + c.\end{aligned}$$ this one is more well known, and extremely powerful: it's called differentiating under the integral sign. it requires ingenuity most of the time to know when and how to apply it, but that only makes it more interesting. the technique uses the simple fact that $$\frac{\mathrm{d}}{\mathrm{d}x}\int_a^b f(x,y)\,\mathrm{d}y = \int_a^b \frac{\partial f}{\partial x}(x,y)\,\mathrm{d}y.$$ example: we want to calculate the integral $\int_0^\infty \frac{\sin(x)}{x}\,dx$. to do that, we unintuitively consider the more complicated integral $\int_{0}^{\infty} e^
|
https://api.stackexchange.com
|
{-tx}\frac{\sin(x)}{x}\,dx$ instead. let $$i(t) = \int_0^\infty e^{-tx}\frac{\sin(x)}{x}\,dx,$$ then $$i'(t) = -\int_0^\infty e^{-tx}\sin(x)\,dx = \frac{e^{-tx}(t\sin(x)+\cos(x))}{t^2+1}\bigg|_0^\infty = \frac{-1}{1+t^2}.$$ since both $i(t)$ and $-\arctan(t)$ are primitives of $\frac{-1}{1+t^2}$, they must differ only by a constant, so that $i(t) + \arctan(t) = c$. let $t \to \infty$; then $i(t) \to 0$ and $-\arctan(t) \to -\pi/2$, hence $c = \pi/2$ and $i(t) = \frac{\pi}{2} - \arctan(t)$. finally, $$\int_0^\infty \frac{\sin(x)}{x}\,dx = i(0) = \frac{\pi}{2} - \arctan(0) = \boxed{\frac{\pi}{2}}.$$ this one is probably the most commonly used "advanced integration technique", and for good reasons. it's referred to as the "residue theorem" and it states that if $\gamma$ is a counterclockwise simple closed curve, then $\int_\gamma f(z)\,dz = 2\pi i \sum_{k=1}^n \operatorname{res}(f, a_k)$. it will be difficult for you to understand this one without knowledge of complex analysis, but you can get the gist of it with the wiki article. example: we
|
https://api.stackexchange.com
|
want to compute $\int_{-\infty}^{\infty} \frac{x^2}{1+x^4}\,dx$. the poles of our function $f(z) = \frac{z^2}{1+z^4}$ in the upper half plane are $a_1 = e^{i\frac{\pi}{4}}$ and $a_2 = e^{i\frac{3\pi}{4}}$. the residues of our function at those points are $$\operatorname{res}(f, a_1) = \lim_{z \to a_1}(z - a_1)f(z) = \frac{e^{-i\frac{\pi}{4}}}{4},$$ and $$\operatorname{res}(f, a_2) = \lim_{z \to a_2}(z - a_2)f(z) = \frac{e^{-i\frac{3\pi}{4}}}{4}.$$ let $\gamma$ be the closed path around the boundary of the semicircle of radius $r > 1$ on the upper half plane, traversed in the counter-clockwise direction. then the residue theorem gives us ${1 \over 2\pi i}\int_\gamma f(z)\,dz = \operatorname{res}(f, a_1) + \operatorname{res}(f, a_2) = {1 \over 4}\left({1-i \over \sqrt{2}} + {-1-i \over \sqrt{2}}\right) = {-i \over 2\sqrt{2}}$ and $\int_\gamma f(z)\,dz = {\pi \over \sqrt{2}}$. now, by the definition of $\gamma$, we have: $$\int_\gamma f(z)\,dz = \int_{-r}^r \frac{x^2}{1+x^4}\,dx + \int_0^\pi {i(re^
|
https://api.stackexchange.com
|
{it})^3 \over 1+(re^{it})^4}\,dt = {\pi \over \sqrt{2}}.$$ for the integral on the semicircle $$\int_0^\pi {i(re^{it})^3 \over 1+(re^{it})^4}\,dt,$$ we have $$\begin{aligned}\left|\int_0^\pi {i(re^{it})^3 \over 1+(re^{it})^4}\,dt\right| &\leq \int_0^\pi \left|{i(re^{it})^3 \over 1+(re^{it})^4}\right|dt\\ &\leq \int_0^\pi {r^3 \over r^4-1}\,dt = {\pi r^3 \over r^4-1}.\end{aligned}$$ hence, as $r \to \infty$, we have ${\pi r^3 \over r^4-1} \to 0$, and hence $\int_0^\pi {i(re^{it})^3 \over 1+(re^{it})^4}\,dt \to 0$. finally, $$\begin{aligned}\int_{-\infty}^\infty \frac{x^2}{1+x^4}\,dx &= \lim_{r \to \infty}\int_{-r}^r \frac{x^2}{1+x^4}\,dx\\ &= \lim_{r \to \infty}\left({\pi \over \sqrt{2}} - \int_0^\pi {i(re^{it})^3 \over 1+(re^{it})^4}\,dt\right) = \boxed{{\pi \over \sqrt{2}}}.\end{aligned}$$ my final "technique" is the use of the mean value property for complex analytic functions, or cauchy's integral formula in
|
https://api.stackexchange.com
|
other words: $$\begin{aligned} f(a) &= \frac{1}{2\pi i}\int_\gamma \frac{f(z)}{z-a}\,dz\\ &= \frac{1}{2\pi}\int_{0}^{2\pi} f\left(a + e^{ix}\right)dx, \end{aligned}$$ where $\gamma$ is the circle of radius $1$ around $a$. example: we want to compute the very messy looking integral $\int_0^{2\pi}\cos(\cos(x)+1)\cosh(\sin(x))\,dx$. we first notice that $$\begin{aligned} &\hphantom{=}\cos[\cos(x)+1]\cosh[\sin(x)]\\ &= \Re\left\{\cos[\cos(x)+1]\cosh[\sin(x)] - i\sin[\cos(x)+1]\sinh[\sin(x)]\right\}\\ &= \Re\left[\cos\left(1+e^{ix}\right)\right]. \end{aligned}$$ then, we have $$\begin{aligned} \int_0^{2\pi}\cos[\cos(x)+1]\cosh[\sin(x)]\,dx &= \int_0^{2\pi}\Re\left[\cos\left(1+e^{ix}\right)\right]dx\\ &= \Re\left[\int_0^{2\pi}\cos\left(1+e^{ix}\right)dx\right]\\ &= \Re\left(\cos(1)\cdot 2\pi\right) = \boxed{2\pi\cos(1)}. \end{aligned}$$
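as a sanity check (my addition, not part of the answer), all three boxed values above can be confirmed with straightforward numerical quadrature; the truncation points and the plain midpoint rule are arbitrary choices:

```python
import math

# rough numerical spot checks of the three results computed above,
# using a composite midpoint rule; truncation points are my own choices
def midpoint(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# dirichlet integral, truncated at x = 200 (the oscillating tail contributes ~1/200)
d = midpoint(lambda x: math.sin(x) / x, 0.0, 200.0)
print(abs(d - math.pi / 2) < 1e-2)

# residue theorem example; the integrand decays like 1/x**2, so truncate at +-500
r = midpoint(lambda x: x * x / (1 + x ** 4), -500.0, 500.0)
print(abs(r - math.pi / math.sqrt(2)) < 1e-2)

# mean value property example over one full period (midpoint is very accurate here)
m = midpoint(lambda x: math.cos(math.cos(x) + 1) * math.cosh(math.sin(x)),
             0.0, 2 * math.pi)
print(abs(m - 2 * math.pi * math.cos(1)) < 1e-6)
```

all three checks print `True`; tightening the tolerances requires pushing the truncation points further out, not refining the grid.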
|
https://api.stackexchange.com
|
fever is a trait observed in warm- and cold-blooded vertebrates that has been conserved for hundreds of millions of years (evans, 2015). elevated body temperature stimulates the body's immune response against infectious viruses and bacteria. it also makes the body less favorable as a host for replicating viruses and bacteria, which are temperature sensitive (source: sci am). the innate system is stimulated by increasing the recruitment, activation and bacteriolytic activity of neutrophils. likewise, natural killer cells' cytotoxic activity is enhanced and their recruitment is increased, including that to tumors. macrophages and dendritic cells increase their activity in clearing up the mess associated with infection. the adaptive immune response is also enhanced by elevated temperatures. for example, the circulation of t cells to the lymph nodes is increased and their proliferation is stimulated. in fact, taking painkillers that reduce fever has been shown to lead to poorer clearance of pathogens from the body (evans, 2015). in adults, when body temperature reaches 104 °f (40 °c) it can become dangerous and fever-reducing agents like aspirin are recommended (source: emedicine). reference - evans, nat rev immunol (2015); 15(6): 335-49
|
https://api.stackexchange.com
|
* * warning : lithium ion cells * * while this question relates to non - rechargeable aa cells it is possible that someone may seek to extend the advice to testing other small cells. in the case of li - ion rechargeable cells ( aa, 18650, other ) this can be a very bad idea in some cases. shorting lithium ion cells as in test 2 is liable to be a very bad idea indeed. depending on design, some li - ion cells will provide short circuit current of many times the cell mah rating - eg perhaps 50 + amps for an 18650 cell, and perhaps 10's of amps for an aa size li - ion cell. this level of discharge can cause injury and worst case may destroy the cell, in some uncommon cases with substantial release of energy in the form of flame and hot material. aa non - rechargeable cells : 1 ) ignore the funny answers generally speaking, if a battery is more than 1 year old then only alkaline batteries are worth keeping. shelf life of non - alkaline can be some years but they deteriorate badly with time. modern alkaline have gotten awesome, as they still retain a majority of charge at 3 to 5 years. non brand name batteries are often ( but not always ) junk. heft battery in hand. learn to get the feel of what a " real " aa cell weighs. an eveready or similar alkaline will be around 30 grams / one ounce. an aa nimh 2500 mah will be similar. anything under 25g is suspect. under 20g is junk. under 15g is not unknown. 2 ) brutal but works set multimeter to high current range ( 10a or 20a usually ). needs both dial setting and probe socket change in most meters. use two sharpish probes. if battery has any light surface corrosion scratch a clean bright spot with probe tip. if it has more than surface corrosion consider binning it. some alkaline cells leak electrolyte over time, which is damaging to gear and annoying ( at least ) to skin. press negative probe against battery base. move slightly to make scratching contact. press firmly. 
do not slip so probe jumps off battery and punctures your other hand. not advised. ask me how i know. press positive probe onto top of battery. hold for maybe 1 second. perhaps 2. experience will show what is needed. this is thrashing the battery, decreasing its life and making it
|
https://api.stackexchange.com
|
sad. try not to do this often or for very long. top aa alkaline cells new will give 5 - 10 a (nimh aa will approach 10 a for a good cell). lightly used aa, or ones which have had bursts of heavy use and then recovered, will typically give a few amps. deader again will be 1 - 3 a. anything under 1 a you probably want to discard unless you have a micropower application. non-alkaline will usually be lower. i buy only alkaline primary cells as other "quality" cells are usually not vastly cheaper but are of much lower capacity. current will fall with time. a very good cell will fall little over 1 to maybe 2 seconds. more used cells will start lower and fall faster. well used cells may plummet. i place cells in approximate order of current after testing. the top ones can be grouped and wrapped with a rubber band. the excessively keen may mark the current given on the cell with a marker. absolute current is not the point - it serves as a measure of usefulness.
3 ) gentler - but works reasonably well. set meter to 2 v range, or the next above 2 v if no 2 v range. measure battery unloaded voltage. new unused alkaline are about 1.65 v. most books don't tell you that. unused but sat on the shelf 1 year + alkaline will be down slightly, maybe 1.55 - 1.6 v. modestly used cells will be 1.5 v +. used but useful may be in the 1.3 v - 1.5 v range. after that it's all downhill. a 1 v oc cell is dodo dead. a 1.1 v - 1.2 v cell will probably load down to 1 v if you look at it harshly. do this a few times and you will get a feel for it.
4 ) in between. use a heavyish load and measure voltage. keep a standard resistor for this. solder on the wires that you use as probes - a twisted connection has too much variability. the resistor should draw a heavy load for the battery type used; 100 ma - 500 ma is probably ok. battery testers usually work this way.
5 ) is this worth doing? yes, it is.
as well as returning a few batteries to the fold and making your life more exciting when some fail to perform, it teaches you a new skill that can be helpful in understanding how batteries behave in real life and the possible effect on equipment
|
https://api.stackexchange.com
|
. the more you know, the more you get to know, and this is one more tool along the path towards knowing everything : - ). [ the path is rather longer than any can traverse, but learning how to run along it can be fun ].
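the open-circuit voltage bands from method 3 can be folded into a tiny helper. the cutoffs below are my own rounding of the figures above, not calibrated values, and they apply to unloaded aa alkaline cells only:

```python
# rough classifier for an unloaded aa alkaline voltage reading
# (thresholds are approximate, taken from method 3 above)
def alkaline_condition(v_open: float) -> str:
    if v_open >= 1.6:
        return "new / unused"
    if v_open >= 1.55:
        return "shelf-aged but unused"
    if v_open >= 1.5:
        return "modestly used"
    if v_open >= 1.3:
        return "used but useful"
    if v_open > 1.0:
        return "nearly flat - will sag under load"
    return "dead"

print(alkaline_condition(1.62))   # new / unused
print(alkaline_condition(1.42))   # used but useful
print(alkaline_condition(0.95))   # dead
```

remember that an unloaded reading alone can flatter a tired cell; combine it with a loaded test (method 2 or 4) before trusting it.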
|
https://api.stackexchange.com
|
your question implies that aic and bic try to answer the same question, which is not true. the aic tries to select the model that most adequately describes an unknown, high-dimensional reality. this means that reality is never in the set of candidate models that are being considered. on the contrary, bic tries to find the true model among the set of candidates. i find it quite odd to assume that reality is instantiated in one of the models that the researchers built along the way. this is a real issue for bic. nevertheless, there are a lot of researchers who say bic is better than aic, using model recovery simulations as an argument. these simulations consist of generating data from models a and b, and then fitting both datasets with the two models. overfitting occurs when the wrong model fits the data better than the generating model. the point of these simulations is to see how well aic and bic correct these overfits. usually, the results point to the fact that aic is too liberal and still frequently prefers a more complex, wrong model over a simpler, true model. at first glance these simulations seem to be really good arguments, but the problem with them is that they are meaningless for aic. as i said before, aic does not consider that any of the candidate models being tested is actually true. according to aic, all models are approximations to reality, and reality should never have a low dimensionality, at least not lower than some of the candidate models. my recommendation is to use both aic and bic. most of the time they will agree on the preferred model; when they don't, just report it. if you are unhappy with both aic and bic and have free time to invest, look up minimum description length (mdl), a totally different approach that overcomes the limitations of aic and bic. there are several measures stemming from mdl, like normalized maximum likelihood or the fisher information approximation.
the problem with mdl is that it's mathematically demanding and/or computationally intensive. still, if you want to stick to simple solutions, a nice way of assessing model flexibility (especially when the number of parameters is equal, rendering aic and bic useless) is doing a parametric bootstrap, which is quite easy to implement. here is a link to a paper on it. some people here advocate for the use of cross-validation. i personally have used it and don't have anything against it, but
|
https://api.stackexchange.com
|
the issue with it is that the choice of sample-cutting rule (leave-one-out, k-fold, etc.) is an unprincipled one.
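for concreteness, here is a toy sketch (my own addition, not from the answer) of how the two criteria are computed from a model's maximized log-likelihood, $\mathrm{aic} = 2k - 2\ln\hat{l}$ and $\mathrm{bic} = k\ln n - 2\ln\hat{l}$, for two nested gaussian models of the same made-up data:

```python
import math

# toy example: aic = 2k - 2*ln(L), bic = k*ln(n) - 2*ln(L)
# for two nested gaussian models fitted to the same small data set
data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.4, 1.7, 2.0, 2.1]
n = len(data)
mean = sum(data) / n

def loglik(xs, mu, s2):
    # gaussian log-likelihood with mean mu and variance s2
    return sum(-0.5 * math.log(2 * math.pi * s2) - (x - mu) ** 2 / (2 * s2)
               for x in xs)

# model a: fit the mean only, variance fixed at 1.0     -> k = 1 parameter
ll_a = loglik(data, mean, 1.0)
# model b: fit mean and variance (mle of the variance)  -> k = 2 parameters
var = sum((x - mean) ** 2 for x in data) / n
ll_b = loglik(data, mean, var)

for name, ll, k in [("a", ll_a, 1), ("b", ll_b, 2)]:
    aic = 2 * k - 2 * ll
    bic = k * math.log(n) - 2 * ll   # bic penalizes extra parameters harder once n >= 8
    print(name, round(aic, 2), round(bic, 2))
```

note the only difference between the two criteria is the penalty term, $2k$ versus $k\ln n$; which penalty is "right" depends on exactly the philosophical question discussed above.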
|
https://api.stackexchange.com
|
the simple answer is that no, the big bang did not happen at a point. instead, it happened everywhere in the universe at the same time. consequences of this include : the universe doesn't have a centre : the big bang didn't happen at a point so there is no central point in the universe that it is expanding from. the universe isn't expanding into anything : because the universe isn't expanding like a ball of fire, there is no space outside the universe that it is expanding into. in the next section, i'll sketch out a rough description of how this can be, followed by a more detailed description for the more determined readers. a simplified description of the big bang imagine measuring our current universe by drawing out a grid with a spacing of 1 light year. although obviously, we can't do this, you can easily imagine putting the earth at ( 0, 0 ), alpha centauri at ( 4. 37, 0 ), and plotting out all the stars on this grid. the key thing is that this grid is infinite $ ^ 1 $ i. e. there is no point where you can't extend the grid any further. now wind time back to 7 billion years after the big bang, i. e. about halfway back. our grid now has a spacing of half a light year, but it's still infinite - there is still no edge to it. the average spacing between objects in the universe has reduced by half and the average density has gone up by a factor of $ 2 ^ 3 $. now wind back to 0. 0000000001 seconds after the big bang. there's no special significance to that number ; it's just meant to be extremely small. our grid now has a very small spacing, but it's still infinite. no matter how close we get to the big bang we still have an infinite grid filling all of space. you may have heard pop science programs describing the big bang as happening everywhere and this is what they mean. the universe didn't shrink down to a point at the big bang, it's just that the spacing between any two randomly selected spacetime points shrank down to zero. 
so at the big bang, we have a very odd situation where the spacing between every point in the universe is zero, but the universe is still infinite. the total size of the universe is then $ 0 \ times \ infty $, which is undefined. you probably think this doesn
|
https://api.stackexchange.com
|