…doesn't make sense, and actually, most physicists agree with you. The Big Bang is a singularity, and most of us don't think singularities occur in the real universe. We expect that some quantum gravity effect will become important as we approach the Big Bang. However, at the moment we have no working theory of quantum gravity to explain exactly what happens.$^1$

To find out how the universe evolved in the past, and what will happen to it in the future, we have to solve Einstein's equations of general relativity for the whole universe. The solution we get is an object called the metric tensor that describes spacetime for the universe. But Einstein's equations are partial differential equations, and as a result have a whole family of solutions. To get the solution corresponding to our universe we need to specify some initial conditions. The question is then what initial conditions to use. Well, if we look at the universe around us we note two things:

- If we average over large scales the universe looks the same in all directions, that is, it is isotropic.
- If we average over large scales the universe is the same everywhere, that is, it is homogeneous.

You might reasonably point out that the universe doesn't look very homogeneous, since it has galaxies with a high density scattered randomly around space with a very low density. However, if we average on scales larger than the size of galaxy superclusters we do get a constant average density. Also, if we look back to the time the cosmic microwave background was emitted (380,000 years after the Big Bang and well before galaxies started to form) we find that the universe is homogeneous to about 1 part in $10^5$, which is pretty homogeneous. So as the initial conditions let's specify that the universe is homogeneous and isotropic, and with these assumptions Einstein's equation has a (relatively!) simple solution.

Indeed this solution was found soon after Einstein formulated general relativity, and it has been independently discovered by several different people. As a result the solution glories in the name Friedmann–Lemaître–Robertson–Walker metric, though you'll usually see this shortened to FLRW metric or sometimes FRW metric (why Lemaître misses out I'm not sure).

Recall the grid I described to measure out the universe in the first section of this answer, and how I described the grid shrinking as we went back in time towards the Big Bang? Well, the FLRW metric makes this quantitative. If $(x, y, z)$ is some point on our grid, then the current distance to that point is just given by Pythagoras' theorem:

$$d^2 = x^2 + y^2 + z^2$$

What the FLRW metric tells us is that the distance changes with time according to the equation:

$$d^2(t) = a^2(t)(x^2 + y^2 + z^2)$$

where $a(t)$ is a function called the scale factor. We get the function for the scale factor when we solve Einstein's equations. Sadly it doesn't have a simple analytical form, but it's been calculated in answers to the previous questions "What was the density of the universe when it was only the size of our solar system?" and "How does the Hubble parameter change with the age of the universe?".

The value of the scale factor is conventionally taken to be unity at the current time, so if we go back in time and the universe shrinks we have $a(t) < 1$, and conversely in the future as the universe expands we have $a(t) > 1$. The Big Bang happens because if we go back in time to $t = 0$ the scale factor $a(0)$ is zero. This gives us the remarkable result that the distance to any point in the universe $(x, y, z)$ is:

$$d^2(t) = 0 \cdot (x^2 + y^2 + z^2) = 0$$

so the distance between every point in the universe is zero. The density of matter (the density of radiation behaves differently, but let's gloss over that) is given by:

$$\rho(t) = \frac{\rho_0}{a^3(t)}$$

where $\rho_0$ is the density at the current time, so the density at time zero is infinitely large. At the time $t = 0$ the FLRW metric becomes singular.

No one I know thinks the universe did become singular at the Big Bang. This isn't a modern opinion: the first person I know to have objected publicly was Fred Hoyle, and he suggested steady state theory to avoid the singularity. These days it's commonly believed that some quantum gravity effect will prevent the geometry from becoming singular, though since we have no working theory of quantum gravity no one knows how this might work.

So to conclude: the Big Bang is the zero time limit of the FLRW metric, and it's a time when the spacing between every point in the universe becomes zero and the density goes to infinity. It should be clear that we can't associate the Big Bang with a single spatial point, because the distance between all points was zero, so the Big Bang happened at all points in space. This is why it's commonly said that the Big Bang happened everywhere.

In the discussion above I've several times casually referred to the universe as infinite, but what I really mean is that it can't have an edge. Remember that our going-in assumption is that the universe is homogeneous, i.e. it's the same everywhere. If this is true the universe can't have an edge, because points at the edge would be different from points away from the edge. A homogeneous universe must either be infinite, or it must be closed, i.e. have the spatial topology of a 3-sphere. The recent Planck results show the curvature is zero to within experimental error, so if the universe is closed the scale must be far larger than the observable universe.

$^1$ We assume the universe is infinite: more on this in the next section, for determined readers only.
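The two FLRW scaling relations quoted above (proper distance proportional to $a(t)$, matter density proportional to $1/a^3(t)$) can be sketched numerically. This is a small illustration of the arithmetic, not part of the original answer; the function names are made up:

```python
# Sketch of the FLRW scaling relations d(t) = a(t) * d0 and
# rho(t) = rho0 / a(t)^3, with a = 1 taken as the present day.
def proper_distance(a, comoving_distance):
    # Distance to a grid point (x, y, z) scales linearly with a(t)
    return a * comoving_distance

def matter_density(a, rho_0):
    # Matter dilutes with the cube of the scale factor
    return rho_0 / a ** 3

# Halving the scale factor halves every distance and multiplies the
# matter density by eight:
print(proper_distance(0.5, 100.0))  # 50.0
print(matter_density(0.5, 1.0))     # 8.0
```

As $a \to 0$ every distance goes to zero and the density diverges, which is exactly the singular behaviour the answer describes at $t = 0$.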
This is partly a matter of terminology, and as such only requires that you and the person you're talking to clarify it beforehand. However, there are different topics that are more strongly associated with parallelism, concurrency, or distributed systems.

Parallelism is generally concerned with accomplishing a particular computation as fast as possible, exploiting multiple processors. The scale of the processors may range from multiple arithmetic units inside a single processor, to multiple processors sharing memory, to distributing the computation over many computers. On the side of models of computation, parallelism is generally about using multiple simultaneous threads of computation internally in order to compute a final result. Parallelism is also sometimes used for real-time reactive systems, which contain many processors that share a single master clock; such systems are fully deterministic.

Concurrency is the study of computations with multiple threads of computation. Concurrency tends to come from the architecture of the software rather than from the architecture of the hardware. Software may be written to use concurrency in order to exploit hardware parallelism, but often the need is inherent in the software's behavior: it must react to different asynchronous events (e.g. a computation thread that works independently of a user interface thread, or a program that reacts to hardware interrupts by switching to an interrupt handler thread).

Distributed computing studies separate processors connected by communication links. Whereas parallel processing models often (but not always) assume shared memory, distributed systems rely fundamentally on message passing. Distributed systems are inherently concurrent. Like concurrency, distribution is often part of the goal, not solely part of the solution: if resources are in geographically distinct locations, the system is inherently distributed. Systems in which partial failures (of processor nodes or of communication links) are possible fall under this domain.
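The "worker thread independent of another thread" pattern described above can be sketched in a few lines. This is a minimal, hedged illustration (the names and workload are made up): the worker runs concurrently with the main thread, and the two communicate only through a message queue rather than shared state.

```python
import threading
import queue

# A queue acts as the message-passing channel between threads.
results = queue.Queue()

def worker(data):
    # Work that runs independently of the main thread; the result is
    # communicated back via the queue rather than a shared variable.
    results.put(sum(x * x for x in data))

t = threading.Thread(target=worker, args=([1, 2, 3, 4],))
t.start()
# The main thread is free here to react to other events (user input,
# timers, ...) while the worker computes.
t.join()
print(results.get())  # 30
```

Note that this is concurrency in the software's structure; whether the two threads actually run in parallel depends on the hardware and the runtime.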
If you apply binary search, you need $$\log_2(n) + O(1)$$ comparisons. If you apply ternary search, you need $$2 \cdot \log_3(n) + O(1)$$ comparisons, as in each step you need to perform 2 comparisons to cut the search space into three parts. Now if you do the math, you can observe that $$2 \cdot \log_3(n) + O(1) = 2 \cdot \frac{\log(2)}{\log(3)} \log_2(n) + O(1).$$ Since we know that $2 \cdot \frac{\log(2)}{\log(3)} > 1$, we actually get more comparisons with ternary search.

By the way: $n$-ary search may make a lot of sense in case comparisons are quite costly and can be parallelized, as then parallel computers can be applied.

Note that the argument can be generalized to $n$-ary search quite easily. You just need to show that the function $f(k) = (k - 1) \cdot \frac{\log(2)}{\log(k)}$ is strictly monotonically increasing for integer values of $k$.
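The monotonicity claim is easy to check numerically. A small sketch (the function name `f` follows the text; $f(k)$ is the cost of $k$-ary search in units of $\log_2(n)$ comparisons):

```python
import math

# f(k) = (k - 1) * log(2) / log(k): k-ary search performs
# f(k) * log2(n) + O(1) comparisons, so binary search (f(2) = 1) wins.
def f(k):
    return (k - 1) * math.log(2) / math.log(k)

print([round(f(k), 3) for k in range(2, 7)])
# [1.0, 1.262, 1.5, 1.723, 1.934] -- strictly increasing
```

So every $k > 2$ costs strictly more comparisons than binary search, consistent with the argument above.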
The anode is the electrode where the oxidation reaction \begin{align}\ce{Red -> Ox + e-}\end{align} takes place, while the cathode is the electrode where the reduction reaction \begin{align}\ce{Ox + e- -> Red}\end{align} takes place. That's how cathode and anode are defined.

Galvanic cell

Now, in a galvanic cell the reaction proceeds without an external potential helping it along. Since at the anode you have the oxidation reaction, which produces electrons, you get a build-up of negative charge in the course of the reaction until electrochemical equilibrium is reached. Thus the anode is negative. At the cathode, on the other hand, you have the reduction reaction, which consumes electrons (leaving behind positive (metal) ions at the electrode) and thus leads to a build-up of positive charge in the course of the reaction until electrochemical equilibrium is reached. Thus the cathode is positive.

Electrolytic cell

In an electrolytic cell, you apply an external potential to force the reaction to go in the opposite direction. Now the reasoning is reversed. At the negative electrode, where you have produced a high electron potential via an external voltage source, electrons are "pushed out" of the electrode, thereby reducing the oxidized species $\ce{Ox}$, because the electron energy level inside the electrode (Fermi level) is higher than the energy level of the LUMO of $\ce{Ox}$ and the electrons can lower their energy by occupying this orbital: you have very reactive electrons, so to speak. So the negative electrode will be the one where the reduction reaction takes place, and thus it's the cathode. At the positive electrode, where you have produced a low electron potential via an external voltage source, electrons are "sucked into" the electrode, leaving behind the reduced species $\ce{Red}$, because the electron energy level inside the electrode (Fermi level) is lower than the energy level of the HOMO of $\ce{Red}$. So the positive electrode will be the one where the oxidation reaction takes place, and thus it's the anode.

A tale of electrons and waterfalls

Since there is some confusion concerning the principles on which an electrolysis works, I'll try a metaphor to explain it. Electrons flow from a region of high potential to a region of low potential, much like water falls down a waterfall or flows down an inclined plane. The reason is the same: water and electrons can lower their energy this way. Now the external voltage source acts like two big rivers connected to waterfalls: one at a high altitude that leads towards a waterfall (that would be the minus pole) and one at a low altitude that leads away from a waterfall (that would be the plus pole). The electrodes would be like the points of the river shortly before or after the waterfalls in this picture: the cathode is like the edge of a waterfall where the water drops down, and the anode is like the point where the water drops into.

OK, what happens in the electrolysis reaction? At the cathode you have the high-altitude situation. So the electrons flow to the "edge of their waterfall". They want to "fall down" because behind them the river is pushing towards the edge, exerting some kind of "pressure". But where can they fall down to? The other electrode is separated from them by the solution and usually a diaphragm. But there are $\ce{Ox}$ molecules that have empty states lying energetically below that of the electrode. Those empty states are like small ponds lying at a lower altitude into which a little bit of the water from the river can fall. So every time such an $\ce{Ox}$ molecule comes near the electrode, an electron takes the opportunity to jump to it and reduce it to $\ce{Red}$. But that does not mean that the electrode is suddenly missing an electron, because the river replaces the "pushed out" electron immediately. And the voltage source (the source of the river) can't run dry of electrons, because it gets its electrons from the power socket.

Now the anode: at the anode you have the low-altitude situation. So here the river lies lower than everything else. Now you can imagine the HOMO states of the $\ce{Red}$ molecules as small barrier lakes lying at a higher altitude than our river. When a $\ce{Red}$ molecule comes close to the electrode, it is like someone opening the floodgates of the barrier lake's dam. The electrons flow from the HOMO into the electrode, thus creating an $\ce{Ox}$ molecule. But the electrons don't stay in the electrode, so to speak; they are carried away by the river. And since the river is such a vast entity (lots of water) and usually flows into an ocean, the little "water" that is added to it doesn't change the river much. It stays the same, unaltered, so that every time a floodgate gets opened the water from the barrier lake will drop the same distance.
dataset will be a data frame. As I don't have forr.csv, I'll make up a small data frame for illustration:

set.seed(1)
dataset <- data.frame(A = sample(c(NA, 1:100), 1000, rep = TRUE),
                      B = rnorm(1000))

> head(dataset)
   A           B
1 26  0.07730312
2 37 -0.29686864
3 57 -1.18324224
4 91  0.01129269
5 20  0.99160104
6 90  1.59396745

To get the number of cases, count the number of rows using nrow() or NROW():

> nrow(dataset)
[1] 1000
> NROW(dataset)
[1] 1000

To count the data after omitting the NA, use the same tools, but wrap dataset in na.omit():

> NROW(na.omit(dataset))
[1] 993

The difference between NROW() and NCOL() and their lowercase variants (nrow() and ncol()) is that the lowercase versions will only work for objects that have dimensions (arrays, matrices, data frames). The uppercase versions will work with vectors, which are treated as if they were a 1-column matrix, and are robust if you end up subsetting your data such that R drops an empty dimension.

Alternatively, use complete.cases() and sum it (complete.cases() returns a logical vector [TRUE or FALSE] indicating whether any observations are NA for any rows):

> sum(complete.cases(dataset))
[1] 993
The difference in your timings seems to be due to the manual unrolling of the unit-stride Fortran daxpy. The following timings are on a 2.67 GHz Xeon X5650, using the command

./test 1000000 10000

Intel 11.1 compilers
Fortran with manual unrolling: 8.7 sec
Fortran w/o manual unrolling: 5.8 sec
C w/o manual unrolling: 5.8 sec

GNU 4.1.2 compilers
Fortran with manual unrolling: 8.3 sec
Fortran w/o manual unrolling: 13.5 sec
C w/o manual unrolling: 13.6 sec
C with vector attributes: 5.8 sec

GNU 4.4.5 compilers
Fortran with manual unrolling: 8.1 sec
Fortran w/o manual unrolling: 7.4 sec
C w/o manual unrolling: 8.5 sec
C with vector attributes: 5.8 sec

Conclusions

Manual unrolling helped the GNU 4.1.2 Fortran compiler on this architecture, but hurts the newer version (4.4.5) and the Intel Fortran compiler. The GNU 4.4.5 C compiler is much more competitive with Fortran than version 4.1.2 was. Vector intrinsics allow the GCC performance to match the Intel compilers. Time to test more complicated routines like dgemv and dgemm?
Sorry, I don't know OpenCV, and this is more a pre-processing step than a complete answer.

First, you don't want an edge detector. An edge detector converts transitions (like this dark-to-light) into ridges (bright lines on dark) like this: it performs a differentiation, in other words. But in your images, there is a light shining down from one direction, which shows us the relief of the 3D surface. We perceive this as lines and edges because we're used to seeing things in 3D, but they aren't really, which is why edge detectors aren't working, and template matching won't work easily with rotated images (a perfect match at 0 degrees rotation would actually cancel out completely at 180 degrees, because light and dark would line up with each other). If the height of one of these mazy lines looks like a bump from the side, then the brightness function when illuminated from one side will look like its derivative. This is what you see in your images: the facing surface becomes brighter and the trailing surface becomes darker.

So you don't want to differentiate. You need to integrate the image along the direction of illumination, and it will give you the original height map of the surface (approximately). Then it will be easier to match things, whether through the Hough transform or template matching or whatever.

I'm not sure how to automate finding the direction of illumination. If it's the same for all your images, great. Otherwise you'd have to find the biggest contrast line and assume the light is perpendicular to it, or something. For my example, I rotated the image manually to what I thought was the right direction, with light coming from the left.

You also need to remove all the low-frequency changes in the image, though, to highlight only the quickly-changing line-like features. To avoid ringing artifacts, I used a 2D Gaussian blur and then subtracted that from the original. The integration (cumulative sum) can run away easily, which produces horizontal streaks; I removed these with another Gaussian high-pass, but only in the horizontal direction this time. Now the stomata are white ellipses all the way around, instead of white in some places and black in others.

from pylab import *
import Image
from scipy.ndimage import gaussian_filter, gaussian_filter1d

filename = 'rotated_sample.jpg'
I = asarray(Image.open(filename).convert('L'))

# Remove DC offset
I = I - average(I)

close('all')
figure()
imshow(I)
gray()
show()
title('Original')

# Remove slowly-varying features
sigma_2d = 2
I = I - gaussian_filter(I, sigma_2d)
figure()
imshow(I)
title('2D filtered with %s' % sigma_2d)

# Integrate along the illumination direction
summed = cumsum(I, 1)

# Remove slowly-changing streaks in the horizontal direction
sigma_1d = 5
output = summed - gaussian_filter1d(summed, sigma_1d, axis=1)
figure()
imshow(output)
title('1D filtered with %s' % sigma_1d)

The Hough transform can be used to detect ridge ellipses like this, made of "edge pixels", though it's really expensive in computation and memory, and they are not perfect ellipses, so it would have to be a bit of a "sloppy" detector. I've never done it, but there are a lot of Google results for "hough ellipse detection". I'd say if you detect one ellipse inside the other, within a certain size search space, it should be counted as a stoma.

Also see:
- OpenCV: how to detect an ellipse in a binary image
- Python and OpenCV: how do I detect all (filled) circles/round objects in an image?
- Detection of coins (and fit ellipses) in an image
A linear function fixes the origin, whereas an affine function need not do so. An affine function is the composition of a linear function with a translation, so while the linear part fixes the origin, the translation can map it somewhere else.

Linear functions between vector spaces preserve the vector space structure (so in particular they must fix the origin). While affine functions don't preserve the origin, they do preserve some of the other geometry of the space, such as the collection of straight lines.

If you choose bases for vector spaces $V$ and $W$ of dimensions $m$ and $n$ respectively, and consider functions $f \colon V \to W$, then $f$ is linear if $f(v) = Av$ for some $n \times m$ matrix $A$, and $f$ is affine if $f(v) = Av + b$ for some matrix $A$ and vector $b$, where coordinate representations are used with respect to the bases chosen.
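The distinction can be made concrete with a quick numeric sketch. The matrix and vector below are arbitrary illustrative choices, not from the answer:

```python
import numpy as np

# A linear map f(v) = Av fixes the origin; an affine map g(v) = Av + b
# translates it to b.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
b = np.array([1.0, -1.0])

linear = lambda v: A @ v
affine = lambda v: A @ v + b

origin = np.zeros(2)
print(linear(origin))  # [0. 0.]  -- the origin is fixed
print(affine(origin))  # [ 1. -1.] -- the origin moves to b

# Affine maps still send straight lines to straight lines: the image of
# the midpoint of two points is the midpoint of their images.
p, q = np.array([1.0, 1.0]), np.array([3.0, 5.0])
assert np.allclose(affine((p + q) / 2), (affine(p) + affine(q)) / 2)
```

The last assertion illustrates the "preserves straight lines" claim: collinear points stay collinear under an affine map even though the origin moves.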
I think that documentation for scientific software can be divided into three categories, all of which are necessary for full understanding.

The easiest and most common is individual method documentation. There are many systems for this: you mention Doxygen, Python has pydoc, and in PETSc we have our own package, Sowing, which generates our function documentation. However, for any piece of software which goes beyond a simple utility, you need a manual. This provides a high-level view of the purpose of the package, and of how its different functionalities integrate to achieve this purpose. It helps a new user structure their code, often through the use of examples. In PETSc we just use LaTeX for the manual, but the PyClaw package uses the Sphinx framework, which I am very impressed with.

One thing that we have implemented in the Sowing package that I find very useful is the link between example code and function documentation. For example, this example solves the Bratu equation. Notice how you can follow the links for any custom type or function call and get to the low-level documentation, and how those pages link back to examples using them. This is how I learn about new functionality which other people in the project contribute.

A frequently overlooked part of documentation, I think, is developer documentation. It is not uncommon to publish a coding-style document and instructions for interacting with the repository. However, it is very rare to explain the design decisions made before implementation. These decisions always involve tradeoffs, and the situation with respect to hardware and algorithms will necessarily change over time. Without a discussion of the tradeoffs reviewed and the rationale for particular design decisions, later programmers are left to recreate the entire process on their own. I think this is a major impediment to successful maintenance and improvement of old codes when the original developers are no longer in charge.
I am not a kineticist, and my quantum chemistry is long, long out of date, but what I was about to say was that I'd guess the reason the "effect" is "unsolved" is that it's not real. That is, it is not a property of a single reactant considered while disregarding its environment (gas phase, solvent interactions). Then I saw that the two recent articles both were about solvation, so my comment is redundant (and certainly only a partially/inadequately educated guess).

I'd also comment that comparing $\ce{HO-}$ with $\ce{HOO-}$ is apples and oranges. You should compare it with a species with an alpha atom which is electronegative but doesn't have a lone pair. If it doesn't really have a published DFT model, then it might be good for an MS student to work on. I suspect answering it is like "curing cancer": it doesn't have just one 'reason'; rather, the cures depend on the exact nature of the reaction (including solvation).
What's the proper soldering iron temperature for standard .031" 60/40 solder?

There is no proper soldering iron temperature just for a given type of solder: the iron temperature should be set for both the component and the solder. When soldering surface mount components, a small tip and 600 °F (315 °C) should be sufficient to quickly solder the joint well without overheating the component. When soldering through-hole components, 700 °F (370 °C) is useful to pump more heat into the wire and plated hole to solder it quickly. A negative capacitor lead going to a heat-sinking solid pour ground plane is going to need a big fat tip at a much higher temperature.

However, I don't micromanage my soldering temperature, and simply keep mine at 700 °F (370 °C). I'll change the tips according to what I'm soldering, and the tip size really ends up determining how much heat gets into the joint in a given period of contact. I think you'll find that very few soldering jobs will really require you to change your tip temperature.

Keep in mind that the ideal situation is that the soldering iron heats up the joint enough that the joint, not the iron, melts the solder. So the iron is expected to be hotter than the melting point of the solder, so that the entire joint comes up to the melting point of the solder quickly. The more quickly you bring the joint temperature up and solder it, the less time the soldering iron is on the joint, and thus the less heat gets transferred to the component. It's not a big deal for many passive or small components, but it turns out that overall a higher tip temperature results in faster soldering and less likely damage to the component being soldered. So if you do use higher tip temperatures, don't leave the iron on components any longer than necessary. Apply the iron, apply the solder, and remove both: it should take just a second or maybe two for surface mount, and 1-3 seconds for a through-hole part.

Please note that I'm talking about prototyping, hobbyist, and one-off projects. If you are planning on doing final assembly with the iron, repair work for critical projects, etc., then you'll need to consider what you're doing more carefully than this general rule of thumb.
samtools has a subsampling option:

-s FLOAT: integer part is used to seed the random number generator [0]. Part after the decimal point sets the fraction of templates/pairs to subsample [no subsampling]

samtools view -bs 42.1 in.bam > subsampled.bam

will subsample 10 percent of mapped reads with 42 as the seed for the random number generator.
The Laplace and Fourier transforms are continuous (integral) transforms of continuous functions.

The Laplace transform maps a function \$f(t)\$ to a function \$F(s)\$ of the complex variable \$s = \sigma + j\omega\$. Since the derivative \$\dot f(t) = \frac{df(t)}{dt}\$ maps to \$sF(s)\$, the Laplace transform of a linear differential equation is an algebraic equation. Thus, the Laplace transform is useful for, among other things, solving linear differential equations.

If we set the real part of the complex variable \$s\$ to zero, \$\sigma = 0\$, the result is the Fourier transform \$F(j\omega)\$, which is essentially the frequency domain representation of \$f(t)\$ (note that this is true only if for that value of \$\sigma\$ the formula to obtain the Laplace transform of \$f(t)\$ exists, i.e., it does not go to infinity).

The Z transform is essentially a discrete version of the Laplace transform and, thus, can be useful in solving difference equations, the discrete version of differential equations. The Z transform maps a sequence \$f[n]\$ to a continuous function \$F(z)\$ of the complex variable \$z = re^{j\omega}\$. If we set the magnitude of \$z\$ to unity, \$r = 1\$, the result is the discrete time Fourier transform (DTFT) \$F(j\omega)\$, which is essentially the frequency domain representation of \$f[n]\$.
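The last point, that the Z transform evaluated on the unit circle gives the DTFT, can be checked numerically. A small sketch with an arbitrary finite sequence and frequency (both made up for illustration):

```python
import numpy as np

# Evaluating F(z) = sum_n f[n] z^-n at z = e^{j*omega} (i.e. r = 1)
# gives the DTFT of f[n] at frequency omega.
f = np.array([1.0, 0.5, 0.25, 0.125])  # arbitrary finite sequence
n = np.arange(len(f))
omega = 0.7                            # any frequency in rad/sample

z = np.exp(1j * omega)                 # a point on the unit circle
F_z = np.sum(f * z ** (-n))            # Z transform at that point
F_dtft = np.sum(f * np.exp(-1j * omega * n))  # DTFT definition

assert np.isclose(F_z, F_dtft)
```

For a finite sequence the two sums are identical term by term; the numeric check just makes the "set \$r = 1\$" statement concrete.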
A guy is flying in a hot air balloon and he's lost. So he lowers himself over a field and shouts to a guy on the ground: "Can you tell me where I am, and which way I'm headed?"

"Sure! You're at 43 degrees, 12 minutes, 21.2 seconds north; 123 degrees, 8 minutes, 12.8 seconds west. You're at 212 meters above sea level. Right now, you're hovering, but on your way in here you were at a speed of 1.83 meters per second at 1.929 radians."

"Thanks! By the way, are you a statistician?"

"I am! But how did you know?"

"Everything you've told me is completely accurate; you gave me more detail than I needed, and you told me in such a way that it's no use to me at all!"

"Dang! By the way, are you a principal investigator?"

"Geeze! How'd you know that????"

"You don't know where you are, you don't know where you're going. You got where you are by blowing hot air, you start asking questions after you get into trouble, and you're in exactly the same spot you were a few minutes ago, but now, somehow, it's my fault!"
i've decided to tackle this question in a somewhat different manner. instead of giving the chemical intuition behind it, i wanted to check for myself if the mathematics actually work out. as far as i understand, this isn't done often, so that's why i wanted to try it, even though it may not make the clearest answer. it turns out to be a bit complicated, and i haven't done much math in a while, so i'm kinda rusty. hopefully, everything is correct. i would love to have someone check my results. my approach here is to explicitly find the equation of a general titration curve and figure out from that why the ph varies quickly near the equivalence point. for simplicity, i shall consider the titration to be between a monoprotic acid and base. explicitly, we have the following equilibria in solution $ $ \ ce { ha < = > h ^ + + a ^ - } \ \ \ → \ \ \ k _ \ text { a } = \ ce { \ frac { [ h ^ + ] [ a ^ - ] } { [ ha ] } } $ $ $ $ \ ce { boh < = > b ^ + + oh ^ - } \ \ \ → \ \ \ k _ \ text { b } = \ ce { \ frac { [ oh ^ - ] [ b ^ + ] } { [ boh ] } } $ $ $ $ \ ce { h2o < = > h ^ + + oh ^ - } \ \ \ → \ \ \ k _ \ text { w } = \ ce { [ h ^ + ] [ oh ^ - ] } $ $ let us imagine adding two solutions, one of the acid $ \ ce { ha } $ with volume $ v _ \ text { a } $ and concentration $ c _ \ text { a } $, and another of the base $ \ ce { boh } $ with volume $ v _ \ text { b } $ and concentration $ c _ \ text { b } $. notice that after mixing the solutions, the number of moles of species containing $ \ ce { a } $ ( $ \ ce { ha } $ or $ \ ce { a ^ - } $ ) is simply $ n _ \ text { a } = c _ \ text { a } v _ \ text { a } $, while the number of moles of species containing $ \ ce { b } $ ( $ \ ce
https://api.stackexchange.com
{ boh } $ or $ \ ce { b ^ + } $ ) is $ n _ \ text { b } = c _ \ text { b } v _ \ text { b } $. notice that at the equivalence point, $ n _ \ text { a } = n _ \ text { b } $ and therefore $ c _ \ text { a } v _ \ text { a } = c _ \ text { b } v _ \ text { b } $ ; this will be important later. we will assume that volumes are additive ( total volume $ v _ \ text { t } = v _ \ text { a } + v _ \ text { b } $ ), which is close to true for relatively dilute solutions. in search of an equation to solve the problem of finding the final equilibrium after adding the solutions, we write out the charge balance and matter balance equations : charge balance : $ \ ce { [ h ^ + ] + [ b ^ + ] = [ a ^ - ] + [ oh ^ - ] } $ matter balance for $ \ ce { a } $ : $ \ displaystyle \ ce { [ ha ] + [ a ^ - ] } = \ frac { c _ \ text { a } v _ \ text { a } } { v _ \ text { a } + v _ \ text { b } } $ matter balance for $ \ ce { b } $ : $ \ displaystyle \ ce { [ boh ] + [ b ^ + ] } = \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ text { a } + v _ \ text { b } } $ a titration curve is given by the ph on the $ y $ - axis and the volume of added acid / base on the $ x $ - axis. so what we need is to find an equation where the only variables are $ \ ce { [ h ^ + ] } $ and $ v _ \ text { a } $ or $ v _ \ text { b } $. by manipulating the dissociation constant equations and the mass balance equations, we can find the following : $ $ \ ce { [ ha ] } = \ frac { \ ce { [ h ^ + ] [ a ^ - ] } } { k _ \ text { a } } $ $ $ $ \ ce { [ boh ] } = \ frac { \ ce { [ b ^ + ] }
k _ \ text { w } } { k _ \ text { b } \ ce { [ h ^ + ] } } $ $ $ $ \ ce { [ a ^ - ] } = \ frac { c _ \ text { a } v _ \ text { a } } { v _ \ text { a } + v _ \ text { b } } \ left ( \ frac { k _ \ text { a } } { k _ \ text { a } + \ ce { [ h ^ + ] } } \ right ) $ $ $ $ \ ce { [ b ^ + ] } = \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ text { a } + v _ \ text { b } } \ left ( \ frac { k _ \ text { b } \ ce { [ h ^ + ] } } { k _ \ text { b } \ ce { [ h ^ + ] } + k _ \ text { w } } \ right ) $ $ replacing those identities in the charge balance equation, after a decent bit of algebra, yields : $ $ \ ce { [ h ^ + ] ^ 4 } + \ left ( k _ \ text { a } + \ frac { k _ \ text { w } } { k _ \ text { b } } + \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ text { a } + v _ \ text { b } } \ right ) \ ce { [ h ^ + ] ^ 3 } + \ left ( \ frac { k _ \ text { a } } { k _ \ text { b } } k _ \ text { w } + \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ text { a } + v _ \ text { b } } k _ \ text { a } - \ frac { c _ \ text { a } v _ \ text { a } } { v _ \ text { a } + v _ \ text { b } } k _ \ text { a } - k _ \ text { w } \ right ) \ ce { [ h ^ + ] ^ 2 } - \ left ( k _ \ text { a } k _ \ text { w } + \ frac { c _ \ text { a } v _ \ text { a }
} { v _ \ text { a } + v _ \ text { b } } \ frac { k _ \ text { a } } { k _ \ text { b } } k _ \ text { w } + \ frac { k ^ 2 _ \ text { w } } { k _ \ text { b } } \ right ) \ ce { [ h ^ + ] } - \ frac { k _ \ text { a } } { k _ \ text { b } } k ^ 2 _ \ text { w } = 0 $ $ now, this equation sure looks intimidating, but it is very interesting. for one, this single equation will exactly solve any equilibrium problem involving the mixture of any monoprotic acid and any monoprotic base, in any concentration ( as long as they're not much higher than about $ 1 ~ \ mathrm { \ small m } $ ) and any volume. though it doesn't seem to be possible to separate the variables $ \ ce { [ h ^ + ] } $ and $ v _ \ text { a } $ or $ v _ \ text { b } $, the graph of this equation represents any titration curve ( as long as it obeys the previous considerations ). though in its full form it is quite daunting, we can obtain some simpler versions. for example, consider that the mixture is of a weak acid and a strong base. this means that $ k _ \ text { b } \ gg 1 $, and so every term containing $ k _ \ text { b } $ in the denominator is approximately zero and gets cancelled out. the equation then becomes : weak acid and strong base : $ $ \ ce { [ h ^ + ] ^ 3 } + \ left ( k _ \ text { a } + \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ text { a } + v _ \ text { b } } \ right ) \ ce { [ h ^ + ] ^ 2 } + \ left ( \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ ce { a } + v _ \ ce { b } } k _ \ ce { a } - \ frac { c _ \ ce { a } v _ \ ce { a } } { v _ \ ce { a } + v _ \ ce { b } } k _ \
ce { a } - k _ \ ce { w } \ right ) \ ce { [ h ^ + ] } - k _ \ ce { a } k _ \ ce { w } = 0 $ $ for a strong acid and weak base ( $ k _ \ text { a } \ gg 1 $ ), you can divide both sides of the equation by $ k _ \ text { a } $, and now all terms with $ k _ \ text { a } $ in the denominator get cancelled out, leaving : strong acid and weak base : $ $ \ ce { [ h ^ + ] ^ 3 } + \ left ( \ frac { k _ \ ce { w } } { k _ \ ce { b } } + \ frac { c _ \ ce { b } v _ \ ce { b } } { v _ \ ce { a } + v _ \ ce { b } } - \ frac { c _ \ ce { a } v _ \ ce { a } } { v _ \ ce { a } + v _ \ ce { b } } \ right ) \ ce { [ h ^ + ] ^ 2 } - \ left ( k _ \ ce { w } + \ frac { c _ \ text { a } v _ \ ce { a } } { v _ \ ce { a } + v _ \ ce { b } } \ frac { k _ \ ce { w } } { k _ \ ce { b } } \ right ) \ ce { [ h ^ + ] } - \ frac { k ^ 2 _ \ ce { w } } { k _ \ ce { b } } = 0 $ $ the simplest case happens when adding a strong acid to a strong base ( $ k _ \ ce { a } \ gg 1 $ and $ k _ \ ce { b } \ gg 1 $ ), in which case all terms containing either in the denominator get cancelled out. the result is simply : strong acid and strong base : $ $ \ ce { [ h ^ + ] ^ 2 } + \ left ( \ frac { c _ \ text { b } v _ \ text { b } } { v _ \ text { a } + v _ \ text { b } } - \ frac { c _ \ text { a } v _ \ text { a } } { v _ \ text { a } + v _ \ text { b } } \
right ) \ ce { [ h ^ + ] } - k _ \ ce { w } = 0 $ $ it would be enlightening to draw some example graphs for each equation, but wolfram alpha only seems to be able to handle the last one, as the others require more than the standard computation time to display. still, considering the titration of $ 1 ~ \ text { l } $ of a $ 1 ~ \ mathrm { \ small m } $ solution of a strong acid with a $ 1 ~ \ mathrm { \ small m } $ solution of a strong base, you get this graph. the $ x $ - axis is the volume of base added, in litres, while the $ y $ - axis is the ph. notice that the graph is exactly what you'll find in a textbook! now what? with the equations figured out, let's study how they work. we want to know why the ph changes quickly near the equivalence point, so a good idea is to analyze the derivative of the equation and figure out where it has a very positive or very negative value, indicating a region where $ \ ce { [ h ^ + ] } $ changes quickly with a slight addition of an acid / base. suppose we want to study the titration of an acid with a base. what we need then is the derivative $ \ displaystyle \ frac { \ ce { d [ h ^ + ] } } { \ ce { d } v _ \ ce { b } } $. we will obtain this by implicit differentiation of both sides of the equation by $ \ displaystyle \ frac { \ ce { d } } { \ ce { d } v _ \ ce { b } } $. starting with the easiest case, the mixture of a strong acid and strong base, we obtain : $ $ \ frac { \ ce { d [ h ^ + ] } } { \ ce { d } v _ \ ce { b } } = \ frac { k _ \ ce { w } - c _ \ ce { b } \ ce { [ h ^ + ] } - \ ce { [ h ^ + ] ^ 2 } } { 2 \ left ( v _ \ ce { a } + v _ \ ce { b } \ right ) \ ce { [ h ^ + ] } + \ left ( c _ \ ce { b } v _ \ ce { b } - c _ \ ce { a } v _ \ ce { a } \ right ) } $ $ once again a
complicated-looking fraction, but with very interesting properties. the numerator is not too important; it's the denominator where the magic happens. notice that we have a sum of two terms ( $ 2 ( v _ \ ce { a } + v _ \ ce { b } ) \ ce { [ h ^ + ] } $ and $ ( c _ \ ce { b } v _ \ ce { b } - c _ \ ce { a } v _ \ ce { a } ) $ ). the lower this sum is, the higher $ \ displaystyle \ frac { \ mathrm { d } \ ce { [ h ^ + ] } } { \ mathrm { d } v _ \ ce { b } } $ is and the quicker the ph will change with a small addition of the base. notice also that, if the solutions aren't very dilute, then the second term quickly dominates the denominator because, while adding base, the value of $ \ ce { [ h ^ + ] } $ will become quite small compared to $ c _ \ ce { a } $ and $ c _ \ ce { b } $. now we have a very interesting situation : a fraction where the major component of the denominator has a subtraction. here's an example of how this sort of function behaves. when the subtraction ends up giving a result close to zero, the function explodes. this means that the speed at which $ \ ce { [ h ^ + ] } $ changes becomes very sensitive to small variations of $ v _ \ ce { b } $ near the critical region. and where does this critical region happen? well, close to the region where $ c _ \ ce { b } v _ \ ce { b } - c _ \ ce { a } v _ \ ce { a } $ is zero. if you remember the start of the answer, this is the equivalence point! so there, this proves mathematically that the speed at which the ph changes is maximum at the equivalence point. this was only the simplest case though. let's try something a little harder. taking the titration equation for a weak acid with strong base, and implicitly differentiating both sides by $ \ displaystyle \ frac { \ ce { d } } { \ ce { d } v _ \ ce { b } } $ again, we get the significantly more fearsome : $ $ \ displaystyle \ frac { \ ce { d [ h
^ + ] } } { \ ce { d } v _ \ ce { b } } = \ frac { - \ frac { v _ \ ce { a } } { ( v _ \ ce { a } + v _ \ ce { b } ) ^ 2 } \ ce { [ h ^ + ] } ( c _ \ ce { b } \ ce { [ h ^ + ] } - c _ \ ce { b } k _ \ ce { a } + c _ \ ce { a } k _ \ ce { a } ) } { 3 \ ce { [ h ^ + ] ^ 2 + 2 [ h ^ + ] } \ left ( k _ \ ce { a } + \ frac { c _ \ ce { b } v _ \ ce { b } } { v _ \ ce { a } + v _ \ ce { b } } \ right ) + \ frac { k _ \ ce { a } } { v _ \ ce { a } + v _ \ ce { b } } ( c _ \ ce { b } v _ \ ce { b } - c _ \ ce { a } v _ \ ce { a } ) - k _ \ ce { w } } $ $ once again, the term that dominates the behaviour of the complicated denominator is the part containing $ c _ \ ce { b } v _ \ ce { b } - c _ \ ce { a } v _ \ ce { a } $, and once again the derivative explodes at the equivalence point.
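as a quick numerical sanity check of the strong acid and strong base equation from earlier (a sketch; concentrations in mol/l, volumes in litres), we can solve the quadratic for its positive root and watch the ph jump near the equivalence point:

```python
import numpy as np

Kw = 1e-14
Ca, Va = 1.0, 1.0   # 1 M strong acid, 1 L
Cb = 1.0            # 1 M strong base titrant

def ph(Vb):
    # positive root of [H+]^2 + ((Cb*Vb - Ca*Va)/(Va + Vb))*[H+] - Kw = 0
    b = (Cb * Vb - Ca * Va) / (Va + Vb)
    return -np.log10((-b + np.sqrt(b * b + 4 * Kw)) / 2)

for Vb in (0.9, 0.99, 0.999, 1.0, 1.001, 1.01, 1.1):
    print(Vb, round(ph(Vb), 2))   # the pH leaps by several units around Vb = 1
```

at $ v _ \ ce { b } = 1 $ the subtraction in the denominator vanishes and the ph passes through 7, exactly as the derivative analysis predicts.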
here is a 97 - line example of solving a simple multivariate pde using finite difference methods, contributed by prof. david ketcheson, from the py4sci repository i maintain. for more complicated problems where you need to handle shocks or conservation in a finite - volume discretization, i recommend looking at pyclaw, a software package that i help develop.

```python
"""
Pattern formation code

Solves the pair of PDEs:
    u_t = D_1 * laplacian(u) + f(u, v)
    v_t = D_2 * laplacian(v) + g(u, v)
"""
import matplotlib
matplotlib.use('TkAgg')
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import spdiags, linalg, eye

# Parameter values
Du = 0.500; Dv = 1
delta = 0.0045; tau1 = 0.02; tau2 = 0.2; alpha = 0.899; beta = -0.91; gamma = -alpha
# Other interesting parameter sets (uncomment to try):
# delta = 0.0045; tau1 = 0.02; tau2 = 0.2; alpha = 1.9;   beta = -0.91; gamma = -alpha
# delta = 0.0045; tau1 = 2.02; tau2 = 0.;  alpha = 2.0;   beta = -0.91; gamma = -alpha
# delta = 0.0021; tau1 = 3.5;  tau2 = 0;   alpha = 0.899; beta = -0.91; gamma = -alpha
# delta = 0.0045; tau1 = 0.02; tau2 = 0.2; alpha = 1.9;   beta = -0.85; gamma = -alpha
# delta = 0.0001; tau1 = 0.02; tau2 = 0.2; alpha = 0.899; beta = -0.91; gamma = -alpha
# delta = 0.0005; tau1 = 2.02; tau2 = 0.;  alpha = 2.0;   beta = -0.91; gamma = -alpha

# Define the reaction functions
def f(u, v):
    return alpha*u*(1 - tau1*v**2) + v*(1 - tau2*u)

def g(u, v):
    return beta*v*(1 + alpha*tau1/beta*u*v) + u*(gamma + tau2*v)

def five_pt_laplacian(m, a, b):
    """Construct a dense matrix that applies the 5-point Laplacian discretization."""
    e = np.ones(m**2)
    e2 = ([0] + [1]*(m - 1))*m
    h = (b - a)/(m + 1)
    A = np.diag(-4*e, 0) + np.diag(e2[1:], -1) + np.diag(e2[1:], 1) \
        + np.diag(e[m:], m) + np.diag(e[m:], -m)
    A /= h**2
    return A

def five_pt_laplacian_sparse(m, a, b):
    """Construct a sparse matrix that applies the 5-point Laplacian discretization."""
    e = np.ones(m**2)
    e2 = ([1]*(m - 1) + [0])*m
    e3 = ([0] + [1]*(m - 1))*m
    h = (b - a)/(m + 1)
    A = spdiags([-4*e, e2, e3, e, e], [0, -1, 1, -m, m], m**2, m**2)
    A = A / h**2
    return A

# Set up the grid
a = -1.; b = 1.
m = 100; h = (b - a)/m
x = np.linspace(-1, 1, m)
y = np.linspace(-1, 1, m)
Y, X = np.meshgrid(y, x)

# Initial data
U = np.random.randn(m, m)/2.
V = np.random.randn(m, m)/2.

plt.pcolormesh(X, Y, U)
plt.colorbar(); plt.axis('image'); plt.draw()

u = U.reshape(-1)
v = V.reshape(-1)

A = five_pt_laplacian_sparse(m, -1., 1.)
II = eye(m*m, m*m)

t = 0.
dt = h/delta/5.

plt.ion()

# Now step forward in time
for k in range(120):
    # Simple (1st-order) operator splitting:
    # diffusion terms implicitly, then reaction terms explicitly
    u = linalg.spsolve(II - dt*delta*Du*A, u)
    v = linalg.spsolve(II - dt*delta*Dv*A, v)
    unew = u + dt*f(u, v)
    v = v + dt*g(u, v)
    u = unew
    t = t + dt
    # Plot every 3rd frame
    if k % 3 == 0:
        U = u.reshape((m, m))
        plt.pcolormesh(X, Y, U)
        plt.axis('image')
        plt.title(str(t))
        plt.draw()

plt.ioff()
```
summary

- spi is faster
- i2c is more complex and not as easy to use if your microcontroller doesn't have an i2c controller
- i2c only requires 2 lines
- i2c is a bus system with bidirectional data on the sda line; spi is a point-to-point connection with data in and data out on separate lines (mosi and miso)

essentially spi consists of a pair of shift registers, where you clock data into one shift register while you clock data out of the other. usually data is written in bytes by having each time 8 clock pulses in succession, but that's not an spi requirement. you can also have word lengths of 16 bit or even 13 bit, if you like. while in i2c synchronization is done by the start sequence, in spi it's done by ss going high (ss is active low). you decide yourself after how many clock pulses this is. if you use 13-bit words, ss will latch the last clocked-in bits after 13 clock pulses. since the bidirectional data is on two separate lines it's easy to interface.

spi in standard mode needs at least four lines: sclk (serial clock), mosi (master out slave in), miso (master in slave out) and ss (slave select). in bidirectional mode it needs at least three lines: sclk (serial clock), mimo (master in master out), which is one of the mosi or miso lines, and ss (slave select). in systems with more than one slave you need an ss line for each slave, so that for \$n\$ slaves you have \$n+3\$ lines in standard mode and \$n+2\$ lines in bidirectional mode. if you don't want that, in standard mode you can daisy-chain the slaves by connecting the mosi signal of one slave to the miso of the next. this will slow down communication since you have to cycle through all the slaves' data.

like tcrosley says, spi can operate at a much higher frequency than i2c.

i2c is a bit more complex. since it's a bus you need a way to address devices. your communication starts with a unique start sequence: the data line (sda) is pulled low while the clock (scl) is high; for the rest of the communication, data is only allowed to change when the clock is low. this start sequence synchronizes each communication. since the communication includes the addressing, only two lines are required for any number of devices (up to 127).

edit
it's obvious that the data line is bidirectional, but it's worth noting that this is also true for the clock line. slaves may stretch the clock to control bus speed. this makes i2c less convenient for level-shifting or buffering. (spi lines in standard mode are all unidirectional.)

after each byte (address or data) is sent, the receiver has to acknowledge the receipt by placing an acknowledge pulse on sda. if your microcontroller has an i2c interface this will automatically be taken care of. you can still bit-bang it if your microcontroller doesn't support it, but you'll have to switch the i/o pin from output to input for each acknowledge or read data, unless you use one i/o pin for reading and one for writing.

at 400 khz, standard i2c is much slower than spi. there are high-speed i2c devices which operate at 1 mhz, still much slower than 20 mhz spi.
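the pair-of-shift-registers picture above can be sketched in a few lines of python. this is only an illustration of the data flow (the function name is made up; real spi is done in hardware or by bit-banging gpio):

```python
def spi_exchange(master_reg, slave_reg, bits=8):
    """Simulate one SPI transfer: two shift registers trading bits, MSB first."""
    mask = (1 << bits) - 1
    for _ in range(bits):
        mosi = (master_reg >> (bits - 1)) & 1   # master shifts its MSB out on MOSI
        miso = (slave_reg >> (bits - 1)) & 1    # slave shifts its MSB out on MISO
        master_reg = ((master_reg << 1) | miso) & mask
        slave_reg = ((slave_reg << 1) | mosi) & mask
    return master_reg, slave_reg  # each register now holds the other's original word

print(spi_exchange(0xA5, 0x3C))   # the two bytes have swapped places
```

note that nothing in the clocking itself says where a word ends: calling spi_exchange(..., bits=13) works just as well, which is why it is ss that frames a transfer.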
arbitrary record access in constant time

to get a random record in constant time, it is sufficient to get an arbitrary record in constant time. i have two solutions here: one with tabix and one with grabix. i think the grabix solution is more elegant, but i am keeping the tabix solution below because tabix is a more mature tool than grabix. thanks to user172818 for suggesting grabix.

update: this answer previously stated that tabix and grabix perform lookups in log(n) time. after taking a closer look at the grabix source code and the tabix paper, i am now convinced that lookups are independent of n in complexity. however, both tools use an index that scales in size proportionally to n, so the loading of the index is order n. if we consider the loading of the index as "... a single limited transformation of the data to another file format ...", then i think this answer is still a valid one. if more than one record is to be retrieved, then the index needs to be stored in memory, perhaps with a framework such as pysam or htslib.

using grabix

compress with bgzip, then index the file and perform lookups with grabix:

```shell
gzip -dc input.fastq.gz | bgzip -c > output.fastq.gz
grabix index output.fastq.gz
# retrieve the 5th record (1-based) in constant time;
# requires some math to convert indices: (4*4+1, 4*4+4) = (17, 20)
grabix grab output.fastq.gz 17 20
# count the number of lines for part two of this question
export N_LINES=$(gzip -dc output.fastq.gz | wc -l)
```

using tabix

the tabix code is more complicated and relies on the iffy assumption that \t is an acceptable character for replacement of \n in a fastq record. if you are happy with a file format that is close to but not exactly fastq, then you could do the following: paste each record into a single line, add a dummy chromosome and line number as the first and second columns, compress with bgzip, then index the file and perform lookups with tabix. note that we need to remove leading spaces introduced by nl and we need to introduce a dummy chromosome column to keep tabix happy:

```shell
gzip -dc input.fastq.gz | paste - - - - | nl | sed 's/^ *//' | sed 's/^/dummy\t/' | bgzip -c > output.fastq.gz
tabix -s 1 -b 2 -e 2 output.fastq.gz
# now retrieve the 5th record (1-based) in constant time
tabix output.fastq.gz dummy:5-5
# this command will retrieve the 5th record and convert it back into fastq format
tabix output.fastq.gz dummy:5-5 | perl -pe 's/^dummy\t\d+\t//' | tr '\t' '\n'
# count the number of records for part two of this question
export N_RECORDS=$(gzip -dc output.fastq.gz | wc -l)
```

random record in constant time

now that we have a way of retrieving an arbitrary record in constant time, retrieving a random record is simply a matter of getting a good random number generator and sampling. here is some example code to do this in python.

using grabix:

```python
# random_read.py
import os
import random

n_records = int(os.environ["N_LINES"]) // 4
rand_record_start = random.randrange(0, n_records) * 4 + 1
rand_record_end = rand_record_start + 3
os.system("grabix grab output.fastq.gz {0} {1}".format(rand_record_start, rand_record_end))
```

using tabix:

```python
# random_read.py
import os
import random

n_records = int(os.environ["N_RECORDS"])
rand_record_index = random.randrange(0, n_records) + 1
# super ugly, but works...
os.system("tabix output.fastq.gz dummy:{0}-{0} | perl -pe 's/^dummy\t\d+\t//' | tr '\t' '\n'".format(rand_record_index))
```

and this works for me:

```shell
python3.5 random_read.py
```

disclaimer: please note that os.system calls a system shell and is vulnerable to shell injection. if you are writing production code, then you probably want to take extra precautions. thanks to chris_rands for raising this issue.
check into generatingfunctionology by herbert wilf. from the linked (author's) site, the second edition is available for downloading as a pdf. there is also a link to the third edition, available for purchase. it's a very helpful, useful, readable, fun (and short!) book that a student could conceivably cover over winter break. another promising book, by john conway et al., is the symmetries of things, which may very well be of interest to students. one additional suggestion, as it is a classic well worth being placed on any serious student's bookshelf: how to solve it by george polya.
this answer is intended to clear up some misconceptions about resonance which have come up many times on this site. resonance is a part of valence bond theory which is used to describe delocalised electron systems in terms of contributing structures, each only involving 2 - centre - 2 - electron bonds. it is a concept that is very often taught badly and misinterpreted by students. the usual explanation is that it is as if the molecule is flipping back and forth between different structures very rapidly and that what is observed is an average of these structures. this is wrong! ( there are molecules that do this ( e. g bullvalene ), but the rapidly interconverting structures are not called resonance forms or resonance structures. ) individual resonance structures do not exist on their own. they are not in some sort of rapid equilibrium. there is only a single structure for a molecule such as benzene, which can be described by resonance. the difference between an equilibrium situation and a resonance situation can be seen on a potential energy diagram. this diagram shows two possible structures of the 2 - norbornyl cation. structure ( a ) shows the single delocalised structure, described by resonance whereas structures ( b ) show the equilibrium option, with the delocalised structure ( a ) as a transition state. the key point is that resonance hybrids are a single potential energy minimum, whereas equilibrating structures are two energy minima separated by a barrier. in 2013 an x - ray diffraction structure was finally obtained and the correct structure was shown to be ( a ). resonance describes delocalised bonding in terms of contributing structures that give some of their character to the single overall structure. these structures do not have to be equally weighted in their contribution. 
for example, amides can be described by the following resonance structures : the left structure is the major contributor, but the right structure also contributes, and so the structure of an amide has some double bond character in the c - n bond ( i.e. the bond order is > 1 ) and less double bond character in the c - o bond ( bond order < 2 ). the alternative to valence bond theory and the resonance description of molecules is molecular orbital theory. this explains delocalised bonding as electrons occupying molecular orbitals which extend over more than two atoms.
filtfilt is zero-phase filtering, which doesn't shift the signal as it filters. since the phase is zero at all frequencies, it is also linear-phase. filtering backwards in time requires you to predict the future, so it can't be used in "online" real-life applications, only for offline processing of recordings of signals. lfilter is causal forward-in-time filtering only, similar to a real-life electronic filter. it can't be zero-phase. it can be linear-phase (symmetrical fir), but usually isn't. usually it adds different amounts of delay at different frequencies. an example and image should make it obvious. although the magnitude of the frequency response of the filters is identical (top left and top right), the zero-phase lowpass lines up with the original signal, just without high frequency content, while the minimum phase filtering delays the signal in a causal way:

```python
from __future__ import division, print_function
import numpy as np
from numpy.random import randn
from numpy.fft import rfft
from scipy import signal
import matplotlib.pyplot as plt

b, a = signal.butter(4, 0.03, analog=False)

# Show that frequency response is the same
impulse = np.zeros(1000)
impulse[500] = 1

# Applies filter forward and backward in time
imp_ff = signal.filtfilt(b, a, impulse)

# Applies filter forward in time twice (for same frequency response)
imp_lf = signal.lfilter(b, a, signal.lfilter(b, a, impulse))

plt.subplot(2, 2, 1)
plt.semilogx(20*np.log10(np.abs(rfft(imp_lf))))
plt.ylim(-100, 20)
plt.grid(True, which='both')
plt.title('lfilter')

plt.subplot(2, 2, 2)
plt.semilogx(20*np.log10(np.abs(rfft(imp_ff))))
plt.ylim(-100, 20)
plt.grid(True, which='both')
plt.title('filtfilt')

sig = np.cumsum(randn(800))  # Brownian noise
sig_ff = signal.filtfilt(b, a, sig)
sig_lf = signal.lfilter(b, a, signal.lfilter(b, a, sig))
plt.subplot(2, 1, 2)
plt.plot(sig, color='silver', label='Original')
plt.plot(sig_ff, color='#3465A4', label='filtfilt')
plt.plot(sig_lf, color='#CC0000', label='lfilter')
plt.grid(True, which='both')
plt.legend(loc="best")
```
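a quick numerical check of the phase claim (a sketch, using the same filter as above): the peak of the zero-phase impulse response stays at the impulse location, while the causally filtered peak arrives later:

```python
import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.03, analog=False)
impulse = np.zeros(1000)
impulse[500] = 1

imp_ff = signal.filtfilt(b, a, impulse)                       # zero-phase
imp_lf = signal.lfilter(b, a, signal.lfilter(b, a, impulse))  # causal, delayed

print(np.argmax(imp_ff))  # at (or immediately next to) sample 500
print(np.argmax(imp_lf))  # tens of samples later
```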
for fastq:

```shell
seqtk fqchk in.fq | head -2
```

it gives you the percentage of "n" bases, not the exact count, though. for fasta:

```shell
seqtk comp in.fa | awk '{x += $9} END {print x}'
```

this command line also works with fastq, but it will be slower as awk is slow.

edit: ok, based on @bach's reminder, here we go (you need kseq.h to compile):

```c
// to compile: gcc -O2 -o count-n this-prog.c -lz
#include <zlib.h>
#include <stdio.h>
#include <stdint.h>
#include "kseq.h"

KSEQ_INIT(gzFile, gzread)

unsigned char dna5tbl[256] = {
    0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 0, 4, 1, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 0, 4, 1, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4
};

int main(int argc, char *argv[])
{
    long i, n_n = 0, n_acgt = 0, n_gap = 0;
    gzFile fp;
    kseq_t *seq;
    if (argc == 1) {
        fprintf(stderr, "Usage: count-n <in.fa>\n");
        return 1;
    }
    if ((fp = gzopen(argv[1], "r")) == 0) {
        fprintf(stderr, "ERROR: fail to open the input file\n");
        return 1;
    }
    seq = kseq_init(fp);
    while (kseq_read(seq) >= 0) {
        for (i = 0; i < seq->seq.l; ++i) {
            int c = dna5tbl[(unsigned char)seq->seq.s[i]];
            if (c < 4) ++n_acgt;
            else if (c == 4) ++n_n;
            else ++n_gap;
        }
    }
    kseq_destroy(seq);
    gzclose(fp);
    printf("%ld\t%ld\t%ld\n", n_acgt, n_n, n_gap);
    return 0;
}
```

it works for both fasta/q and gzip'ed fasta/q. the following uses seqan:

```cpp
#include <seqan/seq_io.h>

using namespace seqan;

int main(int argc, char *argv[])
{
    if (argc == 1) {
        std::cerr << "Usage: count-n <in.fastq>" << std::endl;
        return 1;
    }
    std::ios::sync_with_stdio(false);
    CharString id;
    Dna5String seq;
    SeqFileIn seqFileIn(argv[1]);
    long i, n_n = 0, n_acgt = 0;
    while (!atEnd(seqFileIn)) {
        readRecord(id, seq, seqFileIn);
        for (i = beginPosition(seq); i < endPosition(seq); ++i)
            if (seq[i] < 4) ++n_acgt;
            else ++n_n;
    }
    std::cout << n_acgt << '\t' << n_n << std::endl;
    return 0;
}
```

on a fastq with 4-million 150bp reads:

- the c version: ~0.74 sec
- the c++ version: ~2.15 sec
- an older c version without a lookup table (see the previous edit): ~2.65 sec
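if you just want to sanity-check the counts on a small file, a naive pure-python version (a sketch of mine; orders of magnitude slower than the c code above) is enough:

```python
def count_bases(records):
    """records: iterable of (name, sequence) pairs; returns (#ACGT, #N)."""
    n_acgt = n_n = 0
    for _, seq in records:
        for c in seq.upper():
            if c in "ACGT":
                n_acgt += 1
            elif c == "N":
                n_n += 1
    return n_acgt, n_n

print(count_bases([("r1", "ACGTN"), ("r2", "nnAA")]))  # (6, 3)
```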
according to current nomenclature rules, $ \ ce { h3n } $ would be correct and acceptable. however some chemical formulas, like $ \ ce { nh3 } $ for ammonia, that were in use long before the rules came out, are still accepted today.
joblib does what you want. the basic usage pattern is:

```python
from joblib import Parallel, delayed

def myfun(arg):
    do_stuff
    return result

results = Parallel(n_jobs=-1, verbose=verbosity_level, backend="threading")(
    map(delayed(myfun), arg_instances))
```

where arg_instances is a list of values for which myfun is computed in parallel. the main restriction is that myfun must be a top-level function. the backend parameter can be either "threading" or "multiprocessing". you can pass additional common parameters to the parallelized function. the body of myfun can also refer to initialized global variables, the values of which will be available to the children. args and results can be pretty much anything with the threading backend, but results need to be serializable with the multiprocessing backend. dask also offers similar functionality. it might be preferable if you are working with out-of-core data or you are trying to parallelize more complex computations.
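if you'd rather avoid a dependency while prototyping, the same map-style pattern can be sketched with the standard library (this is an illustration, not joblib itself; myfun here is a stand-in for real work):

```python
from concurrent.futures import ThreadPoolExecutor

def myfun(arg):
    return arg * arg          # stand-in for real work

arg_instances = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(myfun, arg_instances))
print(results)                # [1, 4, 9, 16]
```

joblib remains preferable for the conveniences mentioned above (verbosity reporting, switching backends, delayed argument binding).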
to quote from the answer to the “traversals from the root in avl trees and red black trees” question:

for some kinds of binary search trees, including red-black trees but not avl trees, the "fixes" to the tree can fairly easily be predicted on the way down and performed during a single top-down pass, making the second pass unnecessary. such insertion algorithms are typically implemented with a loop rather than recursion, and often run slightly faster in practice than their two-pass counterparts.

so a red-black tree insert can be implemented without recursion. on some cpus recursion is very expensive if you overrun the function call cache (e.g. sparc, due to its use of register windows). (i have seen software run over 10 times as fast on the sparc by removing one function call that resulted in an often-called code path being too deep for the register window. as you don't know how deep the register window will be on your customer's system, and you don't know how far down the call stack you are in the "hot code path", not using recursion makes life more predictable.) also, not risking running out of stack is a benefit.
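as a toy illustration of single-pass, loop-based insertion, here is a plain (unbalanced) bst insert written with a loop instead of recursion; the red-black fix-ups described above would be layered onto the same single downward walk:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """One top-down pass, no recursion, no walk back up the tree."""
    if root is None:
        return Node(key)
    node = root
    while True:
        side = 'left' if key < node.key else 'right'
        child = getattr(node, side)
        if child is None:
            setattr(node, side, Node(key))
            return root
        node = child

def inorder(node):
    return inorder(node.left) + [node.key] + inorder(node.right) if node else []

root = None
for k in [5, 2, 8, 1, 3]:
    root = insert(root, k)
print(inorder(root))  # [1, 2, 3, 5, 8]
```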
i agree that a turing machine can do " all the possible mathematical problems ". well, you shouldn't, because it's not true. for example, turing machines cannot determine if polynomials with integer coefficients have integer solutions ( hilbert's tenth problem ). is turing machine “ by definition ” the most powerful machine? no. we can dream up an infinite hierarchy of more powerful machines. however, the turing machine is the most powerful machine that we know, at least in principle, how to build. that's not a definition, though : it is just that we do not have any clue how to build anything more powerful, or if it is even possible. what's the new thing alan turing gave us? a formal definition of algorithm. without such a definition ( e. g., the turing machine ), we have only informal definitions of algorithm, along the lines of " a finitely specified procedure for solving something. " ok, great. but what individual steps are these procedures allowed to take? are basic arithmetic operations steps? is finding the gradient of a curve a step? is finding roots of polynomials a step? is finding integer roots of polynomials a step? each of those seems about as natural. however, if you allow all of them, your " finitely specified procedures " are more powerful than turing machines, which means that they can solve things that can't be solved by algorithms. if you allow all but the last one, you're still within the realms of turing computation. if we didn't have a formal definition of algorithm, we wouldn't even be able to ask these questions. we wouldn't be able to discuss what algorithms can do, because we wouldn't know what an algorithm is.
j. m's comment is right: you can find an interpolating polynomial and differentiate it. there are other ways of deriving such formulas; typically, they all lead to solving a vandermonde system for the coefficients. this approach is problematic when the finite difference stencil includes a large number of points, because the vandermonde matrices become ill-conditioned. a more numerically stable approach was devised by fornberg, and is explained more clearly and generally in a second paper of his. here is a simple matlab script that implements fornberg's method to compute the coefficients of a finite difference approximation for any order derivative with any set of points. for a nice explanation, see chapter 1 of leveque's text on finite difference methods.

a bit more on fd formulas: suppose you have a 1d grid. if you use the whole set of grid points to determine a set of fd formulas, the resulting method is equivalent to finding an interpolating polynomial through the whole grid and differentiating it. this approach is referred to as spectral collocation. alternatively, for each grid point you could determine an fd formula using just a few neighboring points. this is what is done in traditional finite difference methods. as mentioned in the comments below, using finite differences of very high order can lead to oscillations (the runge phenomenon) if the points are not chosen carefully.
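as a sketch of the vandermonde approach described above (in python with numpy rather than matlab, and only suitable for small stencils, precisely because of the ill-conditioning mentioned; for large stencils use fornberg's method):

```python
import numpy as np
from math import factorial

def fd_weights(x, x0, k):
    """weights c such that sum_j c[j] * f(x[j]) approximates f^(k)(x0).

    imposes sum_j c[j] * (x[j] - x0)**m = k! * delta(m, k) for m = 0..n-1,
    i.e. the approximation is exact on polynomials up to degree n-1.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    A = np.vander(x - x0, n, increasing=True).T  # A[m, j] = (x_j - x0)**m
    b = np.zeros(n)
    b[k] = factorial(k)
    return np.linalg.solve(A, b)
```

for the standard 3-point stencil {-1, 0, 1} this recovers the familiar central-difference weights [-1/2, 0, 1/2] for the first derivative and [1, -2, 1] for the second.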
https://api.stackexchange.com
if $p$ is an infinitely differentiable function such that for each $x$ there is an $n$ with $p^{(n)}(x) = 0$, then $p$ is a polynomial. (note that $n$ depends on $x$.) see the discussion on mathoverflow.
https://api.stackexchange.com
well, a dfa is just a turing machine that's only allowed to move to the right and that must accept or reject as soon as it runs out of input characters. so i'm not sure one can really say that a dfa is natural but a turing machine isn't.

critique of the question aside, remember that turing was working before computers existed. as such, he wasn't trying to codify what electronic computers do but, rather, computation in general. my parents have a dictionary from the 1930s that defines a computer as "someone who computes", and this is basically where turing was coming from: for him, at that time, computation was about slide rules, log tables, pencils and pieces of paper. in that mind-set, rewriting symbols on a paper tape doesn't seem like a bad abstraction.

ok, fine, you're saying (i hope!), but we're not in the 1930s any more, so why do we still use this? here, i don't think there's any one specific reason. the advantage of turing machines is that they're reasonably simple and we're decently good at proving things about them. although formally specifying a turing machine program to do some particular task is very tedious, once you've done it a few times you have a reasonable intuition about what they can do, and you don't need to write the formal specifications any more. the model is also easily extended to include other natural features, such as random access to the tape. so they're a pretty useful model that we understand well, and we also have a pretty good understanding of how they relate to actual computers.

one could use other models, but one would then have to do a huge amount of translation between results for the new model and the vast body of existing work on what turing machines can do. nobody has come up with a replacement for turing machines that has big enough advantages to make that look like a good idea.
https://api.stackexchange.com
you can refer to "detecting start of a loop in singly linked list"; here's an excerpt. (here $x$ is the distance from the head of the list to the start of the loop, $y$ the distance from the loop start to the point where the pointers meet, and $z$ the remaining distance around the loop from the meeting point back to the loop start.)

distance travelled by slowpointer before meeting $= x + y$

distance travelled by fastpointer before meeting $= (x + y + z) + y = x + 2y + z$

since fastpointer travels with double the speed of slowpointer, and time is constant for both when they reach the meeting point, by using the simple speed, time and distance relation (slowpointer travelled half the distance):

\begin{align*}
2 \cdot \operatorname{dist}(\text{slowpointer}) &= \operatorname{dist}(\text{fastpointer}) \\
2(x + y) &= x + 2y + z \\
2x + 2y &= x + 2y + z \\
x &= z
\end{align*}

hence, by moving slowpointer to the start of the linked list and making both slowpointer and fastpointer move one node at a time, they both have the same distance to cover. they will meet at the point where the loop starts in the linked list.
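the argument above translates directly into code; here is a minimal python sketch (the class and function names are my own):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def find_cycle_start(head):
    """floyd's tortoise-and-hare: return the node where the cycle begins,
    or None if the list has no cycle."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:          # pointers met inside the cycle
            slow = head           # since x == z, restart slow from the head
            while slow is not fast:
                slow = slow.next
                fast = fast.next
            return slow           # both arrive at the loop start together
    return None
```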
https://api.stackexchange.com
first, not all processor architectures stopped at 32 registers. almost all the risc architectures that have 32 registers exposed in the instruction set actually have 32 integer registers and 32 more floating-point registers (so 64). (floating-point "add" uses different registers than integer "add".) the sparc architecture has register windows: on the sparc you can only access 32 integer registers at a time, but the registers act like a stack and you can push and pop new registers 16 at a time. the itanium architecture from hp/intel had 128 integer and 128 floating-point registers exposed in the instruction set. modern gpus from nvidia, amd, intel, arm and imagination technologies all expose massive numbers of registers in their register files. (i know this to be true of the nvidia and intel architectures; i am not very familiar with the amd, arm and imagination instruction sets, but i think the register files are large there too.)

second, most modern microprocessors implement register renaming to eliminate unnecessary serialization caused by needing to reuse resources, so the underlying physical register files can be larger (96, 128 or 192 registers on some machines). this (and dynamic scheduling) eliminates some of the need for the compiler to generate so many unique register names, while still providing a larger register file to the scheduler.

there are two reasons why it might be difficult to further increase the number of registers exposed in the instruction set. first, you need to be able to specify the register identifiers in each instruction. 32 registers require a 5-bit register specifier, so 3-address instructions (common on risc architectures) spend 15 of the 32 instruction bits just to specify the registers. if you increased that to 6 or 7 bits, you would have less space to specify opcodes and constants. gpus and itanium have much larger instructions. larger instructions come at a cost: you need to use more instruction memory, so your instruction cache behavior is less ideal.

the second reason is access time. the larger you make a memory, the slower it is to access data from it. (just in terms of basic physics: the data is stored in 2-dimensional space, so if you are storing $n$ bits, the average distance to a specific bit is $O(\sqrt{n})$.) a register file is just a small multi-ported memory, and one of the constraints on making it larger is that eventually you would need to start clocking your machine slower to accommodate the larger register file. usually, in terms of total performance, this is a loss.
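to make the bit-budget argument concrete, here is a toy (entirely hypothetical) 32-bit, 3-address encoding in python; with 5-bit register fields, 15 of the 32 bits go to registers and only 17 remain for the opcode and everything else:

```python
REG_BITS = 5                      # 32 architectural registers -> 5 bits per field
OPCODE_BITS = 32 - 3 * REG_BITS   # 17 bits left over in a 32-bit instruction

def encode(opcode, rd, rs1, rs2):
    """pack a toy 3-address instruction: [opcode | rd | rs1 | rs2]."""
    assert 0 <= opcode < 2 ** OPCODE_BITS
    assert all(0 <= r < 2 ** REG_BITS for r in (rd, rs1, rs2))
    return ((((opcode << REG_BITS) | rd) << REG_BITS | rs1) << REG_BITS) | rs2

def decode(word):
    mask = 2 ** REG_BITS - 1
    return (word >> 3 * REG_BITS,
            (word >> 2 * REG_BITS) & mask,
            (word >> REG_BITS) & mask,
            word & mask)
```

widening the fields to 6 bits (64 registers) would leave only 32 - 18 = 14 opcode bits, which is exactly the trade-off described above.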
https://api.stackexchange.com
1. verify that your code is bug free

there's a saying among writers that "all writing is re-writing", that is, the greater part of writing is revising. for programmers (or at least data scientists) the expression could be re-phrased as "all coding is debugging."

any time you're writing code, you need to verify that it works as intended. the best method i've ever found for verifying correctness is to break your code into small segments, and verify that each segment works. this can be done by comparing the segment output to what you know to be the correct answer. this is called unit testing. writing good unit tests is a key piece of becoming a good statistician/data scientist/machine learning expert/neural network practitioner. there is simply no substitute. you have to check that your code is free of bugs before you can tune network performance! otherwise, you might as well be re-arranging deck chairs on the rms titanic.

there are two features of neural networks that make verification even more important than for other types of machine learning or statistical models:

- neural networks are not "off-the-shelf" algorithms in the way that random forest or logistic regression are. even for simple, feed-forward networks, the onus is largely on the user to make numerous decisions about how the network is configured, connected, initialized and optimized. this means writing code, and writing code means debugging.
- even when a neural network's code executes without raising an exception, the network can still have bugs! these bugs might even be the insidious kind for which the network will train, but get stuck at a sub-optimal solution, or the resulting network does not have the desired architecture. (this is an example of the difference between a syntactic and semantic error.)

this medium post, "how to unit test machine learning code," by chase roberts discusses unit-testing for machine learning models in more detail.
i borrowed this example of buggy code from the article:

```python
def make_convnet(input_image):
    net = slim.conv2d(input_image, 32, [11, 11], scope="conv1_11x11")
    net = slim.conv2d(input_image, 64, [5, 5], scope="conv2_5x5")
    net = slim.max_pool2d(net, [4, 4], stride=4, scope='pool1')
    net = slim.conv2d(input_image, 64, [5, 5], scope="conv3_5x5")
    net = slim.conv2d(input_image, 128, [3, 3], scope="conv4_3x3")
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.conv2d(input_image, 128, [3, 3], scope="conv5_3x3")
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.conv2d(input_image, 32, [1, 1], scope="conv6_1x1")
    return net
```

do you see the error? many of the different operations are not actually used, because previous results are over-written with new variables. using this block of code in a network will still train and the weights will update and the loss might even decrease, but the code definitely isn't doing what was intended. (the author is also inconsistent about using single or double quotes, but that's purely stylistic.)

the most common programming errors pertaining to neural networks are:

- variables are created but never used (usually because of copy-paste errors);
- expressions for gradient updates are incorrect;
- weight updates are not applied;
- loss functions are not measured on the correct scale (for example, cross-entropy loss can be expressed in terms of probability or logits);
- the loss is not appropriate for the task (for example, using categorical cross-entropy loss for a regression task);
- dropout is used during testing, instead of only being used for training.

make sure you're minimizing the loss function $L(x)$, instead of minimizing $-L(x)$. make sure your loss is computed correctly.

unit testing is not just limited to the neural network itself. you need to test all of the steps that produce or transform data and feed into the network. some common mistakes here are:

- na or nan or inf values in your data creating na or nan or inf values in the output, and therefore in the loss function;
- shuffling the labels independently from the samples (for instance, creating train/test splits for the labels and samples separately);
- accidentally assigning the training data as the testing data;
- when using a train/test split, the model references the original, non-split data instead of the training partition or the testing partition;
- forgetting to scale the testing data;
- scaling the testing data using the statistics of the test partition instead of the train partition;
- forgetting to un-scale the predictions (e.g. pixel values are in [0, 1] instead of [0, 255]).

here's an example of a question where the problem appears to be one of model configuration or hyperparameter choice, but the actual problem was a subtle bug in how gradients were computed: is this drop in training accuracy due to a statistical or programming error?

2. for the love of all that is good, scale your data

the scale of the data can make an enormous difference on training. sometimes networks simply won't reduce the loss if the data isn't scaled; other networks will decrease the loss, but only very slowly. scaling the inputs (and sometimes the targets) can dramatically improve the network's training. prior to presenting data to a neural network, standardizing the data to have zero mean and unit variance, or to lie in a small interval like $[-0.5, 0.5]$, can improve training. this amounts to pre-conditioning, and removes the effect that a choice of units has on the network weights. for example, length in millimeters and length in kilometers both represent the same concept, but are on different scales. the exact details of how to standardize the data depend on what your data look like.

- data normalization and standardization in neural networks
- why does $[0, 1]$ scaling dramatically increase training time for feed forward ann (1 hidden layer)?

batch or layer normalization can improve network training. both seek to improve the network by keeping a running mean and standard deviation for neurons' activations as the network trains.
it is not well-understood why this helps training, and it remains an active area of research:

- "understanding batch normalization" by johan bjorck, carla gomes, bart selman
- "towards a theoretical understanding of batch normalization" by jonas kohler, hadi daneshmand, aurelien lucchi, ming zhou, klaus neymeyr, thomas hofmann
- "how does batch normalization help optimization? (no, it is not about internal covariate shift)" by shibani santurkar, dimitris tsipras, andrew ilyas, aleksander madry

3. crawl before you walk; walk before you run

wide and deep neural networks, and neural networks with exotic wiring, are the hot thing right now in machine learning. but these networks didn't spring fully-formed into existence; their designers built up to them from smaller units. first, build a small network with a single hidden layer and verify that it works correctly. then incrementally add additional model complexity, and verify that each of those works as well.

too few neurons in a layer can restrict the representation that the network learns, causing under-fitting. too many neurons can cause over-fitting because the network will "memorize" the training data. even if you can prove that there is, mathematically, only a small number of neurons necessary to model a problem, it is often the case that having "a few more" neurons makes it easier for the optimizer to find a "good" configuration. (but i don't think anyone fully understands why this is the case.) i provide an example of this in the context of the xor problem here: aren't my iterations needed to train nn for xor with mse < 0.001 too high?

choosing the number of hidden layers lets the network learn an abstraction from the raw data. deep learning is all the rage these days, and networks with a large number of layers have shown impressive results. but adding too many hidden layers can risk overfitting or make it very hard to optimize the network.

choosing a clever network wiring can do a lot of the work for you. is your data source amenable to specialized network architectures? convolutional neural networks can achieve impressive results on "structured" data sources, such as image or audio data. recurrent neural networks can do well on sequential data types, such as natural language or time series data. residual connections can improve deep feed-forward networks.

4.
neural network training is like lock picking

to achieve state-of-the-art, or even merely good, results, you have to set up all of the parts configured to work well together. setting up a neural network configuration that actually learns is a lot like picking a lock: all of the pieces have to be lined up just right. just as it is not sufficient to have a single tumbler in the right place, neither is it sufficient to have only the architecture, or only the optimizer, set up correctly. tuning configuration choices is not really as simple as saying that one kind of configuration choice (e.g. learning rate) is more or less important than another (e.g. number of units), since all of these choices interact with all of the other choices, so one choice can do well in combination with another choice made elsewhere.

this is a non-exhaustive list of the configuration options which are not also regularization options or numerical optimization options. all of these topics are active areas of research:

- the network initialization is often overlooked as a source of neural network bugs. initialization over too large an interval can set initial weights too large, meaning that single neurons have an outsize influence over the network behavior.
- the key difference between a neural network and a regression model is that a neural network is a composition of many nonlinear functions, called activation functions. (see: what is the essential difference between neural network and linear regression?) classical neural network results focused on sigmoidal activation functions (logistic or $\tanh$ functions). a recent result has found that relu (or similar) units tend to work better because they have steeper gradients, so updates can be applied quickly. (see: why do we use relu in neural networks and how do we use it?) one caution about relus is the "dead neuron" phenomenon, which can stymie learning; leaky relus and similar variants avoid this problem. see: why can't a single relu learn a relu? my relu network fails to launch. there are a number of other options; see: comprehensive list of activation functions in neural networks with pros/cons.
- residual connections are a neat development that can make it easier to train neural networks. "deep residual learning for image recognition" by kaiming he, xiangyu zhang, shaoqing ren, jian sun, in: cvpr (2016).
additionally, changing the order of operations within the residual block can further improve the resulting network. "identity mappings in deep residual networks" by kaiming he, xiangyu zhang, shaoqing ren, and jian sun.

5. non-convex optimization is hard

the objective function of a neural network is only convex when there are no hidden units, all activations are linear, and the design matrix is full-rank, because this configuration is identically an ordinary regression problem. in all other cases, the optimization problem is non-convex, and non-convex optimization is hard. the challenges of training neural networks are well-known (see: why is it hard to train deep neural networks?). additionally, neural networks have a very large number of parameters, which restricts us to solely first-order methods (see: why is newton's method not widely used in machine learning?). this is a very active area of research.

- setting the learning rate too large will cause the optimization to diverge, because you will leap from one side of the "canyon" to the other. setting it too small will prevent you from making any real progress, and possibly allow the noise inherent in sgd to overwhelm your gradient estimates. see: how can change in cost function be positive?
- gradient clipping re-scales the norm of the gradient if it's above some threshold. i used to think that this was a set-and-forget parameter, typically at 1.0, but i found that i could make an lstm language model dramatically better by setting it to 0.25. i don't know why that is.
- learning rate scheduling can decrease the learning rate over the course of training. in my experience, trying to use scheduling is a lot like regex: it replaces one problem ("how do i get learning to continue after a certain epoch?") with two problems ("how do i get learning to continue after a certain epoch?" and "how do i choose a good schedule?"). other people insist that scheduling is essential. i'll let you decide.
- choosing a good minibatch size can influence the learning process indirectly, since a larger mini-batch will tend to have a smaller variance (law of large numbers) than a smaller mini-batch. you want the mini-batch to be large enough to be informative about the direction of the gradient, but small enough that sgd can regularize your network.
- there are a number of variants on stochastic gradient descent which use momentum, adaptive learning rates, nesterov updates and so on to improve upon vanilla sgd. designing a better optimizer is very much an active area of research.
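as an illustration of two of the knobs mentioned above, here is a minimal numpy sketch (my own toy code, not any library's api) of sgd with momentum plus norm-based gradient clipping, applied to a trivial quadratic:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.1, beta=0.9, clip=1.0):
    """one update of sgd with momentum; the gradient norm is clipped first."""
    norm = np.linalg.norm(grad)
    if norm > clip:                  # gradient clipping: rescale to the threshold
        grad = grad * (clip / norm)
    v = beta * v - lr * grad         # momentum accumulates past gradients
    return w + v, v

# toy problem: minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(500):
    w, v = sgd_momentum_step(w, v, grad=w)
```

with these settings the iterates spiral into the minimum at the origin; crank `lr` up far enough and the same loop diverges, which is the "canyon" behavior described above.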
some examples:

- no change in accuracy using adam optimizer when sgd works fine
- how does the adam method of stochastic gradient descent work?
- why does momentum escape from a saddle point in this famous image?

when it first came out, the adam optimizer generated a lot of interest. but some recent research has found that sgd with momentum can out-perform adaptive gradient methods for neural networks. "the marginal value of adaptive gradient methods in machine learning" by ashia c. wilson, rebecca roelofs, mitchell stern, nathan srebro, benjamin recht.

but on the other hand, this very recent paper proposes a new adaptive learning-rate optimizer which supposedly closes the gap between adaptive-rate methods and sgd with momentum. "closing the generalization gap of adaptive gradient methods in training deep neural networks" by jinghui chen, quanquan gu:

adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (sgd) with momentum in training deep neural networks. this leaves how to close the generalization gap of adaptive gradient methods an open problem. in this work, we show that adaptive gradient methods such as adam, amsgrad, are sometimes "over adapted". we design a new algorithm, called partially adaptive momentum estimation method (padam), which unifies adam/amsgrad with sgd to achieve the best from both worlds. experiments on standard benchmarks show that padam can maintain fast convergence rate as adam/amsgrad while generalizing as well as sgd in training deep neural networks. these results would suggest practitioners pick up adaptive gradient methods once again for faster training of deep neural networks.

specifically for triplet-loss models, there are a number of tricks which can improve training time and generalization. see: in training a triplet network, i first have a solid drop in loss, but eventually the loss slowly but consistently increases. what could cause this?

6. regularization

choosing and tuning network regularization is a key part of building a model that generalizes well (that is, a model that is not overfit to the training data).
however, while your network is struggling to decrease the loss on the training data (when the network is not learning), regularization can obscure what the problem is. when my network doesn't learn, i turn off all regularization and verify that the non-regularized network works correctly. then i add each regularization piece back, and verify that each of those works along the way. this tactic can pinpoint where some regularization might be poorly set. some examples are:

- $l^2$ regularization (aka weight decay) or $l^1$ regularization is set too large, so the weights can't move.
- two parts of regularization are in conflict. for example, it's widely observed that layer normalization and dropout are difficult to use together. since either on its own is very useful, understanding how to use both is an active area of research:
  - "understanding the disharmony between dropout and batch normalization by variance shift" by xiang li, shuo chen, xiaolin hu, jian yang
  - "adjusting for dropout variance in batch normalization and weight initialization" by dan hendrycks, kevin gimpel
  - "self-normalizing neural networks" by gunter klambauer, thomas unterthiner, andreas mayr and sepp hochreiter

7. keep a logbook of experiments

when i set up a neural network, i don't hard-code any parameter settings. instead, i do that in a configuration file (e.g., json) that is read and used to populate network configuration details at runtime. i keep all of these configuration files. if i make any parameter modification, i make a new configuration file. finally, i append as comments all of the per-epoch losses for training and validation.

the reason that i'm so obsessive about retaining old results is that this makes it very easy to go back and review previous experiments. it also hedges against mistakenly repeating the same dead-end experiment. psychologically, it also lets you look back and observe "well, the project might not be where i want it to be today, but i am making progress compared to where i was $k$ weeks ago."

as an example, i wanted to learn about lstm language models, so i decided to make a twitter bot that writes new tweets in response to other twitter users. i worked on this in my free time, between grad school and my job. it took about a year, and i iterated over about 150 different models before getting to a model that did what i wanted: generate new english-language text that (sort of) makes sense.
(one key sticking point, and part of the reason that it took so many attempts, is that it was not sufficient to simply get a low out-of-sample loss: early low-loss models had managed to memorize the training data, so they were just reproducing memorized blocks of text verbatim in reply to prompts. it took some tweaking to make the model more spontaneous while still keeping the loss low.)
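a minimal sketch of the configuration-file habit described above (the file contents, keys and values are invented for illustration):

```python
import json

# hypothetical contents of, say, config_001.json; every experiment gets its own file
config_text = '{"hidden_units": 128, "learning_rate": 0.001, "dropout": 0.5}'

def load_config(text):
    """read hyperparameters from json instead of hard-coding them in the script."""
    return json.loads(text)

cfg = load_config(config_text)
# the training script would then use cfg["hidden_units"], cfg["learning_rate"], ...
```

changing an experiment then means writing config_002.json rather than editing code, so every past run stays reproducible.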
https://api.stackexchange.com
noise is quite good (hard to compress), but it becomes grey when viewed from far away, and grey is easy to compress. a good pattern would be kind of fractal, looking similar at all scales. well, there is fractal noise. i think brownian noise is fractal, looking the same as you zoom into it. wikipedia talks about adding perlin noise to itself at different scales to produce fractal noise, which is maybe identical; i'm not sure. i don't think this would be hard to compress, though.

noise is hard for lossless compression, but jpeg is lossy, so it's just going to throw away the detail instead of struggling with it. i'm not sure it's possible to make something "hard for jpeg to compress", since it will just ignore anything that's too hard to compress at that quality level. something with hard edges at every scale would probably be better, like the infinite checkerboard plane. also something with lots of colors. maybe look at actual fractals instead of fractal noise. maybe a mondrian fractal? :)
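jpeg itself is lossy, but the lossless half of the intuition is easy to demonstrate with a deflate compressor such as python's zlib (an analogy only, not jpeg):

```python
import os
import zlib

noise = os.urandom(100_000)  # random bytes: essentially incompressible
flat = bytes(100_000)        # a "grey" constant block: almost free to store

# random data stays near (or slightly above) its original size,
# while the constant block shrinks to a tiny fraction of it.
print(len(zlib.compress(noise)))
print(len(zlib.compress(flat)))
```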
https://api.stackexchange.com
short answer

color-blind subjects are better at detecting color-camouflaged objects. this may give the color-blind an advantage in terms of spotting hidden dangers (predators) or finding camouflaged foods.

background

there are two types of red-green blindness: protanopia (red-blind) and deuteranopia (green-blind), i.e., these people miss one type of cone, namely the red l cone or the green m cone, respectively. these conditions should be set apart from the condition where mutations in the l cones shift their sensitivity toward the green cone spectrum (protanomaly), or vice versa (deuteranomaly). since you are talking about color-"blindness", as opposed to reduced sensitivity to red or green, i reckon you are asking about true dichromats, i.e., protanopes and deuteranopes. it's an excellent question as to why 2% of men have either one of these conditions, given that:

protanopes are more likely to confuse:
- black with many shades of red
- dark brown with dark green, dark orange and dark red
- some blues with some reds, purples and dark pinks
- mid-greens with some oranges

deuteranopes are more likely to confuse:
- mid-reds with mid-greens
- blue-greens with grey and mid-pinks
- bright greens with yellows
- pale pinks with light grey
- mid-reds with mid-brown
- light blues with lilac

there are reports on the benefits of being red-green color blind under certain specific conditions. for example, morgan et al. (1992) report that the identification of a target area with a different texture or orientation pattern was performed better by dichromats when the surfaces were painted with irrelevant colors. in other words, when color is simply a distractor that confuses the subject and distracts from the task at hand (i.e., texture or orientation discrimination), the lack of red-green color vision can actually be beneficial. this in turn could be interpreted as dichromatic vision being beneficial over trichromatic vision for detecting color-camouflaged objects.
reports of improved foraging by dichromats under low lighting are debated, but cannot be excluded. the better camouflage-breaking performance of dichromats is, however, an established phenomenon (cain et al., 2010):

during the second world war it was suggested that color-deficient observers could often penetrate camouflage that deceived the normal observer. the idea has been a recurrent one, both with respect to military camouflage and with respect to the camouflage of the natural world (reviewed in morgan et al., 1992).

outlines, rather than colors, are responsible for pattern recognition. in the military, colorblind snipers and spotters are highly valued for these reasons (source: de paul university). if you sit back far from your screen, look at the normal full-color picture on the left and compare it to the dichromatic picture on the right; the picture on the right appears at higher contrast to trichromats, but dichromats may not see any difference between the two.

left: full-color image; right: dichromatic image. source: de paul university

however, i think the dichromat trait is simply not selected against strongly, and this would explain its existence more easily than finding reasons it would be selected for (morgan et al., 1992).

references
- cain et al., biol lett (2010); 6: 3-38
- morgan et al., proc r soc b (1992); 248: 291-5
https://api.stackexchange.com
The way to think of cross-validation is as estimating the performance obtained using a method for building a model, rather than estimating the performance of a model. If you use cross-validation to estimate the hyper-parameters of a model (the $\alpha$s) and then use those hyper-parameters to fit a model to the whole dataset, then that is fine, provided you recognise that the cross-validation estimate of performance is likely to be (possibly substantially) optimistically biased. This is because part of the model (the hyper-parameters) has been selected to minimise the cross-validation performance, so if the cross-validation statistic has a non-zero variance (and it will) there is the possibility of over-fitting the model selection criterion. If you want to choose the hyper-parameters and estimate the performance of the resulting model, then you need to perform a nested cross-validation, where the outer cross-validation is used to assess the performance of the model, and in each fold cross-validation is used to determine the hyper-parameters separately. You build the final model by using cross-validation on the whole set to choose the hyper-parameters and then build the classifier on the whole dataset using the optimised hyper-parameters. This is of course computationally expensive, but worth it, as the bias introduced by improper performance estimation can be large. See my paper: G. C. Cawley and N. L. C. Talbot, "Over-fitting in model selection and subsequent selection bias in performance evaluation," Journal of Machine Learning Research, vol. 11, pp. 2079–2107, July 2010. (pdf) However, it is still possible to have over-fitting in model selection (nested cross-validation just allows you to test for it). A method I have found useful is to add a regularisation term to the cross-validation error that penalises hyper-parameter values likely to result in overly complex models; see G. C. Cawley and N. L. C. Talbot, "Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters," Journal of Machine Learning Research, vol. 8, pp. 841–861, April 2007. So the answers to your question are: (i) yes, you should use the full dataset to produce your final model, as the more data you use the more likely it is to generalise well; but (ii) make sure you obtain an unbiased performance estimate via nested cross-validation, and potentially consider penalising the cross-validation statistic to further avoid over-fitting in model selection.
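As a schematic of where the inner and outer loops sit, here is a minimal sketch in plain Python. The toy "classifier" (a single threshold chosen from a grid) and the data are my own invention purely for illustration; they are not from the paper.

```python
# Schematic nested cross-validation with a toy threshold classifier.
# The data are noiseless (label = x > 0.3), so the "right" hyper-parameter
# should win; real problems are noisier and the bias issue matters more.
import random

random.seed(1)
data = [(x, int(x > 0.3)) for x in [random.uniform(-1, 1) for _ in range(100)]]

def accuracy(t, subset):
    return sum(int(x > t) == y for x, y in subset) / len(subset)

def best_threshold(subset, grid=(-0.5, 0.0, 0.3, 0.5)):
    # "Inner" selection: pick the hyper-parameter on a held-out split
    # of the training data only.
    half = len(subset) // 2
    train, valid = subset[:half], subset[half:]
    return max(grid, key=lambda t: accuracy(t, valid))

# Outer loop: 5-fold CV over the *whole procedure* (selection + fitting),
# so the performance estimate is not biased by the selection step.
k = 5
fold = len(data) // k
outer_scores = []
for i in range(k):
    test = data[i * fold:(i + 1) * fold]
    train = data[:i * fold] + data[(i + 1) * fold:]
    t = best_threshold(train)   # hyper-parameter chosen inside the fold
    outer_scores.append(accuracy(t, test))

# Final model: repeat the selection on the full dataset, as described above.
final_t = best_threshold(data)
print(sum(outer_scores) / k, final_t)
```

The key point the sketch shows is structural: the outer folds never see the data used to pick their own hyper-parameter.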
https://api.stackexchange.com
First determine the coil current when the coil is on; this is the current that will flow through the diode when the coil is switched off. In your relay, the coil current is shown as 79.4 mA, so specify a diode rated for at least 79.4 mA. In your case, the 1N4001's current rating far exceeds the requirement. The diode reverse voltage rating should be at least the voltage applied to the relay coil; normally a designer puts in plenty of reserve in the reverse rating, and a diode rated for 50 volts would be more than adequate in your application. Again, the 1N4001 will do the job. Additionally, the 1N4007 (in single-purchase quantities) costs the same but has a 1000-volt rating.
https://api.stackexchange.com
Sometimes men wake up with an erection in the morning. Why does this happen? Briefly: REM (rapid eye movement) is one phase of sleep. During this phase we dream, and some of our neurotransmitters are shut off. This includes norepinephrine, which is involved in controlling erections: norepinephrine prevents blood from entering the penis (preventing the erection). In the absence of norepinephrine (it is absent during the REM phase), blood enters the penis, leading to an erection. This phenomenon is called nocturnal penile tumescence; such erections typically occur 3 to 5 times a night. A related question concerning similar erections in women can be found here. High pressure in the bladder may also lead to a "reflex erection", which helps prevent uncontrolled urination. The drawback is that when one has an erection in the morning and can't wait to pee, it might get hard to accurately target the toilet! This video is also a nice and easy source of information on the subject. Is it bad? (Reading my "note" below, you have edited your post to get rid of this question; thank you.) It is perfectly healthy; you don't have to worry about it. These erections are even thought to contribute to penile health. At the opposite end of the spectrum, the absence of erections during the night is an indicator of erectile dysfunction (E.D.). Note: be aware that medical questions are often considered off-topic on this site. Asking "is it bad?" turns your question into a medical one; health-related questions (but not personal health) should be asked on Health.SE.
https://api.stackexchange.com
I've gathered the following from online research so far. I've used Armadillo a little bit, and found the interface to be intuitive enough, and it was easy to locate binary packages for Ubuntu (and I'm assuming other Linux distros). I haven't compiled it from source, but my hope is that it wouldn't be too difficult. It meets most of my design criteria, and uses dense linear algebra. It can call LAPACK or MKL routines. There is generally no need to compile Armadillo; it is a purely template-based library: you just include the header and link to BLAS/LAPACK or MKL, etc. I've heard good things about Eigen, but haven't used it. It claims to be fast, uses templating, and supports dense linear algebra. It doesn't have LAPACK or BLAS as a dependency, but appears to be able to do everything that LAPACK can do (plus some things LAPACK can't). A lot of projects use Eigen, which is promising. It has a binary package for Ubuntu, but as a header-only library it's trivial to use elsewhere too. The Matrix Template Library version 4 also looks promising, and uses templating. It supports both dense and sparse linear algebra, and can call UMFPACK as a sparse solver. The features are somewhat unclear from their website. It has a binary package for Ubuntu, downloadable from their web site. PETSc, written by a team at Argonne National Laboratory, has access to sparse and dense linear solvers, so I'm presuming it can function as a matrix library. It's written in C, but has C++ bindings, I think (and even if it didn't, calling C from C++ is no problem). The documentation is incredibly thorough. The package is a bit of overkill for what I want to do now (matrix multiplication and indexing to set up mixed-integer linear programs), but could be useful as a matrix format for me in the future, or for other people who have different needs than I do.
Trilinos, written by a team at Sandia National Laboratories, provides object-oriented C++ interfaces for dense and sparse matrices through its Epetra component, and templated interfaces for dense and sparse matrices through its Tpetra component. It also has components that provide linear solver and eigensolver functionality. The documentation does not seem to be as polished or prominent as PETSc's; Trilinos seems like the Sandia analog of PETSc. PETSc can call some of the Trilinos solvers. Binaries for Trilinos are available for Linux. Blitz is a C++ object-oriented library that has Linux binaries. It doesn't seem to be actively maintained (2012-06-29: a new version has just appeared yesterday!), although the mailing list is active, so there is some community that uses it. It doesn't appear to do much in the way of numerical linear algebra beyond BLAS, and looks like a dense matrix library. It uses templates. Boost::uBLAS is a C++ object-oriented library and part of the Boost project. It supports templating and dense numerical linear algebra. I've heard it's not particularly fast. The Template Numerical Toolkit is a C++ object-oriented library developed by NIST. Its author, Roldan Pozo, seems to contribute patches occasionally, but it doesn't seem to be under active development any longer (the last update was in 2010). It focuses on dense linear algebra, and provides interfaces for some basic matrix decompositions and an eigenvalue solver. Elemental, developed by Jack Poulson, is a distributed-memory (parallel) dense linear algebra software package written in a style similar to FLAME. For a list of features and background on the project, see his documentation. FLAME itself has an associated library for sequential and shared-memory dense linear algebra, called libflame, which appears to be written in object-oriented C. libflame looks a lot like LAPACK, but with better notation underlying the algorithms, to make development of fast numerical linear algebra libraries more of a science and less of a black art.
There are other libraries that can be added to the list; if we're counting sparse linear algebra packages as "matrix libraries", the best free one I know of in C is SuiteSparse, which is programmed in an object-oriented style. I've used SuiteSparse and found it fairly easy to pick up; it depends on BLAS and LAPACK for some of the algorithms that decompose sparse problems into lots of small, dense linear algebra subproblems. The lead author of the package, Tim Davis, is incredibly helpful and a great all-around guy. The Harwell Subroutine Libraries are famous for their sparse linear algebra routines, and are free for academic users, though you have to go through a process of filling out a form and receiving an e-mail for each file that you want to download. Since the subroutines often have dependencies, using one solver might require downloading five or six files, and the process can get somewhat tedious, especially since the form approval is not instantaneous. There are also other sparse linear algebra solvers, but as far as I can tell, MUMPS and other packages are focused mostly on the solution of linear systems, and solving linear systems is the least of my concerns right now. (Maybe later I will need that functionality, and it could be useful for others.)
https://api.stackexchange.com
You can find the various equations in this oft-cited blog post from Harold Pimentel. CPM is basically depth-normalized counts, whereas TPM is length-normalized (and then normalized by the length-normalized values of the other genes). If one has to choose between those two, one typically chooses TPM for most things, since the length normalization is generally handy. Realistically, you probably want log(TPM), since otherwise noise in your most highly expressed genes dominates over small expression signals.
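As a sketch of those definitions (toy counts and gene lengths, not real data; this follows the standard formulas the blog post describes):

```python
# CPM vs TPM from raw counts for one sample (toy numbers).
import numpy as np

counts = np.array([100.0, 400.0, 500.0])   # reads per gene
lengths_kb = np.array([1.0, 2.0, 5.0])     # gene lengths in kilobases

# CPM: depth-normalized counts (scale so the sample sums to one million).
cpm = counts / counts.sum() * 1e6

# TPM: length-normalize first (reads per kilobase), then scale so the
# sample sums to one million, i.e. normalize by the other genes' values too.
rpk = counts / lengths_kb
tpm = rpk / rpk.sum() * 1e6

# log-transform with a pseudocount so zero counts stay finite.
log_tpm = np.log2(tpm + 1)
print(cpm, tpm)
```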
https://api.stackexchange.com
The other answers here, describing oxygen toxicity, tell what can go wrong if you have too much oxygen, but they do not describe two important concepts that should accompany their descriptions. Also, there is a basic safety issue with handling pressure tanks with a high oxygen fraction. An important property of breathed oxygen is its partial pressure. At normal conditions at sea level, the partial pressure of oxygen is about 0.21 atm. This is compatible with the widely known estimate that the atmosphere is about 78% nitrogen, 21% oxygen, and 1% "other". Partial pressures add to give the total pressure; this is Dalton's law. As long as you don't use toxic gases, you can replace the nitrogen and "other" with other gases, like helium, keep the partial pressure of oxygen near 0.21 atm, and breathe the resulting mixtures without adverse effects. There are two hazards that can be understood by considering the partial pressure of oxygen. If the partial pressure drops below about 0.16 atm, a normal person experiences hypoxia. This can happen by entering a room where oxygen has been removed, for instance a room with a constant source of nitrogen displacing the room air, lowering the concentration (and partial pressure) of oxygen. Another way is to go to the top of a tall mountain: the total atmospheric pressure is lowered, and the partial pressure of oxygen can be as low as 0.07 atm (summit of Mt. Everest), which is why very-high-altitude climbing requires carrying additional oxygen. Yet a third way is "horsing around" with helium tanks: repeatedly inhaling helium to produce a very high-pitched voice deprives the body of oxygen, and the partial pressure of dissolved oxygen in the body falls, perhaps leading to loss of consciousness. Alternatively, if the partial pressure rises above about 1.4 atm, a normal person experiences hyperoxia, which can lead to oxygen toxicity (described in the other answers). At 1.6 atm the risk of central nervous system oxygen toxicity is very high. So, don't regulate the pressure that high? There's a problem. If you were to make a 10-foot-long snorkel and dive to the bottom of a swimming pool to use it, you would fail to inhale. The pressure of air at your mouth would be about 1 atm, because the 10-foot column of air in the snorkel doesn't weigh very much. The pressure of water trying to squeeze
the air out of you (like a tube of toothpaste) is about 1.3 atm. Your diaphragm is not strong enough to overcome the squeezing and fill your lungs with air. Divers overcome this problem by using a regulator (specifically, a demand valve), which allows the gas pressure at the outlet to be very near the ambient pressure. The principal job of the regulator is to reduce the very high pressure inside the tank to a much lower pressure at the outlet. The demand valve tries to supply gas only when the diver inhales, and tries to supply it at very nearly ambient pressure. Notice that at depth the ambient pressure can be much greater than 1 atm, increasing by about 1 atm per 10 m (or 33 feet). If the regulator were to supply normal air at 2 atm pressure, the partial pressure of oxygen would be 0.42 atm; if at 3 atm, 0.63 atm. So as a diver descends, the partial pressure of oxygen automatically increases as a consequence of having to increase the gas pressure to allow the diver to inflate their lungs. Around 65 m (220 ft), the partial pressure of oxygen in an "air mix" would be high enough to risk hyperoxia and other dangerous consequences. Now imagine a gas cylinder containing 100% oxygen. If we breathe from it at the surface, the partial pressure of oxygen is 1 atm: high, but not dangerous. At a depth of 10 m, the partial pressure of supplied oxygen is 2 atm, exceeding acceptable exposure limits. This is a general pattern: raising the oxygen fraction of diving gases decreases the maximum diving depth. And you can't lower the partial pressure much, because the lower limit, 0.16 atm, isn't that much lower than the 0.21 atm of sea-level atmosphere. One general category of solutions is to change gas mixes at various depths. This is complicated, requires a great deal of planning, and is outside the scope of your question. But it is certainly not as straightforward as just simplifying the gas mixtures or just raising the partial pressure of oxygen.
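The scaling just described is easy to sketch in code. The rule of thumb used below (ambient pressure of about 1 atm plus 1 atm per 10 m of seawater) is the same one quoted in the text:

```python
# Partial pressure of oxygen vs depth, via Dalton's law:
# ppO2 = (O2 fraction) * (ambient pressure).
def ambient_pressure_atm(depth_m):
    return 1.0 + depth_m / 10.0   # ~1 atm extra per 10 m of seawater

def ppo2_atm(depth_m, o2_fraction):
    return o2_fraction * ambient_pressure_atm(depth_m)

# Air (21% O2): ppO2 reaches the ~1.6 atm danger zone around 65 m.
for depth in (0, 10, 30, 65):
    print(depth, round(ppo2_atm(depth, 0.21), 2))

# Pure O2: already 2 atm of ppO2 at only 10 m, as stated above.
print(ppo2_atm(10, 1.0))
```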
Additionally, compressed oxygen is a relatively annoying gas to work with. It is not itself flammable, but it makes every nearby organic thing flammable. For instance, using grease or oil on or near an oxygen fitting risks spontaneously igniting the grease or oil. Merely having grease on your hand while handling oxygen refilling gear (with a small leak) can burn your hand.
https://api.stackexchange.com
ngspice is available for gEDA; gnucap is also available for gEDA. LTspice is free from Linear Technology. I thought that one of the other analog chip makers had a SPICE too, but I can't remember who :( I have been to a few talks on simulation given by physicists and EEs who have done chip design. Each of the talks seems to end like this: except for simple circuits, you will spend most of your time getting models and determining where the models need to be modified for your application. Unless you are doing work for an IC manufacturer, the manufacturer will not give you detailed models. You will not be able to avoid a prototype. You should only simulate subsections of your design; simulating the entire design is not usually practical. Also, most of the free simulators are not distributed with models, since redistribution of the models is usually a copyright violation. LTspice is distributed with models of the Linear Tech parts; I am not sure of the quality of the models. Most manufacturers do not want to reveal too many details about their process.
https://api.stackexchange.com
In fact, the idea of a plant nervous system is quite serious and constantly developing; of course, these are rather local, simple signal pathways rather than an "animalian" centralized global network, but they use similar mechanisms: depolarisation waves, neurotransmitter-like compounds, specialized cells... Here is a review paper by Brenner et al. In the case of Mimosa, there is a good paper summing up Takao Sibaoka's long research on the topic. In short, it seems that its petioles' phloem has cells which have polarized membranes and can trigger depolarization due to a mechanical stimulation. The signal then propagates to the corresponding pulvinus by a mixture of electrical and Cl⁻ depolarization waves. In the pulvinus, this signal triggers a second depolarization which coordinates the pulvinus' cells to trigger the water pumping responsible for the leaf drop. The transmission to the adjacent leaves is most likely mechanical, i.e. the movement of one dropping leaf excites another. References: Brenner ED, Stahlberg R, Mancuso S, Vivanco J, Baluska F, Van Volkenburgh E. 2006. Plant neurobiology: an integrated view of plant signaling. Trends in Plant Science 11: 413–9. Sibaoka T. 1991. Rapid plant movements triggered by action potentials. The Botanical Magazine Tokyo 104: 73–95.
https://api.stackexchange.com
I'll quote from $\ce{[1]}$: "The general requirements for an odorant are that it should be volatile, hydrophobic and have a molecular weight less than approximately 300 daltons. Ohloff (1994) has stated that the largest known odorant is a labdane with a molecular weight of 296. The first two requirements make physical sense, for the molecule has to reach the nose and may need to cross membranes. The size requirement appears to be a biological constraint. To be sure, vapor pressure (volatility) falls rapidly with molecular size, but that cannot be the reason why larger molecules have no smell, since some of the strongest odorants (e.g. some steroids) are large molecules. In addition, the cut-off is very sharp indeed: for example, substitution of the slightly larger silicon atom for a carbon in a benzenoid musk causes it to become odorless (Wrobel and Wannagat, 1982d). A further indication that the size limit has something to do with the chemoreception mechanism comes from the fact that specific anosmias become more frequent as molecular size increases. At the "ragged edge" of the size limit, subjects become anosmic to large numbers of molecules. An informal poll among perfumers, for example, has elicited the fact that most of them are completely anosmic to one or more musks (e.g. Galaxolide®, MW 244.38) or, less commonly, ambergris odorants such as Ambrox®, or the larger esters of salicylic acid. One can probably infer from this that the receptors cannot accommodate molecules larger than a certain size, and that this size is genetically determined (Whissell-Buechy and Amoore, 1973) and varies from individual to individual." N.B.: labdane's molecular formula is $\ce{C20H38}$, which gives a molecular weight (MW) of $\pu{278.5 Da}$. $\ce{[5]}$ Thus either the $\pu{296 Da}$ value is a typo, or the authors were quoting the MW of a labdane derivative.
Note added in response to the answer posted by John Cuthbert (which was a nice find!): while iodoform, at $\pu{394 Da}$, does indeed exceed the $\pu{300 Da}$ "general requirement" provided above by Turin & Yoshii, a comparison of its estimated density to that of, e.g., labdane indicates it's a much smaller molecule (iodoform's three iodine atoms add a lot of mass without a lot of size, at least relative to carbon, hydrogen, and oxygen). I couldn't find labdane's density, but I found the density of one of its diols (i.e., labdane with an $\text{–OH}$ substituted for $\text{–H}$ in two places). So if we use its density, along with labdane's molecular weight, we obtain: labdane: $\pu{MW = 278.5 Da}$, $\pu{\rho = 0.9 g/cm^3}$ $\ce{[6]}$ => estimated molecular volume ≈ $\pu{510 A^3}$; iodoform: $\pu{MW = 393.732 Da}$, $\pu{\rho = 4.008 g/cm^3}$ $\ce{[7]}$ => estimated molecular volume ≈ $\pu{160 A^3}$. Even if the density of labdane were, say, 20% higher than that of the diol, we'd get a molecular volume of ≈ $\pu{430 A^3}$, which is still far above that of iodoform. This makes it clear that the limiting attribute is physical size rather than molecular weight, and that Turin & Yoshii were using molecular weight as a shorthand surrogate for size. This works reasonably well when comparing oxygenated hydrocarbons, but obviously breaks down when the compounds contain significantly heavier nuclei. As Turin & Yoshii write more precisely at the end of the quoted passage: "one can probably infer from this that the receptors cannot accommodate molecules larger than a certain size." [Emphasis mine.] References: $\ce{[1]}$: "Structure–odor relationships: a modern perspective", by Luca Turin (Dept of Physiology, University College London, UK) and Fumiko Yoshii (Graduate School of Science and Technology, Niigata University, Japan), which appears as chapter 13 of: Handbook of Olfaction and Gustation, Richard L. Doty (ed.), 2nd ed., Marcel Dekker, 2003. $\ce{[2]}$: Ohloff, G. Scent and Fragrances: The Fascination of Odors and Their Chemical Perspectives. Berlin, Springer, 1994. $\ce{[3]}$: Wrobel D, Wannagat U. Sila perfumes. 2. Silalinalool. Chemischer Informationsdienst. 13(30), Jul 27, 1982. $\ce{[4]}$: Whissell-Buechy D, Amoore JE. Letter: Odour-blindness to musk: simple recessive inheritance. Nature, 245(5421): 157–8, Sep 21, 1973. $\ce{[5]}$: $\ce{[6]}$: $\ce{[7]}$:
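The volume estimates quoted above follow from $V \approx \mathrm{MW}/(\rho N_A)$. A quick check, using the same molecular weights and densities as in the answer:

```python
# Molecular volume estimate from molecular weight and bulk density:
# V = MW / (rho * N_A), converted from cm^3 to cubic angstroms.
N_A = 6.022e23            # Avogadro's number, 1/mol
ANGSTROM3_PER_CM3 = 1e24  # 1 cm^3 = 1e24 A^3

def molecular_volume_A3(mw_g_per_mol, density_g_per_cm3):
    return mw_g_per_mol / (density_g_per_cm3 * N_A) * ANGSTROM3_PER_CM3

labdane = molecular_volume_A3(278.5, 0.9)       # diol density used as a proxy
iodoform = molecular_volume_A3(393.732, 4.008)
print(round(labdane), round(iodoform))          # ~510 vs ~160 cubic angstroms
```

So iodoform, despite the larger molecular weight, occupies roughly a third of labdane's volume, which is the point of the note.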
https://api.stackexchange.com
actually this is one of the main problems you have when analyzing scrna - seq data, and there is no established method for dealing with this. different ( dedicated ) algorithms deal with it in different ways, but mostly you rely on how good the error modelling of your software is ( a great read is the review by wagner, regev & yosef, esp. the section on " false negatives and overamplification " ). there are a couple of options : you can impute values, i. e. fill in the gaps on technical zeros. cidr and scimpute do it directly. magic and zifa project cells into a lower - dimensional space and use their similarity there to decide how to fill in the blanks. some people straight up exclude genes that are expressed in very low numbers. i can't give you citations off the top of my head, but many trajectory inference algorithms like monocle2 and slicer have heuristics to choose informative genes for their analysis. if the method you use for analysis doesn't model gene expression explicitly but uses some other distance method to quantify similarity between cells ( like cosine distance, euclidean distance, correlation ), then the noise introduced by dropout can be covered by the signal of genes that are highly expressed. note that this is dangerous, as genes that are highly expressed are not necessarily informative. ercc spike ins can help you reduce technical noise, but i am not familiar with the chromium protocol so maybe it doesn't apply there (? ) since we are speaking about noise, you might consider using a protocol with unique molecular identifiers. they remove the amplification errors almost completely, at least for the transcripts that you capture... edit : also, i would highly recommend using something more advanced than pca to do the analysis. software like the above - mentioned monocle or destiny is easy to operate and increases the power of your analysis considerably.
https://api.stackexchange.com
There are several reasons for using the Hamiltonian formalism. Statistical physics: the standard thermal state weights pure states according to $$\text{Prob}(\text{state}) \propto e^{-H(\text{state})/k_B T},$$ so you need to understand Hamiltonians to do stat mech in real generality. Geometrical prettiness: Hamilton's equations say that flowing in time is equivalent to flowing along a vector field on phase space. This gives a nice geometrical picture of how time evolution works in such systems. People use this framework a lot in dynamical systems, where they study questions like "is the time evolution chaotic?". The generalization to quantum physics: the basic formalism of quantum mechanics (states and observables) is an obvious generalization of the Hamiltonian formalism. It's less obvious how it's connected to the Lagrangian formalism, and way less obvious how it's connected to the Newtonian formalism. [Edit in response to a comment:] This might be too brief, but the basic story goes as follows. In Hamiltonian mechanics, observables are elements of a commutative algebra which carries a Poisson bracket $\{\cdot,\cdot\}$. The algebra of observables has a distinguished element, the Hamiltonian, which defines the time evolution via $d\mathcal{O}/dt = \{\mathcal{O}, H\}$. Thermal states are simply linear functionals on this algebra. (The observables are realized as functions on the phase space, and the bracket comes from the symplectic structure there. But the algebra of observables is what matters: you can recover the phase space from the algebra of functions.) On the other hand, in quantum physics, we have an algebra of observables which is not commutative.
But it still has a bracket $\{\cdot,\cdot\} = -\frac{i}{\hbar}[\cdot,\cdot]$ (the commutator), and it still gets its time evolution from a distinguished element $H$, via $d\mathcal{O}/dt = \{\mathcal{O}, H\}$. Likewise, thermal states are still linear functionals on the algebra.
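The "time evolution = flow on phase space" picture can be made concrete numerically. Below is a minimal sketch for a unit-mass harmonic oscillator (my choice of system and step size, not from the answer), integrated with a semi-implicit Euler step, which is symplectic and so nearly conserves the Hamiltonian:

```python
# Hamiltonian flow for H = p^2/2 + x^2/2 (unit mass and spring constant).
# Hamilton's equations: dx/dt = dH/dp = p,  dp/dt = -dH/dx = -x.
def hamiltonian(x, p):
    return 0.5 * p * p + 0.5 * x * x

def flow(x, p, dt, steps):
    # Semi-implicit (symplectic) Euler: update p first, then x with the new p.
    for _ in range(steps):
        p -= x * dt
        x += p * dt
    return x, p

x0, p0 = 1.0, 0.0
x1, p1 = flow(x0, p0, dt=1e-3, steps=10000)   # evolve to t = 10
print(hamiltonian(x0, p0), hamiltonian(x1, p1))
```

The point of the sketch: the state traces out a level set of $H$ (a circle in phase space), so the energy printed before and after the flow is nearly identical.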
https://api.stackexchange.com
if you are interested in conducting an analysis on sparse matrices, i would also consider davis's university of florida sparse matrix collection and the matrix market.
https://api.stackexchange.com
First let's deal with a false assumption: "similar to the way that the sum of a huge number of randomly selected 1's and −1's would never stray far from 0." Suppose we have a set of $n$ random variables $X_i$, each independent and with equal probability of being either $+1$ or $-1$. Define $$S = \sum_{i=1}^n X_i.$$ Then, yes, the expectation of $S$ is $0$: $$\langle S \rangle = \sum_{i=1}^n \langle X_i \rangle = \sum_{i=1}^n \left(\frac{1}{2}(+1) + \frac{1}{2}(-1)\right) = 0,$$ but the fluctuations can be significant. Since we can write $$S^2 = \sum_{i=1}^n X_i^2 + 2\sum_{i=1}^n \sum_{j=i+1}^n X_i X_j,$$ more manipulation of expectation values (remember, they always distribute over sums; also, the expectation of a product is the product of the expectations if and only if the factors are independent, which is the case for us when $i \neq j$) yields $$\langle S^2 \rangle = \sum_{i=1}^n \langle X_i^2 \rangle + 2\sum_{i=1}^n \sum_{j=i+1}^n \langle X_i X_j \rangle = \sum_{i=1}^n \left(\frac{1}{2}(+1)^2 + \frac{1}{2}(-1)^2\right) + 2\sum_{i=1}^n \sum_{j=i+1}^n (0)(0) = n.$$ The standard deviation will be $$\sigma_S = \left(\langle S^2 \rangle - \langle S \rangle^2\right)^{1/2} = \sqrt{n}.$$
This can be arbitrarily large. Another way of looking at this is that the more coins you flip, the less likely you are to be within a fixed range of breaking even. Now let's apply this to the slightly more advanced case of independent phases of photons. Suppose we have $n$ independent photons with phases $\phi_i$ uniformly distributed on $(0, 2\pi)$. For simplicity I will assume all the photons have the same amplitude, set to unity. Then the electric field will have strength $$E = \sum_{i=1}^n \mathrm{e}^{\mathrm{i}\phi_i}.$$ Sure enough, the average electric field will be $0$: $$\langle E \rangle = \sum_{i=1}^n \langle \mathrm{e}^{\mathrm{i}\phi_i} \rangle = \sum_{i=1}^n \frac{1}{2\pi} \int_0^{2\pi} \mathrm{e}^{\mathrm{i}\phi}\,\mathrm{d}\phi = \sum_{i=1}^n 0 = 0.$$ However, you see images not in electric field strength but in intensity, which is the square-magnitude of this: $$I = \lvert E \rvert^2 = \sum_{i=1}^n \mathrm{e}^{\mathrm{i}\phi_i} \mathrm{e}^{-\mathrm{i}\phi_i} + \sum_{i=1}^n \sum_{j=i+1}^n \left(\mathrm{e}^{\mathrm{i}\phi_i} \mathrm{e}^{-\mathrm{i}\phi_j} + \mathrm{e}^{-\mathrm{i}\phi_i} \mathrm{e}^{\mathrm{i}\phi_j}\right) = n + 2\sum_{i=1}^n \sum_{j=i+1}^n \cos(\phi_i - \phi_j).$$ Paralleling the computation above, we have $$\langle I \rangle = \langle n \rangle + 2\sum_{i=1}^n \sum_{j=i+1}^n \frac{1}{(2\pi)^2} \int_0^{2\pi}\!\!\int_0^{2\pi} \cos(\phi - \phi')\,\mathrm{d}\phi\,\mathrm{d}\phi' = n + 0 = n.$$ The more photons there are, the greater the intensity, even though there will be more cancellations. So what does this mean physically? The Sun is an incoherent source, meaning the photons coming from its surface really are independent in phase, so the above calculations are appropriate. This is in contrast to a laser, where the phases have a very tight relation to one another (they are all the same). Your eye (or rather each receptor in your eye) has an extended volume over which it is sensitive to light, and it integrates whatever fluctuations occur over an extended time (which you know to be longer than, say, $1/60$ of a second, given that most people don't notice faster refresh rates on monitors). In this volume over this time, there will be some average number of photons. Even if the volume is small enough that all opposite-phase photons will cancel (obviously two spatially separated photons won't cancel no matter their phases), the intensity of the photon field is expected to be nonzero. In fact, we can put some numbers to this. Take a typical cone in your eye to have a diameter of $2\ \mathrm{\mu m}$, as per Wikipedia. About $10\%$ of the Sun's $1400\ \mathrm{W/m^2}$ flux is in the $500\text{–}600\ \mathrm{nm}$ range, where the typical photon energy is $3.6\times10^{-19}\ \mathrm{J}$. Neglecting the effects of focusing, among other things, the number of photons in play in a single receptor is something like $$n \approx \frac{\pi (1\ \mathrm{\mu m})^2 (140\ \mathrm{W/m^2}) (0.02\ \mathrm{s})}{3.6\times10^{-19}\ \mathrm{J}} \approx 2\times10^7.$$ The fractional change in intensity from "frame to frame" or "pixel to pixel" in your vision would be something like $1/\sqrt{n} \approx 0.02\%$. Even give or take a few orders of magnitude, you can see that the Sun should shine steadily and uniformly.
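The $\sqrt{n}$ behaviour derived above is easy to verify numerically. A quick Monte Carlo sketch (the sample sizes are my choice):

```python
# Monte Carlo check: for N random +/-1 steps, <S> -> 0 but the typical
# magnitude of S grows like sqrt(N).
import random

random.seed(0)
N, trials = 10000, 500
sums = []
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(N))
    sums.append(s)

mean_s = sum(sums) / trials
rms_s = (sum(s * s for s in sums) / trials) ** 0.5
print(mean_s, rms_s)   # mean near 0, rms near sqrt(N) = 100
```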
https://api.stackexchange.com
i cannot help but think: this is divide & conquer, plain and simple! m/r is not divide & conquer. it does not involve the repeated application of an algorithm to a smaller subset of the previous input. it's a pipeline (a function specified as a composition of simpler functions) where pipeline stages are alternating map and reduce operations. different stages can perform different operations. so, is there (conceptual) novelty in mapreduce somewhere, or is it just a new implementation of old ideas useful in certain scenarios? mapreduce does not break new ground in the theory of computation -- it does not show a new way of decomposing a problem into simpler operations. it does show that particular simpler operations are practical for a particular class of problem. the mapreduce paper's contribution was:

- evaluating a pipeline of two well-understood, orthogonal operators that can be distributed efficiently and fault-tolerantly on a particular problem: creating a text index of a large corpus
- benchmarking map/reduce on that problem to show how much data is transferred between nodes and how latency differences in stages affect overall latency
- showing how to make the system fault tolerant, so machine failures during computation can be compensated for automatically
- identifying specific useful implementation choices and optimizations

some of the critiques fall into these classes:

"map/reduce does not break new ground in theory of computation." true. the original paper's contribution was that these well-understood operators with a specific set of optimizations had been successfully used to solve real problems more easily and fault-tolerantly than one-off solutions.

"this distributed computation doesn't easily decompose into map & reduce operations." fair enough, but many do.

"a pipeline of n map/reduce stages requires latency proportional to the number of reduce steps of the pipeline before any results are produced." probably true. the reduce operator does have to receive all its input before it can produce a complete output.

"map/reduce is overkill for this use-case." maybe. when engineers find a shiny new hammer, they tend to go looking for anything that looks like a nail. that doesn't mean that the hammer isn't a well-made tool for a certain niche.

"map/reduce is a poor replacement for a relational db." true. if a relational db scales to your data-set then wonderful for you -- you have options.
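the alternating map and reduce stages described above can be sketched in a few lines. this is a single-process illustration of the pipeline shape using word count, the canonical example; the function names are illustrative, not the distributed api from the mapreduce paper:

```python
from functools import reduce
from itertools import groupby

def map_phase(documents):
    # map: emit (key, value) pairs -- here, (word, 1) for every word seen
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # shuffle: group pairs by key, as the framework would between stages
    by_key = sorted(pairs, key=lambda kv: kv[0])
    for key, group in groupby(by_key, key=lambda kv: kv[0]):
        yield key, [v for _, v in group]

def reduce_phase(grouped):
    # reduce: combine all values for a key into a single result
    return {key: reduce(lambda a, b: a + b, values) for key, values in grouped}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))  # {'the': 3, 'fox': 2, ...}
```

note that the shuffle step is exactly where the "reduce must see all its input" latency argument above bites: nothing can leave `reduce_phase` until the grouping for a key is complete.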
https://api.stackexchange.com
what's the relationship between sigma and radius? i've read that sigma is equivalent to radius, but i don't see how sigma is expressed in pixels. or is "radius" just a name for sigma, not related to pixels? there are three things at play here: the variance ($\sigma^2$), the radius, and the number of pixels. since this is a 2-dimensional gaussian function, it makes sense to talk of the covariance matrix $\boldsymbol{\Sigma}$ instead. be that as it may, those three concepts are only weakly related. first of all, the 2-d gaussian is given by the equation:

$$ g({\bf z}) = \frac{1}{\sqrt{(2\pi)^2 |\boldsymbol{\Sigma}|}} \, e^{-\frac{1}{2}({\bf z} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} ({\bf z} - \boldsymbol{\mu})} $$

where ${\bf z}$ is a column vector containing the $x$ and $y$ coordinates in your image, so ${\bf z} = \begin{bmatrix} x \\ y \end{bmatrix}$, and $\boldsymbol{\mu}$ is a column vector codifying the mean of your gaussian function in the $x$ and $y$ directions, $\boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}$.

example: now, let us say that we set the covariance matrix $\boldsymbol{\Sigma} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $\boldsymbol{\mu} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$. i will also set the number of pixels to be $100 \times 100$. furthermore, my 'grid', where i evaluate this pdf, is going to run from $-10$ to $10$ in both $x$ and $y$. this means i have a grid resolution of $\frac{10 - (-10)}{100} = 0.2$, but this is completely arbitrary. with those settings, i will get the probability density function image on the left. now, if i change the 'variance' (really, the covariance) such that $\boldsymbol{\Sigma} = \begin{bmatrix} 9 & 0 \\ 0 & 9 \end{bmatrix}$ and keep everything else the same, i get the image on the right. the number of pixels is still the same for both, $100 \times 100$, but we changed the variance. suppose instead we do the same experiment with $20 \times 20$ pixels, still running from $-10$ to $10$. then my grid has a resolution of $\frac{10 - (-10)}{20} = 1$. if i use the same covariances as before, i get this: this is how you must understand the interplay between those variables. if you would like the code, i can post that here as well. how do i choose sigma? the choice of the variance / covariance matrix of your gaussian filter is extremely application dependent. there is no 'right' answer. that is like asking what bandwidth one should choose for a filter; again, it depends on your application. typically, you want to choose a gaussian filter such that you are nulling out a considerable amount of the high-frequency components in your image. one thing you can do to get a good measure is compute the 2d dft of your image and overlay its coefficients with your 2d gaussian image. this will tell you which coefficients are being heavily penalized. for example, if your gaussian image has a covariance so wide that it encompasses many high-frequency coefficients of your image, then you need to make its covariance elements smaller.
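to make the interplay concrete, here is a small sketch (plain python, no plotting) that evaluates the isotropic, zero-mean gaussian on a square grid. `n_pixels`, `extent`, and `sigma` correspond to the pixel count, the $[-10, 10]$ range, and $\sqrt{\sigma^2}$ from the example; the function name and the isotropic normalization $1/(2\pi\sigma^2)$ are my own simplification of the general covariance-matrix formula:

```python
import math

def gaussian_2d(n_pixels, extent, sigma):
    """evaluate an isotropic, zero-mean 2-d gaussian on an n_pixels x n_pixels
    grid spanning [-extent, extent] in both x and y."""
    step = 2.0 * extent / n_pixels             # grid resolution: 20/100 = 0.2 above
    norm = 1.0 / (2.0 * math.pi * sigma ** 2)  # isotropic normalization
    grid = []
    for i in range(n_pixels):
        y = -extent + (i + 0.5) * step         # pixel-center coordinate
        row = []
        for j in range(n_pixels):
            x = -extent + (j + 0.5) * step
            row.append(norm * math.exp(-(x * x + y * y) / (2.0 * sigma ** 2)))
        grid.append(row)
    return grid

g_narrow = gaussian_2d(100, 10, 1.0)  # sigma^2 = 1 case
g_wide   = gaussian_2d(100, 10, 3.0)  # sigma^2 = 9 case: wider surface, lower peak
g_coarse = gaussian_2d(20, 10, 1.0)   # same function, coarser sampling
```

changing `sigma` changes the shape of the surface; changing `n_pixels` only changes how finely the same surface is sampled, which is the distinction the answer is making.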
https://api.stackexchange.com
back in the pleistocene (the 1960s or earlier), logic was implemented with bipolar transistors. even more specifically, they were npn because, for some reasons i'm not going to get into, npn were faster. back then it made sense to someone that the positive supply voltage would be called vcc, where the "c" stands for collector. sometimes (but less commonly) the negative supply was called vee, where "e" stands for emitter. when fet logic came about, the same kind of naming was used, but now the positive supply was vdd (drain) and the negative vss (source). with cmos this makes no sense, but it persists anyway. note that the "c" in cmos stands for "complementary". that means both n and p channel devices are used in about equal numbers. a cmos inverter is just a p-channel and an n-channel mosfet in its simplest form. with roughly equal numbers of n and p channel devices, drains aren't more likely to be positive than sources, or vice versa. however, the vdd and vss names have stuck for historical reasons. technically vcc/vee is for bipolar and vdd/vss for fets, but in practice today vcc and vdd mean the same, and vee and vss mean the same.
https://api.stackexchange.com
rigorous arguments are very similar to computer programming -- you need to write a proof which can (in principle) ultimately be carried out in a formal system. this is not easy, and requires defining many data structures (definitions) and writing many subroutines (lemmas), which you use again and again. then you prove many results along the way, only some of which are of general usefulness. this activity is extremely illuminating, but it is time consuming and tedious, and requires a great deal of time and care. rigorous arguments also introduce a lot of pedantic distinctions which are extremely important for the mathematics, but not so important in the cases one deals with in physics. in physics, you never have enough time, and we must always have an understanding of the mathematics that is only just precise enough, and that can be transmitted maximally quickly to the next generation. often this means that you forsake full rigor, and introduce notational shortcuts and imprecise terminology that make turning the argument rigorous difficult. some of the arguments in physics, though, are pure magic. for me, the replica trick is the best example. if this ever gets a rigorous version, i will be flabbergasted.

1) what are the most important and the oldest insights (notions, results) from physics that are still lacking rigorous mathematical formulation / proofs?

here are old problems which could benefit from rigorous analysis:

mandelstam's double-dispersion relations: the scattering amplitude for 2-particle to 2-particle scattering can be analytically expanded as an integral over the imaginary discontinuity $\rho(s)$ in the s parameter, and then this discontinuity $\rho(s)$ can be written as an integral over the t parameter, giving a double discontinuity $\rho(s, t)$. if you go the other way and expand the discontinuity in t first, then in s, you get the same function. why is that? it was argued from perturbation theory by mandelstam, and there was some work in the 1960s and early 1970s, but it was never solved as far as i know.

the oldest, dating back centuries: is the (newtonian, comet- and asteroid-free) solar system stable for all time? this is a famous one. rigorous bounds on where integrability fails would help. the kam theorem might be the best answer possible, but it doesn't answer
the question, really, since you don't know whether the planetary perturbations are big enough to lead to instability for 8 planets, some big moons, plus the sun.

continuum statistical mechanics: what is a thermodynamic ensemble for a continuum field? what is the continuum limit of a statistical distribution? what are the continuous statistical field theories here?

what are the generic topological solitonic solutions to classical nonlinear field equations? given a classical equation, how do you find the possible topological solitons? can they all be generated continuously from given initial data? for a specific example, consider the solar plasma: are there localized magneto-hydrodynamic solitons? there are a bazillion problems here, but my imagination fails.

2) the endeavor of rigorous mathematical explanations, formulations, and proofs for notions and results from physics is mainly taken up by mathematicians. what are examples where this endeavor was beneficial to physics itself?

there are a few examples, but i think they are rare:

penrose's rigorous proof of the existence of singularities given a closed trapped surface is the canonical example: it was a rigorous argument, derived from riemannian geometry ideas, and it was extremely important for clarifying what's going on in black holes.

quasi-periodic tilings, also associated with penrose, first arose in hao wang's work in pure logic, where he was able to demonstrate that an appropriate tiling with complicated matching edges could do full computation. the number of tiles was reduced until penrose gave only 2, and finally physicists discovered quasicrystals. this is spectacular, because here you start in the most esoteric non-physics part of pure mathematics, and you end up at the most hands-on of experimental systems.

kac-moody algebras: these came up in half-mathematics, half early string theory. the results became physical in the 1980s when people started getting interested in group manifold models.

the ade classification from lie group theory (and all of lie group theory) in mathematics is essential in modern physics. looking back further, gell-mann got su(3) quark symmetry by generalizing isospin in pure mathematics.

obstruction theory was essential in understanding how to formulate 3d topological field theories (this was the subject of a recent very interesting question), which have application in the fractional quantum hall effect. this is very abstract mathematics connected to laboratory physics, but only certain simpler parts of the general mathematical
machinery are used.

3) what are examples where insisting on rigour delayed progress in physics?

this has happened several times, unfortunately.

statistical mechanics: the lack of a rigorous proof of boltzmann ergodicity delayed the acceptance of the idea of statistical equilibrium. the rigorous arguments were faulty -- for example, it is easy to prove that there are no phase transitions in finite volume (since the boltzmann distribution is analytic), so this was considered a strike against boltzmann's theory, since we see phase transitions. you could also prove all sorts of nonsense about mixing entropy (which was fixed by correctly dealing with classical indistinguishability). since there was no proof that fields would come to thermal equilibrium, some people believed that blackbody light was not thermal. this delayed acceptance of planck's theory, and einstein's. statistical mechanics was not fully accepted until onsager's ising model solution in 1944.

path integrals: this is the most notorious example. these were accepted by some physicists immediately in the 1950s, although the formalism wasn't at all close to complete until candlin formulated grassmann variables in 1956. past this point, they could have become standard, but they didn't. the formalism had a bad reputation for giving wrong results, mostly because people were uncomfortable with the lack of rigor, so that they couldn't trust the method. i heard a notable physicist complain in the 1990s that the phase-space path integral (with p and q) couldn't possibly be correct because p and q don't commute, and in the path integral they do because they are classical numbers (no, actually, they don't -- their value in an insertion depends discontinuously on their time order in the proper way). it wasn't until the early 1970s that physicists became completely comfortable with the method, and it took a lot of selling to overcome the resistance.

quantum field theory construction: the rigorous methods of the 1960s built up a toolbox of complicated distributional methods and perturbation-series resummation which turns out to be the least useful way of looking at the thing. it's now c*-algebras and operator-valued distributions. the correct path is through the path integral the wilsonian way, and this is closer to the original point of view of feynman and schwinger. but a school of rigorous physicists in the 1960s erected large barriers to entry in field theory work, and
progress in field theory was halted for a decade, until rigor was thrown out again in the 1970s. but a proper rigorous formulation of quantum fields is still missing. in addition to this, there are countless no-go theorems that delayed the discovery of interesting things:

time cannot be an operator (pauli): this delayed the emergence of the path-integral particle formulation due to feynman and schwinger. here, the time variable on the particle path is path-integrated just like anything else.

von neumann's proof of no hidden variables: this has a modern descendant in the kochen-specker theorem about entangled sets of qubits. it delayed the bohm theory, which faced massive resistance at first.

no charges which transform nontrivially under the lorentz group (coleman-mandula): this theorem had both positive and negative implications. it killed su(6) theories (good), but it made people miss supersymmetry (bad).

quasicrystal order is impossible: this "no go" theorem is the standard proof that periodic order (the general definition of crystals) is restricted to the standard space groups. this made quasicrystals seem like bunk. the assumption that is violated is the assumption of strict periodicity.

no supergravity compactifications with chiral fermions (witten): this theorem assumed manifold compactification, and missed orbifolds of 11d sugra, which give rise to the heterotic strings (also witten, with horava, so witten solved the problem).

4) what are examples where solid mathematical understanding of certain issues from physics came from further developments in physics itself? (in particular, i am interested in cases where rigorous mathematical understanding of issues from classical mechanics required quantum mechanics, and also in cases where progress in physics was crucial to rigorous mathematical solutions of questions in mathematics not originating in physics.
) there are several examples here:

understanding the adiabatic theorem in classical mechanics (that the action is an adiabatic invariant) came from quantum mechanics, since it was clear that it was the action that needed to be quantized, and this wouldn't make sense without it being an adiabatic invariant. i am not sure who proved the adiabatic theorem, but this is exactly what you were asking for -- an insightful classical theorem that came from quantum mechanics (although some decades before modern quantum mechanics).

the understanding of quantum anomalies came directly from a physical observation (the high rate of neutral pion decay to two photons). clarifying how this happens through feynman diagrams, even though a naive argument says it is forbidden, led to a complete understanding of all anomalous terms in terms of topology. this in turn led to the development of chern-simons theory, and the connection with knot polynomials, discovered by witten, earning him a fields medal.

distribution theory originated in dirac's work to try to give a good foundation for quantum mechanics. the distributional nature of quantum fields was understood by bohr and rosenfeld in the 1930s, and the mathematical theory was essentially taken from physics into mathematics. dirac already defined distributions using test functions, although i don't think he was pedantic about the test-function space properties.

5) the role of rigor is intensely discussed in popular books and blogs. please supply references (or better, annotated references) to academic studies of the role of mathematical rigour in modern physics.

i can't do this, because i don't know any. but for what it's worth, i think it's a bad idea to try to do too much rigor in physics (or even in some parts of mathematics). the basic reason is that rigorous formulations have to be completely standardized in order for the proofs of different authors to fit together without seams, and this is only possible in very long hindsight, when the best definitions become apparent. in the present, we're always muddling through fog. so there is always a period where different people have slightly different definitions of what they mean, and the proofs don't quite work, and mistakes can happen. this isn't so terrible, so long as the methods are insightful. the real problem is the massive barrier to entry presented by rigorous definitions.
the actual arguments are always much less daunting than the superficial impression you get from reading the proof, because most of the proof is setting up machinery to make the main idea go through. emphasizing the rigor can put undue emphasis on the machinery rather than the idea. in physics, you are trying to describe what a natural system is doing, and there is no time to waste in studying sociology. so you can't learn all the machinery the mathematicians standardize on at any one time; you just learn the ideas. the ideas are sufficient for getting on, but they aren't sufficient to convince mathematicians you
know what you're talking about (since you have a hard time following the conventions). this is improved by the internet, since the barriers to entry have fallen dramatically, and there might be a way to merge rigorous and nonrigorous thinking today in ways that were not possible in earlier times.
https://api.stackexchange.com
this is a reference-resources question masquerading as an answer, given the constraints of the site. the question hardly belongs here, and has been duplicated on the overflow cousin site. it might well be deleted. there have been schools and proceedings on the subject:

integrability: from statistical systems to gauge theory, lecture notes of the les houches summer school, volume 106, june 2016. patrick dorey, gregory korchemsky, nikita nekrasov, volker schomerus, didina serban, and leticia cugliandolo, eds. print publication date: 2019. isbn-13: 9780198828150. published to oxford scholarship online: september 2019. doi: 10.1093/oso/9780198828150.001.0001

including, specifically:

integrability in 2d field theory / sigma-models, sergei l. lukyanov & alexander b. zamolodchikov. doi: 10.1093/oso/9780198828150.003.0006

integrability in sigma-models, k. zarembo. doi: 10.1093/oso/9780198828150.003.0005

i am partial to integrable 2d sigma models: quantum corrections to geometry from rg flow, ben hoare, nat levine, arkady tseytlin, nucl. phys. b949 (2019) 114798, but that's only by dint of personal connectivity...
https://api.stackexchange.com
having my own 6-year-old and having successfully explained this, here's my advice from experience: don't try to explain gravity as a mysterious force. it doesn't make sense to most adults (sad, but true! talk to non-physicists about it and you'll see), and it won't make sense to a 6yo. the reason this won't work is that it requires inference from general principles to specific applications, plus it requires advanced abstract thinking to even grasp the concept of invisible forces. those are not skills a 6-year-old has at their fingertips. most things they're figuring out right now are piecemeal, and they won't start fitting their experiences to best-fit conscious models of reality for a few years yet. do exploit a 6-year-old's tendency to take descriptions of actions-that-happen at face value, as simple piecemeal facts. stuff pulls other stuff to itself. when you have a lot of stuff, it pulls other things a lot. the bigger things pull the smaller things to them. them having previously understood the shape of the solar system and a loose grasp of the fact of orbits (not how they work — that's a different piece — just that planets and moons move in "circular" tracks around heavier things like the sun and earth) may be useful before embarking on these parts of the conversation. i'm not sure, but that was a thing my 6yo had already started to grasp at this point. these conversations were also mixed in with our conversations about how earth formed from debris, and how the pull was involved in making that happen, and how that made the pull stronger and stronger. so i can't really separate out that background; it may also help, or be necessary. don't try to correct a 6-year-old's confusion about up and down being relative, but use it instead. there's a lot of earth under us, and it pulls us down when we jump. if we jumped off the side, it would pull us back sideways. if we fell off the bottom, it would pull us back up. you can follow this up later with a socratic dialogue about the relative nature of up and down, but don't muddy the waters with that immediately. that won't have any purchase until they accept the fact that earth will pull you "back up" if you fall off. build it up over a series of conversations. they won't get it
the first time, or the tenth, but pieces of it will stick. don't try to instill a grasp of the overall working model. if you can successfully give them some single, disconnected facts that they actually believe, putting them together will happen as they age and mature and get more exposure to this stuff. all this is assuming a decently smart but not prodigious child, of course. (a 6-year-old prodigy can probably grasp a lay adult's model of gravity, but if that's who you're dealing with then you don't need to adjust your teaching.) for some more context, this was also after my child's class started experimenting with magnets at school. i was inspired to attempt to explain gravity when my kid told me that trees didn't float off into space because the earth was a giant magnet. (true! but not why trees don't float away.) comparing gravity and magnetism might help, to give them an example of an invisible pull that they can feel, but it might just confuse the subject a lot too, since it took a lot of work (over multiple conversations) to convince my own kid that trees aren't sticking to the ground because of magnetism, even if the earth is a giant magnet. and, a final piece of advice that's incidental, but can help: once you've had a few of these conversations, play kerbal space program while they watch. (again, this comes from experience. my kid loves to watch ksp.) seeing a practical example of gravity at work in its natural environment will go a long way toward cementing the previous conversations. it may sound like a sign-off joke, but seeing a system moving and being manipulated makes a huge difference to a young child's comprehension, because it is no longer abstract and doesn't require building mental abstractions to grasp, the way showing them a globe does.
https://api.stackexchange.com
as far as i know, no. but the vcf.gz files are behind an http server that supports byte-range requests, so you can use tabix or any related api:

$ tabix " " "22:17265182-17265182"
22  17265182  .  A  T  762.04  PASS  AC=1;AF=4.78057e-06;AN=209180;BaseQRankSum=-4.59400e+00;ClippingRankSum=2.18000e+00;DP=4906893;FS=1.00270e+01;InbreedingCoeff=4.40000e-03;MQ=3.15200e+01;MQRankSum=1.40000e+00;QD=1.31400e+01;ReadPosRankSum=2.23000e-01;SOR=9.90000e-02;VQSLOD=-5.12800e+00;VQSR_culprit=MQ;GQ_HIST_ALT=0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1;DP_HIST_ALT=0|0|0|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0|0;AB_HIST_ALT=0|0|0|0|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0;GQ_HIST_ALL=1591|589|120|301|650|589|1854|2745|1815|4297|5061|2921|10164|1008|6489|1560|7017|457|6143|52950;DP_HIST_ALL=2249|1418|6081|11707|16538|9514|28624|23829|7391|853|95|19|1|0|0|1|0|1|0|0;AB_HIST_ALL=
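records returned this way are plain vcf lines, so the semicolon-separated info column can be split into key/value pairs with a few lines of python. this is a minimal sketch; the helper name `parse_info` is my own, not part of tabix or any vcf library:

```python
def parse_info(info):
    """split a vcf info field like "AC=1;AF=4.78057e-06;AN=209180" into a dict,
    converting values to float where possible; bare keys become True flags."""
    out = {}
    for item in info.split(";"):
        key, sep, value = item.partition("=")
        if not sep:
            out[key] = True          # flag-style entry with no value
            continue
        try:
            out[key] = float(value)  # numeric values: AC, AF, DP, ...
        except ValueError:
            out[key] = value         # non-numeric, e.g. VQSR_culprit=MQ
    return out

record_info = "AC=1;AF=4.78057e-06;AN=209180;VQSR_culprit=MQ"
parsed = parse_info(record_info)
```

for real work, a dedicated parser (for example pysam's variant-file support) is more robust, since it honors the type declarations in the vcf header, but the sketch shows the shape of the data you get back.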
https://api.stackexchange.com