Suppose organisms have to use their energy for two things: staying alive via repair and maintenance (somatic maintenance) and making offspring (reproductive investment). Then any energy devoted to one takes away from the other. If an individual carries a gene that makes it devote all of its energy to somatic maintenance, its fitness will be very low (probably zero!) and that gene will not spread. If the level of maintenance required to live forever costs more energy than an individual can spare without suffering low fitness (very likely), or can even acquire and efficiently convert in the first place (also very likely), then high-maintenance alleles will not spread, and aging and death will continue to occur.

To go a little further, it is common for the sexes to age differently (this is what I work on), and one possible explanation is that the sexes favour different balances of the trade-off between somatic maintenance and reproductive investment. This can lead to conflict over the evolution of genes affecting the balance and slow the rates of evolution to sex-specific optima. This paper provides a good review of the area.

To summarise: evolution has not managed to get rid of death via genetic disease etc. (intrinsic mortality) because the effect is only weakly selected against, those alleles may provide some early-life benefit, and resource limitation may also reduce the potential to increase lifespan due to trade-offs with reproductive effort. Adaptive evolution is not about the survival of the fittest but the reproduction of the fittest: the fittest allele is the one which spreads the most effectively.

Edit: thanks to Remi.b for also pointing out some other considerations. One is altruistic aging: aging for the good of the population (the population is likely to contain related individuals; you are related to all other humans to some degree). In this model aging is an adaptive process (unlike in mutation accumulation, where it is just a consequence of weak selection). By dying, an individual makes space for its offspring and relatives to survive, because resources are then less likely to limit the population. This prevents excessive population growth, which could lead to population crashes, and so, by dying earlier, an individual promotes the likelihood that its progeny will survive. Arguments of altruistic sacrifice are often hard to support, but recent work suggests this is a more plausible model than once thought.

Evolvability theories also suggest that aging is an adaptive process. These suggest that populations composed of a mixture of young and old have biases in how well adapted their members are: younger individuals are better adapted, because they were produced more recently, so the current environment is likely to be similar to the one they are favoured in. Thus, by removing the less well adapted individuals from a population via senescence and freeing up resources for younger, better adapted individuals, a population evolves more rapidly towards its optimal state.
First, we need to understand what a Markov chain is. Consider the following weather example from Wikipedia. Suppose that weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we know that

$P(\text{next day is sunny} \mid \text{today is rainy}) = 0.50.$

Since the next day's weather is either sunny or rainy, it follows that

$P(\text{next day is rainy} \mid \text{today is rainy}) = 0.50.$

Similarly, let

$P(\text{next day is rainy} \mid \text{today is sunny}) = 0.10.$

Therefore, it follows that

$P(\text{next day is sunny} \mid \text{today is sunny}) = 0.90.$

The above four numbers can be compactly represented as a transition matrix, which gives the probabilities of the weather moving from one state to another:

$P = \begin{bmatrix} & S & R \\ S & 0.9 & 0.1 \\ R & 0.5 & 0.5 \end{bmatrix}$

We might ask several questions, whose answers follow.

Q1: If the weather is sunny today, what is the weather likely to be tomorrow?

A1: Since we do not know what is going to happen for sure, the best we can say is that there is a $90\%$ chance it will be sunny and a $10\%$ chance it will be rainy.

Q2: What about two days from today?

A2: One-day prediction: $90\%$ sunny, $10\%$ rainy. Two days from now, either it is sunny tomorrow and sunny again the day after (probability $0.9 \times 0.9$), or it is rainy tomorrow and sunny the day after (probability $0.1 \times 0.5$). Therefore, the probability that the weather will be sunny in two days is

$P(\text{sunny 2 days from now}) = 0.9 \times 0.9 + 0.1 \times 0.5 = 0.81 + 0.05 = 0.86.$

Similarly, the probability that it will be rainy is

$P(\text{rainy 2 days from now}) = 0.1 \times 0.5 + 0.9 \times 0.1 = 0.05 + 0.09 = 0.14.$

In linear algebra terms, these calculations sum over all the transitions from one step to the next (sunny-to-sunny, sunny-to-rainy, rainy-to-sunny, rainy-to-rainy), each weighted by its probability. The probability of a future state (at $t+1$ or $t+2$), given the probability mass function over the states (sunny or rainy) at time zero ($t_0$), is obtained by simple matrix multiplication.

If you keep forecasting the weather like this, you will notice that eventually the $n$-th day forecast, where $n$ is very large (say $30$), settles to the following 'equilibrium' probabilities:

$P(\text{sunny}) = 0.833$ and $P(\text{rainy}) = 0.167.$

In other words, your forecasts for the $n$-th day and the $(n+1)$-th day are the same. In addition, you can check that the equilibrium probabilities do not depend on the weather today: you get the same forecast whether you start by assuming that the weather today is sunny or rainy.

The above example will only work if the state transition probabilities satisfy several conditions which I will not discuss here. But notice the following feature of this 'nice' Markov chain (nice = transition probabilities satisfy those conditions): irrespective of the initial starting state, we eventually reach an equilibrium probability distribution over states.

Markov chain Monte Carlo exploits this feature as follows. We want to generate random draws from a target distribution. We then identify a way to construct a 'nice' Markov chain such that its equilibrium probability distribution is our target distribution. If we can construct such a chain, then we arbitrarily start from some point and iterate the Markov chain many times (like forecasting the weather $n$ times). Eventually, the draws we generate appear as if they are coming from our target distribution. We then approximate the quantities of interest (e.g. the mean) by taking the sample average of the draws, after discarding a few initial draws; this is the Monte Carlo component. There are several ways to construct such 'nice' Markov chains (e.g. the Gibbs sampler, the Metropolis–Hastings algorithm).
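The forecasting calculation above is just repeated matrix multiplication, so it is easy to check numerically. A minimal sketch with NumPy (the matrix and numbers are exactly those of the weather example):

```python
import numpy as np

# Transition matrix from the weather example: rows are today's state
# (sunny, rainy), columns are tomorrow's state.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two-day forecast starting from a sunny day: the first row of P^2.
two_day = np.linalg.matrix_power(P, 2)
print(two_day[0])  # approximately [0.86 0.14]

# Iterating many times settles to the equilibrium distribution,
# regardless of the starting state: both rows of P^30 approach
# [0.833 0.167].
many_day = np.linalg.matrix_power(P, 30)
print(many_day)
```

You can verify the equilibrium analytically as well: the stationary distribution $\pi$ solves $\pi = \pi P$, giving $\pi = (5/6, 1/6) \approx (0.833, 0.167)$.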
In a previous answer on the Theoretical Computer Science site, I said that category theory is the "foundation" for type theory. Here, I would like to say something stronger: category theory is type theory. Conversely, type theory is category theory. Let me expand on these points.

Category theory is type theory. In any typed formal language, and even in normal mathematics using informal notation, we end up declaring functions with types $f : A \to B$. Implicit in writing that is the idea that $A$ and $B$ are some things called "types" and $f$ is a "function" from one type to another. Category theory is the algebraic theory of such "types" and "functions". (Officially, category theory calls them "objects" and "morphisms" so as to avoid treading on the set-theoretic toes of the traditionalists, but increasingly I see category theorists throwing such caution to the wind and using the more intuitive terms "type" and "function". But be prepared for protests from the traditionalists when you do so.)

We have all been brought up on set theory from high school onwards. So we are used to thinking of types such as $A$ and $B$ as sets, and of functions such as $f$ as set-theoretic mappings. If you never thought of them that way, you are in good shape: you have escaped set-theoretic brainwashing. Category theory says that there are many kinds of types and many kinds of functions, so the idea of types as sets is limiting. Instead, category theory axiomatizes types and functions in an algebraic way. Basically, that is what category theory is: a theory of types and functions. It does get quite sophisticated, involving high levels of abstraction. But if you can learn it, you will acquire a deep understanding of types and functions.

Type theory is category theory. By "type theory" I mean any kind of typed formal language, based on rigid rules of term formation which make sure that everything type-checks. It turns out that, whenever we work in such a language, we are working in a category-theoretic structure. Even if we use set-theoretic notations and think set-theoretically, we still end up writing stuff that makes sense categorically. That is an amazing fact.

Historically, Dana Scott may have been the first to realize this. He worked on producing semantic models of programming languages based on typed (and untyped) lambda calculus. The traditional set-theoretic models were inadequate for this purpose, because programming languages involve unrestricted recursion, which set theory lacks. Scott invented a series of semantic models that captured programming phenomena, and came to the realization that typed lambda calculus exactly represents a class of categories called cartesian closed categories. There are plenty of cartesian closed categories that are not "set-theoretic", but typed lambda calculus applies to all of them equally. Scott wrote a nice essay called "Relating theories of the lambda calculus" explaining what is going on, parts of which seem to be available on the web. The original article was published in a volume called "To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism", Academic Press, 1980. Berry and Curien came to the same realization, probably independently. They defined a categorical abstract machine (CAM) to use these ideas in implementing functional languages, and the language they implemented was called "CAML", which is the underlying framework of Microsoft's F#.

Standard type constructors like $\times$, $\to$, $\mathit{List}$, etc. are functors: they map not only types to types, but also functions between types to functions between types. Polymorphic functions preserve all such functions resulting from functor actions. Category theory was invented in the 1940s by Eilenberg and Mac Lane precisely to formalize the concept of polymorphic functions. They called them "natural transformations": "natural" because they are the only ones that you can write in a type-correct way using type variables. So one might say that category theory was invented precisely to formalize polymorphic programming languages, even before programming languages came into being!
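The "polymorphic functions are natural transformations" point can be made concrete with a tiny check. Here is an illustrative sketch in Python (standing in for a typed language): `list_fmap` is the functor action of the list constructor on functions, and a polymorphic function like `reverse`, which never inspects its elements, commutes with every such action. That commuting square is exactly naturality.

```python
# The list type constructor acts on functions via map:
# this is the functor action List(f) : List A -> List B.
def list_fmap(f, xs):
    return [f(x) for x in xs]

# reverse : List A -> List A is polymorphic: it never looks at elements.
def reverse(xs):
    return xs[::-1]

# Naturality: mapping then reversing equals reversing then mapping,
# for every function f and every list xs.
f = lambda n: n * n
xs = [1, 2, 3, 4]
print(list_fmap(f, reverse(xs)) == reverse(list_fmap(f, xs)))  # True
```

Any function defined uniformly in the element type will pass this check; a function that branches on the element values generally will not, which is why type-variable polymorphism and naturality go together.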
A set-theoretic traditionalist has no knowledge of the functors and natural transformations that are going on under the surface when he uses set-theoretic notations. But, as long as he is using the type system faithfully, he is really doing categorical constructions without being aware of them.

All said and done, category theory is the quintessential mathematical theory of types and functions. So all programmers can benefit from learning a bit of category theory, especially functional programmers. Unfortunately, there do not seem to be any textbooks on category theory targeted specifically at programmers. The "category theory for computer science" books are typically aimed at theoretical computer science students and researchers. The book by Benjamin Pierce, Basic Category Theory for Computer Scientists, is perhaps the most readable of them.

However, there are plenty of resources on the web targeted at programmers. The HaskellWiki page can be a good starting point. At the Midlands Graduate School, we have lectures on category theory (among others). Graham Hutton's course was pegged as a "beginner" course, and mine was pegged as an "advanced" course, but both cover essentially the same content, going to different depths. The University of Chalmers has a nice resource page of books and lecture notes from around the world. The enthusiastic blog of "sigfpe" also provides a lot of good intuitions from a programmer's point of view.

The basic topics you would want to learn are:

- the definition of categories, and some examples of categories
- functors, and examples of them
- natural transformations, and examples of them
- the definitions of products, coproducts and exponents (function spaces), and of initial and terminal objects
- adjunctions
- monads, algebras and Kleisli categories

My own lecture notes at the Midlands Graduate School cover all these topics except the last one (monads). There are plenty of other resources available for monads these days, so that is not a big loss. The more mathematics you know, the easier it is to learn category theory. Because category theory is a general theory of mathematical structures, it helps to know some examples in order to appreciate what the definitions mean. (When I learnt category theory, I had to make up my own examples using my knowledge of programming language semantics, because the standard textbooks only had mathematical examples, which I didn't know anything about.) Then came the brilliant book by Lambek and Scott called "Introduction to Higher Order Categorical Logic", which related category theory to type systems (what they call "logic"). It is now possible to understand category theory just by relating it to type systems, even without knowing a lot of examples. A lot of the resources I mentioned above use this approach to explain category theory.
Here is a scatterplot of some multivariate data (in two dimensions): what can we make of it when the axes are left out?

Introduce coordinates that are suggested by the data themselves. The origin will be at the centroid of the points (the point of their averages). The first coordinate axis (blue in the next figure) will extend along the "spine" of the points, which (by definition) is any direction in which the variance is greatest. The second coordinate axis (red in the figure) will extend perpendicularly to the first one. (In more than two dimensions, it will be chosen in that perpendicular direction in which the variance is as large as possible, and so on.)

We need a scale. The standard deviation along each axis will do nicely to establish the units along the axes. Remember the 68-95-99.7 rule: about two-thirds (68%) of the points should be within one unit of the origin (along the axis); about 95% should be within two units. That makes it easy to eyeball the correct units. For reference, this figure includes the unit circle in these units. That doesn't really look like a circle, does it? That's because this picture is distorted (as evidenced by the different spacings among the numbers on the two axes). Let's redraw it with the axes in their proper orientations (left to right and bottom to top) and with a unit aspect ratio, so that one unit horizontally really does equal one unit vertically. You measure the Mahalanobis distance in this picture, rather than in the original.

What happened here? We let the data tell us how to construct a coordinate system for making measurements in the scatterplot. That's all it is. Although we had a few choices to make along the way (we could always reverse either or both axes, and in rare situations the directions along the "spines", the principal directions, are not unique), they do not change the distances in the final plot.

Technical comments. (Not for grandma, who probably started to lose interest as soon as numbers reappeared on the plots, but to address the remaining questions that were posed.)

Unit vectors along the new axes are the eigenvectors (of either the covariance matrix or its inverse). We noted that undistorting the ellipse to make a circle divides the distance along each eigenvector by the standard deviation: the square root of the covariance. Letting $C$ stand for the covariance function, the new (Mahalanobis) distance between two points $x$ and $y$ is the distance from $x$ to $y$ divided by the square root of $C(x-y, x-y)$. The corresponding algebraic operation, thinking now of $C$ in terms of its representation as a matrix and of $x$ and $y$ in terms of their representations as vectors, is written
$$\sqrt{(x-y)'\, C^{-1}\, (x-y)}.$$
This works regardless of what basis is used to represent vectors and matrices. In particular, this is the correct formula for the Mahalanobis distance in the original coordinates.

The amounts by which the axes are expanded in the last step are the (square roots of the) eigenvalues of the inverse covariance matrix. Equivalently, the axes are shrunk by the (roots of the) eigenvalues of the covariance matrix. Thus, the more the scatter, the more shrinking is needed to convert that ellipse into a circle.

Although this procedure always works with any dataset, it looks this nice (the classical football-shaped cloud) for data that are approximately multivariate normal. In other cases, the point of averages might not be a good representation of the center of the data, or the "spines" (general trends in the data) will not be identified accurately using variance as a measure of spread.

The shifting of the coordinate origin, rotation, and expansion of the axes collectively form an affine transformation. Apart from that initial shift, this is a change of basis from the original one (using unit vectors pointing in the positive coordinate directions) to the new one (using a choice of unit eigenvectors). There is a strong connection with principal components analysis (PCA). That alone goes a long way towards explaining the "where does it come from" and "why" questions, if you weren't already convinced by the elegance and utility of letting the data determine the coordinates you use to describe them and measure their differences.

For multivariate normal distributions (where we can carry out the same construction using properties of the probability density instead of the analogous properties of the point cloud), the Mahalanobis distance (to the new origin) appears in place of the "$x$" in the expression $\exp(-\frac{1}{2} x^2)$ that characterizes the probability density of the standard normal distribution. Thus, in the new coordinates, a multivariate normal distribution looks standard normal when projected onto any line through the origin. In particular, it is standard normal in each of the new coordinates. From this point of view, the only substantial sense in which multivariate normal distributions differ from one another is in terms of how many dimensions they use. (Note that this number of dimensions may be, and sometimes is, less than the nominal number of dimensions.)
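The two descriptions above, the matrix formula and the "rotate onto the spines, then rescale by the standard deviations" picture, can be checked against each other numerically. A small sketch with NumPy (the data and covariance here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0], [[4.0, 1.8], [1.8, 1.0]], size=500)

C = np.cov(data, rowvar=False)   # covariance matrix of the point cloud
x, y = data[0], data[1]
d = x - y

# Mahalanobis distance straight from the formula in the text.
maha = np.sqrt(d @ np.linalg.inv(C) @ d)

# The same distance via the new coordinates: project onto the
# eigenvectors (the "spines"), then divide each coordinate by the
# standard deviation along that spine.
evals, evecs = np.linalg.eigh(C)
whitened = (evecs.T @ d) / np.sqrt(evals)
print(np.isclose(maha, np.linalg.norm(whitened)))  # True
```

The agreement is exact because $C = V \Lambda V'$ implies $d' C^{-1} d = \|\Lambda^{-1/2} V' d\|^2$: the formula is just ordinary Euclidean distance measured in the data-suggested coordinates.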
You are right. Notice that the term $O(n+m)$ slightly abuses the classical big-O notation, which is defined for functions of one variable. However, there is a natural extension to multiple variables. Simply speaking, since
$$\frac{1}{2}(m+n) \le \max\{m,n\} \le m+n \le 2\max\{m,n\},$$
you can deduce that $O(n+m)$ and $O(\max\{m,n\})$ are equivalent asymptotic upper bounds. On the other hand, $O(n+m)$ is different from $O(\min\{n,m\})$: if you set $n = 2^m$, you get
$$O(2^m + m) = O(2^m) \supsetneq O(m) = O(\min\{2^m, m\}).$$
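If it helps to see the sandwich inequality concretely, a quick numerical spot-check (illustrative only; the inequality itself is trivial algebra):

```python
import random

# Spot-check the sandwich inequality that makes O(n+m) and O(max(n, m))
# interchangeable: (n+m)/2 <= max(n,m) <= n+m <= 2*max(n,m).
for _ in range(10_000):
    n = random.randint(1, 10**6)
    m = random.randint(1, 10**6)
    assert (n + m) / 2 <= max(n, m) <= n + m <= 2 * max(n, m)
print("inequality holds on all samples")
```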
Sometimes we can "augment knowledge" with an unusual or different approach. I would like this reply to be accessible to kindergartners and also to have some fun, so everybody get out your crayons!

Given paired $(x,y)$ data, draw their scatterplot. (The younger students may need a teacher to produce this for them. :-) Each pair of points $(x_i, y_i)$, $(x_j, y_j)$ in that plot determines a rectangle: the smallest rectangle, with sides parallel to the axes, containing those points. Thus the points are either at the upper right and lower left corners (a "positive" relationship) or at the upper left and lower right corners (a "negative" relationship).

Draw all possible such rectangles. Color them transparently, making the positive rectangles red (say) and the negative rectangles "anti-red" (blue). In this fashion, wherever rectangles overlap, their colors are either enhanced when they are the same (blue and blue, or red and red) or cancel out when they are different. (In this illustration of a positive (red) and a negative (blue) rectangle, the overlap ought to be white; unfortunately, this software does not have a true "anti-red" color. The overlap is gray, so it darkens the plot, but on the whole the net amount of red is correct.)

Now we're ready for the explanation of covariance. The covariance is the net amount of red in the plot (treating blue as negative values). Here are some examples with 32 binormal points drawn from distributions with the given covariances, ordered from most negative (bluest) to most positive (reddest). They are drawn on common axes to make them comparable, and the rectangles are lightly outlined to help you see them. This is an updated (2019) version of the original: it uses software that properly cancels the red and cyan colors in overlapping rectangles.

Let's deduce some properties of covariance. Understanding these properties will be accessible to anyone who has actually drawn a few of the rectangles. :-)

Bilinearity. Because the amount of red depends on the size of the plot, covariance is directly proportional to the scale on the x-axis and to the scale on the y-axis.

Correlation. Covariance increases as the points approximate an upward-sloping line and decreases as they approximate a downward-sloping line. This is because in the former case most of the rectangles are positive, and in the latter case most are negative.

Relationship to linear associations. Because non-linear associations can create mixtures of positive and negative rectangles, they lead to unpredictable (and not very useful) covariances. Linear associations can be fully interpreted by means of the preceding two characterizations.

Sensitivity to outliers. A geometric outlier (one point standing away from the mass) will create many large rectangles in association with all the other points. It alone can create a net positive or negative amount of red in the overall picture.

Incidentally, this definition of covariance differs from the usual one only by a constant of proportionality. The mathematically inclined will have no trouble performing the algebraic demonstration that the formula given here is always twice the usual covariance. For a full explanation, see the follow-up thread.
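The "twice the usual covariance" claim is easy to verify numerically. A sketch with NumPy (the data here are made up; the "net red" is computed as the average signed rectangle area $(x_i - x_j)(y_i - y_j)$ over all ordered pairs of points, one natural way to normalize the picture):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=32)
y = 0.6 * x + rng.normal(size=32)   # positively associated data

# "Net red": the average signed rectangle area over all ordered pairs.
# A pair sloping upward contributes positive area, downward negative.
rect_cov = np.mean([(xi - xj) * (yi - yj)
                    for xi, yi in zip(x, y)
                    for xj, yj in zip(x, y)])

# This is exactly twice the usual (population) covariance.
print(np.isclose(rect_cov, 2 * np.cov(x, y, bias=True)[0, 1]))  # True
```

The identity follows from expanding the sum: $\frac{1}{n^2}\sum_{i,j}(x_i-x_j)(y_i-y_j) = 2\left(\overline{xy} - \bar{x}\,\bar{y}\right)$, which is twice the population covariance.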
First off, may I say that I applaud your decision to test this through an experiment; that's rarer than I would like. Now, on to the matter at hand. It's fairly well known from industrial chemistry that non-polar solvents degrade latex quite heavily. I work with latex seals a lot, and the hexanes we use routinely break the seals down in under a day. Of course, if you're lubricating your condoms with hexanes, you're (a) an idiot or (b) absolutely insane. A paper I managed to find suggests that there really isn't much direct data on condoms, and it muses that the warnings might have arisen from industry, where nonpolar solvents decidedly do degrade latex. To find out, the authors did a burst experiment with condoms that had been treated with various oils. Glycerol- and Vaseline-treated condoms showed a very, very minor decrease in strength, while mineral oil / baby oil-treated ones burst at less than 10% of the volume of an untreated condom. They also found that 10-month-old condoms have half the burst volume of 1-month-old ones, so you could argue that using 1-month-old condoms that have been slathered in Vaseline is still much safer than using older ones. As for the actual chemistry of the weakening, I honestly don't know. If I were to hazard a guess, I would note that latex looks like a bunch of ethylenes glued together, so my guess would be that the solvents get between the chains and force them apart, weakening them. For this to happen, the solvent must be nonpolar, but still small enough to slip between the chains of the polymer. That's probably why Vaseline and canola oil don't have much of an effect: they're just too big to fit between the chains. Again, though, I don't know for sure, so don't quote me on this last paragraph.
I managed to crack the formula for optical isomers with an odd number of chiral centers, so I'll share my attempt here. Hopefully others may build on it and post solutions for other formulae.

Pseudo-chiral carbon atoms: an introduction. The Gold Book defines a pseudo-chiral (pseudo-asymmetric) carbon atom as:

a tetrahedrally coordinated carbon atom bonded to four different entities, two and only two of which have the same constitution but opposite chirality sense.

This implies that, in your case, if chiral carbons 2 and 4 both have configuration R (or both S), then the central carbon 3 will be achiral/symmetric, because now the "two and only two of its groups which have the same constitution" will have the same chirality sense instead. (Your approach by "plane of symmetry" is wrong; find more details in the linked question.) Hence, two stereoisomers (r and s) are possible at the 3rd carbon due to its pseudo-chirality, but only one if the substituents on the left and right have the same optical configurations.

Building up an intuition by manual counting. For optical isomers with an odd number of chiral centers and similar ends, you can guess that, if there are $n$ chiral centers, then the middle ($\frac{n+1}{2}$-th) carbon atom will be pseudo-chiral. To build up an intuition, we'll manually count the optical isomers for $n = 3$ and $n = 5$.

Case $n = 3$: take the example of pentane-2,3,4-triol itself. We find four ($= 2^{n-1}$) isomers:
$$\begin{array}{|c|c|c|} \hline \text{C2} & \text{C3} & \text{C4} \\ \hline R & - & R \\ \hline S & - & S \\ \hline R & S & R \\ \hline R & S & S \\ \hline \end{array}$$
As expected from the relevant formula, the first two ($= 2^{\frac{n-1}{2}}$) are meso compounds, and the remaining two ($= 2^{n-1} - 2^{\frac{n-1}{2}}$) are enantiomers.

Case $n = 5$: take the example of heptane-2,3,4,5,6-pentol. We expect $16~(= 2^{n-1})$ isomers, with the C4 carbon being pseudo-chiral. To avoid a really large table, we observe that the meso isomers are easily countable (there are far fewer of them than enantiomers). Here is a table of those four ($= 2^{\frac{n-1}{2}}$) meso isomers:
$$\begin{array}{|c|c|c|c|c|} \hline \text{C2} & \text{C3} & \text{C4} & \text{C5} & \text{C6} \\ \hline R & R & - & R & R \\ \hline R & S & - & S & R \\ \hline S & R & - & R & S \\ \hline S & S & - & S & S \\ \hline \end{array}$$
Note that the total number of optical isomers is $2^{n-1}$ (more on that below). Hence, the number of enantiomers is $12~(= 2^{n-1} - 2^{\frac{n-1}{2}})$.

A formula for the number of meso isomers. As you must have observed from the table, the sequence of optical configurations, when read outward from the middle (here the fourth) carbon atom, is exactly the same towards both left and right. In other words, if we fix an arbitrary permutation of the optical configurations of the carbon atoms on the left (say RSS), then we get only one unique permutation of the optical configurations on the right (SSR). Each carbon on the left has two choices (R or S), and there are $\frac{n-1}{2}$ carbon atoms on the left. Hence, the total number of permutations is $2 \times 2 \times \cdots \ (\tfrac{n-1}{2}\ \text{times}) = 2^{\frac{n-1}{2}}$. Since our description ("the sequence of optical configurations, read outward from the middle carbon atom, is exactly the same on both left and right") describes meso isomers, we have hereby counted the number of meso isomers: $2^{\frac{n-1}{2}}$.

A formula for the total number of isomers. There are $n$ chiral carbons (including the pseudo-chiral carbon), and each has $2$ choices. Hence, the maximum possible number of optical isomers is $2 \times 2 \times \cdots \ (n\ \text{times}) = 2^n$. This is the maximum possible, not the actual total, which is lower. The reduction arises because the string of optical configurations reads exactly the same from either terminal carbon (for example, RSSRS is the same as SRSSR). This happens because the compound has "similar ends"; hence each permutation has been counted exactly twice. Thus, the actual total number of isomers is half the maximum possible: $\frac{2^n}{2} = 2^{n-1}$.

Conclusion. We have derived that, if $n$ (the number of chiral centers) is odd for a compound with similar ends, then:
$$\text{number of meso isomers} = 2^{(n-1)/2}$$
$$\text{total number of optical isomers} = 2^{n-1}$$
$$\text{number of enantiomers} = 2^{n-1} - 2^{(n-1)/2}$$
the ancient greeks had a theory that the sun, the moon, and the planets move around the earth in circles. this was soon shown to be wrong. the problem was that if you watch the planets carefully, sometimes they move backwards in the sky. so ptolemy came up with a new idea - the planets move around in one big circle, but then move around a little circle at the same time. think of holding out a long stick and spinning around, and at the same time on the end of the stick there's a wheel that's spinning. the planet moves like a point on the edge of the wheel. well, once they started watching really closely, they realized that even this didn't work, so they put circles on circles on circles... eventually, they had a map of the solar system that looked like this : this " epicycles " idea turns out to be a bad theory. one reason it's bad is that we know now that planets orbit in ellipses around the sun. ( the ellipses are not perfect because they're perturbed by the influence of other gravitating bodies, and by relativistic effects. ) but it's wrong for an even worse reason that that, as illustrated in this wonderful youtube video. in the video, by adding up enough circles, they made a planet trace out homer simpson's face. it turns out we can make any orbit at all by adding up enough circles, as long as we get to vary their size and speeds. so the epicycle theory of planetary orbits is a bad one not because it's wrong, but because it doesn't say anything at all about orbits. claiming " planets move around in epicycles " is mathematically equivalent to saying " planets move around in two dimensions ". well, that's not saying nothing, but it's not saying much, either! a simple mathematical way to represent " moving around in a circle " is to say that positions in a plane are represented by complex numbers, so a point moving in the plane is represented by a complex function of time. 
in that case, moving on a circle with radius $ r $ and angular frequency $ \ omega $ is represented by the position $ $ z ( t ) = re ^ { i \ omega t } $ $ if you move around on two circles, one at the end of the other, your position is $ $ z ( t ) = r _ 1e ^ { i \ omega _ 1
|
https://api.stackexchange.com
|
t } + r _ 2 e ^ { i \ omega _ 2 t } $ $ we can then imagine three, four, or infinitely - many such circles being added. if we allow the circles to have every possible angular frequency, we can now write $ $ z ( t ) = \ int _ { - \ infty } ^ { \ infty } r ( \ omega ) e ^ { i \ omega t } \ mathrm { d } \ omega. $ $ the function $ r ( \ omega ) $ is the fourier transform of $ z ( t ) $. if you start by tracing any time - dependent path you want through two - dimensions, your path can be perfectly - emulated by infinitely many circles of different frequencies, all added up, and the radii of those circles is the fourier transform of your path. caveat : we must allow the circles to have complex radii. this isn't weird, though. it's the same thing as saying the circles have real radii, but they do not all have to start at the same place. at time zero, you can start however far you want around each circle. if your path closes on itself, as it does in the video, the fourier transform turns out to simplify to a fourier series. most frequencies are no longer necessary, and we can write $ $ z ( t ) = \ sum _ { k = - \ infty } ^ \ infty c _ k e ^ { ik \ omega _ 0 t } $ $ where $ \ omega _ 0 $ is the angular frequency associated with the entire thing repeating - the frequency of the slowest circle. the only circles we need are the slowest circle, then one twice as fast as that, then one three times as fast as the slowest one, etc. there are still infinitely - many circles if you want to reproduce a repeating path perfectly, but they are countably - infinite now. if you take the first twenty or so and drop the rest, you should get close to your desired answer. in this way, you can use fourier analysis to create your own epicycle video of your favorite cartoon character. that's what fourier analysis says. the questions that remain are how to do it, what it's for, and why it works. i think i will mostly leave those alone. 
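as a concrete sketch of the above ( pure python, all names mine ), here is the discrete version : sample a closed path, compute a handful of fourier coefficients $ c _ k $, and rebuild the path from just those few rotating circles :

```python
import cmath

def fourier_coefficients(samples, k_range):
    # c_k = (1/n) * sum_m z_m * exp(-2*pi*i*k*m/n) -- one circle per k
    n = len(samples)
    return {k: sum(z * cmath.exp(-2j * cmath.pi * k * m / n)
                   for m, z in enumerate(samples)) / n
            for k in k_range}

def rebuild(coeffs, m, n):
    # position at sample index m, summing the chosen rotating circles
    return sum(c * cmath.exp(2j * cmath.pi * k * m / n) for k, c in coeffs.items())

# a closed two-frequency test path: one big circle plus a smaller,
# faster circle running the other way (64 samples)
N = 64
path = [cmath.exp(2j * cmath.pi * m / N) + 0.3 * cmath.exp(-2j * cmath.pi * 3 * m / N)
        for m in range(N)]

coeffs = fourier_coefficients(path, range(-5, 6))
err = max(abs(rebuild(coeffs, m, N) - path[m]) for m in range(N))
# err is at float-precision level: circles k = -5..5 already contain
# the only two frequencies present in this path (k = 1 and k = -3)
```

for a hand-drawn cartoon outline you would keep more circles ( the " first twenty or so " mentioned above ) and accept a small residual error instead of an exact reconstruction.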
how to do it - how to find $ r ( \ omega ) $ given $ z ( t ) $ is found in any introductory
|
https://api.stackexchange.com
|
treatment, and is fairly intuitive if you understand orthogonality. why it works is a rather deep question. it's a consequence of the spectral theorem. what it's for has a huge range. it's useful in analyzing the response of linear physical systems to an external input, such as an electrical circuit responding to the signal it picks up with an antenna or a mass on a spring responding to being pushed. it's useful in optics ; the interference pattern from light scattering from a diffraction grating is the fourier transform of the grating, and the image of a source at the focus of a lens is its fourier transform. it's useful in spectroscopy, and in the analysis of any sort of wave phenomena. it converts between position and momentum representations of a wavefunction in quantum mechanics. check out this question on physics. stackexchange for more detailed examples. fourier techniques are useful in signal analysis, image processing, and other digital applications. finally, they are of course useful mathematically, as many other posts here describe.
|
https://api.stackexchange.com
|
there are lots of modern finite element references, but i will just comment on a few books that i think are practical and relevant to applications, plus one containing more comprehensive analysis. wriggers nonlinear finite element methods ( 2008 ) is a good general reference, but will be most relevant to those concerned with applications in structural mechanics ( including contact, shells, and plasticity ). elman, silvester, and wathen finite elements and fast iterative solvers : with applications in incompressible fluid dynamics ( 2005 ) is less comprehensive on finite element discretization techniques, but has good content on incompressible flow and a certain class of iterative solvers. it also explains the ifiss package. donea and huerta finite element methods for flow problems ( 2003 ) covers similar material, but includes ale moving mesh methods and compressible gas dynamics. brenner and scott the mathematical theory of finite element methods ( 2008 revision ) contains a rigorous theoretical development of discretizations for linear elliptic problems, including associated multigrid and domain decomposition theory. it does not treat transport - dominated problems, " messy " nonlinearities like plasticity, or non - polynomial bases. these resources fail to cover topics such as discontinuous galerkin methods or $ h ( curl ) $ problems ( maxwell ). i think papers are currently a better resource than books for these topics, although hesthaven and warburton nodal discontinuous galerkin methods ( 2008 ) is certainly worthwhile. i also recommend reading the examples from open source finite element software packages such as fenics, libmesh, and deal. ii.
|
https://api.stackexchange.com
|
in 2010, dr. craig venter actually used a bacterial shell and wrote dna for it. scientists have created the world's first synthetic life form in a landmark experiment that paves the way for designer organisms that are built rather than evolved. ( snip ) the new organism is based on an existing bacterium that causes mastitis in goats, but at its core is an entirely synthetic genome that was constructed from chemicals in the laboratory. keep in mind, this is only a synthetic genome, not a truly unique organism created from scratch. although i am confident that the technology will become available in the future. as has been pointed out, the entire genome wasn't built de novo, but rather most of it was copied from a baseline which was built up from the base chemicals with no biological processes, and then the watermarks were added ( still damn impressive since they took inorganic matter and made a living cell function with it ). but they are working on building a totally unique genome from scratch ( pdf ). this is actually quite an emerging field, so much so that the mit press has set up an entire series of journals for this. as for the purpose of these artificial organisms, most research funded by companies is meant to be for specific purposes that biology hasn't solved yet ( such as a bacterium that eats a toxic waste or something ). although, a lot of people are concerned about scientists venturing into the domain of theology. in terms of abiogenesis, there are many resources to learn more about this. here is a list of 88 papers that discuss the natural mechanisms of abiogenesis ( this list is a little old, so i am sure that there are many, many more papers at this time ). i also found this list of links and resources for artificial life. i cannot verify the usefulness of this since the field is a bit outside my area of expertise. however, it does seem quite extensive. edit to add : now we have " xna " ( a totally synthetic genome ) on the way.
|
https://api.stackexchange.com
|
the initial mersenne - twister ( mt ) was regarded as good for some years, until it was found to be pretty bad with the more advanced testu01 bigcrush tests and better prngs. this page lists the mersenne - twister features in detail : positive qualities : produces 32 - bit or 64 - bit numbers ( thus usable as a source of random bits ) ; passes most statistical tests. neutral qualities : inordinately huge period of $ 2 ^ { 19937 } - 1 $ ; 623 - dimensionally equidistributed ; period can be partitioned to emulate multiple streams. negative qualities : fails some statistical tests, with as few as 45, 000 numbers ; fails the linearcomp test of the testu01 crush and bigcrush batteries ; predictable : after 624 outputs, we can completely predict its output ; generator state occupies 2504 bytes of ram ( in contrast, an extremely usable generator with a huger - than - anyone - can - ever - use period can fit in 8 bytes of ram ) ; not particularly fast ; not particularly space efficient : the generator uses 20000 bits to store its internal state ( 20032 bits on 64 - bit machines ), but has a period of only $ 2 ^ { 19937 } $, a factor of $ 2 ^ { 63 } $ ( or $ 2 ^ { 95 } $ ) fewer than an ideal generator of the same size ; uneven in its output : the generator can get into " bad states " that are slow to recover from ; seedings that only differ slightly take a long time to diverge from each other, so seeding must be done carefully to avoid bad states ; while jump - ahead is possible, algorithms to do so are slow to compute ( i. e., require several seconds ) and rarely provided by implementations. summary : mersenne twister is not good enough anymore, but most applications and libraries are not there yet.
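a quick way to see that huge state in practice : cpython's random module happens to be an mt19937 implementation, and getstate ( ) exposes the 624 32 - bit state words plus a position index ( this just inspects the state ; it is not a quality test ) :

```python
import random

# cpython's random module is an mt19937; its state is 624 32-bit words
# plus an index, visible via getstate()
rng = random.Random(12345)
version, internal_state, gauss_next = rng.getstate()

state_words = len(internal_state)  # 624 state words + 1 position index = 625
state_bits = 624 * 32              # 19968 bits of core state, period 2^19937 - 1
```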
|
https://api.stackexchange.com
|
ok, i'll try to answer your questions : q1 : the number of taps is not equal the to the filter order. in your example the filter length is 5, i. e. the filter extends over 5 input samples [ $ x ( n ), x ( n - 1 ), x ( n - 2 ), x ( n - 3 ), x ( n - 4 ) $ ]. the number of taps is the same as the filter length. in your case you have one tap equal to zero ( the coefficient for $ x ( n - 1 ) $ ), so you happen to have 4 non - zero taps. still, the filter length is 5. the order of an fir filter is filter length minus 1, i. e. the filter order in your example is 4. q2 : the $ n $ in the matlab function fir1 ( ) is the filter order, i. e. you get a vector with $ n + 1 $ elements as a result ( so $ n + 1 $ is your filter length = number of taps ). q3 : the filter order is again 4. you can see it from the maximum delay needed to implement your filter. it is indeed a recursive iir filter. if by number of taps you mean the number of filter coefficients, then for an $ n ^ { th } $ order iir filter you generally have $ 2 ( n + 1 ) $ coefficients, even though in your example several of them are zero. q4 : this is a slightly tricky one. let's start with the simple case : a non - recursive filter always has a finite impulse response, i. e. it is a fir filter. usually a recursive filter has an infinite impulse response, i. e. it is an iir filter, but there are degenerate cases where a finite impulse response is implemented using a recursive structure. but the latter case is the exception.
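the terminology in q1 can be illustrated with a tiny direct - form fir filter in python ( coefficients chosen arbitrarily for illustration ) : length 5, order 4, and one of the 5 taps happens to be zero :

```python
# direct-form fir: y(n) = sum_k h[k] * x(n - k)
def fir_filter(taps, x):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

# length 5 (5 taps, one of them zero), so the filter order is 4
taps = [0.2, 0.0, 0.3, 0.3, 0.2]
order = len(taps) - 1

# the impulse response of an fir filter is just its tap vector
impulse_response = fir_filter(taps, [1.0, 0.0, 0.0, 0.0, 0.0])
```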
|
https://api.stackexchange.com
|
serial is an umbrella word for all that is " time division multiplexed ", to use an expensive term. it means that the data is sent spread over time, most often one single bit after another. all the protocols you're naming are serial protocols. uart, for universal asynchronous receiver transmitter, is one of the most used serial protocols. it's almost as old as i am, and very simple. most controllers have a hardware uart on board. it uses a single data line for transmitting and one for receiving data. most often 8 - bit data is transferred, as follows : 1 start bit ( low level ), 8 data bits and 1 stop bit ( high level ). the low level start bit and high level stop bit mean that there's always a high to low transition to start the communication. that's what describes uart. no voltage level, so you can have it at 3. 3 v or 5 v, whichever your microcontroller uses. note that the microcontrollers which want to communicate via uart have to agree on the transmission speed, the bit - rate, as they only have the start bits falling edge to synchronize. that's called asynchronous communication. for long distance communication ( that doesn't have to be hundreds of meters ) the 5 v uart is not very reliable, that's why it's converted to a higher voltage, typically + 12 v for a " 0 " and - 12 v for a " 1 ". the data format remains the same. then you have rs - 232 ( which you actually should call eia - 232, but nobody does. ) the timing dependency is one of the big drawbacks of uart, and the solution is usart, for universal synchronous / asynchronous receiver transmitter. this can do uart, but also a synchronous protocol. in synchronous there's not only data, but also a clock transmitted. with each bit a clock pulse tells the receiver it should latch that bit. synchronous protocols either need a higher bandwidth, like in the case of manchester encoding, or an extra wire for the clock, like spi and i2c. spi ( serial peripheral interface ) is another very simple serial protocol. 
a master sends a clock signal, and upon each clock pulse it shifts one bit out to the slave, and one bit in, coming from the slave. signal names are
|
https://api.stackexchange.com
|
therefore sck for clock, mosi for master out slave in, and miso for master in slave out. by using ss ( slave select ) signals the master can control more than one slave on the bus. there are two ways to connect multiple slave devices to one master : one is mentioned above, i. e. using slave select, and the other is daisy chaining, which uses fewer hardware pins ( select lines ), but the software gets complicated. i2c ( inter - integrated circuit, pronounced " i squared c " ) is also a synchronous protocol, and it's the first we see which has some " intelligence " in it ; the other ones dumbly shifted bits in and out, and that was that. i2c uses only 2 wires, one for the clock ( scl ) and one for the data ( sda ). that means that master and slave send data over the same wire, again controlled by the master who creates the clock signal. i2c doesn't use separate slave selects to select a particular device, but has addressing. the first byte sent by the master holds a 7 bit address ( so that you can use 127 devices on the bus ) and a read / write bit, indicating whether the next byte ( s ) will also come from the master or should come from the slave. after each byte, the receiver must send a " 0 " to acknowledge the reception of the byte, which the master latches with a 9th clock pulse. if the master wants to write a byte, the same process repeats : the master puts bit after bit on the bus and each time gives a clock pulse to signal that the data is ready to be read. if the master wants to receive data it only generates the clock pulses. the slave has to take care that the next bit is ready when the clock pulse is given. this protocol is patented by nxp ( formerly philips ) ; to save licensing costs, atmel uses the word twi ( 2 - wire interface ) for what is exactly the same as i2c, so an avr device will not mention i2c but it will have twi. two or more signals on the same wire may cause conflicts, and you would have a problem if one device sends a " 1 " while the other sends a " 0 ".
therefore the bus is wired - or'd : two resistors pull the bus to a high level, and the devices only send low levels. if they want to send a high level they simply release the bus. ttl
|
https://api.stackexchange.com
|
( transistor transistor logic ) is not a protocol. it's an older technology for digital logic, but the name is often used to refer to the 5 v supply voltage, often incorrectly referring to what should be called uart. about each of these you can write a book, and it looks i'm well on my way. this is just a very brief overview, let us know if some things need clarification.
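the uart framing described earlier ( 1 low start bit, 8 data bits, 1 high stop bit ) is simple enough to sketch in a few lines of python ; lsb - first data order is assumed here, as is usual for uarts :

```python
# 8n1 uart frame for one byte: start bit (0), 8 data bits lsb-first, stop bit (1)
def uart_frame(byte):
    bits = [0]                                   # start bit: the high-to-low edge
    bits += [(byte >> i) & 1 for i in range(8)]  # data, least significant bit first
    bits.append(1)                               # stop bit: line back to idle-high
    return bits

frame = uart_frame(0x41)  # ascii 'A' = 0b01000001
```

both sides must clock these 10 bits out / in at the same agreed bit - rate, which is exactly the timing dependency the answer mentions.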
|
https://api.stackexchange.com
|
there is a general historical trend. in the olden days, memories were small, and so programs were perforce small. also, compilers were not very smart, and many programs were written in assembler, so it was considered a good thing to be able to write a program using few instructions. instruction pipelines were simple, and processors grabbed one instruction at a time to execute it. the machinery inside the processor was quite complex anyway ; decoding instructions was not felt to be much of a burden. in the 1970s, cpu and compiler designers realized that having such complex instructions was not so helpful after all. it was difficult to design processors in which those instructions were really efficient, and it was difficult to design compilers that really took advantage of these instructions. chip area and compiler complexity was better spent on more generic pursuits such as more general - purpose registers. the wikipedia article on risc explains this in more detail. mips is the ultimate risc architecture, which is why it's taught so often. the x86 family is a bit different. it was originally a cisc architecture meant for systems with very small memory ( no room for large instructions ), and has undergone many successive versions. today's x86 instruction set is not only complicated because it's cisc, but because it's really an 8088 with an 80386 with a pentium possibly with an x86 _ 64 processor. in today's world, risc and cisc are no longer the black - and - white distinction they might have been once. most cpu architectures have evolved to different shades of grey. on the risc side, some modern mips variants have added multiplication and division instructions, with a non - uniform encoding. arm processors have become more complex : many of them have a 16 - bit instruction set called thumb in addition to the " original " 32 - bit instructions, not to mention jazelle to execute jvm instructions on the cpu.
modern arm processors also have simd instructions for multimedia applications : some complex instructions do pay after all. on the cisc side, all recent processors are to some extent risc inside. they have microcode to define all these complex macro instructions. the sheer complexity of the processor makes the design of each model take several years, even with a risc design, what with the large number of components, with pipelining and predictive execution and whatnot. so why do the fastest processors remain cisc outside? part of it, in the case of the x86
|
https://api.stackexchange.com
|
( 32 - bit and 64 - bit ) family, is historical compatibility. but that's not the whole of it. in the early 2000s, intel tried pushing the itanium architecture. itanium is an extreme case of complex instructions ( not really cisc, though : its design has been dubbed epic ). it even does away with the old - fashioned idea of executing instructions in sequence : all instructions are executed in parallel until the next barrier. one of the reasons itanium didn't take is that nobody, whether at intel or elsewhere, could write a decent compiler for it. now a good old mostly - sequential processor like x86 _ 64, that's something we understand.
|
https://api.stackexchange.com
|
yes, there are theoretical machines which exceed turing machines in computational power, such as oracle machines and infinite time turing machines. the buzzword that you should feed to google is hypercomputation.
|
https://api.stackexchange.com
|
myth : manufacturers conspire to put internal diodes in discrete components so only ic designers can do neat things with 4 - terminal mosfets. truth : 4 - terminal mosfets aren't very useful. any p - n junction is a diode ( among other ways to make diodes ). a mosfet has two of them, right here : that big chunk of p - doped silicon is the body or the substrate. considering these diodes, one can see it's pretty important that the body is always at a lower voltage than the source or the drain. otherwise, you forward - bias the diodes, and that's probably not what you wanted. but wait, it gets worse! a bjt is a three layer sandwich of npn materials, right? a mosfet also contains a bjt : if the drain current is high, then the voltage across the channel between the source and the drain can also be high, because \ $ r _ { ds ( on ) } \ $ is non - zero. if it's high enough to forward - bias the body - source diode, you don't have a mosfet anymore : you have a bjt. that's also not what you wanted. in cmos devices, it gets even worse. in cmos, you have pnpn structures, which make a parasitic thyristor. this is what causes latchup. solution : short the body to the source. this shorts the base - emitter of the parasitic bjt, holding it firmly off. ideally you don't do this through external leads, because then the " short " would also have high parasitic inductance and resistance, making the " holding off " of the parasitic bjt not so strong. instead, you short them right at the die. this is why mosfets aren't symmetrical. it may be that some designs otherwise are symmetrical, but to make a mosfet that behaves reliably like a mosfet, you have to short one of those n regions to the body. whichever one you do that to, it's now the source, and the diode you didn't short out is the " body diode ". this isn't anything specific to discrete transistors, really.
if you do have a 4 - terminal mosfet, then you need to make sure that the body is always at the lowest voltage ( or highest, for p - channel
|
https://api.stackexchange.com
|
devices ). in ics, the body is the substrate for the whole ic, and it's usually connected to ground. if the body is at a lower voltage than the source, then you must consider body effect. if you take a look at a cmos circuit where there's a source not connected to ground ( like the nand gate below ), it doesn't really matter, because if b is high, then the lower - most transistor is on, and the one above it actually does have its source connected to ground. or, b is low, and the output is high, and there isn't any current in the lower two transistors.
|
https://api.stackexchange.com
|
this is quite a broad question and it indeed is quite hard to pinpoint why exactly fourier transforms are important in signal processing. the simplest, hand - waving answer one can provide is that it is an extremely powerful mathematical tool that allows you to view your signals in a different domain, inside which several difficult problems become very simple to analyze. its ubiquity in nearly every field of engineering and physical sciences, all for different reasons, makes it all the harder to narrow down a reason. i hope that looking at some of its properties which led to its widespread adoption along with some practical examples and a dash of history might help one to understand its importance. history : to understand the importance of the fourier transform, it is important to step back a little and appreciate the power of the fourier series put forth by joseph fourier. in a nut - shell, any periodic function $ g ( x ) $ integrable on the domain $ \ mathcal { d } = [ - \ pi, \ pi ] $ can be written as an infinite sum of sines and cosines as $ $ g ( x ) = \ sum _ { k = - \ infty } ^ { \ infty } \ tau _ k e ^ { \ jmath k x } $ $ $ $ \ tau _ k = \ frac { 1 } { 2 \ pi } \ int _ { \ mathcal { d } } g ( x ) e ^ { - \ jmath k x } \ dx $ $ where $ e ^ { \ jmath \ theta } = \ cos ( \ theta ) + \ jmath \ sin ( \ theta ) $. this idea that a function could be broken down into its constituent frequencies ( i. e., into sines and cosines of all frequencies ) was a powerful one and forms the backbone of the fourier transform. the fourier transform : the fourier transform can be viewed as an extension of the above fourier series to non - periodic functions. for completeness and for clarity, i'll define the fourier transform here.
if $ x ( t ) $ is a continuous, integrable signal, then its fourier transform, $ x ( f ) $ is given by $ $ x ( f ) = \ int _ { \ mathbb { r } } x ( t ) e ^ { - \ jmath 2 \ pi f t } \ dt, \ quad \ forall f \ in \ mathbb { r } $ $ and the inverse transform
|
https://api.stackexchange.com
|
is given by $ $ x ( t ) = \ int _ { \ mathbb { r } } x ( f ) e ^ { \ jmath 2 \ pi f t } \ df, \ quad \ forall t \ in \ mathbb { r } $ $ importance in signal processing : first and foremost, a fourier transform of a signal tells you what frequencies are present in your signal and in what proportions. example : have you ever noticed that each of your phone's number buttons sounds different when you press during a call and that it sounds the same for every phone model? that's because they're each composed of two different sinusoids which can be used to uniquely identify the button. when you use your phone to punch in combinations to navigate a menu, the way that the other party knows what keys you pressed is by doing a fourier transform of the input and looking at the frequencies present. apart from some very useful elementary properties which make the mathematics involved simple, some of the other reasons why it has such a widespread importance in signal processing are : the magnitude square of the fourier transform, $ \ vert x ( f ) \ vert ^ 2 $ instantly tells us how much power the signal $ x ( t ) $ has at a particular frequency $ f $. from parseval's theorem ( more generally plancherel's theorem ), we have $ $ \ int _ \ mathbb { r } \ vert x ( t ) \ vert ^ 2 \ dt = \ int _ \ mathbb { r } \ vert x ( f ) \ vert ^ 2 \ df $ $ which means that the total energy in a signal across all time is equal to the total energy in the transform across all frequencies. thus, the transform is energy preserving. convolutions in the time domain are equivalent to multiplications in the frequency domain, i. 
e., given two signals $ x ( t ) $ and $ y ( t ) $, then if $ $ z ( t ) = x ( t ) \ star y ( t ) $ $ where $ \ star $ denotes convolution, then the fourier transform of $ z ( t ) $ is merely $ $ z ( f ) = x ( f ) \ cdot y ( f ) $ $ for discrete signals, with the development of efficient fft algorithms, almost always, it is faster to implement a convolution operation in the frequency domain than in the time domain. similar to
|
https://api.stackexchange.com
|
the convolution operation, cross - correlations are also easily implemented in the frequency domain as $ z ( f ) = x ( f ) ^ * y ( f ) $, where $ ^ * $ denotes the complex conjugate. by being able to split signals into their constituent frequencies, one can easily block out certain frequencies selectively by nullifying their contributions. example : if you're a football ( soccer ) fan, you might've been annoyed at the constant drone of the vuvuzelas that pretty much drowned all the commentary during the 2010 world cup in south africa. however, the vuvuzela has a constant pitch of ~ 235hz which made it easy for broadcasters to implement a notch filter to cut off the offending noise. [ 1 ] a shifted ( delayed ) signal in the time domain manifests as a phase change in the frequency domain. while this falls under the elementary property category, this is a widely used property in practice, especially in imaging and tomography applications. example : when a wave travels through a heterogeneous medium, it slows down and speeds up according to changes in the speed of wave propagation in the medium. so by observing a change in phase from what's expected and what's measured, one can infer the excess time delay which in turn tells you how much the wave speed has changed in the medium. this is, of course, a very simplified layman's explanation, but forms the basis for tomography. derivatives of signals ( nth derivatives too ) can be easily calculated ( see 106 ) using fourier transforms. digital signal processing ( dsp ) vs. analog signal processing ( asp ) : the theory of fourier transforms is applicable irrespective of whether the signal is continuous or discrete, as long as it is " nice " and absolutely integrable. so yes, asp uses fourier transforms as long as the signals satisfy this criterion. however, it is perhaps more common to talk about laplace transforms, which is a generalized fourier transform, in asp.
the laplace transform is defined as $ $ x ( s ) = \ int _ { 0 } ^ { \ infty } x ( t ) e ^ { - st } \ dt, \ quad \ forall s \ in \ mathbb { c } $ $ the advantage is that one is not necessarily confined to " nice signals " as in the fourier transform, but the transform is valid only within a certain region of convergence. it is widely used in studying / analyzing
|
https://api.stackexchange.com
|
/ designing lc / rc / lcr circuits, which in turn are used in radios / electric guitars, wah - wah pedals, etc. this is pretty much all i could think of right now, but do note that no amount of writing / explanation can fully capture the true importance of fourier transforms in signal processing and in science / engineering
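the convolution property quoted above is easy to check numerically for discrete signals ( where convolution becomes circular convolution ) ; this pure - python sketch uses a naive dft rather than an fft, purely for clarity :

```python
import cmath

def dft(x):
    # naive o(n^2) discrete fourier transform
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def circular_convolve(x, y):
    n = len(x)
    return [sum(x[m] * y[(k - m) % n] for m in range(n)) for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
y = [0.5, 0.0, 1.0, 0.0]

# dft of the convolution vs pointwise product of the dfts
lhs = dft(circular_convolve(x, y))
rhs = [a * b for a, b in zip(dft(x), dft(y))]
err = max(abs(a - b) for a, b in zip(lhs, rhs))
# err is at floating-point precision: the two sides agree term by term
```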
|
https://api.stackexchange.com
|
that is the work of a leaf miner. a leaf miner is the larval stage of an insect that feeds on the inside layer of leaves. notice how the galleries ( tunnels ) start small and then get larger as the larva matures? most leaf miners are moth larvae ( lepidoptera )
|
https://api.stackexchange.com
|
a general rule of thumb is that in order to improve the variance $ n $ times you need $ n ^ 2 $ neighbours. this is only applicable if you consider the $ n ^ 2 $ nearest neighbours of a cell to be biologically identical ( i. e. " similar enough " ) ; if your data includes 10 types of cells with 10 cells each, then using the 20 nearest neighbours for smoothing will obscure the data. as far as i know, there is no single best answer to this question. i would suggest trying different numbers and sticking to what agrees more with the biology of the dataset.
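the $ n ^ 2 $ rule of thumb is just the usual statistics of averaging : the standard deviation of a mean of $ k $ i. i. d. values shrinks like $ 1 / \ sqrt k $, so cutting it by a factor $ n $ takes $ n ^ 2 $ neighbours. a seeded toy simulation ( random numbers, not real cells ) shows the effect :

```python
import random
import statistics

rng = random.Random(0)

def std_of_mean(k, trials=4000):
    # empirical standard deviation of the mean of k standard-normal draws
    means = [statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(k))
             for _ in range(trials)]
    return statistics.pstdev(means)

# averaging 16 neighbours should cut the noise by about sqrt(16) = 4,
# i.e. n^2 = 16 neighbours for an n = 4 improvement in standard deviation
ratio = std_of_mean(1) / std_of_mean(16)
```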
|
https://api.stackexchange.com
|
the main distinction you want to make is between the green function and the kernel. ( i prefer the terminology " green function " without the possessive 's. imagine a different name, say, feynman. people would definitely say the feynman function, not the feynman's function. but i digress... ) start with a differential operator, call it $ l $. e. g., in the case of laplace's equation, $ l $ is the laplacian $ l = \ nabla ^ 2 $. then, the green function of $ l $ is the solution of the inhomogeneous differential equation $ $ l _ x g ( x, x ^ \ prime ) = \ delta ( x - x ^ \ prime ) \,. $ $ we'll talk about its boundary conditions later on. the kernel is a solution of the homogeneous equation $ $ l _ x k ( x, x ^ \ prime ) = 0 \,, $ $ subject to a dirichlet boundary condition $ \ lim _ { x \ rightarrow x ^ \ prime } k ( x, x ^ \ prime ) = \ delta ( x - x ^ \ prime ) $, or a neumann boundary condition $ \ lim _ { x \ rightarrow x ^ \ prime } \ partial k ( x, x ^ \ prime ) = \ delta ( x - x ^ \ prime ) $. so, how do we use them? the green function solves linear differential equations with driving terms. $ l _ x u ( x ) = \ rho ( x ) $ is solved by $ $ u ( x ) = \ int g ( x, x ^ \ prime ) \ rho ( x ^ \ prime ) dx ^ \ prime \,. $ $ whichever boundary conditions we want to impose on the solution $ u $ specify the boundary conditions we impose on $ g $. for example, a retarded green function propagates influence strictly forward in time, so that $ g ( x, x ^ \ prime ) = 0 $ whenever $ x ^ 0 < x ^ { \ prime \, 0 } $. ( the 0 here denotes the time coordinate. ) one would use this if the boundary condition on $ u $ was that $ u ( x ) = 0 $ far in the past, before the source term $ \ rho $ " turns on. " the kernel solves boundary value problems. say we're solving the equation $ l _
|
https://api.stackexchange.com
|
x u ( x ) = 0 $ on a manifold $ m $, and specify $ u $ on the boundary $ \ partial m $ to be $ v $. then, $ $ u ( x ) = \ int _ { \ partial m } k ( x, x ^ \ prime ) v ( x ^ \ prime ) dx ^ \ prime \,. $ $ in this case, we're using the kernel with dirichlet boundary conditions. for example, the heat kernel is the kernel of the heat equation, in which $ $ l = \ frac { \ partial } { \ partial t } - \ nabla _ { r ^ d } ^ 2 \,. $ $ we can see that $ $ k ( x, t ; x ^ \ prime, t ^ \ prime ) = \ frac { 1 } { [ 4 \ pi ( t - t ^ \ prime ) ] ^ { d / 2 } } \, e ^ { - | x - x ^ \ prime | ^ 2 / 4 ( t - t ^ \ prime ) }, $ $ solves $ l _ { x, t } k ( x, t ; x ^ \ prime, t ^ \ prime ) = 0 $ and moreover satisfies $ $ \ lim _ { t \ rightarrow t ^ \ prime } \, k ( x, t ; x ^ \ prime, t ^ \ prime ) = \ delta ^ { ( d ) } ( x - x ^ \ prime ) \,. $ $ ( we must be careful to consider only $ t > t ^ \ prime $ and hence also take a directional limit. ) say you're given some shape $ v ( x ) $ at time $ 0 $ and want to " melt " it according to the heat equation. then later on, this shape has become $ $ u ( x, t ) = \ int _ { r ^ d } k ( x, t ; x ^ \ prime, 0 ) v ( x ^ \ prime ) d ^ dx ^ \ prime \,. $ $ so in this case, the boundary was the time - slice at $ t ^ \ prime = 0 $. now for the rest of them. propagator is sometimes used to mean green function, sometimes used to mean kernel. the klein - gordon propagator is a green function, because it satisfies $ l _ x d ( x, x ^ \ prime ) = \ delta ( x - x ^ \ prime
|
https://api.stackexchange.com
|
) $ for $ l _ x = \ partial _ x ^ 2 + m ^ 2 $. the boundary conditions specify the difference between the retarded, advanced and feynman propagators. ( see? not feynman's propagator ) in the case of a klein - gordon field, the retarded propagator is defined as $ $ d _ r ( x, x ^ \ prime ) = \ theta ( x ^ 0 - x ^ { \ prime \, 0 } ) \, \ langle0 | \ varphi ( x ) \ varphi ( x ^ \ prime ) | 0 \ rangle \, $ $ where $ \ theta ( x ) = 1 $ for $ x > 0 $ and $ = 0 $ otherwise. the wightman function is defined as $ $ w ( x, x ^ \ prime ) = \ langle0 | \ varphi ( x ) \ varphi ( x ^ \ prime ) | 0 \ rangle \,, $ $ i. e. without the time ordering constraint. but guess what? it solves $ l _ x w ( x, x ^ \ prime ) = 0 $. it's a kernel. the difference is that $ \ theta $ out front, which becomes a dirac $ \ delta $ upon taking one time derivative. if one uses the kernel with neumann boundary conditions on a time - slice boundary, the relationship $ $ g _ r ( x, x ^ \ prime ) = \ theta ( x ^ 0 - x ^ { \ prime \, 0 } ) k ( x, x ^ \ prime ) $ $ is general. in quantum mechanics, the evolution operator $ $ u ( x, t ; x ^ \ prime, t ^ \ prime ) = \ langle x | e ^ { - i ( t - t ^ \ prime ) \ hat { h } } | x ^ \ prime \ rangle $ $ is a kernel. it solves the schroedinger equation and equals $ \ delta ( x - x ^ \ prime ) $ for $ t = t ^ \ prime $. people sometimes call it the propagator. it can also be written in path integral form. linear response and impulse response functions are green functions. these are all two - point correlation functions. " two - point " because they're all functions of two points in space ( time ). in quantum field theory, statistical field theory, etc. one can also consider correlation functions with more field
|
https://api.stackexchange.com
|
insertions / random variables. that's where the real work begins!
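As a numerical sanity check (my sketch, not part of the original answer), one can verify in one dimension that convolving an initial Gaussian profile with the heat kernel above reproduces the analytically known spread: under $\partial_t u = \partial_x^2 u$ a Gaussian's variance grows from $s^2$ to $s^2 + 2t$.

```python
import numpy as np

def heat_kernel(x, t):
    """1D heat kernel for u_t = u_xx: K(x,t) = exp(-x^2/4t)/sqrt(4*pi*t)."""
    return np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

s, t = 1.0, 0.5                      # initial Gaussian width, evolution time
xs = np.linspace(-20, 20, 4001)
dx = xs[1] - xs[0]
v = np.exp(-xs**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

# u(x, t) = ∫ K(x - x', t) v(x') dx', evaluated at a sample point x0.
x0 = 0.7
u_num = np.sum(heat_kernel(x0 - xs, t) * v) * dx

# Analytic answer: a Gaussian whose variance has grown to s^2 + 2t.
var = s**2 + 2 * t
u_exact = np.exp(-x0**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
assert abs(u_num - u_exact) < 1e-6       # the kernel propagates the data
assert abs(np.sum(heat_kernel(xs, t)) * dx - 1) < 1e-6  # unit mass
```

The unit-mass check is what makes the $t \rightarrow t^\prime$ limit a delta function: all the mass concentrates at a point as $t \to 0^+$.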
Since this question is asked often enough, let me add a detailed solution. I'm not quite following Arturo's outline, though. The main difference is that I'm not re-proving the Cauchy-Schwarz inequality (step 4 in Arturo's outline) but rather use the fact that multiplication by scalars and addition of vectors as well as the norm are continuous, which is a bit easier to prove.

So, assume that the norm $\Vert\cdot\Vert$ satisfies the parallelogram law
$$2\Vert x\Vert^2 + 2\Vert y\Vert^2 = \Vert x+y\Vert^2 + \Vert x-y\Vert^2$$
for all $x, y \in V$ and put
$$\langle x, y\rangle = \frac{1}{4}\left(\Vert x+y\Vert^2 - \Vert x-y\Vert^2\right).$$
We're dealing with real vector spaces and defer the treatment of the complex case to step 4 below.

Step 0. $\langle x, y\rangle = \langle y, x\rangle$ and $\Vert x\Vert = \sqrt{\langle x, x\rangle}$. Obvious.

Step 1. The function $(x, y) \mapsto \langle x, y\rangle$ is continuous with respect to $\Vert\cdot\Vert$. Continuity with respect to the norm $\Vert\cdot\Vert$ follows from the fact that addition and negation are $\Vert\cdot\Vert$-continuous, that the norm itself is continuous, and that sums and compositions of continuous functions are continuous.

Remark. This continuity property of the (putative) scalar product will only be used at the very end of step 3. Until then the solution consists of purely algebraic steps.

Step 2. We have $\langle x+y, z\rangle = \langle x, z\rangle + \langle y, z\rangle$. By the parallelogram law we have
$$2\Vert x+z\Vert^2 + 2\Vert y\Vert^2 = \Vert x+y+z\Vert^2 + \Vert x-y+z\Vert^2.$$
This gives
$$\begin{align*}\Vert x+y+z\Vert^2 &= 2\Vert x+z\Vert^2 + 2\Vert y\Vert^2 - \Vert x-y+z\Vert^2 \\ &= 2\Vert y+z\Vert^2 + 2\Vert x\Vert^2 - \Vert y-x+z\Vert^2\end{align*}$$
where the second formula follows from the first by exchanging $x$ and $y$. Since $a = b$ and $a = c$ imply $a = \frac{1}{2}(b+c)$ we get
$$\Vert x+y+z\Vert^2 = \Vert x\Vert^2 + \Vert y\Vert^2 + \Vert x+z\Vert^2 + \Vert y+z\Vert^2 - \frac{1}{2}\Vert x-y+z\Vert^2 - \frac{1}{2}\Vert y-x+z\Vert^2.$$
Replacing $z$ by $-z$ in the last equation gives
$$\Vert x+y-z\Vert^2 = \Vert x\Vert^2 + \Vert y\Vert^2 + \Vert x-z\Vert^2 + \Vert y-z\Vert^2 - \frac{1}{2}\Vert x-y-z\Vert^2 - \frac{1}{2}\Vert y-x-z\Vert^2.$$
Applying $\Vert w\Vert = \Vert -w\Vert$ to the two negative terms in the last equation we get
$$\begin{align*}\langle x+y, z\rangle &= \frac{1}{4}\left(\Vert x+y+z\Vert^2 - \Vert x+y-z\Vert^2\right) \\ &= \frac{1}{4}\left(\Vert x+z\Vert^2 - \Vert x-z\Vert^2\right) + \frac{1}{4}\left(\Vert y+z\Vert^2 - \Vert y-z\Vert^2\right) \\ &= \langle x, z\rangle + \langle y, z\rangle\end{align*}$$
as desired.

Step 3. $\langle\lambda x, y\rangle = \lambda\langle x, y\rangle$ for all $\lambda \in \mathbb{R}$. This clearly holds for $\lambda = -1$, and by step 2 and induction we have $\langle\lambda x, y\rangle = \lambda\langle x, y\rangle$ for all $\lambda \in \mathbb{N}$, thus for all $\lambda \in \mathbb{Z}$. If $\lambda = \frac{p}{q}$ with $p, q \in \mathbb{Z}$, $q \neq 0$, we get with $x' = \dfrac{x}{q}$ that
$$q\langle\lambda x, y\rangle = q\langle p x', y\rangle = p\langle q x', y\rangle = p\langle x, y\rangle,$$
so dividing this by $q$ gives
$$\langle\lambda x, y\rangle = \lambda\langle x, y\rangle \qquad \text{for all } \lambda \in \mathbb{Q}.$$
We have just seen that for fixed $x, y$ the continuous function $\displaystyle t \mapsto \frac{1}{t}\langle t x, y\rangle$ defined on $\mathbb{R}\smallsetminus\{0\}$ is equal to $\langle x, y\rangle$ for all $t \in \mathbb{Q}\smallsetminus\{0\}$, thus equality holds for all $t \in \mathbb{R}\smallsetminus\{0\}$. The case $\lambda = 0$ being trivial, we're done.

Step 4. The complex case. Define $\displaystyle\langle x, y\rangle = \frac{1}{4}\sum_{k=0}^{3} i^{k}\Vert x + i^k y\Vert^2$, observe that $\langle ix, y\rangle = i\langle x, y\rangle$ and $\langle x, y\rangle = \overline{\langle y, x\rangle}$, and apply the case of real scalars twice (to the real and imaginary parts of $\langle\cdot,\cdot\rangle$).

Addendum. In fact we can weaken the requirements of the Jordan-von Neumann theorem to
$$2\Vert x\Vert^2 + 2\Vert y\Vert^2 \leq \Vert x+y\Vert^2 + \Vert x-y\Vert^2.$$
Indeed, after the substitution $x \to \frac{1}{2}(x+y)$, $y \to \frac{1}{2}(x-y)$ and simplifications we get
$$\Vert x+y\Vert^2 + \Vert x-y\Vert^2 \leq 2\Vert x\Vert^2 + 2\Vert y\Vert^2,$$
which together with the previous inequality gives the equality.
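As a quick numerical sanity check (my addition, not part of the proof): the polarization formula recovers the dot product for the Euclidean norm, while e.g. the 1-norm fails the parallelogram law, so no inner product can induce it.

```python
import numpy as np

def inner(x, y):
    """Polarization identity: <x, y> = (||x+y||^2 - ||x-y||^2) / 4."""
    return (np.linalg.norm(x + y)**2 - np.linalg.norm(x - y)**2) / 4

x, y = np.array([1.0, 2.0, -3.0]), np.array([4.0, 0.5, 2.0])

# With the Euclidean norm, the recovered form is the usual dot product,
assert np.isclose(inner(x, y), x @ y)
# and the parallelogram law holds:
lhs = 2 * np.linalg.norm(x)**2 + 2 * np.linalg.norm(y)**2
rhs = np.linalg.norm(x + y)**2 + np.linalg.norm(x - y)**2
assert np.isclose(lhs, rhs)

# The 1-norm violates the parallelogram law for these vectors:
one = lambda v: np.linalg.norm(v, 1)
gap = 2 * one(x)**2 + 2 * one(y)**2 - one(x + y)**2 - one(x - y)**2
print(gap)  # nonzero, so ||.||_1 comes from no inner product
```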
From the Reuters article referenced:

Sarajevo, March 7 (Reuters) - European power grid lobby ENTSO-E urged Serbia and Kosovo to urgently resolve a dispute over their power grid, which has affected the broader European network, causing some digital clocks on the continent to lose time.

Figure 1. The ENTSO-E System Operations Committee has 5 permanent regional groups based on the synchronous areas (Continental Europe, Nordic, Baltic, Great Britain, and Ireland-Northern Ireland), and 2 voluntary regional groups (Northern Europe and Isolated Systems). Source: ENTSO-E.

The European grid shares power across borders. AC grids have to be kept 100% in sync if AC connections are used. Britain and Ireland, for example, are connected to the European grid by DC interconnectors, so each nation's grid can run asynchronously with the rest of Europe whilst sharing power.

The grid shared by Serbia and its former province Kosovo is connected to Europe's synchronized high-voltage power network.

As explained above.

ENTSO-E, which represents European electricity transmission operators, said the continental network had lost 113 gigawatt-hours (GWh) of energy since mid-January because Kosovo had been using more electricity than it generates. Serbia, which is responsible for balancing Kosovo's grid, had failed to do so, ENTSO-E said.

The energy hasn't been lost. It was never produced. According to netzfrequenzmessung.de (you might want to translate) the 113 GWh shortfall averages out to about 80 MW continuous on a total of 60 GW capacity. That's 0.13%. The scary thing is that we're actually maxed out and can't find an extra 0.13%!

The loss [sic] of energy had meant that electric clocks that are steered by the frequency of the power system, rather than by a quartz crystal, came to lag nearly six minutes behind, ENTSO-E said.

"Steered" is probably a mistranslation. "Regulated" would be better.
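The ~80 MW figure is easy to check (a sketch of my arithmetic; the ~59-day duration, mid-January to 7 March, is my assumption):

```python
# Back-of-envelope check of the 113 GWh shortfall as a continuous power.
energy_gwh = 113.0          # reported cumulative shortfall
days = 59                   # assumed: mid-January to 7 March
avg_power_mw = energy_gwh * 1000 / (days * 24)   # GWh -> MWh, / hours = MW
print(round(avg_power_mw, 1))                    # ≈ 79.8 MW
print(round(avg_power_mw / 60_000 * 100, 2))     # ≈ 0.13 % of 60 GW
```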
Many digital clocks, such as those in alarm clocks and in ovens or microwaves, use the frequency of the power grid to keep time. The problem emerges when the frequency drops over a sustained period of time.

Figure 2. An electro-mechanical timeswitch of the style popular with the utility companies.

Analogue, motorised clocks do too. The day/night clock on my electricity meter is over 40 years old, and it has a mains-powered clock with a self-rewinding clockwork UPS to keep it OK during power cuts!

ENTSO-E said the European network's frequency had deviated from its standard of 50 hertz (Hz) to 49.996 Hz since mid-January, resulting in 113 gigawatt-hours (GWh) [sic] of lost energy, although it had appeared to be returning to normal on Tuesday.

The frequency is not held constant to three decimal places for months on end. That might be an average figure. Here's the data for the last five minutes:

Figure 3. Note that the frequency deviation will be much wider over a longer time period. Source: mainsfrequency.com.

Figure 4. Network time deviation has increased from -100 s to -350 s in three weeks. Source: mainsfrequency.com.

Figure 5. [Wow!] In our previous measurement operation (July 2011 to 2017), network time deviations of ±160 seconds occurred (June 2013). But since January 3, 2018, the network time deviation has been continuously decreasing. Changing the setpoint for the secondary control power on January 15 from 50.000 Hz to 50.010 Hz has not yet been able to reduce the mains time deviation. Source: mainsfrequency.com.

Secondary control power is activated when the system is affected for longer than 30 seconds, or it is assumed that the system will be affected for a period longer than 30 seconds. Prior to this, deviations in the system are only covered through primary control. Source: apg.at.

"Deviation stopped yesterday after Kosovo took some steps but it will take some time to get the system back to normal," ENTSO-E spokeswoman Susanne Nies told Reuters. She said the risk could remain if there is no political solution to the problem.

If they start generating and feeding into the grid it will speed up.

The political dispute centres mainly on regulatory issues and a row between Serbia and Kosovo over grid operation.
It is further complicated by the fact that Belgrade still does not recognise Kosovo. "We will try to fix the technicalities by the end of this week but the question of who will compensate for this loss has to be answered," Nies said.

This doesn't make any sense to me. Energy flow is metered and billed accordingly. Each country pays for its imports.

ENTSO-E urged European governments and policymakers to take swift action and exert pressure on Kosovo and Serbia to resolve the issue, which is also hampering integration of the Western Balkans energy market required by the European Union. "These actions need to address the political side of this issue," ENTSO-E said in a statement. The grid operators in Serbia and Kosovo were not immediately available to comment. Kosovo seceded from Serbia in 2008. Both states want to join the European Union, but Brussels says they must normalize relations to forge closer ties with the bloc. Serbia and Kosovo signed an agreement on operating their power grid in 2015. However, it has not been implemented yet, as they cannot agree on power distribution in Kosovo amid conflicting claims about ownership of the grid, built when they were both part of Yugoslavia. (Writing by Maja Zuvela; Editing by Susan Fenton)

I guess neither of the above is an electrical engineer. Answering the questions:

How can a decrease in electricity production lead to a decrease of the frequency on the grid in the long term? Isn't the frequency a parameter controlled by the power plant at the end of the day?

If demand is approaching peak capacity, then we have to let either the voltage or the frequency droop if we wish to avoid disconnecting customers. Dropping the voltage will cause problems with certain loads and is to be avoided. The Reuters article fails to explain why the system average frequency has been low for so long. It can only be that it hasn't been able to run above 50 Hz for long enough to catch up. Off-peak seems the time to do this, but there will be an upper limit on the frequency deviation - about 50.5 Hz (but I don't have a definite number).

If the loss of power from some countries causes a frequency deviation, shouldn't we also observe other impacts, like a drop of the output voltage? Does this mean we've also been experiencing a drop of voltage for weeks here in Europe?
No, we reduce frequency to avoid the drop in voltage.

Why do some electric devices directly use the network frequency to sync their clocks, instead of quartz crystal technology?

They don't sync the clocks in the sense of adjusting or correcting the time. They maintain synchronisation by keeping the average frequency at exactly 50 Hz. One reason for this is the millions of electro-mechanical clocks in service. These are fantastically reliable, don't require batteries and do the job. Why replace them?

This means the same oven needs 2 different firmwares for countries with different electric network frequencies, while, with a crystal (that should be needed anyway to run all the embedded circuits), the same device would run unmodified everywhere.

Crystals will drift, and the further complication of a real-time clock with battery backup is required. Electrical utilities work on timescales of 20 to 50 years. How long do you think the electrolytic capacitors in your digital clock will last?

Links: ENTSO-E transmission system map. ENTSO-E FAQ on the matter.

Other interesting bits:

This grid time deviation is constantly balanced out. If the time deviation is more than twenty seconds, the frequency is corrected in the grid. In order to balance out the time deviation again, the otherwise customary frequency of 50 Hz (Europe) is changed as follows: 49.990 Hz if the grid time is running ahead of UTC time; 50.010 Hz if the grid time is lagging behind UTC time. Source: Swissgrid.

Meanwhile, on 2018-03-08:

ENTSO-E has now confirmed with the Serbian and Kosovar TSOs, respectively EMS and KOSTT, that the deviations which affected the average frequency in the synchronous area of Continental Europe have ceased. This is a first step in the resolution of the issue. The second step is now to develop a plan for returning the missing energy to the system and putting the situation back to normal. Source: ENTSO-E.

Hmmm! They're referring to it as "missing energy".
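The "nearly six minutes" is consistent with the quoted 49.996 Hz average: a frequency-regulated clock accumulates lag in proportion to the relative frequency deficit. A sketch (the ~50-day duration is my assumption, roughly matching Figure 4's -350 s):

```python
# Grid-time lag accumulated by a clock regulated by mains frequency.
f_nominal = 50.0
f_avg = 49.996              # reported average since mid-January
days = 50                   # assumed duration of the deviation
seconds = days * 86_400
lag = seconds * (f_nominal - f_avg) / f_nominal
print(round(lag, 1), "s =", round(lag / 60, 2), "min")  # 345.6 s ≈ 5.76 min
```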
How much voltage is dangerous is not really a static number, as it depends on your body resistance, time of exposure and source "stiffness" (i.e. how much current it can supply). You get figures like 60 V (or as low as 30 V) which are an attempt at an average figure above which "caution should be taken". However, depending on how "conductive" you are at any one time, sometimes e.g. 50 V might be quite safe and other times it may kill you. DC or AC (and what frequency) seem to make a difference too, female or male, etc. - this table is very instructive: figures as low as 20 mA across the heart are given as possibly capable of inducing fibrillation. Here is another table from the same source that gives body resistance based on different situations: you can see that as low as 20 V may be dangerous given the right conditions. Here is the reference the tables came from; I think it is quite accurate based on some experiments I have done myself measuring body resistances. The rest of the site seems to be generally very well informed and presented, from the bits I have read, so I think this may be quite a trustworthy source.
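The interplay between voltage and body resistance is just Ohm's law. A sketch (the resistance values here are illustrative assumptions, not taken from the tables above):

```python
def body_current_ma(volts, ohms):
    """Current through the body, I = V/R, in milliamps."""
    return volts / ohms * 1000

FIBRILLATION_MA = 20   # rough threshold cited for current across the heart

# Assumed illustrative resistances: dry skin ~100 kΩ, wet skin ~1 kΩ.
for volts in (20, 50, 230):
    for label, ohms in (("dry", 100_000), ("wet", 1_000)):
        i = body_current_ma(volts, ohms)
        flag = "DANGER" if i >= FIBRILLATION_MA else "ok"
        print(f"{volts:3d} V, {label}: {i:8.2f} mA  {flag}")
```

Even 20 V across a ~1 kΩ wet-skin path gives 20 mA, which is why the answer says 20 V can be dangerous "given the right conditions".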
The scenarios are impossible and would be laughable if they were not so serious. The evidence is in the phylogenetic trees. It's a bit like a crime scene when the forensics team investigate. We've done enough "crime scenes" - often going to the site, collecting the pathogen, sequencing and then analysis (usually neglected diseases) - without any associated conspiracy theories. The key technical issue is that coronaviruses are zoonoses, pathogens spread to humans from animal reservoirs, and a phylogenetic tree really helps understand how the virus is transmitted.

Trees

The key thing about all the trees is bats. Bat lineages are present at every single point of the betacoronavirus phylogeny (tree), both as paraphyletic and monophyletic lineages; one example is this tree of betacoronaviruses here. Meaning the nodes connecting the branches of the tree to the "master branch" represent common ancestors, and these were almost certainly bat-borne coronaviruses. This is especially true for SARS - here, bat viruses are everywhere. The tree here also shows that SARS arose independently on two occasions, again surrounded by bat lineages, and 2019-nCoV has emerged separately at least once, again associated with bats. Finally, the tree below - a figure from the bioRxiv preprint Zhou et al. (2020), "Discovery of a novel coronavirus associated with the recent pneumonia outbreak in humans and its potential bat origin" - shows the 2019-nCoV lineage is a direct descendant of a very closely related virus isolated from a bat (RaTG13*). This is a really conclusive finding, btw. Note, I don't normally present inline images, but it is such a nice finding (hint to reviewers) and bioRxiv is open access.

Conspiracy theory 1: laboratory-made virus

Literally, it would require someone passaging a new virus, with unknown human pathogenicity, and independently introducing all the earlier passages en masse across the bat populations of China.
They would then hope each lineage becomes an independent virus population before then introducing the virus to humans. Thus, when field teams of scientists go around using mist nets to trap the bats, buy them from markets, isolate the virus and sequence it, they would find a beautiful array of natural variation in the bat populations leading up to the human epidemics that perfectly matches vast numbers of other viral zoonoses. Moreover, this would have to have happened substantially prior to SARS and 2019-nCoV, because the bat betacoronaviruses have been known about prior to both epidemics. Viz., it's simply not feasible.

Biological explanation

General: bats are a reservoir host to vast numbers of pathogens, particularly viruses, including many alphaviruses, flaviviruses and rabies virus; they are believed to be important in ebolavirus (I don't know about this) and even important to several eukaryotic parasites. It makes sense: they are mammals, so evolutionarily much closer to us than birds for example, with large dispersal potential, and they roost in 'overcrowded' areas, enabling rapid transmission between bats.

Technical: the trees show bats are the common ancestor of betacoronaviruses, in particular for the lineage leading into the emergence of 2019-nCoV and SARS; this is seen in this tree, this one and the tree above. The obvious explanation is that the virus circulates endemically in bats and has jumped into humans. For SARS the intermediate host, or possible "vector", was civet cats. The theory and the observations fit into a seamless biological answer.

Conspiracy theory 2: Middle Eastern connection

I heard a very weird conspiracy theory attempting to connect MERS with 2019-nCoV. The theory was elaborate and I don't think it is productive to describe it here.

Biological explanation

All the trees of betacoronaviruses show MERS was one of the earliest viruses to diverge and is very distant from 2019-nCoV, to the extent the theory is completely implausible. The homology between these viruses is 50%, so it's either MERS or 2019-nCoV. It's more extreme than mixing up yellow fever virus (mortality 40-80%) with West Nile virus (mortality <<0.1%); the two viruses are completely different at every level.

What about errors? Phylogeneticists can spot them a mile off. There are tell-tale phylogenetic signatures we pick up, but we also do this to assess 'rare' genetic phenomena. There is nothing 'rare' about the coronaviruses.
The only anomaly is variation in the poly-A tail, and that is the natural variation seen in in vitro time-series experiments. Basically, we've looked at enough viruses/parasites through trees that have no conspiracy theories at all (often neglected diseases), and we understand how natural variation operates - so a phylogeneticist can sift the wheat from the chaff without really thinking about it.

Opinion

The conspiracy theories are deeply misplaced, and the only connection I can imagine is that it's China. However, the Chinese have loads of viruses, influenza in particular, which causes major pandemics, but that is a consequence of their natural ecology (small-holder farming) allowing the virus to move between reservoir hosts. I've not visited small-holder farms in China, but I have in other parts of the world, and when you see them, you get it: the pigs, chickens (ducks... China), dogs, horses and humans all living within 10 metres of each other.

Conclusion

Consider the shipping of large numbers of bats to market, bat soup, and raw meat from arboreal (tree-living) mammals such as civets that are sympatric with bats. Then consider the classical epidemiology in light of the phylogenetic data, which is very consistent: a single picture emerges that this coronavirus is one of many zoonoses which has managed to transmit between patients.

Summary

The fundamental point is that the bioinformatics fit into the classical epidemiology of a zoonosis.

*Note: the bat coronavirus RaTG13 predates the 2019-nCoV outbreak by 7 years. It is not even clear whether the virus has been isolated, i.e. it could just be an RNA sequence. "They have found some 500 novel coronaviruses, about 50 of which fall relatively close to the SARS virus on the family tree, including RaTG13 - it was fished out of a bat fecal sample they collected in 2013 from a cave in Moglang in Yunnan province." Cohen, "Mining coronavirus genomes for clues to the outbreak's origins", Feb. 2020, Science magazine.
As far as I know, the commutator relations make a theory quantum. If all observables commute, the theory is classical. If some observables have non-zero commutators (no matter if they are proportional to $\hbar$ or not), the theory is quantum. Intuitively, what makes a theory quantum is the fact that observations affect the state of the system. In some sense, this is encoded in the commutator relations: the order of the measurements affects their outcome; the first measurement affects the result of the second one.
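A minimal numerical illustration (my sketch, not part of the original answer): the Pauli spin matrices fail to commute, $[\sigma_x, \sigma_y] = 2i\sigma_z$, whereas any two diagonal ("classically compatible") observables commute, so measurement order is irrelevant for them.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# Non-zero commutator: the order of measurements matters (quantum).
assert np.allclose(comm(sx, sy), 2j * sz)

# Diagonal observables commute: order irrelevant (classical behaviour).
d1, d2 = np.diag([1.0, 2.0]), np.diag([5.0, -3.0])
assert np.allclose(comm(d1, d2), 0)
```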
No, UART and RS-232 are not the same. A UART is responsible for sending and receiving a sequence of bits. At the output of a UART these bits are usually represented by logic-level voltages. These bits can become RS-232, RS-422, RS-485, or perhaps some proprietary spec. RS-232 specifies voltage levels. Notice that some of these voltage levels are negative, and they can also reach ±15 V. The larger voltage swing makes RS-232 more resistant to interference (albeit only to some extent). A microcontroller UART cannot generate such voltage levels by itself. This is done with the help of an additional component: an RS-232 line driver. A classic example of an RS-232 line driver is the MAX232. If you go through the datasheet, you'll notice that this IC has a charge pump, which generates ±10 V from +5 V. (source)
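The division of labour can be sketched in code (a simplified model under assumed 8N1 framing and ±12 V driver levels): the UART's job is the bit sequence, while the line standard's job is how each bit appears as a voltage. Note RS-232 is inverted: logic 1 (mark) is a negative voltage, logic 0 (space) a positive one.

```python
def uart_frame(byte):
    """8N1 UART frame: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]

def rs232_levels(bits, swing=12.0):
    """Map logic bits to RS-232 line voltages: mark (1) -> -V, space (0) -> +V."""
    return [-swing if b else +swing for b in bits]

frame = uart_frame(0x41)       # ASCII 'A' = 0b01000001
print(frame)                   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(rs232_levels(frame))     # +12 V for each 0, -12 V for each 1
```

In hardware the second function is exactly the role of the MAX232-style line driver; the UART itself only ever sees the logic levels.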
This algorithm can be rewritten like this:

Scan a until you find an inversion. If you find one, swap and start over. If there is none, terminate.

Now there can be at most $\binom{n}{2} \in \Theta(n^2)$ inversions, and you need a linear-time scan to find each - so the worst-case running time is $\Theta(n^3)$. A beautiful teaching example, as it trips up the pattern-matching approach many succumb to!

Nota bene: one has to be a little careful. Some inversions appear early, some late, so it is not per se trivial that the costs add up as claimed (for the lower bound). You also need to observe that swaps never introduce new inversions. A more detailed analysis of the case with the inversely sorted array will then yield something like the quadratic case of Gauss' formula.

As @gnasher729 aptly comments, it's easy to see the worst-case running time is $\Omega(n^3)$ by analyzing the running time when sorting the input $[1, 2, \dots, n, 2n, 2n-1, \dots, n+1]$ (though this input is probably not the worst case).

Be careful: don't assume that a reversely-sorted array will necessarily be the worst-case input for all sorting algorithms. That depends on the algorithm. There are some sorting algorithms where a reversely-sorted array isn't the worst case, and might even be close to the best case.
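The restart-on-swap behaviour described above can be sketched directly (helper name and comparison counter are my additions, to make the cost visible):

```python
def restart_sort(a):
    """Scan for an adjacent inversion; swap it and restart from the front."""
    a = list(a)
    comparisons = 0
    restart = True
    while restart:
        restart = False
        for i in range(len(a) - 1):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                restart = True
                break              # found an inversion: start the scan over
    return a, comparisons

# The input from the answer: [1..n] followed by [2n, 2n-1, ..., n+1].
n = 8
bad = list(range(1, n + 1)) + list(range(2 * n, n, -1))
sorted_a, c = restart_sort(bad)
assert sorted_a == sorted(bad)
print(c)  # each of the Θ(n²) inversions in the back half costs an Ω(n) re-scan
```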
I like George Bergman's explanation (beginning in section 7.4 of his Invitation to General Algebra and Universal Constructions). We start with a motivating example.

Suppose you are interested in solving $x^2 = -1$ in $\mathbb{Z}$. Of course, there are no solutions, but let's ignore that annoying reality for a moment. We use the notation $\mathbb{Z}_n$ for $\mathbb{Z}/n\mathbb{Z}$.

The equation has a solution in the ring $\mathbb{Z}_5$ (in fact, two: both $2$ and $3$, which are the same up to sign). So we want to find a solution to $x^2 = -1$ in $\mathbb{Z}$ which satisfies $x \equiv 2 \pmod 5$. An integer that is congruent to $2$ modulo $5$ is of the form $5y + 2$, so we can rewrite our original equation as $(5y+2)^2 = -1$, and expand to get $25y^2 + 20y = -5$. That means $20y \equiv -5 \pmod{25}$, or $4y \equiv -1 \pmod 5$, which has the unique solution $y \equiv 1 \pmod 5$. Substituting back, we determine $x$ modulo $25$:
$$x = 5y + 2 \equiv 5 \cdot 1 + 2 = 7 \pmod{25}.$$
Continue this way: putting $x = 25z + 7$ into $x^2 = -1$ we conclude $z \equiv 2 \pmod 5$, so $x \equiv 57 \pmod{125}$. Using Hensel's lemma, we can continue this indefinitely. What we deduce is that there is a sequence of residues,
$$x_1 \in \mathbb{Z}_5, \quad x_2 \in \mathbb{Z}_{25}, \quad \ldots, \quad x_i \in \mathbb{Z}_{5^i}, \ldots$$
each of which satisfies $x^2 = -1$ in the appropriate ring, and which are "consistent", in the sense that each $x_{i+1}$ is a lifting of $x_i$ under the natural homomorphisms
$$\cdots \stackrel{f_{i+1}}{\longrightarrow} \mathbb{Z}_{5^{i+1}} \stackrel{f_i}{\longrightarrow} \mathbb{Z}_{5^i} \stackrel{f_{i-1}}{\longrightarrow} \cdots \stackrel{f_2}{\longrightarrow} \mathbb{Z}_{5^2} \stackrel{f_1}{\longrightarrow} \mathbb{Z}_5.$$
Take the set of all strings $(\ldots, x_i, \ldots, x_2, x_1)$ such that $x_i \in \mathbb{Z}_{5^i}$ and $f_i(x_{i+1}) = x_i$, $i = 1, 2, \ldots$. This is a ring under componentwise operations. What we did above shows that in this ring, you do have a square root of $-1$.

Added. Bergman here inserts the quote, "If the fool will persist in his folly, he will become wise." We obtained the sequence by stubbornly looking for a solution to an equation that has no solution, by looking at putative approximations, first modulo 5, then modulo 25, then modulo 125, etc. We foolishly kept going even though there was no solution to be found. In the end, we get a "full description" of what that object must look like; since we don't have a ready-made object that satisfies this condition, we simply take this "full description" and use that description as if it were an object itself. By persisting in our folly of looking for a solution, we have become wise by introducing an entirely new object that is a solution. This is much along the lines of taking a Cauchy sequence of rationals, which "describes" a limit point, and using the entire Cauchy sequence to represent this limit point
, even if that limit point does not exist in our original set.

This ring is the $5$-adic integers; since an integer is completely determined by its remainders modulo the powers of $5$, this ring contains an isomorphic copy of $\mathbb{Z}$. Essentially, we are taking successive approximations to a putative answer to the original equation, by first solving it modulo $5$, then solving it modulo $25$ in a way that is consistent with our solution modulo $5$, then solving it modulo $125$ in a way that is consistent with our solution modulo $25$, etc.

The ring of $5$-adic integers projects onto each $\mathbb{Z}_{5^i}$ via the projections; because the elements of the $5$-adic integers are consistent sequences, these projections commute with our original maps $f_i$. So the projections are compatible with the $f_i$ in the sense that for all $i$, $f_i \circ \pi_{i+1} = \pi_i$, where $\pi_k$ is the projection onto the $k$th coordinate from the $5$-adics. Moreover, the ring of $5$-adic integers is universal for this property: given any ring $R$ with homomorphisms $r_i \colon R \to \mathbb{Z}_{5^i}$ such that $f_i \circ r_{i+1} = r_i$, for any $a \in R$ the tuple of images $(\ldots, r_i(a), \ldots, r_2(a), r_1(a))$ defines an element in the $5$-adics. The $5$-adics are the inverse limit of the system of maps
$$\cdots \stackrel{f_{i+1}}{\longrightarrow} \mathbb{Z}_{5^{i+1}} \stackrel{f_i}{\longrightarrow} \mathbb{Z}_{5^i} \stackrel{f_{i-1}}{\longrightarrow} \cdots \stackrel{f_2}{\longrightarrow} \mathbb{Z}_{5^2} \stackrel{f_1}{\longrightarrow} \mathbb{Z}_5.$$
So the elements of the inverse limit are "consistent sequences" of partial approximations, and the inverse limit is a way of taking all these "partial approximations" and combining them into a "target object."

More generally, assume that you have a system of, say, rings, $\{R_i\}$, indexed by a directed set $(I, \leq)$ (so that for all $i, j \in I$ there exists $k \in I$ such that $i, j \leq k$), and a system of maps $f_{rs} \colon R_s \to R_r$ whenever $r \leq s$ which are "consistent" (if $r \leq s \leq t$, then $f_{rs} \circ f_{st} = f_{rt}$), and let's assume that the $f_{rs}$ are surjective, as they were in the example of the $5$-adics. Then you can think of the $R_i$ as being "successive approximations" (with a higher-indexed $R_i$ being a "finer" or "better" approximation than a lower-indexed one). The directedness of the index set guarantees that given any two approximations, even if they are not directly comparable to one another, you can combine them into an approximation which is finer (better) than each of them (if $i, j$ are incomparable, then find a $k$ with $i, j \leq k$). The inverse limit is a way to combine all of these approximations into an object in a consistent manner. If you imagine your maps as going right to left, you have a branching tree that is getting "thinner" as you move left, and the inverse limit is the combination of all branches occurring "at infinity".

Added. The example of the $p$-adic integers may be a bit misleading, because our directed set is totally ordered and all maps are surjective. In the more general case, you can think of every chain in the directed set as a "line of approximation"; the directed property
|
https://api.stackexchange.com
|
ensures that any finite number of " lines of approximation " will meet in " finite time ", but you may need to go all the way to " infinity " to really put all the lines of approximation together. the inverse limit takes care of this. if the directed set has no maximal elements, but the structure maps are not surjective, it turns out that no element that is not in the image will matter ; essentially, that element never shows up in a net of " successive approximations ", so it never forms part of a " consistent system of approximations " ( which is what the elements of the inverse limit are ).
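The "successive approximations" picture can be made concrete with a short computation. The sketch below (the function name is my own invention) builds the first few terms of a consistent sequence solving $x^2 \equiv -1 \pmod{5^i}$, that is, a square root of $-1$ in the $5$-adic integers, by Newton/Hensel lifting:

```python
# Sketch: a 5-adic integer as a consistent sequence of solutions to
# x^2 + 1 ≡ 0 (mod 5^i), lifted level by level.

def lift_sqrt_minus_one(levels):
    """Return x_1, ..., x_levels with x_i^2 ≡ -1 (mod 5^i)
    and x_{i+1} ≡ x_i (mod 5^i)."""
    x = 2                       # 2^2 = 4 ≡ -1 (mod 5)
    sols = [x]
    for i in range(1, levels):
        mod = 5 ** (i + 1)
        # Newton/Hensel step: x <- x - f(x)/f'(x) with f(x) = x^2 + 1
        x = (x - (x * x + 1) * pow(2 * x, -1, mod)) % mod
        sols.append(x)
    return sols

sols = lift_sqrt_minus_one(4)

# Each approximation solves the congruence at its level...
for i, x in enumerate(sols, start=1):
    assert (x * x + 1) % 5 ** i == 0
# ...and is consistent with the previous one, exactly the condition
# defining an element of the inverse limit.
for i in range(len(sols) - 1):
    assert sols[i + 1] % 5 ** (i + 1) == sols[i]
```

Each `sols[i]` is the image of the same $5$-adic number under the projection $\pi_{i+1}$; the compatibility assertions are the statement $f_i \circ \pi_{i+1} = \pi_i$ in miniature.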
I love this question, because it's a very simple demonstration of how to do science. While it's true that in science one should never accept anything 'without evidence', it's also true that blind skepticism of everything and anything gets one nowhere; skepticism has to be combined with rational inquiry. Your date has gotten the 'skepticism' part of science, but she's failed to grasp the equally crucial part where one looks at the evidence and thinks about what the evidence implies. You cannot just refuse to think or accept evidence. If your goal is to learn nothing, then nothing is what you'll learn.

There are many, many ways of verifying that the Earth is not flat, and most of them are easy to think about and verify. You certainly do not need to go to space to realize the Earth is round!

If the Earth is flat, why can't you see Mt. Kilimanjaro from your house? Mt. Kilimanjaro is tall, probably taller than anything in your immediate neighborhood (unless you live in a very deep valley), so the question is: why wouldn't you be able to see it from anywhere on Earth? Or, for that matter, why can't you see it from even closer? You have to be really close, in planetary terms, to be able to see it. This wouldn't be true if the Earth were flat! One might argue that this is just because of the scattering of the atmosphere: distant objects appear paler, so probably after some distance you can't see anything at all. So then let's think about things that are closer. Stand on the ground and the horizon appears only a few km away. Go to the top of a hill, or a large tower, and suddenly you can see things much farther away. Why is this the case if the Earth is flat? Why would your height above ground have anything to do with it? If I raise or lower my eyes with respect to a flat table, I can still see everything on that table. The 'horizon' of the table never appears closer.

If the Earth is flat, why do time zones exist? Hopefully your date realizes that time zones exist. If not, it's pretty easy to verify by doing a video call with someone in a distant location. The reason for time zones, of course, is that the sun sets and rises at different times at different parts of the globe. Why would this be the case? On a flat Earth, the sun would rise and set at the same time everywhere.

If the Earth is flat, why is the moon round? The moon is round and not a flat disc, as you can see by the librations of the moon. What makes the Earth special, then? Further, all the planets are round, although to verify this you need a good telescope. Again, what makes the Earth special?

If the Earth is flat, then what is on its 'underside'? Hanging dirt and leaves? A large tree? Turtles? Those who reject the roundness of the Earth either have no explanation, or their explanation rests on much less solid ground than the pro-round arguments (which, of course, is because the Earth is not flat). If there were 'nothing' under the Earth, then lunar eclipses would make no sense, as the Earth needs to be between the moon and the sun.

Edit: as to the question of whether the Earth is round or some weird hemisphere/pear/donut shape, among other things those would all lead to a situation where gravity is wrong. For a hemisphere, for example, gravity would not point down (towards the Earth) at any point on the Earth's surface unless you were sitting right at the top of the hemisphere. Similar arguments can be made for the other shapes. Sure, it's possible to make it 'work' by doing even stranger things like altering the distribution of mass and so on, but at that point you've gone very far into violating Occam's razor.
My colleague, Ido Segev, pointed out that there is a problem with most of the elegant proofs here: Tetris is not just a problem of tiling a rectangle. Below is his proof that the conjecture is, in fact, false.
To be honest, the line between the two is almost gone nowadays, and there are processors that can be classified as both (the Analog Devices Blackfin, for instance). Generally speaking:

Microcontrollers are integer math processors with an interrupt subsystem. Some have hardware multiplication units, some don't, etc. The point is that they are designed for simple math, and mostly to control other devices.

DSPs are processors optimized for streaming signal processing. They often have special instructions that speed up common tasks, such as a multiply-accumulate in a single instruction. They also often have other vector or SIMD instructions. Historically they weren't interrupt-based systems, and they operated with non-standard memory systems optimized for their purpose, making them more difficult to program. They were usually designed to operate in one big loop, processing a data stream. DSPs can be designed as integer, fixed-point, or floating-point processors.

Historically, if you wanted to process audio streams, video streams, do fast motor control, or anything else that required processing a stream of data at high speed, you would look to a DSP. If you wanted to control some buttons, measure a temperature, run a character LCD, or control other ICs which are processing things, you'd use a microcontroller.

Today, you mostly find general-purpose microcontroller-type processors with either built-in DSP-like instructions or with on-chip co-processors to deal with streaming data or other DSP operations. You don't see pure DSPs used much anymore except in specific industries. The processor market is much broader and blurrier than it used to be. For instance, I hardly consider an ARM Cortex-A8 SoC a microcontroller, but it probably fits the standard definition, especially in a PoP package.

Edit: figured I'd add a bit to explain when and where I've used DSPs even in the days of application processors. A recent product I designed was doing audio processing with X channels of input and X channels of output per 'zone'. The intended use for the product meant that it would oftentimes sit there doing its thing, processing the audio channels for years without anyone touching it. The audio processing consisted of various acoustical filters and functions. The system was also "hot-pluggable", with the ability to add some number of independent 'zones', all in one box. It was a total of 3 PCB designs (mainboard, a backplane, and a plug-in module), and the backplane supported 4 plug-in modules. Quite a fun project, as I was doing it solo: I got to do the system design, schematic, PCB layout, and firmware.

Now, I could have done the entire thing with a single bulky ARM core; I only needed about 50 MIPS of DSP work on 24-bit fixed-point numbers per zone. But because I knew this system would operate for an extremely long time, and knew it was critical that it never click or pop or anything like that, I chose to implement it with a low-power DSP per zone and a single PIC microcontroller that played the system-management role. This way, even if one of the microcontroller's functions crashed (say, from a DDoS attack on its Ethernet port), the DSPs would happily keep chugging away, and it's likely no one would ever know. So the microcontroller played the role of running the 2-line character LCD, some buttons, temperature monitoring, and fan control (there were also some fairly high-power audio amplifiers on each board), and it even served an AJAX-style web page via Ethernet. It also managed the DSPs via a serial connection. So that's a situation where, even in the days when I could have used a single ARM core to do everything, the design dictated a dedicated signal-processing IC.

Other areas where I've run into DSPs:

* High-end audio: very high-end receivers and concert-quality mixing and processing gear
* Radar processing (I've also used ARM cores for this in low-end applications)
* Sonar processing
* Real-time computer vision

For the most part, the low and mid ends of the audio/video/similar space have been taken over by application processors, which combine a general-purpose CPU with co-processor offload engines for various applications.
It is true that, from an outside perspective, nothing can ever pass the event horizon. I will attempt to describe the situation as best I can, to the best of my knowledge.

First, let's imagine a classical black hole. By "classical" I mean a black-hole solution to Einstein's equations, which we imagine not to emit Hawking radiation (for now). Such an object would persist forever. Let's imagine throwing a clock into it. We will stand a long way from the black hole and watch the clock fall in. What we notice as the clock approaches the event horizon is that it slows down compared to our clock. In fact its hands will asymptotically approach a certain time, which we might as well call 12:00. The light from the clock will also slow down, becoming redshifted quite rapidly towards the radio end of the spectrum. Because of this redshift, and because we can only ever see photons emitted by the clock before it struck twelve, it will rapidly become very hard to detect. Eventually it will get to the point where we'd have to wait billions of years between photons. Nevertheless, as you say, it is always possible in principle to detect the clock, because it never passes the event horizon.

I had the opportunity to chat with a cosmologist about this subject a few months ago, and what he said was that this redshifting towards undetectability happens very quickly. (I believe the "no-hair theorem" provides the justification for this.) He also said that the black hole with an essentially undetectable object just outside its event horizon is a very good approximation to a black hole of a slightly larger mass.

(At this point I want to note in passing that any "real" black hole will emit Hawking radiation until it eventually evaporates away to nothing. Since our clock will still not have passed the event horizon by the time this happens, it must eventually escape, although presumably the Hawking radiation interacts with it on the way out. Presumably, from the clock's perspective, all those billions of years of radiation will appear in the split second before 12:00, so it won't come out looking much like a clock any more. To my mind the resolution to the black hole information paradox lies along this line of reasoning, and not in any specifics of string theory. But of course that's just my opinion.)

Now, this idea seems a bit weird (to me, and I think to you as well), because if nothing ever passes the event horizon, how can there ever be a black hole in the first place? My friendly cosmologist's answer boiled down to this: the black hole itself is only ever an approximation. When a bunch of matter collapses in on itself, it very rapidly converges towards something that looks like a black-hole solution to Einstein's equations, to the point where to all intents and purposes you can treat the matter as if it were inside the event horizon rather than outside it. But this is only ever an approximation, because from our perspective none of the infalling matter can ever pass the event horizon.
First part: it won't decide the issue, but the organic chemistry text by Clayden, Greeves, Warren and Wothers also mentions that the matter might not be as clear-cut as the majority of your textbooks make it seem. This might strengthen the position of the textbook you're using a bit. But again, no references are given. Here is the relevant passage (especially the last two paragraphs):

Second part: I have found the following passage on the formation of halohydrins from epoxides in the book by Smith and March (7th edition), chapter 10-50, page 507:

Unsymmetrical epoxides are usually opened to give mixtures of regioisomers. In a typical reaction, the halogen is delivered to the less sterically hindered carbon of the epoxide. In the absence of this structural feature, and in the absence of a directing group, relatively equal mixtures of regioisomeric halohydrins are expected. The phenyl is such a group, and in 1-phenyl-2-alkyl epoxides reaction with $\ce{POCl3}$/$\ce{DMAP}$ ($\ce{DMAP}$ = 4-dimethylaminopyridine) leads to the chlorohydrin with the chlorine on the carbon bearing the phenyl.${}^{1231}$ When done in an ionic liquid with $\ce{Me3SiCl}$, styrene epoxide gives 2-chloro-2-phenylethanol.${}^{1232}$ The reaction of thionyl chloride and poly(vinylpyrrolidinone) converts epoxides to the corresponding 2-chloro-1-carbinol.${}^{1233}$ Bromine with a phenylhydrazine catalyst, however, converts epoxides to the 1-bromo-2-carbinol.${}^{1234}$ An alkenyl group also leads to a halohydrin with the halogen on the carbon bearing the $\ce{C=C}$ unit.${}^{1235}$ Epoxy carboxylic acids are another example. When $\ce{NaI}$ reacts at pH 4, the major regioisomer is the 2-iodo-3-hydroxy compound, but when $\ce{InCl3}$ is added, the major product is the 3-iodo-2-hydroxy carboxylic acid.${}^{1236}$

References:
${}^{1231}$ Sartillo-Piscil, F.; Quinero, L.; Villegas, C.; Santacruz-Juarez, E.; de Parrodi, C. A. Tetrahedron Lett. 2002, 43, 15.
${}^{1232}$ Xu, L.-W.; Li, L.; Xia, C.-G.; Zhao, P.-Q. Tetrahedron Lett. 2004, 45, 2435.
${}^{1233}$ Tamami, B.; Ghazi, I.; Mahdavi, H. Synth. Commun. 2002, 32, 3725.
${}^{1234}$ Sharghi, H.; Eskandari, M. M. Synthesis 2002, 1519.
${}^{1235}$ Ha, J. D.; Kim, S. Y.; Lee, S. J.; Kang, S. K.; Ahn, J. H.; Kim, S. S.; Choi, J.-K. Tetrahedron Lett. 2004, 45, 5969.
${}^{1236}$ Fringuelli, F.; Pizzo, F.; Vaccaro, L. J. Org. Chem. 2001, 66, 4719. Also see Concellon, J. M.; Bardales, E.; Concellon, C.; Garcia-Granda, S.; Diaz, M. R. J. Org. Chem. 2004, 69, 6923.
You need to add a couple more questions: (c) what dielectric should I use, and (d) where do I place the capacitor in my layout? The amount and size vary by application.

For power supply components, the ESR (equivalent series resistance) is a critical parameter. For example, the MC33269 LDO datasheet lists an ESR recommendation of 0.2 Ω to 10 Ω. There is a minimum amount of ESR required for stability.

For most logic ICs and op-amps, I use a 0.1 µF ceramic capacitor. I place the capacitor very close to the IC so that there is a very short path from the capacitor leads to ground. I use extensive ground and power planes to provide low-impedance paths.

For power supply and high-current components, each application is different. I follow the manufacturer's recommendations and place the capacitors very close to the IC. For bulk filtering of power inputs coming into the board, I will typically use a 10 µF ceramic X7R capacitor. Again, this varies with the application.

Unless there is a minimum ESR requirement for stability, or I need very large values of capacitance, I will use either X7R or X5R dielectrics. Capacitance varies with voltage and temperature. Currently it is not difficult to get affordable 10 µF ceramic capacitors. You do not need to over-specify the voltage rating on ceramic capacitors: at the rated voltage, the capacitance is within the tolerance range. Unless you increase the voltage above the dielectric breakdown, you are only losing capacitance. Typically the dielectric strength is 2 to 3 times the rated voltage.

There is a very good application note about grounding and decoupling by Paul Brokaw called "An IC Amplifier User's Guide to Decoupling, Grounding, and Making Things Go Right for a Change".
The moving average filter (sometimes known colloquially as a boxcar filter) has a rectangular impulse response:

$$h[n] = \frac{1}{N} \sum_{k=0}^{N-1} \delta[n-k]$$

or, stated differently:

$$h[n] = \begin{cases} \frac{1}{N}, & 0 \le n < N \\ 0, & \text{otherwise} \end{cases}$$

Remembering that a discrete-time system's frequency response is equal to the discrete-time Fourier transform of its impulse response, we can calculate it as follows:

$$\begin{align} H(\omega) &= \sum_{n=-\infty}^{\infty} h[n] e^{-j\omega n} \\ &= \frac{1}{N} \sum_{n=0}^{N-1} e^{-j\omega n} \end{align}$$

To simplify this, we can use the known formula for the sum of the first $N$ terms of a geometric series:

$$\sum_{n=0}^{N-1} e^{-j\omega n} = \frac{1 - e^{-j\omega N}}{1 - e^{-j\omega}}$$

What we're most interested in for your case is the magnitude response of the filter, $|H(\omega)|$. Using a couple of simple manipulations, we can get that in an easier-to-comprehend form:

$$\begin{align} H(\omega) &= \frac{1}{N} \sum_{n=0}^{N-1} e^{-j\omega n} \\ &= \frac{1}{N} \frac{1 - e^{-j\omega N}}{1 - e^{-j\omega}} \\ &= \frac{1}{N} \frac{e^{-j\omega N/2}}{e^{-j\omega/2}} \frac{e^{j\omega N/2} - e^{-j\omega N/2}}{e^{j\omega/2} - e^{-j\omega/2}} \end{align}$$

This may not look any easier to understand. However, due to Euler's identity, recall that:

$$\sin(\omega) = \frac{e^{j\omega} - e^{-j\omega}}{j2}$$

Therefore, we can write the above as:

$$\begin{align} H(\omega) &= \frac{1}{N} \frac{e^{-j\omega N/2}}{e^{-j\omega/2}} \frac{j2\sin\left(\frac{\omega N}{2}\right)}{j2\sin\left(\frac{\omega}{2}\right)} \\ &= \frac{1}{N} \frac{e^{-j\omega N/2}}{e^{-j\omega/2}} \frac{\sin\left(\frac{\omega N}{2}\right)}{\sin\left(\frac{\omega}{2}\right)} \end{align}$$

As I stated before, what you're really concerned about is the magnitude of the frequency response, so we can take the magnitude of the above to simplify it further:

$$|H(\omega)| = \frac{1}{N} \left| \frac{\sin\left(\frac{\omega N}{2}\right)}{\sin\left(\frac{\omega}{2}\right)} \right|$$

Note: we are able to drop the exponential terms because they don't affect the magnitude of the result; $|e^{j\omega}| = 1$ for all values of $\omega$. Since $|xy| = |x||y|$ for any two finite complex numbers $x$ and $y$, the presence of the exponential terms doesn't affect the overall magnitude response (instead, they affect the system's phase response).

The resulting function inside the magnitude brackets is a form of a Dirichlet kernel. It is sometimes called a periodic sinc function, because it resembles the sinc function somewhat in appearance, but is periodic instead.

Anyway, since the definition of cutoff frequency is somewhat underspecified (the -3 dB point? the -6 dB point? the first sidelobe null?), you can use the above equation to solve for whatever you need. Specifically, you can do the following:

1. Set $|H(\omega)|$ to the value corresponding to the filter response that you want at the cutoff frequency.
2. Set $\omega$ equal to the cutoff frequency. To map a continuous-time frequency to the discrete-time domain, remember that $\omega = 2\pi\frac{f}{f_s}$, where $f_s$ is your sample rate.
3. Find the value of $N$ that gives you the best agreement between the left- and right-hand sides of the equation. That should be the length of your moving average.
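As a sketch of the three steps just described (the sample rate `fs` and cutoff `f_c` below are assumed example values, and the function name is my own):

```python
import numpy as np

def ma_magnitude(omega, N):
    """|H(omega)| = (1/N) |sin(omega*N/2) / sin(omega/2)|,
    taking the limiting value 1 where sin(omega/2) = 0."""
    omega = np.asarray(omega, dtype=float)
    den = np.sin(omega / 2)
    with np.errstate(invalid="ignore", divide="ignore"):
        mag = np.abs(np.sin(omega * N / 2) / den) / N
    return np.where(np.isclose(den, 0.0), 1.0, mag)

fs = 1000.0                     # assumed sample rate, Hz
f_c = 50.0                      # assumed desired -3 dB cutoff, Hz
omega_c = 2 * np.pi * f_c / fs  # step 2: map to discrete-time frequency
target = 10 ** (-3 / 20)        # step 1: -3 dB as a linear magnitude

# Step 3: pick the N whose response at omega_c is closest to the target.
best_N = min(range(2, 101),
             key=lambda N: abs(float(ma_magnitude(omega_c, N)) - target))
print(best_N)                   # 9 for these example values
```

For these numbers the search lands on $N = 9$, consistent with the approximate rule of thumb $N \approx 0.443\,f_s/f_c$ for the -3 dB point of a moving average.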
A heap just guarantees that elements on higher levels are greater (for a max-heap) or smaller (for a min-heap) than elements on lower levels, whereas a BST guarantees order (from "left" to "right"). If you want sorted elements, go with a BST. Source:

A heap is better at findMin/findMax ($O(1)$), while a BST is good at all finds ($O(\log n)$). Insert is $O(\log n)$ for both structures. If you only care about findMin/findMax (e.g. priority-related), go with a heap. If you want everything sorted, go with a BST. Source:
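A minimal stdlib illustration of this trade-off (Python's `heapq` is a binary min-heap; the standard library has no BST, so that side is only described in comments):

```python
import heapq

data = [7, 2, 9, 4, 1, 8]

h = list(data)
heapq.heapify(h)              # O(n) build

# findMin is O(1): the heap property puts the minimum at the root.
assert h[0] == min(data)

# But the heap's backing array is not sorted; the heap property only
# orders each parent relative to its own children. To get everything in
# order you must pop repeatedly, O(n log n) total, whereas a BST gives
# you a sorted in-order walk in O(n) once the tree is built.
in_order = [heapq.heappop(h) for _ in range(len(h))]
assert in_order == sorted(data)
```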
Classic Torgerson's metric MDS is actually done by transforming distances into similarities and performing PCA (eigen-decomposition or singular-value decomposition) on those. [The other name of this procedure (distances between objects -> similarities between them -> PCA, whereby loadings are the sought-for coordinates) is principal coordinate analysis, or PCoA.] So, PCA might be called the algorithm of the simplest MDS.

Non-metric MDS is based on the iterative ALSCAL or PROXSCAL algorithm (or algorithms similar to them), which is a more versatile mapping technique than PCA and can be applied to metric MDS as well. While PCA retains the m important dimensions for you, ALSCAL/PROXSCAL fits the configuration to m dimensions (you pre-define m), and it reproduces dissimilarities on the map more directly and accurately than PCA usually can (see the illustration section below).

Thus, MDS and PCA are probably not at the same level to be in line or opposite to each other. PCA is just a method, while MDS is a class of analyses. As mapping, PCA is a particular case of MDS. On the other hand, PCA is a particular case of factor analysis which, being a data reduction, is more than only a mapping, while MDS is only a mapping.

As for your question about metric MDS vs. non-metric MDS, there's little to comment on because the answer is straightforward. If I believe my input dissimilarities are so close to being Euclidean distances that a linear transform will suffice to map them in m-dimensional space, I will prefer metric MDS. If I don't believe this, then a monotonic transform is necessary, implying the use of non-metric MDS.

A note on terminology for a reader. The term classic(al) MDS (CMDS) can have two different meanings in the vast literature on MDS, so it is ambiguous and should be avoided. One definition is that CMDS is a synonym of Torgerson's metric MDS. Another definition is that CMDS is any MDS (by any algorithm; metric or non-metric analysis) with single-matrix input (for there exist models analyzing many matrices at once: the individual "INDSCAL" model and the replicated model).

Illustration to the answer. Some cloud of points (an ellipse) is being mapped on a one-dimensional MDS map. A pair of points is shown in red dots.

Iterative or "true" MDS aims straight at reconstructing pairwise distances between objects, for that is the task of any MDS. Various stress or misfit criteria could be minimized between the original distances and the distances on the map: $\|D_o - D_m\|_2^2$, $\|D_o^2 - D_m^2\|_1$, $\|D_o - D_m\|_1$. An algorithm may (non-metric MDS) or may not (metric MDS) include a monotonic transformation along the way.

PCA-based MDS (Torgerson's, or PCoA) is not straight. It minimizes the squared distances between objects in the original space and their images on the map. This is not quite the genuine MDS task; it is successful, as MDS, only to the extent to which the discarded junior principal axes are weak. If $P_1$ explains much more variance than $P_2$, the former can alone substantially reflect pairwise distances in the cloud, especially for points lying far apart along the ellipse. Iterative MDS will always win, especially when the map is wanted very low-dimensional. Iterative MDS, too, will succeed more when the cloud ellipse is thin, but it will fulfill the MDS task better than PCoA. By the property of the double-centration matrix (described here), it appears that PCoA minimizes $\|D_o\|_2^2 - \|D_m\|_2^2$, which is different from any of the above minimizations.

Once again, PCA projects the cloud's points onto the most advantageous all-corporal-saving subspace. It does not project pairwise distances, the relative locations of points, onto a subspace most saving in that respect, as iterative MDS does. Nevertheless, historically PCoA/PCA is considered among the methods of metric MDS.
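The claim that Torgerson's PCoA and PCA coincide when the input dissimilarities really are Euclidean distances can be checked numerically. A sketch with NumPy (the toy data, seed, and tolerances are my own choices):

```python
import numpy as np

# Toy data: a centered cloud of 20 points in 3 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
X -= X.mean(axis=0)

# Pairwise squared Euclidean distances.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# Torgerson double centering: B = -1/2 * J D2 J, with J = I - 11'/n.
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J

# Eigen-decompose B; top-2 scaled eigenvectors are the PCoA map.
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
coords = V[:, idx] * np.sqrt(w[idx])

# PCA map: project X onto its top-2 principal axes via SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pca = U[:, :2] * s[:2]

# Same configuration, up to an arbitrary sign flip per axis.
for k in range(2):
    assert np.allclose(np.abs(coords[:, k]), np.abs(pca[:, k]), atol=1e-8)
```

This works because, for centered data, $B = XX^\top$, whose top eigenvectors scaled by the square roots of the eigenvalues are exactly the PCA scores; with genuinely non-Euclidean dissimilarities the equivalence breaks down, which is where the iterative algorithms earn their keep.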
The starch forms a loosely bonded network that traps water vapor and air into a foamy mass, which expands rapidly as it heats up.

Starch is made of glucose polymers (amylopectin is one of them, shown here): some of the chains are branched, some are linear, but they all have $\ce{-OH}$ groups which can form hydrogen bonds with each other. Let's follow some starch molecules through the process and see what happens.

In the beginning, the starch is dehydrated and tightly compacted; the chains are lined up in nice orderly structures with no water or air between them, maximizing the hydrogen bonds between starch polymers.

As the water heats up (or as you let the pasta soak), water molecules begin to "invade" the tightly packed polymer chains, forming their own hydrogen bonds with the starch. Soon, the polymer chains are completely surrounded by water and are free to move in solution (they have dissolved).

However, the water/starch solution is not completely uniform. In the middle of the pot of water, the concentration of starch is low compared to water: there are lots and lots of water molecules available to surround the starch chains and keep them apart. Near the surface, when the water is boiling, the water molecules escape as vapor. This means that near the surface, the local concentration of starch increases. It increases so much as the water continues to boil that the starch can collapse back in on itself and hydrogen bond to other starch molecules again. However, this time the orderly structure is broken, and there is too much thermal motion to line up. Instead, the chains form a loosely packed network of molecules connected by hydrogen bonds, surrounding little pockets of water and air (bubbles).

This network is very weak, but it is strong enough to temporarily trap the air as it expands due to heating; thus, the bubbles puff up and a rapidly growing foam forms. Since the networks are very weak, it doesn't take much to disrupt them. Some oil in the water will inhibit the bubbles from breaking the surface as easily, and a wooden spoon across the top will break the network mechanically as soon as it touches it.

Many biomolecules will form these types of networks under different conditions. For example, gelatin is a protein (an amino acid polymer) that will form elastic hydrogen-bonded networks in hot water. As the gelatin-water mixture cools, the gel solidifies, trapping the water inside to form what is called a sol-gel, or more specifically, a hydrogel. Gluten in wheat is another example, although in this case the bonds are disulfide bonds. Gluten networks are stronger than hydrogen-bonded polysaccharide networks, and are responsible for the elasticity of bread (and of pasta).

Disclaimer: the pictures are not remotely to scale; starch is usually several hundred glucose monomers long, and the relative sizes of the molecules and atoms aren't shown. There aren't nearly enough water molecules (in reality there would be far too many, thousands, to be able to see the polymer). The starch molecules aren't "twisty" enough and don't show things like branching; the real network structure and conformations in solution would be much more complicated. But hopefully you get the idea!
Let me offer one reason and one misconception as an answer to your question.

The main reason that it is easier to write (seemingly) correct mathematical proofs is that they are written at a very high level. Suppose that you could write a program like this:

    function maximumWindow(A, n, w):
        using a sliding window, calculate (in O(n)) the sums of all length-w windows
        return the maximum sum (be smart and use only O(1) memory)

It would be much harder to go wrong when programming this way, since the specification of the program is much more succinct than its implementation. Indeed, every programmer who tries to convert pseudocode to code, especially to efficient code, encounters this large chasm between the idea of an algorithm and its implementation details. Mathematical proofs concentrate more on the ideas and less on the details.

The real counterpart of code for mathematical proofs is computer-aided proofs. These are much harder to develop than the usual textual proofs, and one often discovers various hidden corners which are "obvious" to the reader (who usually doesn't even notice them), but not so obvious to the computer. Also, since the computer can only fill in relatively small gaps at present, the proofs must be elaborated to such a level that a human reading them will miss the forest for the trees.

An important misconception is that mathematical proofs are often correct. In fact, this is probably rather optimistic. It is very hard to write complicated proofs without mistakes, and papers often contain errors. Perhaps the most celebrated recent cases are Wiles' first attempt at (a special case of) the modularity theorem (which implies Fermat's last theorem), and various gaps in the classification of finite simple groups, including some 1000+ pages on quasithin groups which were written 20 years after the classification was supposedly finished.

A mistake in a paper of Voevodsky made him doubt written proofs so much that he started developing homotopy type theory, a logical framework useful for developing homotopy theory formally, and henceforth used a computer to verify all his subsequent work (at least according to his own admission). While this is an extreme (and at present, impractical) position, it is still the case that when using a result, one ought to go over the proof and check whether it is correct. In my area there are a few papers which are known to be wrong but have never been retracted, whose status is relayed from mouth to ear among experts.
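For contrast, here is one concrete implementation of that pseudocode; note how even this tiny spec forces decisions the pseudocode never mentions (my choice of returning `None` when `w` exceeds the array length is one of several reasonable conventions):

```python
# O(n)-time, O(1)-extra-memory version of the maximumWindow pseudocode.

def maximum_window(a, w):
    """Maximum sum over all length-w windows of a, or None if no window fits."""
    if w <= 0 or w > len(a):
        return None
    s = sum(a[:w])            # sum of the first window
    best = s
    for i in range(w, len(a)):
        s += a[i] - a[i - w]  # slide: add the new element, drop the old one
        best = max(best, s)
    return best

assert maximum_window([1, -2, 3, 4, -1, 2], 2) == 7   # the window (3, 4)
```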
You can prove it by explicitly calculating the conditional density by brute force, as in Procrastinator's link (+1) in the comments. But there's also a theorem that says all conditional distributions of a multivariate normal distribution are normal. Therefore, all that's left is to calculate the mean vector and covariance matrix. I remember we derived this in a time series class in college by cleverly defining a third variable and using its properties to derive the result more simply than the brute force solution in the link (as long as you're comfortable with matrix algebra). I'm going from memory, but it was something like this.

It is worth pointing out that the proof below only assumes that $\Sigma_{22}$ is nonsingular; $\Sigma_{11}$ and $\Sigma$ may well be singular.

Let ${\bf x}_1$ be the first partition and ${\bf x}_2$ the second. Now define ${\bf z} = {\bf x}_1 + {\bf A}{\bf x}_2$ where ${\bf A} = -\Sigma_{12}\Sigma_{22}^{-1}$. Now we can write

\begin{align*}
{\rm cov}({\bf z}, {\bf x}_2) &= {\rm cov}({\bf x}_1, {\bf x}_2) + {\rm cov}({\bf A}{\bf x}_2, {\bf x}_2) \\
&= \Sigma_{12} + {\bf A}\,{\rm var}({\bf x}_2) \\
&= \Sigma_{12} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{22} \\
&= 0
\end{align*}

Therefore ${\bf z}$ and ${\bf x}_2$ are uncorrelated and, since they are jointly normal, they are independent. Now, clearly $E({\bf z}) = {\boldsymbol\mu}_1 + {\bf A}{\boldsymbol\mu}_2$, therefore it follows that

\begin{align*}
E({\bf x}_1 \mid {\bf x}_2) &= E({\bf z} - {\bf A}{\bf x}_2 \mid {\bf x}_2) \\
&= E({\bf z} \mid {\bf x}_2) - E({\bf A}{\bf x}_2 \mid {\bf x}_2) \\
&= E({\bf z}) - {\bf A}{\bf x}_2 \\
&= {\boldsymbol\mu}_1 + {\bf A}({\boldsymbol\mu}_2 - {\bf x}_2) \\
&= {\boldsymbol\mu}_1 + \Sigma_{12}\Sigma_{22}^{-1}({\bf x}_2 - {\boldsymbol\mu}_2)
\end{align*}

which proves the first part. For the covariance matrix, note that

\begin{align*}
{\rm var}({\bf x}_1 \mid {\bf x}_2) &= {\rm var}({\bf z} - {\bf A}{\bf x}_2 \mid {\bf x}_2) \\
&= {\rm var}({\bf z} \mid {\bf x}_2) + {\rm var}({\bf A}{\bf x}_2 \mid {\bf x}_2) - {\bf A}\,{\rm cov}({\bf z}, -{\bf x}_2) - {\rm cov}({\bf z}, -{\bf x}_2){\bf A}' \\
&= {\rm var}({\bf z} \mid {\bf x}_2) \\
&= {\rm var}({\bf z})
\end{align*}

Now we're almost done:

\begin{align*}
{\rm var}({\bf x}_1 \mid {\bf x}_2) = {\rm var}({\bf z}) &= {\rm var}({\bf x}_1 + {\bf A}{\bf x}_2) \\
&= {\rm var}({\bf x}_1) + {\bf A}\,{\rm var}({\bf x}_2){\bf A}' + {\bf A}\,{\rm cov}({\bf x}_2, {\bf x}_1) + {\rm cov}({\bf x}_1, {\bf x}_2){\bf A}' \\
&= \Sigma_{11} + \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{22}\Sigma_{22}^{-1}\Sigma_{21} - 2\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \\
&= \Sigma_{11} + \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} - 2\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \\
&= \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}
\end{align*}

which proves the second part.

Note: for those not very familiar with the matrix algebra used here, this is an excellent resource.

Edit: One property used here that is not in the Matrix Cookbook (good catch @FlyingPig) is property 6 on the Wikipedia page about covariance matrices: for two random vectors $\bf x, y$,
$${\rm var}({\bf x} + {\bf y}) = {\rm var}({\bf x}) + {\rm var}({\bf y}) + {\rm cov}({\bf x}, {\bf y}) + {\rm cov}({\bf y}, {\bf x})$$
For scalars, of course, ${\rm cov}(x, y) = {\rm cov}(y, x)$, but for vectors they are different insofar as the matrices are arranged differently.
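As a quick numeric sanity check of the two formulas (not part of the original derivation), here is a small example with scalar (1x1) partitions, where the matrix expressions reduce to ordinary arithmetic; all the numbers are arbitrary choices for the sketch:

```python
# Scalar sanity check: with 1x1 partitions, Sigma = [[s11, s12], [s12, s22]],
# and the conditional formulas reduce to
#   E(x1 | x2)   = mu1 + (s12 / s22) * (x2 - mu2)
#   var(x1 | x2) = s11 - s12**2 / s22
mu1, mu2 = 1.0, -2.0
s11, s12, s22 = 4.0, 1.5, 2.0

a = -s12 / s22                   # the "A" from the proof
cov_z_x2 = s12 + a * s22         # cov(z, x2) = Sigma12 + A var(x2); should be 0

x2 = 1.0
cond_mean = mu1 + (s12 / s22) * (x2 - mu2)
cond_var = s11 - s12 ** 2 / s22

print(cov_z_x2)   # 0.0
print(cond_mean)  # 3.25
print(cond_var)   # 2.875
```

The vanishing of `cov_z_x2` is the key trick of the proof: it is what makes $z$ and $x_2$ independent in the jointly normal case.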
One approach to this is to use whatever data you have to iteratively update the reference genome. You can keep chain files along the way so you can convert coordinates (e.g. in GFF files) from the original reference to your new pseudoreference. A simple approach might be:

1. Align new data to the existing reference.
2. Call variants (e.g. samtools mpileup, GATK, or whatever is best for you).
3. Create a new reference incorporating the variants from step 2.
4. Rinse and repeat (i.e. go to step 1).

You can track some simple stats as you do this: the number of new variants should decrease, the number of reads mapped should increase, and the mismatch rate should decrease with every iteration of the above loop. Once the pseudoreference stabilises, you know you can't do much more.
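The reference-updating step (incorporating called variants into the sequence) can be sketched for the SNV-only case; the function name and the variant representation here are illustrative only, not any particular tool's API, and real pipelines (e.g. bcftools consensus) also handle indels and emit the chain files needed for coordinate lift-over:

```python
def apply_variants(ref, variants):
    """Build a new pseudoreference by applying SNVs to `ref`.

    `variants` maps 0-based positions to alternate bases. Indels are
    omitted to keep the sketch simple; with SNVs only, coordinates are
    unchanged, so no chain file is needed for this toy case.
    """
    seq = list(ref)
    for pos, alt in variants.items():
        seq[pos] = alt
    return "".join(seq)

# Toy example: two SNVs relative to a short reference.
ref = "ACGTACGTAC"
print(apply_variants(ref, {2: "T", 7: "A"}))  # ACTTACGAAC
```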
Homological algebra. Let $A, B$ be abelian groups (or more generally objects of an abelian category) and consider the set of isomorphism classes of abelian groups $C$ together with an exact sequence $0 \to B \to C \to A \to 0$ (extensions of $A$ by $B$). It turns out that this set has a canonical group structure (isn't that surprising?!), namely the Baer sum, and that this group is isomorphic to $\mathrm{Ext}^1(A, B)$. This is also quite helpful for classifying extensions for specific $A$ and $B$, since $\mathrm{Ext}$ has two long exact sequences. For details, see Weibel's book on homological algebra, chapter 3. Similarly, many obstructions in deformation theories are encoded in certain abelian groups.

Combinatorial game theory. A two-person game is called combinatorial if no chance is involved and the ending condition holds, so that in each case one of the two players wins. Each player has a set of possible moves, each one resulting in a new game. There is a notion of equivalent combinatorial games. It turns out that the equivalence classes of combinatorial games can be made into a (large) group. The zero game $0$ is the game where no moves are available. A move in the sum $G + H$ of two games $G, H$ is just a move in exactly one of $G$ or $H$. The inverse $-G$ of a game $G$ is the one where the possible moves for the two players are swapped. The equation $G + (-G) = 0$ requires a proof. An important subgroup is the class of impartial games, where the same moves are available for both players (or equivalently $G = -G$). This extra structure already suffices to solve many basic combinatorial games, such as Nim. In fact, one of the first results in combinatorial game theory is that the (large) group of impartial combinatorial games is isomorphic to the ordinal numbers $\mathbf{On}$ with a certain group law $\oplus$, called the nim-sum (different from the usual ordinal addition). This identification is given by the nimber. This makes it possible
to reduce complicated games to simpler ones, in fact in theory to a trivial one-pile Nim game. Even the restriction to finite ordinal numbers gives an interesting group law on the set of natural numbers $\mathbb{N}$ (see Jyrki's answer). All this can be found in the fantastic book Winning Ways... by Conway, Berlekamp, Guy, and in Conway's On Numbers and Games. A more formal introduction can be found in this paper by Schleicher, Stoll. There you also learn that (certain) combinatorial games actually constitute a (large) totally ordered field, containing the real numbers as well as the ordinal numbers. You couldn't have guessed this rich structure from their definition, right?

Algebraic topology. If $X$ is a based space, the set of homotopy classes of pointed maps $S^n \to X$ has a group structure; this is the $n$th homotopy group $\pi_n(X)$ of $X$. For $n = 1$ the group structure is quite obvious, since we can compose paths and traverse paths backwards. But at first sight it is not obvious that we can do something like that in higher dimensions. Essentially this comes down to the cogroup structure of $S^n$. There is a nice geometric proof that $\pi_n(X)$ is abelian for $n > 1$.
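For finite Nim positions the nim-sum is just the bitwise XOR of the pile sizes, and a position is a loss for the player to move (under perfect play) exactly when the nim-sum is zero. A minimal sketch (the function names are mine, not from any of the cited books):

```python
from functools import reduce

def nim_sum(piles):
    """Nim-sum of a list of pile sizes: bitwise XOR, the group law
    on impartial games restricted to finite nimbers."""
    return reduce(lambda a, b: a ^ b, piles, 0)

def is_losing_position(piles):
    """The player to move loses (with perfect play) iff the nim-sum is 0."""
    return nim_sum(piles) == 0

print(nim_sum([1, 2, 3]))             # 0, since 1 ^ 2 ^ 3 = 0
print(is_losing_position([1, 4, 5]))  # True: 1 ^ 4 ^ 5 = 0
print(is_losing_position([2, 4, 5]))  # False: 2 ^ 4 ^ 5 = 3
```

Note that XOR restricted to $\{0, 1\}$ is addition mod 2, so every position is its own inverse, which is the $G = -G$ property of impartial games mentioned above.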
Causality indicates that information only flows forward in time, and algorithms should be designed to exploit this fact. Time stepping schemes do this, whereas global-in-time spectral methods and other such ideas do not. The question is of course why everyone insists on exploiting this fact, but that's easy to understand: if your spatial problem already has a million unknowns and you need to do 1000 time steps, then on a typical machine today you have enough resources to solve the spatial problem by itself one time step after the other, but you don't have enough resources to deal with a coupled problem of $10^9$ unknowns. The situation is really not very different from what you have with spatial discretizations of transport phenomena. Sure, you can discretize a pure 1d advection equation using a globally coupled approach. But if you care about efficiency, then by far the best approach is to use a downstream sweep that carries information from the inflow to the outflow part of the domain. That's exactly what time stepping schemes do in time.
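A minimal illustration of such a sweep (explicit first-order upwind for $u_t + a u_x = 0$ with $a > 0$; the grid sizes and inflow value are arbitrary choices for the sketch): each time step only reads the previous step's values, so the storage is one spatial slice rather than the whole space-time problem.

```python
def upwind_step(u, a, dt, dx, inflow):
    """One explicit upwind step for u_t + a*u_x = 0 with a > 0.
    Information travels left to right, so each cell only needs its
    upstream neighbour from the previous time level."""
    c = a * dt / dx  # CFL number; need c <= 1 for stability
    new = [inflow]   # boundary value carried in from the inflow side
    for i in range(1, len(u)):
        new.append(u[i] - c * (u[i] - u[i - 1]))
    return new

# With c = 1 the scheme is exact: a profile just shifts one cell per step.
print(upwind_step([0.0, 1.0, 0.0], 1.0, 0.1, 0.1, 0.0))  # [0.0, 0.0, 1.0]

# A constant state is transported unchanged.
u = [1.0] * 8
for _ in range(10):
    u = upwind_step(u, a=1.0, dt=0.1, dx=0.1, inflow=1.0)
print(u)  # still all 1.0
```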
Short answer: yes, men's and women's brains are different before birth.

Background: First off, learning effects versus genetic differences is the familiar nature versus nurture issue. Several genes on the Y chromosome, unique to males, are expressed in the prenatal brain. In fact, about a third of the genes on the Y chromosome are expressed in the male prenatal brain (Reinius & Jazin, 2009). Hence, there are substantial genetic differences between male and female brains. Importantly, the male testes start producing testosterone in the developing fetus, and the female hormones have effects on the brain opposing those of testosterone. In neural regions with appropriate receptors, testosterone influences patterns of cell death and survival, neural connectivity, and neurochemical composition. In turn, while recognizing that postnatal behavior is subject to parenting and other influences, prenatal testosterone may affect play behaviors between males and females, whereas influences on sexual orientation appear to be less dramatic (Hines, 2006). The question is quite broad, and I would start with the cited review articles below or, if need be, the Wikipedia page on the neuroscience of sex differences.

References:
- Hines, Eur J Endocrinol (2006); 155: S115-21
- Reinius & Jazin, Molecular Psychiatry (2009); 14: 988-9
Voltage rating

If a device says it needs a particular voltage, then you have to assume it needs that voltage. Both lower and higher could be bad.

At best, with lower voltage the device will not operate correctly in an obvious way. However, some devices might appear to operate correctly, then fail in unexpected ways under just the right circumstances. When you violate required specs, you don't know what might happen. Some devices can even be damaged by too low a voltage for extended periods of time. If the device has a motor, for example, then the motor might not be able to develop enough torque to turn, so it just sits there getting hot. Some devices might draw more current to compensate for the lower voltage, but the higher than intended current can damage something. Most of the time, lower voltage will just make a device not work, but damage can't be ruled out unless you know something about the device.

Higher than specified voltage is definitely bad. Electrical components all have voltages above which they fail. Components rated for higher voltage generally cost more or have less desirable characteristics, so picking the right voltage tolerance for the components in the device probably got significant design attention. Applying too much voltage violates the design assumptions. Some level of too much voltage will damage something, but you don't know where that level is. Take what a device says on its nameplate seriously and don't give it more voltage than that.

Current rating

Current is a bit different. A constant-voltage supply doesn't determine the current: the load, which in this case is the device, does. If Johnny wants to eat two apples, he's only going to eat two whether you put 2, 3, 5, or 20 apples on the table. A device that wants 2 A of current works the same way. It will draw 2 A whether the power supply can only provide the 2 A, or whether it could have supplied 3, 5, or 20 A. The current rating of a supply is what it can deliver, not what it will always force through the load somehow. In that sense, unlike with voltage, the current rating of a power supply must be at least what the device wants, but there is no harm in it being higher. A 9 volt 5 amp supply is a superset of a 9 volt 2 amp supply, for example.

Replacing an existing supply

If you are replacing a previous power supply and don't know the device's requirements, then consider that power supply's rating to be the device's requirements. For example, if an unlabeled device was powered from a 9 V and 1 A supply, you can replace it with a 9 V and 1 or more amp supply.

Advanced concepts

The above gives the basics of how to pick a power supply for some device. In most cases that is all you need to know to go to a store or online and buy a power supply. If you're still a bit hazy on what exactly voltage and current are, it's probably better to quit now. This section goes into more power supply details that generally don't matter at the consumer level, and it assumes some basic understanding of electronics.

Regulated versus unregulated

Unregulated: Very basic DC power supplies, called unregulated, just step down the input AC (generally the DC you want is at a much lower voltage than the wall power you plug the supply into), rectify it to produce DC, add an output cap to reduce ripple, and call it a day. Years ago, many power supplies were like that. They were little more than a transformer, four diodes making a full wave bridge (takes the absolute value of voltage electronically), and the filter cap. In these kinds of supplies, the output voltage is dictated by the turns ratio of the transformer. This is fixed, so instead of making a fixed output voltage, their output is mostly proportional to the input AC voltage. For example, such a "12 V" DC supply might make 12 V at 110 VAC in, but then would make over 13 V at 120 VAC in.

Another issue with unregulated supplies is that the output voltage is not only a function of the input voltage, but will also fluctuate with how much current is being drawn from the supply. An unregulated "12 volt 1 amp" supply is probably designed to provide the rated 12 V at full output current and the lowest valid AC input voltage, like 110 V. It could be over 13 V at 110 V in at no load (0 amps out) alone, and then higher yet at higher input voltage. Such a supply could easily put out 15 V, for example, under some conditions. Devices that needed the "12 V" were designed to handle that, so that was fine.

Regulated: Modern power supplies don't work that way anymore. Pretty much anything you can buy as consumer electronics will be a regulated power supply. You can still get unregulated supplies from more specialized electronics suppliers aimed
at manufacturers, professionals, or at least hobbyists that should know the difference. For example, Jameco has a wide selection of power supplies. Their wall warts are specifically divided into regulated and unregulated types. However, unless you go poking around where the average consumer shouldn't be, you won't likely run into unregulated supplies. Try asking for an unregulated wall wart at a consumer store that sells other stuff too, and they probably won't even know what you're talking about.

A regulated supply actively controls its output voltage. These contain additional circuitry that can tweak the output voltage up and down. This is done continuously to compensate for input voltage variations and variations in the current the load is drawing. A regulated 12 volt 1 amp power supply, for example, is going to put out pretty close to 12 V over its full AC input voltage range and as long as you don't draw more than 1 A from it.

Universal input: Since there is circuitry in the supply to tolerate some input voltage fluctuations, it's not much harder to make the valid input voltage range wider and cover any valid wall power found anywhere in the world. More and more supplies are being made like that, and are called universal input. This generally means they can run from 90-240 V AC, at 50 or 60 Hz.

Minimum load: Some power supplies, generally older switchers, have a minimum load requirement. This is usually 10% of full rated output current. For example, a 12 volt 2 amp supply with a minimum load requirement of 10% isn't guaranteed to work right unless you load it with at least 200 mA. This restriction is something you're only going to find in OEM models, meaning the supply is designed and sold to be embedded into someone else's equipment where the right kind of engineer will consider this issue carefully. I won't go into this more since this isn't going to come up on a consumer power supply.

Current limit: All supplies have some maximum current they can provide and still stick to the remaining specs. For a "12 volt 1 amp" supply, that means all is fine as long as you don't try to draw more than the rated 1 A. There are various things a supply can do if you try to exceed the 1 A rating. It could simply blow a fuse. Specialty OEM supplies that are stripped down for cost could catch fire or vanish into a greasy cloud of black
smoke. However, nowadays, the most likely response is that the supply will drop its output voltage to whatever is necessary to not exceed the output current. This is called current limiting. Often the current limit is set a little higher than the rating to provide some margin. The "12 V 1 A" supply might limit the current to 1.1 A, for example. A device that is trying to draw the excessive current probably won't function correctly, but everything should stay safe, not catch fire, and recover nicely once the excessive load is removed.

Ripple: No supply, even a regulated one, can keep its output voltage exactly at the rating. Usually, due to the way the supply works, there will be some frequency at which the output oscillates a little, or ripples. With unregulated supplies, the ripple is a direct function of the input AC. Basic transformer unregulated supplies fed from 60 Hz AC will generally ripple at 120 Hz, for example. The ripple of unregulated supplies can be fairly large; to abuse the 12 volt 1 amp example again, the ripple could easily be a volt or two at full load (1 A output current). Regulated supplies are usually switchers and therefore ripple at the switching frequency. A regulated 12 V 1 A switcher might ripple ±50 mV at 250 kHz, for example. The maximum ripple might not be at maximum output current.
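The load-determines-current and current-limiting behaviour described above can be sketched with Ohm's law for a purely resistive load (the function and numbers are illustrative only, not a model of any particular supply):

```python
def supply_output(v_set, i_limit, r_load):
    """Model a regulated supply driving a resistive load.
    The load draws v/r; if that would exceed the current limit,
    the supply drops its voltage so the current stays at i_limit."""
    i_demand = v_set / r_load
    if i_demand <= i_limit:
        return v_set, i_demand          # normal regulation
    return i_limit * r_load, i_limit    # current limiting kicks in

# A 9 V, 2 A supply with a light load: the load only "eats" 1 A.
print(supply_output(9.0, 2.0, 9.0))  # (9.0, 1.0)
# Too heavy a load: the voltage sags so the current stays at 2 A.
print(supply_output(9.0, 2.0, 3.0))  # (6.0, 2.0)
```

The first case is the apples-on-the-table point: extra current capability is simply unused. The second shows why a current-limited supply stays safe instead of burning up.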
You can do this easily with bioawk, which is a version of awk with added features facilitating bioinformatics:

bioawk -c fastx '{ print $name "\t0\t" length($seq) }' test.fa

-c fastx tells the program that the data should be parsed as FASTA or FASTQ format. This makes the $name and $seq variables available in the awk commands.
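If bioawk isn't available, the same name/0/length output can be produced with a few lines of plain Python. This sketch assumes simple FASTA input and is my own helper, not any library's API:

```python
def fasta_to_bed(fasta_text):
    """Yield 'name<TAB>0<TAB>length' lines for each FASTA record,
    taking the record name as the first word after '>'."""
    name, length = None, 0
    for line in fasta_text.splitlines():
        if line.startswith(">"):
            if name is not None:
                yield f"{name}\t0\t{length}"
            name, length = line[1:].split()[0], 0
        else:
            length += len(line.strip())
    if name is not None:
        yield f"{name}\t0\t{length}"

fasta = ">chr1 description\nACGT\nACG\n>chr2\nTTTT\n"
print("\n".join(fasta_to_bed(fasta)))
# chr1	0	7
# chr2	0	4
```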
Menthol itself gives a cold feeling in the mouth because it is active at the same receptor (an ion channel) on the tongue that cold temperature triggers. Interestingly, although they act at the same receptor, they act at different sites, and that provides the intensified response when eating a mint and then drinking water. This reference gives an excellent detailed answer, with references to the original papers, which I'll summarize here.

Menthol acts at the TRPM8 protein, which forms an ion channel that allows $\ce{Na+}$ and $\ce{Ca^2+}$ ions to flow into cells, and this sends a signal saying "cool" to the brain. (As an aside, this protein monitors temperature across the body and not just on the tongue.) Cold temperatures actually change the conformation of this protein, which allows the ions to flow more freely and sends the signal to the brain. Menthol, on the other hand, stabilizes the open channel (allowing ions to flow even more freely) and also "shifts the voltage dependence of channel activation to more negative values by slowing channel deactivation". This is very significant to my question because it supports a claim made by the first web page I visited, which stated that menthol acts on the receptors, leaving them sensitized for when the second stimulus is applied (i.e. cold water), resulting in the enhanced sensation. This mechanism of binding is clearly different from the mechanism of cold affecting the TRP channels. This is why the sensation is increased when both stimuli are applied, yet is not affected after additional stimulation from the same stimulus (i.e. eating another mint). All in all, pretty cool.
Some general information on side-chain oxidation in alkylbenzenes is available at Chemguide:

An alkylbenzene is simply a benzene ring with an alkyl group attached to it. Methylbenzene is the simplest alkylbenzene. Alkyl groups are usually fairly resistant to oxidation. However, when they are attached to a benzene ring, they are easily oxidised by an alkaline solution of potassium manganate(VII) (potassium permanganate). Methylbenzene is heated under reflux with a solution of potassium manganate(VII) made alkaline with sodium carbonate. The purple colour of the potassium manganate(VII) is eventually replaced by a dark brown precipitate of manganese(IV) oxide. The mixture is finally acidified with dilute sulfuric acid. Overall, the methylbenzene is oxidised to benzoic acid. Interestingly, any alkyl group is oxidised back to a -COOH group on the ring under these conditions. So, for example, propylbenzene is also oxidised to benzoic acid.

Regarding the mechanism, a Ph.D. student at the University of British Columbia did his doctorate on the mechanisms of permanganate oxidation of various organic substrates.1 Quoting from the abstract:

It was found that the most vigorous oxidant was the permanganyl ion ($\ce{MnO3+}$), with some contributing oxidation by both permanganic acid ($\ce{HMnO4}$) and permanganate ion ($\ce{MnO4-}$) in the case of easily oxidized compounds such as alcohols, aldehydes, or enols.

The oxidation of toluene to benzoic acid was one of the reactions investigated, and a proposed reaction mechanism (on pp 137-8) was as follows. In the slow step, the active oxidant $\ce{MnO3+}$ abstracts a benzylic hydrogen from the organic substrate.

$$\begin{align}
\ce{2H+ + MnO4- &<=> MnO3+ + H2O} && \text{(fast)} \\
\ce{MnO3+ + PhCR2H &-> [PhCR2^. + HMnO3+]} && \text{(slow)} \\
\ce{[PhCR2^. + HMnO3+] &-> PhCR2OH + Mn^V} && \text{(fast)} \\
\ce{PhCR2OH + Mn^{VII} &-> aldehyde or ketone} && \text{(fast)} \\
\ce{aldehyde + Mn^{VII} &-> benzoic acid} && \text{(fast)} \\
\ce{ketone + Mn^{VII} &-> benzoic acid} && \text{(slow)} \\
\ce{5Mn^V &-> 2Mn^{II} + 3Mn^{VII}} && \text{(fast)}
\end{align}$$

The abstraction of a benzylic hydrogen atom is consistent with the fact that arenes with no benzylic hydrogens, such as tert-butylbenzene, do not get oxidised.

Reference

1. Spitzer, U. A. The Mechanism of Permanganate Oxidation of Alkanes, Arenes and Related Compounds. Ph.D. Thesis, The University of British Columbia, November 1972. DOI: 10.14288/1.0060242.
Short answer: your confusion about whether ten is special may come from reading aloud "every base is base 10" as "every base is base ten". This is wrong; not every base is base ten, only base ten is base ten. It is a joke that works better in writing. If you want to read it aloud, you should read it as "every base is base one-zero".

You must distinguish between numbers and representations. A pile of rocks has some number of rocks; this number does not depend on what base you use. A representation is a string of symbols, like "10", and depends on the base. There are "four" rocks in the cartoon, whatever the base may be. (Well, the word "four" may vary with language, but the number is the same.) But the representation of this number "four" may be "4" or "10" or "11" or "100" depending on what base is used.

The number "ten" (the number of dots in "..........") is not mathematically special. In different bases it has different representations: in base ten it is "10", in base six it is "14", etc. The representation "10" (one-zero) is special: whatever your base is, this representation denotes that number. For base $b$, the representation "10" means $1 \times b + 0 = b$.

When we consider the base ten that we normally use, then "ten" is by definition the base for this particular representation, so it is in that sense "special" for this representation. But this is only an artefact of the base-ten representation. If we were using the base-six representation, then the representation "10" would correspond to the number six, so six would be special in that sense, for that representation.
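A small conversion routine makes the point concrete: in every base $b$, the number $b$ itself comes out as the string "10" (the helper below is my own illustration):

```python
def to_base(n, b):
    """Represent the non-negative integer n as a digit string in base b
    (2 <= b <= 10, so single-character digits suffice)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % b))
        n //= b
    return "".join(reversed(digits))

print(to_base(10, 10))  # '10'  -> ten, written in base ten
print(to_base(10, 6))   # '14'  -> ten, written in base six
print(to_base(6, 6))    # '10'  -> "10" names the base, in every base
print(to_base(4, 2))    # '100' -> four rocks, in the cartoon's base two
```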