Short answer: As far as I know, a complete neural map (a connectome) is only available for the roundworm C. elegans, a nematode with only 302 neurons (Fig. 1). Fig. 1. C. elegans (left, size: ~1 mm) and connectome of C. elegans (right). Sources: Utrecht University & Farber (2012). Background: Looking at the least complex of animals will be your best bet, and nematodes (roundworms) like Caenorhabditis elegans are definitely a good option. C. elegans has some 300 neurons. Below is a schematic of phyla in Fig. 2. You mention insects; these critters are much more complex than roundworms. The total number of neurons varies with each insect, but for comparison: one of the less complex insects, the fruit fly Drosophila, already has around 100k neurons, while a regular honey bee has about one million (source: Bio Teaching). Complexity of the organism is indeed an indicator of the number of neurons to be expected. Sponges, for instance (Fig. 2), have no neurons at all, so the least complex of animals won't help you. Next in line are the Cnidaria (Fig. 2). The Cnidaria include the jellyfish; for example, Hydra vulgaris has 5.6k neurons. So why do jellyfish feature more neurons? Because size also matters: Hydra vulgaris can grow up to 15 mm, while C. elegans grows only up to 1 mm. See the Wikipedia page for an informative list of the number of neurons in a host of species. A decent neuronal connectivity map (a connectome) only exists for C. elegans (Fig. 1) as far as I know, although other maps (Drosophila (Meinertzhagen, 2016) and human) are underway. References: Farber, Sci Am, February 2012; Meinertzhagen, J Neurogenet (2016); 30(2): 62-8. Fig. 2. Phyla within the kingdom Animalia. Source: Southwest Tennessee University College.
|
https://api.stackexchange.com
|
Every simple closed curve that you can draw by hand will pass through the corners of some square. The question was asked by Toeplitz in 1911, and was only partially answered in 1989 by Stromquist. As of now, the answer is only known to be positive for the curves that can be drawn by hand (i.e., the curves that are piecewise the graph of a continuous function). I find the result beyond my intuition. For details, see the source site (the figure is also borrowed from it).
|
https://api.stackexchange.com
|
A question that requires quite a lot of guts to ask on this site :) Nonetheless, and risking sparking a debate, there are a few arguments that spring to (my!) mind that can support the notion that we thrive better in 'day mode' (i.e., photopic conditions). To start with a controversial assumption: humans are diurnal animals, meaning we are probably, but arguably, best adapted to photopic (a lot of light) conditions. A safer and less philosophical way to approach your question is by looking at the physiology and anatomy of the photosensitive organ of humans, i.e., the retina. The photosensitive cells in the retina are the rods and cones. Photopic conditions favor cone receptors, which mediate the perception of color. Scotopic (little light) conditions favor rod activity; rods are much more sensitive to photons, but operate on a gray scale only. The highest density of photoreceptors is found in the macular region, which is stacked with cones and confers high-acuity color vision. The periphery of the retina contains mostly rods, which mediate low visual acuity only. Since the highest densities of photoreceptors are situated at the most important spot, located at approximately 0 degrees, i.e., our point of focus, and since these are mainly cones, we apparently are best adapted to photopic conditions (Kolb, 2012). An evolutionary approach would be to start with the fact that (most) humans are trichromats (barring folks with some sort of color blindness), meaning we synthesize our color palette using 3 cone receptors sensitive to red (long wavelength), green (intermediate) and blue (short). Humans are thought to have evolved from apes, and those apes are thought to have been dichromats, which have only a long/intermediate cone and a blue cone. It has been put forward that the splitting of the long/intermediate cone in our ape ancestors into separate red and green cones was favorable because we could better distinguish ripe from unripe fruits.
Since cones operate in the light, we apparently were selected for cone activity and thus photopic conditions (Bompas et al., 2013). Literature: Bompas et al., i-Perception (2013); 4(2): 84-94. Kolb, Webvision - The Organization of the Retina and Visual System (2012), Moran Eye Center. Further reading:
|
https://api.stackexchange.com
|
Why does a light object appear lighter in your peripheral vision when it's dark?
|
https://api.stackexchange.com
|
When you add salt to an ice cube, you end up with an ice cube whose temperature is above its melting point. This ice cube will do what any ice cube above its melting point will do: it will melt. As it melts, it cools down, since energy is being used to break bonds in the solid state. (Note that the above point can be confusing if you're new to thinking about phase transitions. An ice cube melting will take up energy, while an ice cube freezing will give off energy. I like to think of it in terms of Le Chatelier's principle: if you need to lower the temperature to freeze an ice cube, this means that the water gives off heat as it freezes.) The cooling you get, therefore, comes from the fact that some of the bonds in the ice are broken to form water, taking energy with them. The loss of energy from the ice cube is what causes it to cool.
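The energy bookkeeping above can be made concrete with a rough calculation (the latent heat and heat capacity figures are standard textbook values; the masses are just for illustration):

```python
# Rough energy bookkeeping for melting ice (illustrative masses;
# latent heat and heat capacity are standard textbook values).
L_FUSION = 334.0      # J/g, enthalpy of fusion of water ice
C_WATER = 4.18        # J/(g*K), specific heat of liquid water

def temperature_drop(melted_grams, brine_grams):
    """Temperature drop of the surrounding brine when `melted_grams`
    of ice melt, taking their latent heat from `brine_grams` of liquid."""
    energy_absorbed = melted_grams * L_FUSION          # J taken from surroundings
    return energy_absorbed / (brine_grams * C_WATER)   # K of cooling

# Melting just 5 g of ice can cool 100 g of brine by ~4 K:
print(round(temperature_drop(5, 100), 1))
```

This shows why the effect is strong: breaking the bonds of a few grams of ice soaks up enough energy to chill a much larger amount of liquid.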
|
https://api.stackexchange.com
|
Your image doesn't have uniform brightness, so you shouldn't work with a uniform threshold. You need an adaptive threshold. This can be implemented by preprocessing the image to make the brightness more uniform across the image (code written in Mathematica; you'll have to implement the MATLAB version yourself). A simple way to make the brightness uniform is to remove the actual text from the image using a closing filter:

white = Closing[src, DiskMatrix[5]]

The filter size should be chosen larger than the font stroke width and smaller than the size of the stains you're trying to remove. Edit: I was asked in the comments to explain what a closing operation does. It's a morphological dilation followed by a morphological erosion. The dilation essentially moves the structuring element to every position in the image and picks the brightest pixel under the mask, thus: removing dark structures smaller than the structuring element; shrinking larger dark structures by the size of the structuring element; enlarging bright structures. The erosion operation does the opposite (it picks the darkest pixel inside the structuring element), so if you apply it to the dilated image: the dark structures that were removed because they're smaller than the structuring element are still gone; the darker structures that were shrunk are enlarged again to their original size (though their shape will be smoother); the bright structures are reduced to their original size. So the closing operation removes small dark objects with only minor changes to larger dark objects and bright objects. Here's an example with different structuring element sizes: as the size of the structuring element increases, more and more of the characters are removed. At radius = 5, all of the characters are removed. If the radius is increased further, the smaller stains are removed, too. Now you just divide the original image by this "white image" to get an image of (nearly) uniform brightness:

whiteAdjusted = Image[ImageData[src] / ImageData[white] * 0.85]

This image can now be binarized with a constant threshold:

Binarize[whiteAdjusted, 0.6]
|
https://api.stackexchange.com
|
Your understanding is correct. If you sample at rate $f_s$, then with real samples only, you can unambiguously represent frequency content in the region $[0, \frac{f_s}{2})$ (although the caveat that allows bandpass sampling still applies). No additional information can be held in the other half of the spectrum when the samples are real, because real signals exhibit conjugate symmetry in the frequency domain; if your signal is real and you know its spectrum from $0$ to $\frac{f_s}{2}$, then you can trivially conclude what the other half of its spectrum is. There is no such restriction for complex signals, so a complex signal sampled at rate $f_s$ can unambiguously contain content from $-\frac{f_s}{2}$ to $\frac{f_s}{2}$ (for a total bandwidth of $f_s$). As you noted, however, there's not an inherent efficiency improvement to be made here, as each complex sample contains two components (real and imaginary), so while you require half as many samples, each requires twice the amount of data storage, which cancels out any immediate benefit. Complex signals are often used in signal processing, however, where you have problems that map well to that structure (such as in quadrature communications systems).
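The conjugate symmetry claimed above is easy to check numerically with a tiny direct DFT (no external libraries; the signal is just random noise):

```python
# Demonstrates the conjugate symmetry X[k] = conj(X[N-k]) of the DFT
# of a real signal, using a tiny direct DFT.
import cmath, random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(8)]   # real-valued samples
X = dft(x)
# For every bin k, X[k] should equal the conjugate of X[N-k]:
ok = all(abs(X[k] - X[-k % len(X)].conjugate()) < 1e-9 for k in range(len(X)))
print(ok)   # True
```

Since the upper half of the spectrum is fully determined by the lower half, a real signal sampled at $f_s$ really does carry only $f_s/2$ of unique bandwidth.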
|
https://api.stackexchange.com
|
Let's talk about the balloon first, because it provides a pretty good model for the expanding universe. It's true that if you draw a big circle then it will quickly expand as you blow into the balloon. Actually, the apparent speed with which two points on the circle at a distance $d$ from each other move relative to each other will be $v = H_0 d$, where $H_0$ is the rate at which the balloon itself is expanding. This simple relation is known as Hubble's law, and $H_0$ is the famous Hubble constant. The moral of this story is that the expansion effect depends on the distance between objects and is really only apparent for space-time on the biggest scales. Still, this is only part of the full picture, because even over small distances objects should expand (just slower). Let us consider galaxies for the moment. According to Wikipedia, $H_0 \approx 70\,{\rm km \cdot s^{-1} \cdot Mpc^{-1}}$, so for the Milky Way, which has a diameter of $d \approx 30\,{\rm kpc}$, this would give $v \approx 2\,{\rm km \cdot s^{-1}}$. You can see that the effect is not terribly big, but given enough time, our galaxy should grow. But it doesn't. To understand why, we have to remember that space expansion isn't the only important thing that happens in our universe. There are other forces, like electromagnetism. But most importantly, we have forgotten about good old Newtonian gravity that holds big massive objects together. You see, when the equations of space-time expansion are derived, nothing of the above is taken into account, because all of it is negligible on the macroscopic scale. One assumes that the universe is a homogeneous fluid whose microscopic fluid particles are the size of galaxies (it takes some getting used to to think about galaxies as being microscopic). So it shouldn't be surprising that this model doesn't tell us anything about the stability of galaxies, not to mention planets, houses or tables.
And conversely, when investigating the stability of objects you don't really need to account for space-time expansion unless you get to the scale of galaxies, and even there the effect isn't that big.
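The Milky Way figure quoted above is a one-line application of Hubble's law:

```python
# Back-of-the-envelope Hubble's-law estimate from the text: v = H0 * d,
# with H0 ~ 70 km/s/Mpc and d ~ 30 kpc for the Milky Way.
H0 = 70.0            # km / s / Mpc
d_mpc = 30e-3        # 30 kpc expressed in Mpc
v = H0 * d_mpc       # km / s
print(round(v, 2))   # ~2.1 km/s, matching the figure quoted above
```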
|
https://api.stackexchange.com
|
Until someone identifies an "update" function in PyMOL, I think the next best thing is to use scripting (see the PyMOL wiki). It is an imperfect solution, but it may work for the situation presented in the original post if the session can be reproduced. To begin capturing the commands in PyMOL, including menu selections, to a script file, select: File -> Log File -> Open -> myscript.pml. When done with creating the display panels, select: File -> Log File -> Close. The input data files and the script itself may then be updated or replaced. In a fresh PyMOL session, execute the script with: File -> Run Script -> myscript.pml. To test the above, I generated a PyMOL session where I captured a script as above. I loaded the atomic coordinates of a small protein, a ligand and a 2mFobs-DFcalc electron density map. Then I displayed some panels along with a mesh surface around the compound. The co-structure was then re-refined, thus generating modified atomic coordinates and electron density maps. I replaced the original files with the updates and executed the script in a fresh PyMOL session. The display panels were updated accordingly. I recommend the Advanced Scripting Workshop.
|
https://api.stackexchange.com
|
You get burned because energy is transferred from the hot object to your hand until they are both at the same temperature. The more energy transferred, the more damage done to you. Aluminium, like most metals, has a lower heat capacity than water (i.e. you), so transferring a small amount of energy lowers the temperature of aluminium more than it heats you (by about 5x as much). Next, the mass of the aluminium foil is very low: there isn't much metal to hold the heat. And finally, the foil is probably crinkled, so although it is a good conductor of heat, you are only touching a very small part of the surface area, and the heat flow to you is low. If you put your hand flat on an aluminium engine block at the same temperature, you would get burned. The same thing applies to the sparks from a grinder or a firework "sparkler": the sparks are hot enough to be molten iron, but are so small they contain very little energy.
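The heat-capacity point can be put in numbers (textbook specific heats; the half-gram mass and 200 °C temperature are just illustrative):

```python
# Why a gram of hot aluminium burns less than a gram of hot water:
# water stores roughly 4.6x more heat per gram per kelvin than aluminium.
C_AL = 0.90     # J/(g*K), aluminium
C_WATER = 4.18  # J/(g*K), water (i.e. roughly you)

def heat_released(mass_g, c, t_hot, t_skin=37.0):
    """Energy (J) given up cooling from t_hot down to skin temperature."""
    return mass_g * c * (t_hot - t_skin)

foil = heat_released(0.5, C_AL, 200.0)      # half a gram of foil at 200 C
water = heat_released(0.5, C_WATER, 200.0)  # same mass of water at 200 C
print(round(foil), round(water))            # 73 341
```

The same mass of water at the same temperature carries about 4.6 times the energy, which is why a splash of boiling water is far worse than a brush with hot foil.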
|
https://api.stackexchange.com
|
Biological examples similar to programming statements: if: transcriptional activator; when present, a gene will be transcribed. In general there is no termination of events unless the signal is gone; the program ends only with the death of the cell, so the if statement is always part of a loop. while: transcriptional repressor; a gene will be transcribed as long as the repressor is not present. There are no equivalents of function calls. All events happen in the same space and there is always a likelihood of interference. One can argue that organelles act as compartments that may have function-like properties, but they are highly complex and are not just some kind of input-output devices. goto is always dependent on a condition. This can happen in the case of certain network connections such as feedforward loops and branched pathways. For example, if there is a signalling pathway A → B → C and there is another connection D → C, then if somehow D is activated it will directly affect C, making A and B dispensable. Logic gates have been constructed using synthetic biological circuits; see this review for more information. Note that molecular biological processes cannot be directly compared to computer code. It is the underlying logic that is important, not the statement construct itself, and these examples should not be taken as absolute analogies. It is also to be noted that DNA is just a set of instructions and not really a fully functional entity (it is functional to some extent). However, even being just a code, it is comparable to HLL code that has to be compiled to execute its functions. See this post too. It is also important to note that the cell, like many other physical systems, is analog in nature. Therefore, in most situations there is no 0/1 (binary) value of variables. Consider gene expression: if a transcriptional activator is present, the gene will be transcribed.
However, if you keep increasing the concentration of the activator, the expression of that gene will increase until it reaches a saturation point. So there is no digital logic here. Having said that, I would add that switching behaviour is possible in biological systems (including gene expression) and is also used in many cases. Certain kinds of regulatory network structures can give rise to such dynamics. Co-operativity, with or without positive feedback, is one of the mechanisms that can implement switching behaviour. For more details, read about ultrasensitivity. Also check out "Can molecular genetics make a boolean variable from a continuous
variable? "
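The saturation and switching behaviour described above is commonly modelled with the Hill equation; here is a minimal sketch (the threshold `k` and exponents are illustrative):

```python
# Sketch of ultrasensitivity via the Hill equation: expression saturates,
# and high cooperativity (large n) makes the response switch-like.
def hill(x, k=1.0, n=1):
    """Fraction of maximal gene expression at activator concentration x."""
    return x**n / (k**n + x**n)

# Graded (n=1) vs switch-like (n=8) response around the threshold k=1:
for x in (0.5, 1.0, 2.0):
    print(round(hill(x, n=1), 2), round(hill(x, n=8), 2))
```

With no cooperativity (n=1) the response climbs gradually toward saturation; with strong cooperativity (n=8) it jumps from nearly 0 to nearly 1 around the threshold, which is the closest an analog system gets to a Boolean variable.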
|
https://api.stackexchange.com
|
I agree completely with Srikant's explanation. To give a more heuristic spin on it: classical approaches generally posit that the world is one way (e.g., a parameter has one particular true value), and try to conduct experiments whose resulting conclusion, no matter the true value of the parameter, will be correct with at least some minimum probability. As a result, to express uncertainty in our knowledge after an experiment, the frequentist approach uses a "confidence interval", a range of values designed to include the true value of the parameter with some minimum probability, say 95%. A frequentist will design the experiment and the 95% confidence interval procedure so that out of every 100 experiments run start to finish, at least 95 of the resulting confidence intervals will be expected to include the true value of the parameter. The other 5 might be slightly wrong, or they might be complete nonsense; formally speaking that's OK as far as the approach is concerned, as long as 95 out of 100 inferences are correct. (Of course we would prefer them to be slightly wrong, not total nonsense.) Bayesian approaches formulate the problem differently. Instead of saying the parameter simply has one (unknown) true value, a Bayesian method says the parameter's value is fixed but has been chosen from some probability distribution, known as the prior probability distribution. (Another way to say that is that before taking any measurements, the Bayesian assigns a probability distribution, which they call a belief state, on what the true value of the parameter happens to be.) This "prior" might be known (imagine trying to estimate the size of a truck, if we know the overall distribution of truck sizes from the DMV) or it might be an assumption drawn out of thin air. The Bayesian inference is simpler: we collect some data, and then calculate the probability of different values of the parameter given the data. This new probability distribution is called the "a posteriori probability" or simply the "posterior". Bayesian approaches can summarize their uncertainty by giving a range of values on the posterior probability distribution that includes 95% of the probability; this is called a "95% credibility interval". A Bayesian partisan might criticize the frequentist confidence interval like this: "So what if 95 out of 100 experiments yield a confidence interval that includes the true value? I don't care about 99 experiments I didn't do; I care about this experiment I did do. Your rule allows 5 out of the 100 to be complete nonsense [negative values, impossible values] as long as the other 95 are correct; that's ridiculous." A frequentist die-hard might criticize the Bayesian credibility interval like this: "So what if 95% of the posterior probability is included in this range? What if the true value is, say, 0.37? If it is, then your method, run start to finish, will be wrong 75% of the time. Your response is, 'Oh well, that's OK because according to the prior it's very rare that the value is 0.37,' and that may be so, but I want a method that works for any possible value of the parameter. I don't care about 99 values of the parameter that it doesn't have; I care about the one true value it does have. Oh, also, by the way, your answers are only correct if the prior is correct. If you just pull it out of thin air because it feels right, you can be way off." In a sense both of these partisans are correct in their criticisms of each other's methods, but I would urge you to think mathematically about the distinction, as Srikant explains. Here's an extended example from that talk that shows the difference precisely in a discrete example. When I was a child, my mother used to occasionally surprise me by ordering a jar of chocolate-chip cookies to be delivered by mail. The delivery company stocked four different kinds of cookie jars: type A, type B, type C and type D, and they were all on the same truck, and you were never sure what type you would get. Each jar had exactly 100 cookies, but the feature that distinguished the different cookie jars was their respective distributions of chocolate chips per cookie. If you reached into a jar and took out a single cookie uniformly at random, these are the probability distributions you would get on the number of chips. A type-A cookie jar, for example, has 70 cookies with two chips each, and no cookies with four chips or more! A type-D cookie jar has 70 cookies with one chip each. Notice how each vertical column is a probability mass function: the conditional probability of the number of chips you'd get, given that the jar = A, or B, or C, or D, and each column sums to 100. I used to love to play a game as soon as the deliveryman dropped off my new cookie jar. I'd pull one single cookie at random from the jar, count the chips on the cookie, and try to express my uncertainty, at the 70% level, of which jars it could be. Thus it's the identity of the jar (A, B, C or D) that is the value of the parameter being estimated. The number of chips (0, 1, 2, 3 or 4) is the outcome, or the observation, or the sample. Originally I played this game using a frequentist 70% confidence interval. Such an interval needs to make sure that no matter the true value of the parameter, meaning no matter which cookie jar I got, the interval would cover that true value with at least 70% probability. An interval, of course, is a function that relates an outcome (a row) to a set of values of the parameter (a set of columns). But to construct the confidence interval and guarantee 70% coverage, we need to work "vertically", looking at each column in turn and making sure that 70% of the probability mass function is covered, so that 70% of the time that column's identity will be part of the interval that results. Remember that it's the vertical columns that form a p.m.f. So after doing that procedure, I ended up with these intervals: for example, if the number of chips on the cookie I draw is 1, my confidence interval will be {B, C, D}. If the number is 4, my confidence interval will be {B, C}. Notice that since each column sums to 70% or greater, then no matter which column we are truly in (no matter which jar the deliveryman dropped off), the interval resulting from this procedure will include the correct jar with at least 70% probability. Notice also that the procedure I followed in constructing the intervals had some discretion. In the column for type B, I could have just as easily made sure that the intervals that included B would be 0, 1, 2, 3 instead of 1, 2, 3, 4. That would have resulted in 75% coverage for type-B jars (12 + 19 + 24 + 20), still meeting the lower bound of 70%. My sister Bayesia thought this approach was crazy, though. "You have to consider the deliveryman as part of the system," she said. "Let's treat the identity of the jar as a random variable itself, and let's assume that the deliveryman chooses among them uniformly, meaning he has all four on his truck, and when he gets to our house he picks one at random, each with uniform probability." "With that assumption, now let's look at the joint probabilities of the whole event: the jar type and the number of chips you draw from your first cookie," she said, drawing the following table. Notice that the whole table is now a probability mass function, meaning the whole table sums to 100%. "OK," I said, "where are you headed with this?" "You've been looking at the conditional probability of the number of chips, given the jar," said Bayesia. "That's all wrong! What you really care about is the conditional probability of which jar it is, given the number of chips on the cookie! Your 70% interval should simply include the jars that, in total, have 70% probability of being the true jar. Isn't that a lot simpler and more intuitive?" "Sure, but how do we calculate that?" I asked. "Let's say we know that you got 3 chips. Then we can ignore all the other rows in the table, and simply treat that row as a probability mass function. We'll need to scale up the probabilities proportionately so each row sums to 100, though." She did. "Notice how each row is now a p.m.f., and sums to 100%. We've flipped the conditional probability from what you started with: now it's the probability of the man having dropped off a certain jar, given the number of chips on the first cookie." "Interesting," I said. "So now we just circle enough jars in each row to get up to 70% probability?" We did just that, making these credibility intervals. Each interval includes a set of jars that, a posteriori, sum to 70% probability of being the true jar. "Well, hang on," I said. "I'm not convinced. Let's put the two kinds of intervals side by side and compare them for coverage and, assuming that the deliveryman picks each kind of jar with equal probability, credibility." Here they are: confidence intervals; credibility intervals. "See how crazy your confidence intervals are?" said Bayesia. "You don't even have a sensible answer when you draw a cookie with zero chips! You just say it's the empty interval. But that's obviously wrong: it has to be one of the four types of jars. How can you live with yourself, stating an interval at the end of the day when you know the interval is wrong? And ditto when you pull a cookie with 3 chips: your interval is only correct 41% of the time. Calling this a '70%' confidence interval is bullshit." "Well, hey," I replied. "It's correct 70% of the time, no matter which jar the deliveryman dropped off. That's a lot more than you can say about your credibility intervals. What if the jar is type B? Then your interval will be wrong 80% of the time, and only correct 20% of the time!" "This seems like a big problem," I continued, "because your mistakes will be correlated with the type of jar. If you send out 100 'Bayesian' robots to assess what type of jar you have, each robot sampling one cookie, you're telling me that on type-B days, you will expect 80 of the robots to get the wrong answer, each having >73% belief in its incorrect conclusion! That's troublesome, especially if you want most of the robots to agree on the right answer." "Plus we had to make this assumption that the deliveryman behaves uniformly and selects each type of jar at random," I said. "Where did that come from? What if it's wrong? You haven't talked to him; you haven't interviewed him. Yet all your statements of a posteriori probability rest on this statement about his behavior. I didn't have to make any such assumptions, and my interval meets its criterion even in the worst case." "It's true that my credibility interval does perform poorly on type-B jars," Bayesia said. "But so what? Type-B jars happen only 25% of the time. It's balanced out by my good coverage of type-A, C and D jars. And I never publish nonsense." "It's true that my confidence interval does perform poorly when I've drawn a cookie with zero chips," I said. "But so what? Chipless cookies happen, at most, 27% of the time in the worst case (a type-D jar). I can afford to give nonsense for this outcome because no jar will result in a wrong answer more than 30% of the time." "The column sums matter," I said. "The row sums matter," Bayesia said. "I can see we're at an impasse," I said. "We're both correct in the mathematical statements we're making, but we disagree about the appropriate way to quantify uncertainty." "That's true," said my sister. "Want a cookie?"
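Bayesia's row-normalization step can be sketched in code. The original tables are in figures not reproduced here, so the per-jar chip distributions below are invented for illustration (only a few details from the story are echoed: 70 two-chip cookies in jar A, 70 one-chip cookies in jar D, and the 12/19/24/20 counts mentioned for column B):

```python
# The row-normalization Bayesia performs, on an illustrative joint table.
from fractions import Fraction

jars = "ABCD"
# P(chips | jar) * 100; each column sums to 100. Numbers are invented
# for illustration, echoing a few details from the story.
table = {"A": [5, 25, 70, 0, 0],
         "B": [25, 12, 19, 24, 20],
         "C": [10, 20, 30, 25, 15],
         "D": [27, 70, 2, 1, 0]}

def posterior(chips, prior=None):
    """P(jar | chip count): weight each jar by prior * likelihood, normalize."""
    prior = prior or {j: Fraction(1, 4) for j in jars}   # uniform deliveryman
    weights = {j: prior[j] * table[j][chips] for j in jars}
    total = sum(weights.values())
    return {j: weights[j] / total for j in jars}

post = posterior(3)   # we drew a cookie with 3 chips
print({j: float(round(post[j], 2)) for j in jars})
```

Each resulting row sums to 1, which is exactly the "each row is now a p.m.f." observation in the dialogue; circling jars until the circled probabilities reach 70% then gives the credibility interval.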
|
https://api.stackexchange.com
|
Yes, this helps with other infectious diseases as well. A good example is the flu, whose season was measurably shorter this year than in other years on record; see the figure from reference 1 for comparison. Reference 2 shows that this is also true for other respiratory diseases (Figure 2). This shows very well that isolation measures and social distancing work to control such transmissible diseases. References: 1. How coronavirus lockdowns stopped flu in its tracks. 2. Monitoring respiratory infections in COVID-19 epidemics.
|
https://api.stackexchange.com
|
To follow up on what mbq said, there have been a number of "origin of life" studies which suggest that RNA was a precursor to DNA, the so-called "RNA world" (1), since RNA can carry out both roles which DNA and proteins perform today. Further speculations suggest that things like peptide nucleic acids (PNA) may have preceded RNA, and so on. Catalytic molecules and genetic molecules are generally required to have different features. For example, catalytic molecules should be able to fold and have many building blocks (for catalytic action), whereas genetic molecules should not fold (for template synthesis) and have few building blocks (for high copy fidelity). This puts a lot of demands on one molecule. Also, catalytic biopolymers can (potentially) catalyse their own destruction. RNA seems to be able to balance these demands, but then the difficulty is in making RNA prebiotically; so far this has not been achieved. This has led to interest in "metabolism first" models, where early life has no genetic biopolymer and somehow gives rise to genetic inheritance. However, so far this seems to have been little explored and largely unsuccessful (2). Edit: I just saw this popular article in New Scientist, which also discusses TNA (threose nucleic acid) and gives some background reading for PNA, GNA (glycol nucleic acid) and ANA (amyloid nucleic acid). (1) Gilbert, W., 1986, Nature, 319, 618, "Origin of life: the RNA world". (2) Copley et al., 2007, Bioorg Chem, 35, 430, "The origin of the RNA world: co-evolution of genes and metabolism."
|
https://api.stackexchange.com
|
Storing local variables on a stack is an implementation detail, basically an optimization. You can think of it this way: when entering a function, space for all local variables is allocated somewhere. You can then access all variables, since you know their location somehow (this is part of the process of allocation). When leaving a function, the space is deallocated (freed). The stack is one way of implementing this process; you can think of it as a kind of "fast heap" which has limited size and so is only appropriate for small variables. As an additional optimization, all local variables are stored in one block. Since each local variable has a known size, you know the offset of each variable in the block, and that is how you access it. This is in contrast to variables allocated on the heap, whose addresses are themselves stored in other variables. You can think of the stack as very similar to the classical stack data structure, with one crucial difference: you are allowed to access items below the top of the stack. Indeed, you can access the $k$th item from the top. This is how you can access all your local variables without pushing and popping. The only pushing is done upon entering the function, and the only popping upon leaving the function. Finally, let me mention that in practice some of the local variables are stored in registers. This is because access to registers is faster than access to the stack. This is another way of implementing a space for local variables. Once again, we know exactly where a variable is stored (this time not via an offset, but via the name of a register), and this kind of storage is only appropriate for small data.
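The scheme described above can be modelled in a few lines: one block of locals is pushed on function entry, variables are reached by fixed offsets within the current frame, and the whole block is popped on exit (a toy model, not how any particular runtime is implemented):

```python
# Toy model of stack frames: all locals for a call live in one block
# pushed on entry and popped on exit; each variable is reached by a
# fixed offset inside the current frame.
class Stack:
    def __init__(self):
        self.slots = []
        self.frames = []          # start index of each active frame

    def enter(self, n_locals):
        self.frames.append(len(self.slots))
        self.slots.extend([0] * n_locals)   # allocate the whole block at once

    def leave(self):
        del self.slots[self.frames.pop():]  # free the block in one step

    def get(self, offset):                  # access below top-of-stack
        return self.slots[self.frames[-1] + offset]

    def set(self, offset, value):
        self.slots[self.frames[-1] + offset] = value

s = Stack()
s.enter(2)          # caller: two locals
s.set(0, 42)
s.enter(1)          # callee gets its own frame
s.set(0, 7)
print(s.get(0))     # 7: offset 0 in the *current* frame
s.leave()
print(s.get(0))     # 42: caller's locals are intact
```

Note that `get` and `set` never push or pop; the only pushing is in `enter` and the only popping in `leave`, exactly the difference from the classical stack data structure noted above.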
|
https://api.stackexchange.com
|
there is a wide variety of algorithms ; barnes hut is a popular $ \ mathcal { o } ( n \ log n ) $ method, and the fast multipole method is a much more sophisticated $ \ mathcal { o } ( n ) $ alternative. both methods make use of a tree data structure where nodes essentially only interact with their nearest neighbors at each level of the tree ; you can think of splitting the tree between the set of processes at a sufficient depth, and then having them cooperate only at the highest levels. you can find a recent paper discussing fmm on petascale machines here.
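a minimal, illustrative 1-d barnes-hut sketch (hypothetical names, not the cited petascale implementation): a whole cluster is replaced by its centre of mass whenever it is "far enough", judged by size/distance against an opening parameter theta.

```python
import math

class Node:
    """1-d barnes-hut tree node over unit-mass particles in [lo, hi]."""
    def __init__(self, xs, lo, hi):
        self.lo, self.hi = lo, hi
        self.mass = len(xs)
        self.com = sum(xs) / len(xs)          # centre of mass
        self.children = []
        mid = 0.5 * (lo + hi)
        left = [x for x in xs if x < mid]
        right = [x for x in xs if x >= mid]
        if left and right:                    # subdivide while both halves are occupied
            self.children = [Node(left, lo, mid), Node(right, mid, hi)]

def force(tree, x, theta=0.5):
    """1/r^2 attraction at x; far clusters (size/distance < theta) are not opened."""
    d = tree.com - x
    size = tree.hi - tree.lo
    if not tree.children or size < theta * abs(d):
        if d == 0:
            return 0.0
        return tree.mass / d**2 * (1.0 if d > 0 else -1.0)
    return sum(force(c, x, theta) for c in tree.children)

xs = [0.1, 0.2, 0.3, 5.0]
tree = Node(xs, 0.0, 6.0)
f = force(tree, 10.0)                          # approximate net pull on a test mass
exact = sum(-1.0 / (10.0 - x)**2 for x in xs)  # direct summation for comparison
```

the tight cluster near the origin is evaluated as a single pseudo-particle, yet the result agrees with direct summation to well under a percent here; the real 2-d/3-d algorithms work the same way with quad-/octrees.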
|
https://api.stackexchange.com
|
the color burst is also an indicator that there is a color signal. this is for compatibility with black and white signals: no color burst means a b&w signal, so the receiver only decodes the luminance signal (no chroma). no signal means no color burst, so the decoder falls back to b&w mode. the same idea applies to fm stereo/mono: if there is no 19 khz subcarrier present, the fm demodulator falls back to mono.
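the "no pilot, fall back to mono" logic can be sketched numerically (a toy single-bin detector with made-up sample rate and threshold, not a real fm demodulator): measure the energy at 19 khz and compare it against a threshold.

```python
import math

def tone_energy(samples, fs, f0):
    """single-bin dft magnitude at frequency f0 (goertzel-style detection)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * f0 * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f0 * i / fs) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

fs = 192000.0                       # assumed sample rate of the composite signal
t = [i / fs for i in range(4096)]
mono = [math.sin(2 * math.pi * 1000 * x) for x in t]                 # audio only
stereo = [m + 0.1 * math.sin(2 * math.pi * 19000 * x) for m, x in zip(mono, t)]

def is_stereo(sig, fs, thresh=0.01):
    """decide mono/stereo from the presence of the 19 khz pilot tone."""
    return tone_energy(sig, fs, 19000.0) > thresh

flag_mono = is_stereo(mono, fs)      # no pilot present
flag_stereo = is_stereo(stereo, fs)  # pilot present
```

the 0.1-amplitude pilot shows up as roughly 0.05 at the 19 khz bin, far above the leakage from the audio tone, so the decision is robust.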
|
https://api.stackexchange.com
|
this is to expand on leon's suggestion to use a hub. usb hubs are not all created equal. unofficially, there are several "grades": cheap hubs. these are cost-optimized to the point where they don't adhere to the usb spec any more. often, the +5v lines of the downstream ports are wired directly to the computer. no protection switches; maybe a polyfuse, if you're lucky. edit: here's a thread where the o.p. is complaining that an improperly designed usb hub is back-feeding his pc. decent hubs. the downstream +5v is connected through a switch with over-current protection. esd protection is usually present. industrial hubs. there's usually respectable overvoltage protection in the form of a tvs and resettable fuses. isolated hubs. there's actual galvanic isolation between the upstream port and the downstream ports. the isolation rating tends to be 2 kv to 5 kv. isolated hubs are used when a really high voltage can come in from a downstream port (e.g. mains ac, a defibrillator, back-emf from a large motor). isolated hubs are also used for breaking ground loops in ordinary conditions. what to use depends on the type of threat you're expecting. if you're concerned with shorts between power and data lines, you could use a decent hub. in the worst case, the hub controller will be sacrificed, but it will save the port on the laptop. if you're concerned that a voltage higher than +5v can get to the pc, you can fortify the hub with overvoltage protection consisting of a tvs & polyfuse. however, i'm still talking about relatively low voltages on the order of +24v. if you're concerned with really high voltages, consider an isolated hub and gas discharge tubes - and consider using a computer which you can afford to lose.
|
https://api.stackexchange.com
|
unlike the conventional wisdom, the pain you feel the next day (after strenuous exercise) has nothing to do with lactic acid. actually, lactic acid is rapidly removed from the muscle cell and converted to other substances in the liver (see the cori cycle). if you start to feel your muscles "burning" during exercise (due to lactic acid), you just need to rest for some seconds, and the "burning" sensation disappears. according to scientific american: contrary to popular opinion, lactate or, as it is often called, lactic acid buildup is not responsible for the muscle soreness felt in the days following strenuous exercise. rather, the production of lactate and other metabolites during extreme exertion results in the burning sensation often felt in active muscles. researchers who have examined lactate levels right after exercise found little correlation with the level of muscle soreness felt a few days later. (emphasis mine) so if it's not lactic acid, what is the cause of the pain? what you're feeling the next day is called delayed onset muscle soreness (doms). doms is basically an inflammatory process (with accumulation of histamine and prostaglandins), due to microtrauma, or micro-ruptures, in the muscle fibers. the soreness can last from some hours to a couple of days or more, depending on the severity of the trauma (see below). according to the "damage hypothesis" (also known as the "micro-tear model"), micro-ruptures are necessary for hypertrophy (if you are working out seeking hypertrophy), which explains why lifting very little weight doesn't promote hypertrophy. however, this same microtrauma promotes an inflammatory reaction (tiidus, 2008). this inflammation can take some time to develop (that's why you normally feel the soreness the next day) and, like any inflammation, presents the classic signs: pain, edema and heat.
this figure from mcardle (2010) shows the proposed sequence for doms: figure (not shown): proposed sequence for delayed-onset muscle soreness. source: mcardle (2010). as anyone who works out at the gym knows, deciding how much weight to add to the barbell can be complicated: too little weight promotes no microtrauma, and you won't have any hypertrophy; too much weight leads to too much microtrauma, and you'll have trouble
getting out of bed the next day. edit: this comment asks if there is evidence for the "micro-tear model" or "damage model" (also eimd, or exercise-induced muscle damage). first, that's precisely why i was careful when i used the term hypothesis. second, despite the matter not being settled, there is indeed evidence supporting eimd. this meta-analysis (schoenfeld, 2012) says: there is a sound theoretical rationale supporting a potential role for eimd in the hypertrophic response. although it appears that muscle growth can occur in the relative absence of muscle damage, potential mechanisms exist whereby eimd may enhance the accretion of muscle proteins including the release of inflammatory agents, activation of satellite cells, and upregulation of the igf-1 system, or at least set in motion the signaling pathways that lead to hypertrophy. the same paper, however, discusses the problems of eimd and a few alternative hypotheses (some of them not mutually exclusive, though). sources: tiidus, p. (2008). skeletal muscle damage and repair. champaign: human kinetics. mcardle, w., katch, f. and katch, v. (2010). exercise physiology. baltimore: wolters kluwer health/lippincott williams & wilkins. roth, s. (2017). why does lactic acid build up in muscles? and why does it cause soreness?. [online] scientific american. available at: [accessed 22 jun. 2017]. schoenfeld, b. (2012). does exercise-induced muscle damage play a role in skeletal muscle hypertrophy?. journal of strength and conditioning research, 26(5), pp. 1441-1453.
|
https://api.stackexchange.com
|
unfortunately the other 3 answers to the question are incorrect, but they help keep a common misunderstanding alive :-) thieving is added to the outer layers in order to support a more balanced chemical process during plating. also notice that there is no need to "balance copper" (or stackups, for that matter) in modern pcb fabrication to avoid "warped boards". i wrote about this on my blog recently. you can find other references on the net.
|
https://api.stackexchange.com
|
zeroing bins in the frequency domain is the same as multiplying by a rectangular window in the frequency domain. multiplying by a window in the frequency domain is the same as circular convolution by the transform of that window in the time domain. the transform of a rectangular window is the sinc function ($\sin(\omega t)/\omega t$). note that the sinc function has lots of large ripples, and those ripples extend the full width of the time-domain aperture. if a time-domain filter that can output all those ripples (ringing) is a "bad idea", then so is zeroing bins. these ripples will be largest for any spectral content that is "between bins", i.e. not integer-periodic in the fft aperture width. so if your original fft input data is a window on any data that is somewhat non-periodic in that window (e.g. most non-synchronously sampled "real world" signals), then those particular artifacts will be produced by zeroing bins. another way to look at it is that each fft result bin represents a certain frequency of sine wave in the time domain. thus zeroing a bin will produce the same result as subtracting that sine wave, or, equivalently, adding a sine wave of an exact fft bin center frequency but with the opposite phase. note that if the frequency of some content in the time domain is not purely integer-periodic in the fft width, then trying to cancel a non-integer-periodic signal by adding the inverse of an exactly integer-periodic sine wave will produce, not silence, but something that looks more like a "beat" note (an am-modulated sine wave of a different frequency). again, probably not what is wanted. conversely, if your original time-domain signal is just a few pure unmodulated sinusoids that are all exactly integer-periodic in the fft aperture width, then zeroing fft bins will remove the designated ones without artifacts.
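a quick numerical illustration of the bin-centred vs. between-bins cases (numpy, synthetic signals; the band of five zeroed bins is an arbitrary choice):

```python
import numpy as np

n = 256
t = np.arange(n)

def residue_after_zeroing(freq_bins):
    """zero a small band of fft bins around a tone; return the residual energy ratio."""
    x = np.sin(2 * np.pi * freq_bins * t / n)
    X = np.fft.rfft(x)
    k = int(round(freq_bins))
    X[max(k - 2, 0):k + 3] = 0           # zero the tone's neighbourhood (5 bins)
    return np.linalg.norm(np.fft.irfft(X, n)) / np.linalg.norm(x)

clean = residue_after_zeroing(32.0)   # integer-periodic tone: cancels almost exactly
messy = residue_after_zeroing(32.5)   # between bins: leakage survives in all bins
```

the bin-centred tone is removed down to rounding error, while the half-bin-offset tone leaves residual ringing holding a substantial fraction of the original energy - exactly the artifact described above.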
|
https://api.stackexchange.com
|
this is the xkcd nerd sniping problem. it forced me to abandon everything else i was doing to research and write up this answer. then, years later, it compelled me to return and edit it for clarity. the following full solution is based on the links posted in the other answer. but in addition to presenting this information in a convenient form, i've also made some significant simplifications of my own. now, nothing more than high school integration is needed! the strategy in a nutshell is to write down an expression for the resistance between any two points as an integral. use integration tricks to evaluate the integral found in step 1 for two diagonally separated points. use a recurrence relation to determine all other resistances from the ones found in step 2. the result is an expression for all resistances, of which the knight's move is just one. the answer for it turns out to be $ $ \ frac { 4 } { \ pi } - \ frac { 1 } { 2 } $ $ setting up the problem while we're ultimately interested in a two - dimensional grid, to start with nothing will depend on the dimension. therefore we will begin by working in $ n $ dimensions, and specialise to $ n = 2 $ only when necessary. label the grid points by $ \ vec { n } $, an $ n $ - component vector with integer components. suppose the voltage at each point is $ v _ \ vec { n } $. then the current flowing into $ \ vec { n } $ from its $ 2n $ neighbours is $ $ \ sum _ { i, \ pm } ( v _ { \ vec { n } \ pm \ vec { e } _ i } - v _ \ vec { n } ) $ $ ( $ \ vec { e } _ i $ is the unit vector along the $ i $ - direction. ) insist that an external source is pumping one amp into $ \ vec { 0 } $ and out of $ \ vec { a } $. current conservation at $ \ vec { n } $ gives $ $ \ sum _ { i, \ pm } ( v _ { \ vec { n } \ pm \ vec { e } _ i } - v _ \ vec { n } ) = - \ delta _ \ vec { n } + \ delta _ { \ vec { n } - \ vec { a } }
\ tag { 1 } \ label { eqv } $ $ ( $ \ delta _ \ vec { n } $ equals $ 1 $ if $ \ vec { n } = \ vec { 0 } $ and $ 0 $ otherwise. ) solving this equation for $ v _ \ vec { n } $ will give us our answer. indeed, the resistance between $ \ vec { 0 } $ and $ \ vec { a } $ will simply be $ $ r _ \ vec { a } = v _ \ vec { 0 } - v _ \ vec { a } $ $ unfortunately, there are infinitely many solutions for $ v _ \ vec { n } $, and their results for $ r _ \ vec { a } $ do not agree! this is because the question does not specify any boundary conditions at infinity. depending on how we choose them, we can get any value of $ r _ \ vec { a } $ we like! it will turn out that there's a unique reasonable choice, but for now, let's forget about this problem completely and just find any solution. solution by fourier transform to solve our equation for $ v _ \ vec { n } $, we will look for a green's function $ g _ \ vec { n } $ satisfying a similar equation : $ $ \ sum _ { i, \ pm } ( g _ { \ vec { n } \ pm \ vec { e } _ i } - g _ \ vec { n } ) = \ delta _ \ vec { n } \ tag { 2 } \ label { eqg } $ $ a solution to $ \ eqref { eqv } $ will then be $ $ v _ n = - g _ \ vec { n } + g _ { \ vec { n } - \ vec { a } } $ $ to find $ g _ \ vec { n } $, assume ( out of the blue ) that it can be represented as $ $ g _ \ vec { n } = \ int _ 0 ^ { 2 \ pi } \ frac { d ^ n \ vec { k } } { ( 2 \ pi ) ^ n } ( e ^ { i \ vec { k } \ cdot \ vec { n } } - 1 ) g ( \ vec { k } ) $ $ for some unknown function $ g ( \
vec { k } ) $. then noting that the two sides of $ \ eqref { eqg } $ can be written as \ begin { align } \ sum _ { i, \ pm } ( g _ { \ vec { n } \ pm \ vec { e } _ i } - g _ \ vec { n } ) & = \ int _ 0 ^ { 2 \ pi } \ frac { d ^ n \ vec { k } } { ( 2 \ pi ) ^ n } e ^ { i \ vec { k } \ cdot \ vec { n } } \ left ( \ sum _ { i, \ pm } e ^ { \ pm i k _ i } - 2n \ right ) g ( \ vec { k } ) \ \ \ delta _ \ vec { n } & = \ int _ 0 ^ { 2 \ pi } \ frac { d ^ n \ vec { k } } { ( 2 \ pi ) ^ n } e ^ { i \ vec { k } \ cdot \ vec { n } } \ end { align } we see $ \ eqref { eqg } $ can be solved by choosing $ $ g ( \ vec { k } ) = \ frac { 1 } { \ sum _ { i, \ pm } e ^ { \ pm i k _ i } - 2n } $ $ which leads to the green's function $ $ g _ \ vec { n } = \ frac { 1 } { 2 } \ int _ 0 ^ { 2 \ pi } \ frac { d ^ n \ vec { k } } { ( 2 \ pi ) ^ n } \ frac { \ cos ( \ vec { k } \ cdot \ vec { n } ) - 1 } { \ sum _ i \ cos ( k _ i ) - n } $ $ by the way, the funny $ - 1 $ in the numerator doesn't seem to be doing much other than shifting $ g _ \ vec { n } $ by the addition of an overall constant, so you might wonder what it's doing there. the answer is that it's technically needed to make the integral finite, but other than that it doesn't matter as it will cancel out of the answer. so the final answer for the resistance is $ $ r _ \ vec { a
} = v _ \ vec { 0 } - v _ \ vec { a } = 2 ( g _ \ vec { a } - g _ \ vec { 0 } ) = \ int _ 0 ^ { 2 \ pi } \ frac { d ^ n \ vec { k } } { ( 2 \ pi ) ^ n } \ frac { 1 - \ cos ( \ vec { k } \ cdot \ vec { a } ) } { n - \ sum _ i \ cos ( k _ i ) } $ $ why is this the right answer? ( from this point on, $ n = 2 $. ) i said earlier that there were infinitely many solutions for $ v _ \ vec { n } $. but the one above is special, because at large distances $ r $ from the origin, the voltages and currents behave like $ $ v = \ mathcal { o } ( 1 / r ) \ qquad i = \ mathcal { o } ( 1 / r ^ 2 ) $ $ a standard theorem ( uniqueness of solutions to laplace's equation ) says there can be only one solution satisfying this condition. so our solution is the unique one with the least possible current flowing at infinity and with $ v _ \ infty = 0 $. and even if the question didn't ask for that, it's obviously the only reasonable thing to ask. or is it? maybe you'd prefer to define the problem by working on a finite grid, finding the unique solution for $ v _ \ vec { n } $ there, then trying to take some sort of limit as the grid size goes to infinity. however, one can argue that the $ v _ \ vec { n } $ obtained from a size - $ l $ grid should converge to our $ v _ \ vec { n } $ with an error of order $ 1 / l $. so the end result is the same. the diagonal case it turns out the integral for $ r _ { n, m } $ is tricky to do when $ n \ neq m $, but much easier to do when $ n = m $. therefore, we'll deal with that case first. we want to calculate \ begin { align } r _ { n, n } & = \ frac { 1 } { ( 2 \ pi ) ^ 2 } \ int _ a dx \, dy \, \ frac
{ 1 - \ cos ( n ( x + y ) ) } { 2 - \ cos ( x ) - \ cos ( y ) } \ \ & = \ frac { 1 } { 2 ( 2 \ pi ) ^ 2 } \ int _ a dx \, dy \, \ frac { 1 - \ cos ( n ( x + y ) ) } { 1 - \ cos ( \ frac { x + y } { 2 } ) \ cos ( \ frac { x - y } { 2 } ) } \ end { align } where $ a $ is the square $ 0 \ leq x, y \ leq 2 \ pi $. because the integrand is periodic, the domain can be changed from $ a $ to $ a'$ like so : then changing variables to $ $ a = \ frac { x + y } { 2 } \ qquad b = \ frac { x - y } { 2 } \ qquad dx \, dy = 2 \, da \, db $ $ the integral becomes $ $ r _ { n, n } = \ frac { 1 } { ( 2 \ pi ) ^ 2 } \ int _ 0 ^ \ pi da \ int _ { - \ pi } ^ \ pi db \, \ frac { 1 - \ cos ( 2na ) } { 1 - \ cos ( a ) \ cos ( b ) } $ $ the $ b $ integral can be done with the half - tan substitution $ $ t = \ tan ( b / 2 ) \ qquad \ cos ( b ) = \ frac { 1 - t ^ 2 } { 1 + t ^ 2 } \ qquad db = \ frac { 2 } { 1 + t ^ 2 } dt $ $ giving $ $ r _ { n, n } = \ frac { 1 } { 2 \ pi } \ int _ 0 ^ \ pi da \, \ frac { 1 - \ cos ( 2na ) } { \ sin ( a ) } $ $ the trig identity $ $ 1 - \ cos ( 2na ) = 2 \ sin ( a ) \ big ( \ sin ( a ) + \ sin ( 3a ) + \ dots + \ sin ( ( 2n - 1 ) a ) \ big ) $ $ reduces the remaining $ a $ integral to \ begin { align } r _ { n
, n } & = \ frac { 2 } { \ pi } \ left ( 1 + \ frac { 1 } { 3 } + \ dots + \ frac { 1 } { 2n - 1 } \ right ) \ end { align } a recurrence relation the remaining resistances can in fact be determined without doing any more integrals! all we need is rotational / reflectional symmetry, $ $ r _ { n, m } = r _ { \ pm n, \ pm m } = r _ { \ pm m, \ pm n } $ $ together with the recurrence relation $ $ r _ { n + 1, m } + r _ { n - 1, m } + r _ { n, m + 1 } + r _ { n, m - 1 } - 4 r _ { n, m } = 2 \ delta _ { ( n, m ) } $ $ which follows from $ r _ \ vec { n } = 2 g _ \ vec { n } $ and $ \ eqref { eqg } $. it says that if we know all resistances but one in a " plus " shape, then we can determine the missing one. start off with the trivial statement that $ $ r _ { 0, 0 } = 0 $ $ applying the recurrence relation at $ ( n, m ) = ( 0, 0 ) $ and using symmetry gives $ $ r _ { 1, 0 } = r _ { 0, 1 } = 1 / 2 $ $ the next diagonal is done like so : here the turquoise square means that we fill in $ r _ { 1, 1 } $ using the formula for $ r _ { n, n } $. the yellow squares indicate an application of the recurrence relation to determine $ r _ { 2, 0 } $ and $ r _ { 0, 2 } $. the dotted squares also indicate resistances we had to determine by symmetry during the previous step. the diagonal after that is done similarly, but without the need to invoke the formula for $ r _ { n, n } $ : repeatedly alternating the two steps above yields an algorithm for determining every $ r _ { m, n } $. clearly, all are of the form $ $ a + b / \ pi $ $ where $ a $ and $ b $ are rational numbers. now this algorithm can easily be performed by hand, but one might as well code it up
in python:

```python
import numpy as np
import fractions as fr

n = 4
# arr[i, j] holds the pair (a, b) with r_{i,j} = a + b/pi
arr = np.empty((n * 2 + 1, n * 2 + 1, 2), dtype='object')

def plus(i, j):
    # fill in the missing resistance of a "plus" shape via the recurrence
    arr[i + 1, j] = 4 * arr[i, j] - arr[i - 1, j] - arr[i, j + 1] - arr[i, abs(j - 1)]

def even(i):
    # diagonal entry from the closed formula for r_{n,n}, then sweep outwards
    arr[i, i] = arr[i - 1, i - 1] + [0, fr.Fraction(2, 2 * i - 1)]
    for k in range(1, i + 1):
        plus(i + k - 1, i - k)

def odd(i):
    # first off-diagonal entry from the recurrence at (i, i) plus symmetry
    arr[i + 1, i] = 2 * arr[i, i] - arr[i, i - 1]
    for k in range(1, i + 1):
        plus(i + k, i - k)

arr[0, 0] = 0
arr[1, 0] = [fr.Fraction(1, 2), 0]
for i in range(1, n):
    even(i)
    odd(i)
even(n)

for i in range(0, n + 1):
    for j in range(0, n + 1):
        a, b = arr[max(i, j), min(i, j)]
        print('(', a, ') + (', b, ')/π', sep='', end='\t')
    print()
```

this produces the output $ $ \ large \ begin { array } { | c : c : c : c : c } 40 - \ frac { 368 } { 3 \ pi } & \ frac { 80 } { \ pi } - \ frac { 49 } { 2 } & 6 - \ frac { 236 } { 15 \ pi } & \ frac { 24 } { 5 \ pi } - \ frac { 1 } { 2 } & \ frac { 352 } { 105 \ pi } \ \ \ hdashline \ frac { 17 } { 2 } - \ frac { 24 } { \ pi } & \ frac { 46 } { 3 \ pi } - 4 &
\ frac { 1 } { 2 } + \ frac { 4 } { 3 \ pi } & \ frac { 46 } { 15 \ pi } & \ frac { 24 } { 5 \ pi } - \ frac { 1 } { 2 } \ \ \ hdashline 2 - \ frac { 4 } { \ pi } & \ frac { 4 } { \ pi } - \ frac { 1 } { 2 } & \ frac { 8 } { 3 \ pi } & \ frac { 1 } { 2 } + \ frac { 4 } { 3 \ pi } & 6 - \ frac { 236 } { 15 \ pi } \ \ \ hdashline \ frac { 1 } { 2 } & \ frac { 2 } { \ pi } & \ frac { 4 } { \ pi } - \ frac { 1 } { 2 } & \ frac { 46 } { 3 \ pi } - 4 & \ frac { 80 } { \ pi } - \ frac { 49 } { 2 } \ \ \ hdashline 0 & \ frac { 1 } { 2 } & 2 - \ frac { 4 } { \ pi } & \ frac { 17 } { 2 } - \ frac { 24 } { \ pi } & 40 - \ frac { 368 } { 3 \ pi } \ \ \ hline \ end { array } $ $ from which we can read off the final answer, $ $ r _ { 2, 1 } = \ frac { 4 } { \ pi } - \ frac { 1 } { 2 } $ $
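as a numerical sanity check (not part of the original derivation; plain midpoint quadrature is an arbitrary choice), the integral formula for $ r _ \vec { a } $ with $ n = 2 $ and $ \vec { a } = ( 2, 1 ) $ can be evaluated directly and compared against $ 4 / \pi - 1 / 2 \approx 0.7732 $:

```python
import math

def resistance(a1, a2, m=400):
    """midpoint-rule evaluation of the 2-d integral for r_{a1,a2}.

    midpoint sampling never hits k = (0, 0), where the integrand is
    only defined as a (bounded) limit.
    """
    h = 2 * math.pi / m
    total = 0.0
    for i in range(m):
        kx = (i + 0.5) * h
        for j in range(m):
            ky = (j + 0.5) * h
            total += (1 - math.cos(kx * a1 + ky * a2)) / (2 - math.cos(kx) - math.cos(ky))
    return total * h * h / (2 * math.pi) ** 2

r_knight = resistance(2, 1)          # the knight's-move resistance
exact = 4 / math.pi - 0.5
```

the quadrature agrees with the closed form to a few decimal places, and `resistance(1, 0)` likewise comes out at 1/2, matching the table.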
|
https://api.stackexchange.com
|
in 1933, kurt gödel showed that the class called $ \ lbrack \ exists ^ * \ forall ^ 2 \ exists ^ *, { \ mathrm { all } }, ( 0 ) \ rbrack $ was decidable. these are the formulas that begin with $ \ exists a \ exists b \ ldots \ exists m \ forall n \ forall p \ exists q \ ldots \ exists z $, with exactly two $ \ forall $ quantifiers, with no intervening $ \ exists $ s. these formulas may contain arbitrary relations amongst the variables, but no functions or constants, and no equality symbol. gödel showed that there is a method which takes any formula in this form and decides whether it is satisfiable. ( if there are three $ \ forall $ s in a row, or an $ \ exists $ between the $ \ forall $ s, there is no such method. ) in the final sentence of the same paper, gödel added : in conclusion, i would still like to remark that theorem i can also be proved, by the same method, for formulas that contain the identity sign. mathematicians took gödel's word for it, and proved results derived from this one, until the mid - 1960s, when stål aanderaa realized that gödel had been mistaken, and the argument gödel used would not work. in 1983, warren goldfarb showed that not only was gödel's argument invalid, but his claimed result was actually false, and the larger class was not decidable. gödel's original 1933 paper is zum entscheidungsproblem des logischen funktionenkalküls ( on the decision problem for the functional calculus of logic ) which can be found on pages 306 - 327 of volume i of his collected works. ( oxford university press, 1986. ) there is an introductory note by goldfarb on pages 226 - 231, of which pages 229 - 231 address gödel's error specifically.
|
https://api.stackexchange.com
|
the shortest answer: never, unless you are sure that your linear approximation of the data generating process (the linear regression model), for theoretical or other reasons, is forced to go through the origin. if not, the other regression parameters will be biased even if the intercept is statistically insignificant (strange, but it is so; consult brooks' introductory econometrics, for instance). finally, as i often explain to my students, by keeping the intercept term you ensure that the residual term is zero-mean. for your two-models case we need more context. it may happen that a linear model is not suitable here. for example, you need to log-transform first if the model is multiplicative. with exponentially growing processes it may occasionally happen that $r^2$ for the model without the intercept is "much" higher. screen the data, and test the model with the reset test or any other linear specification test; this may help to see if my guess is true. and, when building models, the highest $r^2$ is one of the last statistical properties i really care about, though it is nice to present to people who are not so familiar with econometrics (there are many dirty tricks to make the coefficient of determination close to 1 :)).
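the bias is easy to see in a simulation (numpy, made-up numbers: true intercept 5, true slope 2; fitting through the origin forces the slope to absorb the intercept):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)
y = 5.0 + 2.0 * x + rng.normal(0, 1, n)   # true intercept 5, true slope 2

# with intercept: regress y on [1, x]
X = np.column_stack([np.ones(n), x])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)

# without intercept: regress y on [x] only, forcing the fit through the origin
(b_forced,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
```

the unconstrained fit recovers the slope near 2, while the through-the-origin slope lands far above it (around 2.7 with these numbers), even though the data are perfectly linear.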
|
https://api.stackexchange.com
|
there are a few good answers to this question, depending on the audience. i've used all of these on occasion. a way to solve polynomials we came up with equations like $ x - 5 = 0 $, what is $ x $?, and the naturals solved them ( easily ). then we asked, " wait, what about $ x + 5 = 0 $? " so we invented negative numbers. then we asked " wait, what about $ 2x = 1 $? " so we invented rational numbers. then we asked " wait, what about $ x ^ 2 = 2 $? " so we invented irrational numbers. finally, we asked, " wait, what about $ x ^ 2 = - 1 $? " this is the only question that was left, so we decided to invent the " imaginary " numbers to solve it. all the other numbers, at some point, didn't exist and didn't seem " real ", but now they're fine. now that we have imaginary numbers, we can solve every polynomial, so it makes sense that that's the last place to stop. pairs of numbers this explanation goes the route of redefinition. tell the listener to forget everything he or she knows about imaginary numbers. you're defining a new number system, only now there are always pairs of numbers. why? for fun. then go through explaining how addition / multiplication work. try and find a good " realistic " use of pairs of numbers ( many exist ). then, show that in this system, $ ( 0, 1 ) * ( 0, 1 ) = ( - 1, 0 ) $, in other words, we've defined a new system, under which it makes sense to say that $ \ sqrt { - 1 } = i $, when $ i = ( 0, 1 ) $. and that's really all there is to imaginary numbers : a definition of a new number system, which makes sense to use in most places. and under that system, there is an answer to $ \ sqrt { - 1 } $. the historical explanation explain the history of the imaginary numbers. showing that mathematicians also fought against them for a long time helps people understand the mathematical process, i. e., that it's all definitions in the end. 
i'm a little rusty, but i think there were certain equations that kept having parts of them which used $ \ sqrt { - 1 } $, and the mathematicians kept throwing out
the equations, since there is no such thing. then, one mathematician decided to just "roll with it", kept working, and found out that all those square roots cancelled each other out. amazingly, the answer that was left was the correct answer (he was working on finding roots of polynomials, i think), which led him to think that there was a valid reason to use $ \ sqrt { - 1 } $, even if it took a long time to understand it.
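the "pairs of numbers" system above is easy to make concrete (a minimal sketch; `pair_mul` and `pair_add` are just illustrative names for the two defined operations):

```python
def pair_add(p, q):
    """addition rule for pairs: (a, b) + (c, d) = (a + c, b + d)."""
    return (p[0] + q[0], p[1] + q[1])

def pair_mul(p, q):
    """multiplication rule for pairs: (a, b) * (c, d) = (ac - bd, ad + bc)."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)                  # the pair we decide to call "i"
i_squared = pair_mul(i, i)  # comes out as (-1, 0), i.e. the ordinary number -1
```

nothing "imaginary" is ever invoked: the rules are plain arithmetic on pairs, and $(0, 1) * (0, 1) = (-1, 0)$ falls out of the definition.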
|
https://api.stackexchange.com
|
abbreviations auc = area under the curve. auroc = area under the receiver operating characteristic curve. auc is used most of the time to mean auroc, which is a bad practice since as marc claesen pointed out auc is ambiguous ( could be any curve ) while auroc is not. interpreting the auroc the auroc has several equivalent interpretations : the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative. the expected proportion of positives ranked before a uniformly drawn random negative. the expected true positive rate if the ranking is split just before a uniformly drawn random negative. the expected proportion of negatives ranked after a uniformly drawn random positive. the expected false positive rate if the ranking is split just after a uniformly drawn random positive. going further : how to derive the probabilistic interpretation of the auroc? computing the auroc assume we have a probabilistic, binary classifier such as logistic regression. before presenting the roc curve ( = receiver operating characteristic curve ), the concept of confusion matrix must be understood. when we make a binary prediction, there can be 4 types of outcomes : we predict 0 while the true class is actually 0 : this is called a true negative, i. e. we correctly predict that the class is negative ( 0 ). for example, an antivirus did not detect a harmless file as a virus. we predict 0 while the true class is actually 1 : this is called a false negative, i. e. we incorrectly predict that the class is negative ( 0 ). for example, an antivirus failed to detect a virus. we predict 1 while the true class is actually 0 : this is called a false positive, i. e. we incorrectly predict that the class is positive ( 1 ). for example, an antivirus considered a harmless file to be a virus. we predict 1 while the true class is actually 1 : this is called a true positive, i. e. we correctly predict that the class is positive ( 1 ). 
for example, an antivirus correctly detected a virus. to get the confusion matrix, we go over all the predictions made by the model, and count how many times each of those 4 types of outcomes occurs: in this example of a confusion matrix, among the 50 data points that are classified, 45 are correctly classified and 5 are misclassified. since to compare two different models it is often more convenient to have a single metric rather than several ones, we
compute two metrics from the confusion matrix, which we will later combine into one: true positive rate (tpr), aka sensitivity, hit rate, and recall, which is defined as $ \ frac { tp } { tp + fn } $. intuitively this metric corresponds to the proportion of positive data points that are correctly considered as positive, with respect to all positive data points. in other words, the higher the tpr, the fewer positive data points we will miss. false positive rate (fpr), aka fall-out, which is defined as $ \ frac { fp } { fp + tn } $. intuitively this metric corresponds to the proportion of negative data points that are mistakenly considered as positive, with respect to all negative data points. in other words, the higher the fpr, the more negative data points will be misclassified. to combine the fpr and the tpr into one single metric, we first compute the two former metrics with many different thresholds (for example $ 0.00, 0.01, 0.02, \ dots, 1.00 $) for the logistic regression, then plot them on a single graph, with the fpr values on the abscissa and the tpr values on the ordinate. the resulting curve is called the roc curve, and the metric we consider is the auc of this curve, which we call the auroc. the following figure shows the auroc graphically: in this figure, the blue area corresponds to the area under the curve of the receiver operating characteristic (auroc). the dashed diagonal line presents the roc curve of a random predictor: it has an auroc of 0.5. the random predictor is commonly used as a baseline to see whether the model is useful. if you want to get some first-hand experience: python: matlab:
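the first interpretation in the list (a random positive ranked before a random negative) translates directly into code; a small numpy sketch with synthetic scores (function name and data are made up):

```python
import numpy as np

def auroc_rank(scores, labels):
    """fraction of (positive, negative) pairs where the positive scores higher.

    this is the probabilistic interpretation of the auroc; ties count half.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 1000)
scores = labels + rng.normal(0, 1, 1000)   # informative scores: positives shifted up

auc = auroc_rank(scores, labels)                          # well above 0.5
auc_random = auroc_rank(rng.normal(0, 1, 1000), labels)   # uninformative baseline
```

the uninformative scores land near the 0.5 baseline of the random predictor, while the shifted scores give an auroc around 0.76, matching the closed-form value for unit-variance gaussians one unit apart.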
|
https://api.stackexchange.com
|
citing bellanger's classic digital processing of signals - theory and practice, the point is not where your cut-off frequency is, but how much attenuation you need, how much ripple you can tolerate in the signal you want to preserve and, most importantly, how narrow your transition from passband to stopband (transition width) needs to be. i assume you want a linear-phase filter (though you specify minimum latency, i don't think a minimum-phase filter is a good idea, in general, unless you know damn well what you're going to be doing with your signal afterwards). in that case, the filter order (which is the number of taps) is $ $ n \ approx \ frac 23 \ log _ { 10 } \ left [ \ frac1 { 10 \ delta _ 1 \ delta _ 2 } \ right ] \, \ frac { f _ s } { \ delta f } $ $ with $ $ \ begin { align } f _ s & \ text { the sampling rate } \ \ \ delta f & \ text { the transition width, } \ \ & \ text { i.e. the difference between end of passband and start of stopband } \ \ \ delta _ 1 & \ text { the ripple in the passband, } \ \ & \ text { i.e. " how much of the original amplitude can you afford to vary " } \ \ \ delta _ 2 & \ text { the suppression in the stopband }. \ end { align } $ $ let's plug in some numbers! you specified a cut-off frequency of $ \ frac { f _ s } { 100 } $, so i'll just go ahead and claim your transition width will not be more than half of that, so $ \ delta f = \ frac { f _ s } { 200 } $. coming from sdr / rf technology, 60 db of suppression is typically fully sufficient - hardware, without crazy costs, won't be better at keeping unwanted signals out of your input, so meh, let's not waste cpu on having a fantastic filter that's better than what your hardware can do. hence, $ \ delta _ 2 = - 60 \ text { db } = 10 ^ { - 3 } $. let's say you can live with an amplitude variation of 0.1 % in the passband (if you can live with more, also consider making the suppression requirement less strict). that's
|
https://api.stackexchange.com
|
$\delta_1 = 10^{-4}$. So, plugging this in: $$\begin{align} N_\text{Tommy's filter} &\approx \frac{2}{3} \log_{10}\left[\frac{1}{10\,\delta_1\delta_2}\right]\,\frac{f_s}{\Delta f} \\ &= \frac{2}{3} \log_{10}\left[\frac{1}{10\cdot 10^{-4}\cdot 10^{-3}}\right]\,\frac{f_s}{\frac{f_s}{200}} \\ &= \frac{2}{3} \log_{10}\left[\frac{1}{10\cdot 10^{-7}}\right]\,200 \\ &= \frac{2}{3} \log_{10}\left[\frac{1}{10^{-6}}\right]\,200 \\ &= \frac{2}{3} \left(\log_{10} 10^6\right)\,200 \\ &= \frac{2}{3}\cdot 6\cdot 200 \\ &= 800\text{.} \end{align}$$ So with your 200 taps, you're far off, if you use an extremely narrow passband in your filter like I assumed you would. Note that this doesn't have to be a problem. First of all, an 800-tap filter is scary, but frankly, only at first sight: as I tested in this answer over at StackOverflow, CPUs nowadays are fast if you use someone's CPU-optimized FIR implementation. For example, I used GNU Radio's FFT-FIR implementation with exactly the filter specification outlined above. I got a performance of 141 million samples per second; that might or might not be enough for you. So here's our question-specific test case (which took me seconds to produce). Decimation: if you are only going to keep a fraction of the input bandwidth, the output of your filter will be drastically oversampled. Introducing a decimation of $M$ means that your filter doesn't give you every output sample, but only every $M$th one, which normally would lead to lots and
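The hand calculation above is easy to turn into a small helper; this is a sketch of Bellanger's rule of thumb only (the function name is mine), not a substitute for a proper filter design tool.

```python
import math

def estimate_taps(fs, delta_f, delta1, delta2):
    """Bellanger's rule of thumb for the order of a linear-phase FIR filter.

    fs       -- sampling rate
    delta_f  -- transition width (end of passband to start of stopband)
    delta1   -- passband ripple (linear, not dB)
    delta2   -- stopband suppression (linear, not dB)
    """
    return (2.0 / 3.0) * math.log10(1.0 / (10.0 * delta1 * delta2)) * fs / delta_f

# The numbers from the worked example: Δf = fs/200, δ1 = 1e-4, δ2 = 1e-3.
n = estimate_taps(fs=1.0, delta_f=1.0 / 200, delta1=1e-4, delta2=1e-3)
print(round(n))  # 800 taps, matching the hand calculation above
```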
|
https://api.stackexchange.com
|
lots of aliasing, but since you're eradicating all signal that could alias, you can safely do so. Clever filter implementations (polyphase decimators) can reduce the computational effort by a factor of $M$ this way. In your case, you could easily decimate by $M = 50$, and then your computer would only have to calculate $\frac{1200}{50} = 24$ multiplications/accumulations per input sample, which is much, much easier. The filters in GNU Radio generally do have that capability. And this way, even out of the FFT FIR (which doesn't lend itself very well to a polyphase decimator implementation), I can squeeze another factor of 2 in performance. Can't do much more. That's pretty close to RAM bandwidth, in my experience, on my system. For latency: don't care about it. Really, don't, unless you need to. You're doing this with typical audio sampling rates? Remember, $96\,\frac{\text{kS}}{\text{s}} \overset{\text{ridiculously}}{\ll} 141\,\frac{\text{MS}}{\text{s}}$ mentioned above. So the time spent computing the filter output will only be relevant for MS/s live signal streaming. For DSP with offline data: well, add a delay to whatever signal you have in parallel to your filter to compensate. (If your filter is linear phase, its delay will be half the filter length.) This might be relevant in a hardware implementation of the FIR filter. Hardware implementation: so maybe your PC's or embedded device's CPU and OS really don't allow you to fulfill your latency constraints, and so you're looking into FPGA-implemented FIRs. The first thing you'll notice is that for hardware, there's a different design paradigm: an "I suppress everything but $\frac{1}{100}$ of my input rate" filter needs a large bit width for the fixed-point numbers you'd handle in hardware (as opposed to the floating-point numbers on a CPU).
So that's the first reason why you'd typically split that filter into multiple, cascaded, smaller, decimating FIR filters. Another reason is that you can, with every cascade "step", let your multi
|
https://api.stackexchange.com
|
pliers (typically, "DSP slices") run at a lower rate, and hence multiplex them (the number of DSP slices is usually very limited), using one multiplier for multiple taps. Yet another reason is that half-band filters in particular, i.e. lowpasses that suppress half the input band and deliver half the input rate, are very efficiently implementable in hardware (as they have half their taps being zero, something that is hard to exploit in a CPU/SIMD implementation).
|
https://api.stackexchange.com
|
Great question! When I was teaching, Anslyn and Dougherty was a decent text for this. Here are some general comments. First, please note that you cannot be sure about a mechanism. That's the real killer. You can devise experiments that are consistent with the mechanism, but because you cannot devise and run all possible experiments, you can never be sure that your mechanism is correct. It only takes one good experiment to refute a mechanism: if it's inconsistent with your proposed mechanism, and you're unable to reconcile the differences, then your mechanism is wrong (or incomplete at best). Writing mechanisms for new reactions is hard. Good thing we have a whole slew of existing reactions that people have already established (highly probable, but not 100% guaranteed) mechanisms for. Computational chemistry is pretty awesome now and provides some really good insights into how a specific reaction takes place. It doesn't always capture all relevant factors, so you need to be careful; like any tool, it can be used incorrectly. The types of experiments you run depend heavily on the kind of reaction you're studying. Here are some typical ones. Labeling: very good for complex rearrangements. Kinetics (including kinetic isotope effects): good for figuring out rate-determining steps. Stereochemistry: good for figuring out if steps are concerted (see this example mechanism I wrote for a different question). Capturing intermediates: this can be pretty useful, but some species that you capture aren't involved in the reaction, so be careful. Substituent effects and LFER studies: great for determining whether charge build-up is accounted for in your mechanism. For named reactions, the Kurti-Czako book generally has seminal references if you want to actually dig through the literature for experiments. For your specific reaction, what do we think the rate-determining step is? Probably addition into the acylium?
You could try to capture the acylium intermediate. You could run the reaction with reactants that have two labelled oxygens and reactants that have no labelled oxygens. Do they mix? If not, it's fully intramolecular. Otherwise, there's an intermolecular component and the mechanism as written is incomplete. A quick Google search suggests that the boron trichloride mediated version has been studied via proton, deuterium, and boron NMR. I didn't follow up on this, but there's clearly some depth here. When I was
|
https://api.stackexchange.com
|
TA-ing for Greg Fu, he really liked to use an example with the von Richter reaction. I might be able to find those references...
|
https://api.stackexchange.com
|
The main thing is presumably that $AA^T$ is symmetric. Indeed $(AA^T)^T = (A^T)^T A^T = AA^T$. For symmetric matrices one has the spectral theorem, which says that we have a basis of eigenvectors and every eigenvalue is real. Moreover, if $A$ is invertible, then $AA^T$ is also positive definite, since $$x^T AA^T x = (A^T x)^T (A^T x) > 0.$$ Then we have: a matrix is positive definite if and only if it is the Gram matrix of a linearly independent set of vectors. Last but not least, if one is interested in how much the linear map represented by $A$ changes the norm of a vector, one can compute $$\sqrt{\left<Ax, Ax\right>} = \sqrt{\left<A^T A x, x\right>},$$ which simplifies for eigenvectors $x$ with eigenvalue $\lambda$ to $$\sqrt{\left<Ax, Ax\right>} = \sqrt{\lambda}\,\sqrt{\left<x, x\right>}.$$ The determinant is just the product of these eigenvalues.
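The three claims above (symmetry, positive definiteness for invertible $A$, and the determinant as a product of eigenvalues) are easy to verify numerically; a small sketch assuming NumPy, with one arbitrary invertible matrix:

```python
import numpy as np

# Any invertible A will do; this one has det(A) = 6.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = A @ A.T

# Symmetry: (A A^T)^T = A A^T
assert np.allclose(S, S.T)

# Positive definiteness: all eigenvalues of A A^T are real and > 0
eigvals = np.linalg.eigvalsh(S)
assert eigvals.min() > 0

# det(A A^T) is the product of these eigenvalues, and equals det(A)^2 = 36
assert np.isclose(np.prod(eigvals), np.linalg.det(A) ** 2)
print(np.prod(eigvals))
```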
|
https://api.stackexchange.com
|
Short answer: yes, but you need to get permission (and modified software) from ONT before doing that... but that doesn't tell the whole story. This question has the potential to be very confusing, and that's through no fault of the questioner. The issue is that for the MinION, sequencing (or more specifically, generating the raw data in the form of an electrical signal trace) is distinct and separable from base calling. Many other sequencers also have distinct raw-data and base-calling phases, but they're not democratised to the degree they are on the MinION. The "sequencing" part of MinION sequencing is carried out by ONT software, namely MinKNOW. As explained to me during PorecampAU 2017, when the MinION is initially plugged into a computer it is missing the firmware necessary to carry out the sequencing. The most recent version of this firmware is usually downloaded at the start of a sequencing run by sending a request to ONT servers. In the usual case, you can't do sequencing without being able to access those servers, and you can't do sequencing without ONT knowing about it. However, ONT acknowledge that there are people out there who won't have internet access when sequencing (e.g. sequencing Ebola in Africa, or metagenomic sequencing in the middle of the ocean), and an email to <support@nanoporetech.com> with reasons is likely to result in a quick software fix to the local sequencing problem. Once the raw signals are acquired, the "base-calling" part of MinION sequencing can be done anywhere. The ONT-maintained basecaller is Albacore, and this will get the first model updates whenever the sequencing technology is changed (which happens a lot).
Albacore is a local basecaller which can be obtained from ONT by browsing through their community pages (available to anyone who has a MinION); ONT switched to only allowing people to do basecalling locally in about April 2017, after establishing that using AWS servers was just too expensive. Albacore is open source and free-as-in-beer, but has a restrictive licensing agreement which limits the distribution (and modification) of the program. However, Albacore is not the only available basecaller. ONT provide a FOSS basecaller called Nanonet. It's a little bit behind Albacore on technology, but ONT have said
|
https://api.stackexchange.com
|
that all useful Albacore changes will eventually propagate through to Nanonet. There is another non-ONT basecaller that I'm aware of which uses a neural network for basecalling: DeepNano. Other basecallers exist, each varying distances away technology-wise, and I expect that more will appear in the future as the technology stabilises and more change-resistant computer scientists get in on the act. Edit: ONT has just pulled back the curtain on their basecalling software; all the repositories that I've looked at so far (except for the CliveOME) have been released under the Mozilla Public License (free and open source, with some conditions and limitations). Included in that software repository is Scrappie, which is their testing/bleeding-edge version of Albacore.
|
https://api.stackexchange.com
|
In the early 90s we were looking for a method to solve the TDSE fast enough to do animations in real time on a PC, and came across a surprisingly simple, stable, explicit method described by P. B. Visscher in Computers in Physics: "A fast explicit algorithm for the time-dependent Schrödinger equation". Visscher notes that if you split the wavefunction into real and imaginary parts, $\psi = R + iI$, the SE becomes the system: \begin{eqnarray} \frac{dR}{dt} &=& HI \\ \frac{dI}{dt} &=& -HR \\ H &=& -\frac{1}{2m}\nabla^2 + V \end{eqnarray} If you then compute $R$ and $I$ at staggered times ($R$ at $0, \Delta t, 2\Delta t, \ldots$ and $I$ at $0.5\Delta t, 1.5\Delta t, \ldots$), you get the discretization: $$R(t + \tfrac{1}{2}\Delta t) = R(t - \tfrac{1}{2}\Delta t) + \Delta t\, H I(t)$$ $$I(t + \tfrac{1}{2}\Delta t) = I(t - \tfrac{1}{2}\Delta t) - \Delta t\, H R(t)$$ with $$\nabla^2 \psi(r, t) = \frac{\psi(r + \Delta r, t) - 2\psi(r, t) + \psi(r - \Delta r, t)}{\Delta r^2}$$ (standard three-point Laplacian). This is explicit, very fast to compute, and second-order accurate in $\Delta t$. Defining the probability density as $$P(x, t) = R^2(x, t) + I(x, t + \tfrac{1}{2}\Delta t)\, I(x, t - \tfrac{1}{2}\Delta t)$$ at integer time steps and $$P(x, t) = R(x, t + \frac{
|
https://api.stackexchange.com
|
1}{2}\Delta t)\, R(x, t - \frac{1}{2}\Delta t) + I^2(x, t)$$ at half-integer time steps makes the algorithm unitary, thus conserving probability. With enough code optimization, we were able to get very nice animations computed in real time on 80486 machines. Students could "draw" any potential, choose a total energy, and watch the time evolution of a Gaussian packet.
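The staggered update above fits in a few lines of NumPy. This is a minimal sketch, not Visscher's original code: the grid, time step, and Gaussian packet parameters are illustrative choices of mine (with $\hbar = m = 1$ and a free particle, $V = 0$), and it checks that the simple density $R^2 + I^2$ stays close to 1.

```python
import numpy as np

# Illustrative grid: dt is chosen well inside the explicit scheme's stability bound.
nx, dx, dt = 400, 0.1, 0.004
x = (np.arange(nx) - nx // 2) * dx
V = np.zeros(nx)                        # free particle; students would "draw" V here

# Gaussian packet with momentum k = 2, normalized so sum |psi|^2 dx = 1.
psi = np.exp(-x**2 + 2j * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
R, I = psi.real.copy(), psi.imag.copy()  # pretend I is sampled half a step later

def H(u):
    # H u = -(1/2) u'' + V u with the three-point Laplacian; edges clamped to 0.
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return -0.5 * lap + V * u

for _ in range(500):                     # leapfrog on the staggered time grids
    R += dt * H(I)                       # R(t + dt/2) = R(t - dt/2) + dt * H I(t)
    I -= dt * H(R)                       # I(t + dt/2) = I(t - dt/2) - dt * H R(t)

prob = np.sum(R**2 + I**2) * dx          # crude density; close to 1 for small dt
print(prob)
```

The exactly conserved quantity uses the staggered products given above; the plain $R^2 + I^2$ used here differs from it by $O(\Delta t)$, which is why the check below allows a small tolerance.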
|
https://api.stackexchange.com
|
Get someone to relax their neck as much as possible, stabilize their torso, then punch them in the head with a calibrated fist and measure the initial acceleration. Apply $\vec{F} = m\vec{a}$.
|
https://api.stackexchange.com
|
I haven't quite got this straight yet, but I think one way to go is to think about choosing points at random from the positive reals. This answer is going to be rather longer than it really needs to be, because I'm thinking about this in a few (closely related) ways, which probably aren't all necessary, and you can decide to reject the uninteresting parts and keep anything of value. Very roughly, the idea is that if you "randomly" choose points from the positive reals and arrange them in increasing order, then the probability that the $(n+1)^\text{th}$ point is in a small interval $(t, t+dt)$ is a product of probabilities of independent events: $n$ factors of $t$ for choosing $n$ points in the interval $[0, t]$, one factor of $e^{-t}$ as all the other points are in $[t, \infty)$, one factor of $dt$ for choosing the point in $(t, t+dt)$, and a denominator of $n!$ coming from the reordering. At least, as an exercise in making a simple problem much harder, here it goes... I'll start with a bit of theory before trying to describe intuitively why the probability density $\dfrac{t^n}{n!} e^{-t}$ pops out. We can look at the homogeneous Poisson process (with rate parameter $1$). One way to think of this is to take a sequence of independent exponentially distributed random variables with rate parameter $1$, $S_1, S_2, \ldots$, and set $T_n = S_1 + \cdots + S_n$. As has been commented on already, $T_{n+1}$ has the probability density function $\dfrac{t^n}{n!} e^{-t}$. I'm going to avoid proving this immediately, though, as it would just reduce to manipulating some integrals. Then the Poisson process $X(t)$ counts the number of times $T_i$ lying in the interval $[0, t]$. We can also look at Poisson point processes (a.k.a. Poisson random measures, but that Wikipedia page is very
|
https://api.stackexchange.com
|
poor). This just makes rigorous the idea of randomly choosing unordered sets of points from a sigma-finite measure space $(E, \mathcal{E}, \mu)$. Technically, it can be defined as a set of nonnegative integer-valued random variables $\{N(A) \colon A \in \mathcal{E}\}$ counting the number of points chosen from each subset $A$, such that $N(A)$ has the Poisson distribution of rate $\mu(A)$ and $N(A_1), N(A_2), \ldots$ are independent for pairwise disjoint sets $A_1, A_2, \ldots$. By definition, this satisfies $$\begin{array}{} \mathbb{P}(N(A) = n) = \dfrac{\mu(A)^n}{n!} e^{-\mu(A)}. && (1) \end{array}$$ The points $T_1, T_2, \ldots$ above defining the homogeneous Poisson process also define a Poisson random measure with respect to the Lebesgue measure $(\mathbb{R}_+, \mathcal{B}, \lambda)$, once you forget about the order in which they were defined and just regard them as a random set. That, I think, is the source of the $n!$. If you think about the probability of $T_{n+1}$ being in a small interval $(t, t+\delta t)$, then this is just the same as having $N([0,t]) = n$ and $N((t, t+\delta t)) = 1$, which has probability $\dfrac{t^n}{n!} e^{-t}\,\delta t$. So, how can we choose points at random so that each small set $\delta A$ has probability $\mu(\delta A)$ of containing a point, and why does $(1)$ pop out? I'm imagining a hopeless darts player randomly throwing darts about and, purely by luck, hitting the board with some of them. Consider throwing a very large number $N \gg 1$ of darts, independently,
|
https://api.stackexchange.com
|
so that each one only has probability $\mu(A)/N$ of hitting the set, and is distributed according to the probability distribution $\mu/\mu(A)$. This is consistent, at least, if you think about the probability of hitting a subset $B \subseteq A$. The probability of missing with all of them is $(1 - \mu(A)/N)^N = e^{-\mu(A)}$. This is a multiplicative function due to independence of the number hitting disjoint sets. To get the probability of one dart hitting the set, multiply by $\mu(A)$ (one factor of $\mu(A)/N$ for each individual dart, multiplied by $N$ because there are $N$ of them). For $n$ darts, we multiply by $\mu(A)$ $n$ times, for picking $n$ darts to hit, then divide by $n!$ because we have over-counted the subsets of size $n$ by this factor (due to counting all $n!$ ways of ordering them). This gives $(1)$. I think this argument can probably be cleaned up a bit. Getting back to choosing points randomly on the positive reals, this gives a probability of $\dfrac{t^n}{n!} e^{-t}\,dt$ of picking $n$ in the interval $[0, t]$ and one in $(t, t+dt)$. If we sort them in order as $T_1 \lt T_2 \lt \cdots$ then $\mathbb{P}(T_1 \gt t) = e^{-t}$, so it is exponentially distributed. Conditional on this, $T_2, T_3, \ldots$ are chosen randomly from $[T_1, \infty)$, so we see that the differences $T_{i+1} - T_i$ are independent and identically distributed. Why is $\dfrac{t^n}{n!} e^{-t}$ maximized at $t = n$? I'm not sure why the mode should be a simple property of a distribution. It doesn't even exist except for unimodal distributions. As
|
https://api.stackexchange.com
|
$T_{n+1}$ is the sum of $n+1$ iid random variables of mean one, the law of large numbers suggests that it should be peaked approximately around $n$. The central limit theorem goes further, and gives $\dfrac{t^n}{n!} e^{-t} \approx \dfrac{1}{\sqrt{2\pi n}} e^{-(t-n)^2/2n}$. Stirling's formula is just this evaluated at $t = n$. What's this to do with Tate's thesis? I don't know, and I haven't read it (but intend to), but I have a vague idea of what it's about. If there is anything to do with it, maybe it is something to do with the fact that we are relating the sums of independent random variables $S_1 + \cdots + S_n$ distributed with respect to the Haar measure on the multiplicative group $\mathbb{R}_+$ (edit: oops, that's not true, the multiplicative Haar measure has cumulative distribution given by $\log$, not $\exp$) with randomly chosen sets according to the Haar measure on the additive group $\mathbb{R}$.
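Both facts used above are easy to check numerically: that $T_{n+1}$, a sum of $n+1$ iid Exp(1) variables, concentrates around $n+1$, and that the density $\frac{t^n}{n!}e^{-t}$ at its mode $t = n$ is close to the Stirling value $\frac{1}{\sqrt{2\pi n}}$. A quick standard-library sketch (sample size and seed are arbitrary choices of mine):

```python
import math
import random

random.seed(1)
n = 5

# Simulate T_{n+1} = S_1 + ... + S_{n+1} with S_i ~ Exp(1); mean should be n + 1.
samples = [sum(random.expovariate(1.0) for _ in range(n + 1)) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)

# Density t^n e^{-t} / n! at its mode t = n, versus the Stirling/CLT value.
density_at_mode = n**n * math.exp(-n) / math.factorial(n)
stirling = 1.0 / math.sqrt(2.0 * math.pi * n)
print(density_at_mode, stirling)
```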
|
https://api.stackexchange.com
|
I think the best method or combination of methods will depend on aspects of the data that might vary from one dataset to another, e.g. the type, size, and frequency of structural variants, the number of SNVs, the quality of the reference, contaminants, or other issues (e.g. read quality, sequencing errors), etc. For that reason, I'd take two approaches: try a lot of methods and look at their overlap, and validate a subset of calls from different methods by wet-lab experiments; in the end, this is the only real way of knowing the accuracy for a particular case.
|
https://api.stackexchange.com
|
First of all, it depends on how the tap water was treated before it was piped to your house. In most cases, the water was chlorinated to remove microorganisms. By the time the water arrives at your house, there is very little (if any) chlorine left in the water. When you fill your container, there are likely to be some microorganisms present (either in the container or in the water). In a nutrient-rich environment, you can see colonies within 3 days. For tap water, it will probably take 2 to 3 weeks. But that doesn't mean that the small amount of growth doesn't produce bad-tasting compounds (acetic acid, urea, etc.). By the way, Nicolau Saker Neto: cold water dissolves more gas than hot water. Watch when you heat water on your stove. Before it boils, you will see gas bubbles that form on the bottom and go to the surface (dissolved gases) and bubbles that disappear while rising to the surface (water vapor).
|
https://api.stackexchange.com
|
The frequency resolution is dependent on the relationship between the FFT length and the sampling rate of the input signal. If we collect 8192 samples for the FFT then we will have: $$\frac{8192\ \text{samples}}{2} = 4096\ \text{FFT bins}$$ If our sampling rate is 10 kHz, then the Nyquist-Shannon sampling theorem says that our signal can contain frequency content up to 5 kHz. Then, our frequency bin resolution is: $$\frac{5\ \text{kHz}}{4096\ \text{FFT bins}} \simeq \frac{1.22\ \text{Hz}}{\text{bin}}$$ This may be the easier way to explain it conceptually, but simplified: your bin resolution is just $\frac{f_\text{samp}}{N}$, where $f_\text{samp}$ is the input signal's sampling rate and $N$ is the number of FFT points used (sample length). We can see from the above that to get smaller FFT bins we can either run a longer FFT (that is, take more samples at the same rate before running the FFT) or decrease our sampling rate. The catch: there is always a trade-off between temporal resolution and frequency resolution. In the example above, we need to collect 8192 samples before we can run the FFT, which when sampling at 10 kHz takes 0.82 seconds. If we tried to get smaller FFT bins by running a longer FFT it would take even longer to collect the needed samples. That may be OK, it may not be. The important point is that at a fixed sampling rate, increasing frequency resolution decreases temporal resolution. That is, the more accurate your measurement in the frequency domain, the less accurate you can be in the time domain. You effectively lose all time information inside the FFT length. In this example, if a 1999 Hz tone starts and stops in the first half of the 8192-sample FFT and a 2002 Hz tone plays in the second half of the window, we would see both, but they would appear to have occurred at the same time. You also have to consider processing time.
An 8192-point FFT takes some decent processing power. A way to reduce this need
|
https://api.stackexchange.com
|
is to reduce the sampling rate, which is the second way to increase frequency resolution. In your example, if you drop your sampling rate to something like 4096 Hz, then you only need a 4096-point FFT to achieve 1 Hz bins and can still resolve a 2 kHz signal. This reduces the FFT bin size, but also reduces the bandwidth of the signal. Ultimately, with an FFT there will always be a trade-off between frequency resolution and time resolution. You have to perform a bit of a balancing act to reach all goals.
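The two formulas above reduce to one-liners; a small sketch of the arithmetic (helper names are mine):

```python
def bin_resolution(f_samp, n_fft):
    # Frequency spacing between adjacent FFT bins, in Hz: f_samp / N.
    return f_samp / n_fft

def capture_time(n_fft, f_samp):
    # Time needed to collect n_fft samples: the temporal cost of finer bins.
    return n_fft / f_samp

print(bin_resolution(10_000, 8192))   # ~1.22 Hz per bin, as in the example
print(capture_time(8192, 10_000))     # 0.8192 s of signal per FFT
print(bin_resolution(4096, 4096))     # 1 Hz bins after dropping the rate to 4096 Hz
```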
|
https://api.stackexchange.com
|
The obvious answer is that different people wrote them. It's fairly common in bioinformatics for people with a computer science background to get frustrated with existing tools and create their own alternative tool (rather than improving an existing tool). Over time, tools with similar initial aims will have popular functionality implemented in them (and eventually have bugs fixed), such that it matters less which particular tool is used for common methods. Here's my impression of the tools. samtools: originally written by Heng Li (who also wrote BWA). The people who now work on samtools also maintain the alignment file format specification for SAM, BAM, and CRAM, so any new file format features are likely to be implemented in samtools first. bamtools: this looks like it was written by Derek Barnett, Erik Garrison, Gabor Marth, and Michael Stromberg to mirror the samtools toolkit, but using C++ instead of C. Picard: Java tools written by the Broad Institute for manipulating BAM/SAM files. Being written in Java makes it easier to port to other operating systems, so it may work better on Windows systems. I'm more familiar with Picard being used at a filtering level (e.g. removing PCR duplicates) and for statistical analysis, but it links in with the Java HTS library from samtools, so probably shares a lot of the functionality. Sambamba: a GPL2-licensed toolkit written in the D programming language (presumably by Artem Tarasov and Pjotr Prins). I haven't used it (and don't know people who have used it), but the GitHub page suggests "for almost 5 years the main advantage over samtools was parallelized BAM reading. Finally in March 2017 samtools 1.4 was released, reaching parity on this." Biobambam: written by German Tischler in C++. I also have no experience with this toolkit. This seems to have some multithreading capability, but is otherwise similar to other toolkits.
|
https://api.stackexchange.com
|
The choice of $k = 10$ is somewhat arbitrary. Here's how I decide $k$. First of all, in order to lower the variance of the CV result, you can and should repeat/iterate the CV with new random splits. This makes the argument of high $k$ => more computation time largely irrelevant, as you anyway want to calculate many models. I tend to think mainly of the total number of models calculated (in analogy to bootstrapping). So I may decide for 100 x 10-fold CV or 200 x 5-fold CV. @ogrisel already explained that usually large $k$ means less (pessimistic) bias. (Some exceptions are known, particularly for $k = n$, i.e. leave-one-out.) If possible, I use a $k$ that is a divisor of the sample size, or of the size of the groups in the sample that should be stratified. Too large a $k$ means that only a low number of sample combinations is possible, thus limiting the number of iterations that are different. For leave-one-out: $\binom{n}{1} = n = k$ different model/test sample combinations are possible; iterations don't make sense at all. E.g. $n = 20$ and $k = 10$: $\binom{n=20}{2} = 190 = 19 \cdot k$ different model/test sample combinations exist. You may consider going through all possible combinations here, as 19 iterations of $k$-fold CV, or a total of 190 models, is not very much. These thoughts have more weight with small sample sizes. With more samples available, $k$ doesn't matter very much. The possible number of combinations soon becomes large enough so the (say) 100 iterations of 10-fold CV do not run a great risk of being duplicates. Also, more training samples usually means that you are at a flatter part of the learning curve, so the difference between the surrogate models and the "real" model trained on all $n$ samples becomes negligible.
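As a quick sanity check on the combinatorics above, a standard-library sketch (the helper name is mine) counting the distinct test sets that $k$-fold CV can draw from $n$ samples:

```python
from math import comb

def n_test_splits(n, k):
    # Number of distinct test sets of size n/k that k-fold CV can draw from n samples.
    return comb(n, n // k)

print(n_test_splits(20, 10))  # 190 = 19 * k, as in the example above
print(n_test_splits(20, 20))  # leave-one-out: only 20, so iterating is pointless
```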
|
https://api.stackexchange.com
|
The following are both plausible messages, but have a completely different meaning:

SOS HELP = ... --- ... .... . .-.. .--. => ...---.........-...--.

I AM HIS DATE = .. .- -- .... .. ... -.. .- - . => ...---.........-...--.
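The ambiguity is easy to demonstrate: encode both phrases in Morse, drop the pauses between letters and words, and compare the raw dot/dash streams.

```python
# Only the letters needed for the two example phrases.
MORSE = {
    'A': '.-', 'D': '-..', 'E': '.', 'H': '....', 'I': '..',
    'L': '.-..', 'M': '--', 'O': '---', 'P': '.--.', 'S': '...', 'T': '-',
}

def encode(text):
    # Concatenate with no letter or word boundaries, as in a pause-free transmission.
    return ''.join(MORSE[c] for c in text.replace(' ', ''))

a = encode('SOS HELP')
b = encode('I AM HIS DATE')
print(a)          # ...---.........-...--.
print(a == b)     # True: the streams are identical without pauses
```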
|
https://api.stackexchange.com
|
The hydrochloric acid in the stomach is already quite dilute; its pH is in fact no less than 1.5, so that at the extreme maximum there is only 0.03 molar hydrochloric acid. And even that small amount is, of course, stabilized by being dissociated into solvated ions. There is just not enough stuff to react violently.
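The 0.03 M figure follows directly from the pH definition, $[\mathrm{H^+}] = 10^{-\mathrm{pH}}$; a one-line check:

```python
# Concentration of H+ at the stomach's most acidic stated pH of 1.5.
ph = 1.5
conc = 10 ** (-ph)
print(round(conc, 3))  # 0.032 mol/L, i.e. roughly the 0.03 M quoted above
```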
|
https://api.stackexchange.com
|
"Computational scientist" is somewhat broad, because it includes people doing numerical analysis with paper/LaTeX and proof-of-concept implementations, people writing general-purpose libraries, people developing applications that solve certain classes of problems, and end users who utilize those applications. The skills needed for these groups are different, but there is a great advantage to having some familiarity with the "full stack". I'll describe what I think are the critical parts of this stack; people who work at a given level should of course have deeper knowledge. Domain knowledge (e.g. physics and engineering background): everyone should know the basics of the class of problems they are solving. If you work on PDEs, this would mean some general familiarity with a few classes of PDE (e.g. Poisson, elasticity, and incompressible and compressible Navier-Stokes), especially what properties are important to capture "exactly" and what can be left to discretization error (this informs method selection regarding local conservation and symplectic integrators). You should know about some functionals and analysis types of interest to applications (optimization of lift and drag, prediction of failure, parameter inversion, etc.). Mathematics: everyone should have some general familiarity with the classes of methods relevant to their problem domain. This includes basic characteristics of sparse versus dense linear algebra, availability of "fast methods", properties of spatial and temporal discretization techniques, and how to evaluate which properties of a physical problem are needed for a discretization technique to be suitable. If you are mostly an end user, this knowledge can be very high level. Software engineering and libraries: some familiarity with abstraction techniques and library design is useful for almost everyone in computational science.
If you work on proof-of-concept methods, this will improve the organization of your code (making it easier for someone else to "translate" it into a robust implementation). If you work on scientific applications, this will make your software more extensible and make it easier to interface with libraries. Be defensive when developing code, such that errors are detected as early as possible and the error messages are as informative as possible. Tools: working with software is an important part of computational science. Proficiency with your chosen language, editor support (e.g. tags, static analysis), and debugging tools (debugger, Valgrind) greatly improves your development efficiency. If you work in batch environments, you should know how to submit jobs and get interactive sessions. If you work
|
https://api.stackexchange.com
|
with compiled code, a working knowledge of compilers, linkers, and build tools like make will save a lot of time. Version control is essential for everyone, even if you work alone. Learn Git or Mercurial and use it for every project. If you develop libraries, you should know the language standards reasonably completely so that you almost always write portable code the first time; otherwise you will be buried in user support requests when your code doesn't build in their funky environment. LaTeX: LaTeX is the de facto standard for scientific publication and collaboration. Proficiency with LaTeX is important to be able to communicate your results, collaborate on proposals, etc. Scripting the creation of figures is also important for reproducibility and data provenance.
it is very hard to define a human mind with a such mathematical rigor as it is possible to define a turing machine. we still do not have a working model of a mouse brain however we have the hardware capable of simulating it. a mouse has around 4 million neurons in the cerebral cortex. a human being has 80 - 120 billion neurons ( 19 - 23 billion neocortical ). thus, you can imagine how much more research will need to be conducted in order to get a working model of a human mind. you could argue that we only need to do top - down approach and do not need to understand individual workings of every neuron. in that case you might study some non - monotonic logic, abductive reasoning, decision theory, etc. when the new theories come, more exceptions and paradoxes occur. and it seems we are nowhere close to a working model of a human mind. after taking propositional and then predicate calculus i asked my logic professor : " is there any logic that can define the whole set of human language? " he said : " how would you define the following? to see a world in a grain of sand and a heaven in a wild flower, hold infinity in the palm of your hand and eternity in an hour. if you can do it, you will become famous. " there have been debates that a human mind might be equivalent to a turing machine. however, a more interesting result would be for a human mind not to be turing - equivalent, that it would give a rise to a definition of an algorithm that is not possibly computable by a turing machine. then the church's thesis would not hold and there could possibly be a general algorithm that could solve a halting problem. until we understand more, you might find some insights in a branch of philosophy. however, no answer to your question is generally accepted.
as far as i know, lapack is the only publicly available implementation of a number of algorithms ( nonsymmetric dense eigensolver, pseudo - quadratic time symmetric eigensolver, fast jacobi svd ). most libraries that don't rely on blas + lapack tend to support very primitive operations like matrix multiplication, lu factorization, and qr decomposition. lapack contains some of the most sophisticated algorithms for dense matrix computations that i don't believe are implemented anywhere else. so to answer your questions ( at least partially ), by opting out of blas / lapack, you are typically not missing functionality ( unless the optional interface was designed so that there is no substitute implementation, which is rare ). if you wanted to do very sophisticated operations, those other libraries probably don't implement it themselves anyways. since blas can be highly tuned to your architecture, you could be missing out on huge speedups ( an order of magnitude speed difference is not unheard of ). you mention umfpack, which is for sparse matrix factorization. blas / lapack is only concerned about dense matrices. umfpack at some level needs to work on medium size dense problems, which it can do using custom implementations or by calling blas / lapack. here the difference is only in speed. if speed is of great concern, try to use a library that supports optional blas / lapack bindings, and use them in the end when you want things faster.
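to make the division of labor concrete, here is a pure - python sketch ( mine, not lapack code ) of gaussian elimination with partial pivoting - roughly what lapack's dgesv driver does conceptually. the real routine is blocked and calls tuned blas kernels, which is exactly where the order - of - magnitude speedups mentioned above come from.

```python
# Toy dense solve with partial pivoting: a conceptual sketch of what
# LAPACK's dgesv does. Real LAPACK is blocked and leans on tuned BLAS.

def lu_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies, leave caller's data intact
    b = b[:]
    for k in range(n):
        # Partial pivoting: move the largest |entry| in column k up.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):        # eliminate below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                        # back substitution
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]
x = lu_solve(A, b)   # exact solution is [1.0, 1.0, 1.0]
```

the triple loop is the giveaway : a tuned blas replaces it with cache - blocked, vectorized kernels, and lapack builds the factorization on top of those.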
firstly, it's not true that you can't tell racial background from dna. you most certainly can ; it's quite possible to give fairly accurate phenotypic reconstruction of the features we choose as racial markers from dna samples alone and also possible to identify real geographic ancestral populations from suitable markers. the reason that human races aren't useful is that they're actually only looking at a couple of phenotypic markers and ( a ) these phenotypes don't map well to underlying genetics and ( b ) don't usefully model the underlying populations. the big thing that racial typing is based on is skin colour, but skin colour is controlled by only a small number of alleles. on the basis of skin colour you'd think the big division in human diversity is ( and i simplify ) between white europeans and black africans. however, there is vastly more genetic diversity within africa than there is anywhere else. two randomly chosen africans will be, on average, more diverse from each other than two randomly chosen europeans. what's more europeans are no more genetically distinct overall from a randomly chosen african than two randomly chosen africans are from each other. this makes perfectly decent sense if you consider the deep roots of diversity within africa ( where humans originally evolved ) to the more recent separation of europeans from an african sub - population. it's also worth noting that the phenotypic markers of race don't actually tell you much about underlying heredity ; for example there's a famous photo of twin daughters one of whom is completely fair skinned, the other of whom is completely dark skinned ; yet these two are sisters. this is, of course, an extreme example but it should tell you something about the usefulness of skin colour as a real genetic marker.
short answer : in my opinion, my approach would be to pull out the cds exons and run bedtools on those. a few more details : when you pull out the exons, make sure that you assign them all ids if they don't already have them assigned, and record which ids " belong " to which genes. now when you get exons that overlap, you know that they are coding and you can tie them back to the genes they originate from.
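the bookkeeping described above can be sketched in pure python with toy coordinates ( the exon ids, gene names, and intervals here are invented for illustration ; with real data you would pull the exons from a gff / gtf and let bedtools intersect do the overlap step ) :

```python
# Give each CDS exon an id, remember which gene it belongs to, then report
# overlapping exon pairs together with their parent genes.

exons = [
    # (chrom, start, end, gene) -- toy half-open intervals
    ("chr1", 100, 200, "geneA"),
    ("chr1", 150, 250, "geneB"),
    ("chr1", 300, 400, "geneA"),
]

exon_id_to_gene = {}
records = []
for i, (chrom, start, end, gene) in enumerate(exons):
    eid = f"exon_{i}"
    exon_id_to_gene[eid] = gene          # id -> gene lookup for later
    records.append((chrom, start, end, eid))

def overlaps(a, b):
    # Same chromosome and intervals intersect.
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

pairs = [
    (r1[3], r2[3])
    for i, r1 in enumerate(records)
    for r2 in records[i + 1:]
    if overlaps(r1, r2)
]
# Tie each overlap back to the genes it came from.
gene_pairs = [(exon_id_to_gene[a], exon_id_to_gene[b]) for a, b in pairs]
```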
corrosion resistant products, ltd., with the help of dupont, has established this source of information on what can and cannot eat teflon. here's a list :

- sodium and potassium metal - these reduce and defluorinate ptfe, which finds use in etching ptfe
- finely divided metal powders, like aluminum and magnesium, cause ptfe to combust at high temperatures

these reactions probably reduce ptfe in a manner that starts : $$\ce{(CF2CF2)_n + 2n Na -> (CF=CF)_n + 2n NaF}$$ the world's most powerful oxidizers like $\ce{F2}$, $\ce{OF2}$, and $\ce{ClF3}$ can oxidize ptfe at elevated temperatures, probably by : $$\ce{(CF2CF2)_n + 2n F2 -> 2n CF4}$$ similar things can occur under extreme conditions ( temperature and pressure ) with :

- boranes
- nitric acid
- 80 % naoh or koh
- aluminum chloride
- ammonia, some amines, and some imines
two examples of libraries that use modern C++ constructs : both the eigen and armadillo libraries ( linear algebra ) use several modern C++ constructs. for instance, they use expression templates to simplify arithmetic expressions and can sometimes eliminate some temporaries : ( presentation on expression templates in armadillo ) the cgal library ( computational geometry ) uses many modern C++ features ( it heavily uses templates and specializations ) : note : modern C++ constructs are very elegant and can be very fun to use. it is both a strong point and a weakness : when using them, it is so tempting to add several layers of templates / specializations / lambdas that in the end you sometimes get more " administration " than effective code in the program ( in other words, your program " talks " more about the problem than describing the solution ). finding the right balance is very subtle. conclusion : one needs to track the evolution of the " signal / noise " ratio in the code by measuring : how many lines of code are in the program? how many classes / templates? running time? memory consumption? everything that increases the first two may be considered a cost ( because it may make the program harder to understand and to maintain ), everything that decreases the last two is a gain. for instance, introducing an abstraction ( a virtual class or a template ) can factor code and make the program simpler ( gain ), but if it is only ever derived / instantiated once, then it introduces a cost for no associated gain ( again it is subtle because the gain may come later in the future evolution of the program, therefore there is no " golden rule " ). programmer's comfort is also an important factor to be taken into account in the cost / gain balance : with too many templates, compilation time may increase significantly, and error messages become difficult to parse. see also : to what extent is generic and meta - programming using C++ templates useful in computational science?
the effective length is $\tilde{l}_i = l_i - \mu + 1$ ( note the r code at the bottom of harold's blog post ), which in the case of $\mu > l_i$ should be 1. ideally, you'd use the mean fragment length mapped to the particular feature, rather than a global $\mu$, but that's a lot more work for likely 0 benefit. regarding choosing a particular transcript, ideally one would use a method like salmon or kallisto ( or rsem if you have time to kill ). otherwise, your options are ( a ) choose the major isoform ( if it's known in your tissue and condition ) or ( b ) use a " union gene model " ( sum the non - redundant exon lengths ) or ( c ) take the median transcript length. none of those three options make much of a difference if you're comparing between samples, though they're all inferior to a salmon / kallisto / etc. metric. why are salmon et al. better methods? they don't use arbitrary metrics that will be the same across samples to determine the feature length. instead, they use expectation maximization ( or similarish, since at least salmon doesn't actually use em ) to quantify individual isoform usage. the effective gene length in a sample is then the average of the transcript lengths after weighting for their relative expression ( yes, one should remove $\mu$ in there ). this can then vary between samples, which is quite useful if you have isoform switching between samples / groups in such a way that methods a - c above would miss ( think of cases where the switch is to a smaller transcript with higher coverage over it... resulting in the coverage / length in methods a - c to be tamped down ).
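the arithmetic above can be sketched in a few lines of python ( the function names and toy numbers are mine ; $\mu$ is the mean fragment length ) :

```python
# Effective length of a transcript, and the expression-weighted effective
# gene length described above. Toy illustration, not salmon/kallisto code.

def effective_length(l, mu):
    # A fragment of mean length mu can start at l - mu + 1 positions;
    # floor at 1 for transcripts shorter than the fragment length.
    return max(l - mu + 1, 1)

def effective_gene_length(isoform_lengths, rel_expression, mu):
    """Expression-weighted average of the isoform effective lengths."""
    total = sum(rel_expression)
    return sum(effective_length(l, mu) * w
               for l, w in zip(isoform_lengths, rel_expression)) / total

mu = 200.0
el_long = effective_length(1000, mu)     # 801.0
el_short = effective_length(150, mu)     # floored at 1
gene_el = effective_gene_length([1000, 400], [0.75, 0.25], mu)  # 651.0
```

note how shifting the relative expression toward the shorter isoform pulls the effective gene length down, which is exactly the between - sample variation that the fixed metrics ( a ) - ( c ) cannot capture.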
that's a good, concise statement of bent's rule. of course we could have just as correctly said that p character tends to concentrate in orbitals directed at electronegative elements. we'll use this latter phrasing when we examine methyl fluoride below. but first, let's expand on the definition a bit so that it is clear to all. bent's rule speaks to the hybridization of the central atom ( $\ce{A}$ ) in the molecule $\ce{X-A-Y}$. $\ce{A}$ provides hybridized atomic orbitals that form $\ce{A}$'s part of its bond to $\ce{X}$ and to $\ce{Y}$. bent's rule says that as we change the electronegativity of $\ce{X}$ and / or $\ce{Y}$, $\ce{A}$ will tend to rehybridize its orbitals such that more s character will be placed in those orbitals directed towards the more electropositive substituent. let's examine how bent's rule might be applied to your example of methyl fluoride. in the $\ce{C-F}$ bond, the carbon hybrid orbital is directed towards the electronegative fluorine. bent's rule suggests that this carbon hybrid orbital will be richer in p character than we might otherwise have suspected. instead of the carbon hybrid orbital used in this bond being $sp^3$ hybridized, it will tend to have more p character and therefore move towards $sp^4$ hybridization. why is this? s orbitals are lower in energy than p orbitals. therefore electrons are more stable ( lower energy ) when they are in orbitals with more s character. the two electrons in the $\ce{C-F}$ bond will spend more time around the electronegative fluorine and less time around carbon. if that's the case ( and it is ), why " waste " precious, low - energy s orbital character in a carbon hybrid orbital that doesn't have much electron density to stabilize? instead, save that s character for use in carbon hybrid orbitals that do have more electron density around carbon ( like the $\ce{C-H}$ bonds ). so as a consequence of bent's rule, we would expect more p character in the carbon hybrid orbital used to form the $\ce{C-F}$ bond, and more s character in the carbon hybrid orbitals used to form the $\ce{C-H}$ bonds. the physically observable result of all this is that we would expect an $\ce{H-C-H}$ angle larger than the tetrahedral angle of 109.5° ( reflective of more s character ) and an $\ce{H-C-F}$ angle slightly smaller than 109.5° ( reflective of more p character ). in terms of bond lengths, we would expect a shortening of the $\ce{C-H}$ bond ( more s character ) and a lengthening of the $\ce{C-F}$ bond ( more p character ).
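the angle trend can be made quantitative with coulson's directionality relation ( an addition of mine, not part of the answer above ) : for two equivalent hybrids $h = s + \lambda p$, $\cos\theta = -1/\lambda^2$, where $\lambda^2$ is the hybridization index ( 3 for $sp^3$ ).

```python
import math

# Coulson's relation for two equivalent hybrids: cos(theta) = -1/lambda^2.
# More s character (smaller p fraction) -> wider interorbital angle.

def interorbital_angle(p_fraction):
    """Angle (degrees) between two equivalent hybrids of given p character."""
    lam2 = p_fraction / (1.0 - p_fraction)   # lambda^2 from fractional p
    return math.degrees(math.acos(-1.0 / lam2))

sp3 = interorbital_angle(0.75)      # sp3: the tetrahedral 109.47 degrees
s_rich = interorbital_angle(0.70)   # s-richer hybrids, as in the C-H bonds
```

with 70 % p character ( s - richer than $sp^3$ ) the predicted angle opens to about 115°, consistent with the widened $\ce{H-C-H}$ angle discussed above.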
an article by snell and pleasonton, ' the atomic and molecular consequences of radioactive decay ' ( j. phys. chem., 62 ( 11 ), pp 1377 – 1382, 1958 ) supports ben norris's comment. it is clear... that $\ce{^{14}CO2}$ remains predominantly bound as $\ce{NO2+}$, a result that is perhaps not surprising. [ this occurs in ] 81 % of the decays. in $\ce{^{14}CO2 -> NO2+}$ decays, dissociation yielding $\ce{NO+}$, $\ce{O+}$ and $\ce{N+}$ follows [ in ], respectively, 8.4, 5.9, and 3.6 % of the decays. a table summarising the results is given. $$\begin{array}{|c|c|} \hline \mathbf{ion} & \mathbf{\%\ abundance} \\ \hline \ce{NO2+} & 81.4(16) \\ \ce{NO+} & 8.4(4) \\ \ce{O+} & 5.9(6) \\ \ce{N+} & 3.6(4) \\ \ce{NO2^{2+}} & 0.40(06) \\ \hline \end{array}$$
pedro f. felzenszwalb and daniel p. huttenlocher have published their implementation for the distance transform [ archive ]. you cannot use it for volumetric images, but maybe you can extend it to support 3d data. i have only used it as a black box.
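their method is separable, so the heart of it is a one - dimensional pass ; here is a pure - python transcription of that 1 - d squared - distance transform ( the lower - envelope - of - parabolas algorithm, following the structure of their published c code, including the large finite " infinity " ). their 2 - d transform applies this along rows and then columns, and the same separable trick is what one would extend for 3 - d volumes.

```python
# 1-D squared Euclidean distance transform (Felzenszwalb & Huttenlocher).
# d[q] = min_p ((q - p)^2 + f[p]), computed in O(n) via a lower envelope
# of parabolas rooted at the sample points.

INF = 1e20   # large finite "infinity", as in the original C code

def dt1d(f):
    """Squared distance transform of the sampled function f."""
    n = len(f)
    d = [0.0] * n
    v = [0] * n            # sites of the parabolas in the lower envelope
    z = [0.0] * (n + 1)    # boundaries between consecutive parabolas
    k = 0
    z[0], z[1] = -INF, INF
    for q in range(1, n):
        # Intersection of the new parabola with the last one in the envelope.
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:   # new parabola hides the previous one: pop it
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, INF
    k = 0
    for q in range(n):     # read the envelope back out
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

# Binary-image use: 0 at feature pixels, INF elsewhere.
print(dt1d([INF, 0.0, INF, INF, 0.0]))   # [1.0, 0.0, 1.0, 1.0, 0.0]
```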
imagine a big family dinner where everybody starts asking you about pca. first, you explain it to your great - grandmother ; then to your grandmother ; then to your mother ; then to your spouse ; finally, to your daughter ( a mathematician ). each time the next person is less of a layman. here is how the conversation might go. great - grandmother : i heard you are studying " pee - see - ay ". i wonder what that is... you : ah, it's just a method of summarizing some data. look, we have some wine bottles standing here on the table. we can describe each wine by its colour, how strong it is, how old it is, and so on. visualization originally found here. we can compose a whole list of different characteristics of each wine in our cellar. but many of them will measure related properties and so will be redundant. if so, we should be able to summarize each wine with fewer characteristics! this is what pca does. grandmother : this is interesting! so this pca thing checks what characteristics are redundant and discards them? you : excellent question, granny! no, pca is not selecting some characteristics and discarding the others. instead, it constructs some new characteristics that turn out to summarize our list of wines well. of course, these new characteristics are constructed using the old ones ; for example, a new characteristic might be computed as wine age minus wine acidity level or some other combination ( we call them linear combinations ). in fact, pca finds the best possible characteristics, the ones that summarize the list of wines as well as only possible ( among all conceivable linear combinations ). this is why it is so useful. mother : hmmm, this certainly sounds good, but i am not sure i understand. what do you actually mean when you say that these new pca characteristics " summarize " the list of wines? you : i guess i can give two different answers to this question. 
the first answer is that you are looking for some wine properties ( characteristics ) that strongly differ across wines. indeed, imagine that you come up with a property that is the same for most of the wines - like the stillness of wine after being poured. this would not be very useful, would it? wines are very different, but your new property makes them all look the same! this would certainly be a bad summary. instead, pca looks for properties that show as much variation
across wines as possible. the second answer is that you look for the properties that would allow you to predict, or " reconstruct ", the original wine characteristics. again, imagine that you come up with a property that has no relation to the original characteristics - like the shape of a wine bottle ; if you use only this new property, there is no way you could reconstruct the original ones! this, again, would be a bad summary. so pca looks for properties that allow reconstructing the original characteristics as well as possible. surprisingly, it turns out that these two aims are equivalent and so pca can kill two birds with one stone. spouse : but darling, these two " goals " of pca sound so different! why would they be equivalent? you : hmmm. perhaps i should make a little drawing ( takes a napkin and starts scribbling ). let us pick two wine characteristics, perhaps wine darkness and alcohol content - - i don't know if they are correlated, but let's imagine that they are. here is what a scatter plot of different wines could look like : each dot in this " wine cloud " shows one particular wine. you see that the two properties ( $ x $ and $ y $ on this figure ) are correlated. a new property can be constructed by drawing a line through the centre of this wine cloud and projecting all points onto this line. this new property will be given by a linear combination $ w _ 1 x + w _ 2 y $, where each line corresponds to some particular values of $ w _ 1 $ and $ w _ 2 $. now, look here very carefully - - here is what these projections look like for different lines ( red dots are projections of the blue dots ) : as i said before, pca will find the " best " line according to two different criteria of what is the " best ". first, the variation of values along this line should be maximal. pay attention to how the " spread " ( we call it " variance " ) of the red dots changes while the line rotates ; can you see when it reaches maximum? 
second, if we reconstruct the original two characteristics ( position of a blue dot ) from the new one ( position of a red dot ), the reconstruction error will be given by the length of the connecting red line. observe how the length of these red lines changes while the line rotates ; can you see when the total length reaches minimum? if you stare at this animation for some
time, you will notice that " the maximum variance " and " the minimum error " are reached at the same time, namely when the line points to the magenta ticks i marked on both sides of the wine cloud. this line corresponds to the new wine property that will be constructed by pca. by the way, pca stands for " principal component analysis ", and this new property is called " first principal component ". and instead of saying " property " or " characteristic ", we usually say " feature " or " variable ". daughter : very nice, papa! i think i can see why the two goals yield the same result : it is essentially because of the pythagoras theorem, isn't it? anyway, i heard that pca is somehow related to eigenvectors and eigenvalues ; where are they in this picture? you : brilliant observation. mathematically, the spread of the red dots is measured as the average squared distance from the centre of the wine cloud to each red dot ; as you know, it is called the variance. on the other hand, the total reconstruction error is measured as the average squared length of the corresponding red lines. but as the angle between red lines and the black line is always $ 90 ^ \ circ $, the sum of these two quantities is equal to the average squared distance between the centre of the wine cloud and each blue dot ; this is precisely pythagoras theorem. of course, this average distance does not depend on the orientation of the black line, so the higher the variance, the lower the error ( because their sum is constant ). this hand - wavy argument can be made precise ( see here ). by the way, you can imagine that the black line is a solid rod, and each red line is a spring. the energy of the spring is proportional to its squared length ( this is known in physics as hooke's law ), so the rod will orient itself such as to minimize the sum of these squared distances. i made a simulation of what it will look like in the presence of some viscous friction : regarding eigenvectors and eigenvalues. 
you know what a covariance matrix is ; in my example it is a $2 \times 2$ matrix that is given by $$\begin{pmatrix} 1.07 & 0.63 \\ 0.63 & 0.64 \end{pmatrix}.$$ what this means is that the variance
of the $x$ variable is $1.07$, the variance of the $y$ variable is $0.64$, and the covariance between them is $0.63$. as it is a square symmetric matrix, it can be diagonalized by choosing a new orthogonal coordinate system, given by its eigenvectors ( incidentally, this is called the spectral theorem ) ; the corresponding eigenvalues will then be located on the diagonal. in this new coordinate system, the covariance matrix is diagonal and looks like this : $$\begin{pmatrix} 1.52 & 0 \\ 0 & 0.19 \end{pmatrix},$$ meaning that the correlation between points is now zero. it becomes clear that the variance of any projection will be given by a weighted average of the eigenvalues ( i am only sketching the intuition here ). consequently, the maximum possible variance ( $1.52$ ) will be achieved if we simply take the projection on the first coordinate axis. it follows that the direction of the first principal component is given by the first eigenvector of the covariance matrix. ( more details here. ) you can see this on the rotating figure as well : there is a gray line there orthogonal to the black one ; together, they form a rotating coordinate frame. try to notice when the blue dots become uncorrelated in this rotating frame. the answer, again, is that it happens precisely when the black line points at the magenta ticks. now i can tell you how i found them ( the magenta ticks ) : they mark the direction of the first eigenvector of the covariance matrix, which in this case is equal to $(0.81, 0.58)$. per popular request, i shared the matlab code to produce the above animations.
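the numbers above can be reproduced in a few lines ( a pure - python sketch of mine, using the closed form for a $2 \times 2$ symmetric matrix rather than a library eigensolver ) :

```python
import math

# Eigendecomposition of [[a, b], [b, c]]: the eigenvalues are
# (a + c)/2 +/- sqrt(((a - c)/2)^2 + b^2), and an eigenvector for
# eigenvalue lam is proportional to (b, lam - a).

a, b, c = 1.07, 0.63, 0.64          # the covariance matrix from the example

mean = (a + c) / 2
half = math.hypot((a - c) / 2, b)
lam1, lam2 = mean + half, mean - half   # ~1.52 and ~0.19

vx, vy = b, lam1 - a                 # eigenvector of the larger eigenvalue
norm = math.hypot(vx, vy)
v1 = (vx / norm, vy / norm)          # ~(0.81, 0.58): the magenta direction

# Rotation preserves total variance: lam1 + lam2 equals a + c.
```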
the idea of the algorithm is this : assume you have a length $n$ signal that is sparse in the frequency domain. this means that if you were to calculate its discrete fourier transform, there would be a small number of outputs $k \ll n$ that are nonzero ; the other $n - k$ are negligible. one way of getting at the $k$ outputs that you want is to use the fft on the entire sequence, then select the $k$ nonzero values. the sparse fourier transform algorithm presented here is a technique for calculating those $k$ outputs with lower complexity than the fft - based method. essentially, because $n - k$ outputs are zero, you can save some effort by taking shortcuts inside the algorithm to not even generate those result values. while the fft has a complexity of $O(n \log n)$, the sparse algorithm has a potentially - lower complexity of $O(k \log n)$ for the sparse - spectrum case. for the more general case, where the spectrum is " kind of sparse " but there are more than $k$ nonzero values ( e. g. for a number of tones embedded in noise ), they present a variation of the algorithm that estimates the $k$ largest outputs, with a time complexity of $O(k \log n \log \frac{n}{k})$, which could also be less complex than the fft. according to one graph of their results ( reproduced in the image below ), the crossover point for improved performance with respect to fftw ( an optimized fft library, made by some other guys at mit ) is around the point where only $\frac{1}{2^{11}}$ - th to $\frac{1}{2^{10}}$ - th of the output transform coefficients are nonzero. also, in this presentation they indicate that the sparse algorithm provides better performance when $\frac{n}{k} \in [2000, 10^6]$. these conditions do limit the applicability of the algorithm to cases where you know there are likely to be few significantly - large peaks in a signal's spectrum.
one example that they cite on their web site is that on average, 8 - by - 8 blocks of pixels often used in image and video compression are almost 90 % sparse in the frequency domain and thus could benefit from
an algorithm that exploited that property. that level of sparsity doesn't seem to square with the application space for this particular algorithm, so it may just be an illustrative example. i need to read through the literature a bit more to get a better feel for how practical such a technique is for use on real - world problems, but for certain classes of applications, it could be a fit.
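what spectral sparsity means is easy to demonstrate ( a toy illustration of mine, not the mit algorithm itself ) : a real signal built from $k$ pure tones concentrates all of its dft energy in $2k$ mirrored bins, so keeping the largest few coefficients loses nothing.

```python
import cmath
import math

# Plain O(n^2) DFT for clarity; an FFT would give identical bin values.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

n, tones = 64, [5, 12]     # k = 2 tones
x = [sum(math.cos(2 * math.pi * f * t / n) for f in tones) for t in range(n)]

mags = sorted((abs(c) for c in dft(x)), reverse=True)
# The four largest bins (tones 5, 12 and their mirrors 59, 52) each carry
# magnitude n/2 = 32; every other bin is numerical noise near zero.
```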
short answer the concept of species is poorly defined and is often misleading. the concepts of lineage and clade / monophyletic group are much more helpful. imo, the only usefulness of this poorly defined concept that is the " species " is to have a common vocabulary for naming lineages. note that homo neanderthalensis is sometimes ( although it is rare ) called h. sapiens neanderthalensis, thus highlighting that some would consider neanderthals and modern humans as being part of the same species. long answer are neanderthals and modern humans really considered different species? often, yes, they are considered as different species, neanderthals being called homo neanderthalensis and modern humans being called homo sapiens. however, some authors prefer to call neanderthals homo sapiens neanderthalensis and modern humans homo sapiens sapiens, putting both lineages in the same species ( but different subspecies ). how common was interbreeding between h. sapiens and h. neanderthalensis? please have a look at @ iayork's answer. the rest of the post is here to highlight that whether you consider h. sapiens and h. neanderthalensis to be the same species or not is mainly a matter of personal preference given that the concept of species is mainly arbitrary. short history of the concept of species to my knowledge, the concept of species was first used in antiquity. at this time, most people viewed species as fixed entities, unable to change through time and without within - population variance ( see aristotle and plato's thoughts ). for some reason, we stuck to this concept even though it sometimes appears to not be very useful.
charles darwin already understood that, as he says in on the origin of species ( see here ) : certainly no clear line of demarcation has as yet been drawn between species and sub - species - that is, the forms which in the opinion of some naturalists come very near to, but do not quite arrive at the rank of species ; or, again, between sub - species and well - marked varieties, or between lesser varieties and individual differences. these differences blend into each other in an insensible series ; and a series impresses the mind with the idea of an actual passage. you might also want to have a look at the post why are there species instead of a continuum of various animals? several definitions of species there are several definitions of species, which leads me once again to argue that we should rather forget about this
concept and just use the term lineage, together with an accurate description of the reproductive barriers or genetic / functional divergence between lineages, rather than using this made - up word that is " species ". i will below discuss the most commonly used definition ( the one you cite ), which is called the biological species concept. problems with the definition you cite a species is often defined as the largest group of organisms where two individuals are capable of producing fertile offspring, typically using sexual reproduction. only applies to species that reproduce sexually of course, this definition only applies to lineages that use sexual reproduction. if we were to use this definition for asexual lineages, then every single individual would be its own species. in practice in general, everybody refers to this definition when talking about sexual lineages, but imo few people apply it correctly, for practical reasons of communicating effectively. how low does the fitness of the hybrids need to be? one has to arbitrarily define a limit on the minimal fitness ( or maximal outbreeding depression ) to get an accurate definition. such a boundary can be defined in absolute terms or in relative terms ( relative to the fitness of the " parent lineages " ). if the hybrid has a fitness that is 100 times lower than either of the two parent lineages, then would you consider the two parent lineages to belong to the same species? types of reproductive isolation we generally categorize the types of reproductive isolation into post - zygotic and pre - zygotic reproductive isolation ( see wiki ). there is a lot to say on this subject but let's just focus on two interesting hypothetical cases : let's consider two lineages of birds. one lineage has blue feathers while the other has red feathers. they absolutely never interbreed because the blue birds don't like the red and the red birds don't like the blue.
but if you artificially fuse their gametes, then you get a viable and fertile offspring. are they of the same species? let's imagine we have two lineages of mosquitoes living in the same geographic region. one flying between 6 pm and 8 pm while the other is flying between 1 am and 3 am. they never see each other. but if they were to meet while flying they would mate together and have viable and fertile offsprings. are they of the same species? under what condition is the hybrids survival and fertility measured modern biology can do great stuff! does it count if the hybrid can't develop in the mother's ut
|
https://api.stackexchange.com
|
##erus ( let's assume we are talking about mammals ) but can develop in some other environment and then become a healthy adult? ring species in space as you said in your question, ring species is another good example as to why the concept of species is not very helpful ( see the post transitivity of species definitions ). ensatina eschscholtzii ( a salamander ; see devitt et al. 2011 and other articles from the same group ) is a classic example of ring species. species transition through time many modern lineages cannot interbreed with their ancestors. so, then people might be asking, when exactly did the species change occurred? what generation of parent where part of species a and offspring where part of species b. of course, there is no such clearly defined time in which transition occurred. it is more a smooth transition from being clearly reproductively isolated ( if they were placed to each other ) from being clearly the same species. practical issue - renaming lineages how boring it would be if every time we discover the two species can in some circumstances interbreed, we had to rename them! that would be a mess. time of course, when we talk about a species we refer to a group of individuals at a given time. however, we don't want to rename the group of individuals of interest every time a single individual die and get born. this notion yield to the question of how long in time can a single species exist. consider a lineage that has not split for 60, 000 years. was the population 60, 000 years ago the same species as the one today? the two groups may differ a lot phenotypically and may actually be reproductively isolated if they were to exist at the same time. special cases when considering a few special cases, the concept of species become even harder to apply. the amazon molly ( a fish ) is a " species " that have " sexual intercourse " without having " sexual reproduction " and there are no males in the species! how is it possible? 
the females have to seek sperm from a sister species in order to activate the development of the eggs, but the genes of the father from the sister species are not used (kokko et al. (2008)). in an ant "species", males and females can both reproduce by parthenogenesis (a kind of cloning, but with meiosis and crossing over) and don't need each other to reproduce. in this respect, males could actually be called females.
|
https://api.stackexchange.com
|
but they still meet to reproduce together. the offspring of a male and a female (via sexual reproduction) are sterile workers. so males and females are just like two sister species that reproduce sexually to create a sterile army to protect and feed them (fournier et al. (2005)). bias it often brings fame to discover a large new species. in consequence, scientists might tend to apply a definition of species that allows them to claim that their species is a new one. a typical example of such potential bias concerns dinosaurs, where many new fossils are mistakenly described as new species when they are sometimes just the same species at a different stage of development (according to this ted talk). so why do we still use the concept of species? naming imo, its only usefulness is that it allows us to name lineages. and it is very important that we have the appropriate vocabulary to name different lineages, even if this leads us to make a few mistakes and use some bad definitions. the alternative use of the concept of lineage it is important, though, that we are aware that the concept of species is poorly defined, and that when we need to be accurate we can talk in terms of lineages. the main issue with the term lineage is not semantic; it comes from the fact that gene lineages may well differ considerably from what one would consider the "species lineage" as defined by the "lineages of most sequences"... but this is a story for another time. in consequence as a consequence of the above issues, we often call two lineages that can interbreed to some extent by different species names. on the other hand, two lineages that can hardly interbreed are sometimes called by the same species name, but i would expect this case to be rarer (as discussed by @darrelhoffman and @amr in the comments).
homo lineages i hope it makes sense from the above that the question is really not related to the special case of the interbreeding between the homo sapiens and the homo neanderthalensis lineages. the issue is a matter of the definition of species. video and podcast scishow made a video on the subject: what makes a species a species? for the french speakers, you will find an interesting (one-hour-long) podcast on the consequences, for conservation science, of the false belief that the concept of species is an objective concept, at podcast.unil.ch > la biodiversite - plus qu'une simple question de conservation > pierre - henry go
|
https://api.stackexchange.com
|
uyon. here is a related answer
|
https://api.stackexchange.com
|
in signal processing, two problems are common:

1. what is the output of this filter when its input is $x(t)$? the answer is given by $x(t) \ast h(t)$, where $h(t)$ is a signal called the "impulse response" of the filter, and $\ast$ is the convolution operation.
2. given a noisy signal $y(t)$, is the signal $x(t)$ somehow present in $y(t)$? in other words, is $y(t)$ of the form $x(t) + n(t)$, where $n(t)$ is noise? the answer can be found by the correlation of $y(t)$ and $x(t)$. if the correlation is large for a given time delay $\tau$, then we may be confident in saying that the answer is yes.

note that when the signals involved are symmetric, convolution and cross-correlation become the same operation; this case is also very common in some areas of dsp.
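both operations are easy to try numerically. below is a minimal numpy sketch (the signals, the template, and the delay of 20 samples are made-up examples, not anything from the text above): filtering as convolution with an impulse response, and detection as cross-correlation with a known template.

```python
# a minimal numpy sketch (the signals, the template, and the delay of 20
# samples are made-up examples) of the two problems above:
# filtering via convolution, detection via cross-correlation.
import numpy as np

rng = np.random.default_rng(0)

# problem 1: output of a 3-tap moving-average filter
h = np.array([1/3, 1/3, 1/3])            # impulse response h(t)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # input x(t)
y_filtered = np.convolve(x, h)           # x(t) * h(t)

# problem 2: is a known template hidden in a noisy signal?
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
y = 0.05 * rng.standard_normal(50)       # n(t)
y[20:25] += template                     # bury the template at delay 20
corr = np.correlate(y, template, mode="valid")
delay = int(np.argmax(corr))             # correlation peak -> likely delay
```

because the moving-average kernel `h` is symmetric, `np.convolve(x, h, mode="same")` and `np.correlate(x, h, mode="same")` return the same result, illustrating the final remark above.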
|
https://api.stackexchange.com
|
to all those who said "yes" i'll offer a counter-point: the answer is "no", by design. those languages will never be able to match the performance of statically compiled languages. kos offered the (very valid) point that dynamic languages have more information about the system at runtime which can be used to optimise code. however, there's another side of the coin: this additional information needs to be kept track of. on modern architectures, this is a performance killer. william edwards offers a nice overview of the argument. in particular, the optimisations mentioned by kos can't be applied beyond a very limited scope unless you limit the expressive power of your language quite drastically, as mentioned by devin. this is of course a viable trade-off, but for the sake of the discussion you then end up with a static language, not a dynamic one. those languages differ fundamentally from python or ruby as most people would understand them. william cites some interesting ibm slides:

- every variable can be dynamically typed: need type checks
- every statement can potentially throw exceptions due to type mismatch and so on: need exception checks
- every field and symbol can be added, deleted, and changed at runtime: need access checks
- the type of every object and its class hierarchy can be changed at runtime: need class hierarchy checks

some of those checks can be eliminated after analysis (n.b.: this analysis also takes time, at runtime). furthermore, kos argues that dynamic languages could even surpass c++ performance. the jit can indeed analyse the program's behaviour and apply suitable optimisations. but c++ compilers can do the same! modern compilers offer so-called profile-guided optimisation which, given suitable input, can model program runtime behaviour and apply the same optimisations that a jit would apply.
of course, this all hinges on the existence of realistic training data, and furthermore the program cannot adapt its runtime characteristics if the usage pattern changes mid-run. jits can theoretically handle this. i'd be interested to see how this fares in practice, since, in order to switch optimisations, the jit would continually have to collect usage data, which once again slows down execution. in summary, i'm not convinced that runtime hot-spot optimisations outweigh the overhead of tracking runtime information in the long run, compared to static
|
https://api.stackexchange.com
|
analysis and optimisation.
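one of the checks listed above ("every field and symbol can be added, deleted, and changed at runtime") can be illustrated concretely in python itself. the following sketch is my own illustration, with invented class names: by default every instance carries a `__dict__` precisely so that fields can be added at runtime, and opting out via `__slots__` trades that dynamism away for a fixed layout.

```python
# a tiny sketch (class names invented for this illustration) of the runtime
# bookkeeping described above: by default, every python instance carries a
# __dict__ so fields can be added, deleted, and changed at runtime, which is
# the flexibility behind the "access checks" the ibm slides mention.

class DynamicPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class SlottedPoint:
    # __slots__ fixes the layout: no per-instance __dict__, no new fields
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x = x
        self.y = y

d = DynamicPoint(1, 2)
s = SlottedPoint(1, 2)

d.z = 3                      # fine: the instance dict tracks the new field
slot_add_failed = False
try:
    s.z = 3                  # rejected: the layout is static
except AttributeError:
    slot_add_failed = True

has_dict = hasattr(d, "__dict__") and not hasattr(s, "__dict__")
```

the slotted class is, in miniature, the trade-off described above: give up the ability to mutate the object layout at runtime, and the interpreter no longer needs to maintain (or check) a per-instance dictionary.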
|
https://api.stackexchange.com
|
ok, it seems that user21820 is right ; this effect is caused by both the foreground and the background objects being out of focus, and occurs in areas where the foreground object ( your finger ) partially occludes the background, so that only some of the light rays reaching your eye from the background are blocked by the foreground obstacle. to see why this happens, take a look at this diagram : the black dot is a distant object, and the dashed lines depict light rays emerging from it and hitting the lens, which refocuses them to form an image on a receptor surface ( the retina in your eye, or the sensor in your camera ). however, since the lens is slightly out of focus, the light rays don't converge exactly on the receptor plane, and so the image appears blurred. what's important to realize is that each part of the blurred image is formed by a separate light ray passing through a different part of the lens ( and of the intervening space ). if we insert an obstacle between the object and the lens that blocks only some of those rays, those parts of the image disappear! this has two effects : first, the image of the background object appears sharper, because the obstacle effectively reduces the aperture of the lens. however, it also shifts the center of the aperture, and thus of the resulting image, to one side. the direction in which the blurred image shifts depends on whether the lens is focused a little bit too close or a little bit too far. if the focus is too close, as in the diagrams above, the image will appear shifted away from the obstacle. ( remember that the lens inverts the image, so the image of the obstacle itself would appear above the image of the dot in the diagram! ) conversely, if the focus is too far, the background object will appear to shift closer to the obstacle. once you know the cause, it's not hard to recreate this effect in any 3d rendering program that supports realistic focal blur. 
i used pov - ray, because i happen to be familiar with it : above, you can see two renderings of a classic computer graphics scene : a yellow sphere in front of a grid plane. the first image is rendered with a narrow aperture, showing both the grid and the sphere in sharp detail, while the second one is rendered with a wide aperture, but with the grid still perfectly in focus. in neither case does the effect occur, since the background is in focus. things change, however
|
https://api.stackexchange.com
|
, once the focus is moved slightly. in the first image below, the camera is focused slightly in front of the background plane, while in the second image, it is focused slightly behind the plane : you can clearly see that, with the focus between the grid and the sphere, the grid lines close to the sphere appear shifted away from it, while with the focus behind the grid plane, the grid lines shift towards the sphere. moving the camera focus further away from the background plane makes the effect even stronger : you can also clearly see the lines getting sharper near the sphere, as well as bending, because part of the blurred image is blocked by the sphere. i can even re - create the broken line effect in your photos by replacing the sphere with a narrow cylinder : to recap : this effect is caused by the background being ( slightly ) out of focus, and by the foreground object effectively occluding part of the camera / eye aperture, causing the effective aperture ( and thus the resulting image ) to be shifted. it is not caused by : diffraction : as shown by the computer renderings above ( which are created using ray tracing, and therefore do not model any diffraction effects ), this effect is fully explained by classical ray optics. in any case, diffraction cannot explain the background images shifting towards the obstacle when the focus is behind the background plane. reflection : again, no reflection of the background from the obstacle surface is required to explain this effect. in fact, in the computer renderings above, the yellow sphere / cylinder does not reflect the background grid at all. ( the surfaces have no specular reflection component, and no indirect diffuse illumination effects are included in the lighting model. 
) optical illusion : the fact that this is not a perceptual illusion should be obvious from the fact that the effect can be photographed, and the distortion measured from the photos, but the fact that it can also be reproduced by computer rendering further confirms this. addendum : just to check, i went and replicated the renderings above using my old dslr camera ( and an lcd monitor, a yellow plastic spice jar cap, and some thread to hang it from ) : the first photo above has the camera focus behind the screen ; the second one has it in front of the screen. the first photo below shows what the scene looks like with the screen in focus ( or as close as i could get it with manual focus adjustment ). finally, the crappy cellphone camera picture below ( second )
|
https://api.stackexchange.com
|
shows the setup used to take the other three photos. addendum 2: before the comments below were cleaned out, there was some discussion there about the usefulness of this phenomenon as a quick self-diagnostic test for myopia (nearsightedness). while i am not an ophthalmologist, it does appear that, if you experience this effect with your naked eye, while trying to keep the background in focus, then you may have some degree of myopia or some other visual defect, and may want to get an eye exam. (of course, even if you don't, getting one every few years or so isn't a bad idea, anyway. mild myopia, up to the point where it becomes severe enough to substantially interfere with your daily life, can be surprisingly hard to self-diagnose otherwise, since it typically appears slowly and, with nothing to compare your vision to, you just get used to distant objects looking a bit blurry. after all, to some extent that's true for everyone; only the distance varies.) in fact, with my mild (about -1 dpt) myopia, i can personally confirm that, without my glasses, i can easily see both the bending effect and the sharpening of background features when i move my finger in front of my eye. i can even see a hint of astigmatism (which i know i have; my glasses have some cylindrical correction to fix it) in the fact that, in some orientations, i can see the background features bending not just away from my finger, but also slightly sideways. with my glasses on, these effects almost but not quite disappear, suggesting that my current prescription may be just a little bit off.
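the aperture-shift explanation above can also be checked with a few lines of arithmetic. the following sketch is my own toy model (the focal length, distances, and aperture size are invented numbers): it traces rays from an on-axis point through a thin lens and computes the centroid of the blur spot on the sensor, with and without an obstacle blocking half the aperture.

```python
# a toy thin-lens model (my own sketch; focal length, distances, and the
# aperture size are invented numbers) of the occluded-aperture explanation:
# every part of the blur spot comes from a different part of the aperture,
# so blocking half the aperture shifts the spot's centroid.
import numpy as np

f, s_obj = 50.0, 1000.0                  # focal length, object distance (mm)
s_img = 1.0 / (1.0 / f - 1.0 / s_obj)    # thin-lens image distance

def blur_centroid(sensor_dist, aperture_heights):
    # a ray through aperture height y converges toward the axis at s_img,
    # so on a sensor at sensor_dist it lands at y * (s_img - sensor_dist) / s_img;
    # the blur spot is the set of landing points of all unblocked rays
    return float(np.mean(aperture_heights * (s_img - sensor_dist) / s_img))

aperture = np.linspace(-2.0, 2.0, 201)   # open aperture heights
occluded = aperture[aperture >= 0.0]     # obstacle blocks the lower half

sensor_a = s_img + 1.0                   # sensor behind the image plane
sensor_b = s_img - 1.0                   # sensor in front of the image plane

shift_a = blur_centroid(sensor_a, occluded) - blur_centroid(sensor_a, aperture)
shift_b = blur_centroid(sensor_b, occluded) - blur_centroid(sensor_b, aperture)
# shift_a and shift_b have opposite signs: the blurred point appears
# displaced in opposite directions depending on which side of the image
# plane the sensor sits, matching the focus-too-close / focus-too-far
# behaviour described above
```

when the sensor sits exactly at the image plane the centroid does not move at all, which matches the renderings above where the in-focus grid shows no distortion.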
|
https://api.stackexchange.com
|
there are four comments on this reddit thread that may be on to something: by silver_pc: could it be a form of 'paper towns' on maps, aka a fictitious entry to identify direct copies? by toybuilder: not that they are necessarily doing this, but i've heard it said that mass manufacturers will keep removing capacitors until their product stops working. (certainly, it was common to see pc motherboards with unpopulated decoupling cap pads all over the place back when i used to hand-build pcs.) if you have a mass-production setup to stuff boards and do automated visual quality inspection, maybe you don't want to take the downtime hit to reprogram your production line as you introduce and monitor ongoing production changes with the ultimate goal of removing the capacitors. if so, you could nullify the capacitors by stuffing them as before, but with both pads on the same plane. samsung manufactures capacitors, so maybe they're a bit more willing to burn through a short run of boards with wasted capacitors if, in the long run, they can more definitively get rid of them. keep in mind that large companies like samsung have the ability to test their products for certification purposes in-house, so it's probably cheap enough to run a small batch to test and accept/reject. and if accepted, to just release it into the market. at least, that would be my guess. by john_barlycorn: i believe this has more to do with the manufacturing process than with electrical purpose. modern electronics manufacturing is bat-shit insane with regard to speed. we're talking about robotic movements that are so fast that air resistance and machine vibration have to be considered. the position of parts that feed the pick-and-place machines is critical to the speed of operation. so they spend a lot of time on setup. then press "start" and watch her whirl.
so if they end up with 2 products that are similar, they have to go through this expensive setup change run by an expensive engineer to switch them out. but these caps are so cheap that after you consider this setup change, it might actually cost them more money to remove them during different runs. they might just say " tanj it " and let them populate them despite not needing them. my father worked in the industry for years, and had some experience in smaller volume stuff. in manufacturing this sort of backwards logic is not
|
https://api.stackexchange.com
|
uncommon. you do what's cheapest / most profitable which is not always the least wasteful option. by coppernickus : there are other planes in a tablet : the display and case. maybe the answer lies in the third dimension. might there be a brush / spring contact or some other connection on another layer of the device that completes a circuit when the tablet is assembled? that technique is used in their cellphones to mate various internal boards to the back and case. in the phones, it's spring contacts mating to gold or silver contacts when the device is assembled. or perhaps just some proximity based rf control related to the display?
|
https://api.stackexchange.com
|
i think that one of your problems is that (as you observed in your comments) neumann conditions are not the conditions you are looking for, in the sense that they do not imply the conservation of your quantity. to find the correct condition, rewrite your pde as $$\frac{\partial \phi}{\partial t} = \frac{\partial}{\partial x} \left( d \frac{\partial \phi}{\partial x} + v \phi \right) + s(x, t).$$ now, the term that appears in parentheses, $d \frac{\partial \phi}{\partial x} + v \phi$, is the total flux, and this is the quantity that you must set to zero on the boundaries to conserve $\phi$. (i have added $s(x, t)$ for the sake of generality and for your comments.) the boundary conditions that you have to impose are then (supposing your space domain is $(-10, 10)$) $$d \frac{\partial \phi}{\partial x}(-10) + v \phi(-10) = 0$$ for the left side and $$d \frac{\partial \phi}{\partial x}(10) + v \phi(10) = 0$$ for the right side. these are the so-called robin boundary conditions (note that wikipedia explicitly says these are the insulating conditions for advection-diffusion equations). if you set up these boundary conditions, you get the conservation properties that you were looking for. indeed, integrating over the space domain, we have $$\int \frac{\partial \phi}{\partial t} \, dx = \int \frac{\partial}{\partial x} \left( d \frac{\partial \phi}{\partial x} + v \phi \right) dx + \int s(x, t) \, dx.$$ using integration by parts on the right hand side, we have $$\int \frac{\partial \phi}{\partial t} \, dx = \left( d \frac{\partial \phi}{\partial x} + v \phi \right)(10) - \left( d \frac{\partial \phi}{\partial
|
https://api.stackexchange.com
|
x} + v \phi \right)(-10) + \int s(x, t) \, dx.$$ now, the two boundary terms vanish thanks to the boundary conditions. integrating in time, we obtain $$\int_0^t \int \frac{\partial \phi}{\partial t} \, dx \, dt = \int_0^t \int s(x, t) \, dx \, dt$$ and if we are allowed to switch the first two integrals, $$\int \phi(x, t) \, dx - \int \phi(x, 0) \, dx = \int_0^t \int s(x, t) \, dx \, dt.$$ this shows that the domain is insulated from the exterior. in particular, if $s = 0$, we get the conservation of $\phi$.
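the conservation argument can be verified numerically with a conservative discretisation. the sketch below is my own (the values of $d$, $v$, the initial profile, and the step sizes are made-up): a finite-volume scheme in which the total flux $d\phi_x + v\phi$ is evaluated at cell faces and simply set to zero at both ends of the domain, so the discrete integral of $\phi$ is conserved by construction.

```python
# a small explicit finite-volume sketch (my own discretisation; d, v, the
# initial profile, and the step sizes are made-up numbers) checking that the
# zero-total-flux robin conditions conserve the integral of phi when s = 0.
import numpy as np

d, v = 0.5, 1.0                          # diffusion and advection coefficients
n, dt, steps = 200, 1e-4, 2000
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
phi = np.exp(-x**2)                      # arbitrary initial profile

mass0 = phi.sum() * dx
for _ in range(steps):
    # total flux d*phi_x + v*phi, evaluated at the interior cell faces
    flux = d * np.diff(phi) / dx + v * 0.5 * (phi[1:] + phi[:-1])
    # insulating (robin) boundaries: the total flux vanishes at both ends
    flux = np.concatenate(([0.0], flux, [0.0]))
    phi = phi + dt / dx * np.diff(flux)  # phi_t = (d*phi_x + v*phi)_x
mass_final = phi.sum() * dx              # equals mass0 up to round-off
```

because each face flux enters two adjacent cells with opposite signs, the sum over cells telescopes to the two boundary fluxes, which are zero, so the discrete mass is conserved to round-off even though the profile itself advects and diffuses.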
|
https://api.stackexchange.com
|