Due to the resistor colour-coding bands on leaded components, two significant digits were preferred, and I reckon this graph speaks for itself. These are the 13 resistors that span 10 to 100 in the old 10% series: 10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82, 100. I've plotted the resistor number (1 to 13) against the log of resistance. This, plus the desire for two significant digits, looks like a good reason. I tried offsetting a few preferred values by +/-1 and the graph wasn't as straight. There are 12 values from 10 to 82, hence the E12 series; there are 24 values in the E24 range.

Edit: the magic number for the E12 series is the 12th root of ten. This equals approximately 1.21152766 and is the theoretical ratio of the next-highest resistor value to the current value, i.e. 10k becomes 12.115k, etc. For the E24 series, the magic number is the 24th root of ten (not surprisingly). It's interesting to note that a slightly better straight line is obtained with several values in the range reduced. Here are the theoretical values to three significant digits: 10.0, 12.1, 14.7, 17.8, 21.5, 26.1, 31.6, 38.3, 46.4, 56.2, 68.1 and 82.5. Clearly 27 ought to be 26, 33 ought to be 32, 39 ought to be 38 and 47 ought to be 46. Maybe 82 should be 83 as well. Here's the graph of the traditional E12 series (blue) versus the exact values (green). So maybe the popularity of 47 is based on some poor maths?
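The geometric spacing behind the E12 series is easy to check numerically. A short Python sketch (variable names are mine) reproducing the theoretical values quoted above:

```python
# Theoretical E12 values between 10 and 100: each step multiplies by the
# 12th root of ten (about 1.21152766), i.e. value_k = 10 * 10**(k/12).
ratio = 10 ** (1 / 12)
theoretical = [10 * ratio ** k for k in range(13)]
print([round(v, 1) for v in theoretical])
# -> [10.0, 12.1, 14.7, 17.8, 21.5, 26.1, 31.6, 38.3, 46.4, 56.2, 68.1, 82.5, 100.0]

# The traditional preferred values, for comparison:
e12 = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82, 100]
```

Comparing the two lists shows exactly the deviations discussed above (27 vs 26.1, 33 vs 31.6, 39 vs 38.3, 47 vs 46.4).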
https://api.stackexchange.com
Overview

The short answer is that they have the maximum number of vanishing moments for a given support (i.e. number of filter coefficients). That's the "extremal" property which distinguishes Daubechies wavelets in general. Loosely speaking, more vanishing moments implies better compression, and smaller support implies less computation. In fact, the tradeoff between vanishing moments and filter size is so important that it dominates the way wavelets are named. For example, you'll often see the D4 wavelet referred to either as D4 or db2: the 4 refers to the number of coefficients, and the 2 refers to the number of vanishing moments. Both refer to the same mathematical object. Below, I'll explain more about what moments are (and why we want to make them disappear), but for now, just understand that it relates to how well we can "fold up" most of the information in the signal into a smaller number of values. Lossy compression is achieved by keeping those values and throwing away the others.

Now, you may have noticed that CDF 9/7, which is used in JPEG 2000, has two numbers in the name rather than one. In fact, it's also referred to as bior 4.4. That's because it's not a "standard" discrete wavelet at all. In fact, it doesn't even technically preserve the energy in the signal, and that property is the entire reason people got so excited about the DWT in the first place! The numbers 9/7 and 4.4 still refer to the supports and vanishing moments respectively, but now there are two sets of coefficients that define the wavelet. The technical term is that rather than being orthogonal, they are biorthogonal. Rather than getting too deep into what that means mathematically, I'll just review the factors which led to using non-energy-preserving biorthogonal wavelets in the first place.

JPEG 2000

A much more detailed discussion of the design decisions surrounding the CDF 9/7 wavelet can be found in the following paper: Usevitch, Bryan E., "A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000." I'll just review the main points here. Quite often, the orthogonal Daubechies wavelets can actually result in increasing the number of values required to represent the signal. The effect is called coefficient expansion. If we're doing lossy compression, that
may or may not matter (since we're throwing away values at the end anyway), but it definitely seems counterproductive in the context of compression. One way to solve the problem is to treat the input signal as periodic. Just treating the input as periodic results in discontinuities at the edges, which are harder to compress and are just artifacts of the transform. For example, consider the jumps from 3 to 0 in the following periodic extension: $[0, 1, 2, 3] \rightarrow [\ldots, 0, 1, 2, 3, 0, 1, 2, 3, \ldots]$. To solve that problem, we can use a symmetric periodic extension of the signal, as follows: $[0, 1, 2, 3] \rightarrow [\ldots, 0, 1, 2, 3, 3, 2, 1, 0, 0, 1, \ldots]$. Eliminating jumps at the edges is one of the reasons the discrete cosine transform (DCT) is used instead of the DFT in JPEG. Representing a signal with cosines implicitly assumes "front to back looping" of the input signal, so we want wavelets which have the same symmetry property. Unfortunately, the only orthogonal wavelet which has the required characteristics is the Haar (or D2, db1) wavelet, which only has one vanishing moment. Ugh. That leads us to biorthogonal wavelets, which are actually redundant representations, and therefore don't preserve energy. The reason CDF 9/7 wavelets are used in practice is because they were designed to come very close to being energy preserving. They have also tested well in practice. There are other ways to solve the various problems (mentioned briefly in the paper), but these are the broad strokes of the factors involved.

Vanishing moments

So what are moments, and why do we care about them? Smooth signals can be well approximated by polynomials, i.e. functions of the form: $$a + bx + cx^2 + dx^3 + \ldots$$ The moments of a function (i.e. signal) are a measure of how similar it is to a given power of x. Mathematically, this is expressed as an inner product between the function and the power of x.
A vanishing moment means the inner product is zero, and therefore the function doesn't "resemble" that power of x, as follows (for the continuous case): $$\int x^n f(x)\, dx = 0$$

Now each discrete, orthogonal wavelet has two FIR filters associated with it, which are used in the DWT. One is a lowpass (or scaling) filter $\phi$, and the other is a highpass (or wavelet) filter $\psi$. That terminology seems to vary somewhat, but it's what I'll use here. At each stage of the DWT, the highpass filter is used to "peel off" a layer of detail, and the lowpass filter yields a smoothed version of the signal without that detail. If the highpass filter has vanishing moments, those moments (i.e. low-order polynomial features) will get stuffed into the complementary smoothed signal, rather than the detail signal. In the case of lossy compression, hopefully the detail signal won't have much information in it, and therefore we can throw most of it away.

Here's a simple example using the Haar (D2) wavelet. There's typically a scaling factor of $1/\sqrt{2}$ involved, but I'm omitting it here to illustrate the concept. The two filters are as follows: $$\phi = [1, 1] \qquad \psi = [1, -1]$$ The highpass filter vanishes for the zeroth moment, i.e. $x^0 = 1$, therefore it has one vanishing moment. To see this, consider this constant signal: $[2, 2, 2, 2]$. Now intuitively, it should be obvious that there's not much information there (or in any constant signal). We could describe the same thing by saying "four twos". The DWT gives us a way to describe that intuition explicitly. Here's what happens during a single pass of the DWT using the Haar wavelet: $$[2, 2, 2, 2] \rightarrow_{\psi}^{\phi} \left\{ \begin{array}{l} \left[2+2,\; 2+2\right] = \left[4, 4\right] \\ \left[2-2,\; 2-2\right] = \left[0, 0\right] \end{array} \right.$$ and
what happens on the second pass, which operates on just the smoothed signal: $$[4, 4] \rightarrow_{\psi}^{\phi} \left\{ \begin{array}{l} \left[4+4\right] = \left[8\right] \\ \left[4-4\right] = \left[0\right] \end{array} \right.$$ Notice how the constant signal is completely invisible to the detail passes (which all come out to be 0). Also notice how four values of $2$ have been reduced to a single value of $8$. Now if we wanted to transmit the original signal, we could just send the $8$, and the inverse DWT could reconstruct the original signal by assuming that all the detail coefficients are zero. Wavelets with higher-order vanishing moments allow similar results with signals that are well approximated by lines, parabolas, cubics, etc.

Further reading

I'm glossing over a lot of detail to keep the above treatment accessible. The following paper has a much deeper analysis: M. Unser and T. Blu, "Mathematical Properties of the JPEG2000 Wavelet Filters," IEEE Trans. Image Proc., vol. 12, no. 9, Sept. 2003, pp. 1080-1090.

Footnote

The above paper seems to suggest that the JPEG2000 wavelet is called Daubechies 9/7, and is different from the CDF 9/7 wavelet: "We have derived the exact form of the JPEG2000 Daubechies 9/7 scaling filters... These filters result from the factorization of the same polynomial as $Daubechies_{8}$ [10]. The main difference is that the 9/7 filters are symmetric. Moreover, unlike the biorthogonal splines of Cohen-Daubechies-Feauveau [11], the nonregular part of the polynomial has been divided among both sides, and as evenly as possible." [11] A. Cohen, I. Daubechies, and J. C. Feauveau, "Biorthogonal bases of compactly supported wavelets," Comm. Pure Appl. Math., vol. 45, no. 5, pp. 485-560, 1992. The draft of the JPEG2000 standard (PDF
link) that I've browsed also calls the official filter Daubechies 9/7. It references this paper: M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image coding using the wavelet transform," IEEE Trans. Image Proc. 1, pp. 205-220, April 1992. I haven't read either of those sources, so I can't say for sure why Wikipedia calls the JPEG2000 wavelet CDF 9/7. It seems like there may be a difference between the two, but people call the official JPEG2000 wavelet CDF 9/7 anyway (because it's based on the same foundation?). Regardless of the name, the paper by Usevitch describes the one that's used in the standard.
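The two Haar passes worked through above can be reproduced in a few lines of Python (a minimal sketch with the unnormalized filters from the example; the function name is mine):

```python
def haar_pass(signal):
    # One DWT pass with the unnormalized Haar filters phi = [1, 1], psi = [1, -1]:
    # pairwise sums give the smoothed signal, pairwise differences the detail.
    smooth = [signal[i] + signal[i + 1] for i in range(0, len(signal), 2)]
    detail = [signal[i] - signal[i + 1] for i in range(0, len(signal), 2)]
    return smooth, detail

s1, d1 = haar_pass([2, 2, 2, 2])  # ([4, 4], [0, 0])
s2, d2 = haar_pass(s1)            # ([8], [0])
print(s2, d1 + d2)                # [8] with all detail coefficients zero
```

As in the worked example, the constant signal collapses to a single value while every detail coefficient comes out zero.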
If you have a few minutes: most people know how to add and multiply two three-digit numbers on paper. Ask them to do that (or to admit that they could, if they had to), and ask them to acknowledge that they do this task methodically: if this number is greater than 9, then add a carry, and so forth. The description they just gave of what to do is an example of an algorithm. This is how I teach people the word "algorithm", and in my experience it has been the best example. Then you can explain that one may imagine more complex tasks that computers must do, and that therefore there is a need for an unambiguous language to feed a computer these algorithms. So there has been a proliferation of programming languages, because people express their thoughts differently, and you're researching ways to design these languages so that it is harder to make mistakes. This is a very recognizable situation. Most people have no concept that the computers they use run programs, or that those programs are human-written source code, or that a computer could "read" source code, or that computation, which they associate with arithmetic, is the only thing computers do (along with data movement and networking, maybe). My research is in quantum computing, so when people ask me what I do, I don't attempt to explain that. Instead, I try to explain that quantum physics exists (they've usually heard of Schrodinger's cat, and things that are in two places at once), and that because of this strange physics, faster computation might be possible. My goal is to leave the person feeling a little more knowledgeable than they did going in, feeling excited about a world they didn't know existed but with which you have now familiarized them. I find that that's much more valuable than explaining my particular research questions.
To expand on my comment from yesterday, you could do this with the ETE Toolkit (I just copied one logo file rather than converting all 26 to PNG):

    from ete3 import Tree, TreeStyle, faces

    def mylayout(node):
        if node.is_leaf():
            # This doesn't seem to work with EPS files; you could try other formats
            logo_face = faces.ImgFace(node.name.split('.')[0] + ".png")
            faces.add_face_to_node(logo_face, node, column=0)
        node.img_style["size"] = 0  # remove blue dots from nodes

    t = Tree("tree.nwk", format=3)
    ts = TreeStyle()
    ts.layout_fn = mylayout
    ts.show_leaf_name = False  # remove sequence labels
    ts.scale = 10000  # rescale branch lengths so they are longer than the width of the logos
    # You may need to fiddle with dimensions and scaling to get the look you want
    t.render("formatted.png", tree_style=ts, h=3000, w=3000)

If you want all of the logos lined up in a column, add aligned=True to the faces.add_face_to_node call.
Computers have a "real-time clock": a special hardware device (e.g., containing a quartz crystal) on the motherboard that maintains the time. It is always powered, even when you shut your computer off. Also, the motherboard has a small battery that is used to power the clock device even when you disconnect your computer from power. The battery doesn't last forever, but it will last at least a few weeks. This helps the computer keep track of the time even when your computer is shut off. The real-time clock doesn't need much power, so it's not wasting energy. If you take out the clock battery in addition to removing the main battery and disconnecting the power cable, then the computer will lose track of time and will ask you to enter the time and date when you restart it. To learn more, see "Real-time clock", "CMOS battery", and "Why does my motherboard have a battery?". Also, on many computers, when you connect your computer to an internet connection, the OS will go find a time server on the network and query it for the current time. The OS can use this to very accurately set your computer's local clock. This uses the Network Time Protocol, also called NTP.
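For the curious, the NTP wire format itself is quite simple. Below is a sketch (not a full client; no network I/O is performed, and the helper names are mine) showing how an SNTP request packet is built and how an NTP timestamp relates to Unix time:

```python
# NTP counts seconds from 1900-01-01; Unix time counts from 1970-01-01.
NTP_TO_UNIX_OFFSET = 2208988800

def build_sntp_request():
    # A 48-byte SNTP packet; the first byte packs LI=0, Version=3, Mode=3 (client).
    packet = bytearray(48)
    packet[0] = (0 << 6) | (3 << 3) | 3  # 0x1B
    return bytes(packet)

def ntp_to_unix(seconds, fraction):
    # NTP timestamps are 32-bit whole seconds plus a 32-bit binary fraction.
    return seconds - NTP_TO_UNIX_OFFSET + fraction / 2**32

req = build_sntp_request()
print(len(req), hex(req[0]))       # 48 0x1b
print(ntp_to_unix(2208988800, 0))  # 0.0 (the Unix epoch)
```

A real client would send this packet over UDP to port 123 of a time server and read the server's transmit timestamp out of the reply.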
Short answer

The cache efficiency argument has already been explained in detail. In addition, there is an intrinsic argument why Quicksort is fast. If implemented with two "crossing pointers", e.g. here, the inner loops have a very small body. As this is the code executed most often, this pays off.

Long answer

First of all, the average case does not exist! As best and worst cases often are extremes rarely occurring in practice, average case analysis is done. But any average case analysis assumes some distribution of inputs! For sorting, the typical choice is the random permutation model (tacitly assumed on Wikipedia).

Why $O$-notation? Discarding constants in the analysis of algorithms is done for one main reason: if I am interested in exact running times, I need (relative) costs of all involved basic operations (even still ignoring caching issues, pipelining in modern processors...). Mathematical analysis can count how often each instruction is executed, but running times of single instructions depend on processor details, e.g. whether a 32-bit integer multiplication takes as much time as an addition. There are two ways out:

Fix some machine model. This is done in Don Knuth's book series "The Art of Computer Programming" for an artificial "typical" computer invented by the author. In Volume 3 you find exact average case results for many sorting algorithms, e.g.

Quicksort: $11.667(n+1)\ln(n) - 1.74n - 18.74$
Mergesort: $12.5 n \ln(n)$
Heapsort: $16 n \ln(n) + 0.01n$
Insertionsort: $2.25n^2 + 7.75n - 3\ln(n)$ [source]

These results indicate that Quicksort is fastest. But it is only proved on Knuth's artificial machine; it does not necessarily imply anything for, say, your x86 PC. Note also that the algorithms relate differently for small inputs: [source]

Analyse abstract basic operations. For comparison-based sorting, this typically means swaps and key comparisons. In Robert Sedgewick's books, e.g. "Algorithms", this approach is pursued. You find there:

Quicksort: $2n\ln(n)$ comparisons and $\frac{1}{3}n\ln(n)$ swaps on average.
Mergesort: $1.44n\ln(n)$ comparisons, but up to $8.66n\ln(n)$ array accesses (Mergesort is not swap based, so we cannot count that).
Insertionsort: $\frac{1}{4}n^2$ comparisons and $\frac{1}{4}n^2$ swaps on average.

As you see, this does not readily allow comparisons of algorithms as exact runtime analysis does, but the results are independent of machine details.

Other input distributions

As noted above, average cases are always with respect to some input distribution, so one might consider ones other than random permutations. E.g. research has been done for Quicksort with equal elements, and there is a nice article on the standard sort function in Java.
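Knuth's machine-model cost formulas quoted above can be compared directly; a quick Python sketch (formulas transcribed from the list above, function names mine):

```python
import math

# Average-case costs on Knuth's artificial machine (TAOCP Vol. 3)
def quicksort(n):     return 11.667 * (n + 1) * math.log(n) - 1.74 * n - 18.74
def mergesort(n):     return 12.5 * n * math.log(n)
def heapsort(n):      return 16 * n * math.log(n) + 0.01 * n
def insertionsort(n): return 2.25 * n**2 + 7.75 * n - 3 * math.log(n)

n = 1000
for f in (quicksort, mergesort, heapsort, insertionsort):
    print(f.__name__, round(f(n)))
```

At n = 1000 the ordering is Quicksort < Mergesort < Heapsort < Insertionsort, matching the claim that Quicksort is fastest on this machine model; evaluating at small n shows how the algorithms relate differently there.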
Alright, I'm gonna answer this with an argument that "opponents" of my rigid, hard-line position regarding the DFT have. First of all, my rigid, hard-line position: the Discrete Fourier Transform and the Discrete Fourier Series are one and the same. The DFT maps one infinite and periodic sequence $x[n]$, with period $N$, in the "time" domain to another infinite and periodic sequence $X[k]$, again with period $N$, in the "frequency" domain. And the IDFT maps it back. And they're "bijective" or "invertible" or "one-to-one".

DFT: $$X[k] = \sum\limits_{n=0}^{N-1} x[n] e^{-j 2\pi nk/N}$$

IDFT: $$x[n] = \frac{1}{N} \sum\limits_{k=0}^{N-1} X[k] e^{j 2\pi nk/N}$$

That is most fundamentally what the DFT is. It is inherently a periodic or circular thing. $$x[n+N] = x[n] \qquad \forall n \in \mathbb{Z}$$ $$X[k+N] = X[k] \qquad \forall k \in \mathbb{Z}$$

But the periodicity deniers like to say this about the DFT. It is true, it just doesn't change any of the above. So, suppose you had a finite-length sequence $x[n]$ of length $N$ and, instead of periodically extending it (which is what the DFT inherently does), you append this finite-length sequence with zeros infinitely on both left and right. So $$\hat{x}[n] \triangleq \begin{cases} x[n] \qquad & \text{for } 0 \le n \le N-1 \\ 0 & \text{otherwise} \end{cases}$$

Now, this non-repeating infinite sequence does have a DTFT:

DTFT: $$\hat{X}\left(e^{j\omega}\right) = \sum\limits_{n=-\infty}^{+\infty} \hat{x}[n] e^{-j\omega n}$$

$\hat{X}\left(e^{j\omega}\right)$ is the z-transform of $\hat{x}[n]$ evaluated on the unit circle $z = e^{j\omega}$ for infinitely many real values of $\omega$. Now, if you were to sample that DTFT $\hat{X}\left(e^{j\omega}\right)$ at $N$ equally spaced points on the unit circle, with one point at $z = e^{j\omega} = 1$, you would get

$$\begin{align} \hat{X}\left(e^{j\omega}\right) \bigg|_{\omega = 2\pi\frac{k}{N}} &= \sum\limits_{n=-\infty}^{+\infty} \hat{x}[n] e^{-j\omega n} \bigg|_{\omega = 2\pi\frac{k}{N}} \\ &= \sum\limits_{n=-\infty}^{+\infty} \hat{x}[n] e^{-j 2\pi kn/N} \\ &= \sum\limits_{n=0}^{N-1} \hat{x}[n] e^{-j 2\pi kn/N} \\ &= \sum\limits_{n=0}^{N-1} x[n] e^{-j 2\pi kn/N} \\ &= X[k] \end{align}$$

That is precisely how the DFT and DTFT are related. Sampling the DTFT at uniform intervals in the "frequency" domain causes, in the "time" domain, the original sequence $\hat{x}[n]$ to be repeated and shifted by all multiples of $N$ and overlap-added. That's what uniform sampling in one domain causes in the other domain. But, since $\hat{x}[n]$ is hypothesized to be $0$ outside of the interval $0 \le n \le N-1$, that overlap-adding does nothing. It just periodically extends the non-zero part of $\hat{x}[n]$, our original finite-length sequence, $x[n]$.
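The sampling identity derived above is easy to verify numerically. A small NumPy check (the four-sample sequence is just an arbitrary example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # an arbitrary finite-length sequence
N = len(x)
n = np.arange(N)

# The DFT computed directly
X_dft = np.fft.fft(x)

# The DTFT of the zero-padded sequence, sampled at omega = 2*pi*k/N.
# The infinite sum collapses to n = 0..N-1 because x-hat is zero elsewhere.
omegas = 2 * np.pi * np.arange(N) / N
X_dtft_samples = np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

print(np.allclose(X_dft, X_dtft_samples))  # True
```

The two computations agree to machine precision, which is exactly the chain of equalities in the derivation.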
Benefits of a central repository for the community

Having a central repository for packages is very useful, for a couple of reasons:

It makes it very easy to resolve dependencies. Installing all the dependencies manually would be exhausting but also dangerous (point 2).

Package compatibility! If I install a package with dependencies, I would like to be sure that I install the correct versions of all the dependencies.

Reliability thanks to unified and integrated testing. Bioconductor is trying really hard to force developers to write good tests, and they also have people manually testing submitted packages. They also remove packages that are not maintained. Packages in Bioconductor are (reasonably) reliable.

In the end, installing dev versions of R packages is, in my opinion, very bad practice for reproducible science. If the developers delete the GitHub repo, the commit hash you have used won't be enough to get the code.

Benefits of a central repository for developers

I forgot about the advantages for you as a developer of submitting your package to Bioconductor:

Your package will be more visible.
Users will have a guarantee that your code was checked by a third person.
Your package will be easier for users to install.
Your package will be forced to use standardized vignettes, version tags and tests, so it will be more accessible for the community to build on your code.

Bioconductor-specific advantages over CRAN

I see the big advantage in the community support page provided by Bioconductor. See @llopis' comprehensive elaboration.
The answer to all questions is no. In fact, even the right reaction to the first sentence (that the Planck scale is a "discrete measure") is no. The Planck length is a particular value of distance which is as important as $2\pi$ times that distance or any other multiple. The fact that we can speak about the Planck scale doesn't mean that distance becomes discrete in any way. We may also talk about the radius of the Earth, which doesn't mean that all distances have to be multiples of it.

In quantum gravity, geometry with the usual rules doesn't work if the (proper) distances are thought of as being shorter than the Planck scale. But this invalidity of classical geometry doesn't mean that anything about the geometry has to become discrete (although it's a favorite meme promoted by popular books). There are lots of other effects that make the sharp, point-based geometry we know invalid, and indeed, we know that in the real world the geometry collapses near the Planck scale for reasons other than discreteness.

Quantum mechanics got its name because according to its rules, some quantities, such as the energy of bound states or the angular momentum, can only take "quantized" or discrete values (eigenvalues). But despite the name, that doesn't mean that all observables in quantum mechanics have to possess a discrete spectrum. Do positions or distances possess a discrete spectrum? The proposition that distances or durations become discrete near the Planck scale is a scientific hypothesis, and it is one that may be (and, in fact, has been) experimentally falsified. For example, these discrete theories inevitably predict that the time needed for photons to get from very distant places in the universe to the Earth will measurably depend on the photons' energy. The Fermi satellite has shown that the delay is zero to within dozens of milliseconds, which proves that violations of Lorentz symmetry (special relativity) of the magnitude one would inevitably get from violations of the continuity of spacetime have to be much smaller than what a generic discrete theory predicts. In fact, the argument used by the Fermi satellite only employs the most straightforward way to impose upper bounds on Lorentz violation. Using so-called birefringence, one may improve the bounds by 14 orders of magnitude! This safely kills any imaginable theory that violates Lorentz symmetry, or even the continuity of spacetime, at the Planck scale. In some sense, the birefringence method applied to gamma-ray bursts allows one to "see" the continuity of spacetime at distances that are 14 orders of magnitude shorter than the Planck length. It doesn't mean that all physics at those "distances" works just like in large flat space. It doesn't. But it surely does mean that some physics, such as the existence of photons with arbitrarily short wavelengths, has to work just as it does at long distances. And it safely rules out all hypotheses that spacetime may be built out of discrete, Lego-like, or any qualitatively similar building blocks.
The feeling you describe is called "paresthesia," and according to the NINDS information page, it happens "when sustained pressure is placed on a nerve."
There is no stationary signal. Stationary and non-stationary are characterisations of the process that generated the signal. A signal is an observation: a recording of something that has happened, a recording of a series of events as a result of some process. If the properties of the process that generates the events do not change in time, then the process is stationary.

We know what a signal $x(n)$ is: it is a collection of events (measurements) at different time instances ($n$). But how can we describe the process that generated it? One way of capturing the properties of a process is to obtain the probability distribution of the events it describes. Practically, this could look like a histogram, but that's not entirely useful here because it only provides information on each event as if it were unrelated to its neighbouring events. Another type of "histogram" is one where we fix an event and ask what the probability is that the other events happen given that this event has already taken place. So, if we were to capture this "monster histogram" that describes the probability of transition from any possible event to any other possible event, we would be able to describe any process. Furthermore, if we were to obtain this at two different time instances and the event-to-event probabilities did not seem to change, then that process would be called a stationary process. (Absolute knowledge of the characteristics of a process in nature is rarely assumed, of course.) Having said this, let's look at the examples:

White noise: stationary, because any signal value (event) is equally probable given any other signal value (another event) at any two time instances, no matter how far apart they are.

Coloured noise: what is coloured noise? It is essentially white noise with some additional constraints. The constraints mean that the event-to-event probabilities are now not equal, but this doesn't mean that they are allowed to change with time. So, pink noise is filtered white noise whose frequency spectrum decreases following a specific relationship. This means that pink noise has more low frequencies, which in turn means that any two neighbouring events would have higher probabilities of occurring, but that would not hold for any two events (as it did in the case of white noise). Fine, but if we were to obtain these event-to-event probabilities at two different time instances and they did not seem to change, then the process that generated the signals would be stationary.
Chirp: non-stationary, because the event-to-event probabilities change with time. Here is a relatively easy way to visualise this: consider a sampled version of the lowest-frequency sinusoid at some sampling frequency. This has some event-to-event probabilities. For example, you can't really go from -1 to 1; if you are at -1, then the next value is much more likely to be closer to -0.9, depending of course on the sampling frequency. But, actually, to generate the higher frequencies you can resample this low-frequency sinusoid. All you have to do for the low frequency to change pitch is to "play it faster". Aha! Therefore, yes! You can actually move from -1 to 1 in one sample, provided that the sinusoid is resampled really, really fast. Therefore the event-to-event probabilities change with time! We have bypassed so many different values and gone from -1 to 1 in this extreme case. So this is a non-stationary process.

Sinusoid: stationary; self-explanatory, given #3.

Sum of multiple sinusoids with different periods and amplitudes: self-explanatory, given #1, #2, #3 and #4. If the periods and amplitudes of the components do not change in time, then the constraints between the samples do not change in time, and therefore the process will end up being stationary.

ECG, EEG, PPT and similar: I am not really sure what PPT is, but ECG and EEG are prime examples of non-stationary signals. Why? The ECG represents the electrical activity of the heart. The heart has its own oscillator, which is modulated by signals from the brain at every heartbeat! Therefore, since the process changes with time (i.e. the way the heart beats changes at each heartbeat), it is considered non-stationary. The same applies to the EEG. The EEG represents a sum of localised electrical activity of neurons in the brain. The brain cannot be considered stationary in time, since a human being performs different activities.
Conversely, if we were to fix the observation window, we could claim some form of stationarity. For example, in neuroscience you can say that 30 subjects were instructed to stay at rest with their eyes closed while EEG recordings were obtained for 30 seconds, and then say that for those specific 30 seconds and condition (rest, eyes closed) the brain (as a process) is assumed to be stationary.

Chaotic system output: similar to #6, chaotic systems could be considered stationary over brief periods of time, but that's not general.

Temperature recordings: similar to #6 and #7. Weather is a prime example of a chaotic process; it cannot be considered stationary for too long.

Financial indicators: similar to #6, #7, #8 and #9. In general, they cannot be considered stationary.

A useful concept to keep in mind when talking about practical situations is ergodicity. Also, there is something that eventually creeps up here, and that is the scale of observation. Look too close and it's not stationary; look from very far away and everything is stationary. The scale of observation is context dependent. For more information and a large number of illustrative examples as far as chaotic systems are concerned, I would recommend this book, specifically chapters 1, 6, 7, 10, 12 and 13, which are really central on stationarity and periodicity. Hope this helps.
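One way to see the white-noise/chirp distinction empirically is to compare a simple statistic over an early and a late window of each signal. In this sketch (signal parameters are arbitrary choices of mine) the statistic is the zero-crossing rate; it stays put for white noise but drifts for the chirp:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = np.arange(n) / n

white = rng.standard_normal(n)                 # stationary process
chirp = np.sin(2 * np.pi * (5 + 200 * t) * t)  # instantaneous frequency grows with time

def zero_crossing_rate(x):
    # Fraction of adjacent sample pairs with a sign change.
    s = np.signbit(x).astype(int)
    return np.mean(np.abs(np.diff(s)))

for name, sig in [("white", white), ("chirp", chirp)]:
    early = zero_crossing_rate(sig[: n // 4])
    late = zero_crossing_rate(sig[-n // 4 :])
    print(name, round(early, 4), round(late, 4))
```

The white-noise windows both come out near 0.5, while the chirp's late window has a several-times-higher rate than its early window, which is the "event-to-event probabilities change with time" argument made concrete.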
maybe you can be more specific about the scope and scale of your work ( academic project? desktop or mobile commercial product? web - based commercial project? ). some recommendations and comments : matlab is common in the academic world, and quite good for sketching / validating ideas. you will have access to a large body of code from other researchers ( in cv and machine learning ) ; prototyping and debugging will be very fast and easy, but whatever you will have developed in this environment will be hard to put in production. depending on what your code is doing, you might have memory / performance problems ( there are situations where you can't describe what you want to do in terms of matlab's primitives and have to start looping on pixels and matlab's being an interpreted language is not helping in this context ). interaction with databases, web servers etc is not easy, sometimes impossible ( you won't get a matlab program to become a thrift server called by a web front - end ). costs $ $ $. c + + is what is used for many production - grade cv systems ( think of something at the scale of google's image search or streetview, or many commercial robotics applications ). good libraries like opencv, excellent performance, easy to put into a production environment. if you need to do machine learning, there are many libraries out there ( libsvm / svmlight, torch ). if you have to resort to " loop on all pixels " code it will perform well. easy to use for coding the systems / storage layers needed in a large scale retrieval system ( eg : a very large on - disk hash map for storing an inverted index mapping feature hashes to images ). things like thrift / message pack can turn your retrieval program into a rpc server which can be called by a web front - end. 
However: not very agile for prototyping, quite terrible for trying out new ideas, slower development time; and, put in the hands of inexperienced coders, it might have hard-to-track performance and/or stability problems. Python is somehow a middle ground between both. You can use it for MATLAB-style numerical computing (with NumPy and SciPy) and have bindings to libraries like OpenCV. You can do systems/data-structure work with it and get acceptable performance. There are quite a few machine learning packages out there, though fewer than in MATLAB or C++. Unless you have to resort to "loop on all pixels" code, you will be able to code pretty much everything you could have done with C++, with roughly a 1:1.5 to 1:3 ratio of performance and a 2:1 to 10:1 ratio of source code size (debatable). But depending on the success of your project, there will be a point where performance will be an issue and rewriting to C++ won't be an option.
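To make the "loop on all pixels" point concrete, here is a small hypothetical sketch (function names, array sizes and the threshold operation are all invented for illustration): the same per-pixel operation written as an explicit Python loop and as a single NumPy expression. Both give identical results, but the vectorized form avoids the interpreted inner loop.

```python
import numpy as np

def threshold_loop(img, t):
    """Per-pixel Python loop -- slow, since every iteration is interpreted."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def threshold_vectorized(img, t):
    """Same operation expressed with NumPy primitives -- the loop runs in C."""
    return np.where(img > t, 255, 0).astype(img.dtype)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
same = np.array_equal(threshold_loop(img, 7), threshold_vectorized(img, 7))
```

The vectorized version is the kind of code you want to stay in; the loop version is the case where MATLAB or pure Python starts to hurt.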
This is a very complex issue, since it deals with EMI/RFI, ESD, and safety stuff. As you've noticed, there are many ways to handle chassis and digital grounds -- everybody has an opinion and everybody thinks that the other people are wrong. Just so you know, they are all wrong and I'm right. Honest! :) I've done it several ways, but the way that seems to work best for me is the same way that PC motherboards do it. Every mounting hole on the PCB connects signal GND (a.k.a. digital ground) directly to the metal chassis through a screw and metal stand-off. For connectors with a shield, that shield is connected to the metal chassis through as short a connection as possible. Ideally the connector shield would be touching the chassis; otherwise there would be a mounting screw on the PCB as close to the connector as possible. The idea here is that any noise or static discharge stays on the shield/chassis and never makes it inside the box or onto the PCB. Sometimes that's not possible, so if it does make it to the PCB you want to get it off of the PCB as quickly as possible. Let me make this clear: for a PCB with connectors, signal GND is connected to the metal case using mounting holes. Chassis GND is connected to the metal case using mounting holes. Chassis GND and signal GND are not connected together on the PCB, but instead use the metal case for that connection. The metal chassis is then eventually connected to the ground pin on the 3-prong AC power connector, not the neutral pin. There are more safety issues when we're talking about 2-prong AC power connectors -- you'll have to look those up, as I'm not as well versed in those regulations/laws. Tie them together at a single point with a 0 ohm resistor near the power supply: Don't do that. Doing this would assure that any noise on the cable has to travel through your circuit to get to GND. This could disrupt your circuit.
The reason for the 0-ohm resistor is that this doesn't always work, and having the resistor there gives you an easy way to remove the connection or replace the resistor with a cap. Tie them together with a single 0.01 µF/2 kV capacitor near the power supply: Don't do that. This is a variation of the 0-ohm resistor thing. Same idea, but the thought is that the cap will allow AC signals to pass but not DC. Seems silly to me, as you want DC (or at least 60 Hz) signals to pass so that the circuit breaker will pop if there is a bad failure. Tie them together with a 1 M resistor and a 0.1 µF capacitor in parallel: Don't do that. The problem with the previous "solution" is that the chassis is now floating relative to GND and could collect enough charge to cause minor issues. The 1 MΩ resistor is supposed to prevent that. Otherwise this is identical to the previous solution. Short them together with a 0 ohm resistor and a 0.1 µF capacitor in parallel: Don't do that. If there is a 0 ohm resistor, why bother with the cap? This is just a variation on the others, but with more things on the PCB to allow you to change things up until it works. Tie them together with multiple 0.01 µF capacitors in parallel near the I/O: Closer. Near the I/O is better than near the power connector, as noise wouldn't travel through the circuit. Multiple caps are used to reduce the impedance and to connect things where it counts. But this is not as good as what I do. Short them together directly via the mounting holes on the PCB: As mentioned, I like this approach. Very low impedance, everywhere. Tie them together with capacitors between digital GND and the mounting holes: Not as good as just shorting them together, since the impedance is higher and you're blocking DC. Tie them together via multiple low inductance connections near the I/O connectors: Variations on the same thing. Might as well call the "multiple low inductance connections" things like "ground planes" and "mounting holes". Leave them totally isolated (not connected together anywhere): This is basically what is done when you don't have a metal chassis (like an all-plastic enclosure).
This gets tricky and requires careful circuit design and PCB layout to do right and still pass all EMI regulatory testing. It can be done, but as I said, it's tricky.
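As a rough back-of-the-envelope sketch of why a coupling capacitor between grounds behaves so differently at mains and RF frequencies (my own illustration, not from the answer above): the magnitude of a capacitor's impedance is \$|Z| = 1/(2 \pi f C)\$, so a small cap is nearly an open circuit at 60 Hz but nearly a short at RF.

```python
import math

def cap_impedance(f_hz, c_farads):
    """Magnitude of a capacitor's impedance, |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# A 0.01 uF (10 nF) capacitor between chassis and signal ground:
z_60hz = cap_impedance(60, 10e-9)     # ~265 kOhm: blocks 60 Hz fault current
z_10mhz = cap_impedance(10e6, 10e-9)  # ~1.6 Ohm: near-short for RF noise
```

This is exactly the trade-off discussed above: the cap passes high-frequency noise to chassis while preventing the low-frequency fault current that should be tripping a breaker.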
In the presence of these strong acids the $\ce{-NMe2}$ group is protonated, and the protonated form is electron-withdrawing via the inductive effect. This discourages attack at the electron-poor ortho position. Under the conditions I know for that experiment, you get a mixture of para- and meta-product, but no ortho-product, due to steric hindrance.
Suppose the leg spacing for a square and a triangular chair is the same; then the positions of the legs look like: If we call the leg spacing $2d$, then for the square chair the distance from the centre to the edge is $d$, while for the triangular chair it's $d\tan 30^\circ$, or about $0.58d$. That means on the triangular chair you can only lean about half as far before you fall over, so it is much less stable. To get the same stability as the square chair you'd need to increase the leg spacing to $(2/\tan 30^\circ)d$, or about $3.5d$, which would make the chair too big. A pentagonal chair would be even more stable, and a hexagonal chair more stable still, and so on. However, increasing the number of legs gives diminishing increases in stability and costs more. Four-legged chairs have emerged (from several millennia of people falling off chairs) as a good compromise.
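The geometry above can be checked with a few lines of code. This is a sketch under the answer's assumptions: legs at the corners of a regular $n$-gon with side length $2d$, so the distance from the centre to the nearest tipping edge (the apothem) is $d/\tan(\pi/n)$, which reproduces $d$ for the square and $\approx 0.58d$ for the triangle.

```python
import math

def tipping_distance(n_legs, d=1.0):
    """Apothem of a regular n-gon with side 2*d: how far you can lean."""
    return d / math.tan(math.pi / n_legs)

# Margins for 3- to 6-legged chairs with the same leg spacing:
margins = {n: tipping_distance(n) for n in (3, 4, 5, 6)}
```

The successive gains (roughly 0.58, 1.0, 1.38, 1.73 times $d$) shrink with each added leg, which is the "diminishing increases in stability" the answer mentions.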
Failing to look at (plot) the data.
Aromaticity is not binary; rather, there are degrees of aromaticity. The degree of aromaticity in benzene is large, whereas the spiroaromaticity in spiro[4.4]nonatetraene is relatively small. The aromaticity in naphthalene is not twice that of benzene. Aromaticity has come to mean a stabilization resulting from p-orbital overlap (although other orbitals can also be involved) in a pi-type system. As the examples above indicate, the stabilization can be large or small. Let's consider $\ce{C60}$: Bond alternation is often taken as a sign of non-aromatic systems. In $\ce{C60}$ there are different bond lengths, ~1.4 and 1.45 angstroms. However, this variation is on the same order as that found in polycyclic aromatic hydrocarbons, and less than that observed in linear polyenes. Conclusion: aromatic, but less so than benzene. Magnetic properties are related to electron delocalization and are often used to assess aromaticity. Both experiment and calculations suggest the existence of ring currents (diamagnetic and paramagnetic) in $\ce{C60}$. Conclusion: although the analysis is complex, it is consistent with at least some degree of aromaticity. Reactivity: substitution reactions are not possible, as no hydrogens are present in $\ce{C60}$. When an anion or radical is added to $\ce{C60}$, the electron(s) are not delocalized over the entire fullerene structure. However, most addition reactions are reversible, suggesting that there is some extra stability or aromaticity associated with $\ce{C60}$. Conclusion: not as aromatic as benzene. Resonance energy calculations have been performed and give conflicting results, although most suggest a small stabilization.
Theoretical analysis of the following isodesmic reaction $$\ce{C60 + 120 CH4 -> 30 C2H4 + 60 C2H6}$$ suggested that it only took half as much energy to break all of the bonds in $\ce{C60}$ compared to the same bond-breaking reaction with the appropriate number of benzenes. Conclusion: some aromatic stabilization, but significantly less than benzene. This brief overview suggests that $\ce{C60}$ does display properties that are consistent with some degree of aromatic stabilization, albeit less than that found with benzene.
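As a quick sanity check (my own bookkeeping, not from the cited analysis), the isodesmic reaction quoted above balances in both carbon and hydrogen:

```python
# Atom bookkeeping for: C60 + 120 CH4 -> 30 C2H4 + 60 C2H6
# Each species is (stoichiometric multiplier, carbons, hydrogens).

def atoms(species):
    """Total (C, H) atom counts for one side of a reaction."""
    c = sum(m * nc for m, nc, nh in species)
    h = sum(m * nh for m, nc, nh in species)
    return c, h

left = atoms([(1, 60, 0), (120, 1, 4)])   # C60 + 120 CH4
right = atoms([(30, 2, 4), (60, 2, 6)])   # 30 C2H4 + 60 C2H6
# Both sides carry 180 C and 480 H, as an isodesmic comparison requires.
```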
First, a warning: I suspect this response is likely not going to be immediately comprehensible. There is a formal set-up for your question, and there are tools available to understand what's going on. They're not particularly light tools, but they exist and they're worthy of being mentioned. Before I write down the main theorem, let me set up some terminology. The tools belong to a subject called manifold theory and algebraic topology. The names of the tools I'm going to use are things like: the isotopy extension theorem, fibre bundles, fibrations and homotopy groups. You have a surface $\Sigma$; it's your shirt or whatever else you're interested in, some surface in 3-dimensional space. Surfaces have automorphism groups; let me call it $\operatorname{Aut}(\Sigma)$. These are, say, all the self-homeomorphisms or diffeomorphisms of the surface. And surfaces can sit in space. A way of putting a surface in space is called an embedding. Let's call all the embeddings of the surface $\operatorname{Emb}(\Sigma, \mathbb{R}^3)$. $\operatorname{Emb}(\Sigma, \mathbb{R}^3)$ is a set, but in the subject of topology these sets have a natural topology as well. We think of them as a space where "nearby" embeddings are almost the same, except for maybe a little wiggle here or there. The topology on the set of embeddings is called the compact-open topology (see Wikipedia for details on most of these definitions). Okay, so now there's some formal nonsense. Look at the quotient space $\operatorname{Emb}(\Sigma, \mathbb{R}^3)/\operatorname{Aut}(\Sigma)$. You can think of this as all the ways $\Sigma$ can sit in space, but without any labelling -- the surface has no parametrization. So it's the space of all subspaces of $\mathbb{R}^3$ that just happen to be homeomorphic to your surface. Richard Palais has a really nice theorem that puts this all into a pleasant context.
The preamble is that we need to think of everything as living in the world of smooth manifolds -- smooth embeddings, $\operatorname{Aut}(\Sigma)$ is the diffeomorphism group of the surface, etc. There are two locally trivial fibre bundles (or something easier to prove -- Serre fibrations); this is the "global" isotopy extension theorem: $$\operatorname{Diff}(\mathbb{R}^3, \Sigma) \to \operatorname{Diff}(\mathbb{R}^3) \to \operatorname{Emb}(\Sigma, \mathbb{R}^3)/\operatorname{Aut}(\Sigma)$$ $$\operatorname{Diff}(\mathbb{R}^3 \text{ fix } \Sigma) \to \operatorname{Diff}(\mathbb{R}^3, \Sigma) \to \operatorname{Aut}(\Sigma)$$ Here $\operatorname{Diff}(\mathbb{R}^3)$ indicates diffeomorphisms of $\mathbb{R}^3$ that are the identity outside of a sufficiently large ball, say. So the Palais theorem, together with the homotopy long exact sequence of a fibration, gives you a language that allows you to translate between automorphisms of your surface and motions of the surface in space. It's a theorem of Jean Cerf's that $\operatorname{Diff}(\mathbb{R}^3)$ is connected. A little diagram chase says that an automorphism of a surface can be realized by a motion of that surface in 3-space if and only if that automorphism of the surface extends to an automorphism of 3-space. For closed surfaces, the Jordan-Brouwer separation theorem gives you an obstruction to turning your surface inside-out. But for non-closed surfaces you're out of tools. To figure out if you can realize an automorphism as a motion, you literally have to try to extend it "by hand". This is a very general phenomenon -- you have one manifold sitting in another, but rarely does an automorphism of the submanifold extend to the ambient manifold. You see this phenomenon happening in various other branches of mathematics as well -- an automorphism of a subgroup does not always extend to the ambient group, etc. So you try your luck and
try to build the extension yourself. In some vague sense that's a formal analogy between the visceral mystery of turning the surface inside-out and a kind of formalized mathematical problem, but of a fundamentally analogous feel. We're looking for automorphisms that reverse orientation. For an arbitrary surface with boundary in 3-space, it's not clear if you can turn the surface inside-out. This is because the surface might be knotted. Unknotted surfaces are examples like your T-shirt. Let's try to cook up something that can't be turned inside-out. The automorphism group of a 3-times punctured sphere has 12 path components (12 elements up to isotopy). There are 6 elements that preserve orientation, and 6 that reverse. In particular, the orientation-reversing automorphisms reverse the orientation of all the boundary circles. So if you could come up with a knotted pair of pants (3-times punctured surface) such that its boundary circles did not admit a symmetry reversing the orientations of all three circles simultaneously, you'd be done. Maybe this doesn't seem like a reduction to you, but it is. For example, there are things called non-invertible knots: So how do we cook up a knotted pair of pants from that? Here's the idea. The non-invertible knot in the link above is sometimes called $8_{17}$. Here is another picture of it: Here is a variant on that. Interpret this image as a ribbon of paper that has three boundary circles. One boundary circle is unknotted. One is $8_{17}$. The other is some other knot. It turns out that other knot isn't trivial, nor is it $8_{17}$. So why can't this knotted pair of pants be turned inside-out? Well, the three knots are distinct, and $8_{17}$ can't be reversed. The reason why I know the other knot isn't $8_{17}$? It's a hyperbolic knot and it has a different hyperbolic volume ($4.40083...$) than $8_{17}$ ($10.9859...$).
FYI: in some sense this is one of the simplest surfaces with non-trivial boundary that can't be turned inside-out. All discs can be turned inside-out. Similarly, all annuli (regardless of how they're knotted) can be turned inside-out. So for genus-zero surfaces, 3 boundary components is the least you can have if you're looking for a surface that can't be turned inside-out. Edited to correct for Jason's comment. Comment added later: I suggest that if you purchase a garment of this form you return it to the manufacturer.
This is what I have found on the topic so far. There are a few competing theories for why the solder mask of PCBs is commonly green. Possible explanations: the US military required PCBs to be green; when mixing the base resin and the hardener together, they turn green; it is an ergonomic choice due to the human eye's ability to detect green, and the contrast of green with white text; or some combination of the above. Source: TheFreeLibrary. Source: Quora. Digging deeper... Liquid photoimageable solder mask (LPISM) technology was developed in the late 1970s and early 1980s to meet the new application demands placed upon solder masks by the rise of surface-mount technology. It seems that modern, green-colored PCBs emerged with this technology, and the technology seems to trace back to this patent from 1980: Consequently, endeavours have been made to produce improved processes for producing a mask image of relatively high resolution for the small-conductor art. It was therefore a relatively obvious step to use photo processes in association with UV (ultra-violet) sensitive photopolymers. So basically, UV-sensitive photopolymers were available and were the first to be used for LPISM. The polymer solution they used in the patent included 3 g of dye, but did not describe the color of the dye or why they used it. When developing an invention for the first time, it seems highly unlikely they would choose the dye or photopolymers because of the military's request or for ergonomic considerations, so we can rule those out. The most plausible explanation is that these were the most accessible, inexpensive and effective materials to be used in fabrication. For whatever reason, the UV-sensitive photopolymers that were effective for this invention happened to be green at the time, and this material's proliferation is most likely due to its low cost. Alternatives do exist these days, and PCBs can be virtually any color.
I know this is all speculation, and I wish I could give a more definitive answer. I've read through patents and papers and the Electronic Materials and Processes Handbook, but still haven't nailed it down yet. Maybe a PCB process engineer or researcher can help us here.
Yes, this is possible, through something called heteropaternal superfecundation (see below for further explanation). Of all twin births, 30% are identical and 70% are non-identical (fraternal) twins. Identical twins result when a zygote (one egg, or ovum, fertilized by one sperm) splits at an early stage to become twins. Because the genetic material is essentially the same, they resemble each other closely. Typically, during ovulation only one ovum is released to be fertilized by one sperm. However, sometimes a woman's ovaries release two ova. Each must be fertilized by a separate sperm cell. If she has intercourse with two different men, the two ova can be fertilized by sperm from different sexual partners. The term for this event is heteropaternal superfecundation (HS): twins who have the same mother but two different fathers. This has been proven in paternity suits (in which there will be a bias selecting for possible infidelity) involving fraternal twins, where genetic testing must be done on each child. The frequency of heteropaternal superfecundation in this group was found (in one study) to be 2.4%. As the study's authors state, "inferences about the frequency of HS in other populations should be drawn with caution."
One argument put forward has been that aluminum is very poorly bioavailable, more so than many other elements. Aluminum oxide is very insoluble in water. In addition, any dissolved aluminum that does form in seawater is likely to be precipitated by silicic acid, forming hydroxyaluminosilicates. From Chris Exley's 2009 article in Trends in Biochemical Sciences: But how has the by far most abundant metal in the Earth's crust remained hidden from biochemical evolution? There are powerful arguments, many of which influenced Darwin's own thinking [15], which identify natural selection as acting upon geochemistry as it acts upon biochemistry. I have argued previously that the lithospheric cycling of aluminium, from the rain-fuelled dissolution of mountains through to the subduction of sedimentary aluminium and its re-emergence in mountain building, depends upon the 'natural selection' of increasingly insoluble mineral phases of the metal [7]. The success of this abiotic cycle is reflected in the observation that less than 0.001% of cycled aluminium enters and passes through the biotic cycle. In addition, only an insignificant fraction of the aluminium entering the biotic cycle, living things, is biologically reactive. However, my own understanding of such an explanation of how life on Earth evolved in the absence of biologically available aluminium was arrived at by a somewhat serendipitous route! In studying the acute toxicity of aluminium in Atlantic salmon I discovered that the aqueous form of silicon, silicic acid, protected against the toxicity of aluminium [16]. Subsequent work showed that protection was afforded through the formation of hydroxyaluminosilicates (HAS) [17] which, intriguingly, are one of the sparingly soluble secondary mineral phases of the abiotic cycling of aluminium!
The discovery that silicic acid was a geochemical control of the biological availability of aluminium, though now seemingly obvious in hindsight, was a seminal moment in my understanding of the bioinorganic chemistry of aluminium, and although it helped me to understand the non-selection of aluminium in biochemical evolution, it also provided me with a missing link in the wider understanding of the biological essentiality of silicon. Dr. Exley is one of the few scholars who appears to have written in depth about this issue. Thus, perhaps it is fair to say that (a) your question doesn't have a definitive answer, but (b) the poorly accessible nature of aluminum over geological time, due to its interaction with and precipitation by silicic acid, is the leading hypothesis. It's worth noting that when aluminum is artificially introduced into metalloenzymes in place of naturally occurring metals, the resulting alumino-enzymes can retain activity, as a 1999 article in JACS by Merkx & Averill shows.
An LED requires a minimum voltage before it will turn on at all. This voltage varies with the type of LED, but is typically in the neighborhood of 1.5 V to 4.4 V. Once this voltage is reached, current will increase very rapidly with voltage, limited only by the LED's small resistance. Consequently, any voltage much higher than this will result in a very large current through the LED, until either the power supply is unable to supply enough current and its voltage sags, or the LED is destroyed. Above is an example of the current-voltage relationship for an LED. Since current rises so rapidly with voltage, usually we can simplify our analysis by assuming the voltage across an LED is a constant value, regardless of current. In this case, 2 V looks about right. Straight across the battery: No battery is a perfect voltage source. As the resistance between its terminals decreases, and the current draw goes up, the voltage at the battery terminals will decrease. Consequently, there is a limit to the current the battery can provide. If the battery can't supply enough current to destroy your LED, and the battery itself won't be destroyed by sourcing this much current, putting the LED straight across the battery is the easiest, most efficient way to do it. Most batteries don't meet these requirements, but some coin cells do. You might know them from LED throwies. Series resistor: The simplest method to limit the LED current is to place a resistor in series. We know from Ohm's law that the current through a resistor is equal to the voltage across it divided by the resistance. Thus, there's a linear relationship between voltage and current for a resistor. Placing a resistor in series with the LED serves to flatten the voltage-current curve above, such that small changes in supply voltage don't cause the current to shoot up radically. Current will still increase, just not radically.
The value of the resistor is simple to calculate: subtract the LED's forward voltage from your supply voltage, and this is the voltage that must be across the resistor. Then use Ohm's law to find the resistance necessary to get the desired current in the LED. The big disadvantage here is that a resistor reduces the voltage by converting electrical energy into heat. We can calculate the power in the resistor with any of these: \$P = IE\$, \$P = I^2 R\$, \$P = E^2/R\$. Any power in the resistor is power not used to make light. So why don't we make the supply voltage very close to the LED voltage, so we don't need a very big resistor, thus reducing our power losses? Because if the resistor is too small, it won't regulate the current well, and our circuit will be subject to large variations in current with temperature, manufacturing variation, and supply voltage, just as if we had no resistor at all. As a rule of thumb, at least 25% of the voltage should be dropped over the resistor. Thus, one can never achieve better than 75% efficiency with a series resistor. You might be wondering if multiple LEDs can be put in parallel, sharing a single current-limiting resistor. You can, but the result will not be stable; one LED may hog all the current and be damaged. See "Why exactly can't a single resistor be used for many parallel LEDs?". Linear current source: If the goal is to deliver a constant current to the LEDs, why not make a circuit that actively regulates the current to the LEDs? This is called a current source, and here is an example of one you can build with ordinary parts: Here's how it works: Q2 gets its base current through R1. As Q2 turns on, a large current flows through D1, through Q2, and through R2. As this current flows through R2, the voltage across R2 must increase (Ohm's law). If the voltage across R2 increases to 0.6 V, then Q1 will begin to turn on, stealing base current from Q2, limiting the current in D1, Q2, and R2. So, R2 controls the current. This circuit works by limiting the voltage across R2 to no more than 0.6 V. So to calculate the value needed for R2, we can just use Ohm's law to find the resistance that gives us the desired current at 0.6 V. But what have we gained? Now any excess voltage is just being dropped in Q2 and R2, instead of a series resistor. Not much more efficient, and much more complex. Why would we bother?
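Before answering that, the series-resistor arithmetic described earlier is easy to sketch numerically. The supply voltage, LED forward voltage and target current below are invented example values:

```python
def led_resistor(v_supply, v_led, i_led):
    """Series-resistor sizing: drop the excess voltage across R at i_led."""
    v_r = v_supply - v_led          # voltage the resistor must drop
    r = v_r / i_led                 # Ohm's law gives the resistance
    p = v_r * i_led                 # power wasted as heat in the resistor
    efficiency = v_led / v_supply   # fraction of supply power reaching the LED
    return r, p, efficiency

# Example: 5 V supply, 2 V LED, 20 mA target.
r, p, eff = led_resistor(5.0, 2.0, 0.020)
# 150 ohms, 60 mW wasted, 40% efficiency -- well below the 75% ceiling,
# because far more than 25% of the supply is dropped across the resistor.
```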
Remember that with a series resistor, we needed at least 25% of the total voltage to be across the resistor to get adequate current regulation. Even so, the current still varies a little with supply voltage. With this circuit, the current hardly varies with supply voltage under all conditions. We can put many LEDs in series with D1, such that their total voltage drop is, say, 20 V. Then we need only another 0.6 V for R2, plus a little more so Q2 has room to work. Our supply voltage could be 21.5 V, and we are wasting only 1.5 V in things that aren't LEDs. This means our efficiency can approach \$20V/21.5V = 93\%\$. That's much better than the 75% we can muster with a series resistor. Switched-mode current sources: For the ultimate solution, there is a way to (in theory, at least) drive LEDs with 100% efficiency. It's called a switched-mode power supply, and it uses an inductor to convert any voltage to exactly the voltage needed to drive the LEDs. It's not a simple circuit, and we can't make it entirely 100% efficient in practice, since no real components are ideal. However, properly designed, this can be more efficient than the linear current source above, and it maintains the desired current over a wider range of input voltages. Here's a simple example that can be built with ordinary parts: I won't claim that this design is very efficient, but it does serve to demonstrate the principle of operation. Here's how it works: U1, R1, and C1 generate a square wave. Adjusting R1 controls the duty cycle and frequency, and consequently the brightness of the LED. When the output (pin 3) is low, Q1 is switched on. Current flows through the inductor, L1. This current grows as energy is stored in the inductor. Then the output goes high, and Q1 switches off. But an inductor acts as a flywheel for current: the current that was flowing in L1 must continue flowing, and the only way to do that is through D1. The energy stored in L1 is transferred to D1. The output goes low again, and thus the circuit alternates between storing energy in L1 and dumping it into D1. So actually, the LED blinks rapidly, but at around 25 kHz it's not visible.
The neat thing about this is that it doesn't matter what our supply voltage is, or what the forward voltage of D1 is. In fact, we can put many LEDs in series with D1 and they will still light, even if the total forward voltage of the LEDs exceeds the supply voltage. With some extra circuitry, we can make a feedback loop that monitors the current in D1 and effectively adjusts R1 for us, so the LED will maintain the same brightness over a wide range of supply voltages. Handy, if you want the LED to stay bright as the battery gets low. Replace U1 with a microcontroller and make some adjustments here and there to make this more efficient, and you really have something.
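Pulling the sizing rules from the linear current source section together in a small sketch (the 0.6 V figure and the 20 V / 21.5 V example come from the answer above; the function names are mine):

```python
V_BE = 0.6  # volts: R2 voltage at which Q1 starts stealing Q2's base current

def r2_for_current(i_led):
    """Size R2 in the two-transistor current source: I = V_BE / R2."""
    return V_BE / i_led

def efficiency(v_leds, v_supply):
    """Fraction of supply power delivered to the LED string."""
    return v_leds / v_supply

r2 = r2_for_current(0.020)            # 20 mA target -> 30 ohms
eff_linear = efficiency(20.0, 21.5)   # ~93%, as computed in the answer
eff_resistor_best = 0.75              # series-resistor rule-of-thumb ceiling
```

The comparison makes the answer's point quantitative: stacking LEDs in series above a linear current source wastes only the V_BE drop plus transistor headroom, beating the series resistor's 75% ceiling.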
This is a nice question, as it confronts a very replicable and common experience with a well-established yet seemingly contradictory fact. As you expected, the smell of metal has nothing to do with the metal actually getting into your nose, as most metals have far too low a vapor pressure at ordinary temperatures to allow direct detection. The characteristic smell of metal, in fact, is caused by organic substances! There has been particular focus on the specific case of the smell of iron (free-access article!). There are at least two ways in which iron produces a metallic smell. Firstly, acidic substances are capable of corroding iron and steel, releasing phosphorus and carbon atoms present in the metal or alloy. These can react to form volatile organophosphorus compounds such as methylphosphine ($\ce{H3CPH2}$), which have a garlic/metallic odor at small concentrations. From the article: The "garlic" metallic odor (see supporting information) of the gas product from the acidic dissolution of cast iron is dominated by these organophosphines. We measured an extremely low odor threshold for two key odorants, methylphosphine and dimethylphosphine (6 and 3 ng P/m³, respectively, garlic-metallic odor), which belong therefore to the most potent odorants known. Phosphine ($\ce{PH3}$) is not important for this odor because we found it has a much higher odor detection threshold (>10⁶ ng/m³). A "calcium carbide" (or "burned lime"/"cement") attribute of the general "garlic" odor is probably caused by unsaturated hydrocarbons (alkynes, alkadienes) that are linked to a high carbon content of iron (Table 1, see supporting information). Also, it turns out that $\ce{Fe^{2+}}$ ions (but not $\ce{Fe^{3+}}$) are capable of oxidizing substances present in oils produced by the skin, namely lipid peroxides. A small amount of $\ce{Fe^{2+}}$ ions is produced when iron comes into contact with acids in sweat.
These then decompose the oils, releasing a mixture of ketones and aldehydes with carbon chains between 6 and 10 atoms long. In particular, most of the smell of metal comes from the unsaturated ketone 1-octen-3-one, which has a fungal/metallic odour even in concentrations as low as $1\ \mu\mathrm{g\ m^{-3}}$. In short: sweaty skin corrodes iron metal to form reactive $\ce{Fe^{2+}}$ ions that are oxidized within seconds to $\ce{Fe^{3+}}$ ions, while simultaneously reducing and decomposing existing skin lipid peroxides to odorous carbonyl hydrocarbons that are perceived as a metallic odor. In the supporting information for the article (also free-access), the authors describe experiments performed with other metals, including copper: Comparison of iron metal with other metals (copper, brass, zinc, etc.): when solid copper metal or brass (copper-zinc alloy) was contacted with the skin instead of iron, a similar metallic odor and GC peak pattern of carbonyl hydrocarbons was produced, and up to one µmole/dm² of monovalent cuprous ion [$\ce{Cu+}$] was detected as a corrosion product (supporting figs. S3 to S6). Zinc, a metal that forms $\ce{Zn^{2+}}$ but no stable $\ce{Zn+}$, was hesitant to form metallic odor, except on very strong rubbing of the metal against skin (which could produce metastable monovalent $\ce{Zn+}$). The use of common color tests to demonstrate directly on human palm skin the presence of low-valence ions (ferrous and cuprous) from the corrosion of iron, copper and brass alloys is shown in supporting figure S6. Alumina powder rubbed on skin did not produce significant odorants. These results provide additional evidence that it is not metal evaporation, but skin lipid peroxide reduction and decomposition by low-valence metal ions, that produces the odorants.
the last paragraphs of the article summarize the findings : in conclusion : 1 ) the typical “ musty ” metallic odor of iron metal touching skin ( epidermis ) is caused by volatile carbonyl compounds ( aldehydes, ketones ) produced through the reaction of skin peroxides with ferrous ions ( $\ce{Fe^{2+}}$ ) that are formed in the sweat - mediated corrosion of iron. $\ce{Fe^{2+}}$ ion containing metal
surfaces, rust, drinking water, blood etc., but also copper and brass, give rise to a similar odor on contact with the skin. the human ability to detect this odor is probably a result of the evolutionarily developed but largely dormant ability to smell blood ( “ blood scent ” ). the “ garlic - carbide ” metallic odor of phosphorus - and carbon - rich cast iron and steel under attack by acid, is dominated by volatile organophosphines. corroding cast iron is an environmental source of c – p compounds that may lead to confusion in the verification and monitoring of the chemical weapons convention ( see also ref. [ 15 ] ) as an aside, this may be why sometimes people recommend getting strong smells off your hands by rubbing them against a metal object. while it probably doesn't work for some metals and for some smelly compounds, it's possible that the metal catalyzes the decomposition of the malodorous substances into less strongly smelling ones. you can read a little more in this press article on the study.
image processing applications are different from, say, audio processing applications, because many of them are tuned for the eye. gaussian masks nearly perfectly simulate optical blur ( see also point spread functions ). in any image processing application oriented at artistic production, gaussian filters are used for blurring by default. another important quantitative property of gaussian filters is that they're everywhere non - negative. this is important because most 1d signals vary about 0 ( $x \in \mathbb{R}$ ) and can have either positive or negative values. images are different in the sense that all values of an image are non - negative ( $x \in \mathbb{R}^+$ ). convolution with a gaussian kernel ( filter ) guarantees a non - negative result, so such a function maps non - negative values to other non - negative values ( $f : \mathbb{R}^+ \rightarrow \mathbb{R}^+$ ). the result is therefore always another valid image. in general, frequency rejection in image processing is not as crucial as in 1d signals. for example, in modulation schemes your filters need to be very precise to reject other channels transmitted on different carrier frequencies, and so on. i can't think of anything just as constraining for image processing problems.
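the non - negativity argument above is easy to check numerically. the sketch below ( plain python ; sigma, radius, and the pixel row are arbitrary illustration values ) builds a normalized 1d gaussian kernel and applies it with edge clamping : since both the kernel and the image are non - negative, so is the result.

```python
# numerical check of the non-negativity property: a normalized gaussian
# kernel is everywhere non-negative, so convolving a non-negative image
# with it always yields another non-negative (i.e. valid) image.
import math

def gaussian_kernel(sigma, radius):
    """samples of a 1d gaussian, normalized to sum to 1."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(signal, kernel):
    """convolve with edge clamping, so output length equals input length."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

row = [0, 0, 255, 255, 0, 10, 0, 0]           # non-negative pixel values
kernel = gaussian_kernel(sigma=1.0, radius=2)
blurred = blur_1d(row, kernel)
assert all(v >= 0 for v in blurred)           # still a valid image
```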
there's a textbook waiting to be written at some point, with the working title data structures, algorithms, and tradeoffs. almost every algorithm or data structure which you're likely to learn at the undergraduate level has some feature which makes it better for some applications than others. let's take sorting as an example, since everyone is familiar with the standard sort algorithms. first off, complexity isn't the only concern. in practice, constant factors matter, which is why ( say ) quick sort tends to be used more than heap sort even though quick sort has terrible worst - case complexity. secondly, there's always the chance that you find yourself in a situation where you're programming under strange constraints. i once had to do quantile extraction from a modest - sized ( 1000 or so ) collection of samples as fast as possible, but it was on a small microcontroller which had very little spare read - write memory, so that ruled out most $ o ( n \ log n ) $ sort algorithms. shell sort was the best tradeoff, since it was sub - quadratic and didn't require additional memory. in other cases, ideas from an algorithm or data structure might be applicable to a special - purpose problem. bubble sort seems to be always slower than insertion sort on real hardware, but the idea of performing a bubble pass is sometimes exactly what you need. consider, for example, some kind of 3d visualisation or video game on a modern video card, where you'd like to draw objects in order from closest - to - the - camera to furthest - from - the - camera for performance reasons, but if you don't get the order exact, the hardware will take care of it. if you're moving around the 3d environment, the relative order of objects won't change very much between frames, so performing one bubble pass every frame might be a reasonable tradeoff. ( the source engine by valve does this for particle effects. 
) there's persistence, concurrency, cache locality, scalability onto a cluster / cloud, and a host of other possible reasons why one data structure or algorithm may be more appropriate than another even given the same computational complexity for the operations that you care about. having said that, that doesn't mean that you should memorise a bunch of algorithms and data structures just in case. most of the battle is realising that there is a tradeoff to be exploited in the first place, and knowing where to look if you think there might be something appropriate.
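as an illustration of the microcontroller anecdote above, here is a minimal shell sort sketch : it sorts in place with no auxiliary memory and runs sub - quadratically in practice. the gap - halving sequence below is the simplest choice, not the fastest known ( ciura's sequence, for instance, performs better ).

```python
# shell sort: in-place (no memory beyond a few variables) and
# sub-quadratic in practice, which is why it fit the memory-starved
# quantile-extraction scenario described above.

def shell_sort(a):
    gap = len(a) // 2
    while gap > 0:
        # gapped insertion sort: every slice a[i::gap] ends up sorted
        for i in range(gap, len(a)):
            item, j = a[i], i
            while j >= gap and a[j - gap] > item:
                a[j] = a[j - gap]
                j -= gap
            a[j] = item
        gap //= 2
    return a

samples = [9, 1, 8, 2, 7, 3, 6, 4, 5]
shell_sort(samples)
# quantile extraction is then just indexing into the sorted samples
median = samples[len(samples) // 2]
```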
as said by john rennie, it has to do with the shadows' fuzziness. however, that alone doesn't quite explain it. let's do this with actual fuzziness : i've simulated shadow by blurring each shape and multiplying the brightness values1. here's the gimp file, so you can see how exactly and move the shapes around yourself. i don't think you'd say there's any bending going on ; at least to me the book's edge still looks perfectly straight. so what's happening in your experiment, then? nonlinear response is the answer. in particular in your video, the directly - sunlit wall is overexposed, i. e. regardless of the " exact brightness ", the pixel - value is pure white. for dark shades, the camera's noise suppression clips the values to black. we can simulate this for the above picture : now that looks a lot like your video, doesn't it? with bare eyes, you'll normally not notice this, because our eyes are kind of trained to compensate for the effect, which is why nothing looks bent in the unprocessed picture. this only fails at rather extreme light conditions : probably, most of your room is dark, with a rather narrow beam of light making for a very large luminosity range. then, the eyes also behave too nonlinearly, and the brain cannot reconstruct how the shapes would have looked without the fuzziness anymore. actually of course, the brightness topography is always the same, as seen by quantising the colour palette : 1to simulate shadows properly, you need to use convolution of the whole aperture, with the sun's shape as a kernel. as ilmari karonen remarks, this does make a relevant difference : the convolution of a product of two sharp shadows $a$ and $b$ with blurring kernel $k$ is $$ \begin{aligned} c(\mathbf{x}) =& \int_{\mathbb{R}^2} \! \mathrm{d}\mathbf{x}' \: \bigl( a(\mathbf{x}-\mathbf{x}') \cdot b(\mathbf{x}-\mathbf{x}') \bigr) \cdot k(\mathbf{x}')
\\ =& \mathrm{IFT} \left( \mathbf{k} \mapsto \mathrm{FT} \bigl( \mathbf{x}' \mapsto a(\mathbf{x}') \cdot b(\mathbf{x}') \bigr)(\mathbf{k}) \cdot \tilde{k}(\mathbf{k}) \right)(\mathbf{x}) \end{aligned} $$ whereas separate blurring yields $$ \begin{aligned} d(\mathbf{x}) =& \left( \int_{\mathbb{R}^2} \! \mathrm{d}\mathbf{x}' \: a(\mathbf{x}-\mathbf{x}') \cdot k(\mathbf{x}') \right) \cdot \int_{\mathbb{R}^2} \! \mathrm{d}\mathbf{x}' \: b(\mathbf{x}-\mathbf{x}') \cdot k(\mathbf{x}') \\ =& \mathrm{IFT} \left( \mathbf{k} \mapsto \tilde{a}(\mathbf{k}) \cdot \tilde{k}(\mathbf{k}) \right)(\mathbf{x}) \cdot \mathrm{IFT} \left( \mathbf{k} \mapsto \tilde{b}(\mathbf{k}) \cdot \tilde{k}(\mathbf{k}) \right)(\mathbf{x}). \end{aligned} $$ if we carry this out for a narrow slit of width $w$ between two shadows ( almost a dirac peak ), the product's fourier transform can be approximated by a constant proportional to $w$, while the $\mathrm{FT}$ of each shadow remains $\mathrm{sinc}$ - shaped, so if we take the taylor series for the narrow overlap it shows the brightness will only decay as $
\sqrt{w}$, i. e. stay brighter at close distances, which of course suppresses the bulging. and indeed, if we properly blur both shadows together, even without any nonlinearity, we get much more of a " bridging - effect " : but that still looks nowhere near as " bulgy " as what's seen in your video.
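the footnote's distinction can be checked numerically. this 1d sketch ( pure python ; the edge positions, slit width, and the box kernel standing in for the sun's disc are arbitrary illustration values ) compares blurring the product of two shadow masks against multiplying the two separately blurred masks : the jointly blurred slit comes out darker, which is the " bridging - effect " described above.

```python
# blurring the product of two shadow masks is not the same as
# multiplying the separately blurred masks: the joint blur darkens the
# narrow slit between the shadows more, so they appear to "bridge".

def convolve(signal, kernel):
    r = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < n:
                acc += w * signal[idx]
        out.append(acc)
    return out

n = 41
a = [1.0 if i >= 19 else 0.0 for i in range(n)]  # light past the left shadow
b = [1.0 if i <= 21 else 0.0 for i in range(n)]  # light before the right shadow
k = [1.0 / 7] * 7                                # box kernel (the "sun")

joint = convolve([p * q for p, q in zip(a, b)], k)                  # blur of the product
separate = [p * q for p, q in zip(convolve(a, k), convolve(b, k))]  # product of blurs

mid = n // 2
assert joint[mid] < separate[mid]   # joint blurring darkens the narrow slit
```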
intuitively, you can think of a binary indexed tree as a compressed representation of a binary tree that is itself an optimization of a standard array representation. this answer goes into one possible derivation. let's suppose, for example, that you want to store cumulative frequencies for a total of 7 different elements. you could start off by writing out seven buckets into which the numbers will be distributed :

[   ] [   ] [   ] [   ] [   ] [   ] [   ]
  1     2     3     4     5     6     7

now, let's suppose that the cumulative frequencies look something like this :

[ 5 ] [ 6 ] [ 14 ] [ 25 ] [ 77 ] [ 105 ] [ 105 ]
  1     2     3      4      5       6       7

using this version of the array, you can increment the cumulative frequency of any element by increasing the value of the number stored at that spot, then incrementing the frequencies of everything that comes afterwards. for example, to increase the cumulative frequency of 3 by 7, we could add 7 to each element in the array at or after position 3, as shown here :

[ 5 ] [ 6 ] [ 21 ] [ 32 ] [ 84 ] [ 112 ] [ 112 ]
  1     2     3      4      5       6       7

the problem with this is that it takes o(n) time to do an update, which is pretty slow if n is large. one way that we can think about improving this operation would be to change what we store in the buckets. rather than storing the cumulative frequency up to the given point, you can instead think of just storing the amount that the current frequency has increased relative to the previous bucket. for example, in our case, we would rewrite the above buckets as follows :

before : [ 5 ] [ 6 ] [ 21 ] [ 32 ] [ 84 ] [ 112 ] [ 112 ]
           1     2     3      4      5       6       7

after : [ +5 ] [ +1 ] [ +15 ] [ +11 ] [ +52 ] [ +28 ] [ +0 ]
           1      2      3       4       5       6      7

now, we can increment the frequency within a bucket in time o(1) by just adding the appropriate amount to that bucket. however, the total cost of doing a lookup now becomes o(n), since we have to recompute the total in the bucket by summing up the values in all smaller buckets.
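a tiny sketch of the two array representations just described, using the numbers from the example above : the cumulative form gives o(1) lookups but o(n) updates, while the per - bucket delta form flips that tradeoff.

```python
# the two naive representations side by side: cumulative sums (cheap
# lookup, expensive update) vs per-bucket deltas (cheap update,
# expensive lookup).

cumulative = [5, 6, 14, 25, 77, 105, 105]

def cumulative_update(freq, i, delta):
    """add delta at position i (1-indexed): touches o(n) entries."""
    for j in range(i - 1, len(freq)):
        freq[j] += delta

deltas = [5, 1, 15, 11, 52, 28, 0]     # the "after" array from the text

def delta_lookup(freq, i):
    """cumulative frequency up to i (1-indexed): sums o(n) entries."""
    return sum(freq[:i])

cumulative_update(cumulative, 3, 7)    # the "+7 at position 3" example
assert cumulative == [5, 6, 21, 32, 84, 112, 112]
assert delta_lookup(deltas, 3) == 21
```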
the first major insight we need to get from here to a binary indexed tree is the following : rather than continuously recomputing the sum of the array elements
that precede a particular element, what if we were to precompute the total sum of all the elements before specific points in the sequence? if we could do that, then we could figure out the cumulative sum at a point by just summing up the right combination of these precomputed sums. one way to do this is to change the representation from being an array of buckets to being a binary tree of nodes. each node will be annotated with a value that represents the cumulative sum of all the nodes to the left of that given node. for example, suppose we construct the following binary tree from these nodes :

     4
   /   \
  2     6
 / \   / \
1   3 5   7

now, we can augment each node by storing the cumulative sum of all the values including that node and its left subtree. for example, given our values, we would store the following :

before : [ +5 ] [ +1 ] [ +15 ] [ +11 ] [ +52 ] [ +28 ] [ +0 ]
            1      2      3       4       5       6      7

after :
         4
       [+32]
      /     \
     2       6
   [+6]    [+80]
   /  \     /  \
  1    3   5    7
[+5] [+15] [+52] [+0]

given this tree structure, it's easy to determine the cumulative sum up to a point. the idea is the following : we maintain a counter, initially 0, then do a normal binary search up until we find the node in question. as we do so, we also do the following : any time that we move right, add the current value to the counter. for example, suppose we want to look up the sum for 3. to do so, we do the following :

start at the root ( 4 ). counter is 0.
go left to node ( 2 ). counter is 0.
go right to node ( 3 ). counter is 0 + 6 = 6.
find node ( 3 ). counter is 6 + 15 = 21.

you could imagine also running this process in reverse : starting at a given node, initialize the counter to that node's value, then walk up the tree to the root. any time you follow a right child link upward, add in the value at the node you arrive at. for example, to find the frequency for 3, we could do the following :

start at node ( 3 ). counter is 15.
go upward to node ( 2 ). counter is 15 + 6 = 21.
go
upward to node ( 4 ). counter is 21.

to increment the frequency of a node ( and, implicitly, the frequencies of all nodes that come after it ), we need to update the set of nodes in the tree that include that node in its left subtree. to do this, we do the following : increment the frequency for that node, then start walking up to the root of the tree. any time you follow a link that takes you up as a left child, increment the frequency of the node you encounter by adding in the current value. for example, to increment the frequency of node 1 by five, we would do the following :

         4
       [+32]
      /     \
     2       6
   [+6]    [+80]
   /  \     /  \
> 1    3   5    7
[+5] [+15] [+52] [+0]

starting at node 1, increment its frequency by 5 to get

         4
       [+32]
      /     \
     2       6
   [+6]    [+80]
   /  \     /  \
> 1    3   5    7
[+10] [+15] [+52] [+0]

now, go to its parent :

         4
       [+32]
      /     \
   > 2       6
   [+6]    [+80]
   /  \     /  \
  1    3   5    7
[+10] [+15] [+52] [+0]

we followed a left child link upward, so we increment this node's frequency as well :

         4
       [+32]
      /     \
   > 2       6
  [+11]    [+80]
   /  \     /  \
  1    3   5    7
[+10] [+15] [+52] [+0]

we now go to its parent :

       > 4
       [+32]
      /     \
     2       6
  [+11]    [+80]
   /  \     /  \
  1    3   5    7
[+10] [+15] [+52] [+0]

that was a left child link, so we increment this node as well :

         4
       [+37]
      /     \
     2       6
  [+11]    [+80]
   /  \     /  \
  1    3   5    7
[+10] [+15] [+52] [+0]

and now we're done! the final step is to convert from this to a binary indexed tree, and this is where we get to do some fun things with binary numbers. let's rewrite each bucket index in this tree in binary :

        100
       [+37]
      /     \
    010     110
  [+11]    [+80]
   /  \     /  \
 001  011 101  111
[+10] [+15] [+52] [+0]

here, we can make a very, very cool observation. take any of these binary numbers and find the very last 1 that was set in the number, then drop that bit off, along with all the bits that come after it. you are now left with the following :

      (empty)
       [+37]
      /     \
     0       1
  [+11]    [+80]
   /  \     /  \
  00   01 10   11
[+10] [+15] [+52] [+0]

here is a really, really cool observation : if you treat 0 to mean " left " and 1 to mean " right ", the remaining bits on each number spell out exactly how to start at the root and then walk down to that number. for example, node 5 has binary pattern 101. the last 1 is the final bit, so we drop that to get 10. indeed, if you start at the root, go right ( 1 ), then go left ( 0 ), you end up at node 5! the reason that this is significant is that our lookup and update operations depend on the access path from the node back up to the root and whether we're following left or right child links. for example, during a lookup, we just care about the right links we follow. during an update, we just care about the left links we follow. this binary indexed tree does all of this super efficiently by just using the bits in the index. the key trick is the following property of this perfect binary tree : given node n, the next node on the access path back up to the root in which we go right is given by taking the binary representation of n and removing the last 1. for example, take a look at the access path for node 7, which is 111. the nodes on the access path to the root that involve following a right pointer upward are

node 7 : 111
node 6 : 110
node 4 : 100

all of these are right links.
if we take the access path for node 3, which is 011, and look at the nodes where we go right, we get

node 3 : 011
node 2 : 010
( node 4 : 100, which follows a left link )

this means that we can very, very efficiently compute the cumulative sum up to a node as follows : write out node n in binary.
set the counter to 0. repeat the following while n ≠ 0 : add in the value at node n, then clear the rightmost 1 bit from n. similarly, let's think about how we would do an update step. to do this, we would want to follow the access path back up to the root, updating all nodes where we followed a left link upward. we can do this by essentially doing the above algorithm, but switching all 1's to 0's and 0's to 1's. the final step in the binary indexed tree is to note that because of this bitwise trickery, we don't even need to have the tree stored explicitly anymore. we can just store all the nodes in an array of length n, then use the bitwise twiddling techniques to navigate the tree implicitly. in fact, that's exactly what the binary indexed tree does - it stores the nodes in an array, then uses these bitwise tricks to efficiently simulate walking upward in this tree. hope this helps!
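putting the whole derivation together, here is a minimal python sketch of the resulting structure ( class and method names are my own choices ). the expression i & -i isolates the lowest set bit of i, which implements both the " clear the rightmost 1 " step of a lookup and the upward walk of an update ; the values are the ones from the walkthrough above.

```python
# the nodes live in a plain 1-indexed array; i & -i (lowest set bit)
# replaces the explicit tree links from the derivation above.

class FenwickTree:
    def __init__(self, size):
        self.tree = [0] * (size + 1)   # index 0 unused

    def update(self, i, delta):
        """add delta to the frequency of element i (1-indexed)."""
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i                # next node covering position i

    def prefix_sum(self, i):
        """cumulative frequency of elements 1..i."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & -i                # clear the rightmost 1 bit
        return total

# rebuild the example: per-element frequencies +5 +1 +15 +11 +52 +28 +0
ft = FenwickTree(7)
for pos, freq in enumerate([5, 1, 15, 11, 52, 28, 0], start=1):
    ft.update(pos, freq)

assert ft.prefix_sum(3) == 21          # matches the lookup walkthrough
ft.update(1, 5)                        # "increment node 1 by five"
assert ft.tree[4] == 37                # same value as the final tree above
```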
i'm not sure what your boss thinks " more predictive " means. many people incorrectly believe that lower $p$ - values mean a better / more predictive model. that is not necessarily true ( this being a case in point ). however, independently sorting both variables beforehand will guarantee a lower $p$ - value. on the other hand, we can assess the predictive accuracy of a model by comparing its predictions to new data that were generated by the same process. i do that below in a simple example ( coded with r ).

options(digits=3)              # for cleaner output
set.seed(9149)                 # this makes the example exactly reproducible

b1 = .3
n  = 50                        # 50 data
x  = rnorm(n, mean=0, sd=1)    # standard normal x
y  = 0 + b1*x + rnorm(n, mean=0, sd=1)  # cor(x, y) = .31
sx = sort(x)                   # sorted independently
sy = sort(y)

cor(x, y)    # [1] 0.309
cor(sx, sy)  # [1] 0.993

model.u = lm(y ~ x)
model.s = lm(sy ~ sx)
summary(model.u)$coefficients
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)    0.021      0.139   0.151    0.881
# x              0.340      0.151   2.251    0.029  # significant
summary(model.s)$coefficients
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)    0.162     0.0168    9.68 7.37e-13
# sx             1.094     0.0183   59.86 9.31e-47  # wildly significant

u.error = vector(length=n)     # these will hold the output
s.error = vector(length=n)
for(i in 1:n){
  new.x  = rnorm(1, mean=0, sd=1)  # data generated in exactly the same way
  new.y  = 0 + b1*new.x + rnorm(1, mean=0, sd=1)
  pred.u = predict(model.u, newdata=data.frame(x=new.x))
  pred.s = predict
(model.s, newdata=data.frame(sx=new.x))
  u.error[i] = abs(pred.u - new.y)   # these are the absolute values of
  s.error[i] = abs(pred.s - new.y)   # the predictive errors
}; rm(i, new.x, new.y, pred.u, pred.s)

u.s = u.error - s.error    # negative values mean the original
                           # yielded more accurate predictions
mean(u.error)  # [1] 1.1
mean(s.error)  # [1] 1.98
mean(u.s < 0)  # [1] 0.68

windows()
layout(matrix(1:4, nrow=2, byrow=TRUE))
plot(x, y, main="original data")
abline(model.u, col="blue")
plot(sx, sy, main="sorted data")
abline(model.s, col="red")
h.u = hist(u.error, breaks=10, plot=FALSE)
h.s = hist(s.error, breaks=9,  plot=FALSE)
plot(h.u, xlim=c(0, 5), ylim=c(0, 11), main="histogram of prediction errors",
     xlab="magnitude of prediction error", col=rgb(0, 0, 1, 1/2))
plot(h.s, col=rgb(1, 0, 0, 1/4), add=TRUE)
legend("topright", legend=c("original", "sorted"), pch=15,
       col=c(rgb(0, 0, 1, 1/2), rgb(1, 0, 0, 1/4)))
dotchart(u.s, color=ifelse(u.s < 0, "blue", "red"), lcolor="white",
         main="difference between predictive errors")
abline(v=0, col="gray")
legend("topright", legend=c("u better", "s better"), pch=1,
       col=c("blue", "red"))

the upper left plot shows the original data. there is some relationship between $x$ and $y$ ( viz., the correlation is about $.31$ ). the upper right plot shows what the data look like after independently sorting both variables. you can easily see that the strength of the correlation has increased substantially ( it is now about $.99$ ). however, in the lower plots, we see that the distribution of predictive errors is much closer to $0$ for the model trained on the original ( unsorted ) data. the mean absolute predictive error for the model that used the original data is $1.1$, whereas the mean absolute predictive error for the model trained on the sorted data is $1.98$, nearly twice as large. that means the sorted data model's predictions are much further from the correct values. the plot in the lower right quadrant is a dot plot. it displays the differences between the predictive error with the original data and with the sorted data. this lets you compare the two corresponding predictions for each new observation simulated. blue dots to the left are times when the original data were closer to the new $y$ - value, and red dots to the right are times when the sorted data yielded better predictions. there were more accurate predictions from the model trained on the original data $68\%$ of the time. the degree to which sorting will cause these problems is a function of the linear relationship that exists in your data. if the correlation between $x$ and $y$ were $1.0$ already, sorting would have no effect and thus not be detrimental. on the other hand, if the correlation were $-1.0$, the sorting would completely reverse the relationship, making the model as inaccurate as possible. if the data were completely uncorrelated originally, the sorting would have an intermediate, but still quite large, deleterious effect on the resulting model's predictive accuracy.
since you mention that your data are typically correlated, i suspect that has provided some protection against the harms intrinsic to this procedure. nonetheless, sorting first is definitely harmful. to explore these possibilities, we can simply re - run the above code with different values for b1 ( using the same seed for reproducibility ) and examine the output :

b1 = -5 :
cor(x, y)                            # [1] -0.978
summary(model.u)$coefficients[2, 4]  # [1] 1.6e-34   (i.e., the p-value)
summary(model.s)$coefficients[2, 4]  # [1] 1.82e-42
mean(u.error)                        # [1] 7.27
mean(s.error)                        # [1] 15.4
mean(u.s < 0)                        # [1] 0.98

b1 = 0 :
cor(x, y)                            # [1] 0.0385
summary(model.u)$coefficients[2, 4]  # [1] 0.791
summary(model.s)$coefficients[2, 4]  # [1] 4.42e-36
mean(u.error)                        # [1] 0.908
mean(s.error)                        # [1] 2.12
mean(u.s < 0)                        # [1] 0.82

b1 = 5 :
cor(x, y)                            # [1] 0.979
summary(model.u)$coefficients[2, 4]  # [1] 7.62e-35
summary(model.s)$coefficients[2, 4]  # [1] 3e-49
mean(u.error)                        # [1] 7.55
mean(s.error)                        # [1] 6.33
mean(u.s < 0)                        # [1] 0.44
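for readers without r handy, the same point can be reproduced in a few lines of plain python ( a rough sketch, not a translation of the code above ; the seed and sample sizes are arbitrary ) : independently sorting inflates the correlation while making out - of - sample predictions worse.

```python
# sorting x and y independently inflates the correlation, yet the model
# fit to the sorted data predicts new draws from the true process worse.
import random
random.seed(9149)

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def ols(u, v):
    """least-squares intercept and slope of v on u."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    slope = (sum((a - mu) * (b - mv) for a, b in zip(u, v))
             / sum((a - mu) ** 2 for a in u))
    return mv - slope * mu, slope

b1 = 0.3
x = [random.gauss(0, 1) for _ in range(50)]
y = [b1 * xi + random.gauss(0, 1) for xi in x]
sx, sy = sorted(x), sorted(y)
assert corr(sx, sy) > corr(x, y)        # sorting inflates the correlation

a_u, b_u = ols(x, y)                    # fit on original data
a_s, b_s = ols(sx, sy)                  # fit on independently sorted data
new = [(xi, b1 * xi + random.gauss(0, 1))
       for xi in (random.gauss(0, 1) for _ in range(500))]
err_u = sum(abs(a_u + b_u * xi - yi) for xi, yi in new) / len(new)
err_s = sum(abs(a_s + b_s * xi - yi) for xi, yi in new) / len(new)
assert err_u < err_s                    # original-data model predicts better
```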
i think the wikipedia articles $\mathsf{p}$, $\mathsf{np}$, and $\mathsf{p}$ vs. $\mathsf{np}$ are quite good. still, here is what i would say : part i, part ii [ i will use remarks inside brackets to discuss some technical details which you can skip if you want. ]

part i

decision problems

there are various kinds of computational problems. however, in an introduction to computational complexity theory course it is easier to focus on decision problems, i. e. problems where the answer is either yes or no. there are other kinds of computational problems but most of the time questions about them can be reduced to similar questions about decision problems. moreover, decision problems are very simple. therefore, in an introduction to computational complexity theory course we focus our attention on the study of decision problems. we can identify a decision problem with the subset of inputs that have answer yes. this simplifies notation and allows us to write $x \in q$ in place of $q(x) = yes$ and $x \notin q$ in place of $q(x) = no$. another perspective is that we are talking about membership queries in a set. here is an example :

decision problem :
input : a natural number $x$,
question : is $x$ an even number?

membership problem :
input : a natural number $x$,
question : is $x$ in $even = \{0, 2, 4, 6, \cdots\}$?

we refer to the yes answer on an input as accepting the input and to the no answer on an input as rejecting the input. we will look at algorithms for decision problems and discuss how efficient those algorithms are in their usage of computational resources. i will rely on your intuition from programming in a language like c in place of formally defining what we mean by an algorithm and computational resources.
[ remarks : if we wanted to do everything formally and precisely we would need to fix a model of computation like the standard turing machine model to precisely define what we mean by an algorithm and its usage of computational resources. if we want to talk about computation over objects that the model cannot directly handle, we would need to encode them as objects that the machine model can handle, e. g. if we are using turing machines we need to encode objects like natural numbers and graphs as binary strings. ] $ \ mathsf { p } $ = problems with efficient algorithms
for finding solutions

assume that an efficient algorithm means an algorithm that uses at most a polynomial amount of computational resources. the main resource we care about is the worst - case running time of algorithms with respect to the input size, i. e. the number of basic steps an algorithm takes on an input of size $n$. the size of an input $x$ is $n$ if it takes $n$ bits of computer memory to store $x$, in which case we write $|x| = n$. so by efficient algorithms we mean algorithms that have polynomial worst - case running time. the assumption that polynomial - time algorithms capture the intuitive notion of efficient algorithms is known as cobham's thesis. i will not discuss at this point whether $\mathsf{p}$ is the right model for efficiently solvable problems and whether $\mathsf{p}$ does or does not capture what can be computed efficiently in practice and related issues. for now there are good reasons to make this assumption, so for our purpose we assume this is the case. if you do not accept cobham's thesis it does not make what i write below incorrect ; the only thing we will lose is the intuition about efficient computation in practice. i think it is a helpful assumption for someone who is starting to learn about complexity theory.

$\mathsf{p}$ is the class of decision problems that can be solved efficiently, i. e. decision problems which have polynomial - time algorithms. more formally, we say a decision problem $q$ is in $\mathsf{p}$ iff there is an efficient algorithm $a$ such that for all inputs $x$ :

if $q(x) = yes$ then $a(x) = yes$,
if $q(x) = no$ then $a(x) = no$.

i can simply write $a(x) = q(x)$ but i write it this way so we can compare it to the definition of $\mathsf{np}$.
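as a toy illustration of this definition, here is the even problem from part i as python ( the function name is my own ) : a decision problem in $\mathsf{p}$ is one with a decider whose running time is polynomial in the input size, and deciding evenness is about as cheap as it gets.

```python
# a decider for the decision problem even = {0, 2, 4, 6, ...}:
# it answers yes/no, and its running time is (trivially) polynomial
# in the size of the input.

def decide_even(x: int) -> bool:
    """accept (yes) iff x is in even, reject (no) otherwise."""
    return x % 2 == 0

assert decide_even(4) is True     # 4 is in even: accept
assert decide_even(7) is False    # 7 is not in even: reject
```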
$ \ mathsf { np } $ = problems with efficient algorithms for verifying proofs / certificates / witnesses sometimes we do not know any efficient way of finding the answer to a decision problem, however if someone tells us the answer and gives us a proof we can efficiently verify that the answer is correct by checking the proof to see if it is a valid proof. this is the idea behind the complexity class $ \ mathsf { np } $. if the proof is
too long it is not really useful ; it can take too long to just read the proof, let alone check if it is valid. we want the time required for verification to be reasonable in the size of the original input, not the size of the given proof! this means what we really want is not arbitrarily long proofs but short proofs. note that if the verifier's running time is polynomial in the size of the original input then it can only read a polynomial part of the proof. so by short we mean of polynomial size. from this point on, whenever i use the word " proof " i mean " short proof ". here is an example of a problem which we do not know how to solve efficiently but for which we can efficiently verify proofs :

partition
input : a finite set of natural numbers $s$,
question : is it possible to partition $s$ into two sets $a$ and $b$ ( $a \cup b = s$ and $a \cap b = \emptyset$ ) such that the sum of the numbers in $a$ is equal to the sum of the numbers in $b$ ( $\sum_{x \in a} x = \sum_{x \in b} x$ )?

if i give you $s$ and ask you if we can partition it into two sets such that their sums are equal, you do not know any efficient algorithm to solve it. you will probably try all possible ways of partitioning the numbers into two sets until you find a partition where the sums are equal or until you have tried all possible partitions and none has worked. if any of them worked you would say yes, otherwise you would say no. but there are exponentially many possible partitions so it will take a lot of time to enumerate all the possibilities. however, if i give you two sets $a$ and $b$, you can easily check if the sums are equal and if $a$ and $b$ form a partition of $s$. note that we can compute sums efficiently. here the pair of $a$ and $b$ that i give you is a proof for a yes answer. you can efficiently verify my claim by looking at my proof and checking if it is a valid proof.
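the verification step just described is easy to write down. here is a sketch of an efficient verifier for partition in python ( the function name and the set - based representation are my own choices ) : given the input $s$ and a claimed proof $(a, b)$, it checks in time polynomial in the input that $a$ and $b$ really partition $s$ and that the two sums agree. an accepted proof establishes a yes answer ; a rejected proof says nothing about the answer.

```python
# an efficient verifier for partition: checks a claimed proof (a, b)
# against the input s in polynomial time.

def verify_partition(s, a, b):
    if a | b != s or a & b != set():
        return False              # (a, b) is not a partition of s
    return sum(a) == sum(b)       # are the two sums equal?

s = {1, 2, 4, 5}
assert verify_partition(s, {2, 4}, {1, 5}) is True    # valid proof: 2+4 = 1+5
assert verify_partition(s, {2, 5}, {1, 4}) is False   # invalid proof
```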
if the answer is yes then there is a valid proof, and i can give it to you and you can verify it efficiently. if the answer is no then there is no valid proof. so whatever i give you you can check and see
it is not a valid proof. i cannot trick you with an invalid proof that the answer is yes. recall that if the proof is too big it will take a lot of time to verify it ; we do not want this to happen, so we only care about efficient proofs, i. e. proofs which have polynomial size. sometimes people use " certificate " or " witness " in place of " proof ". note i am giving you enough information about the answer for a given input $x$ so that you can find and verify the answer efficiently. for example, in our partition example i do not tell you the answer, i just give you a partition, and you can check if it is valid or not. note that you have to verify the answer yourself, you cannot trust me about what i say. moreover, you can only check the correctness of my proof. if my proof is valid it means the answer is yes. but if my proof is invalid it does not mean the answer is no. you have seen that one proof was invalid, not that there are no valid proofs. we are talking about proofs for yes. we are not talking about proofs for no. let us look at an example : $a = \{2, 4\}$ and $b = \{1, 5\}$ is a proof that $s = \{1, 2, 4, 5\}$ can be partitioned into two sets with equal sums. we just need to sum up the numbers in $a$ and the numbers in $b$ and see if the results are equal, and check if $a$, $b$ is a partition of $s$. if i gave you $a = \{2, 5\}$ and $b = \{1, 4\}$, you would check and see that my proof is invalid. it does not mean the answer is no, it just means that this particular proof was invalid. your task here is not to find the answer, but only to check if the proof you are given is valid. it is like a student solving a question in an exam and a professor checking if the answer is correct.
:) (Unfortunately, students often do not give enough information to verify the correctness of their answer, and the professors have to guess the rest of their partial answer and decide how many marks to give the students for their partial answers; indeed, a quite difficult task.)

The amazing thing is that the same situation applies to many other natural problems that we want to solve: we can efficiently verify whether a given short proof is valid, but we do not know any efficient way of finding the answer. This is the motivation why the complexity class $\mathsf{NP}$ is extremely interesting (though this was not the original motivation for defining it). Whatever you do (not just in CS, but also in math, biology, physics, chemistry, economics, management, sociology, business, ...) you will face computational problems that fall in this class. To get an idea of how many problems turn out to be in $\mathsf{NP}$, check out a compendium of NP optimization problems. Indeed, you will have a hard time finding natural problems which are not in $\mathsf{NP}$. It is simply amazing.

$\mathsf{NP}$ is the class of problems which have efficient verifiers, i.e. there is a polynomial-time algorithm that can verify whether a given solution is correct.

More formally, we say a decision problem $Q$ is in $\mathsf{NP}$ iff there is an efficient algorithm $V$, called a verifier, such that for all inputs $x$:

if $Q(x) = YES$ then there is a proof $y$ such that $V(x, y) = YES$,
if $Q(x) = NO$ then for all proofs $y$, $V(x, y) = NO$.

We say a verifier is sound if it does not accept any proof when the answer is no. In other words, a sound verifier cannot be tricked into accepting a proof if the answer is really no: no false positives. Similarly, we say a verifier is complete if it accepts at least one proof when the answer is yes. In other words, a complete verifier can be convinced of the answer being yes.

The terminology comes from logic and proof systems. We cannot use a sound proof system to prove any false statements. We can use a complete proof system to prove all true statements.

The verifier $V$ gets two inputs: $x$, the original input for $Q$, and $y$, a suggested proof for $Q(x) = YES$. Note that we want $V$ to be efficient in the size of $x$.
If $y$ is a big proof, the verifier will be able to read only a polynomial part of $y$. That is why we require the proofs to be short. If $y$ is short, saying that $V$ is efficient in $x$ is the same as saying that $V$ is efficient in $x$ and $y$ (because the size of $y$ is bounded by a fixed polynomial in the size of $x$).

In summary, to show that a decision problem $Q$ is in $\mathsf{NP}$, we have to give an efficient verifier algorithm which is sound and complete.

Historical note: historically this is not the original definition of $\mathsf{NP}$. The original definition uses what are called non-deterministic Turing machines. These machines do not correspond to any actual machine model and are difficult to get used to (at least when you are starting to learn about complexity theory). I have read that many experts think that they would have used the verifier definition as the main definition, and would even have named the class $\mathsf{VP}$ (for verifiable in polynomial time) in place of $\mathsf{NP}$, if they could go back to the dawn of computational complexity theory. The verifier definition is more natural, easier to understand conceptually, and easier to use to show that problems are in $\mathsf{NP}$.

$\mathsf{P} \subseteq \mathsf{NP}$

Therefore we have $\mathsf{P}$ = efficiently solvable and $\mathsf{NP}$ = efficiently verifiable. So $\mathsf{P} = \mathsf{NP}$ iff the problems that can be efficiently verified are the same as the problems that can be efficiently solved.

Note that any problem in $\mathsf{P}$ is also in $\mathsf{NP}$, i.e. if you can solve the problem, you can also verify whether a given proof is correct: the verifier will just ignore the proof! That is because we do not need it; the verifier can compute the answer by itself, it can decide whether the answer is yes or no without any help.
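The "verifier that ignores the proof" idea can be sketched directly. This hypothetical example uses a list-summing decision problem (defined formally just below as Sum); the names are mine:

```python
def sum_decision(numbers, s):
    """Polynomial-time solver: is the total of `numbers` equal to s?"""
    return sum(numbers) == s

def sum_verifier(numbers, s, proof):
    """An NP verifier for the same problem. It ignores the proof
    entirely, because it can compute the answer on its own."""
    _ = proof  # deliberately unused
    return sum_decision(numbers, s)

print(sum_verifier([1, 2, 3], 6, proof="anything at all"))  # True
print(sum_verifier([1, 2, 3], 7, proof="anything at all"))  # False
```

The verifier is sound (it never accepts on a no-instance, whatever the proof) and complete (on a yes-instance it accepts every proof, in particular at least one), so it witnesses membership in $\mathsf{NP}$.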
If the answer is no, we know there should be no proofs, and our verifier will just reject every suggested proof. If the answer is yes, there should be a proof, and in fact we will just accept anything as a proof. [We could have made our verifier accept only some of them; that is also fine. As long as our verifier accepts at least one proof, it works correctly for the problem.]

Here is an example:

Sum
Input: a list of $n+1$ natural numbers $a_1, \cdots, a_n$ and $s$,
Question: is $\Sigma_{i=1}^n a_i = s$?

The problem is in $\mathsf{P}$ because we can sum up the numbers and then compare the result with $s$; we return yes if they are equal, and no if they are not.

The problem is also in $\mathsf{NP}$. Consider a verifier $V$ that gets a proof plus the input for Sum. It acts the same way as the algorithm in $\mathsf{P}$ that we described above. This is an efficient verifier for Sum. Note that there are other efficient verifiers for Sum, and some of them might use the proof given to them. However, the one we designed does not, and that is also fine. Since we gave an efficient verifier for Sum, the problem is in $\mathsf{NP}$. The same trick works for all other problems in $\mathsf{P}$, so $\mathsf{P} \subseteq \mathsf{NP}$.

Brute-force/exhaustive-search algorithms for $\mathsf{NP}$ and $\mathsf{NP} \subseteq \mathsf{ExpTime}$

The best algorithms we know of for solving an arbitrary problem in $\mathsf{NP}$ are brute-force/exhaustive-search algorithms: pick an efficient verifier for the problem (it has an efficient verifier by our assumption that it is in $\mathsf{NP}$) and check all possible proofs one by one. If the verifier accepts one of them, then the answer is yes; otherwise the answer is no.

In our Partition example, we try all possible partitions and check whether the sums are equal in any of them.

Note that the brute-force algorithm runs in worst-case exponential time. The size of the proofs is polynomial in the
size of the input. If the size of the proofs is $m$, then there are $2^m$ possible proofs. Checking each of them takes polynomial time by the verifier, so in total the brute-force algorithm takes exponential time. This shows that any $\mathsf{NP}$ problem can be solved in exponential time, i.e. $\mathsf{NP} \subseteq \mathsf{ExpTime}$. (Moreover, the brute-force algorithm uses only a polynomial amount of space, i.e. $\mathsf{NP} \subseteq \mathsf{PSpace}$, but that is a story for another day.)

A problem in $\mathsf{NP}$ can have much faster algorithms; for example, any problem in $\mathsf{P}$ has a polynomial-time algorithm. However, for an arbitrary problem in $\mathsf{NP}$ we do not know algorithms that can do much better. In other words, if you just tell me that your problem is in $\mathsf{NP}$ (and nothing else about the problem), then the fastest algorithm that we know of for solving it takes exponential time.

However, it does not mean that there are no better algorithms; we do not know that. As far as we know, it is still possible (though thought to be very unlikely by almost all complexity theorists) that $\mathsf{NP} = \mathsf{P}$ and all $\mathsf{NP}$ problems can be solved in polynomial time.

Furthermore, some experts conjecture that we cannot do much better, i.e. there are problems in $\mathsf{NP}$ that cannot be solved much more efficiently than by brute-force search algorithms, which take an exponential amount of time. See the exponential time hypothesis for more information. But this is not proven, it is only a conjecture. It just shows how far we are from finding polynomial-time algorithms for arbitrary $\mathsf{NP}$ problems.

This association with exponential time confuses some people: they think incorrectly that $\mathsf{NP}$ problems require exponential time to solve (or, even worse, that there are no algorithms for them at all).
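To make the exhaustive-search idea concrete, here is a sketch for Partition: try every subset as the candidate $A$ and run the (polynomial-time) check on each. The early exit on an odd total is just a small shortcut, not essential to the argument:

```python
from itertools import combinations

def partition_brute_force(s):
    """Exhaustive search: enumerate all 2^n candidate proofs (subsets)
    and run the polynomial-time verifier on each."""
    items = list(s)
    total = sum(items)
    if total % 2:               # odd total: no equal split can exist
        return False
    for r in range(len(items) + 1):
        for a in combinations(items, r):
            if sum(a) * 2 == total:   # verifier accepts this proof
                return True
    return False                # every candidate proof was rejected

print(partition_brute_force({1, 2, 4, 5}))  # True  ({2, 4} vs {1, 5})
print(partition_brute_force({2, 3, 6}))     # False
```

The verifier call inside the loop is cheap; the exponential cost comes entirely from the $2^n$ candidates, which is exactly the $\mathsf{NP} \subseteq \mathsf{ExpTime}$ bound.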
Stating that a problem is in $\mathsf{NP}$ does not mean the problem is difficult to solve; it just means that it is easy to verify. It is an upper bound on the difficulty of solving the problem, and many $\mathsf{NP}$ problems are easy to solve, since $\mathsf{P} \subseteq \mathsf{NP}$. Nevertheless, there are $\mathsf{NP}$ problems which seem to be hard to solve. I will return to this when we discuss $\mathsf{NP}$-hardness.

Lower bounds seem difficult to prove

OK, so we now know that there are many natural problems in $\mathsf{NP}$, that we do not know any efficient way of solving them, and that we suspect they really require exponential time to solve. Can we prove this?

Unfortunately, the task of proving lower bounds is very difficult. We cannot even prove that these problems require more than linear time! Let alone requiring exponential time.

Proving linear-time lower bounds is rather easy: the algorithm needs to read the input, after all. Proving super-linear lower bounds is a completely different story. We can prove super-linear lower bounds with more restrictions on the kind of algorithms we are considering, e.g. sorting algorithms using comparisons, but we do not know lower bounds without those restrictions.

To prove an upper bound for a problem, we just need to design a good enough algorithm. It often needs knowledge, creative thinking, and even ingenuity to come up with such an algorithm. However, the task is considerably simpler compared to proving a lower bound. We have to show that there are no good algorithms: not that we do not know of any good enough algorithms right now, but that there does not exist any good algorithm, that no one will ever come up with a good algorithm. Think about it for a minute if you have not before: how can we show such an impossibility result?

This is another place where people get confused. Here "impossibility" is a mathematical impossibility, i.e. it is not a shortcoming on our part that some genius could fix in the future. When we say impossible, we mean it is absolutely impossible, as impossible as $1 = 0$. No scientific advance can make it possible.
That is what we are doing when we are proving lower bounds. To prove a lower bound, i.e. to show that a problem requires some amount of time to solve, means that we have to prove that any algorithm, even very ingenious ones that we do not know yet, cannot solve the problem faster. There are many intelligent ideas that we know of (greedy, matching, dynamic programming, linear programming, semidefinite programming, sum-of-squares programming, and many other intelligent ideas), and there are many, many more that we do not know of yet. Ruling out one algorithm or one particular idea for designing algorithms is not sufficient; we need to rule out all of them, even those we do not know about yet, even those we may not ever know about! And one can combine all of these in an algorithm, so we need to rule out their combinations as well.

There has been some progress towards showing that some ideas cannot solve difficult $\mathsf{NP}$ problems, e.g. greedy and its extensions cannot work, there is some work related to dynamic programming algorithms, and there is some work on particular ways of using linear programming. But these are not even close to ruling out the intelligent ideas that we know of (search for lower bounds in restricted models of computation if you are interested).

Barriers: lower bounds are difficult to prove

On the other hand, we have mathematical results called barriers which say that a lower-bound proof cannot be such and such, and "such and such" covers almost all the techniques that we have used to prove lower bounds! In fact, many researchers gave up working on proving lower bounds after Alexander Razborov and Steven Rudich's natural proofs barrier result. It turns out that the existence of a particular kind of lower-bound proof would imply the insecurity of cryptographic pseudorandom number generators and many other cryptographic tools.

I say "almost" because in recent years there has been some progress, mainly by Ryan Williams, that has been able to intelligently circumvent the barrier results; still, the results so far are for very weak models of computation and quite far from ruling out general polynomial-time algorithms.

But I am digressing. The main point I wanted to make was that proving lower bounds is difficult, and we do not have strong lower bounds for general algorithms solving $\mathsf{NP}$ problems.
[On the other hand, Ryan Williams' work shows that there are close connections between proving lower bounds and proving upper bounds. See his talk at ICM 2014 if you are interested.]

Reductions: solving a problem using another problem as a subroutine/oracle/black box

The idea of a reduction is very simple: to solve a problem, use an algorithm for another problem.

Here is a simple example: assume we want to compute the sum of a list of $n$ natural numbers, and we have an algorithm $\operatorname{Sum}$ that returns the sum of two given numbers. Can we use $\operatorname{Sum}$ to add up the numbers in the list? Of course!

Problem:
Input: a list of $n$ natural numbers $x_1, \ldots, x_n$,
Output: return $\sum_{i=1}^{n} x_i$.

Reduction algorithm:
1. $s = 0$
2. for $i$ from $1$ to $n$
2.1. $s = \operatorname{Sum}(s, x_i)$
3. return $s$

Here we are using $\operatorname{Sum}$ in our algorithm as a subroutine. Note that we do not care how $\operatorname{Sum}$ works; it acts like a black box for us, we do not care what is going on inside $\operatorname{Sum}$. We often refer to the subroutine $\operatorname{Sum}$ as an oracle. It is like the oracle of Delphi in Greek mythology: we ask questions, the oracle answers them, and we use the answers.

This is essentially what a reduction is: assume that we have an algorithm for a problem and use it as an oracle to solve another problem. Here "efficient" means efficient assuming that the oracle answers in a unit of time, i.e. we count each execution of the oracle as a single step. If the oracle returns a large answer, we need to read it, and that can take some time, so we should count the time it takes us to read the answer the oracle has given us. Similarly for writing/asking the question to the oracle. But the oracle works instantly, i.e. as soon as we ask the question, the oracle writes the answer for us in a single unit of time. All the work the oracle does is counted as a single step, but this excludes the time it takes us to write the question and read the answer.

Because we do not care how the oracle works, but only about the answers it returns, we can make a simplification and consider the oracle to be the problem itself in place of an algorithm for it. In other words, we do not care if the oracle is not an algorithm; we do not care how the oracle comes up with its replies.
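The reduction algorithm above translates directly into code. This sketch (names are mine) treats `sum_oracle` as a black box: the reduction only ever calls it, never looks inside:

```python
def sum_oracle(x, y):
    """The oracle: adds two numbers. The reduction treats this
    as a black box; how it works internally is irrelevant."""
    return x + y

def total(numbers):
    """The reduction algorithm from the text: compute the total of a
    list using only calls to the two-argument oracle."""
    s = 0
    for x in numbers:
        s = sum_oracle(s, x)   # one oracle query per element
    return s

print(total([1, 2, 3, 4]))  # 10
```

Counting each oracle call as a single step, the reduction makes $n$ queries plus $O(n)$ bookkeeping, so it is efficient in the sense defined above.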
For example, $\operatorname{Sum}$ in the question above is the addition function itself (not an algorithm for computing addition). We can ask multiple questions of an oracle, and the questions do not need to be predetermined: we can ask a question, and based on the answer the oracle returns, we perform some computations ourselves and then ask another question based on the answer we got for the previous question.

Another way of looking at this is to think of it as an interactive computation. Interactive computation is a large topic in itself, so I will not get into it here, but I think mentioning this perspective on reductions can be helpful.

An algorithm $A$ that uses an oracle/black box $O$ is usually denoted $A^O$.

The reduction we discussed above is the most general form of a reduction and is known as a black-box reduction (a.k.a. oracle reduction, Turing reduction).

More formally: we say that problem $Q$ is black-box reducible to problem $O$, and write $Q \leq_T O$, iff there is an algorithm $A$ such that for all inputs $x$, $Q(x) = A^O(x)$. In other words, there is an algorithm $A$ which uses the oracle $O$ as a subroutine and solves problem $Q$.

If our reduction algorithm $A$ runs in polynomial time, we call it a polynomial-time black-box reduction, or simply a Cook reduction (in honor of Stephen A. Cook), and write $Q \leq^{\mathsf{P}}_T O$. (The subscript $T$ stands for "Turing", in honor of Alan Turing.)

However, we may want to put some restrictions on the way the reduction algorithm interacts with the oracle. There are several restrictions that are studied, but the most useful restriction is the one called many-one reductions (a.k.a. mapping reductions). The idea here is that on a given input $x$, we perform some polynomial-time computation and generate a $y$ that is an instance of the problem the oracle solves. We then ask the oracle and return the answer it returns to us. We are allowed to ask a single question of the oracle, and the oracle's answer is what will be returned.
More formally, we say that problem $Q$ is many-one reducible to problem $O$, and write $Q \leq_m O$, iff there is an algorithm $A$ such that for all inputs $x$, $Q(x) = O(A(x))$. When the reduction algorithm is polynomial time, we call it a polynomial-time many-one reduction, or simply a Karp reduction (in honor of Richard M. Karp), and denote it by $Q \leq_m^{\mathsf{P}} O$.

The main reason for the interest in this particular non-interactive reduction is that it preserves $\mathsf{NP}$ problems: if there is a polynomial-time many-one reduction from a problem $A$ to an $\mathsf{NP}$ problem $B$, then $A$ is also in $\mathsf{NP}$.

The simple notion of reduction is one of the most fundamental notions in complexity theory, along with $\mathsf{P}$, $\mathsf{NP}$, and $\mathsf{NP}$-complete (which we will discuss below).

The post has become too long and exceeds the limit of an answer (30000 characters). I will continue the answer in Part II.
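As an illustration (this classic textbook example is mine, not from the answer above): Partition many-one reduces to Subset-Sum, because $S$ splits into two equal-sum halves iff some subset of $S$ sums to half the total. The reduction computes a single oracle instance and returns the oracle's answer unchanged, which is exactly the many-one shape:

```python
from itertools import combinations

def subset_sum(numbers, target):
    """The oracle problem: does some subset of `numbers` sum to `target`?
    (Solved by brute force here only so the sketch is runnable.)"""
    return any(sum(c) == target
               for r in range(len(numbers) + 1)
               for c in combinations(numbers, r))

def partition_via_subset_sum(s):
    """Many-one reduction: map a Partition instance to one Subset-Sum
    question, ask once, and return the answer unchanged."""
    numbers = list(s)
    total = sum(numbers)
    if total % 2:
        # odd total: map to an unreachable target, i.e. a trivial "no"
        # instance, so we still ask exactly one question
        return subset_sum(numbers, total + 1)
    return subset_sum(numbers, total // 2)

print(partition_via_subset_sum({1, 2, 4, 5}))  # True
print(partition_via_subset_sum({2, 3, 6}))     # False
```

The mapping itself runs in polynomial time; all the hard work is hidden in the oracle, which is the point of the definition.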
To address the first question, consider the model $$y = x + \sin(x) + \varepsilon$$ with iid $\varepsilon$ of mean zero and finite variance. As the range of $x$ (thought of as fixed or random) increases, $R^2$ goes to 1. Nevertheless, if the variance of $\varepsilon$ is small (around 1 or less), the data are "noticeably non-linear." In the plots, $\operatorname{var}(\varepsilon) = 1$.

Incidentally, an easy way to get a small $R^2$ is to slice the independent variables into narrow ranges. The regression (using exactly the same model) within each range will have a low $R^2$ even when the full regression based on all the data has a high $R^2$. Contemplating this situation is an informative exercise and good preparation for the second question.

Both of the following plots use the same data. The $R^2$ for the full regression is 0.86. The $R^2$ for the slices (of width 1/2 from -5/2 to 5/2) are 0.16, 0.18, 0.07, 0.14, 0.08, 0.17, 0.20, 0.12, 0.01, 0.00, reading left to right. If anything, the fits get better in the sliced situation, because the 10 separate lines can more closely conform to the data within their narrow ranges. Although the $R^2$ for all the slices are far below the full $R^2$, neither the strength of the relationship, the linearity, nor indeed any aspect of the data (except the range of $x$ used for the regression) has changed.

(One might object that this slicing procedure changes the distribution of $x$. That is true, but it nevertheless corresponds with the most common use of $R^2$ in fixed-effects modeling, and reveals the degree to which $R^2$ is telling us about the variance of $x$ in the random-effects situation. In particular, when $x$ is constrained to vary within a smaller interval of its natural range, $R^2$ will usually drop.)

The basic problem with $R^2$ is that it depends on too many things (even when adjusted in multiple regression), but most especially on the variance of the independent variables and the variance of the residuals.
Normally it tells us nothing about "linearity" or "strength of relationship" or even "goodness of fit" for comparing a sequence of models.

Most of the time you can find a better statistic than $R^2$. For model selection you can look to AIC and BIC; for expressing the adequacy of a model, look at the variance of the residuals.

This brings us finally to the second question. One situation in which $R^2$ might have some use is when the independent variables are set to standard values, essentially controlling for the effect of their variance. Then $1 - R^2$ is really a proxy for the variance of the residuals, suitably standardized.
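Both effects (high $R^2$ from a wide $x$ range, low $R^2$ from narrow slices of the very same data) are easy to reproduce numerically. This sketch assumes the model above with uniform $x$ on $[-5, 5]$; the exact numbers will differ from the answer's plots:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(x, y):
    """R^2 of the ordinary least-squares line y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1 - resid.var() / y.var()

# The model from the answer: y = x + sin(x) + noise, var(eps) = 1
x = rng.uniform(-5, 5, 4000)
y = x + np.sin(x) + rng.normal(0, 1, x.size)

full = r_squared(x, y)

# Same data, same model, but regressed within narrow slices of x
slices = []
for lo in np.arange(-2.5, 2.5, 0.5):
    m = (x >= lo) & (x < lo + 0.5)
    slices.append(r_squared(x[m], y[m]))

print(round(full, 2))                    # high (near 0.9)
print(round(float(np.mean(slices)), 2))  # much lower
```

Nothing about the relationship changes between the two computations; only the variance of $x$ entering each regression does.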
Let me start off with corrections. No, odeint doesn't have any symplectic integrators. No, symplectic integration doesn't mean conservation of energy.

What does symplectic mean, and when should you use it?

First of all, what does symplectic mean? Symplectic means that the solution exists on a symplectic manifold. A symplectic manifold is a solution set which is defined by a 2-form. The details of symplectic manifolds probably sound like mathematical nonsense, so the gist of it is that there is a direct relation between two sets of variables on such a manifold. The reason this is important for physics is that Hamilton's equations naturally have solutions that reside on a symplectic manifold in phase space, with the natural splitting being the position and momentum components. For the true Hamiltonian solution, that phase-space path is constant energy.

A symplectic integrator is an integrator whose solution resides on a symplectic manifold. Because of discretization error, when it is solving a Hamiltonian system it doesn't get exactly the correct trajectory on the manifold. Instead, that trajectory itself is perturbed $\mathcal{O}(\Delta t^n)$, for the order $n$, from the true trajectory. Then there's a linear drift due to numerical error of this trajectory over time. Normal integrators tend to have a quadratic (or worse) drift, and do not have any good global guarantees about this phase-space path (just local ones). What this tends to mean is that symplectic integrators capture the long-time patterns better than normal integrators, because of this lack of drift and this near-guarantee of periodicity.

This notebook displays those properties well on the Kepler problem. The first image shows what I'm talking about with the periodic nature of the solution. This was solved using the 6th-order symplectic integrator of Kahan and Li from DifferentialEquations.jl.
You can see that the energy isn't exactly conserved, but its variation depends on how far the perturbed solution manifold is from the true manifold. Since the numerical solution itself resides on a symplectic manifold, it tends to be almost exactly periodic (with some linear numerical drift that you can see), making it do very nicely for long-term integration. If you do the same with RK4, you can get disaster: you can see that the issue is that there's no true periodicity in the numerical solution, and therefore over time it tends to drift.

This highlights the true reason to choose symplectic integrators: symplectic integrators are good for long-time integrations of problems that have the symplectic property (Hamiltonian systems).

So let's walk through a few things. Note that you don't always need symplectic integrators, even on a symplectic problem. For this case, an adaptive 5th-order Runge-Kutta method can do fine. Here's Tsit5. Notice two things. One, it gets good enough accuracy that you cannot see the actual drift in the phase-space plot. However, on the right side you can see that there is this energy drift, so if you are doing a long enough integration, this method will not do as well as the solution method with the periodic properties.

But that raises the question: how does it fare efficiency-wise versus just integrating extremely accurately? Well, this is a bit less certain. In SciMLBenchmarks.jl you can find some benchmarks investigating this question. For example, this notebook looks at energy error versus runtime on a Hamiltonian system from a quadruple boson model and shows that if you want really high accuracy, then even for quite long integration times it's more efficient to just use a high-order RK or Runge-Kutta-Nystrom (RKN) method. This makes sense because, to satisfy the symplectic property, the integrators give up some efficiency and pretty much have to be fixed time step (there is some research making headway on the latter, but it's not very far along).

In addition, notice from both of these notebooks that you can also just take a standard method and project it back to the solution manifold each step (or every few steps). This is what the examples using the DifferentialEquations.jl ManifoldProjection callback are doing.
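The qualitative contrast between symplectic and non-symplectic behavior can be reproduced with a toy example. This sketch is in Python rather than the Julia used above, and it compares a leapfrog (velocity Verlet) step against explicit Euler rather than RK4, because Euler makes the drift obvious in few steps; the energy error of the symplectic method stays bounded while the non-symplectic one grows without limit:

```python
# Harmonic oscillator H = (p^2 + q^2)/2; the exact energy is constant.
def energy(q, p):
    return 0.5 * (q * q + p * p)

def worst_energy_error(method, n, dt=0.1):
    """Integrate n steps from (q, p) = (1, 0) and track the largest
    deviation from the initial energy 0.5."""
    q, p = 1.0, 0.0
    worst = 0.0
    for _ in range(n):
        q, p = method(q, p, dt)
        worst = max(worst, abs(energy(q, p) - 0.5))
    return worst

def euler(q, p, dt):
    # Explicit Euler: not symplectic; energy grows without bound.
    return q + dt * p, p - dt * q

def leapfrog(q, p, dt):
    # Velocity Verlet / leapfrog: symplectic; the energy error stays
    # bounded (O(dt^2)) for all time instead of drifting.
    p_half = p - 0.5 * dt * q
    q_new = q + dt * p_half
    p_new = p_half - 0.5 * dt * q_new
    return q_new, p_new

drift_euler = worst_energy_error(euler, 10_000)
drift_leap = worst_energy_error(leapfrog, 10_000)
print(drift_euler)  # enormous
print(drift_leap)   # small and bounded
```

The leapfrog solution is not on the true constant-energy path either; it conserves a nearby modified energy, which is exactly the "perturbed manifold" picture described above.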
You see that this guarantees the conservation laws are upheld, but with the added cost of solving an implicit system each step. You can also use a fully implicit ODE solver or singular mass matrices to add on conservation equations, but the end result is that these methods are more computationally costly as a tradeoff.

So to summarize: the class of problems where you want to reach for a symplectic integrator are those that have a solution on a symplectic manifold (Hamiltonian systems), where you don't want to invest the computational resources for a very exact (tolerance < 1e-12) solution and don't need exact energy/etc. conservation. This highlights that it's all about long-term integration properties, so you shouldn't just flock to them all willy-nilly like some of the literature suggests. But they are still a very important tool in many fields, like astrophysics, where you do have long-time integrations that you need to solve sufficiently fast without needing absurd accuracy.

Where do I find symplectic integrators? What kinds of symplectic integrators exist?

There are generally two classes of symplectic integrators. There are the symplectic Runge-Kutta integrators (which are the ones shown in the above examples), and there are implicit Runge-Kutta methods which have the symplectic property. As @origimbo mentions, the symplectic Runge-Kutta integrators require that you provide them with a partitioned structure so they can handle the position and momentum parts separately. However, counter to the comment, the implicit Runge-Kutta methods are symplectic without requiring this, but instead require solving a nonlinear system. This isn't too bad, because if the system is non-stiff this nonlinear system can be solved with functional iteration or Anderson acceleration, but the symplectic RK methods should still probably be preferred for efficiency (it's a general rule that the more information you provide to an integrator, the more efficient it is). That said, odeint does not have methods from either of these families, so it is not a good choice if you're looking for symplectic integrators.

In Fortran, Hairer's site has a small set you can use. Mathematica has a few built in. The GSL ODE solvers have implicit RK Gaussian point integrators which, IIRC, are symplectic, but that's about the only reason to use the GSL methods.
But the most comprehensive set of symplectic integrators can be found in DifferentialEquations.jl in Julia (recall this was used for the notebooks above). The list of available symplectic Runge-Kutta methods is found on this page, and you'll notice that the implicit midpoint method is also symplectic (the implicit Runge-Kutta trapezoid method is considered "almost symplectic" because it's reversible). Not only does it have the largest set of methods, but it's also open source (you can see the code and its tests in a high-level language) and has a lot of benchmarks. A good introductory notebook for using it to solve physical problems is this tutorial notebook. But of course it's recommended you get started with the package through the first ODE tutorial. In general, you can find a detailed analysis of numerical differential equation suites in this blog post. It's quite detailed, but since it has to cover a lot of topics it does each in less detail than this, so feel free to ask for it to be expanded in any way.
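Since the implicit midpoint rule is symplectic, and (as noted above) for a non-stiff problem its implicit equations can be solved with plain functional iteration, here is a minimal sketch on a toy nonlinear pendulum (my own example, not from any of the libraries mentioned):

```python
import math

def pendulum_rhs(q, p):
    # Nonlinear pendulum: H = p^2/2 - cos(q)
    return p, -math.sin(q)

def implicit_midpoint(q, p, dt, iters=50):
    """One step of the implicit midpoint rule. The implicit midpoint
    state is found by fixed-point (functional) iteration, which
    converges quickly because the problem is non-stiff."""
    qm, pm = q, p
    for _ in range(iters):
        dq, dp = pendulum_rhs(qm, pm)
        qm = q + 0.5 * dt * dq
        pm = p + 0.5 * dt * dp
    # midpoint rule: y_{n+1} = 2*y_mid - y_n
    return 2 * qm - q, 2 * pm - p

q, p = 1.5, 0.0
e0 = 0.5 * p * p - math.cos(q)
worst = 0.0
for _ in range(20_000):
    q, p = implicit_midpoint(q, p, 0.05)
    e = 0.5 * p * p - math.cos(q)
    worst = max(worst, abs(e - e0))
print(worst)  # stays small over a long integration
```

Note that, unlike the partitioned symplectic RK methods, nothing here needed the position/momentum split; the price is the inner iteration on every step.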
I will transform the integral via a substitution, break it up into two pieces and recombine, perform an integration by parts, and perform another substitution to get an integral for which I know a closed form exists. From there, I use a method I know to attack the integral, but in an unusual way because of the 8th-degree polynomial in the denominator of the integrand.

First sub $t=(1-x)/(1+x)$, $dt=-2/(1+x)^2\,dx$ to get

$$2 \int_0^{\infty} dt \frac{t^{-1/2}}{1-t^2} \log{\left(\frac{5-2t+t^2}{1-2t+5t^2}\right)}$$

Now use the symmetry from the map $t \mapsto 1/t$. Break the integral up into two as follows:

\begin{align}
& 2 \int_0^{1} dt \frac{t^{-1/2}}{1-t^2} \log{\left(\frac{5-2t+t^2}{1-2t+5t^2}\right)} + 2 \int_1^{\infty} dt \frac{t^{-1/2}}{1-t^2} \log{\left(\frac{5-2t+t^2}{1-2t+5t^2}\right)} \\
&= 2 \int_0^{1} dt \frac{t^{-1/2}}{1-t^2} \log{\left(\frac{5-2t+t^2}{1-2t+5t^2}\right)} + 2 \int_0^{1} dt \frac{t^{1/2}}{1-t^2} \log{\left(\frac{5-2t+t^2}{1-2t+5t^2}\right)} \\
&= 2 \int_0^{1} dt \frac{t^{-1/2}}{1-t} \log{\left(\frac{5-2t+t^2}{1-2t+5t^2}\right)}
\end{align}

Sub $t=u^2$ to get

$$4 \int_0^{1} \frac{du}{1-u^2} \log{\left(\frac{5-2u^2+u^4}{1-2u^2+5u^4}\right)}$$

Integrate by parts:

$$\left[2 \log{\left(\frac{1+u}{1-u}\right)} \log{\left(\frac{5-2u^2+u^4}{1-2u^2+5u^4}\right)}\right]_0^1 - 32 \int_0^1 du \frac{\left(u^5-6u^3+u\right)}{\left(u^4-2u^2+5\right)\left(5u^4-2u^2+1\right)} \log{\left(\frac{1+u}{1-u}\right)}$$

One last sub: $u=(v-1)/(v+1)$, $du=2/(v+1)^2\,dv$, and finally get

$$8 \int_0^{\infty} dv \frac{(v^2-1)(v^4-6v^2+1)}{v^8+4v^6+70v^4+4v^2+1} \log{v}$$

With this form, we may finally conclude that a closed form exists and apply the residue theorem to obtain it. To wit, consider the following contour integral:

$$\oint_C dz \frac{8(z^2-1)(z^4-6z^2+1)}{z^8+4z^6+70z^4+4z^2+1} \log^2{z}$$

where $C$ is a keyhole contour about the positive real axis. This contour integral is equal to
https://api.stackexchange.com
( i omit the steps where i show the integral vanishes about the circular arcs ) $ $ - i 4 \ pi \ int _ 0 ^ { \ infty } dv \ frac { 8 ( v ^ 2 - 1 ) ( v ^ 4 - 6 v ^ 2 + 1 ) } { v ^ 8 + 4 v ^ 6 + 70v ^ 4 + 4 v ^ 2 + 1 } \ log { v } + 4 \ pi ^ 2 \ int _ 0 ^ { \ infty } dv \ frac { 8 ( v ^ 2 - 1 ) ( v ^ 4 - 6 v ^ 2 + 1 ) } { v ^ 8 + 4 v ^ 6 + 70v ^ 4 + 4 v ^ 2 + 1 } $ $ it should be noted that the second integral vanishes ; this may be easily seen by exploiting the symmetry about $ v \ mapsto 1 / v $. on the other hand, the contour integral is $ i 2 \ pi $ times the sum of the residues about the poles of the integrand. in general, this requires us to find the zeroes of the eight degree polynomial, which may not be possible analytically. here, on the other hand, we have many symmetries to exploit, e. g., if $ a $ is a root, then $ 1 / a $ is a root, $ - a $ is a root, and $ \ bar { a } $ is a root. for example, we may deduce that $ $ z ^ 8 + 4 z ^ 6 + 70z ^ 4 + 4 z ^ 2 + 1 = ( z ^ 4 + 4 z ^ 3 + 10 z ^ 2 + 4 z + 1 ) ( z ^ 4 - 4 z ^ 3 + 10 z ^ 2 - 4 z + 1 ) $ $ which exploits the $ a \ mapsto - a $ symmetry. now write $ $ z ^ 4 + 4 z ^ 3 + 10 z ^ 2 + 4 z + 1 = ( z - a ) ( z - \ bar { a } ) \ left ( z - \ frac { 1 } { a } \ right ) \ left ( z - \ frac { 1 } { \ bar { a } } \ right ) $ $ write $ a = r e ^ { i \ theta } $ and get the following equations : $ $ \ left ( r + \ frac { 1 } { r } \ right ) \ cos { \
https://api.stackexchange.com
theta } = - 2 $ $ $ $ \ left ( r ^ 2 + \ frac { 1 } { r ^ 2 } \ right ) + 4 \ cos ^ 2 { \ theta } = 10 $ $ from these equations, one may deduce that a solution is $ r = \ phi + \ sqrt { \ phi } $ and $ \ cos { \ theta } = 1 / \ phi $, where $ \ phi = ( 1 + \ sqrt { 5 } ) / 2 $ is the golden ratio. thus the poles take the form $ $ z _ k = \ pm \ left ( \ phi \ pm \ sqrt { \ phi } \ right ) e ^ { \ pm i \ arctan { \ sqrt { \ phi } } } $ $ now we have to find the residues of the integrand at these 8 poles. we can break this task up by computing : $ $ \ sum _ { k = 1 } ^ 8 \ operatorname * { res } _ { z = z _ k } \ left [ \ frac { 8 ( z ^ 2 - 1 ) ( z ^ 4 - 6 z ^ 2 + 1 ) \ log ^ 2 { z } } { z ^ 8 + 4 z ^ 6 + 70z ^ 4 + 4 z ^ 2 + 1 } \ right ] = \ sum _ { k = 1 } ^ 8 \ operatorname * { res } _ { z = z _ k } \ left [ \ frac { 8 ( z ^ 2 - 1 ) ( z ^ 4 - 6 z ^ 2 + 1 ) } { z ^ 8 + 4 z ^ 6 + 70z ^ 4 + 4 z ^ 2 + 1 } \ right ] \ log ^ 2 { z _ k } $ $ here things got very messy, but the result is rather unbelievably simple : $ $ \ operatorname * { res } _ { z = z _ k } \ left [ \ frac { 8 ( z ^ 2 - 1 ) ( z ^ 4 - 6 z ^ 2 + 1 ) } { z ^ 8 + 4 z ^ 6 + 70z ^ 4 + 4 z ^ 2 + 1 } \ right ] = \ text { sgn } [ \ cos { ( \ arg { z _ k } ) } ] $ $ edit actually, this is a very simple computation. inspired by @ sos440, one may express the rational function of $ z $ in a very simple
https://api.stackexchange.com
form : $ $ \ frac { 8 ( z ^ 2 - 1 ) ( z ^ 4 - 6 z ^ 2 + 1 ) } { z ^ 8 + 4 z ^ 6 + 70z ^ 4 + 4 z ^ 2 + 1 } = - \ left [ \ frac { p'( z ) } { p ( z ) } + \ frac { p'( - z ) } { p ( - z ) } \ right ] $ $ where $ $ p ( z ) = z ^ 4 + 4 z ^ 3 + 10 z ^ 2 + 4 z + 1 $ $ the residue of this function at the poles are then easily seen to be $ \ pm 1 $ according to whether the pole is a zero of $ p ( z ) $ or $ p ( - z ) $. end edit that is, if the pole has a positive real part, the residue of the fraction is $ + 1 $ ; if it has a negative real part, the residue is $ - 1 $. now consider the log piece. expanding the square, we get 3 terms : $ $ \ log ^ 2 { | z _ k | } - ( \ arg { z _ k } ) ^ 2 + i 2 \ log { | z _ k | } \ arg { z _ k } $ $ summing over the residues, we find that because of the $ \ pm1 $ contributions above, that the first and third terms sum to zero. this leaves the second term. for this, it is crucial that we get the arguments right, as $ \ arg { z _ k } \ in [ 0, 2 \ pi ) $. thus, we have $ $ \ begin { align } i = \ int _ 0 ^ { \ infty } dv \ frac { 8 ( v ^ 2 - 1 ) ( v ^ 4 - 6 v ^ 2 + 1 ) } { v ^ 8 + 4 v ^ 6 + 70v ^ 4 + 4 v ^ 2 + 1 } \ log { v } & = \ frac12 \ sum _ { k = 1 } ^ 8 \ text { sgn } [ \ cos { ( \ arg { z _ k } ) } ] ( \ arg { z _ k } ) ^ 2 \ \ & = \ frac12 [ 2 ( \ arctan { \ sqrt { \ phi } } ) ^ 2 + 2 ( 2 \ pi - \ arctan { \ sqrt { \
https://api.stackexchange.com
phi } } ) ^ 2 \ \ & - 2 ( \ pi - \ arctan { \ sqrt { \ phi } } ) ^ 2 - 2 ( \ pi + \ arctan { \ sqrt { \ phi } } ) ^ 2 ] \ \ & = 2 \ pi ^ 2 - 4 \ pi \ arctan { \ sqrt { \ phi } } \ \ & = 4 \ pi \, \ text { arccot } { \ sqrt { \ phi } } \ \ \ end { align } $ $
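As a sanity check (my addition, assuming SciPy is available), the final $v$-integral can be compared numerically against the claimed closed form $2\pi^2 - 4\pi\arctan\sqrt{\phi}$:

```python
from math import atan, log, pi, sqrt
from scipy.integrate import quad

phi = (1 + sqrt(5)) / 2  # golden ratio

def f(v):
    # integrand of the final v-integral
    return (8 * (v**2 - 1) * (v**4 - 6*v**2 + 1)
            / (v**8 + 4*v**6 + 70*v**4 + 4*v**2 + 1) * log(v))

# split at v = 1 to help quad with the (integrable) log singularity at 0
val = quad(f, 0, 1)[0] + quad(f, 1, float("inf"))[0]
closed_form = 2*pi**2 - 4*pi*atan(sqrt(phi))
print(val, closed_form)  # both approximately 8.372
```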
No, it's not possible: at least, not in an asymptotic sense, where you require the problem to keep getting strictly easier, forever, as $n \to \infty$. Let $T(n)$ be the best possible running time for solving such a problem, where $n$ is the size of the input. Note that the running time is a count of the number of instructions executed by the algorithm, so it has to be a non-negative integer. In other words, $T(n) \in \mathbb{N}$ for all $n$. Now if we consider a function $T : \mathbb{N} \to \mathbb{N}$, we see there is no such function that is strictly monotonically decreasing. (Whatever $T(0)$ is, it has to be finite, say $T(0) = c$; but then, since $T$ is strictly monotonically decreasing, $T(c) \le 0$ and $T(c+1) \le -1$, which is impossible.) For similar reasons, there is no function that is asymptotically strictly decreasing: we can similarly prove that there is no running-time function $T(n)$ for which there exists $n_0$ such that for all $n \ge n_0$, $T(n)$ is strictly monotonically decreasing (any such function would have to become eventually negative). So such a problem cannot exist, for the simple reason that running times have to be non-negative integers.

Note that this answer covers only deterministic algorithms (i.e., worst-case running time). It doesn't rule out the possibility of randomized algorithms whose expected running time is strictly monotonically decreasing, forever. I don't know whether it's possible for such an algorithm to exist. I thank Beni Cherniavsky-Paskin for this observation.
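The counting argument in the parenthetical can be written out explicitly. Since $T$ is strictly decreasing on the naturals, $T(n+1) \le T(n) - 1$ for every $n$, and a short induction gives

$$T(n) \;\le\; T(0) - n \;=\; c - n \quad\text{for all } n, \qquad\text{so}\qquad T(c+1) \;\le\; -1,$$

contradicting $T(c+1) \in \mathbb{N}$. The asymptotic case is identical after shifting the origin to $n_0$.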
That's a great question! What you are asking about is one of the missing links between classical and quantum gravity. On their own, the Einstein equations, $G_{\mu\nu} = 8\pi G\, T_{\mu\nu}$, are local field equations and do not contain any topological information. At the level of the action principle,

$$S_{\mathrm{EH}} = \int_\mathcal{M} d^4x \, \sqrt{-g} \, \mathbf{R}$$

the term we generally include is the Ricci scalar $\mathbf{R} = \mathrm{Tr}[R_{\mu\nu}]$, which depends only on the first and second derivatives of the metric and is, again, a local quantity. So the action does not tell us about topology either, unless you're in two dimensions, where the Euler characteristic is given by the integral of the Ricci scalar:

$$\int d^2x \, \mathcal{R} = \chi$$

(modulo some numerical factors). So gravity in two dimensions is entirely topological. This is in contrast to the 4D case, where the Einstein-Hilbert action appears to contain no topological information. This should cover your first question.

All is not lost, however. One can add topological degrees of freedom to 4D gravity by adding terms corresponding to various topological invariants (Chern-Simons, Nieh-Yan and Pontryagin). For instance, the Chern-Simons contribution to the action looks like:

$$S_{CS} = \int d^4x \, \frac{1}{2} \left( \epsilon_{ab}{}^{ij} R_{cdij} \right) R_{abcd}$$

Here is a very nice paper by Jackiw and Pi for the details of this construction. There's plenty more to be said about topology and general relativity; your question only scratches the surface. But there's a goldmine underneath! I'll let someone else tackle your second question. Short answer is "yes".
I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandable. Moreover, it is a situation in which people seem to find themselves regularly. In addition, many textbooks (and courses) on regression cover stepwise selection methods, which implies that they must be legitimate. Unfortunately, however, they are not, and the pairing of this situation and goal is quite difficult to navigate successfully. The following is a list of problems with automated stepwise model selection procedures (attributed to Frank Harrell, and copied from here):

1. It yields R-squared values that are badly biased to be high.
2. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
3. The method yields confidence intervals for effects and predicted values that are falsely narrow; see Altman and Andersen (1989).
4. It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
5. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani [1996]).
6. It has severe problems in the presence of collinearity.
7. It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
8. Increasing the sample size does not help very much; see Derksen and Keselman (1992).
9. It allows us to not think about the problem.
10. It uses a lot of paper.

The question is: what's so bad about these procedures, and why do these problems occur? Most people who have taken a basic regression course are familiar with the concept of regression to the mean, so this is what I use to explain these issues. (Although this may seem off-topic at first, bear with me; I promise it's relevant.)

Imagine a high school track coach on the first day of tryouts.
Thirty kids show up. These kids have some underlying level of intrinsic ability, to which neither the coach nor anyone else has direct access. As a result, the coach does the only thing he can do, which is have them all run a 100m dash. The times are presumably a measure of their intrinsic ability and are taken as such. However, they are probabilistic; some proportion of how well someone does is based on their actual ability, and some proportion is random. Imagine that the true situation is the following:

    set.seed(59)
    intrinsic_ability = runif(30, min = 9, max = 10)
    time = 31 - 2*intrinsic_ability + rnorm(30, mean = 0, sd = .5)

The results of the first race are displayed in the following figure, along with the coach's comments to the kids. Note that partitioning the kids by their race times leaves overlaps in their intrinsic ability - this fact is crucial. After praising some and yelling at some others (as coaches tend to do), he has them run again. Here are the results of the second race, with the coach's reactions (simulated from the same model above): notice that their intrinsic ability is identical, but the times bounced around relative to the first race. From the coach's point of view, those he yelled at tended to improve, and those he praised tended to do worse (I adapted this concrete example from the Kahneman quote listed on the wiki page), although actually regression to the mean is a simple mathematical consequence of the fact that the coach is selecting athletes for the team based on a measurement that is partly random.

Now, what does this have to do with automated (e.g., stepwise) model selection techniques? Developing and confirming a model based on the same dataset is sometimes called data dredging. Although there is some underlying relationship amongst the variables, and stronger relationships are expected to yield stronger scores (e.g., higher t-statistics), these are random variables, and the realized values contain error. Thus, when you select variables based on having higher (or lower) realized values, they may be such because of their underlying true value, error, or both. If you proceed in this manner, you will be as surprised as the coach was after the second race. This is true whether you select variables based on having high t-statistics or low intercorrelations.
True, using the AIC is better than using p-values, because it penalizes the model for complexity, but the AIC is itself a random variable (if you run a study several times and fit the same model, the AIC will bounce around just like everything else). Unfortunately, this is just a problem intrinsic to the epistemic nature of reality itself. I hope this is helpful.
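The selection effect described above is easy to demonstrate by simulation. The sketch below is my own hypothetical setup, not from the original answer: it regresses pure noise on pure noise many times, and the predictor chosen for having the highest realized correlation looks far "stronger" than any prespecified predictor, even though all of them are equally unrelated to the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 50, 20, 200

best_r2, fixed_r2 = [], []
for _ in range(trials):
    y = rng.standard_normal(n)
    X = rng.standard_normal((n, p))  # 20 predictors, all pure noise
    # squared correlation of each predictor with the outcome
    r2 = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(p)])
    best_r2.append(r2.max())   # the "selected" predictor (data dredging)
    fixed_r2.append(r2[0])     # a prespecified predictor

# Selecting on the realized statistic inflates it, purely by chance.
print(np.mean(best_r2), np.mean(fixed_r2))
```

The prespecified predictor averages an R-squared near $1/(n-1) \approx 0.02$, as theory predicts for noise, while the dredged one is several times larger.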
Tungsten's melting point of 3422 °C is the highest of all metals and second only to carbon's, for which melting occurs only at high pressure (there is no standard melting point). This is why tungsten is used in rocket nozzles and reactor linings. There are refractory ceramics and alloys that have higher melting points, notably $\ce{Ta4HfC5}$ with a melting point of 4215 °C, hafnium carbide at 3900 °C and tantalum carbide at 3800 °C. Carbon cannot be used to hold molten tungsten because they will react to form tungsten carbide. Sometimes ladles and crucibles used to prepare or transport high-melting-point materials like tungsten are lined with the various higher-melting ceramics or alloys. More typically, tungsten and other refractory materials are fabricated in a non-molten state, by a process known as powder metallurgy. This process uses four basic steps:

1. Powder manufacture - a variety of techniques are available to generate small particles of the material being worked.
2. Powder blending - routine procedures are used to blend the constituent particles into a uniform mixture.
3. Compacting - the blended powder is placed in a mold and subjected to high pressure.
4. Sintering - the compacted material is subjected to high temperature, and some level of bonding occurs between particles.
Claim: $L$ is context-free.

Proof idea: There has to be at least one difference between the first and second half; we give a grammar that makes sure to generate one and leaves the rest arbitrary.

Proof: For the sake of simplicity, assume a binary alphabet $\Sigma = \{a, b\}$. The proof readily extends to other sizes. Consider the grammar $G$:

$\qquad\begin{align}
S &\to AB \mid BA \\
A &\to a \mid aAa \mid aAb \mid bAa \mid bAb \\
B &\to b \mid aBa \mid aBb \mid bBa \mid bBb
\end{align}$

It is quite clear that it generates

$\qquad \mathcal{L}(G) = \{ \underbrace{w_1}_{k}\, x\, \underbrace{w_2 v_1}_{k+l}\, y\, \underbrace{v_2}_{l} \mid |w_1| = |w_2| = k,\ |v_1| = |v_2| = l,\ x \neq y \} \subseteq \Sigma^*;$

the suspicious may perform a nested induction over $k$ and $l$ with case distinction over pairs $(x, y)$. The length of a word in $\mathcal{L}(G)$ is $2(k+l+1)$. The letters $x$ and $y$ occur at positions $k+1$ and $2k+l+2$, respectively. When we split the word in half, i.e. after $k+l+1$ letters, the first half contains the letter $x$ at position $k+1$ and the second half has the letter $y$ at position $k+1$. Therefore, $x$ and $y$ have the same position (in their respective half), which implies $\mathcal{L}(G) = L$ because $G$ imposes no other restrictions on its language.

The interested reader may enjoy two follow-up problems:

Exercise 1: Come up with a PDA for $L$!

Exercise 2: What about $\{xyz \mid |x| = |y| = |z|, x \neq y \lor y \neq z \lor x \neq z\}$?
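A quick brute-force check of the claim (my own sketch): $A$ derives exactly the odd-length words with $a$ in the middle, and $B$ those with $b$ in the middle, so membership in $\mathcal{L}(G)$ reduces to finding a suitable split point. Comparing against the direct definition of $L$ for all short binary words confirms $\mathcal{L}(G) = L$:

```python
from itertools import product

def center_is(w, c):
    # A (resp. B) derives exactly the odd-length words whose middle letter is c
    return len(w) % 2 == 1 and w[len(w) // 2] == c

def in_grammar(w):
    # S -> AB | BA: split w into an odd-length block with center 'a'
    # and an odd-length block with center 'b', in either order
    return any(
        (center_is(w[:i], 'a') and center_is(w[i:], 'b')) or
        (center_is(w[:i], 'b') and center_is(w[i:], 'a'))
        for i in range(1, len(w)))

def in_L(w):
    # L = { uv : |u| = |v|, u != v }
    n = len(w)
    return n % 2 == 0 and w[:n // 2] != w[n // 2:]

for n in range(0, 11):
    for letters in product('ab', repeat=n):
        w = ''.join(letters)
        assert in_grammar(w) == in_L(w), w
print("languages agree on all words up to length 10")
```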
I'm not sure I'm doing it the best way, but here is an example where I read a compressed gzip FASTQ file and write the records in block-gzip FASTQ:

    from Bio import SeqIO, bgzf
    # Used to convert the fastq stream into a file handle
    from io import StringIO
    from gzip import open as gzopen

    records = SeqIO.parse(
        # There is actually a simpler way (thanks @peterjc)
        # StringIO(gzopen("random_10.fastq.gz").read().decode("utf-8")),
        gzopen("random_10.fastq.gz", "rt"),
        format="fastq",
    )

    with bgzf.BgzfWriter("test.fastq.bgz", "wb") as outgz:
        SeqIO.write(sequences=records, handle=outgz, format="fastq")
But what does frequency spectrum mean in the case of images?

The "mathematical equations" are important, so don't skip them entirely. But the 2D FFT has an intuitive interpretation, too. For illustration, I've calculated the inverse FFT of a few sample images. As you can see, only one pixel is set in the frequency domain. The result in the image domain (I've only displayed the real part) is a "rotated cosine pattern" (the imaginary part would be the corresponding sine). If I set a different pixel in the frequency domain (at the left border), I get a different 2D frequency pattern. If I set more than one pixel in the frequency domain, you get the sum of two cosines. So, just as a 1D wave can be represented as a sum of sines and cosines, any 2D image can be represented (loosely speaking) as a sum of "rotated sines and cosines", as shown above.

When we take the FFT of an image in OpenCV, we get a weird picture. What does this image denote?

It denotes the amplitudes and frequencies of the sines/cosines that, when added up, will give you the original image.

And what is its application?

There are really too many to name them all. Correlation and convolution can be calculated very efficiently using an FFT, but that's more of an optimization; you don't "look" at the FFT result for that. It's used for image compression, because the high-frequency components are usually just noise.
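The "one pixel in the frequency domain gives a rotated cosine" statement is easy to reproduce. This is my own NumPy sketch, with an assumed 64x64 image size and an arbitrarily chosen frequency-domain pixel:

```python
import numpy as np

N = 64
F = np.zeros((N, N), dtype=complex)
F[2, 3] = 1.0  # a single nonzero frequency-domain coefficient

img = np.fft.ifft2(F)

# By the inverse-DFT definition, the real part is a "rotated" 2D cosine:
# cos(2*pi*(2*y + 3*x)/N) / N**2, and the imaginary part the matching sine.
y, x = np.mgrid[0:N, 0:N]
expected = np.cos(2 * np.pi * (2 * y + 3 * x) / N) / N**2
print(np.allclose(img.real, expected))  # True
```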
Think about it. What exactly do you envision a "256-bit" processor being? What makes the bit-ness of a processor in the first place?

I think if no further qualifications are made, the bit-ness of a processor refers to its ALU width. This is the width of the binary number that it can handle natively in a single operation. A "32-bit" processor can therefore operate directly on values up to 32 bits wide in single instructions. Your 256-bit processor would therefore contain a very large ALU capable of adding, subtracting, ORing, ANDing, etc., 256-bit numbers in single operations. Why do you want that? What problem makes the large and expensive ALU worth having and paying for, even for those cases where the processor is only counting 100 iterations of a loop and the like?

The point is, you have to pay for the wide ALU whether you then use it a lot or use only a small fraction of its capabilities. To justify a 256-bit ALU, you'd have to find an important enough problem that can really benefit from manipulating 256-bit words in single instructions. While you can probably contrive a few examples, there aren't enough of such problems to make the manufacturers feel they will ever get a return on the significant investment required to produce such a chip. If there are niche but important (well-funded) problems that can really benefit from a wide ALU, then we would see very expensive, highly targeted processors for that application. Their price, however, would prevent wide usage outside the narrow application they were designed for. For example, if 256 bits made certain cryptography applications possible for the military, specialized 256-bit processors costing hundreds to thousands of dollars each would probably emerge. You wouldn't put one of these in a toaster, a power supply, or even a car, though.

I should also be clear that the wide ALU doesn't just make the ALU more expensive, but other parts of the chip too.
A 256-bit-wide ALU also means there have to be 256-bit-wide data paths. That alone would take a lot of silicon area. That data has to come from somewhere and go somewhere, so there would need to be registers, cache, other memory, etc., for the wide ALU to be used effectively.

Another point is that you can do any-width arithmetic on any-width processor. You can add a 32-bit memory word into another 32-bit memory word on a PIC 18 in 8 instructions, whereas you could do it on the same architecture scaled to 32 bits in only 2 instructions. The point is that a narrow ALU doesn't keep you from performing wide computations, only that the wide computations will take longer. It is therefore a question of speed, not capability. If you look at the spectrum of applications that need to use particular-width numbers, you will see that very, very few require 256-bit words. The expense of accelerating just those few applications with hardware that won't help the others just isn't worth it, and doesn't make a good investment for product development.
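The "any-width arithmetic on any-width processor" point can be sketched in a few lines (my own illustration, not from the original answer): a 256-bit addition decomposes into a chain of narrower additions with carry propagation, which is exactly what a compiler emits on a 64-bit machine. More limbs means more instructions, not new hardware.

```python
MASK = (1 << 64) - 1  # one 64-bit "machine word"

def add256(a_limbs, b_limbs):
    # Add two 256-bit values held as four 64-bit limbs each,
    # least-significant limb first, propagating the carry.
    out, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + carry
        out.append(s & MASK)
        carry = s >> 64
    return out, carry

def to_limbs(x):
    return [(x >> (64 * i)) & MASK for i in range(4)]

a = (1 << 256) - 1  # all-ones 256-bit value
b = 1
limbs, carry = add256(to_limbs(a), to_limbs(b))
print(limbs, carry)  # [0, 0, 0, 0] 1 -- wraps around, carry out set
```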
The CDC has made its nCoV test kit available online. Briefly, the kit contains primers and probes for real-time reverse-transcriptase PCR, as well as instructions for appropriate use and (critically) controls and guidelines to avoid false positives and negatives. Kits from different countries may use slightly different primers and probes, though since they are all working from the same sequences and the same principles, they should be broadly quite similar. Explaining how quantitative PCR works and the details of the primers and probes is out of the scope of this SE. A layman's introduction was written by John Timmer at Ars Technica.
For fluid to flow from a wound, there needs to be a significant pressure gradient between where it is now and the outside of the body. Your skin generally does not have a strong compressive effect, which is why a deep cut exposing fat will not lead to the fatty tissue being expelled from the body any more than the interstitial fluid is. Blood, however, flows. For it to circulate, there needs to be a pressure gradient between where it is now and where it is going. Since veins (including the vena cava, which channels blood back into the heart) do not have vascular walls strong enough to create a suction effect (i.e. lower pressure than the surrounding tissue), you can conclude that the pressure in blood vessels is always higher than that of the surrounding tissues, and thus higher than the pressure outside of your body. This is why all blood vessels, including veins, will bleed, whereas less pressurized systems, such as interstitial fluid, will not.
Things are not empty space. Our classical intuition fails at the quantum level. Matter does not pass through other matter mainly due to the Pauli exclusion principle and the electromagnetic repulsion of the electrons. The closer you bring two atoms, i.e. the more the areas of non-zero expectation for their electrons overlap, the stronger the repulsion due to the Pauli principle will be, since it can never happen that two electrons possess exactly the same spin and the same probability to be found in an extent of space.

The idea that atoms are mostly "empty space" is, from a quantum viewpoint, nonsense. The volume of an atom is filled by the wavefunctions of its electrons, or, from a QFT viewpoint, there is a localized excitation of the electron field in that region of space; both are very different from the "empty" vacuum state. The concept of empty space is actually quite tricky, since our intuition "space is empty when there is no particle in it" differs quite a lot from the formal "empty space is the unexcited vacuum state of the theory". The space around the atom is definitely not in the vacuum state; it is filled with electron states. But if you go and look, chances are you will find at least some "empty" space in the sense of "no particles during measurement". Yet you are not justified in saying that there is "mostly empty space" around the atom, since the electrons are not that sharply localized unless some interaction (like a measurement) takes place that actually forces them to be. When not interacting, their states are "smeared out" over the atom in something sometimes called the electron cloud, where the cloud or orbital represents the probability of finding a particle in any given spot.
This weirdness is one of the reasons why quantum mechanics is so fundamentally different from classical mechanics: suddenly, a lot of the world becomes wholly different from what we are used to at our macroscopic level, and especially our intuitions about "empty space" and the like fail us completely at microscopic levels.

Since it has been asked in the comments, I should probably say a few more words about the role of the exclusion principle.

First, as has been said, without the exclusion principle, the whole idea of chemistry collapses: all electrons fall to the lowest 1s orbital and stay there, there are no "outer" electrons, and the world as we know it would not work.

Second, consider the situation of two equally charged classical particles: if you only invest enough energy/work, you can bring them arbitrarily close. The Pauli exclusion principle prohibits this for atoms: you might be able to push them a little bit into each other, but at some point, when the states of the electrons become too similar, it just won't go any further. When you hit that point, you have degenerate matter, a state of matter which is extremely difficult to compress, and where the exclusion principle is the sole reason for its incompressibility. This is not due to Coulomb repulsion; it is that we also need to invest the energy to catapult the electrons into higher energy levels, since the number of electrons in a volume of space increases under compression, while the number of available energy levels does not. (If you read the article, you will find that the electrons at some point will indeed prefer to combine with the protons and form neutrons, which then exhibit the same kind of behaviour. Then, again, you have something almost incompressible, until the pressure is high enough to break the neutrons down into quarks; that is merely theoretical. No one knows what happens when you increase the pressure on these quarks indefinitely, but we probably cannot know that anyway, since a black hole will form sooner or later.)

Third, the kind of force you need to create such degenerate matter is extraordinarily high. Even metallic hydrogen, probably the simplest kind of such matter, has not been reliably produced in experiments. However, as Mark A has pointed out in the comments (and as is very briefly mentioned in the Wikipedia article, too), a very good model for the free electrons in a metal is that of a degenerate gas, so one could take metal as a room-temperature example of the importance of the Pauli principle.

So, in conclusion, one might say that at the level of our everyday experience, it would probably be enough to know about the Coulomb repulsion of the electrons (if you don't look at metals too closely).
But without quantum mechanics, you would still wonder why these electrons do not simply go closer to their nuclei, i.e. reduce their orbital radius/drop to a lower energy state, and thus reduce the effective radius of the atom. Coulomb repulsion therefore already falls short at this scale to explain why matter seems "solid" at all; only the exclusion principle can explain why the electrons behave the way they do.